Statistical inference of agreement coefficient between two raters with binary outcomes
MaRDI QID: Q5077201 (Publication)
DOI: 10.1080/03610926.2019.1576894
OpenAlex: W2918131037
Wikidata: Q128352998 (Scholia: Q128352998)
Publication date: 18 May 2022
Published in: Communications in Statistics - Theory and Methods
Full work available at URL: https://doi.org/10.1080/03610926.2019.1576894
Related Items (2)
- Statistical inference of Gwet’s AC1 coefficient for multiple raters and binary outcomes
- Justification for the use of Cohen's kappa statistic in experimental studies of NLP and text mining
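As background for this record (not taken from the paper itself), the two coefficients named above can be sketched for a single 2 × 2 agreement table. The following minimal Python sketch uses the standard textbook formulas for Cohen's kappa and Gwet's AC1 with two raters and binary outcomes; the counts are hypothetical, and the paper's actual contribution, interval estimation and inference for such coefficients, is not reproduced here.

```python
# Illustrative computation of two chance-corrected agreement coefficients
# for two raters with binary outcomes: Cohen's kappa and Gwet's AC1.
# n[i][j] = number of subjects rated i by rater 1 and j by rater 2
# (categories coded 1 = positive, 0 = negative). Counts are hypothetical.

def agreement_coefficients(n11, n10, n01, n00):
    """Return (kappa, ac1) for a 2x2 agreement table of raw counts."""
    n = n11 + n10 + n01 + n00
    # Cell and marginal proportions.
    p11, p10, p01, p00 = n11 / n, n10 / n, n01 / n, n00 / n
    p1_plus = p11 + p10   # rater 1's marginal proportion for category 1
    p_plus1 = p11 + p01   # rater 2's marginal proportion for category 1
    # Observed agreement: both raters choose the same category.
    po = p11 + p00
    # Cohen's kappa: chance agreement from independent rater marginals.
    pe_kappa = p1_plus * p_plus1 + (1 - p1_plus) * (1 - p_plus1)
    kappa = (po - pe_kappa) / (1 - pe_kappa)
    # Gwet's AC1: chance agreement from the average propensity pi that
    # a rating falls in category 1, with pe = 2 * pi * (1 - pi).
    pi = (p1_plus + p_plus1) / 2
    pe_ac1 = 2 * pi * (1 - pi)
    ac1 = (po - pe_ac1) / (1 - pe_ac1)
    return kappa, ac1

if __name__ == "__main__":
    # Hypothetical table: 40 agreements on 1, 5 + 3 disagreements,
    # 52 agreements on 0, for 100 subjects in total.
    kappa, ac1 = agreement_coefficients(40, 5, 3, 52)
    print(f"Cohen's kappa: {kappa:.3f}")  # about 0.838
    print(f"Gwet's AC1:    {ac1:.3f}")    # about 0.842
```

Both coefficients share the form (observed agreement − chance agreement) / (1 − chance agreement); they differ only in how chance agreement is modeled, which is why AC1 is less sensitive than kappa to skewed marginal distributions.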
Cites Work
- Homogeneity Score Test for the Intraclass Version of the Kappa Statistics and Sample‐Size Determination in Multiple or Stratified Studies
- Ramifications of a population model for \(\kappa\) as a coefficient of reliability
- Interval Estimation for a Difference Between Intraclass Kappa Statistics
- Weighted Least-Squares Approach for Comparing Correlated Kappa
- 2 × 2 Kappa Coefficients: Measures of Agreement or Association
- Confidence Interval Estimation of the Intraclass Correlation Coefficient for Binary Outcome Data
- Extension of the Kappa Coefficient
- Measuring Agreement for Multinomial Data
- The Measurement of Observer Agreement for Categorical Data
- Measurement of Interrater Agreement with Adjustment for Covariates
- Testing the Homogeneity of Kappa Statistics