On the equivalence of multirater kappas based on 2-agreement and 3-agreement with binary scores
Publication: 1952677
DOI: 10.5402/2012/656390
OpenAlex: W2108377901
Wikidata: Q58691981 (Scholia: Q58691981)
MaRDI QID: Q1952677
Publication date: 3 June 2013
Published in: ISRN Probability and Statistics
Full work available at URL: https://doi.org/10.5402/2012/656390
Mathematics Subject Classification:
- Statistics (62-XX)
- Biology and other natural sciences (92-XX)
- Game theory, economics, finance, and other social and behavioral sciences (91-XX)
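The publication concerns chance-corrected agreement coefficients for multiple raters built from 2-agreement (all raters in a pair agree) and 3-agreement (all raters in a triple agree) with binary scores. As a hypothetical illustration of that subject, not taken from the paper itself, the Python sketch below computes a Fleiss-type kappa from k-agreement for k = 2 and k = 3, using the pooled marginal proportion of 1 scores as the chance term; the function name, the choice of chance term, and the example data are assumptions made for this sketch.

from math import comb

def multirater_kappa(counts, m, k=2):
    """Hypothetical sketch: multirater kappa based on k-agreement for
    binary scores. counts[i] is the number of raters assigning score 1
    to item i; m is the number of raters per item. k=2 counts agreeing
    rater pairs (a Fleiss-type kappa), k=3 agreeing rater triples."""
    n = len(counts)
    # Observed k-agreement: fraction of k-subsets of raters that agree,
    # averaged over items (math.comb returns 0 when a count is below k).
    agree = sum(comb(c, k) + comb(m - c, k) for c in counts)
    p_o = agree / (n * comb(m, k))
    # Chance k-agreement from the pooled marginal proportion of 1 scores;
    # this chance term is one common choice, assumed for the sketch.
    p1 = sum(counts) / (n * m)
    p_e = p1 ** k + (1 - p1) ** k
    return (p_o - p_e) / (1 - p_e)

# Hypothetical data: 4 items rated by 4 raters; counts of "1" per item.
ratings = [4, 0, 3, 2]
print(multirater_kappa(ratings, m=4, k=2))  # 0.4074...
print(multirater_kappa(ratings, m=4, k=3))  # 0.4074... (same value)

On this example the 2-agreement and 3-agreement versions return the same value (both 77/189 = 11/27), which illustrates the kind of equivalence between such coefficients that the paper studies for binary scores; how far the equivalence extends across different chance terms is the paper's topic, not something this sketch establishes.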
Cites Work
- \(k\)-adic similarity coefficients for binary (presence/absence) data
- A note on the linearly weighted kappa coefficient for ordinal scales
- Weighted kappa is higher than Cohen's kappa for tridiagonal agreement tables
- Inequalities between kappa and kappa-like statistics for \(k\times k\) tables
- Agreement between two independent groups of raters
- Cohen's linearly weighted kappa is a weighted average of \(2\times 2\) kappas
- A family of multi-rater kappas that can always be increased and decreased by combining categories
- Equivalences of weighted kappas for multiple raters
- Conditional inequalities between Cohen's kappa and weighted kappas
- Inequalities between multi-rater kappas
- Measuring Agreement for Multinomial Data
- The Measurement of Observer Agreement for Categorical Data
- An Application of Hierarchical Kappa-type Statistics in the Assessment of Majority Agreement among Multiple Observers
- Beyond kappa: A review of interrater agreement measures
- Statistical description of interrater variability in ordinal ratings