A family of multi-rater kappas that can always be increased and decreased by combining categories
DOI: 10.1016/j.stamet.2011.08.008
zbMath: 1365.62214
OpenAlex: W2138829810
MaRDI QID: Q2360886
Publication date: 29 June 2017
Published in: Statistical Methodology
Full work available at URL: http://hdl.handle.net/1887/18302
Keywords: Cohen's kappa; inter-rater reliability; Hubert's kappa; Mielke, Berry and Johnston's kappa; Schouten-type inequality
MSC: Measures of association (correlation, canonical correlation, etc.) (62H20); Contingency tables (62H17)
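The title's claim, that kappa-type agreement coefficients can be pushed up or down by merging categories, is easy to see numerically even in the two-rater case. Below is a minimal sketch (not from the paper, which treats multi-rater generalizations) using a made-up 3×3 agreement table and the standard Cohen's kappa, \(\kappa = (p_o - p_e)/(1 - p_e)\), where \(p_o\) is the observed proportion of agreement and \(p_e\) the agreement expected from the raters' marginal totals.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa for a square two-rater agreement table of counts."""
    p = np.array(table, dtype=float)
    p /= p.sum()                          # joint proportions
    p_o = np.trace(p)                     # observed agreement
    p_e = p.sum(axis=1) @ p.sum(axis=0)   # agreement expected from the margins
    return (p_o - p_e) / (1 - p_e)

def merge(table, i, j):
    """Merge categories i and j (i < j) in both rows and columns."""
    t = np.array(table, dtype=float)
    t[i, :] += t[j, :]
    t = np.delete(t, j, axis=0)
    t[:, i] += t[:, j]
    t = np.delete(t, j, axis=1)
    return t

# Hypothetical 3x3 agreement table (counts) for two raters.
table = np.array([[20,  5,  5],
                  [ 5, 20,  5],
                  [ 5,  5, 30]])

print(cohens_kappa(table))               # ~0.545 on the full table
print(cohens_kappa(merge(table, 0, 1)))  # ~0.583: merging raises kappa
print(cohens_kappa(merge(table, 1, 2)))  # ~0.524: merging lowers kappa
```

With this particular table, kappa is about 0.545 on the full 3×3 table, rises to about 0.583 when the first two categories are merged, and falls to about 0.524 when the last two are merged, matching the increase/decrease phenomenon the paper establishes for a whole family of multi-rater kappas.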
Related Items
- The effect of combining categories on Bennett, Alpert and Goldstein's \(S\)
- Equivalences of weighted kappas for multiple raters
- Corrected Zegers-ten Berge coefficients are special cases of Cohen's weighted kappa
- On the equivalence of multirater kappas based on 2-agreement and 3-agreement with binary scores
- A comparison of reliability coefficients for ordinal rating scales
Cites Work
- \(k\)-adic similarity coefficients for binary (presence/absence) data
- A formal proof of a paradox associated with Cohen's kappa
- On similarity coefficients for \(2\times2\) tables and correction for chance
- A note on the linearly weighted kappa coefficient for ordinal scales
- Cohen's kappa can always be increased and decreased by combining categories
- Weighted kappa is higher than Cohen's kappa for tridiagonal agreement tables
- On the equivalence of Cohen's kappa and the Hubert-Arabie adjusted Rand index
- Inequalities between kappa and kappa-like statistics for \(k\times k\) tables
- Agreement between two independent groups of raters
- Ramifications of a population model for \(\kappa\) as a coefficient of reliability
- Cohen's linearly weighted kappa is a weighted average of \(2\times 2\) kappas
- Inequalities between multi-rater kappas
- Measuring Agreement for Multinomial Data
- An Application of Hierarchical Kappa-type Statistics in the Assessment of Majority Agreement among Multiple Observers
- Statistical description of interrater variability in ordinal ratings