Equivalences of weighted kappas for multiple raters
From MaRDI portal
Publication:2360892
DOI: 10.1016/j.stamet.2011.11.001
zbMath: 1365.62216
OpenAlex: W2003055974
MaRDI QID: Q2360892
Publication date: 29 June 2017
Published in: Statistical Methodology
Full work available at URL: https://doi.org/10.1016/j.stamet.2011.11.001
Keywords: Cohen's kappa; inter-rater reliability; Cohen's weighted kappa; ordinal agreement; Mielke, Berry and Johnston's weighted kappa; Hubert's kappa; \(g\)-agreement; multiple raters
MSC classification: Measures of association (correlation, canonical correlation, etc.) (62H20); Contingency tables (62H17)
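As context for the coefficients listed in the keywords, the following is a minimal sketch (not taken from the paper) of Cohen's weighted kappa for two raters on an ordinal scale, using the common power-weighting scheme \(w_{ij} = 1 - (|i-j|/(k-1))^p\), where \(p = 1\) gives linear weights and \(p = 2\) quadratic weights. The function name and signature are illustrative, not from any cited work.

```python
import numpy as np

def weighted_kappa(ratings_a, ratings_b, n_categories, power=1):
    """Cohen's weighted kappa for two raters on an ordinal scale.

    Illustrative sketch: weights follow w_ij = 1 - (|i - j|/(k - 1))**power,
    so power=1 gives linear weights and power=2 quadratic weights.
    """
    a = np.asarray(ratings_a)
    b = np.asarray(ratings_b)
    k = n_categories
    # Observed cross-classification table, normalized to proportions.
    table = np.zeros((k, k))
    for i, j in zip(a, b):
        table[i, j] += 1
    table /= table.sum()
    # Agreement weights: 1 on the diagonal, decreasing with |i - j|.
    i_idx, j_idx = np.indices((k, k))
    w = 1.0 - (np.abs(i_idx - j_idx) / (k - 1)) ** power
    # Expected proportions under independence of the two raters.
    expected = np.outer(table.sum(axis=1), table.sum(axis=0))
    po = (w * table).sum()     # observed weighted agreement
    pe = (w * expected).sum()  # chance-expected weighted agreement
    return (po - pe) / (1.0 - pe)

# Perfect agreement yields kappa = 1; complete disagreement on a
# binary scale yields kappa = -1.
print(weighted_kappa([0, 1, 2, 2], [0, 1, 2, 2], 3))  # -> 1.0
```

With quadratic weights (`power=2`) this reduces to the quadratically weighted kappa discussed in several of the related items below; with identity weights (agreement only on the diagonal) it reduces to unweighted Cohen's kappa.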
Related Items (5)
- Corrected Zegers-ten Berge coefficients are special cases of Cohen's weighted kappa
- The dependence of chance-corrected weighted agreement coefficients on the power parameter of the weighting scheme: analysis and measurement
- Cohen's weighted kappa with additive weights
- On the equivalence of multirater kappas based on 2-agreement and 3-agreement with binary scores
- A comparison of reliability coefficients for ordinal rating scales
Cites Work
- \(n\)-way metrics
- \(k\)-adic similarity coefficients for binary (presence/absence) data
- A formal proof of a paradox associated with Cohen's kappa
- Some paradoxical results for the quadratically weighted kappa
- On similarity coefficients for \(2\times2\) tables and correction for chance
- On multi-way metricity, minimality and diagonal planes
- A note on the linearly weighted kappa coefficient for ordinal scales
- Cohen's kappa can always be increased and decreased by combining categories
- Weighted kappa is higher than Cohen's kappa for tridiagonal agreement tables
- Cohen's kappa is a weighted average
- On the equivalence of Cohen's kappa and the Hubert-Arabie adjusted Rand index
- Inequalities between kappa and kappa-like statistics for \(k\times k\) tables
- Agreement between two independent groups of raters
- Ramifications of a population model for \(\kappa\) as a coefficient of reliability
- Triadic distance models: axiomatization and least squares representation
- A comparison of the multidimensional scaling of triadic and dyadic distances
- Dispersion-weighted kappa: an integrative framework for metric and nominal scale agreement coefficients
- Cohen's linearly weighted kappa is a weighted average of \(2\times 2\) kappas
- A family of multi-rater kappas that can always be increased and decreased by combining categories
- Inequalities between multi-rater kappas
- Measuring Agreement for Multinomial Data
- The Measurement of Observer Agreement for Categorical Data
- An Application of Hierarchical Kappa-type Statistics in the Assessment of Majority Agreement among Multiple Observers
- Beyond kappa: A review of interrater agreement measures
- Statistical description of interrater variability in ordinal ratings