Measuring Agreement for Multinomial Data
DOI: 10.2307/2529886
zbMath: 0501.62045
OpenAlex: W1972586296
MaRDI QID: Q3966937
Publication date: 1982
Published in: Biometrics
Full work available at URL: https://doi.org/10.2307/2529886
Keywords: reliability; multinomial data; complete independence; asymptotic standard error; two-way layout; measure of agreement; intraclass correlation coefficients; kappa-like statistic
MSC classification: Applications of statistics to biology and medical sciences; meta analysis (62P10); Measures of association (correlation, canonical correlation, etc.) (62H20); Applications of statistics to psychology (62P15)
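The keywords above (kappa-like statistic, complete independence, asymptotic standard error) concern chance-corrected agreement between raters on multinomial data. As a rough illustration of those ideas, and not a reproduction of the paper's own estimators, the sketch below computes Cohen's kappa for two raters from a square contingency table together with the large-sample standard error under the null hypothesis of complete independence (the Fleiss-Cohen-Everitt null variance). The function name and the example table are hypothetical.

```python
import numpy as np

def cohen_kappa_with_null_se(table):
    """Cohen's kappa for a square cross-classification of two raters'
    multinomial ratings, plus the asymptotic standard error under the
    null hypothesis of complete independence (for testing kappa = 0).
    Illustrative sketch only; not the estimators from the cited paper."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p = table / n                      # joint proportions
    row = p.sum(axis=1)                # rater 1 marginal proportions
    col = p.sum(axis=0)                # rater 2 marginal proportions
    p_o = np.trace(p)                  # observed agreement
    p_e = row @ col                    # chance agreement under independence
    kappa = (p_o - p_e) / (1.0 - p_e)
    # Null-hypothesis variance (Fleiss, Cohen & Everitt, 1969)
    var0 = (p_e + p_e**2 - np.sum(row * col * (row + col))) / (n * (1.0 - p_e)**2)
    return kappa, np.sqrt(var0)

# Hypothetical 3-category ratings by two raters (counts)
ratings = [[20,  5,  2],
           [ 4, 15,  3],
           [ 1,  4, 16]]
kappa, se0 = cohen_kappa_with_null_se(ratings)
print(f"kappa = {kappa:.3f}, SE under independence = {se0:.3f}")
```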
Related Items (16)
A family of multi-rater kappas that can always be increased and decreased by combining categories
Equivalences of weighted kappas for multiple raters
Corrected Zegers-ten Berge coefficients are special cases of Cohen's weighted kappa
Assessing agreement with multiple raters on correlated kappa statistics
Statistical inference of agreement coefficient between two raters with binary outcomes
Modeling the agreement of discrete bivariate survival times using kappa coefficient
On the equivalence of multirater kappas based on 2-agreement and 3-agreement with binary scores
Inequalities between multi-rater kappas
Bayesian testing of agreement criteria under order constraints
Sklar's Omega: A Gaussian Copula-Based Framework for Assessing Agreement
Beyond kappa: A review of interrater agreement measures
Statistical description of interrater variability in ordinal ratings
Cohen's linearly weighted kappa is a weighted average of \(2\times 2\) kappas
Weighted kappa as a function of unweighted kappas
Confidence intervals for the interrater agreement measure kappa
A comparison of reliability coefficients for ordinal rating scales