An Application of Hierarchical Kappa-type Statistics in the Assessment of Majority Agreement among Multiple Observers
From MaRDI portal
Publication: 4130221
DOI: 10.2307/2529786
zbMath: 0357.62037
OpenAlex: W4294214797
Wikidata: Q44453930
Scholia: Q44453930
MaRDI QID: Q4130221
J. Richard Landis, Gary G. Koch
Publication date: 1977
Published in: Biometrics
Full work available at URL: https://doi.org/10.2307/2529786
Related Items (27)
- A family of multi-rater kappas that can always be increased and decreased by combining categories
- Equivalences of weighted kappas for multiple raters
- Conditional inequalities between Cohen's kappa and weighted kappas
- AGREEMENT ON A TWO-POINT SCALE
- Generalized Symmetry Models for Hypercubic Concordance Tables
- Modeling Clustered Ordered Categorical Data: A Survey
- Communication and efficiency in competitive coordination games
- Penguins Go Parallel: A Grammar of Graphics Framework for Generalized Parallel Coordinate Plots
- A reliability index for presence-absence data
- On the equivalence of multirater kappas based on 2-agreement and 3-agreement with binary scores
- Inference for Kappas for Longitudinal Study Data: Applications to Sexual Health Research
- Conditional inference for subject-specific and marginal agreement: Two families of agreement measures
- Variance estimation of nominal-scale inter-rater reliability with random selection of raters
- A unified treatment of agreement coefficients and their asymptotic results: the formula of the weighted mean of weighted ratios
- Hidden action and outcome contractibility: an experimental test of moral hazard theory
- Cohen's linearly weighted kappa is a weighted average
- Statistical description of interrater variability in ordinal ratings
- Assessing the reliability of ordered categorical scales using kappa-type statistics
- Cohen's linearly weighted kappa is a weighted average of \(2\times 2\) kappas
- Measuring pairwise interobserver agreement when all subjects are judged by the same observers
- Random Effects Modeling Approaches for Estimating ROC Curves from Repeated Ordinal Tests without a Gold Standard
- Weighted Least-Squares Approach for Comparing Correlated Kappa
- Modeling Association plus Agreement among Multi-Raters for Ordered Categories
- Comparing marginal distributions of large, sparse contingency tables
- On population-based measures of agreement for binary classifications
- Unnamed Item
- The effect of access to clean technology on pollution reduction: an experiment