An Application of Hierarchical Kappa-type Statistics in the Assessment of Majority Agreement among Multiple Observers

From MaRDI portal
Publication:4130221

DOI: 10.2307/2529786 · zbMath: 0357.62037 · OpenAlex: W4294214797 · Wikidata: Q44453930 · Scholia: Q44453930 · MaRDI QID: Q4130221

J. Richard Landis, Gary G. Koch

Publication date: 1977

Published in: Biometrics

Full work available at URL: https://doi.org/10.2307/2529786




Related Items (27)

- A family of multi-rater kappas that can always be increased and decreased by combining categories
- Equivalences of weighted kappas for multiple raters
- Conditional inequalities between Cohen's kappa and weighted kappas
- AGREEMENT ON A TWO–POINT SCALE
- Generalized Symmetry Models for Hypercubic Concordance Tables
- Modeling Clustered Ordered Categorical Data: A Survey
- Communication and efficiency in competitive coordination games
- Penguins Go Parallel: A Grammar of Graphics Framework for Generalized Parallel Coordinate Plots
- A reliability index for presence-absence data
- On the equivalence of multirater kappas based on 2-agreement and 3-agreement with binary scores
- Inference for Kappas for Longitudinal Study Data: Applications to Sexual Health Research
- Conditional inference for subject‐specific and marginal agreement: Two families of agreement measures
- Variance estimation of nominal-scale inter-rater reliability with random selection of raters
- A unified treatment of agreement coefficients and their asymptotic results: the formula of the weighted mean of weighted ratios
- Hidden action and outcome contractibility: an experimental test of moral hazard theory
- Cohen's linearly weighted kappa is a weighted average
- Statistical description of interrater variability in ordinal ratings
- Assessing the reliability of ordered categorical scales using kappa-type statistics
- Cohen's linearly weighted kappa is a weighted average of \(2\times 2\) kappas
- Measuring pairwise interobserver agreement when all subjects are judged by the same observers
- Random Effects Modeling Approaches for Estimating ROC Curves from Repeated Ordinal Tests without a Gold Standard
- Weighted Least-Squares Approach for Comparing Correlated Kappa
- Modeling Association plus Agreement among Multi-Raters for Ordered Categories
- Comparing marginal distributions of large, sparse contingency tables
- On population‐based measures of agreement for binary classifications
- Unnamed Item
- The effect of access to clean technology on pollution reduction: an experiment




This page was built for publication: An Application of Hierarchical Kappa-type Statistics in the Assessment of Majority Agreement among Multiple Observers