
Inter-observer and intra-observer variability of the Oxford clinical cataract classification and grading system

Published in: International Ophthalmology

Abstract

Intra-observer (within-observer) and inter-observer (between-observer) variability of the Oxford Clinical Cataract Classification and Grading System were studied. Twenty cataracts were examined and scored independently by four observers. On a separate occasion, two of the observers repeated their assessments of the same cataracts without access to their initial observations. The chance-corrected and weighted kappa statistics for observer agreement, both inter-observer and intra-observer, demonstrated satisfactory repeatability of the cataract grading system. The overall intra-observer mean weighted kappa was κw = +0.68 (range of SE κ = 0.012–0.052), and the overall inter-observer mean weighted kappa was κw = +0.55 (range of SE κ = 0.011–0.043).
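The weighted kappa statistic behind these figures credits partial agreement between ordinal grades: disagreements are penalized in proportion to how far apart the two grades lie, and the result is corrected for chance agreement. A minimal sketch of a linearly weighted kappa for two observers, using hypothetical grading data (not the study's), might look like this:

```python
# Illustrative sketch (not the paper's code): Cohen's linearly weighted kappa
# for two observers grading the same lenses on an ordinal 0..(k-1) scale.
def weighted_kappa(rater_a, rater_b, n_categories):
    """Linearly weighted kappa for two equal-length lists of ordinal grades."""
    n = len(rater_a)
    # Observed joint probability of each (grade_a, grade_b) pair
    obs = [[0.0] * n_categories for _ in range(n_categories)]
    for a, b in zip(rater_a, rater_b):
        obs[a][b] += 1.0 / n
    # Marginal grade distributions for each observer
    pa = [sum(row) for row in obs]
    pb = [sum(obs[i][j] for i in range(n_categories)) for j in range(n_categories)]
    # Linear disagreement weight: 0 on the diagonal, growing with grade distance
    w = lambda i, j: abs(i - j) / (n_categories - 1)
    # Observed vs chance-expected weighted disagreement
    d_obs = sum(w(i, j) * obs[i][j]
                for i in range(n_categories) for j in range(n_categories))
    d_exp = sum(w(i, j) * pa[i] * pb[j]
                for i in range(n_categories) for j in range(n_categories))
    return 1.0 - d_obs / d_exp

# Hypothetical example: two observers grading ten cataracts on a 0-3 scale
a = [0, 1, 2, 3, 2, 1, 0, 2, 3, 1]
b = [0, 1, 2, 2, 2, 1, 1, 2, 3, 1]
print(round(weighted_kappa(a, b, 4), 2))  # prints 0.8
```

A kappa of +1 indicates perfect agreement and 0 indicates agreement no better than chance; the study's values of +0.68 (intra-observer) and +0.55 (inter-observer) fall in the range conventionally read as substantial to moderate agreement.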




Cite this article

Sparrow, J.M., Ayliffe, W., Bron, A.J. et al. Inter-observer and intra-observer variability of the Oxford clinical cataract classification and grading system. Int Ophthalmol 11, 151–157 (1988). https://doi.org/10.1007/BF00130616
