Abstract
This paper provides and applies a conceptual framework and a list of guiding principles for the evaluation of generalist education programs. Programs are systematic efforts to achieve specified objectives. Evaluations gather data in order to improve or appraise programs and have a continuum of purposes and methods. Descriptive evaluations characterize the structures, processes, and outcomes of programs; research evaluations definitively assess the effectiveness of a program in terms of outcomes. Intermediate outcomes are changes in the knowledge, attitudes, and skills of program participants; conclusive outcomes reflect the quality of performance of graduates in actual clinical situations. Outcomes are affected by inputs—the qualities of students entering the program. Guiding principles of program evaluation ensure that the data gathered are useful. The authors illustrate the guiding principles with a pilot study that determined that expert pediatricians, general internists, and family practitioners could agree on key generalist competencies and that explored evaluation designs based on these competencies. Finally, they consider the implications of undertaking generalist education evaluation.
With the Generalist Program Evaluation Working Group: WILLIAM BITHONEY, MD; LINDA BLANK; EVAN CHARNEY, MD; JACK ENDE, MD; DONA L. HARRIS, PHD; PAUL MCCARTHY, MD; STEVEN P. SHELOV, MD; DAVID SWEE, MD.
Working group members are from the Department of Medicine in Boston’s Children’s Hospital, Boston, Massachusetts (WB); the American Board of Internal Medicine, Philadelphia, Pennsylvania (LB); the Department of Pediatrics, University of Massachusetts Medical School, Worcester, Massachusetts (EC); the Division of General Internal Medicine, University of Pennsylvania School of Medicine, Philadelphia, Pennsylvania (JE); Michigan State University Kalamazoo Center for Medical Studies, Family Medicine, Kalamazoo, Michigan (DLH); the Department of Pediatrics, Yale University School of Medicine, New Haven, Connecticut (PMcC); the Department of Pediatrics, Albert Einstein College of Medicine, Bronx, New York (SPS); and the Department of Family Medicine, UMDNJ-Robert Wood Johnson Medical School, New Brunswick, New Jersey (DS).
Rubenstein, L.V., Fink, A., Gelberg, L. et al. Evaluating generalist education programs. J Gen Intern Med 9 (Suppl 1), S64–S72 (1994). https://doi.org/10.1007/BF02598120