
Use of the probit model to estimate school performance in student attainment of achievement testing standards


Abstract

In the USA, trends in educational accountability have driven several models attempting to provide quality data for decision making at the national, state, and local levels regarding the success of schools in meeting standards for competence. Statistical methods to generate data for such decisions have generally included (a) status models that examine simple counts of students meeting a criterion level of achievement, (b) growth models that explore change over the course of one or more years, and (c) value-added models that attempt to control for factors deemed relevant to student achievement patterns. This study examined a new strategy for student and school achievement modeling that augments the field through use of the probit model to estimate the likelihood that individual students meet an established achievement standard and to estimate the proportion of students within a school meeting that standard. Results of the study showed that the probit model was an effective tool both for providing such estimates and for adjusting them based upon salient demographic variables. Implications of these results and suggestions for further use of the model are discussed.
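To make the strategy concrete, the sketch below illustrates the general approach in Python: fit a probit regression to a binary indicator of whether each student met the standard, then average the predicted probabilities within each school to obtain a covariate-adjusted estimate of the proportion of students meeting the standard. The data are simulated and the predictor names (prior_score, frl) are hypothetical stand-ins; this is a minimal illustration of the technique, not the authors' actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import norm

# Simulate student-level data for 20 schools (variable names are
# hypothetical stand-ins, not the predictors used in the study).
rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "school": rng.integers(0, 20, n),
    "prior_score": rng.normal(200, 10, n),  # e.g., a prior-year scale score
    "frl": rng.integers(0, 2, n),           # free/reduced-price lunch flag
})

# Generate a binary "met the standard" outcome from a probit process.
linpred = -40 + 0.2 * df["prior_score"] - 0.3 * df["frl"]
df["met_standard"] = (rng.random(n) < norm.cdf(linpred)).astype(int)

# Fit the probit model: P(met) = Phi(b0 + b1*prior_score + b2*frl).
fit = smf.probit("met_standard ~ prior_score + frl", data=df).fit(disp=False)

# Student-level predicted probabilities of meeting the standard.
df["p_hat"] = fit.predict(df)

# School-level summary: the mean predicted probability within a school
# serves as a covariate-adjusted estimate of the proportion of its
# students expected to meet the standard.
school_est = df.groupby("school")["p_hat"].mean()
print(school_est.round(3))
```

Averaging predicted probabilities within schools, rather than tabulating raw pass rates, is what allows the school-level estimate to be adjusted for demographic variables included in the model.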


Author information

Correspondence to W. Holmes Finch.

Cite this article

Finch, W.H., Cassady, J.C. Use of the probit model to estimate school performance in student attainment of achievement testing standards. Educ Asse Eval Acc 26, 177–201 (2014). https://doi.org/10.1007/s11092-013-9186-6
