Abstract
We appreciate the thoughtful reactions of our colleagues to the “no prescriptives, please” proposal of Robinson et al. (2013), as well as the opportunity to respond to them. For the most part, our colleagues agree that unwarranted recommendations for practice appear too often in educational research journals. At the same time, because our proposed policy is rather draconian—in that it seeks to eliminate all recommendations for practice in all primary research journals—every responder deemed it too extreme. That was intentional on our part. If, as Harris (2013) suggested, increased awareness and scrutiny become topics for further discussion among APA editors, then our modest mission will have been accomplished. In this rejoinder, we restate and clarify our proposal, along with its entailments, and then comment briefly on each responder's response.
Notes
Although the incidence of effect size reporting has increased dramatically in education and psychology journals over the past decade, such reporting generally occurs in an unthinking, indiscriminate fashion (Peng et al. 2013). Because no randomized intervention experiment was conducted at the time or throughout the decade, it is not possible to determine exactly how much of the increase in authors' reporting of effect sizes and confidence intervals is attributable to the manual's “encouragement” of them per se.
References
Alexander, P. A. (2013). In praise of (reasoned and reasonable) speculation: a response to Robinson et al.’s moratorium on recommendations for practice. Educational Psychology Review. doi:10.1007/s10648-013-9234-2.
Bracht, G. H., & Glass, G. V. (1968). The external validity of experiments. American Educational Research Journal, 5, 437–474.
Cahnmann, M. (2003). The craft, practice, and possibility of poetry in educational research. Educational Researcher, 32(3), 29–36.
Campbell, D. T., & Stanley, J. C. (1966). Experimental and quasi-experimental designs for research. Chicago: Rand McNally.
Carver, R. (1993). The case against statistical significance testing, revisited. The Journal of Experimental Education, 61, 287–292.
Harris, K. R. (2003). Editorial: is the work as good as it can be? Journal of Educational Psychology, 95, 451–452.
Harris, K. R. (2013). Disallowing recommendations for practice and policy: a proposal that is both too much and too little. Educational Psychology Review. doi:10.1007/s10648-013-9235-1.
Hedges, L. V. (2013). Recommendations for practice: justifying claims of generalizability. Educational Psychology Review. doi:10.1007/s10648-013-9239-x.
Kaestle, C. F. (1993). The awful reputation of education research. Educational Researcher, 22(1), 23–31.
Levin, J. R., O'Donnell, A. M., & Kratochwill, T. R. (2003). Educational/psychological intervention research. In I. B. Weiner (Series Ed.), W. M. Reynolds, & G. E. Miller (Vol. Eds.), Handbook of psychology: Vol. 7. Educational psychology (pp. 557–581). New York: Wiley.
Mosteller, F., Nave, B., & Miech, E. J. (2004). Why we need a structured abstract in education research. Educational Researcher, 33(1), 29–34.
Peng, C.-Y., Chen, L.-T., Chiang, H.-M., & Chiang, Y.-C. (2013). The impact of APA and AERA guidelines on effect size reporting. Educational Psychology Review, 25, 157–209.
Reinhart, A. L., Haring, S. H., Levin, J. R., Patall, E. A., & Robinson, D. H. (2013). Models of not-so-good behavior: yet another way to squeeze causality and recommendations for practice out of correlational data. Journal of Educational Psychology, 105, 241–247.
Renkl, A. (2013). Why practice recommendations are important in use-inspired basic research and why too much caution is dysfunctional. Educational Psychology Review. doi:10.1007/s10648-013-9236-0.
Robinson, D. H., & Levin, J. R. (2013). A “no prescriptives, please” proposal postscript: when desperate times require desperate measures. Educational Psychology Review. doi:10.1007/s10648-013-9238-y.
Schmidt, F. L. (1996). Statistical significance testing and cumulative knowledge in psychology: implications for training of researchers. Psychological Methods, 1, 115–129.
Schraw, G., & Patall, E. A. (2013). Using principles of evidence-based practice to improve prescriptive recommendations. Educational Psychology Review. doi:10.1007/s10648-013-9237-z.
Sroufe, G. E. (1997). Improving the “awful reputation” of education research. Educational Researcher, 26(7), 26–28.
Thompson, B. (1996). AERA editorial policies regarding statistical significance testing: three suggested reforms. Educational Researcher, 25(2), 26–30.
Vaughn, S., & Fuchs, L. S. (2013). Staying within one's data to make recommendations for practice in primary educational research journals. Educational Psychology Review. doi:10.1007/s10648-013-9232-4.
Wainer, H., & Clauser, B. (2013). Reflections on a too extreme idea. Educational Psychology Review. doi:10.1007/s10648-013-9231-5.
Walster, G. W., & Cleary, T. A. (1970). A proposal for a new editorial policy in the social sciences. The American Statistician, 24, 16–19.
Cite this article
Robinson, D.H., Levin, J.R. A “No Prescriptives, Please” Proposal Postscript: When Desperate Times Require Desperate Measures. Educ Psychol Rev 25, 353–359 (2013). https://doi.org/10.1007/s10648-013-9238-y