
Evaluating multi-treatment programs: theory and evidence from the U.S. Job Training Partnership Act experiment

  • Original Paper, Empirical Economics

Abstract

This paper considers the evaluation of programs that offer multiple treatments to their participants. Our theoretical discussion outlines the tradeoffs associated with evaluating the program as a whole versus separately evaluating the various individual treatments. Our empirical analysis considers the value of disaggregating multi-treatment programs using data from the U.S. National Job Training Partnership Act Study. This study includes both experimental data, which serve as a benchmark, and non-experimental data. The JTPA experiment divides the program into three treatment “streams” centered on different services. Unlike previous work that analyzes the program as a whole, we analyze the streams separately. Despite our relatively small sample sizes, our findings illustrate the potential for valuable insights into program operation and impact to get lost when aggregating treatments. In addition, we show that many of the lessons drawn from analyzing JTPA as a single treatment carry over to the individual treatment streams.
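The aggregation point can be illustrated with a stylized simulation. The sketch below is purely hypothetical: the stream names echo the JTPA service streams, but the impact sizes, sample sizes, and earnings distribution are invented for illustration and are not estimates from the study. Random assignment within each stream identifies stream-specific impacts; pooling the streams recovers only their enrollment-weighted average, hiding the heterogeneity across streams.

```python
import random

random.seed(0)

# Hypothetical true earnings impacts by treatment stream (illustrative only,
# not JTPA estimates), and hypothetical stream enrollments.
true_impact = {"classroom_training": 500.0, "ojt_jsa": 1200.0, "other_services": 100.0}
stream_sizes = {"classroom_training": 4000, "ojt_jsa": 3000, "other_services": 2000}

records = []  # (stream, treated, outcome) tuples
for stream, n in stream_sizes.items():
    for _ in range(n):
        treated = random.random() < 2 / 3          # 2:1 treatment/control split
        baseline = random.gauss(10000.0, 2000.0)   # hypothetical baseline earnings
        outcome = baseline + (true_impact[stream] if treated else 0.0)
        records.append((stream, treated, outcome))

def mean_impact(rows):
    """Experimental impact estimate: difference in mean outcomes, treated vs control."""
    t = [y for _, d, y in rows if d]
    c = [y for _, d, y in rows if not d]
    return sum(t) / len(t) - sum(c) / len(c)

# Pooled estimate treats the program as a single treatment; the per-stream
# estimates disaggregate it, revealing very different impacts by stream.
pooled = mean_impact(records)
by_stream = {s: mean_impact([r for r in records if r[0] == s]) for s in stream_sizes}
```

In this sketch the pooled estimate lands near the enrollment-weighted average of the stream impacts, so a single-treatment analysis would report a moderate positive effect while masking that one hypothetical stream has a large impact and another essentially none.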




Author information


Correspondence to Jeffrey Smith.

Additional information

An earlier version of this paper circulated under the title “Choosing among Alternative Non-Experimental Impact Estimators: The Case of Multi-Treatment Programs”.


Cite this article

Plesca, M., Smith, J. Evaluating multi-treatment programs: theory and evidence from the U.S. Job Training Partnership Act experiment. Empirical Economics 32, 491–528 (2007). https://doi.org/10.1007/s00181-006-0095-0

