Opportunities, Devices, and Instruments

Part of the book series: Springer Series in Statistics (SSS)

Abstract

What features of the design of an observational study affect its ability to distinguish a treatment effect from bias due to an unmeasured covariate \(u_{ij}\)? This topic, which is the focus of Part III of the book, is sketched in informal terms in the current chapter. An opportunity is an unusual setting in which there is less confounding with unobserved covariates than occurs in common settings. One opportunity may be the base on which one or more natural experiments are built. A device is information collected in an effort to disambiguate an association that might otherwise be thought to reflect either an effect or a bias. Typical devices include: multiple control groups, outcomes thought to be unaffected by the treatment, coherence among several outcomes, and varied doses of treatment. An instrument is a relatively haphazard nudge towards acceptance of treatment where the nudge itself can affect the outcome only if it prompts acceptance of the treatment. Although competing theories structure design, opportunities, devices, and instruments are ingredients from which designs are built.


Notes

  1.

    Citing Bentham, the Oxford English Dictionary writes: “‘disambiguate’: verb, to remove ambiguity from.”

  2.

    The analysis as I have done it in Table 5.1 is unproblematic because the situation is so dramatic, with t’s either above 10 or below 1. In less dramatic situations, this analysis is not appropriate, for several reasons. First, a more powerful test for effect would use both control groups at once. Second, Table 5.1 performs several tests with no effort to control the rate of misstatements by these tests. Third, Table 5.1 takes the absence of a difference between the two control groups as supporting their comparability, but the failure to reject a null hypothesis is not evidence in favor of that hypothesis. For instance, the two control groups might not differ significantly because of limited power, or else they might differ significantly but the differences might be too small to invalidate their usefulness in the main comparison with the treated group. These issues will be discussed with greater care in Sect. 23.3. (A small numerical sketch of the three group comparisons appears after these notes.)

  3.

    Hill [63] used the attractive term “coherence” but did not give it a precise meaning. Campbell [32] used the term “multiple operationalism” in a more technical sense, one that is quite consistent with the discussion in the current section. Trochim [165] uses the term “pattern matching” in a similar way. Reynolds and West [113] present a compelling application.

  4.

    In an obvious way, in adding the two signed rank statistics, one is committing oneself to a particular direction of effect for the two outcomes. If one anticipated gains in math scores together with declines in verbal scores, one might replace the verbal scores by their negation before summing the signed rank statistics. Because the two outcomes are ranked separately, approximately equal weight is being given to each of the outcomes. The coherent signed rank statistic may be used with more than two oriented outcomes. It may also be adjusted to include varied doses of treatment [124]. (A minimal computational sketch appears after these notes.)

  5.

    Specificity of a treatment effect in Hill’s sense [63] is sometimes understood as referring to the number of outcomes associated with the treatment, but more recent work has emphasized the absence of associations with outcomes the treatment is not expected to affect [118, 172].

  6.

    Consistency and unbiasedness are two concepts of minimal competence for a test of a null hypothesis \(H_0\) against an alternative hypothesis \(H_A\). Consistency says the test would work if the sample size were large enough. Unbiasedness says the test is oriented in the correct direction in samples of all sizes. One would be hard pressed to say the test is actually a test of \(H_0\) against \(H_A\) if consistency and unbiasedness failed in a material way. To be a 5% level test of \(H_0\), the chance of a P-value less than 0.05 must be at most 5% when \(H_0\) is true. The power of a test of a null hypothesis, \(H_0\), against an alternative hypothesis, \(H_A\), is the probability that \(H_0\) will be rejected when \(H_A\) is true. If the test is performed at the 5% level, then the power of the test is the probability of a P-value less than or equal to 0.05 when \(H_0\) is false and \(H_A\) is true instead. We would like the power to be high. The test is consistent against \(H_A\) if the power increases to 1 as the sample size increases—that is, rejection of \(H_0\) in favor of \(H_A\) is nearly certain if \(H_A\) is true and the sample size is large enough. The test is an unbiased test of \(H_0\) against \(H_A\) if the power is at least equal to the level whenever \(H_A\) is true. If the test is performed at the 5% level, then it is unbiased against \(H_A\) if the power is at least 5% when \(H_A\) is true. (A small simulation illustrating these definitions appears after these notes.)

  7.

    More precisely, the Kullback–Leibler information in the unaffected outcome is never greater, and is typically much smaller, than the information in the unmeasured covariate itself [118]. (The standard definition of Kullback–Leibler information is recalled after these notes.)

  8.

    When \(\mathcal{F}\) was introduced in Chap. 2, treatment was applied at a single dose, and so doses were not mentioned. In general, if there are fixed doses, one dose \(d_{i}\) for each pair \(i\), then the doses are also part of \(\mathcal{F}\). Because previous discussions involving \(\mathcal{F}\) had a single dose, we may adopt the new definition that includes doses in \(\mathcal{F}\) without altering the content of those previous discussions.

  9.

    As in Note 8, when \(\mathcal{F}\) was defined in Chap. 2, the potential doses \(\left( d_{Tij},d_{Cij}\right)\) were always equal to \(\left( 1,0\right)\) and so were not mentioned. In general, if there are potential doses \(\left( d_{Tij},d_{Cij}\right)\), then they are part of \(\mathcal{F}\). Because previous discussions involving \(\mathcal{F}\) had a single dose, we may adopt the new definition that includes doses in \(\mathcal{F}\) without altering the content of those previous discussions. (A display spelling out this extended definition appears after these notes.)

  10.

    Recall Note 9. If the hypothesis \(H_{0}: r_{Tij}-r_{Cij}=\beta_{0}\left( d_{Tij}-d_{Cij}\right)\) is true, then \(R_{ij}-\beta_{0}D_{ij}=a_{ij}\) is fixed, not varying with \(Z_{ij}\). In other words, because the \(\left( r_{Tij},r_{Cij},d_{Tij},d_{Cij}\right)\)’s are part of \(\mathcal{F}\), if \(H_{0}\) is true, then \(\beta_{0}=\left( r_{Tij}-r_{Cij}\right) /\left( d_{Tij}-d_{Cij}\right)\) is determined by \(\mathcal{F}\), so, using (5.4), the quantity \(a_{ij}\) may be calculated from \(\mathcal{F}\). (A minimal sketch of the corresponding adjusted-response test appears after these notes.)

  11.

    Part III of this book develops the concept of design sensitivity, the limiting sensitivity to bias as the sample size increases, \(I\rightarrow \infty\). In [43, Table 2, \(\beta -\beta_{0}=0.5\)], the design sensitivity is \(\widetilde{\varGamma}=1.73\) with 50% compliers, or \(\widetilde{\varGamma}=1.11\) with 10% compliers. In this specific situation, with 10% compliers, results will be sensitive to a bias \(\varGamma >1.11\) in sufficiently large samples. These calculations assume that Wilcoxon’s signed rank statistic is the basis for the test. The quoted results from [43] make use of design sensitivity and the Bahadur efficiency of a sensitivity analysis with an instrument. See Sect. 19.5 and [140] for discussion of the Bahadur efficiency of a sensitivity analysis. (A rough sketch of a sensitivity bound for the signed rank statistic appears after these notes.)
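
The sketches and displays below illustrate several of the notes above. They use invented placeholder data and hypothetical variable names, and they are rough Python and LaTeX illustrations rather than code, notation, or calculations taken from the book or its references. First, for Note 2, the three group comparisons for data in the style of Table 5.1: treated versus each control group, and the two control groups against each other. The caveats in that note apply unchanged: a better test would use both control groups at once, the three tests are not adjusted for multiplicity, and a nonsignificant control-versus-control comparison does not establish comparability.

    # Hypothetical illustration for Note 2: a treated group compared with two
    # control groups.  The values are invented placeholders, not data from Table 5.1.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    treated = rng.normal(loc=3.0, scale=1.0, size=40)    # dramatic apparent effect
    control1 = rng.normal(loc=0.0, scale=1.0, size=40)
    control2 = rng.normal(loc=0.1, scale=1.0, size=40)   # similar to the first control group

    comparisons = {
        "treated vs control group 1": (treated, control1),
        "treated vs control group 2": (treated, control2),
        "control group 1 vs control group 2": (control1, control2),
    }
    for label, (a, b) in comparisons.items():
        res = stats.ttest_ind(a, b, equal_var=False)     # Welch two-sample t-test
        print(f"{label}: t = {res.statistic:6.2f}, two-sided P = {res.pvalue:.3g}")

    # Caveats from Note 2: these are three separate tests without multiplicity
    # control, and a nonsignificant control-vs-control comparison is not, by
    # itself, evidence that the two control groups are comparable.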
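
For Note 4, a minimal sketch of the coherent signed rank statistic, assuming the treated-minus-control differences for the two outcomes are held in arrays (math_diff and verbal_diff are placeholder names): each outcome's absolute differences are ranked separately, Wilcoxon's signed rank statistic is computed for each, the outcome anticipated to decline is negated, and the two statistics are added.

    # Minimal sketch of the coherent signed rank statistic of Note 4.
    # math_diff and verbal_diff are placeholder treated-minus-control
    # differences within pairs for the two outcomes.
    import numpy as np
    from scipy.stats import rankdata

    def signed_rank(d):
        """Wilcoxon's signed rank statistic: sum of ranks of |d| over pairs with d > 0."""
        d = np.asarray(d, dtype=float)
        d = d[d != 0]                    # one common convention: drop zero differences
        ranks = rankdata(np.abs(d))      # average ranks are used for ties
        return ranks[d > 0].sum()

    rng = np.random.default_rng(1)
    math_diff = rng.normal(0.4, 1.0, size=50)      # gains anticipated
    verbal_diff = rng.normal(-0.3, 1.0, size=50)   # declines anticipated

    # Orient both outcomes in the anticipated direction, then add the two
    # separately ranked statistics, giving the outcomes roughly equal weight.
    coherent = signed_rank(math_diff) + signed_rank(-verbal_diff)
    print("coherent signed rank statistic:", coherent)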
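
For Note 6, a small simulation that estimates the power of the one-sided Wilcoxon signed rank test at the 5% level under an assumed additive shift. Consistency appears as the estimated power tending to 1 as the number of pairs grows; unbiasedness appears as the estimated power staying at or above the 5% level.

    # Simulation sketch for Note 6: estimated power of the one-sided Wilcoxon
    # signed rank test at the 5% level under an assumed additive shift of 0.3.
    import numpy as np
    from scipy.stats import wilcoxon

    def estimated_power(n_pairs, shift=0.3, level=0.05, n_sim=2000, seed=2):
        rng = np.random.default_rng(seed)
        rejections = 0
        for _ in range(n_sim):
            d = rng.normal(loc=shift, scale=1.0, size=n_pairs)   # paired differences under H_A
            if wilcoxon(d, alternative="greater").pvalue <= level:
                rejections += 1
        return rejections / n_sim

    for n in (10, 25, 50, 100, 200):
        print(n, "pairs: estimated power", estimated_power(n))

    # Consistency: the estimates approach 1 as the number of pairs grows.
    # Unbiasedness: each estimate should be at least the level, here 0.05.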
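
For Note 7, a reminder of the standard definition of the Kullback–Leibler information between distributions with densities or mass functions \(p\) and \(q\); the comparison of the information in an unaffected outcome with the information in the unmeasured covariate itself is the result of [118] and is not reproduced here:

\[
K(p,q)\;=\;\int p(y)\,\log \frac{p(y)}{q(y)}\,dy,
\]

with the integral replaced by a sum in the discrete case.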
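
For Notes 8 and 9, assuming the Chap. 2 convention that \(\mathcal{F}\) collects the quantities fixed before treatment assignment (the potential responses, the observed covariates \(\mathbf{x}_{ij}\), and the unobserved covariate \(u_{ij}\)), the extended definition that also includes the potential doses is, roughly,

\[
\mathcal{F}=\left\{ \left( r_{Tij},\,r_{Cij},\,d_{Tij},\,d_{Cij},\,\mathbf{x}_{ij},\,u_{ij}\right) ,\ i=1,\ldots ,I,\ j=1,2\right\} .
\]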
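
For Note 10, a sketch of one way to test the dose-response hypothesis by adjusting the responses, assuming paired data held in arrays with placeholder names. Under \(H_{0}\) the adjusted responses are fixed quantities, and the random assignment within each pair decides which of the two appears in the treated column, so the within-pair difference of adjusted responses is symmetric about zero.

    # Sketch for Note 10: testing H_0: r_T - r_C = beta0 * (d_T - d_C) in matched
    # pairs by adjusting responses and applying Wilcoxon's signed rank test to the
    # within-pair treated-minus-control differences of the adjusted responses.
    import numpy as np
    from scipy.stats import wilcoxon

    def adjusted_response_test(r_treated, r_control, d_treated, d_control, beta0):
        """Two-sided signed rank P-value for the dose-response hypothesis at beta0."""
        a_treated = np.asarray(r_treated) - beta0 * np.asarray(d_treated)
        a_control = np.asarray(r_control) - beta0 * np.asarray(d_control)
        # Under H_0 the adjusted responses are fixed quantities a_ij; random
        # assignment within each pair decides which one appears as treated, so
        # the difference below is symmetric about zero when H_0 is true.
        return wilcoxon(a_treated - a_control, alternative="two-sided").pvalue

    rng = np.random.default_rng(3)
    n = 30
    d_t, d_c = np.ones(n), np.zeros(n)                   # doses: 1 if treated, 0 if control
    r_c = rng.normal(size=n)                             # control units' responses
    r_t = rng.normal(size=n) + 0.5 * (d_t - d_c)         # treated units' responses, dose effect 0.5

    print("P-value at beta0 = 0.5:", adjusted_response_test(r_t, r_c, d_t, d_c, beta0=0.5))
    print("P-value at beta0 = 0.0:", adjusted_response_test(r_t, r_c, d_t, d_c, beta0=0.0))
    # Inverting this test over a grid of beta0 values yields a confidence set.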
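
Finally, for Note 11, a sketch of a standard large-sample bound used in sensitivity analyses for Wilcoxon's signed rank statistic at a given \(\varGamma\) (normal approximation, no ties); this is not the design-sensitivity calculation of [43]. With \(\lambda =\varGamma /(1+\varGamma )\) and \(I\) pairs, the bounding null distribution has expectation \(\lambda I(I+1)/2\) and variance \(\lambda (1-\lambda )I(I+1)(2I+1)/6\).

    # Sketch for Note 11: large-sample upper bound on the one-sided P-value for
    # Wilcoxon's signed rank statistic under the sensitivity model with parameter
    # gamma (normal approximation, no ties).  Not the calculation reported in [43].
    import numpy as np
    from scipy.stats import norm, rankdata

    def signed_rank_statistic(d):
        d = np.asarray(d, dtype=float)
        d = d[d != 0]
        return rankdata(np.abs(d))[d > 0].sum(), d.size

    def upper_bound_pvalue(d, gamma):
        T, I = signed_rank_statistic(d)
        lam = gamma / (1.0 + gamma)                      # worst-case chance of a positive difference
        mu = lam * I * (I + 1) / 2.0
        var = lam * (1.0 - lam) * I * (I + 1) * (2 * I + 1) / 6.0
        return norm.sf((T - mu) / np.sqrt(var))          # one-sided upper bound

    rng = np.random.default_rng(4)
    d = rng.normal(loc=0.5, scale=1.0, size=200)         # placeholder paired differences
    for gamma in (1.0, 1.11, 1.5, 2.0):
        print(f"Gamma = {gamma}: upper bound on one-sided P = {upper_bound_pvalue(d, gamma):.3g}")

    # Results are called sensitive at the smallest Gamma for which this upper
    # bound exceeds the chosen level.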

Bibliography

  1. Abadie, A., Cattaneo, M.D.: Econometric methods for program evaluation. Ann. Rev. Econ. 10, 465–503 (2018)

  2. Abadie, A., Gardeazabal, J.: Economic costs of conflict: a case study of the Basque Country. Am. Econ. Rev. 93, 113–132 (2003)

  3. Angrist, J.D., Krueger, A.B.: Empirical strategies in labor economics. In: Ashenfelter, O., Card, D. (eds.) Handbook of Labor Economics, vol. 3, pp. 1277–1366. Elsevier, New York (1999)

  4. Angrist, J.D., Lavy, V. : Using Maimonides’ rule to estimate the effect of class size on scholastic achievement. Q. J. Econ. 114, 533–575 (1999)

  5. Angrist, J., Lavy, V.: New evidence on classroom computers and pupil learning. Econ. J. 112, 735–765 (2002)

  6. Angrist, J.D. , Imbens, G.W. , Rubin, D.B. : Identification of causal effects using instrumental variables (with Discussion). J. Am. Stat. Assoc. 91, 444–455 (1996)

  7. Anthony, J.C. , Breitner, J.C., Zandi, P.P., Meyer, M.R., Jurasova, I., Norton, M.C., Stone, S.V.: Reduced prevalence of AD in users of NSAIDs and H2 receptor antagonists. Neurology 54, 2066–2071 (2000)

  8. Ares, M., Hernandez, E.: The corrosive effect of corruption on trust in politicians: evidence from a natural experiment. Res. Politics April–June, 1–8 (2017)

  9. Armstrong, C.S.: Discussion of “CEO compensation and corporate risk-taking: evidence from a natural experiment.” J. Account. Econ. 56, 102–111 (2013)

  10. Armstrong, C.S., Kepler, J.D.: Theory, research design assumptions, and causal inferences. J. Account. Econ. 66, 366–373 (2018)

  11. Armstrong, C.S., Blouin, J.L., Larcker, D.F.: The incentives for tax planning. J. Account. Econ. 53, 391–411 (2012)

  12. Arnold, B.F., Ercumen, A., Benjamin-Chung, J., Colford, J.M.: Negative controls to detect selection bias and measurement bias in epidemiologic studies. Epidemiology 27, 637–641 (2016)

  13. Athey, S., Imbens, G.W.: The state of applied econometrics: causality and policy evaluation. J. Econ. Perspect 31, 3–32 (2018)

  14. Baiocchi, M., Small, D.S., Lorch, S., Rosenbaum, P.R.: Building a stronger instrument in an observational study of perinatal care for premature infants. J. Am. Stat. Assoc. 105, 1285–1296 (2010)

  15. Baiocchi, M., Small, D.S., Yang, L., Polsky, D., Groeneveld, P.W.: Near/far matching: a study design approach to instrumental variables. Health Serv. Outcomes Res. Method 12, 237–253 (2012)

  16. Barnard, J., Du, J.T., Hill, J.L., Rubin, D.B.: A broader template for analyzing broken randomized experiments. Sociol. Methods Res. 27, 285–317 (1998)

  17. Barnard, J., Frangakis, C.E., Hill, J.L., Rubin, D.B. : Principal stratification approach to broken randomized experiments: a case study of School Choice vouchers in New York City. J. Am. Stat. Assoc. 98, 299–311 (2003)

  18. Basta, N.E., Halloran, M.E.: Evaluating the effectiveness of vaccines using a regression discontinuity design. Am. J. Epidemiol. 188, 987–990 (2019)

  19. Battistin, E., Rettore, E. : Ineligibles and eligible non-participants as a double comparison group in regression-discontinuity designs. J. Econometrics 142, 715–730 (2008)

  20. Beautrais, A.L., Gibb, S.J., Fergusson, D.M., Horwood, L.J., Larkin, G.L.: Removing bridge barriers stimulates suicides: an unfortunate natural experiment. Austral. New Zeal. J. Psychiatry 43, 495–497 (2009)

  21. Behrman, J.R. , Cheng, Y. , Todd, P.E. : Evaluating preschool programs when length of exposure to the program varies: a nonparametric approach. Rev. Econ. Stat. 86, 108–132 (2004)

  22. Berk, R.A., de Leeuw, J. : An evaluation of California’s inmate classification system using a regression discontinuity design. J. Am. Stat. Assoc. 94, 1045–1052 (1999)

  23. Berk, R.A., Rauma, D. : Capitalizing on nonrandom assignment to treatments: a regression-discontinuity evaluation of a crime-control program. J. Am. Stat. Assoc. 78, 21–27 (1983)

  24. Bernanke, B.S. : The macroeconomics of the Great Depression: a comparative approach. J. Money Cred. Bank 27, 1–28 (1995). Reprinted: Bernanke, B.S. Essays on the Great Depression. Princeton University Press, Princeton (2000)

  25. Bilban, M., Jakopin, C.B. : Incidence of cytogenetic damage in lead-zinc mine workers exposed to radon. Mutagenesis 20, 187–191 (2005)

  26. Black, S. : Do better schools matter? Parental valuation of elementary education. Q. J. Econ. 114, 577–599 (1999)

  27. Bound, J. : The health and earnings of rejected disability insurance applicants. Am. Econ. Rev. 79, 482–503 (1989)

  28. Bound, J., Jaeger, D.A., Baker, R.M.: Problems with instrumental variables estimation when the correlation between the instruments and the endogenous explanatory variable is weak. J. Am. Stat. Assoc. 90, 443–450 (1995)

  29. Brew, B.K., Gong, T., Williams, D.M., Larsson, H., Almqvist, C.: Using fathers as a negative control exposure to test the Developmental Origins of Health and Disease Hypothesis: a case study on maternal distress and offspring asthma using Swedish register data. Scand. J. Public Health 45(Suppl. 17), 36–40 (2017)

  30. Campbell, D.T. : Factors relevant to the validity of experiments in social settings. Psychol. Bull. 54, 297–312 (1957)

  31. Campbell, D.T. : Prospective: artifact and control. In: Rosenthal, R., Rosnow, R. (eds.) Artifact in Behavioral Research, pp. 351–382. Academic, New York (1969)

  32. Campbell, D.T. : Methodology and Epistemology for Social Science: Selected Papers. University of Chicago Press, Chicago (1988)

  33. Card, D. : The causal effect of education. In: Ashenfelter, O., Card, D., (eds.) Handbook of Labor Economics. North Holland, New York (2001)

  34. Choudhri, E.U. , Kochin, L.A.: The exchange rate and the international transmission of business cycle disturbances: some evidence from the Great Depression. J. Money Cred. Bank 12, 565–574 (1980)

  35. Cioran, E.M. : History and Utopia. University of Chicago Press, Chicago (1998)

  36. Cochran, W.G. : The planning of observational studies of human populations (with Discussion). J. R. Stat. Soc. A 128, 234–265 (1965)

  37. Conley, T.G., Hansen, C.B., Rossi, P.E.: Plausibly exogenous. Rev. Econ. Stat. 94, 260–272 (2012)

  38. Cook, T.D. : Waiting for life to arrive: a history of the regression-discontinuity designs in psychology, statistics and economics. J. Econometrics 142, 636–654 (2007)

  39. Davey Smith, G. : Negative control exposures in epidemiologic studies. Epidemiology 23, 350–351 (2012)

  40. Derigs, U. : Solving nonbipartite matching problems by shortest path techniques. Ann. Oper. Res. 13, 225–261 (1988)

  41. Eichengreen, B., Sachs, J. : Exchange rates and economic recovery in the 1930’s. J. Econ. Hist. 45, 925–946 (1985)

  42. Ertefaie, A., Small, D.S., Flory, J.H., Hennessy, S.: A tutorial on the use of instrumental variables in pharmacoepidemiology. Pharmacoepidemiol. Drug Saf. 26, 357–367 (2017)

  43. Ertefaie, A., Small, D.S., Rosenbaum, P.R.: Quantitative evaluation of the trade-off of strengthened instruments and sample size in observational studies. J. Am. Stat. Assoc. 113, 1122–1134 (2018)

  44. Evans, L.: The effectiveness of safety belts in preventing fatalities. Accid. Anal. Prev. 18, 229–241 (1986)

  45. Fenech, M. , Chang, W.P., Kirsch-Volders, M., Holland, N., Bonassi, S., Zeiger, E.: HUMN project: detailed description of the scoring criteria for the cytokinesis-block micronucleus assay using isolated human lymphocyte cultures. Mutat. Res. 534, 65–75 (2003)

  46. Fogarty, C.B.: Studentized sensitivity analysis for the sample average treatment effect in paired observational studies. J. Am. Stat. Assoc. (2019, to appear). https://doi.org/10.1080/01621459.2019.1632072

  47. Fogarty, C.B., Small, D.S.: Sensitivity analysis for multiple comparisons in matched observational studies through quadratically constrained linear programming. J. Am. Stat. Assoc. 111, 1820–1830 (2016)

  48. Frangakis, C.E., Rubin, D.B.: Addressing complications of intention-to-treat analysis in the combined presence of all-or-none treatment noncompliance and subsequent missing outcomes. Biometrika 86, 365–379 (1999)

  49. French, B., Cologne, J., Sakata, R., Utada, M., Preston, D.L.: Selection of reference groups in the Life Span Study of atomic bomb survivors. Eur. J. Epidemiol. 32, 1055–1063 (2017)

  50. Friedman, M. , Schwartz, A.J.: A Monetary History of the United States. Princeton University Press, Princeton (1963)

  51. Frye, T., Yakovlev, A.: Elections and property rights: a natural experiment from Russia. Comp. Pol. Stud. 49, 499–528 (2016)

  52. Gangl, M.: Causal inference in sociological research. Ann. Rev. Sociol. 36, 21–47 (2010)

  53. Goetghebeur, E. , Loeys, T.: Beyond intent to treat. Epidemiol. Rev. 24, 85–90 (2002)

  54. Gormley, T.A., Matsa, D.A., Milbourn, T.: CEO compensation and corporate risk-taking: evidence from a natural experiment. J. Account. Econ. 56, 79–101 (2013)

  55. Gould, E.D., Lavy, V., Paserman, M.D.: Immigrating to opportunity: estimating the effect of school quality using a natural experiment on Ethiopians in Israel. Q. J. Econ. 119, 489–526 (2004)

  56. Gow, I.D., Larcker, D.F., Reiss, P.C.: Causal inference in accounting research. J. Account. Res. 54, 477–523 (2016)

  57. Greevy, R. , Silber, J.H. , Cnaan, A. , Rosenbaum, P.R.: Randomization inference with imperfect compliance in the ACE-inhibitor after anthracycline randomized trial. J. Am. Stat. Assoc. 99, 7–15 (2004)

  58. Guo, Z., Kang, H., Cai, T.T., Small, D.S.: Confidence interval for causal effects with invalid instruments using two-stage hard thresholding with voting. J. R. Stat. Soc. B 80, 793–815 (2018)

  59. Hahn, J. , Todd, P. , Van der Klaauw, W. : Identification and estimation of treatment effects with a regression-discontinuity design. Econometrica 69, 201–209 (2001)

  60. Hamermesh, D.S.: The craft of labormetrics. Ind. Labor Relat. Rev. 53, 363–380 (2000)

  61. Hawkins, N.G., Sanson-Fisher, R.W., Shakeshaft, A., D’Este, C., Green, L.W.: The multiple baseline design for evaluating population based research. Am. J. Prev. Med. 33, 162–168 (2007)

  62. Heckman, J., Navarro-Lozano, S. : Using matching, instrumental variables, and control functions to estimate economic choice models. Rev. Econ. Stat. 86, 30–57 (2004)

  63. Hill, A.B. : The environment and disease: association or causation? Proc. R. Soc. Med. 58, 295–300 (1965)

  64. Ho, D.E. , Imai, K. : Estimating the causal effects of ballot order from a randomized natural experiment: California alphabet lottery, 1978–2002. Public Opin. Q. 72, 216–240 (2008)

  65. Holland, P.W. : Causal Inference, path analysis, and recursive structural equations models. Sociol. Method 18, 449–484 (1988)

  66. Holland, P.W. : Choosing among alternative nonexperimental methods for estimating the impact of social programs: comment. J. Am. Stat. Assoc. 84, 875–877 (1989)

  67. Imbens, G.W. : The role of the propensity score in estimating dose response functions. Biometrika 87, 706–710 (2000)

  68. Imbens, G.W. : Nonparametric estimation of average treatment effects under exogeneity: a review. Rev. Econ. Stat. 86, 4–29 (2004)

  69. Imbens, G.W.: Instrumental variables: an econometrician’s perspective. Stat. Sci. 29, 323–358 (2014)

  70. Imbens, G.W. , Lemieux, T.: Regression discontinuity designs: a guide to practice. J. Econometrics 142, 615–635 (2008)

  71. Imbens, G. , Rosenbaum, P.R.: Robust, accurate confidence intervals with a weak instrument: quarter of birth and education. J. R. Stat. Soc. A 168, 109–126 (2005)

  72. Imbens, G.W., Rubin, D.B.: Causal Inference in Statistics, Social, and Biomedical Sciences. Cambridge University Press, New York (2015)

  73. Imbens, G.W. , Rubin, D.B. , Sacerdote, B.I. : Estimating the effect of unearned income on labor earnings, savings, and consumption: evidence from a survey of lottery players. Am. Econ. Rev. 91, 778–794 (2001)

  74. in ’t Veld, B.A., Launer, L.J., Breteler, M.M.B., Hofman, A., Stricker, B.H.C.: Pharmacologic agents associated with a preventive effect on Alzheimer’s disease. Epidemiol. Rev. 2, 248–268 (2002)

  75. Joffe, M.M. , Colditz, G.A. : Restriction as a method for reducing bias in the estimation of direct effects. Stat. Med. 17, 2233–2249 (1998)

  76. Kang, H.: Matched instrumental variables. Epidemiology 27, 624–632 (2016)

  77. Kang, H., Zhang, A., Cai, T.T., Small, D.S.: Instrumental variables estimation with some invalid instruments and its application to Mendelian randomization. J. Am. Stat. Assoc. 111, 132–144 (2016)

  78. Kang, H., Peck, L., Keele, L.: Inference for instrumental variables: a randomization inference approach. J. R. Stat. Soc. 181, 1231–1254 (2018)

  79. Karmakar, B., Small, D.S., Rosenbaum, P.R.: Using approximation algorithms to build evidence factors and related designs for observational studies. J. Comp. Graph. Stat. 28(3), 698–709 (2019)

  80. Keele, L.: The statistics of causal inference: a view from political methodology. Polit. Anal. 23, 313–335 (2015)

  81. Keele, L., Morgan, J.W.: How strong is strong enough? Strengthening instruments through matching and weak instrument tests. Ann. Appl. Stat. 10, 1086–1106 (2016)

  82. Keele, L., Titiunik, R., Zubizarreta, J.R.: Enhancing a geographic regression discontinuity design through matching to estimate the effect of ballot initiatives on voter turnout. J. R. Stat. Soc. A 178, 223–239 (2015)

  83. Keele, L., Zhao, Q., Kelz, R.R., Small, D.S.: Falsification tests for instrumental variable designs with an application to the tendency to operate. Med. Care 57, 167–171 (2019)

  84. Keele, L., Harris, S., Grieve, R.: Does transfer to intensive care units reduce mortality? A comparison of an instrumental variables design to risk adjustment. Med. Care 57, e73–e79 (2019)

  85. Khuder, S.A., Milz, S., Jordan, T., Price, J., Silvestri, K., Butler, P.: The impact of a smoking ban on hospital admissions for coronary heart disease. Prev. Med. 45, 3–8 (2007)

  86. LaLumia, S. : The effects of joint taxation of married couples on labor supply and non-wage income. J. Public Econ. 92, 1698–1719 (2008)

  87. Lambe, M., Cummings, P. : The shift to and from daylight savings time and motor vehicle crashes. Accid. Anal. Prev. 32, 609–611 (2002)

  88. Lawlor, D.A., Tilling, K., Davey Smith, G.: Triangulation in aetiological epidemiology. Int. J. Epidemiol. 45, 1866–1886 (2016)

  89. Li, F. , Frangakis, C.E.: Polydesigns and causal inference. Biometrics 62, 343–351 (2006)

  90. Liew, Z., Kioumourtzoglou, M.A., Roberts, A.L., O’Reilly, E.J., Ascherio, A., Weisskopf, M.G.: Use of negative control exposure analysis to evaluate confounding: an example of acetaminophen exposure and attention-deficit/hyperactivity disorder in Nurses’ Health Study II. Am. J. Epidemiol. 188, 768–775 (2019)

  91. Lipsitch, M., Tchetgen Tchetgen, E.J., Cohen, T.: Negative controls: a tool for detecting confounding and bias in observational studies. Epidemiology 21, 383–388 (2010)

  92. Lorch, S.A., Baiocchi, M., Ahlberg, C.E., Small, D.S.: The differential impact of delivery hospital on the outcomes of premature infants. Pediatrics 130, 270–278 (2012)

  93. Lu, B. , Rosenbaum, P.R.: Optimal matching with two control groups. J. Comput. Graph Stat. 13, 422–434 (2004)

  94. Lu, X., White, H.: Robustness checks and robustness tests in applied economics. J. Econometrics 178, 194–206 (2014)

  95. Lu, B., Greevy, R., Xu, X., Beck, C.: Optimal nonbipartite matching and its statistical applications. Am. Stat. 65, 21–30 (2011)

  96. Ludwig, J. , Miller, D.L. : Does Head Start improve children’s life chances? Evidence from a regression discontinuity design. Q. J. Econ. 122, 159–208 (2007)

  97. Manski, C. : Nonparametric bounds on treatment effects. Am. Econ. Rev. 80, 319–323 (1990)

  98. Marquart, J.W. , Sorensen, J.R.: Institutional and postrelease behavior of Furman-commuted inmates in Texas. Criminology 26, 677–693 (1988)

  99. McClellan, M., McNeil, B.J., Newhouse, J.P.: Does more intensive treatment of acute myocardial infarction in the elderly reduce mortality? J. Am. Med. Assoc. 272, 859–866 (1994)

  100. McKillip, J. : Research without control groups: a control construct design. In: Bryant, F.B., et al. (eds.) Methodological Issues in Applied Social Psychology, pp. 159–175. Plenum Press, New York (1992)

  101. Mealli, F., Rampichini, C.: Evaluating the effects of university grants by using regression discontinuity designs. J. R. Stat. Soc. A 175, 775–798 (2012)

  102. Meyer, B.D.: Natural and quasi-experiments in economics. J. Bus. Econ. Stat. 13, 151–161 (1995)

  103. Mill, J.S.: On Liberty. Barnes and Noble, New York (1859, reprinted 2004)

  104. Milyo, J. , Waldfogel, J.: The effect of price advertising on prices: evidence in the wake of 44 Liquormart. Am. Econ. Rev. 89, 1081–1096 (1999)

  105. Neuman, M.D., Rosenbaum, P.R., Ludwig, J.M., Zubizarreta, J.R., Silber, J.H.: Anesthesia technique, mortality and length of stay after hip fracture surgery. J. Am. Med. Assoc. 311, 2508–2517 (2014)

  106. Newhouse, J.P., McClellan, M.: Econometrics in outcomes research: the use of instrumental variables. Ann. Rev. Public Health 19, 17–34 (1998)

  107. NIDA: Washington DC Metropolitan Area Drug Study (DC*MADS), 1992. U.S. National Institute on Drug Abuse: ICPSR Study No. 2347 (1999). http://www.icpsr.umich.edu

  108. Oreopoulos, P. : Long-run consequences of living in a poor neighborhood. Q. J. Econ. 118, 1533–1575 (2003)

  109. Origo, F.: Flexible pay, firm performance and the role of unions: new evidence from Italy. Labour Econ. 16, 64–78 (2009)

  110. Peto, R., Pike, M. , Armitage, P. , Breslow, N. , Cox, D. , Howard, S. , Mantel, N. , McPherson, K. , Peto, J. , Smith, P. : Design and analysis of randomised clinical trials requiring prolonged observation of each patient, I. Br. J. Cancer 34, 585–612 (1976)

  111. Pimentel, S.D., Small, D.S., Rosenbaum, P.R.: Constructed second control groups and attenuation of unmeasured biases. J. Am. Stat. Assoc. 111, 1157–1167 (2016)

  112. Pinto, D., Ceballos, J.M., García, G., Guzmán, P., Del Razo, L.M., Gómez, E.V.H., García, A., Gonsebatt, M.E. : Increased cytogenetic damage in outdoor painters. Mutat. Res. 467, 105–111 (2000)

  113. Reynolds, K.D. , West, S.G. : A multiplist strategy for strengthening nonequivalent control group designs. Eval. Rev. 11, 691–714 (1987)

  114. Rosenbaum, P.R.: From association to causation in observational studies. J. Am. Stat. Assoc. 79, 41–48 (1984)

  115. Rosenbaum, P.R.: The consequences of adjustment for a concomitant variable that has been affected by the treatment. J. R. Stat. Soc. A 147, 656–666 (1984)

  116. Rosenbaum, P.R.: Sensitivity analysis for certain permutation inferences in matched observational studies. Biometrika 74, 13–26 (1987)

  117. Rosenbaum, P.R.: The role of a second control group in an observational study (with Discussion). Stat. Sci. 2, 292–316 (1987)

  118. Rosenbaum, P.R.: The role of known effects in observational studies. Biometrics 45, 557–569 (1989)

  119. Rosenbaum, P.R.: On permutation tests for hidden biases in observational studies. Ann. Stat. 17, 643–653 (1989)

  120. Rosenbaum, P.R.: Some poset statistics. Ann. Stat. 19, 1091–1097 (1991)

  121. Rosenbaum, P.R.: Detecting bias with confidence in observational studies. Biometrika 79, 367–374 (1992)

  122. Rosenbaum, P.R.: Hodges-Lehmann point estimates in observational studies. J. Am. Stat. Assoc. 88, 1250–1253 (1993)

  123. Rosenbaum, P.R.: Comment on a paper by Angrist, Imbens, and Rubin. J. Am. Stat. Assoc. 91, 465–468 (1996)

  124. Rosenbaum, P.R.: Signed rank statistics for coherent predictions. Biometrics 53, 556–566 (1997)

  125. Rosenbaum, P.R.: Choice as an alternative to control in observational studies (with Discussion). Stat. Sci. 14, 259–304 (1999)

  126. Rosenbaum, P.R.: Using quantile averages in matched observational studies. Appl. Stat. 48, 63–78 (1999)

  127. Rosenbaum, P.R.: Replicating effects and biases. Am. Stat. 55, 223–227 (2001)

  128. Rosenbaum, P.R.: Stability in the absence of treatment. J. Am. Stat. Assoc. 96, 210–219 (2001)

  129. Rosenbaum, P.R.: Observational Studies, 2nd edn. Springer, New York (2002)

  130. Rosenbaum, P.R.: Covariance adjustment in randomized experiments and observational studies (with Discussion). Stat. Sci. 17, 286–327 (2002)

  131. Rosenbaum, P.R.: Does a dose-response relationship reduce sensitivity to hidden bias? Biostatistics 4, 1–10 (2003)

  132. Rosenbaum, P.R.: Design sensitivity in observational studies. Biometrika 91, 153–164 (2004)

  133. Rosenbaum, P.R.: Heterogeneity and causality: unit heterogeneity and design sensitivity in observational studies. Am. Stat. 59, 147–152 (2005)

  134. Rosenbaum, P.R.: Exact, nonparametric inference when doses are measured with random errors. J. Am. Stat. Assoc. 100, 511–518 (2005)

  135. Rosenbaum, P.R.: Differential effects and generic biases in observational studies. Biometrika 93, 573–586 (2006)

  136. Rosenbaum, P.R.: What aspects of the design of an observational study affect its sensitivity to bias from covariates that were not observed? Festschrift for Paul W. Holland. ETS, Princeton (2009)

  137. Rosenbaum, P.R.: Testing one hypothesis twice in observational studies. Biometrika 99, 763–774 (2012)

  138. Rosenbaum, P.R.: Nonreactive and purely reactive doses in observational studies. In: Berzuini, C., Dawid, A.P., Bernardinelli, L. (eds.) Causality: Statistical Perspectives and Applications, pp. 273–289. Wiley, New York (2012)

  139. Rosenbaum, P.R.: Using differential comparisons in observational studies. Chance 26(3), 18–23 (2013)

  140. Rosenbaum, P.R.: Bahadur efficiency of sensitivity analyses in observational studies. J. Am. Stat. Assoc. 110, 205–217 (2015)

  141. Rosenbaum, P.R.: How to see more in observational studies: some new quasi-experimental devices. Ann. Rev. Stat. Appl. 2, 21–48 (2015)

  142. Rosenbaum, P.R.: Observation and Experiment: An Introduction to Causal Inference. Harvard University Press, Cambridge (2017)

  143. Rosenbaum, P.R., Silber, J.H.: Using the exterior match to compare two entwined matched control groups. Am. Stat. 67, 67–75 (2013)

  144. Rosenzweig, M.R. , Wolpin, K.I.: Natural ‘natural experiments’ in economics. J. Econ. Lit. 38, 827–874 (2000)

  145. Rothman, K.J. : Modern Epidemiology. Little, Brown, Boston (1986)

  146. Roychoudhuri, R., Robinson, D., Putcha, V., Cuzick, J., Darby, S., Møller, H.: Increased cardiovascular mortality more than fifteen years after radiotherapy for breast cancer: a population-based study. BMC Cancer 7, 9 (2007)

  147. Rutter, M.: Proceeding from observed correlation to causal inference: the use of natural experiments. Perspect. Psychol. Sci. 2, 377–395 (2007)

  148. Rutter, M.: Identifying the Environmental Causes of Disease: How Do We Decide What to Believe and When to Take Action? Academy of Medical Sciences, London (2007)

  149. Sekhon, J.S.: Opiates for the matches: matching methods for causal inference. Ann. Rev. Pol. Sci. 12, 487–508 (2009)

  150. Sekhon, J.S., Titiunik, R.: When natural experiments are neither natural nor experiments. Am. Pol. Sci. Rev. 106, 35–57 (2012)

  151. Sennett, R. : The Uses of Disorder. Yale University Press, New Haven (1971, 2008)

  152. Shadish, W.R. , Cook, T.D. : The renaissance of field experimentation in evaluating interventions. Annu. Rev. Psychol. 60, 607–629 (2009)

  153. Silber, J.H., Cnaan, A. , Clark, B.J. , Paridon, S.M., Chin, A.J., et al.: Enalapril to prevent cardiac function decline in long-term survivors of pediatric cancer exposed to anthracyclines. J. Clin. Oncol. 5, 820–828 (2004)

  154. Small, D.S. : Sensitivity analysis for instrumental variables regression with overidentifying restrictions. J. Am. Stat. Assoc. 102, 1049–1058 (2007)

  155. Small, D.S. , Rosenbaum, P.R.: War and wages: the strength of instrumental variables and their sensitivity to unobserved biases. J. Am. Stat. Assoc. 103, 924–933 (2008)

  156. Small, D.S. , Rosenbaum, P.R.: Error-free milestones in error-prone measurements. Ann. Appl. Stat. 3, 881–901 (2009)

  157. Sobel, M.E. : An introduction to causal inference. Sociol. Methods Res. 24, 353–379 (1996)

  158. Sommer, A. , Zeger, S.L. : On estimating efficacy from clinical trials. Stat. Med. 10, 45–52 (1991)

  159. Stuart, E.A. , Rubin, D.B. : Matching with multiple control groups with adjustment for group differences. J. Educ. Behav. Stat. 33, 279–306 (2008)

  160. Sullivan, J.M., Flannagan, M.J. : The role of ambient light level in fatal crashes: inferences from daylight saving time transitions. Accid. Anal. Prev. 34, 487–498 (2002)

  161. Summers, L.H.: The scientific illusion in empirical macroeconomics (with Discussion). Scand. J. Econ. 93, 129–148 (1991)

  162. Tan, Z. : Regression and weighting methods for causal inference using instrumental variables. J. Am. Stat. Assoc. 101, 1607–1618 (2006)

  163. Tchetgen Tchetgen, E.J.: The control outcome calibration approach for causal inference with unobserved confounding. Am. J. Epidemiol. 179, 633–640 (2013)

  164. Thistlethwaite, D.L., Campbell, D.T. : Regression-discontinuity analysis. J. Educ. Psychol. 51, 309–317 (1960)

  165. Trochim, W.M.K. : Pattern matching, validity and conceptualization in program evaluation. Eval. Rev. 9, 575–604 (1985)

  166. van Eeden, C. : An analogue, for signed rank statistics, of Jureckova’s asymptotic linearity theorem for rank statistics. Ann. Math. Stat. 43, 791–802 (1972)

  167. Vandenbroucke, J.P. : When are observational studies as credible as randomized trials? Lancet 363, 1728–1731 (2004)

  168. Varian, H.R.: Causal inference in economics and marketing. Proc. Natl. Acad. Sci. 113, 7310–7315 (2016)

  169. Wang, X., Jiang, Y., Zhang, N.R., Small, D.S.: Sensitivity analysis and power for instrumental variable studies. Biometrics 74, 1150–1160 (2018)

  170. Weed, D.L., Hursting, S.D.: Biologic plausibility in causal inference: current method and practice. Am. J. Epidemiol. 147, 415–425 (1998)

  171. Weiss, N.: Inferring causal relationships: elaboration of the criterion of dose-response. Am. J. Epidemiol. 113, 487–490 (1981)

  172. Weiss, N.: Can the ‘specificity’ of an association be rehabilitated as a basis for supporting a causal hypothesis? Epidemiology 13, 6–8 (2002)

  173. West, S.G. , Duan, N. , Pequegnat, W. , Gaist, P. , Des Jarlais, D.C. , Holtgrave, D. , Szapocznik, J. , Fishbein, M. , Rapkin, B. , Clatts, M. , Mullen, P.D. : Alternatives to the randomized controlled trial. Am. J. Public Health 98, 1359–1366 (2008)

  174. Wintemute, G.J. , Wright, M.A., Drake, C.M. , Beaumont, J.J.: Subsequent criminal activity among violent misdemeanants who seek to purchase handguns: risk factors and effectiveness of denying handgun purchase. J. Am. Med. Assoc. 285, 1019–1026 (2001)

  175. Wolpin, K.I.: The Limits of Inference Without Theory. MIT Press, Cambridge (2013)

  176. Wright, M.A., Wintemute, G.J. , Rivara, F.P.: Effectiveness of denial of handgun purchase to persons believed to be at high risk for firearm violence. Am. J. Public Health 89, 88–90 (1999)

  177. Yang, F., Zubizarreta, J.R., Small, D.S., Lorch, S., Rosenbaum, P.R.: Dissonant conclusions when testing the validity of an instrumental variable. Am. Stat. 68, 253–263 (2014)

  178. Yoon, F.B., Huskamp, H.A., Busch, A.B., Normand, S.L.T.: Using multiple control groups and matching to address unobserved biases in comparative effectiveness research: an observational study of the effectiveness of mental health parity. Stat. Biosci. 3, 63–78 (2011)

  179. Zubizarreta, J.R., Small, D.S., Goyal, N.K., Lorch, S., Rosenbaum, P.R.: Stronger instruments via integer programming in an observational study of late preterm birth outcomes. Ann. App. Stat. 7, 25–50 (2013)

  180. Zubizarreta, J.R., Cerda, M., Rosenbaum, P.R.: Effect of the 2010 Chilean earthquake on posttraumatic stress: reducing sensitivity to unmeasured bias through study design. Epidemiology 7, 79–87 (2013)

  181. Zubizarreta, J.R., Small, D.S., Rosenbaum, P.R.: Isolation in the construction of natural experiments. Ann. Appl. Stat. 8, 2096–2121 (2014)

  182. Zubizarreta, J.R., Small, D.S., Rosenbaum, P.R.: A simple example of isolation in building a natural experiment. Chance 31, 16–23 (2018)

Cite this chapter

Rosenbaum, P.R. (2020). Opportunities, Devices, and Instruments. In: Design of Observational Studies. Springer Series in Statistics. Springer, Cham. https://doi.org/10.1007/978-3-030-46405-9_5
