We appreciate the interest Drs. Ojha and Lu express in our recent study. They are concerned that the results of our study may have been affected by immortal time bias or regression to the mean. We agree that both immortal time bias and regression to the mean are important concerns for observational studies in general. However, we do not agree that these issues affected our study results.

We believe the concern about immortal time bias may stem from two different meanings of the term ‘follow-up’. In our paper, we use ‘follow-up’ to mean being under observation in the cohort. Accounting for the reasons participants may be lost to follow-up, in this sense, is important for avoiding selection bias. However, we believe Drs. Ojha and Lu are using ‘follow-up’ to mean the ‘outcome assessment window’. It is this second meaning, the time over which the outcome is assessed, that is relevant for discussions of immortal time bias: if the start of the outcome assessment window precedes ‘treatment assignment’, immortal time bias can arise. For studies that use event risk, event rate, or time-to-event outcomes, the outcome assessment window may coincide with the time under observation in the cohort, and so distinguishing between the two meanings of ‘follow-up’ may not matter. In our study, however, the distinction is important because the assessment windows for our outcomes (responses on health-related quality of life and mental health scales in 2017) are not meant to coincide with the total time under observation in the study.

We agree that, were the start of the outcome assessment window to precede the assessment of ‘treatment assignment’, that would be problematic.

However, this is not what happened in our study. Treatment assignment and the outcome assessment window were synchronized: both made use of data collected in the same round of MEPS interviews in 2017, and both covered a similar ‘lookback’ period. Though follow-up, in the sense of observation within the cohort, began in 2016, ‘time zero’ for outcome assessment did not occur prior to assessment of treatment assignment; both occurred at the same time in 2017. This is what prevents immortal time bias in our study. Further, we conceptualized eligibility as first occurring in 2016, with participants remaining eligible through the time of ‘treatment assignment’ and the start of the outcome assessment window, ensuring that all three (eligibility, ‘treatment assignment’, and the start of the outcome assessment window) align. This arrangement occurs fairly commonly in actual clinical trials and does not produce bias. For example, in randomized clinical trials with ‘run-in’ periods, individuals are determined to be eligible, enrolled, and followed as study participants, and only later undergo treatment assignment and the start of outcome assessment. Overall, the arrangement of eligibility, treatment assignment, and outcome assessment used in our study does not correspond to any of the four ‘emulation failures’ described by Hernán et al. (reference #3 in the letter of Drs. Ojha and Lu), and does not produce immortal time bias.

The outcome assessment window we used in this study is reasonable for the study outcomes: responses on health-related quality of life and mental health scales. Things would be different, of course, if our outcomes had been different, for example, a time-to-event outcome, a risk outcome (i.e., the probability of an event occurring in a given period of time), or a rate outcome (i.e., the number of events occurring in a given period of time). Such outcomes would require a different outcome assessment window than the one we used, and would be settings where immortal time bias is more likely to occur. Indeed, the references cited in the letter all give these kinds of outcomes as examples, not outcomes of the kind used in our study.

Drs. Ojha and Lu also raise a question about whether our results may be affected by regression to the mean. We do not think they are. Regression to the mean typically occurs when study participants are selected on the basis of an extreme value of the outcome (e.g., a high number of emergency department visits in the preceding year, for a study with emergency department visits as the outcome). In our study, participants were not selected on the basis of the outcome at all. Further, we used a comparison group selected in the same way as the group of interest (i.e., we effectively compare individuals who experienced food insecurity in 2016 but not in 2017 to individuals who experienced food insecurity in 2016 and again in 2017). Both of these choices help avoid regression to the mean.

Overall, we agree that immortal time bias and regression to the mean are important concerns that investigators should take pains to avoid. Because this is an observational study, we think our findings should be interpreted cautiously, for reasons we detail in the paper. But for this study, we do not believe either immortal time bias or regression to the mean is a threat to validity.