INTRODUCTION

Provider burnout has garnered increasing attention, with exponential growth in published studies. The triple aim of enhancing patient experience, improving population health, and reducing costs has expanded to include healthcare workforce well-being, known as the quadruple aim.1–3 Health system responses to the COVID-19 pandemic exacerbated provider burnout.4, 5

Many studies examine predictors of burnout; few assess associated patient outcomes. Outcomes can include objective measures (e.g., the proportion of diagnosed patients who receive treatment) and subjective assessments (e.g., provider perceptions of patient well-being).2, 6 Studies of subjective measures find that provider burnout leads to worse patient satisfaction,7, 8 worse self-perceived work performance,9 and provider-reported suboptimal patient care.10 A 2022 study found that, counterintuitively, physicians reporting burnout may achieve better outcomes for their patients, noting a complex relationship between burnout and outcomes.11

Published data on objective outcomes appear limited. One study found burnout negatively affected patient-reported experiences of patient-provider communication, but not access or overall provider rating.12 A review found a moderate association between burnout and safety-related quality of care.13 Others found significant bivariate relationships between burnout and outcomes.14 A meta-analysis found high heterogeneity and low-to-moderate study quality in studies of the impact of burnout on patient outcomes.15 Additional limitations include a lack of prospective studies, reliance on cross-sectional analyses, small samples, and single outcomes (e.g., patient safety, post-discharge recovery time).16, 17

Few high-quality studies document how provider burnout affects patient outcomes, particularly among behavioral health providers (BHPs). Within the Veterans Health Administration (VHA), BHPs have the highest level of burnout after primary care physicians.18 One study found that therapist burnout negatively impacts patient depression and anxiety outcomes.19

To address this information gap, we examined the relationship between BHP burnout and objective and subjective measures of patient mental health outcomes collected quarterly within the VHA Strategic Analytics for Improvement and Learning Value, Mental Health Domain (MH-SAIL). We hypothesized that BHP burnout negatively influences these outcomes.

METHODS

Study Design

This study comprises one segment of a project assessing predictors and consequences of VHA BHP burnout.18, 20 We used facility-level quality metrics as outcomes and facility-level burnout as the primary predictor. A “station” (STA3N) within VHA represents a parent facility and may have several subsidiary medical centers or community-based outpatient clinics assigned to it. Because participants provided anonymous responses, we could not link data by respondent within any data source or between surveys. The VA Ann Arbor Healthcare System Institutional Review Board approved this study.

Data Sources

We used 2014–2019 data from the annual All Employee Survey (AES) and the Mental Health Provider Survey (MHPS); 2015–2019 facility-level Mental Health Onboard Clinical (MHOC) staffing and productivity data; and 2015–2019 MH-SAIL data. After merging sources, the study included 127 of 138 (92%) VHA parent facilities with available data.

AES

The National Center for Organizational Development (NCOD) administers the AES to all VHA employees annually to assess workplace perceptions and satisfaction. Further information on the creation of the AES, its measures, and how it informs VHA appears elsewhere.21 Since 2001, the AES has reflected best practices among large-organization surveys.22 All AES responses remain anonymous. We included BHPs: psychologists, psychiatrists, and social workers. During the study period, AES response rates were 54% among psychiatrists, 66% among psychologists, and 67% among social workers.20

MHPS

The Office of Mental Health and Suicide Prevention (OMHSP) invites all VHA BHPs to complete the MHPS annually to assess perceptions of access to and quality of mental health care and job satisfaction.23 Analyses have found MHPS data reliable, valid, and consistent.24 The MHPS response rate during the study period exceeded 50%.20

MHOC

OMHSP developed a staffing model that estimates full-time equivalent (FTE) mental health staff per 1000 Veterans treated in outpatient mental health settings, a population-based measure (staffing ratio).25 MHOC also includes a measure of provider productivity, calculated as the sum of work Relative Value Units (wRVUs) divided by time spent providing direct clinical care in outpatient mental health settings (productivity).26
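As a schematic restatement of these two metrics (notation ours, not the official MHOC formulas):

```latex
\[
\text{staffing ratio} = \frac{\text{outpatient MH FTE}}{\text{Veterans treated in outpatient MH settings}} \times 1000,
\qquad
\text{productivity} = \frac{\sum \text{wRVU}}{\text{time in direct outpatient MH clinical care}}
\]
```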

CDW

We used the VA Corporate Data Warehouse (CDW) to create a facility indicator of rural/urban location and a three-level facility complexity measure. Complexity levels include high (high volume, high-risk patients, most complex clinical programs, large research and teaching programs), medium (medium volume, low-risk patients, few complex clinical programs, small or no research and teaching programs), and low (low volume, low-risk patients, few or no complex clinical programs, small or no research and teaching programs).

MH-SAIL

In 2010, VHA implemented the SAIL monitoring system to provide VHA management with high-level indicators of health care quality.27 MH-SAIL incorporates a composite of three component measures, each of which is itself a composite of constituent measures (see Appendix Table 1). The three components are population coverage, continuity of care, and experience of care. Experience of care includes four provider subcomponents (collaborative MH care; job satisfaction; quality of MH care; timely access to MH care) and two patient subcomponents (MH appointment access; patient-centered MH care). VHA developed the components tailored to its intended coverage, available data, and candidate measures identified during selection and refinement. Each component represents measures with moderate to high internal consistency.24

Study Measures

Dependent Variables

We used four MH-SAIL metrics as outcomes.

Population Coverage

An objective measure representing access to care, which combines 16 individual metrics (constituent items) with denominators representing the number of Veterans with specific diagnoses and numerators representing the number receiving targeted services, treatments, and/or visits.

Continuity of Care

An objective measure combining 11 individual metrics with denominators representing the number of Veterans with specific diagnoses and treatments and numerators representing continuity of care, such as the number of follow-up visits within a specified period or the amount of continuous medication coverage.

Experience of Care

A subjective measure that includes both provider and patient perspectives, combining 32 individual survey items: provider responses assessing collaborative MH care (6 items), job satisfaction (2 items), quality of MH care (5 items), and timely access to MH care (6 items), and Veteran responses assessing MH appointment access (5 items) and patient-centered MH care (8 items).

We used the three domain scores (population coverage, continuity of care, experience of care) generated each quarter by the VHA developers of the tracking system. Within the experience of care domain, we also used subdomain scores of provider satisfaction and patient satisfaction. Each domain score is a weighted average of standardized constituent items, where each item’s score represents the quarterly change from the facility’s score in the last quarter of the prior fiscal year, divided by the standard deviation of the prior year last quarter facility scores, and thus has a mean of 0 and standard deviation of 1.24 This standardization converts constituent items with different denominators and statistical distributions into like units to generate each domain score. The domain scores indicate the overall direction of within-facility change in performance for the specific domain.24
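As a schematic restatement of this standardization (notation ours; the weights and constituent items are those defined by the MH-SAIL developers), for item i at facility f in quarter q:

```latex
\[
z_{i,f,q} = \frac{x_{i,f,q} - x_{i,f,\,Q4\ \text{prior FY}}}{\mathrm{SD}\!\left(x_{i,\,\cdot,\,Q4\ \text{prior FY}}\right)},
\qquad
\text{Domain}_{f,q} = \sum_{i} w_i \, z_{i,f,q}
\]
```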

Overall Mental Health

An overall measure calculated as the equally weighted average of the two objective domain scores and the one subjective domain score.24
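Under this equal weighting, the overall score reduces to a simple mean of the three domain scores:

```latex
\[
\text{OverallMH}_{f,q} = \tfrac{1}{3}\left(\text{PopulationCoverage}_{f,q} + \text{ContinuityOfCare}_{f,q} + \text{ExperienceOfCare}_{f,q}\right)
\]
```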

Key Independent Variable: Provider Burnout

For the AES and MHPS, we defined employee burnout as a dichotomous variable using validated approaches.20, 28 We obtained the facility burnout percentage for each survey by averaging the dichotomous burnout responses across facility survey respondents. We analyzed burnout data from the AES and MHPS separately because we could not identify respondents in each survey or link participants who completed the two sets of measures.

AES

We classified whether respondents reported burnout according to methods used by other VHA researchers.28 The approach used two burnout questions: emotional exhaustion (“I feel burned out from my work”) and depersonalization (“I worry that this job is hardening me emotionally”). Each question had a 7-point response scale (1 = never; 2 = a few times a year or less; 3 = once a month or less; 4 = a few times a month; 5 = once a week; 6 = a few times a week; 7 = every day). We generated a dichotomous variable such that if the respondent answered either question with 5 or higher (once a week or more frequently), we classified the response as endorsing burnout; otherwise, we classified the respondent as not endorsing burnout, as in our prior study.28
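A minimal sketch of this classification, assuming hypothetical column names for the two AES items on their 1–7 scales:

```python
import pandas as pd

def classify_aes_burnout(df: pd.DataFrame) -> pd.Series:
    """Return 1 if either AES burnout item is rated 5 ("once a week") or higher, else 0."""
    exhaustion = df["aes_emotional_exhaustion"]        # "I feel burned out from my work" (1-7)
    depersonalization = df["aes_depersonalization"]    # "I worry that this job is hardening me emotionally" (1-7)
    return ((exhaustion >= 5) | (depersonalization >= 5)).astype(int)

# Facility burnout percentage = mean of the indicator among that facility's respondents, e.g.:
# facility_burnout = df.assign(burnout=classify_aes_burnout(df)).groupby("facility")["burnout"].mean() * 100
```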

MHPS

We generated a dichotomous variable to classify respondent burnout using the sole burnout question: “Overall, based on your definition of burnout, how would you rate your level of burnout?” The response options from 1 to 5 appeared as follows: 1 = I enjoy my work. I have no symptoms of burnout; 2 = Occasionally I am under stress, and I don’t always have as much energy as I once did, but I don’t feel burned out; 3 = I am definitely burning out and have one or more symptoms of burnout, such as physical and emotional exhaustion; 4 = The symptoms of burnout that I’m experiencing won’t go away. I think about frustration at work a lot; 5 = I feel completely burned out and often wonder if I can go on. I am at the point where I may need some changes or may need to seek some sort of help. We classified responses of ≥ 3 as endorsing burnout. Our prior work indicated that the facility-level MHPS burnout rate using ≥ 3 as the cutoff showed the highest correlation with the facility-level AES burnout rate across yearly data from 2015 to 2018.20
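A minimal sketch of the MHPS cutoff and facility-level aggregation, assuming hypothetical column names for the single 1–5 burnout item and the facility identifier:

```python
import pandas as pd

def facility_mhps_burnout(df: pd.DataFrame) -> pd.Series:
    """Percentage of responding BHPs at each facility rating the MHPS burnout item >= 3."""
    burnout = (df["mhps_burnout_item"] >= 3).astype(int)
    return burnout.groupby(df["facility"]).mean() * 100
```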

MHOC

We used two facility-level variables (staffing ratio and productivity) as covariates that may affect the relationship between burnout and patient outcomes. Details outlining the purpose, origins, and definitions of these metrics appear elsewhere.25, 26, 29

Statistical Analysis

We summarized facility-level characteristics (annually), burnout (annually), and MH-SAIL domain scores (annually, by averaging the four quarterly scores). Because annual burnout percentages summarize data only for BHPs who responded to the burnout items and exclude non-responders, we summarized the yearly burnout percentages as (1) the crude average of the facility burnout percentages among BHPs, (2) the average weighted by the number of facility survey responders, and (3) the average weighted by the inverse of the facility response rate for the burnout items, using this final approach in our adjusted models.
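A minimal sketch of these three summaries, assuming a hypothetical facility-year table with columns burnout_pct, n_responders, and response_rate:

```python
import pandas as pd

def summarize_burnout(fac: pd.DataFrame) -> dict:
    """Crude, responder-weighted, and inverse-response-rate-weighted averages of
    facility burnout percentages for a single year."""
    crude = fac["burnout_pct"].mean()
    w_n = fac["n_responders"]
    weighted_by_n = (fac["burnout_pct"] * w_n).sum() / w_n.sum()
    w_inv = 1.0 / fac["response_rate"]
    weighted_by_inv_rate = (fac["burnout_pct"] * w_inv).sum() / w_inv.sum()
    return {
        "crude": crude,
        "weighted_by_responders": weighted_by_n,
        "weighted_by_inverse_response_rate": weighted_by_inv_rate,
    }
```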

We assessed relationships between burnout and MH-SAIL outcomes using multiple regression analysis with facility-level prior year burnout percentages among BHPs as predictors, weighted by the number of facility survey responders. We examined the impact of prior year burnout on subsequent year MH-SAIL outcomes to disentangle the temporal ordering of provider burnout and outcomes. We repeated analyses using burnout percentages based on the yearly AES and on the MHPS separately to assess consistency. For each year, we estimated raw and covariate-adjusted facility-level burnout effects on each of the four MH-SAIL composite outcomes. For meaningful interpretation of the burnout regression coefficient, we divided the facility burnout percentage by 5, so a one-unit increment corresponded to a 5% increment in burnout. In adjusted models, we included facility complexity, rurality, staffing ratio, and productivity as covariates. Finally, we pooled data across years and obtained a summary burnout effect on each MH-SAIL outcome (the four composite measures and the six experience of care subcomponents) using generalized linear models with generalized estimating equations (GEE) to account for repeated data over years within each facility, adjusting for covariates and year.
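A minimal sketch of the pooled GEE specification, using statsmodels and hypothetical variable names; the published analysis may differ in software, weighting, and covariate coding:

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

def fit_pooled_gee(df):
    """df: one row per facility-year with prior-year burnout (%), outcome, and covariates."""
    df = df.assign(burnout5=df["prior_year_burnout_pct"] / 5)  # one unit = 5 percentage points
    model = smf.gee(
        "experience_of_care ~ burnout5 + C(complexity) + rural + staffing_ratio"
        " + productivity + C(year)",
        groups="facility",                        # repeated yearly observations per facility
        data=df,
        cov_struct=sm.cov_struct.Exchangeable(),  # working correlation for within-facility repeats
        family=sm.families.Gaussian(),
    )
    return model.fit()
```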

RESULTS

Table 1 summarizes facility productivity and staffing ratio, burnout levels among AES and MHPS respondents, and MH-SAIL scores by year. Facilities were mostly urban (88%); 66% had high complexity, 13% medium complexity, and 20% low complexity. Annual burnout levels among AES respondents fluctuated between 32% (2018) and 38% (2017); weighted yearly burnout levels did not differ notably from the unweighted levels. Yearly burnout levels among MHPS responders also fluctuated across years. Burnout levels between the AES and MHPS differed somewhat within the same year, with burnout levels reported in the MHPS higher by over 4% in 2014, 2018, and 2019.

Table 1 Yearly Facility Characteristics, Burnout Proportions of Behavioral Health Providers (BHPs), and MH-SAIL Composite Scores; Summary Statistics Calculated as Yearly Averages of the Facility Values (N = 127 Each Year)

MH-SAIL scores of 0 reflect no change from the prior year, positive values represent improvement, and negative values represent worsening. Facility scores showed wider ranges for the subjective experiences of care than for the objective scores, suggesting that facility-level experiences of care fluctuated more from quarter to quarter than the objective measures.

Table 2 presents regression coefficients for burnout predicting each MH-SAIL score and the overall mental health score by year, adjusting for facility characteristics (unadjusted analyses appear in Appendix Table 2). For both the AES and MHPS, facility burnout had a significant negative relationship with facility experiences of care, and inconsistent, nonsignificant relationships across years with continuity of care and population coverage. During 2015–2019, coefficients of AES burnout predicting facility experiences of care ranged from −0.10 in 2018 to −0.18 in 2019, and coefficients of MHPS burnout ranged from −0.11 in 2015 to −0.20 in 2017; negative coefficients indicated worsening facility experiences of care relative to the prior year with higher facility burnout among BHPs. For example, the AES burnout coefficient of −0.10 in 2018 indicated that facilities with 5% higher burnout among BHPs in the prior year had an estimated 0.10 standard deviation worse facility experience of care relative to that facility’s prior year last quarter. Burnout coefficients also remained significantly negative across all study years for the overall mental health score, which represents an average of all three domains.

Table 2 Multivariable1 Models Estimating the Impacts of Prior Year AES and MHPS Burnout on MH-SAIL Outcomes

Pooled across years, 5% higher facility-level burnout in the AES and MHPS was associated with 0.05 and 0.09 standard deviation worse facility experiences of care relative to the prior year last quarter, respectively. The pooled AES and MHPS burnout coefficients were somewhat attenuated relative to the corresponding annual estimates but remained highly significant (Table 2 and Figs. 1 and 2). We found no significant relationship between AES burnout and continuity of care (coefficient = −0.0009; p = 0.91) or between AES burnout and population coverage (coefficient = −0.01; p = 0.11) in the pooled analysis. Similarly, we found no relationships between MHPS burnout and continuity of care or between MHPS burnout and population coverage.

Figure 1 Pooled multivariable GEE models with facility clusters and control for facility characteristics: Mental health domain.

Figure 2 Pooled multivariable GEE models with facility clusters and control for facility characteristics: Experiences of care. X-axis: burnout measure. Y-axis: prior year burnout slope. Lines: green, AES burnout; orange, MHPS burnout. Abbreviations: AES, All Employee Survey; MHPS, Mental Health Provider Survey. Adjusted for complexity, rurality, staff ratio, and productivity. Experiences of care is composed of (1) Veteran access experience; (2) Veteran MH care experience; (3) provider experience, quality care; (4) provider experience, collaboration; (5) provider experience, access; and (6) provider job experience.

Figure 1 illustrates the relationships between prior year burnout and MH-SAIL metrics based on the pooled analysis; for both AES and MHPS burnout, the negative coefficient was larger in magnitude for the subjective experience of care domain than for the two objective domains. Figure 2 demonstrates that, within the experiences of care domain, burnout did not affect Veteran satisfaction but did negatively affect provider satisfaction.

DISCUSSION

In a sizable study of burnout among BHPs over time in one of the largest mental health systems nationally, we found that higher prior year facility-level burnout was associated with lower scores on subjective but not objective measures of quality of mental health care. The finding corresponds to facilities with higher burnout showing a negative change in provider but not patient experiences of care relative to the prior year, with larger effects in the relationship between MHPS burnout and provider experience of care than in the corresponding AES burnout analyses. Differences in the magnitude of results from the two surveys may reflect variation in the respondent pools for each measure. Further, because the analyses accounted for variation attributable to differences in facility complexity, staffing, rurality, and productivity, future work will need to identify other explanations for why providers who feel burned out perceive a decrease in the quality of care they provide and how health systems can respond effectively.

This work adds to the literature on the relationship between burnout and patient outcomes. One review focusing on objectively measured outcomes, including quality of care and medical errors, found no significant relationship in six studies and negative relationships between burnout and patient outcomes in four studies (namely, less effective provider-patient communication, more referrals, increased standardized mortality ratios, and increased hospitalizations for ambulatory care-sensitive conditions).30 Other researchers have suggested that when providers conceptualize negative outcomes, they include factors that coders consider process variables rather than outcome variables.31, 32

Intervention studies support the limited impact of burnout on objective patient outcomes. One mixed-method, randomized, comparative effectiveness study in community mental health centers tested two competing approaches to improve care, one addressing clinician burnout and the other addressing how clinicians interact with consumers. It found no difference between the interventions in their effects on burnout, patient-centered processes, or other outcomes.33 Another study found that work-life interventions that improve clinician satisfaction and well-being do not reduce errors or improve quality, suggesting a need for longer, more focused interventions to produce meaningful improvements in patient care.34

Our findings, combined with the literature, suggest an uncertain path forward for research in this area. How should the field address the seemingly incongruent findings that provider burnout may not negatively affect patient outcomes, when accounting for staffing and productivity levels, even while it harms provider experiences? Others indicate that quality of care remains preserved at great personal cost to providers.11 Across 5 years of data, greater BHP burnout in one year was associated with poorer changes in experience of care in the following year, suggesting downstream effects of burnout that are not immediately apparent. Even if burnout does not have a consistent association with all patient outcomes, providers’ experiences of care matter.

Limitations

This study has limitations. We examined how facility-level provider burnout influenced facility-level provider and patient experiences. We did not have individual-level patient or provider data to assess the relationship between an individual provider’s burnout and the outcomes of a specific set of patients, so we cannot know whether a particular provider treated a particular patient. We did not include data from after the onset of the COVID-19 pandemic, so we do not know how the relationships between BHP burnout and outcome measures changed in that context. We did not analyze the individual MH-SAIL measures, which could provide a more granular picture of the relationship between burnout and individual outcomes. Not every VHA facility provides all relevant data sources; missing facilities could have different provider and patient experiences and outcomes than the included centers. Finally, objective measures capture receipt of services rather than quality of care; subjective measures may capture a different dimension of the quality of services provided and received. Despite these limitations, two similar but distinct surveys yielded nearly identical results, increasing the robustness of the identified associations.

CONCLUSION

BHP burnout negatively affected subjective provider outcomes. Our findings align with others suggesting that providers may ensure high-quality patient care even at a significant cost to provider well-being.11 That burnout may affect providers’ experiences of care remains a meaningful finding and consideration for future policies and interventions.