Measuring institutional transformation: a multifaceted assessment of a new faculty development program

David E. Favre (Center for Teaching Excellence, University of Virginia, Charlottesville, Virginia, USA)
Dorothe Bach (Center for Teaching Excellence, University of Virginia, Charlottesville, Virginia, USA)
Lindsay B. Wheeler (Center for Teaching Excellence, University of Virginia, Charlottesville, Virginia, USA)

Journal of Research in Innovative Teaching & Learning

ISSN: 2397-7604

Article publication date: 29 April 2021

Issue publication date: 26 November 2021

Abstract

Purpose

This study aims to understand the extent to which a faculty development program that includes a week-long course design experience followed by sustained support changes new faculty's perceptions, beliefs and teaching practices. The authors employed the teacher professional knowledge and skill (TPK&S) framework and characteristics of effective educational development interventions to drive the program development, implementation and assessment.

Design/methodology/approach

This study utilized a mixed methods approach. Data sources include pre-/mid-/post-program responses to a validated survey, pre-/post-program course syllabi analyzed using a validated rubric and pre-/post-classroom observations collected using the Classroom Observation Protocol for Undergraduate STEM (COPUS) instrument.

Findings

Findings indicate transformative effects for participants' beliefs about their teaching and changes to their instructional practices. Significant and practical effects were observed across different portions of the program for increases in participants' self-efficacy, endorsement of a conceptual change approach toward teaching and perceptions of institutional support. Participants produced more learning-focused syllabi and many moved toward more student-centered instructional approaches in their teaching practices.

Research limitations/implications

Due to the voluntary nature of the new faculty development program, this study may have been limited by participant self-selection bias and differential sample sizes for the study's individual measures. Future research should consider designs which maximize faculty participation in measurement across all data sources.

Originality/value

This study addresses shortcomings in prior studies which utilized limited data sources to measure intervention impact and answers the call for more rigorous research to obtain a more complete picture of instructional development in higher education.

Citation

Favre, D.E., Bach, D. and Wheeler, L.B. (2021), "Measuring institutional transformation: a multifaceted assessment of a new faculty development program", Journal of Research in Innovative Teaching & Learning, Vol. 14 No. 3, pp. 378-398. https://doi.org/10.1108/JRIT-04-2020-0023

Publisher

Emerald Publishing Limited

Copyright © 2021, David E. Favre, Dorothe Bach and Lindsay B. Wheeler

License

Published in Journal of Research in Innovative Teaching & Learning. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


Introduction

An abundance of studies demonstrates the importance of implementing active learning in undergraduate courses (Freeman et al., 2014; Kuh et al., 2008). However, lecture still predominates in the majority of undergraduate classrooms (Stains et al., 2018). One reason for the limited adoption of active learning in higher education may be the use of educational development strategies that are ineffective in promoting changes in instructional practices (Henderson et al., 2011; Kezar et al., 2015; Pallas et al., 2017). These researchers suggest interventions should build upon best practices in professional development, align with research about effective teaching and address individual and institutional factors that may promote or inhibit change. However, there is a lack of rigorous research on these interventions to identify what works, what does not and why (Kezar, 2014; Stes et al., 2007) and limited use of theory to drive understandings in educational development interventions (Henderson et al., 2011). The present study aims to address the limitations identified in the literature by describing the design of an intervention for new faculty based on best practices in educational development and exploring its impact on faculty beliefs, perceptions and teaching practices.

Promoting change in higher education

Contrary to the required teacher preparation courses for K-12 teachers, postsecondary faculty typically receive no formal pedagogical training (Robert and Carlsen, 2017; Walker et al., 2008; Weidman et al., 2001). Many higher education institutions have teaching centers that offer programs to new and experienced faculty to support teaching development. Depending on their resources, center programming supports individuals and groups of faculty, and intervention formats encompass one-on-one teaching consultations, workshops, multiday institutes and year-long faculty learning communities (Sunal et al., 2001).

In a review of studies exploring the impact of different change strategies on science, technology, engineering and mathematics (STEM) faculty's instructional practice, Henderson et al. (2011) describe key features of effective educational development interventions and areas for improving the development and impact of these interventions. Further, the researchers conclude that studies often do not build “on prior empirical or theoretical work; and most published results claim success of the change strategy studied, but the evidence presented is often not strong” (p. 977). The present study addresses these limitations by developing an intervention based on the theoretical and empirical research and creating a multifaceted assessment approach that identifies changes in teaching beliefs and uses direct measures of changing instructional practices.

Theoretical and empirical research base for educational development interventions

As part of their review, Henderson et al. (2011) identify three characteristics of effective educational development strategies: they (a) address individual characteristics such as faculty's beliefs about teaching; (b) involve interventions that last at least one semester and (c) recognize that colleges and universities are complex systems and design strategies appropriate for such systems. In order to shift instructional practices in higher education, interventions developed to promote change in practice should attend to both internal (i.e. beliefs) and external (i.e. context) factors. We use the three characteristics of effective change described in Henderson et al.'s (2011) analytical review to frame the research underlying the development of the present study's intervention.

Critical factors in educational development

A recent report of the American Academy of Arts and Sciences claims that the attention of teaching centers is "overwhelmingly rooted in general pedagogical knowledge, and indifferent to specific disciplines and subjects and their distinctive concepts and ideas" (Pallas et al., 2017, p. 2). As we will show, this is not the case in more sustained interventions offered by many teaching centers. In this study, we use the concept of teacher professional knowledge and skill (TPK&S) to understand the individual factors that shape instructional practice, one of which is instructors' beliefs about teaching. The TPK&S framework addresses the limitations of Shulman's (1986) original conceptualization of general pedagogical knowledge (GPK) and pedagogical content knowledge (PCK). TPK&S consists of two main components: teacher professional knowledge bases (TPKB) and topic-specific professional knowledge (TSPK) (Gess-Newsome, 2015).

TPKB includes, but is not limited to, general knowledge about assessment (e.g. knowledge of formative assessment and how to use it to drive instruction), content (e.g. knowledge of the disciplinary content), students (e.g. knowledge of student development and approaches to inclusive and equitable teaching) and curriculum (e.g. knowledge of course design processes) identified from the literature. In higher education development, ideas such as backward and integrated course design (Fink, 2013; Wiggins and McTighe, 2005), educative assessment (Huba and Freed, 2000; Wiggins, 1998), active learning (Bonwell and Eison, 1991), student motivation (Elliot et al., 2017; Schunk et al., 2007), inclusive teaching (Walton and Cohen, 2011; Burgstahler, 2015) and transparent assignments (Winkelmes et al., 2016) form the literature base for TPKB (Figure 1).

As the name suggests, TPKB is the basis for but is also informed by TSPK. Similar to PCK, TSPK encompasses knowledge specific to a disciplinary topic and includes relevant teaching strategies (e.g. knowledge of effective practices that can be used in particular contexts), understanding of students (e.g. knowledge of student understanding around a course topic) and disciplinary practices (e.g. knowledge of disciplinary practices and big ideas). However, TSPK is fundamentally different from PCK in that “TSPK is canonical, generated by research or best practice, and can have a normative function in terms of what we want teachers to know about a topic- and context-specific instruction” (Gess-Newsome, 2015, p. 33). In higher education, knowledge of different evidence-based instructional practices (EBIPs) (Brookfield and Preskill, 2012; National Research Council, 2012; Lund and Stains, 2015; Nilson, 2016; Sternglass, 2017; Sunal et al., 2014), knowledge about student misconceptions and conceptual change strategies (Brown et al., 2018; McConnell et al., 2006), and knowledge of scientific and engineering practices (Carmel et al., 2019; National Research Council, 2012) and reflective and contemplative practice (Boud et al., 2013; Gunnlaugson et al., 2014) form the basis for TSPK.

The TPK&S model includes "amplifiers and filters," factors that influence how the teacher enacts TPKB and TSPK in practice. Higher education research suggests that faculty beliefs about teaching and professional identity are formed as students (Luft et al., 2004; Trede et al., 2012) and that those beliefs shape the decisions faculty make in their teaching (Brown et al., 2006; Robert and Carlsen, 2017). Additionally, external factors such as departmental and institutional support also play a role in faculty's pedagogical choices (Michael, 2007; Shadle et al., 2017; Sturtevant and Wheeler, 2019). As a result, faculty often hold beliefs that can be described as traditional: seeing their role as delivering content, favoring lecture over active learning and viewing students as passive consumers of knowledge. Since beliefs are important predictors of teaching practice, educational development initiatives focus on shifting instructors' orientation from traditional beliefs toward increased alignment with evidence-based practice (Sunal et al., 2001).

The TPK&S framework described above was originally developed for K-12 teaching contexts; however, it has been applied to various higher education contexts to understand undergraduate STEM instruction (Auerbach and Andrews, 2018). To our knowledge, TPK&S has not been used as a framework for describing the design of educational development interventions.

Effective delivery formats of educational development interventions

When considering the importance of duration on effective change strategies (Henderson et al., 2011), the research shows that extended interventions such as week-long course design institutes (CDIs) are effective in shifting instructors' course design practices from a content focus to a learning focus (Palmer et al., 2016). When offered by university-wide centers for teaching and learning (CTLs) for an interdisciplinary group of faculty, these interventions largely focus on building TPKB. However, there are some CDI designs that build in opportunities for increasing TSPK through differentiated activities and grouping faculty according to discipline or shared pedagogical challenges.

The research also demonstrates that educational development interventions lasting at least one semester are more impactful than shorter delivery formats (Henderson et al., 2011; Sunal et al., 2001) and that sustained engagement through faculty learning communities (FLCs) provides access to teaching resources, collegial community and time and structure for exploring new pedagogical strategies (Daly, 2011; Lee, 2010; Meizlish et al., 2018). FLCs can have a disciplinary focus, homing in on TSPK. However, even cross-disciplinary FLCs provide opportunities to cultivate both TPKB and TSPK through differentiated activities.

Finally, receiving formative feedback in the context of individual consultations has also been shown to enhance instructional practice (Finelli et al., 2011; Trower, 2012) and allows for learning of TPK&S. Interventions that use a combination of all three (CDIs, FLCs and individual consultations) have the potential to provide varied opportunities for improving both TPKB and TSPK. The present study aims to explore changes in faculty beliefs and practices following an educational development intervention that combines all three effective intervention strategies.

Aligning and designing system-appropriate intervention strategies

Alignment with the particularities of complex institutional contexts is important for the success of educational development interventions. In research-intensive institutions, faculty are rewarded primarily for research productivity. In addition, studies have found that faculty struggle to maintain high productivity in both teaching and research (Fairweather, 2008). This is in tension with increased scrutiny of educational outcomes (Boyer Commission, 1998) and calls to close the education gap for marginalized populations (Ladson-Billings, 2006). Given these tensions, research-intensive institutions need to reconsider reward structures and support effective, equitable and efficient teaching practices, particularly for new faculty who are expected to be highly productive in order to obtain tenure. Efforts to train new faculty, particularly in times of high faculty renewal, present institutions with the opportunity to reset the culture of teaching and learning. Furthermore, research shows that focusing on new faculty can be particularly impactful (Beach et al., 2016) as they may be more open to adopting effective teaching practices than more established colleagues (Ebert-May et al., 2011). In the present study, we address this call through the development and assessment of an educational development intervention for new faculty.

Assessment of educational development interventions

Although studies on educational development interventions have demonstrated their impact (Cilliers and Herman, 2010; Meizlish et al., 2018), assessment efforts that systematically move beyond participant satisfaction and perceptions are scarce. For example, published reports of faculty development initiatives often provide descriptive overviews (Sorcinelli et al., 2006), make inferences from program participation and levels of participant satisfaction (Amundsen and Wilson, 2012) or rely on limited sources of evidence to assess program effectiveness (Cilliers and Herman, 2010).

These shortfalls, however, are not surprising. Charged with developing and facilitating such programs, CTLs have historically had limited resources to conduct research on their interventions (Chism et al., 2012; Hines, 2009, 2011). In addition, the challenges of tracing the path from intervention to changes in teaching practices to improvements in student learning are “legion” as Mary Taylor Huber reminds us (Condon et al., 2016). In the present study, we aim to expand the literature by reporting on the design of an educational development initiative and findings from a multipronged assessment study to measure its impact.

Purpose

This article addresses limitations found in the previous research reviewed above. First, it responds to the call that theoretical and empirical research be used to develop and implement an educational development intervention by describing a research-based intervention design (Henderson et al., 2011). Second, the study on the intervention's impact addresses gaps in program assessment by reporting on the design of and findings generated by a multifaceted assessment approach.

Design of a research-based intervention (The New Faculty Program)

The New Faculty Program was created in 2015 to prepare large numbers of newly hired faculty during their first three years of teaching. It included an intensive, week-long CDI (35 contact hours) and a semester- or year-long FLC (18 contact hours). This design was aligned with research on professional development, effective teaching and change strategies. It sought to change faculty's beliefs about teaching through extended engagement over the course of at least one semester. It also utilized a learning community to support change and employed strategies that are largely accepted at our institution. Further, it aligned with the TPK&S framework, seen in Figure 2 and described below.

Course design institute

The intensive week-long course design boot camp was based on higher education research and aligned with the TPK&S framework to support faculty in designing evidence-based, learning-focused courses (see supplemental appendix for CDI overview). While the institute largely focused on TPKB that cuts across disciplines, it was also structured to continuously engage faculty in exploring PCK specific to their fields (i.e. TSPK). For example, participants were grouped according to disciplines (e.g. science, humanities) or specific pedagogical challenges or interests (e.g. contemplative pedagogy, community engagement, technology-enhanced learning). Further, self-guided learning activities throughout the week offered choices and discipline-specific examples. Thus, our CDI provided instructors opportunities to improve their TPKB and TSPK.

Faculty learning community

After CDI, program participants engaged in a semester- or year-long FLC consisting of a half-day retreat, eight 90-min meetings and an hour-long individual teaching consultation (see supplemental appendix for details on the FLC). The FLC topics and structure supported participants' translation of TPKB and TSPK into their course instruction. This translation was done both through program assignments and FLC meeting interactions. For example, before an FLC meeting on assessment, participants selected a formative learning assessment technique (Angelo and Cross, 2005; Barkley and Major, 2015) and implemented it in their course. During the FLC meeting, the faculty discussed with their peers how to refine the technique for the future. As another example, participants read a chapter on metacognition prior to an FLC meeting (Nilson, 2016) and selected a metacognitive strategy appropriate to one of their class objectives. They then created a plan for implementing the strategy in their course to improve students' metacognitive skills. During the FLC meeting, participants were grouped according to class size and provided each other feedback on the plans. In the following meeting two weeks later, participants reported on the successes and failures of implementation and received additional feedback from the group.

Program assessment and research questions

The present study aims to understand the extent to which the previously described educational development intervention changes new faculty's perceptions, beliefs (i.e. amplifiers and filters of TPKB and TSPK) and practices. Through a multifaceted assessment approach detailed below, the study seeks to address gaps in program assessment literature. The research questions for the study are as follows:

  1. To what extent does participation in the New Faculty Program improve participants' self-perceived confidence with and attitudes toward learning-focused, evidence-based teaching practices?

  2. How does participation in the New Faculty Program impact instructors' perception of institutional support for teaching?

  3. To what extent does participation in the New Faculty Program increase instructors' use of learning-focused course design principles and evidence-based teaching practices?

Methods

This mixed methods study was conducted at a research-intensive public institution in the Mid-Atlantic region of the USA (see appendix A for student and faculty demographic composition). Data sources include three surveys, course syllabi and observations.

Participants

Study participants (n = 105) came from the first three cohort years (2015–2017) of the program. All participants were faculty who had been employed by the university for no more than three years. Participation in the New Faculty Program was voluntary but strongly encouraged by the university's administration. Faculty were also encouraged to participate by word-of-mouth recommendations from former participants and by the program's published description and curriculum (see supplemental appendix). A chi-squared goodness-of-fit test was conducted to determine how representative program participants were of the new faculty population during the years of eligibility for enrollment in the New Faculty Program (2012–2017). Analyses determined that program faculty were statistically equivalent to new faculty in their race/ethnicity and gender demographics (p > 0.05) (Appendix A) but differed statistically in their faculty rank, tenure status and discipline area. New faculty in the New Faculty Program were predominantly from the STEM disciplines and held the faculty rank of assistant professor (Table 1), while new university faculty were predominantly from the humanities disciplines and had a more balanced representation across assistant professor and instructor/lecturer ranks [1].
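
To illustrate the discipline-area comparison, the goodness-of-fit test can be reproduced from the school counts reported in Table 1. The sketch below is a hypothetical re-creation in Python using SciPy; the authors' exact software and category groupings are not specified in the text.

```python
# Hypothetical re-creation of the discipline-area goodness-of-fit test using the
# school counts from Table 1; the authors' exact procedure/software may differ.
import numpy as np
from scipy.stats import chisquare

program_counts = np.array([22, 21, 14, 45, 3])          # program faculty by school (n = 105)
population_counts = np.array([106, 111, 188, 127, 66])  # all new faculty by school

# Expected counts if program faculty mirrored the new-faculty population
expected = population_counts / population_counts.sum() * program_counts.sum()

chi2, p = chisquare(f_obs=program_counts, f_exp=expected)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")  # p < 0.05 here, consistent with the reported difference
```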

Data sources and collection

As aligned with the research questions, our data sources aimed to capture evidence of change in the amplifiers and filters that, according to the TPK&S framework, may impact the ways in which instructors teach. Survey data were collected from participants via Qualtrics at three different times: just before joining their program (pre-program), after completing their course design experience (post-CDI) and at the completion of their FLC program (post-program). Syllabi and classroom observations were collected for participants pre- and post-program (see Table 2 for an overview of the data collection process). Participation in the program's developmental and research components was strictly voluntary for new faculty at the university. Faculty could elect to participate or not in any or all stages of data collection throughout the study. Consequently, due to the voluntary nature of study participation, we gathered limited complete pre-/post-program data in some areas of assessment. Sample sizes ranged from a high of 53 (50.48% of participants) for one measure (syllabus analysis) to a low of 23 (21.90% of participants) for another measure (classroom observations). We address these differential participation rates in our findings and discussion. Incomplete responses from participants for specific assessments were excluded from the dataset.

Surveys

Surveys consisted of three instruments to assess participants' teaching self-efficacy (teaching appraisal inventory [TAI], Balam, 2006), their attitudes toward teaching (revised approaches to teaching inventory [ATI], Trigwell et al., 2005) and their sense of belonging to the institution (adapted from the classroom community scale [CCS], Rovai, 2002) (Appendix B).

Part A of the TAI, consisting of 43 Likert scale items, was used to measure participant teaching self-efficacy. The TAI consists of seven dimensions: assessment, class facilitation, effective assignments, goals and objectives, learning activities, learning environment and overall teaching.

The revised ATI, consisting of two independent 11-item scales, was used to assess the way participants go about teaching in a specific context, subject or course. The conceptual change/student-focused (CCSF) scale consists of questions around the idea that quality learning occurs when students change their conceptions of phenomena. The information transmission/teacher-focused (ITTF) scale consists of questions around the idea that effective learning occurs when knowledge is transmitted from teachers to students.

A revised version [2] of the CCS was used to measure participants' sense of community or belonging at the university. This 20-item instrument consists of two subscales with ten items each that assess faculty's connectedness and learning beliefs. The connectedness subscale measures feelings of “connectedness, cohesion, spirit, trust, and interdependence,” while the learning subscale measures feelings of shared “values and beliefs concerning the extent to which their educational goals and expectations are being satisfied” (Rovai, 2002, pp. 206–207). Qualitative data were collected in an end-of-program questionnaire that assessed participants' beliefs about continued institutional support of their pedagogical development. They were asked to respond to the open-ended question: “Do you have any additional comments, concerns, or suggestions regarding support for your teaching after the New Faculty Program?” Additional qualitative data were gathered from commentary by the university's vice president and provost about the importance of the New Faculty Program and the institution's culture of teaching and learning.

All dimensions for the TAI (seven dimensions), ATI (two dimensions) and CCS (two dimensions) have Cronbach's alpha (α) values > 0.75, demonstrating acceptable to excellent levels of internal consistency (DeVellis, 2016). Based on prior studies that employ these instruments, individual questions were either averaged (TAI) or summed (ATI and CCS) for each dimension. We do not report on changes by individual items.
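
For readers who wish to replicate the reliability check and dimension scoring, the following is a minimal sketch assuming a hypothetical respondents-by-items matrix; it uses the standard Cronbach's alpha formula rather than any procedure specific to this study.

```python
# Minimal sketch of the internal-consistency check and dimension scoring described above.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(35, 11)).astype(float)  # placeholder Likert responses

alpha = cronbach_alpha(items)              # values > 0.75 were treated as acceptable
tai_dimension_score = items.mean(axis=1)   # TAI dimensions were averaged across items
ati_dimension_score = items.sum(axis=1)    # ATI and CCS dimensions were summed
```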

Course syllabi

For each participant's pre- and post-program syllabus, we used a rubric (Palmer et al., 2014) to assess the learning-focused orientation of the course represented in the syllabus. The rubric includes 13 components within four main dimensions: learning goals and objectives, assessment activities, schedule and the overall learning environment (promise, tone and inclusivity). Two raters independently scored each syllabus on the 13 components, compared their scores and discussed differences until consensus was reached. Each syllabus received an average score from the two raters on a scale from 0 (content focused) to 46 (learning focused). [In contrast to traditional contractual syllabi that focus on content coverage, grading procedures and policies, learning-focused syllabi clearly communicate what students will gain from the course, what they will do to achieve the course objectives, how they will be evaluated and how to best study and seek support.]

Classroom observations

The Classroom Observation Protocol for Undergraduate STEM (COPUS) (Smith et al., 2013) was used to determine the presence or absence of 12 instructor and 13 student behaviors that occurred during 2-min time intervals over the course of the allotted class time. Observers, trained on COPUS [3], observed participants' instruction a minimum of once pre-program and once post-program.

Data analysis

Post hoc power analyses were performed using G*Power to determine if sample sizes were sufficient to detect a large effect size (r = 0.5) with an alpha value set at 0.05. When there was insufficient power (<0.80), the power analysis was reported and descriptive statistics were used. Data were also tested to determine if assumptions of parametric testing were met. When assumptions were not met, appropriate nonparametric analyses were performed.
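
The decision rule described above, an assumption check followed by a parametric or nonparametric test, might look like the sketch below; the paired scores are placeholders, and the G*Power post hoc power analyses themselves are not reproduced here.

```python
# Sketch of the parametric-assumption check and nonparametric fallback described above
# (placeholder paired scores; the post hoc power analyses were run separately in G*Power).
import numpy as np
from scipy.stats import shapiro, ttest_rel, wilcoxon

rng = np.random.default_rng(1)
pre = rng.normal(30, 8, size=25)
post = pre + rng.normal(3, 5, size=25)   # hypothetical paired pre/post scores

_, p_normal = shapiro(post - pre)        # are the paired differences plausibly normal?
if p_normal >= 0.05:
    stat, p = ttest_rel(pre, post)       # assumptions met: parametric paired test
else:
    stat, p = wilcoxon(pre, post)        # assumptions violated: nonparametric analogue
```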

Surveys

We sought to identify changes in participants' seven dimensions of self-efficacy, two dimensions of teaching approaches and two dimensions related to sense of belonging across time. A one-way repeated measures ANOVA was conducted to identify these changes between the three time points (pre-program, post-CDI, post-program). We calculated effect sizes (η2p) for each significant change observed, and post hoc comparisons using the Bonferroni correction identified specific differences between the three time points.
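
A minimal sketch of this analysis is shown below, assuming a hypothetical long-format file with one row per participant per time point; the authors do not specify their statistical software, and the partial eta squared is recovered from the F statistic and its degrees of freedom.

```python
# Sketch of the one-way repeated measures ANOVA, effect size and Bonferroni-corrected
# pairwise comparisons described above (hypothetical long-format data and file name).
from itertools import combinations
import pandas as pd
from scipy.stats import ttest_rel
from statsmodels.stats.anova import AnovaRM

# Columns: "id" (participant), "time" in {"pre", "post_cdi", "post_program"}, "score"
df = pd.read_csv("tai_dimension_long.csv")

aov = AnovaRM(df, depvar="score", subject="id", within=["time"]).fit()
f, df1, df2 = aov.anova_table.loc["time", ["F Value", "Num DF", "Den DF"]]
partial_eta_sq = (f * df1) / (f * df1 + df2)   # eta^2_p from F and degrees of freedom

# Post hoc pairwise comparisons with a Bonferroni correction (three comparisons)
wide = df.pivot(index="id", columns="time", values="score")
for a, b in combinations(["pre", "post_cdi", "post_program"], 2):
    t, p = ttest_rel(wide[a], wide[b])
    print(f"{a} vs {b}: t = {t:.2f}, p_bonf = {min(p * 3, 1.0):.3f}")
```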

Course syllabi

Normalized gains were calculated for each participant's pre-/post-program syllabi scores using Hake's (1998) formula: <g> = 100*(post – pre)/(46 – pre), where 46 is the maximum score possible. Each syllabus also received a classification as content focused (0–16.5), transitional (17–30.5) or learning focused (31–46). A Wilcoxon signed-rank test was performed to test the significance of differences for total and criterion-level pre- and post-program scores because data were not normally distributed. Effect sizes were calculated (r = z/√N), as suggested by Fritz et al. (2012).
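
The sketch below walks through this scoring pipeline with placeholder rubric totals: Hake's normalized gain per participant, category assignment, the Wilcoxon signed-rank test and the r effect size (recovering z from the two-sided p value is an approximation).

```python
# Sketch of the syllabus-score analysis described above (placeholder rubric totals).
import numpy as np
from scipy.stats import norm, wilcoxon

MAX_SCORE = 46
pre = np.array([5.0, 12.5, 20.0, 8.0, 31.0])     # hypothetical pre-program totals
post = np.array([28.0, 35.5, 40.0, 22.0, 44.0])  # hypothetical post-program totals

gain = 100 * (post - pre) / (MAX_SCORE - pre)    # Hake's <g> for each participant

# Category assignment along the content- to learning-focused continuum
category = np.select([post <= 16.5, post <= 30.5], ["content", "transitional"], "learning")

stat, p = wilcoxon(pre, post)          # used because the data were not normally distributed
z = norm.isf(p / 2)                    # approximate z recovered from the two-sided p value
r = z / np.sqrt(pre.size + post.size)  # effect size r = z / sqrt(N) (Fritz et al., 2012)
```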

Classroom observations

From the COPUS data gathered, % total time was calculated for each of the 25 behaviors. For example, if the observer coded for the presence of lecture for 20 of the 50 min of class, the % total time spent lecturing would be 40%. These percentages were then analyzed using the COPUS Analyzer (http://www.copusprofiles.org/) to generate a distinct instructional profile characterizing instructional practice as didactic (>80% lecture), interactive lecture (lecture with some group work) or student centered (group or individual activities used throughout class) (Stains et al., 2018).
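
A small sketch of the % total time calculation (matching the 20-of-50-minutes example above) and a crude stand-in for the didactic criterion follows; the actual profile assignment used the COPUS Analyzer's clustering, which is not reproduced here.

```python
# Sketch of the COPUS "% total time" calculation for one class session; 1 means the code
# (here, lecturing) was marked present in that 2-min interval. Matches the example above:
# lecture present in 10 of 25 intervals of a 50-min class -> 40%.
import numpy as np

lecture_present = np.array([1] * 10 + [0] * 15)   # 25 two-minute intervals
pct_lecture = 100 * lecture_present.mean()        # 40.0

# Crude stand-in for one profile criterion; the full didactic / interactive lecture /
# student-centered assignment was generated by the COPUS Analyzer (copusprofiles.org).
profile_hint = "didactic" if pct_lecture > 80 else "interactive or student centered"
```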

Results

Below we report the data organized by research question. Additional statistical tables on post hoc Bonferroni pairwise comparisons can be found in Appendix C.

Changes in teaching self-efficacy and attitudes

Results indicate statistically significant (p < 0.05) mean score differences for all seven TAI dimensions of teaching self-efficacy (Figure 3), with effect sizes (η2p) ranging from 0.232 to 0.294, demonstrating large practical effects (Cohen, 1988; Norouzian and Plonsky, 2017) for participants in all areas. Post hoc Bonferroni pairwise comparisons revealed significant gains (p < 0.05) across all seven self-efficacy dimensions between pre-program and post-CDI and between pre-program and post-program; however, no significant changes occurred in any of the seven dimensions post-CDI to post-program. Thus, participants' self-efficacy increased following CDI and was maintained across the FLC.

Results also indicate significant change in participants' mean scores on the conceptual change dimension (CCSF) of the ATI, F (2,72) = 7.55, p = 0.001, η2p = 0.173 (Table 3). Post hoc Bonferroni pairwise comparisons demonstrate that participants made significant gains pre-program to post-CDI and pre- to post-program. However, there was no significant change between post-CDI and post-program. Similar to participants' self-efficacy scores, participants' attitudes toward student-focused instruction significantly improved following CDI and were maintained across the FLC.

Conversely, no statistically significant differences existed in participants' mean scores on the information transmission dimension (ITTF) of the ATI, F (2,72) = 0.62, p = 0.543, η2p = 0.017.

Changes in course design and teaching practices

Pre-program syllabus total scores ranged from 0 to 43 (46 maximum points) with a mean score of 12.75 (SD = 11.46). The majority of pre-program syllabi fell into the content-focused category (Table 4). Post-program syllabi total scores ranged from 0.5 to 46 points, with a mean score of 30.26 (SD = 10.16). Results indicated that overall syllabi scores were significantly higher post-program compared to pre-program (z = 5.96, p < 0.001, r = 0.58). These significant improvements were observed for all four syllabus subcomponents (i.e. learning goals, assessment activities, schedule and learning environment). When exploring participants' pre–post-program normalized gains, mean gains were 49.5% (SD = 33.4%). In other words, participants, on average, gained half of the possible points post-program based on their pre-program scores. Additionally, 13 participants achieved low gains (<g> ≤ 30%), 25 participants achieved moderate gains (30% < <g> < 70%) and 15 participants achieved high gains (70% ≤ <g> ≤ 100%).

Pre-program, nearly half (n = 11, 47.8%) of participants' observed instructional practices were characterized as didactic (>80% lecture). Nearly another half (n = 10, 43.5%) of participants' pre-program instructional practices were characterized as interactive lecture. The remaining two participants (8.7%) were observed using student-centered instructional practices pre-program, characterized by group work integrated throughout the class. Post-program, ten (43.5%) participants' instructional practices were didactic, seven (30.4%) participants utilized interactive lecture and six (26.1%) participants were observed implementing student-centered instruction. When exploring individual changes in participants' instructional practices pre- and post-program, the majority (n = 12, 52.2%) of participants' instructional practices did not change (Figure 4). However, of those participants that did shift, more shifted toward student-centered instruction (n = 8, 34.8%) than toward didactic instruction (n = 3, 13.0%).

The percentage of total class time instructors spent on specific activities was compared pre- and post-program (Table 5). While descriptive in nature, participants appeared to spend less time lecturing and more time engaging students in group work and clicker questions.

Change in perception of institutional support

There was a significant increase in participants' sense of being connected to the institutional community, F (2,48) = 6.60, p = 0.003, η2p = 0.216, with an increase in mean scores over time (Table 6). A significant increase in participants' belief that learning about teaching is supported by the institution was also observed, F (2,48) = 5.93, p = 0.005, η2p = 0.198, with mean scores also increasing over time. Post hoc pairwise comparisons demonstrated that participants felt significantly more connected to the institution pre-program to post-CDI and pre- to post-program; however, no significant changes occurred post-CDI to post-program. Participants' sense of the institutional support available for learning about teaching significantly improved pre- to post-program and post-CDI to post-program.

Participants generally expressed a belief that their continued pedagogical development was valued and would be supported by the university. One participant stated,

In the past (primarily at another institution) I felt very isolated as I tried to grow as a teacher. At some point I had a goal to try something new in every class I taught, but I got overwhelmed, sometimes lazy, and unsupported by my colleagues. While I already felt much more supported as a teacher in my department here at the university, being in the New Faculty Program has really cultivated my enthusiasm for working toward being a great teacher.

Another participant commented on her new willingness to seek support from within her department: “I have begun to talk with colleagues more about their approach to teaching and discuss some of the challenges I have with my own class.” Support from the university's administration was clear and encouraging in a video endorsement of the New Faculty Program's value. In the video, the vice president and provost stated,

As a new faculty myself and getting ready to teach my first course in the fall, I'm excited to participate in the Ignite Program and get that added insight to be the best teacher I can be, but also to be a part of what makes the culture of teaching and learning so special here at UVA. It is something different from the rest of our peers, and you should join and be a part of it (CTE UVa, 2016).

Discussion

In summary, participants' teaching self-efficacy, attitudes about teaching with a conceptual change approach (CCSF) and connectedness to the university all significantly improved following CDI and were maintained across the New Faculty Program's FLC. Participants' perceptions of institutional support for learning about teaching significantly improved over the course of the FLC. Participants designed more learning-focused courses, as evidenced by their more learning-focused syllabi. Finally, due to the small sample size of faculty participating in classroom observations, we were not able to draw conclusions about changes in instructional practices.

Instructor self-efficacy and attitudes

Participants' self-efficacy was measured across seven important dimensions for implementing learning-focused practices. Significant increases in faculty's self-efficacy occurred following their CDI experience, with gains maintained over the course of the FLC as they integrated these new learning-focused practices into their teaching. Early maintenance of teaching self-efficacy gains is remarkable and runs counter to the well-documented pattern where teachers' self-efficacy increases during skill acquisition and then dips during implementation when new skills are incorporated into their professional practices (Favre and Knight, 2016; Tschannen-Moran and McMaster, 2009; Woolfolk Hoy and Burke Spero, 2005). Rogan (2007) suggests that FLCs provide scaffolding and support during the crucial time when instructors integrate innovations into practice. Given that reflective practice is central to faculty teaching (Kane et al., 2004), we hypothesize that the opportunity to reflect upon and discuss their experiences during the implementation phase of our FLC may have offered faculty enough support to insulate them from the predictable dip in self-efficacy during their initial attempts to integrate new teaching practices. Furthermore, the New Faculty Program may have provided faculty with opportunities to translate TPKB and TSPK into practice through targeted assignments and reflection. Future research exploring the reflective practice of participating faculty could provide insight into how TPKB and TSPK interact and reinforce each other. More generally, research comparing the reflective practices of participating and nonparticipating faculty is also warranted.

Participant responses on the ATI indicate an increase in their endorsement of a conceptual change approach to teaching after their CDI experience. These gains were also maintained through their continued participation in the FLC. This outcome contradicts a recent claim that faculty development programs have minimal impact on their endorsement of a conceptual change approach and may actually promote a stronger endorsement of an information transmission approach (Ödalen et al., 2019). Rather, our findings are consistent with earlier research demonstrating the efficacy of faculty development for increasing adoption of student-centered attitudes toward instruction (Gibbs and Coffey, 2004; Hanbury et al., 2008) and that other factors beyond pedagogical content knowledge and skill influence practice (Gess-Newsome et al., 2019).

Participants in this study did not change their endorsement of the information transmission approach. Mean scores remained stable and significantly lower than the mean scores for the conceptual change approach. This approach was not reinforced by any part of the learning-focused curriculum of the New Faculty Program. The observed stability of this approach, along with the noted increase in endorsement of the conceptual change approach, supports the independence of these two dimensions in the ATI.

Instructional practices

In our syllabus analysis, participants' overall normalized gain (<<g>>) of 49.5% following the New Faculty Program translates to an average gain of 22.8 points for faculty scoring the lowest points possible for a content-focused syllabus (0.0) and an average gain of 14.4 points for those scoring the lowest points possible for a transitional syllabus (17.0). In other words, the average participant in this study was expected to move up one category along the content-focused to learning-focused syllabi continuum. This outcome compares with an earlier, larger-scale study of the impact of CDI on faculty of all ranks and levels of experience (Palmer et al., 2014), where <<g>> was reported at 60.4% (SD = 22.4%). The normalized gains for program participants were somewhat lower. We speculate that new faculty may have had more exposure to learning-focused teaching practices and may therefore achieve smaller gains than more seasoned faculty, who were less likely to have been inducted into their careers with these methods.
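
The point-gain translation above follows from rearranging Hake's formula, points gained = <g> × (46 − pre), as in this quick check:

```python
# Quick check of the point-gain translation above (Hake's formula rearranged).
avg_gain = 0.495              # mean normalized gain reported for program participants
print(avg_gain * (46 - 0.0))  # 22.77 -> ~22.8 points from the lowest content-focused score
print(avg_gain * (46 - 17.0)) # 14.36 -> ~14.4 points from the lowest transitional score
```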

Classroom observations conducted with a limited sample of program participants (n = 23) revealed that participants appeared to spend less time lecturing and more time engaging students in group work and clicker questions. While descriptive in nature, about a third of the faculty observed moved to more student-centered instructional approaches following the New Faculty Program, while a slight majority of faculty's practice did not shift, and three faculty members shifted from an interactive lecture to a didactic format. This variation is not surprising, as Stains et al. (2018) suggest at least four classroom observations per participant to obtain an accurate assessment of their COPUS profile and repertoire of teaching strategies. Thus, more observations would need to be conducted for each participant, and with a larger sample size, to generate more conclusive results for the classroom observation portion of this study.

Previous work suggests that pedagogical knowledge of affective and metacognitive strategies (i.e. knowledge of students within TPKB) may be important for using student-centered instructional approaches in higher education and concludes that "teaching professional development for active-learning instruction that does not help instructors plan for the cognitive, affective, and metacognitive dimensions of active learning will fall short of promoting effective instruction" (Auerbach and Andrews, 2018, p. 18). In the present study, the New Faculty Program components are intended to support new faculty in translating these dimensions into practice, particularly the affective and metacognitive dimensions. While we did not measure "effective instruction," nor did we capture evidence of student achievement, we can cautiously speculate that the instructors who shifted to active learning did so in a way that was effective for their students. Future work exploring the relationship between instructional practices and student outcomes is needed, particularly for programs such as the one in this study that attend to all dimensions of learning.

Institutional culture

Participants experienced significant increases to both dimensions of their sense of belonging to the institution with large practical effects. Participants increased their sense of connectedness after participating in CDI and maintained these levels throughout their FLC experience. For most participants, CDI marked the beginning of their cohort experience and with it their sense of being connected with members of the institution outside of their departments. The present study extends previous higher education research on learning communities (Kezar and Gehrke, 2017) to suggest that forming a cross-disciplinary FLC following a course design experience can impact new faculty's sense of being connected to a larger network at their institution. Our findings demonstrate that the FLC offered a continuation of a cohort experience and increased awareness of available continued support. Future research exploring engagement of these faculty in other learning communities is needed to understand the long-term impact of the program's FLC.

Conversely, after their CDI participation, there was no change in participants' beliefs that their continued pedagogical development was supported by the institution, but they did experience significant increases in these beliefs after their FLC experience. These increases may be attributed to our institutional context. University leadership is highly supportive of center work in general and of the New Faculty Program in particular, actively signaling that an investment in teaching is important for new faculty. The provost as well as most deans and department chairs invite and recommend new colleagues to the New Faculty Program. Research on the impact of institutional support for teaching suggests that low levels of support can be a large barrier for faculty in feeling connected to the university and in implementing learning-focused instruction (Shadle et al., 2017). The present study supports Henderson's finding that effective educational development interventions are designed in a way that is appropriate for a particular university system. It may also suggest that small changes to how an institution signals the importance of teaching can have far-reaching impacts on faculty's sense of belonging. While not tested in the present study, institutional support may also explain improvements observed in other measured outcomes.

Assessing intervention impact

Our study addresses the shortcomings in prior assessment studies focused on limited data sources as a measure of intervention impact (Amundsen and Wilson, 2012; Cilliers and Herman, 2010). This study also addresses the call for research on university faculty to measure both beliefs and practices to obtain a complete picture of faculty and instruction in higher education (Kane et al., 2002). However, one limitation of the present study was the inability to connect individual faculty across data sources. Our future research aims to explore the impact of educational development interventions at the instructor level to understand the extent to which the intervention translates to understandings, practices and student outcomes.

Implications and future research

Our study demonstrates that educational development interventions designed by CTLs such as the New Faculty Program can be developed using both the empirical and theoretical literature. Through an intensive, sustained format (i.e. CDI combined with a semester-long learning community and individual teaching consultations), the New Faculty Program intentionally targeted multiple components of the TPK&S framework. However, we did not explicitly measure TPKB or TSPK. While some measures exist to assess TSPK components (Gess-Newsome et al., 2019; Seung et al., 2012), they have not been applied broadly in higher education, nor have they been coupled with assessments of TPKB or effective educational development interventions. Our work sets the foundation for integrating a theoretical framework into educational development interventions. Future research should explore the relationship between TPKB, TSPK, amplifiers and filters, and instructional practice in higher education.

Because participation in the New Faculty Program and its assessment is voluntary, there were several limitations to this study. First, due to a selection bias, the New Faculty Program faculty were likely more receptive to changing their teaching practices because they chose to participate in a program aimed at pedagogical development. Other current research in our center seeks to explore differences in faculty instructional practices and student success for the New Faculty Program faculty compared to faculty who have not engaged in our center (Wheeler and Bach, 2021). Second, voluntary participation in the program's assessment contributed to the differential sample sizes for the study's individual measures. Faculty may also have experienced fatigue from the high frequency of data collection inherent in our study design, which reduced their persistence in participation. The limited sample size prevented us from making inferences from observational data or determining a correlation between the endorsement of learning-focused course design principles and the use of evidence-based teaching practices. However, consistent with Kreber and Brook's (2001) caution, the intent of this study was not to make inferences to other contexts based on our results but rather to provide insight into the ability of the New Faculty Program to meet the needs of participants and their students in our particular institutional culture. Finally, the second author of this study was in charge of program implementation, and there may have been some bias in reporting during data collection. Although a research team collected and anonymized the data, there may have been perceptions of being evaluated, particularly in the classroom observation phase of the study.

Future research should consider designs which maximize faculty participation in measurement across all data sources. This could include, among others, strategies for incentivizing participation in all assessments and for allaying concerns about being evaluated. Larger sample sizes and more sensitive observation protocols would allow for correlational studies to determine how changes in faculty beliefs translate into changes in teaching practice. The present study provides the groundwork for addressing calls for future research using approaches such as social network analysis (SNA) (Kezar, 2014) to understand the role CTLs play in higher education change.

Conclusion

In conclusion, our study has several practical applications for institutions, educational development practitioners and researchers. Institutions are well advised to heed the call to support research-based faculty development interventions and invest resources in sustained programming. Furthermore, intensive course design experiences followed by an FLC with targeted assignments and reflection can support faculty in translating TPKB and TSPK into practice. Institutions, and particularly those in key leadership positions at institutions, should actively signal that investment in teaching is important for new faculty. Finally, comprehensive and systematic research on educational development interventions is labor intensive and presents challenges for CTLs with limited resources. When possible, CTLs should look for opportunities to collaborate with educational research units on campus and align their assessment with institutional assessment plans to make use of existing resources. They might also leverage institutional data to illustrate the impact of their program.

Figures

Figure 1. Overview of teacher professional knowledge and skills (TPK&S) framework within the context of higher education

Figure 2. Alignment of the program with TPK&S framework. TPKB components listed stem from higher education research and practice. TSPK activities stem from research and practice in professional development and educational development

Figure 3. Participants' self-efficacy mean scores over time (n = 35)

Figure 4. Observed pre-/post-program COPUS profile differences (n = 23)

Table 1. Overview of "the program" faculty and new faculty by school and rank

                                        "The program" new faculty    All new faculty
                                        n        %                   n        %
School
  Professional schools                  22       21                  106      18
  Social science                        21       20                  111      19
  Humanities and arts                   14       13                  188      31
  STEM                                  45       43                  127      21
  Administrative/cross-disciplinary      3        3                   66      11
Faculty rank
  Lecturer/instructor                   16       15                  208      35
  Assistant professor                   81       77                  263      44
  Associate professor                    6        6                   67      11
  Professor                              2        2                   60      10

Table 2. Sample sizes for data collection

Construct                  Measure                      Pre-program    CDI    Post-CDI    FLC    Post-program
Internal factors           Teaching self-efficacy+      35                    35                 35
                           Approaches to teaching+      37                    37                 37
Instructional practices    Syllabi analysis*            53                                       53
                           Classroom observation*       23                                       23
External factors           Sense of belonging+          25                    25                 25

Note(s): *Direct measures; +included on the survey

Table 3. Changes in participants' approaches toward teaching

                                                      Pre-program       Post-CDI          Post-program
ATI dimension                                         Mean     SD       Mean     SD       Mean      SD
Conceptual change/student focused (CCSF)              37.27    8.29     39.46*   8.60     40.81**   8.46
Information transmission/teacher focused (ITTF)       33.14    6.89     33.46    6.41     34.28     6.34

Note(s): *significant from pre-program, p < 0.05; **significant from pre-program, p < 0.01. Each scale includes 11 individual Likert questions ranging from 1 = rarely to 5 = almost always (n = 37)

Table 4. Overall changes in participants' syllabi on a content- to learning-focused continuum

                                Pre-program        Post-program
Syllabus category (score)       n        %         n        %
Content focused (0–16.5)        36       67.9      5        9.4
Transitional (17–30.5)          11       20.8      23       43.4
Learning focused (31–46)        6        11.3      25       47.2

Note(s): Total n = 53

Table 5. COPUS profile criteria changes

                                            Pre-program                Post-program
            COPUS code                      % Total time    SD         % Total time    SD
Student     Group work                      14.91           21.07      17.99           16.50
            Asking questions                15.22           9.35       12.26           9.18
Instructor  Lecturing                       71.16           30.09      61.37           30.81
            Asking clicker questions        3.52            6.49       5.10            8.18
            Posing questions                27.22           19.09      27.25           14.05
            One-to-one discussion           2.53            7.38       0.17            0.60

Note(s): Individual COPUS categories were chosen based on those that were used to create COPUS profiles (n = 23)

Table 6. Overview of participants' perceptions of institutional support

                    Pre-program       Post-CDI          Post-program
CCS dimension       Mean     SD       Mean     SD       Mean      SD
Connectedness       22.24    5.25     25.00*   5.79     25.84*    5.32
Learning            22.90    7.05     23.69+   6.83     27.30**   4.67

Note(s): *significant from pre-program, p < 0.05; **significant from pre-program, p < 0.01; +significantly different from post-program, p < 0.05 (n = 25). Each scale includes ten individual Likert questions ranging from 0 = strongly disagree to 4 = strongly agree

Notes

1. Because of the center's preexisting relationships with STEM departments through earlier programming, disciplinary overrepresentation of STEM faculty was expected. Although the program was open to faculty new to the institution regardless of rank, the overrepresentation of assistant professors in our sample may be explained through the larger interest of novice teachers in participating in intensive faculty development programs, while more senior hires may perceive that they possess relevant skills (Honey et al., 2014).

2. The classroom community scale wording was revised to reference the specific community of the university where participants were employed; the revision also highlights the negative wording of reverse-coded items.

3. Training included videotaped practice observations, live observations and feedback from the trainer. All observers achieved a Cohen's kappa (κ) > 0.80 with a master observer, which demonstrated a strong to almost perfect level of agreement (McHugh, 2012).

Appendix

The Appendix is available online for this article.

References

Amundsen, C. and Wilson, M. (2012), “Are we asking the right questions? A conceptual review of the educational development literature in higher education”, Review of Educational Research, Vol. 82 No. 1, pp. 90-126, doi: 10.3102/0034654312438409.

Angelo, T.A. and Cross, K.P. (2005), Classroom Assessment Techniques, 2nd ed., Jossey Bass Wiley, San Francisco, CA.

Auerbach, A.J.J. and Andrews, T.C. (2018), “Pedagogical knowledge for active-learning instruction in large undergraduate biology courses: a large-scale qualitative investigation of instructor thinking”, International Journal of STEM Education, Vol. 5 No. 1, p. 19.

Balam, E. (2006), “Professors' teaching effectiveness in relation to self-efficacy beliefs and perceptions of student rating myths”, (Doctoral Dissertation), Auburn University Electronic Theses and Dissertations, available at: https://etd.auburn.edu/handle/10415/1320.

Barkley, E.F. and Major, C.H. (2015), Learning Assessment Techniques: A Handbook for College Faculty, John Wiley & Sons, San Francisco, CA.

Beach, A., Sorcinelli, M.D., Austin, A. and Rivard, J. (2016), Faculty Development in the Age of Evidence, Stylus, Sterling, VA.

Bonwell, C.C. and Eison, J.A. (1991), “Active learning: creating excitement in the classroom”, 1991 ASHE-ERIC Higher Education Report No. 1, George Washington University Press, Washington, DC.

Boud, D., Keogh, R. and Walker, D. (2013), Reflection: Turning Experience into Learning, Routledge, New York, NY.

Boyer Commission on Educating Undergraduates in the Research University (1998), Reinventing Undergraduate Education: A Blueprint for America's Research Universities, State University of New York at Stony Brook, Stony Brook, NY.

Brookfield, S.D. and Preskill, S. (2012), Discussion as a Way of Teaching: Tools and Techniques for Democratic Classrooms, John Wiley & Sons, San Francisco, CA.

Brown, P.L., Abell, S.K., Demir, A. and Schmidt, F.J. (2006), “College science teachers' views of classroom inquiry”, Science Education, Vol. 90 No. 5, pp. 784-802, doi: 10.1002/sce.20151.

Brown, S., Montfort, D., Perova-Mello, N., Lutz, B., Berger, A. and Streveler, R. (2018), “Framework theory of conceptual change to interpret undergraduate engineering students' explanations about mechanics of materials concepts”, Journal of Engineering Education, Vol. 107 No. 1, pp. 113-139, doi: 10.1002/jee.20186.

Burgstahler, S. (Ed.) (2015), Universal Design in Higher Education: From Principles to Practice, Harvard Education Press, Cambridge, MA.

Carmel, J.H., Herrington, D.G., Posey, L.A., Ward, J.S., Pollock, A.M. and Cooper, M.M. (2019), “Helping students to ‘do science’: characterizing scientific practices in general chemistry laboratory curricula”, Journal of Chemical Education, Vol. 96 No. 3, pp. 423-434, doi: 10.1021/acs.jchemed.8b00912.

Chism, N.V.N., Holley, M. and Harris, C.J. (2012), “Researching the impact of educational development: basis for informed practice”, To Improve the Academy, Vol. 31, pp. 129-145.

Cilliers, F.J. and Herman, N. (2010), “Impact of an educational development programme on teaching practice of academics at a research-intensive university”, International Journal for Academic Development, Vol. 15 No. 3, pp. 253-267, doi: 10.1080/1360144X.2010.497698.

Cohen, J. (1988), Statistical Power Analysis for the Behavioral Sciences, 2nd ed., Erlbaum, Hillsdale, NJ.

Condon, W., Iverson, E.R., Manduca, C.A., Rutz, C. and Willett, G. (2016), Faculty Development and Student Learning: Assessing the Connections, Indiana University Press, Bloomington, IN.

CTE UVa (2016), Ignite: Launching the Next Generation of UVa Faculty for Teaching Excellence [Video], YouTube, available at: https://youtu.be/UL8bPT3D_uM.

Daly, C.J. (2011), “Faculty learning communities: addressing the professional development needs of faculty and the learning needs of students”, Currents in Teaching and Learning, Vol. 4 No. 1, pp. 3-16.

DeVellis, R.F. (2016), Scale Development: Theory and Applications, Vol. 26, Sage Publications, Thousand Oaks, CA.

Ebert-May, D., Derting, T.L., Hodder, J., Momsen, J.L., Long, T.M. and Jardeleza, S.E. (2011), “What we say is not what we do: effective evaluation of faculty professional development programs”, BioScience, Vol. 61 No. 7, pp. 550-558, doi: 10.1525/bio.2011.61.7.9.

Elliot, A.J., Dweck, C.S. and Yeager, D.S. (Eds) (2017), Handbook of Competence and Motivation: Theory and Application, Guilford Publications, New York, NY.

Fairweather, J. (2008), Linking Evidence and Promising Practices in Science, Technology, Engineering, and Mathematics (STEM) Undergraduate Education, A status report for the National Academies National Research Council Board of Science Education, Washington, DC.

Favre, D.E. and Knight, S.L. (2016), “Teacher efficacy calibration in education reform: when highly efficacious teachers don’t spell ‘implement’”, International Journal of Educational Reform, Vol. 25 No. 4, pp. 361-383, doi: 10.1177/105678791602500402.

Finelli, C.J., Pinder-Grover, T. and Wright, M.C. (2011), “Consultations on teaching: using student feedback for instructional improvement”, in Cook, C.E. and Kaplan, M. (Eds), Advancing the Culture of Teaching at a Research University: How a Teaching Center Can Make a Difference, Stylus Publishing, LLC, Sterling, VA, pp. 65-79.

Fink, L.D. (2013), Creating Significant Learning Experiences, Revised and Updated: An Integrated Approach to Designing College Courses, Jossey-Bass, San Francisco, CA.

Freeman, S., Eddy, S.L., McDonough, M., Smith, M.K., Okoroafor, N., Jordt, H. and Wenderoth, M.P. (2014), “Active learning increases student performance in science, engineering, and mathematics”, Proceedings of the National Academy of Sciences of the United States of America, Vol. 111 No. 23, pp. 8410-8415, doi: 10.1073/pnas.1319030111.

Fritz, C.O., Morris, P.E. and Richler, J.J. (2012), “Effect size estimates: current use, calculations, and interpretation”, Journal of Experimental Psychology: General, Vol. 141 No. 1, pp. 2-18, doi: 10.1037/a0024338.

Gess-Newsome, J. (2015), “A model of teacher professional knowledge and skill including PCK”, in Berry, A., Friedrichsen, P. and Loughran, J. (Eds), Re-examining Pedagogical Content Knowledge in Science Education, Routledge, New York, NY, pp. 28-42.

Gess-Newsome, J., Taylor, J.A., Carlson, J., Gardner, A.L., Wilson, C.D. and Stuhlsatz, M.A.M. (2019), “Teacher pedagogical content knowledge, practice, and student achievement”, International Journal of Science Education, Vol. 41 No. 7, pp. 944-963, doi: 10.1080/09500693.2016.1265158.

Gibbs, G. and Coffey, M. (2004), “The impact of training of university teachers on their teaching skills, their approach to teaching and the approach to learning of their students”, Active Learning in Higher Education, Vol. 5 No. 1, pp. 87-100, doi: 10.1177/1469787404040463.

Gunnlaugson, O., Sarath, E.W., Scott, C. and Bai, H. (Eds) (2014), Contemplative Learning and Inquiry across Disciplines, SUNY Press, Albany, NY.

Hake, R.R. (1998), “Interactive-engagement versus traditional methods: a six-thousand-student survey of mechanics test data for introductory physics courses”, American Journal of Physics, Vol. 66 No. 1, pp. 64-74.

Hanbury, A., Prosser, M. and Rickinson, M. (2008), “The differential impact of UK accredited teaching development programmes on academics' approaches to teaching”, Studies in Higher Education, Vol. 33 No. 4, pp. 469-483.

Henderson, C., Beach, A. and Finkelstein, N. (2011), “Facilitating change in undergraduate STEM instructional practices: an analytic review of the literature”, Journal of Research in Science Teaching, Vol. 48 No. 8, pp. 952-984, doi: 10.1002/tea.20439.

Hines, S.R. (2009), “Investigating faculty development program assessment practices: what's being done and how can it be improved?”, Journal of Faculty Development, Vol. 23, pp. 5-19.

Hines, S.R. (2011), “How mature teaching and learning centers evaluate their services”, To Improve the Academy, Vol. 30, pp. 277-289.

Honey, M., Pearson, G. and Schweingruber, H. (Eds) (2014), Stem Integration in K-12 Education: Status, Prospects, and an Agenda for Research, National Academies Press, Washington, DC.

Huba, M.E. and Freed, J.E. (2000), Learner-Centered Assessment on College Campuses: Shifting the Focus from Teaching to Learning, Allyn & Bacon, Needham Heights, MA.

Kane, R., Sandretto, S. and Heath, C. (2002), “Telling half the story: a critical review of research on the teaching beliefs and practices of university academics”, Review of Educational Research, Vol. 72 No. 2, pp. 177-228, doi: 10.3102/00346543072002177.

Kane, R., Sandretto, S. and Heath, C. (2004), “An investigation into excellent tertiary teaching”, Higher Education, Vol. 47 No. 3, pp. 283-310.

Kezar, A. (2014), “Higher education change and social networks: a review of research”, The Journal of Higher Education, Vol. 85 No. 1, pp. 91-125, doi: 10.1353/jhe.2014.0003.

Kezar, A. and Gehrke, S. (2017), “Sustaining communities of practice focused on STEM reform”, The Journal of Higher Education, Vol. 88 No. 3, pp. 323-349, doi: 10.1080/00221546.2016.1271694.

Kezar, A., Gehrke, S. and Elrod, S. (2015), “Implicit theories of change as a barrier to change on college campuses: an examination of STEM reform”, The Review of Higher Education, Vol. 38 No. 4, pp. 479-506, doi: 10.1353/rhe.2015.0026.

Kreber, C. and Brook, P. (2001), “Impact evaluation of educational development programmes”, International Journal for Academic Development, Vol. 6 No. 2, pp. 96-108, doi: 10.1080/13601440110090749.

Kuh, G.D., Cruce, T.M., Shoup, R., Kinzie, J. and Gonyea, R.M. (2008), “Unmasking the effects of student engagement on first-year college grades and persistence”, The Journal of Higher Education, Vol. 79 No. 5, pp. 540-563, doi: 10.1353/jhe.0.0019.

Ladson-Billings, G. (2006), “From the achievement gap to the education debt: understanding achievement in US schools”, Educational Researcher, Vol. 35 No. 7, pp. 3-12.

Lee, V.S. (2010), “Program types and prototypes”, in Gillespie, K.J. and Robertson, D.L. (Eds), A Guide to Faculty Development, 2nd ed., Jossey-Bass, San Francisco, CA, pp. 21-34.

Luft, J.A., Kurdziel, J.P., Roehrig, G.H. and Turner, J. (2004), “Growing a garden without water: graduate teaching assistants in introductory science laboratories at a doctoral/research university”, Journal of Research in Science Teaching, Vol. 41 No. 3, pp. 211-233.

Lund, T.J. and Stains, M. (2015), “The importance of context: an exploration of factors influencing the adoption of student-centered teaching among chemistry, biology, and physics faculty”, International Journal of STEM Education, Vol. 2 No. 1, pp. 1-13, doi: 10.1186/s40594-015-0026-8.

McConnell, D.A., Steer, D.N., Owens, K.D., Knott, J.R., Dick, J. and Heaney, P.J. (2006), “Using concept tests to assess and improve student conceptual understanding in introductory geoscience courses”, Journal of Geoscience Education, Vol. 54, pp. 61-68.

McHugh, M.L. (2012), “Interrater reliability: the kappa statistic”, Biochemia Medica, Vol. 22 No. 3, pp. 276-282.

Meizlish, D.S., Wright, M.C., Howard, J. and Kaplan, M.L. (2018), “Measuring the impact of a new faculty program using institutional data”, International Journal for Academic Development, Vol. 23 No. 2, pp. 72-85, doi: 10.1080/1360144X.2017.1364644.

Michael, J. (2007), “Faculty perceptions about barriers to active learning”, College Teaching, Vol. 55 No. 2, pp. 42-47, doi: 10.3200/CTCH.55.2.42-47.

National Research Council (2012), Discipline-based Education Research: Understanding and Improving Learning in Undergraduate Science and Engineering, National Academies Press, Washington, DC.

Nilson, L.B. (2016), Teaching at its Best: A Research-Based Resource for College Instructors, 4th ed., John Wiley & Sons, San Francisco, CA.

Norouzian, R. and Plonsky, L. (2017), “Eta-and partial eta-squared in l2 research: a cautionary review and guide to more appropriate usage”, Second Language Research, Vol. 34 No. 2, pp. 257-271, doi: 10.1177/0267658316684904.

Ödalen, J., Brommesson, D., Erlingsson, G.Ó., Schaffer, J.K. and Fogelgren, M. (2019), “Teaching university teachers to become better teachers: the effects of pedagogical training courses at six Swedish universities”, Higher Education Research and Development, Vol. 38 No. 2, pp. 1-15, doi: 10.1080/07294360.2018.1512955.

Pallas, A.M., Neumann, A. and Campbell, C.M. (2017), “Policies and practices to support undergraduate teaching improvement”, American Academy of Arts & Sciences, Cambridge, MA, available at: https://www.amacad.org/sites/default/files/academy/multimedia/pdfs/publications/researchpapersmonographs/CFUE_Undergraduate-Teaching/CFUE_Undergraduate-Teaching.pdf.

Palmer, M.S., Bach, D.J. and Streifer, A.C. (2014), “Measuring the promise: a learning-focused syllabus rubric”, To Improve the Academy: A Journal of Educational Development, Vol. 3 No. 1, pp. 14-36, doi: 10.1002/tia2.20004.

Palmer, M.S., Streifer, A.C. and Williams-Duncan, S. (2016), “Systematic assessment of a high-impact course design institute”, To Improve the Academy, Vol. 35 No. 2, pp. 339-361.

Robert, J. and Carlsen, W.S. (2017), “Teaching and research at a large university: case studies of science professors”, Journal of Research in Science Teaching, Vol. 54 No. 7, pp. 937-960, doi: 10.1002/tea.21392.

Rogan, J.M. (2007), “How much curriculum change is appropriate? Defining a zone of feasible innovation”, Science Education, Vol. 91 No. 3, pp. 439-460.

Rovai, A.P. (2002), “Development of an instrument to measure classroom community”, Internet and Higher Education, Vol. 5, pp. 197-211.

Schunk, D.H., Pintrich, P.R. and Meece, J.R. (2007), Motivation in Education: Theory, Research, and Applications, 3rd ed., Prentice Hall, Upper Saddle River, NJ.

Seung, E., Bryan, L.A. and Haugan, M.P. (2012), “Examining physics graduate teaching assistants' pedagogical content knowledge for teaching a new physics curriculum”, Journal of Science Teacher Education, Vol. 23 No. 5, pp. 451-479, doi: 10.1007/s10972-012-9279-y.

Shadle, S.E., Marker, A. and Earl, B. (2017), “Faculty drivers and barriers: laying the groundwork for undergraduate STEM education reform in academic departments”, International Journal of STEM Education, Vol. 4 No. 8, pp. 1-13.

Shulman, L.S. (1986), “Those who understand: knowledge growth in teaching”, Educational Researcher, Vol. 15 No. 2, pp. 4-14.

Smith, M.K., Jones, F.H., Gilbert, S.L. and Wieman, C.E. (2013), “The classroom observation protocol for undergraduate STEM (COPUS): a new instrument to characterize university STEM classroom practices”, CBE-Life Sciences Education, Vol. 12 No. 4, pp. 618-627.

Sorcinelli, M.D., Austin, A.E., Eddy, P. and Beach, A. (2006), Creating the Future of Faculty Development: Learning From the Past, Understanding the Present, Wiley/Jossey-Bass, San Francisco, CA.

Stains, M., Harshman, J., Barker, M.K., Chasteen, S.V., Cole, R., DeChenne-Peters, S.E. and Young, A.M. (2018), “Anatomy of STEM teaching in North American universities”, Science, Vol. 359 No. 6383, pp. 1468-1470, doi: 10.1126/science.aap8892.

Sternglass, M.S. (2017), Time to Know Them: A Longitudinal Study of Writing and Learning at the College Level, Routledge, New York, NY.

Stes, A., Clement, M. and Van Petegem, P. (2007), “The effectiveness of a faculty training programme: long-term and institutional impact”, International Journal for Academic Development, Vol. 12 No. 2, pp. 99-109, doi: 10.1080/13601440701604898.

Sturtevant, H. and Wheeler, L. (2019), “The STEM faculty instructional barriers and identity survey (FIBIS): development and exploratory results”, International Journal of STEM Education, Vol. 6 No. 1, p. 35, doi: 10.1186/s40594-019-0185-0.

Sunal, D.W., Hodges, J., Sunal, C.S., Whitaker, K.W., Freeman, L.M., Edwards, L., Johnston, R.A. and Odell, M. (2001), “Teaching science in higher education: faculty professional development and barriers to change”, School Science and Mathematics, Vol. 101 No. 5, pp. 246-257.

Sunal, D.W., Sunal, C.S., Wright, E.L., Mason, C.L. and Zollman, D. (Eds) (2014), Research Based Undergraduate Science Teaching, Information Age Publishing, Charlotte, NC.

Trede, F., Macklin, R. and Bridges, D. (2012), “Professional identity development: a review of the higher education literature”, Studies in Higher Education, Vol. 37 No. 3, pp. 365-384.

Trigwell, K., Prosser, M. and Ginns, P. (2005), “Phenomenographic pedagogy and a revised approaches to teaching inventory”, Higher Education Research and Development, Vol. 24 No. 4, pp. 349-360, doi: 10.1080/07294360500284730.

Trower, C.A. (2012), Success on the Tenure Track: Five Keys to Faculty Satisfaction, The Johns Hopkins University Press, Baltimore, MD.

Tschannen-Moran, M. and McMaster, P. (2009), “Sources of self-efficacy: four professional development formats and their relationship to self-efficacy and implementation of a new teaching strategy”, The Elementary School Journal, Vol. 110 No. 2, pp. 228-245.

Walker, G.E., Golde, C.M., Jones, L., Bueschel, A.C. and Hutchings, P. (2008), The Formation of Scholars, Jossey-Bass, San Francisco, CA.

Walton, G.M. and Cohen, G.L. (2011), “A brief social-belonging intervention improves academic and health outcomes of minority students”, Science, Vol. 331 No. 6023, pp. 1447-1451.

Weidman, J.C., Twale, D.J. and Stein, E.L. (2001), “Socialization of graduate and professional students in higher education: a perilous passage?”, ASHE-ERIC Higher Education Report, Vol. 28 No. 3, Jossey-Bass, San Francisco, CA, available at: https://files.eric.ed.gov/fulltext/ED457710.pdf.

Wheeler, L. and Bach, D. (2021), “Understanding the impact of educational development interventions on classroom instruction and student success”, International Journal of Academic Development, Vol. 26 No. 1, pp. 24-40, doi: 10.1080/1360144X.2020.1777555.

Wiggins, G. (1998), Educative Assessment. Designing Assessments to Inform and Improve Student Performance, Jossey-Bass Publishers, San Francisco, CA.

Wiggins, G. and McTighe, J. (2005), Understanding by Design, 2nd ed., Association for Supervision and Curriculum Development, Alexandria, VA.

Winkelmes, M.A., Bernacki, M., Butler, J., Zochowski, M., Golanics, J. and Harriss Weavil, K. (2016), “A teaching intervention that increases underserved college students' success”, Peer Review, Vol. 18 Nos 1/2, pp. 31-36.

Woolfolk Hoy, A. and Burke Spero, R. (2005), “Changes in teacher efficacy during the early years of teaching: a comparison of four measures”, Teaching and Teacher Education, Vol. 21 No. 4, pp. 343-356.

Acknowledgements

Disclosure statement: The authors have no associations or relationships with the sponsors, or any other affiliations, that might lead to a potential conflict of interest.

Funding: This research was funded in part by the University of Virginia’s Jefferson Trust Foundation.

Corresponding author

David E. Favre can be contacted at: davidfavre001@gmail.com
