

A Cocurricular Program That Encourages Specific Study Skills and Habits Improves Academic Performance and Retention of First-Year Undergraduates in Introductory Biology

    Published Online: https://doi.org/10.1187/cbe.20-06-0117

    Abstract

    Students must master content for success in science, technology, engineering, and mathematics (STEM), but “how to” is rarely taught in college. Faculty are reluctant to sacrifice class time, believe such instruction is remedial, or assume students possess or will attain these skills independently. To determine whether explicit instruction would improve skills and performance by first-year undergraduates likely to major in STEM, we invited all students in an introductory biology course to participate in an 8-week Co-Curricular (CoC) program. Students who participated improved time management, used more methods to plan and organize their study, and used a variety of active-learning strategies. A validated model was used to predict students’ probability of achieving a “C+” or better in the course. The model, based on 5 years of data, used students’ demographic characteristics and previous academic performance to provide a measure of their preparedness. Students with low and medium preparedness who participated in CoC performed better than those who did not participate. All students who participated were retained in the course compared with 88.7% of students who did not participate. Specific behavioral changes at the start of STEM gateway courses can dramatically improve student metacognition, retention, and academic performance, particularly for students underrepresented in the discipline.

    INTRODUCTION

    Introductory science, technology, engineering, and mathematics (STEM) courses in the 21st century have a growing problem: more students than not enter unprepared for college-level work, yet many professors still hold the outdated notion that the classroom is not the proper place to address this issue. Students’ perceptions that instructors understand, care for, and accept them, particularly within the context of a course, strongly influence student academic success (Cavanagh et al., 2018). Students’ first experiences in gateway courses are thus crucial for persistence and retention in the sciences. Poor performance, which is common in these courses (Koch, 2018), has many deleterious effects on students: it decreases interest, deepens self-doubt, and increases time to degree because classes must be repeated (Minchella et al., 2002).

    The lack of diversity among STEM majors in college and within the STEM workforce is not due to differences in disciplinary interest, but rather to disparate outcomes, which begin long before students enter higher education (if they make it that far; Hurtado et al., 2006). Lack of certain high school course offerings, a variable typically outside student control, can put a student at an immediate disadvantage in first-year college STEM courses (Kudish et al., 2016). Students who are the first in their families to go to college or seek a STEM degree do not know what they do not know; they lack the cultural capital required for success in academia (Collier and Morgan, 2008).

    Exacerbating these issues is the fact that the study skills that served a student well in high school may not translate to the college level. In high school, successful students typically study to memorize rather than for deep understanding of the material. These mediocre methods suffice, particularly in the public school system, because instruction is often aimed at the weakest performers (Conley, 2008; Barnes et al., 2010). The skills needed for college success are sustained focus and time, the ability to understand academic presumptions and concepts that are often unstated, and the self-initiative to make use of the plethora of resources available: office hours, advising, student success programs, peers, and upper-class students. Too often, first-year students’ focus on home (due to homesickness or continuing responsibilities) and their desire, or encouragement, to engage in extracurricular life are at odds with these skills. For students who come from historically minoritized groups, the disconnect is even greater, resulting in a persistent achievement gap.

    While all students are important, a group that does not always command our attention is the group of students who fall into the “murky middle.” This category is defined as those students whose academic outcomes are unpredictable despite being “academically qualified” and a good fit for the “campus culture and offerings” (Murky Middle Project, 2014; Tyson, 2014). These students typically receive “B’s” or “C’s” in their first gateway course, but their performance declines over time (Murky Middle Project, 2014; Tyson, 2014), or they are encouraged to continue despite lower grades than their peers and never excel as they might have with better skills. Because most college interventions focus on “at-risk” students or first-year retention, these students fall through the cracks. Murky middle students tend to drop out in the second or third year (Xianglei, 2013; Murky Middle Project, 2014; Tyson, 2014), a time when advisors expect that these students now understand what is expected at the college level, hope that they will pick up the pace, and count on their first-year academic struggles being forgiven if they do so. Providing these students with the opportunity to develop study skills at the start of college would reasonably be expected to make a great difference in their success in STEM majors.

    Gateway courses that students encounter in their first semesters of undergraduate study often serve as a barrier to career aspirations (Malcolm and Feder, 2016). The typical sequential prerequisite nature of these courses (i.e., each subsequent course relies on mastery of material in the course that came before) means that students who fail not only leave the major but ultimately abandon their STEM career goals (Weston et al., 2019). For students from diverse ethnic, cultural, and socioeconomic backgrounds, gateway courses can be particularly unwelcoming. Traditionally, these courses consist of lectures that “relay decontextualized scientific minutiae” and “presuppose a familiarity with implicit premises and values that are culturally narrow” (Kudish et al., 2016, p. 10, 1). Students do not seek help because “they perceive [questioning] reveals a deficit in their knowledge base and exposes them as an outsider” (Kudish et al., 2016, p. 6).

    For underrepresented minorities specifically, “the lecture format undermines their abilities due to the burden of social isolation, low confidence, and stereotype threat these students feel” (Ballen et al., 2017, p. 1). Lecturing discourages and marginalizes, which selects against students who might have pursued careers in science (Tobias, 1990; Johnson, 2007). How instructors structure courses has a far-reaching impact, starting with student performance in the course, spreading to students’ attitudes in subsequent science courses, and ultimately, students’ overall experiences at an institution (Minchella et al., 2002; Maloof and White, 2005; Ballen et al., 2017). For students from historically minoritized groups, their course experience may be the sole determining factor governing retention, persistence, and future interest in STEM.

    Active-learning pedagogies, implemented both within and outside the classroom, have a long-standing demonstrated benefit for all students. In many cases, active learning disproportionally benefits students from historically minoritized groups who are underrepresented in STEM (Higher Education Research Institute, 2010; Ballen et al., 2017; Theobald et al., 2020). Active-learning pedagogies in the classroom motivate, excite, and help students to learn (Cavanagh et al., 2018), but there are fewer data about interventions focused on study skills relevant to STEM content and course work. Skills such as time management, note-taking, test-taking, reviewing, organizing, and group learning improve student performance (Kudish et al., 2016; Sebesta and Speth, 2017; Wienhold and Branchaw, 2018) but are rarely a component of instruction in gateway courses or STEM-specific institutional programs. Although gateway courses typically have additional support such as tutoring programs, there is rarely time for tutors to address study skills due to the vast amount of content that must be learned.

    Programs focused on study skills share common outcomes. First, academic performance improves for students in these programs: they earn higher grades on summative assessments, such as exams, and in the course overall, typically an increase of one-third of a letter grade (Minchella et al., 2002; Chen et al., 2017). They are also less likely to receive a “D” or “F” or to withdraw (Belzer et al., 2003; Wienhold and Branchaw, 2018). In some cases, these benefits are even greater for students predicted not likely to excel in these courses (Wienhold and Branchaw, 2018). In addition, students are retained within the department/major (Minchella et al., 2002; Kudish et al., 2016), the college (Minchella et al., 2002), and potentially the discipline. As an added benefit, these interventions also improve “psychological processes such as pre-exam negative affect and perceived control over performance” for specific assessments such as exams (Chen et al., 2017, p. 783) as well as general motivation (Belzer et al., 2003; Wienhold and Branchaw, 2018).

    There are themes in the strategies that students embrace; these in turn mediate their academic success. Students prefer strategies that focus on content knowledge, such as outlining and note revision (Belzer et al., 2003). If trained properly, students also value discussing their understanding of emerging concepts with peers (Belzer et al., 2003; Bailey et al., 2018). Students also appreciate strategies that help them manage their time and create a study plan (Steiner, 2016; Chen et al., 2017). Teaching students test-taking skills (Belzer et al., 2003) and how to tailor their study to the expected assessment (Chen et al., 2017) bolstered students’ accuracy in predicting their performance (Chen et al., 2017) and their actual performance (Belzer et al., 2003). Such interventions not only increase the time and priority students give to their studies, they also make that time more efficient (Belzer et al., 2003; Chen et al., 2017), which translates into better performance throughout the course and their collegiate tenure. Importantly, underprepared students differ in their perceived benefit of these interventions (Kudish et al., 2016). Students in the “at-risk”, “murky middle”, and “likely to pass” groups differ in their approach to gateway courses and in their metacognition; the resulting subpar learning strategies of students in the murky middle and at-risk categories can be detrimental (Kritzinger et al., 2018). Successful strategies give students more ownership over the learning process, which may increase student self-efficacy and thus improve academic performance in introductory biology courses (Sebesta and Speth, 2017).

    Our study sought to intervene with an 8-week Co-Curricular (CoC) program focused on building specific study skills. This program was offered in the first half of the first required course for biology, neuroscience, and biochemistry and molecular biology majors. The course is unusual in that it has no textbook. Instead, students are given primary and secondary literature readings and extensive study guide questions to complete before each class session. Understanding is checked at the start of each class with a four-question, multiple-choice, low-stakes quiz. There are many cocurricular programs of this kind at large public universities, but we are aware of only one other such program at a small liberal arts college like Ursinus (Kudish et al., 2016). Most programs span the duration of the course, but we wanted to determine whether a short-term program, starting at the beginning of the semester before any summative assessment, would be effective.

    To analyze this program, we used a validated model to predict students’ probability of achieving a “C+” or better in the course when they entered. The model used students’ demographic characteristics and previous academic performance to provide a measure of their preparedness. Participation in the program was voluntary but strongly encouraged for students with low and medium preparedness who are likely to be in the “murky middle.” The program was run by a faculty member in the biology department and an upper-level biology major. The program was augmented with drop-in hours staffed by the upper-level biology major. We found that all students who participated improved time management, used more methods to plan and organize their study, and used a variety of active-learning strategies to understand course material. Among students with low and medium preparedness, students who participated in CoC performed better in the course than those who did not. We also noted that, while none of the students who attended CoC dropped the course, 11.3% of students who did not participate in CoC withdrew from the course, and this effect was more pronounced for students with low preparedness (32.5% dropped).

    METHODS

    Educational Setting

    This study took place during the first of three mandatory courses in the introductory biology sequence at Ursinus College. Bio101, Ecology and Evolution, was a one-semester, four-credit, 100-level course with no prerequisites. The study spanned two semesters of this course, Fall 2017 and Fall 2018. In each year of the study, approximately 200 students were enrolled across five lecture sections of approximately 40 students each. The course was primary literature–based, with no textbook.

    Recruitment

    During the first week of the semester, the opportunity to apply to participate in the CoC program was announced to students in class and via email (see Supplemental Material). First-year advisors were also contacted via email (see Supplemental Material) and encouraged to suggest the program to advisees who might benefit based on high school grades, Scholastic Aptitude Test (SAT) scores, or the student’s own opinion. To encourage commitment toward the program, students had to complete an application. The link to the CoC application was included on the course learning management system (Canvas) in a module for CoC materials. The online student application is included as Supplemental Material. All students who applied to the CoC program were accepted and given a section to attend, either 1:30–2:50 pm or 3–4:20 pm on Fridays. Of those who applied, 48.85% attended seven or eight sessions and were counted as full participants.

    The CoC Program

    The CoC program met once a week on Fridays, for 80 minutes each session, for eight consecutive weeks (see Supplemental Material for schedule of weekly topics). Each session was led by a faculty member (C.F.), assisted by one or two undergraduate “peer leaders.” The faculty member was a part of the Biology Department, but did not teach the introductory biology course. Undergraduate peer leaders had at least sophomore standing, received at least a “B” (an above-average grade) in Bio101, and were recommended by two Ursinus College science faculty members (online peer leader recruitment email and application included in Supplemental Material). Students received extra credit for attending seven to eight sessions (0.5% of grade in 2017, 1% in 2018). If students missed a session, a drop-in session was available the following week. At drop-in sessions, students could ask the undergraduate peer leader(s) for help applying the study strategies learned in the CoC program or they could use drop-in hours as a meeting time/place for study groups. Drop-in sessions lasted for 2 hours, but students were not required to attend for any set length of time. Only students who participated in all eight CoC sessions (or a combination of eight CoC sessions and drop-in sessions as makeup) were included in the study.

    Data Collection and Analyses

    Before institutional review board approval, C.F. developed the pre and post assessment surveys in consultation with K.G. and an evaluation consultant from outside the institution. The study protocol was approved by the Ursinus College Institutional Review Board (protocol file no. KG-BIO-1216x). Attendance was recorded for each CoC and drop-in session. At the first CoC session, students filled out a paper copy of the pre assessment survey. After the CoC program ended, students were asked to fill out the post assessment survey online. Students were informed about the post assessment survey at the last CoC session and again via email during the last week of classes and finals week. The pre and post assessment surveys each asked two questions to determine whether students’ study strategies differed before and after participation in the CoC program. The questions differ between the pre and post assessment because we presumed the students would not yet be familiar with the study strategies we planned to teach in the CoC program. The qualitative data were obtained via open-ended and forced-choice questions, on which we performed inductive content analysis (Mayring, 2000). C.F. and K.G. coded the data separately using the categories developed by C.F. A third party (Ann Breen) also coded the data, and the coders demonstrated 88.5% interrater reliability.
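    Interrater reliability of this kind is often reported as simple percent agreement; the sketch below illustrates that calculation under that assumption (the coded responses are illustrative placeholders, not study data):

```python
# A minimal sketch of intercoder reliability as simple percent agreement,
# assuming that is how the 88.5% figure was computed. The coded responses
# below are illustrative placeholders, not data from the study.

def percent_agreement(codes_a: list[str], codes_b: list[str]) -> float:
    """Fraction of responses assigned the same category by both coders."""
    assert len(codes_a) == len(codes_b)
    return sum(a == b for a, b in zip(codes_a, codes_b)) / len(codes_a)

coder_1 = ["class preparation", "writing to learn", "talking to peers", "other"]
coder_2 = ["class preparation", "writing to learn", "prioritizing study", "other"]
print(f"{percent_agreement(coder_1, coder_2):.1%}")  # 75.0% in this toy example
```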

    Students were told that they needed to fill out the survey in order to receive the extra credit they had earned for attending CoC. Their names were removed from their survey responses. Student responses about study methods were analyzed for emergent themes (Rybczynski and Schussler, 2011) and categorized as active or passive (Roediger, 2013). Pre assessment data for the Fall 2017 and Fall 2018 CoC programs were aggregated, as were post assessment data for the two years.

    To compare course performance between low-, medium-, and high-preparedness groups of students, we developed a logistic regression model, based on past course performance, that predicted the binary likelihood that a student would receive a “C+” or better in the course. We chose the “C+” benchmark because descriptive analyses of previous cohorts suggested that students who achieved at least that grade in Bio101 persisted into Bio102 at higher rates and performed better in that course. For example, 86% of students who received a “C”-level grade persisted into Bio102 compared with 71% of students with “D”-level grades. Of the “C”-level students who persisted, 85% received at least a “C”-level grade in Bio102 compared with only 42% of “D”-level students. Among the “C”-level grades, a “C+” was chosen because it had the highest percentage of positive outcomes and it provided a reasonably even distribution for our binary outcome variable. Additionally, its grade point average (GPA) of 2.33 makes a positive contribution toward keeping a student’s overall term GPA above the school-defined probation level of 2.00.

    We developed the model in two phases. The first was a training phase that used data from the 2013–2016 cohorts (n = 638). We began with a wide variety of available academic and demographic variables and tested various combinations of these variables to see which ones best predicted whether or not a student received a “C+” in previous cohorts. In the end, we determined that a model containing sex, first-generation status, Pell eligibility, underrepresented minority status, athlete status, a grade of at least a “C” in a high school calculus course, unweighted core high school GPA, and SAT score (out of 1600) was the best at predicting our desired outcome. Receiving a “C” or better in high school calculus, high school GPA, and SAT score were all significant predictors of performance in Bio101; the other predictors were retained during model development because, despite their lack of significance, they still contributed predictive power to the model. We recommend that institutions wishing to replicate this approach consider which variables might be applicable on their own campuses and build a model that best predicts the performance of their own students.
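    As a concrete illustration of this training phase, a minimal sketch using scikit-learn is shown below; the file and column names are hypothetical placeholders, and the study’s actual software and data preparation are not specified here:

```python
# A minimal sketch of the training phase, assuming the 2013-2016 cohort data
# live in a pandas DataFrame. File and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

train = pd.read_csv("cohorts_2013_2016.csv")  # hypothetical file

PREDICTORS = [
    "sex", "first_generation", "pell_eligible", "urm_status", "athlete",
    "hs_calculus_c_or_better",              # 0/1 indicator variables
    "hs_core_gpa_unweighted", "sat_1600",   # continuous variables
]
X_train = train[PREDICTORS]
y_train = train["c_plus_or_better"]  # 1 if final Bio101 grade was C+ or higher

# A very large C effectively disables scikit-learn's default L2 penalty,
# approximating a classical (unregularized) logistic regression.
model = LogisticRegression(C=1e9, max_iter=1000).fit(X_train, y_train)
print(dict(zip(PREDICTORS, model.coef_[0])))  # per-predictor weights
```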

    A note about standardized testing: For students who submitted ACT performance only, validated concordance tables were used to convert the ACT score to SAT score. Additionally, the SAT itself changed significantly in March 2016. To address this issue, we used validated concordance tables to convert the score of any student submitting an SAT score after March of 2016 to what the score would have been on the previous version of the test; for most students this meant a decrease of 30–60 points off the 1600-point score. Because Ursinus is a test-optional institution, neither SAT nor ACT scores were available for a small percentage of students. To still have viable information with which to develop and assess interventions, we developed a second model that omitted testing as a predictor for this small subset of students.

    The equation that resulted from this training phase determined what weight to give to each of the various predictors in the model. The next phase was the testing phase, in which we applied the model developed in the training phase to a set of students it had not “seen” before and assessed how well it was able to predict which students achieved a “C+” or higher. We used the 2017 cohort as our test cases by generating a probability of achieving a “C+” for each enrolled student in 2017 and seeing how successful the model was at predicting which students fell into the “C+” and above category and the “C” and below category. For the 2017 cohort, the model correctly classified students in 84% of cases. Thus, we felt the model was accurate and a valid way to predict students’ success in Bio101. The equation for calculating the predicted probability (P) of achieving at least a “C+” in Bio101 for students with reported testing takes the standard logistic form

    P = e^z / (1 + e^z), where z = β0 + β1(sex) + β2(first-generation status) + β3(Pell eligibility) + β4(underrepresented minority status) + β5(athlete status) + β6(high school calculus “C” or better) + β7(unweighted core high school GPA) + β8(SAT score)

    and the coefficients β0–β8 are the weights estimated during the training phase.

    Using the equation determined by the logistic regression model in the training and testing phases, we assigned each first-year, first-time student in the 2018 cohort of Bio101 a predicted probability of achieving a “C+” or better at the beginning of the semester. Students were then placed into three groups according to these probabilities: low (≤25% chance of success), medium (26–84% chance of success), and high (≥85% chance of success). Student grades were recorded at the end of each of the first two exams and at the conclusion of the course (final GPA). The final GPA is a composite of grades on four in-class exams, a final exam, daily quizzes, and the laboratory portion of the course. Exam and final grades of each group of students in CoC were compared with those of nonparticipants via t-test. Final grade differences were calculated after removing the extra-credit benefit awarded for full CoC participation. Grades for the Fall 2017 and Fall 2018 cohorts were aggregated. We also followed students after Bio101 to assess their levels of persistence into, and performance in, the next two courses in the introductory biology sequence (Bio102 and Bio201) relative to both their predicted probability of success in Bio101 and their participation in the CoC program.
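    Continuing the hypothetical sketch from the training phase above, the held-out validation, probability scoring, and grouping steps might look like this (file and column names remain illustrative; the probability cutoffs are those stated above):

```python
# Continuing the sketch above: validate on the held-out 2017 cohort, then
# score the 2018 cohort and bin students by predicted probability of success.
test = pd.read_csv("cohort_2017.csv")  # hypothetical file
accuracy = model.score(test[PREDICTORS], test["c_plus_or_better"])
print(f"held-out accuracy: {accuracy:.0%}")  # the paper reports 84%

cohort_2018 = pd.read_csv("cohort_2018.csv")  # hypothetical file
cohort_2018["p_success"] = model.predict_proba(cohort_2018[PREDICTORS])[:, 1]

def preparedness_group(p: float) -> str:
    """Bin a predicted probability using the thresholds given in the text."""
    if p <= 0.25:
        return "low"     # <= 25% chance of earning a C+ or better
    if p >= 0.85:
        return "high"    # >= 85% chance
    return "medium"      # 26-84% chance

cohort_2018["preparedness"] = cohort_2018["p_success"].apply(preparedness_group)
```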

    RESULTS

    Relative to Their Preparation, Do Students Participating in CoC Perform or Persist in Introductory Biology Better than Students Who Do Not Participate?

    We developed a logistic regression model, as described earlier, to predict the probability that a given first-year, first-time student would receive a “C+” or better in Bio101. The model was used to analyze the effectiveness of the program. In 2018, we also used the model to identify students who would benefit most from the program: we recruited students who had a probability of 60% or lower of receiving a “C+” or better in the course, most likely those in the murky middle (medium preparedness) and at-risk (low preparedness) groups.

    Low-, medium-, and high-preparedness groups contained 55, 124, and 109 students, respectively. For each preparedness group, the exam 1 grade, exam 2 grade, and final GPA of students who never attended CoC (“None”) was compared with students who attended all 8 weeks (“Full”). This provided a test of the efficacy of CoC for students at several levels of preparedness. Students who attended only a few times were not included in this study, because the number of times they attended varied widely.

    The effect of full CoC attendance on students’ Bio101 grades varied with student preparedness. We analyzed the first two and last (final) of five total exams in the course. For the low-preparedness group, there was a trend toward a benefit on exam 1 for the 15 students who were full participants in CoC (62.7 ± 11.17) relative to the 40 students who did not participate (55.8 ± 13.65), t(53) = 1.747, p = 0.086. There was a statistically significant benefit for students with both low and medium preparedness for exam 2 and final grades (Figure 1B). Compared with students who did not participate in CoC, students with low preparedness who attended CoC for 8 weeks increased their letter grade from an “F” to a “D−” on exam 1, an “F” to a “D” on exam 2, and a “D” to a “C−” final grade. For students with medium preparedness, the letter grade increased from a “C−” to a “B−” on exam 2 and a high “C” to a high “B−” final grade. There was no significant benefit of attending the CoC program for all 8 weeks for students with high preparedness (Figure 1). We also noted that, while none of the students who attended CoC dropped the course at midterm, 11.3% of students who did not participate in CoC withdrew from the course and received a final grade of “W” or “WF” (n = 27/238). Only 26 are shown dropping the course in Figure 1, because “WF” is included in the final grade/GPA calculation while “W” is not per Ursinus grading rules. This effect is more pronounced for students with low preparedness: none of the full participants in this group withdrew from the course (n = 0/15) compared with 32.5% of nonparticipants (n = 13/40).

    FIGURE 1.

    FIGURE 1. Effect of full CoC attendance (8 weeks) on (A) exam 1, (B) exam 2, and (C) final Bio101 grades for students with low, medium, and high preparedness. For the low-preparedness group, there was a trend toward a benefit on exam 1 for the 15 students who were full participants in CoC (62.7 ± 11.17) relative to the 40 students who did not participate (55.8 ± 13.65), t(53) = 1.747, p = 0.086, and on final grades once extra credit for participation was removed (1.73 ± 0.70 full participants vs. 1.26 ± 0.86 nonparticipants), t(40) = 1.807, p = 0.078. Low-preparedness students scored significantly better on exam 2 (64.1 ± 13.03) than nonparticipants (56.9 ± 10.66), t(53) = 2.074, p = 0.043. For the medium-preparedness group, the 23 full CoC participants scored significantly higher on exam 2 (82.2 ± 11.42) than the 101 nonparticipants (73.3 ± 14.99), t(122) = 3.116, p = 0.003, and on the final course grade (2.88 ± 0.94 full participants vs. 2.24 ± 0.94 nonparticipants), t(111) = 2.911, p = 0.004.  Bars show mean percentage (A, B) or GPA (C), error bars represent SD, sample sizes are embedded in the bars, and p values from t-tests are shown above each comparison.
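    The reported statistics can be checked directly from these summary values; for example, assuming a two-sided, pooled-variance (Student’s) t-test, which matches the reported degrees of freedom:

```python
# Recomputing the exam 1 comparison for the low-preparedness group from the
# means, SDs, and sample sizes reported above (pooled-variance t-test).
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(
    mean1=62.7, std1=11.17, nobs1=15,   # full CoC participants
    mean2=55.8, std2=13.65, nobs2=40,   # nonparticipants
    equal_var=True,                     # pooled variance matches df = 53
)
print(f"t(53) = {t:.3f}, p = {p:.3f}")  # t(53) = 1.747, p = 0.086
```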

    Relative to Their Preparation, Do Students Participating in CoC Perform or Persist in Future Biology Courses Better than Students Who Do Not Participate?

    CoC participants persisted in the second course in the introductory biology sequence (Bio102, Cell Biology) and in the third course in the sequence (Bio201, Genetics) at higher rates than nonparticipants. Ninety percent of CoC participants enrolled in Bio102, and 100% completed the course, compared with 86% enrollment and 98% completion for nonparticipants. Sixty-eight percent of CoC participants enrolled in Bio201, and 97% completed the course, compared with 56% enrollment and 96% completion for nonparticipants. The effect on persistence was especially pronounced for students whom we determined had low and medium preparedness in Bio101. Among students with low preparedness, 87% and 47% of full CoC participants enrolled in Bio102 and Bio201, respectively, compared with 68% and 10% of nonparticipants. Among students with medium preparedness, the effect was pronounced only for Bio201: 70% of full CoC participants enrolled compared with 49% of nonparticipants. Note that, for students who did not persist, we are unable to distinguish between students who did not persist in biology course work but remained at Ursinus and students who did not persist because they left the school. Students with low and medium preparedness who participated in CoC also outperformed nonparticipants in subsequent biology course work. In Bio102, the average grades for participants with low and medium preparedness were 2.28 and 2.88, respectively, compared with 1.98 and 2.39 for nonparticipants. In Bio201, the average grades for participants with low and medium preparedness were 2.19 and 3.02, respectively, compared with 1.50 and 2.76 for nonparticipants.

    What Study and Time Management Strategies Were Most Useful to Students in CoC?

    We also assessed which study and time management strategies were most useful for students who participated in the CoC program. The strategies we used have been published elsewhere (Round and Campbell, 2013; Frank, 2016; Steiner, 2016; Heideman et al., 2017; Bailey et al., 2018). The weekly schedule of topics is included in Supplemental Material. To qualitatively analyze study strategies, we asked a mixture of forced-choice and open-ended questions so that we could compare the methods students were using at the start of CoC with those that they continued to use after the program was over (no. 8 on pre assessment survey and no. 11 on post assessment survey; see Supplemental Material for questions).

    The pre assessment had responses from 45 students and the post assessment had responses from 41 students; some students listed more than one strategy. The responses were examined by C.F., who created a list of categories (Table 1). C.F., K.G., and a third party (Ann Breen) then independently read the responses and assigned them to these categories (Table 1). We categorized class preparation as passive because, while it has been shown to be effective (Prince, 2004), preparation without postclass engagement is not a study method. Highlighting/underlining are passive because they can be done without thinking about or devoting more time to the material; further, highlighting demonstrates low effectiveness (Dunlosky et al., 2013). The three independent analyses demonstrated 88.5% intercoder reliability.

    TABLE 1. Learning strategies and example student responses before (pre assessment) and after (post assessment) participating in the CoC programa

    Learning strategies | Passive or active? | Example student responses | Number of students responding (pre assessment) | Number of students responding (post assessment)
    None | Passive | “None,” “N/A” | 1 | 1
    Class preparation | Passive | “Reading,” “Answer study guide questions” | 30 | 2
    Highlighting/underlining | Passive | “Highlight” | 6 | 1
    Total passive | | | 37 | 4
    Prioritizing study | Active | “Planning study time,” “Planning out my work at the beginning of the week” | 0 | 30
    Writing to learn | Active | “Take notes,” “Annotate text,” “Writing on whiteboards” | 28 | 9
    Talking to peers | Active | “Asking questions to friends,” “Study groups,” “TQ sessions”b | 19 | 14
    Making use of course resources | Active | “Doing the back exam,” “Office hours,” “Going to SI sessions”c | 1 | 6
    Reflective writing | Active | N/A | 0 | 10
    Other | Active | “Index card strategies,” “Concept mapping,” “Figure Facts” | 7 | 21
    Total active | | | 55 | 90

    aChi-square comparing passive and active-learning strategies = 34.992, df = 1, p < 0.00001. N = number of students who reported using that strategy; N = 45 (pre), 41 (post); some students reported using several strategies.

    bTeach and Question (TQ) sessions are 30-minute sessions in which a pair of students alternates in the role of teacher (explaining their understanding of content) and questioner (asking questions of the teacher to probe his or her understanding and help the teacher think more deeply about the content).

    cSupplemental Instruction (SI) sessions are voluntary tutoring sessions that are offered weekly by a peer leader. During the sessions students ask questions, go over concepts, and practice their knowledge of content.
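    The chi-square value reported in footnote a can be reproduced from the passive and active totals in Table 1, assuming a 2 × 2 test without Yates’ continuity correction:

```python
# Verifying footnote a from the Table 1 totals: 37 passive vs. 55 active
# responses on the pre assessment, 4 passive vs. 90 active on the post.
from scipy.stats import chi2_contingency

observed = [
    [37, 55],  # pre assessment: [passive, active]
    [4, 90],   # post assessment: [passive, active]
]
chi2, p, dof, _ = chi2_contingency(observed, correction=False)
print(f"chi-square = {chi2:.3f}, df = {dof}, p = {p:.2e}")
# chi-square = 34.992, df = 1, p < 0.00001
```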

    Pre-program assessment showed that students used more active than passive strategies even before enrolling in the program. However, significantly more student responses contained active strategies in the post assessment (Table 1, p < 0.00001). In the pre assessment, the top three student responses were class preparation, writing to learn, and talking to peers; in the post assessment, the top three were prioritizing study, other active-learning strategies, and talking to peers (Table 1). Students had no notion of prioritizing study in the pre assessment. We also noticed a shift within the writing-to-learn category from passive to active strategies. Pre assessment responses mainly referenced taking notes, whereas post assessment responses referred to activities such as writing on whiteboards or “brain dumps” (i.e., writing out everything you know about a given topic from memory). Due to the lack of specificity in student responses, we cannot determine whether the notes referred to were taken in class or on students’ own time.

    DISCUSSION

    In this study, we found that providing an 8-week program that focused on study skills improved exam and final grades for students with low and medium preparedness. The program appeared to improve students’ time management and study skills over time, with students using more methods to plan or organize their study and focusing on active-learning strategies. Our program also improved retention in the course during the intervention, with 100% of students who participated remaining in the course for the full semester. Persistence in subsequent biology courses was also positively impacted.

    Predicting which students will succeed—or not—in course work is a challenge familiar to many educators. It is particularly daunting in the first semester, when a student does not yet have any collegiate course performance that might provide clues. Yet the first semester is when many students take courses that act as “gateways” to upper-division courses, particularly in the sciences. Developing, implementing, and assessing interventions that help students along these paths rely on knowing from the first day of the semester which students are likely to do well, which students are likely to struggle, and which students are in the middle.

    We chose a logistic regression model as our predictor tool. A logistic regression model examines the relationship between a set of input variables and the log-odds of a binary outcome occurring; these log-odds can then be converted to a probability. We believe this model has several advantages for predicting student performance and evaluating course outcomes. One, it assigns a unique weight to each input variable that shows, controlling for everything else in the model, how the variables are related to the likelihood of a student’s success. Two, it allows us to generate a unique probability of success for each student in the course at the very beginning of the semester, helping identify students who may have the most to gain from the CoC intervention. Three, at the end of the course, we can compare the course performance of students with similar predicted probabilities for success to see whether students who participated in the CoC intervention were more likely to reach a successful benchmark than those who did not participate.

    How the Structure of CoC Compares with Similar Programs

    Seventeen percent of the students enrolled in the Bio101 course in Fall 2017 and Fall 2018 participated in eight consecutive weeks of the CoC program. We note that participation in the CoC program was inversely related to student preparedness: 23.3% of students with low preparedness, 18% of students with medium preparedness, and 10.6% of students with high preparedness participated. Thus, the students most in need of the program were the most likely to attend, which could be attributed to our direct efforts to involve first-year advisors in recruiting students who would benefit from the program. Our participation rate is considerably lower than those of comparable programs, likely due to the commitment the program requires (students must attend all eight consecutive weekly sessions in order to receive extra credit). Other voluntary programs are more “come if you can/want” (Arendale, 1994; Kudish et al., 2016; Chen et al., 2017).

    As this program was voluntary, we acknowledge that student motivation may be a factor in our findings, in that students motivated enough to attend might also be expected to earn higher grades. Similar to our study, students self-select to participate in other programs (Kudish et al., 2016; Chen et al., 2017). In other studies, students receive course credit for attending a supplemental course, in the format of a first-year seminar, either before (Wienhold and Branchaw, 2018) or concurrently with the introductory biology course (Minchella et al., 2002). Even in these cases, participation is somewhat voluntary, as students choose which first-year seminar they take. Despite the voluntary, and possibly biased, nature of these programs, we do not believe that student motivation is a major factor in our results based on other studies that have found no difference in this parameter (Minchella et al., 2002; Belzer et al., 2003; Chen et al., 2017). Interestingly, Belzer et al. (2003) found that motivation increased after the program for participating students and decreased for students who did not participate. They attributed the program-induced increase in motivation to more study time, which could also apply to our program.

    Programs to support introductory biology courses or supplemental instruction are provided at other institutions and typically serve primarily first-year students. With one exception, all other programs we examined occurred at large public universities in the Midwest; Kudish et al. (2016) described the only other analyzed program at a small, private, liberal arts college like Ursinus. Topics covered by Minchella et al. (2002) and Belzer et al. (2003) were most similar to ours, including time management, learning strategies, and metacognitive reflections. The major difference between our content and that of other programs was that Belzer et al. (2003) covered class material directly in addition to presenting active-learning strategies, whereas Kudish et al. (2016) exclusively covered course content in their program. Interestingly, we saw a similar improvement in course performance in our program, which focused exclusively on active-learning strategies and did not review course content.

    Similar to Belzer et al. (2003) and Minchella et al. (2002), we aimed to improve academic performance and create community by giving students a place where they could get to know one another and the Biology Department. Each week, students were welcomed with snacks and given a chance to reflect on their learning in a safe space. After exams, we dedicated time to dissecting their performance on each question (see “Exam BIOpsy” in Supplemental Material) and developing a strategy to improve the next time. We promoted a “beginner’s mind” approach to learning, encouraging students to practice new time management and active-learning strategies in CoC. At the end of each meeting, we asked students to set intentions, that is, to articulate which strategies they planned to use in the time until our next CoC meeting. When we reconvened, we talked about whether students had upheld their intentions for the previous week (e.g., go to office hours or review material for 20 minutes each day before the exam) and what factors helped or hindered success. Though not directly tested, we believe this accountability, along with a comfortable atmosphere, fostered a sense of belonging, which impacts student experience of the discipline (Minchella et al., 2002). The fact that all students who participated in CoC were retained in the course is further evidence of this point.

    How CoC Improvement in Academic Performance Compares with Other Reports

    We found that the students who needed help the most were the ones who benefited the most from the program. In contrast to Chen et al. (2017), in our study, students in the high-preparedness category did not show gains in academic performance on exams or final course grade from participation in the CoC program. However, among students who participated in the program, there was a trend toward improved academic performance on the first exam for students with low preparedness. Further, and more importantly, we found that students with low and medium preparedness significantly improved on the second exam and in the final grade in the course. In terms of magnitude, we found the performance gains for students with medium preparedness to be similar to those in other studies: a one-third letter-grade increase, from “C” to “C+” (Minchella et al., 2002; Kudish et al., 2016; Chen et al., 2017). Students with low preparedness showed a two-thirds letter-grade increase in final grade, greater than other reports, but at a lower grade level (“D” to “C−”). We did not measure the rate of “D’s”, “F’s”, withdrawals, and incompletes in our study, but we know there were no withdrawals or incompletes, as all students who participated in CoC were retained in the course. Thus, our findings are in line with those of similar programs showing decreased adverse outcomes for participating students (Minchella et al., 2002; Belzer et al., 2003; Wienhold and Branchaw, 2018).

    The benefit of participation, particularly for struggling students, appears to translate to future courses as well. Wienhold and Branchaw (2018) found that participating students who earned a “C” in the first introductory biology course were more likely to improve their grades in the next course, a benefit that was not found for participating students earning higher grades in the first course. In the second biology course, Kudish et al. (2016) found a positive correlation between program attendance and final course grades for all students participating. When this analysis was based on preparation (defined as whether students took AP Biology in high school), there was also a positive correlation. When performance was analyzed longitudinally, there were no differences in grades before and after the program, but there was a significant interaction in the second biology course based on underrepresented minority status, which suggests that the historical “achievement gap” for these students was narrowed. These findings echo other reports that have found a disproportionate benefit of similar interventions for students from historically minoritized groups. One would assume that improved final grades would translate to increased content knowledge, but this has not been directly addressed in many studies. Belzer et al. (2003) assessed this with a high school equivalent biology exam and found no improvement in student scores as a result of program participation.

    How CoC Improvement in Retention Compares with Other Reports

    All students who participated in the CoC program were retained in the Bio101 course compared with 88.7% of students who did not participate. Similarly, Wienhold and Branchaw (2018), Belzer et al. (2003), and Minchella et al. (2002) saw fewer “adverse outcomes,” defined as students who received a “D” or “F” final course grade, withdrew from the course, or did not complete the course, for students participating in their programs. Several programs also saw improvement in the number of students continuing on to the next biology course (Wienhold and Branchaw, 2018), students completing the introductory biology course sequence in a timely manner (Wienhold and Branchaw, 2018), students graduating from the college (Minchella et al., 2002), and specifically with STEM majors (Minchella et al., 2002; Kudish et al., 2016). Similar to their report on course performance, Kudish et al. (2016) showed a disproportionate benefit for historically minoritized groups and underprepared students with regard to 4-year graduation rate with a STEM major. We also see evidence that participation in CoC improved retention and final grade in subsequent biology courses, particularly for students with low and medium preparedness entering a STEM gateway course. This provides evidence that programs such as ours and others enhance the persistence and performance in STEM disciplines for students from diverse groups. It may also suggest that short-term interventions focused on developing positive study habits early in a student’s STEM career can improve long-term outcomes, especially for students who may not be as prepared from the outset.

    How CoC Mediated Changes in Study Habits Compare with Other Reports

    Although we did not find a difference in the number of learning strategies students used before and after the CoC program, we did observe a striking change in the types of strategies used. The majority of students indicated on the post assessment survey, given at the end of the semester, that they continued to use the wide variety of active-learning strategies they had learned during the first 8 weeks of the semester in the CoC program, which suggests the changes in study habits were long-lasting. Student reliance on passive learning strategies such as reading or highlighting texts was greatly reduced over the course of the program. At the beginning of the CoC program, these were the top strategies students used, but after the program, students cited making a plan for their study as the top strategy. This was one of the strategies repeatedly emphasized in the program. The “Plan of Study” was a detailed schedule, adapted from Steiner (2016), for the 7 days before the exam that outlined what the students needed to study, how they would study it, and what day they would study it (see Supplemental Material). Thus, we saw increased reliance on the strategies that we taught the students in CoC coinciding with a better grade in the course. Chen et al. (2017) also reported that students used resources more effectively to improve course performance as a result of program participation.

    Additional changes in student habits after CoC program participation included students commenting that they “completed their homework” (study questions) in the pre assessment, whereas they more frequently referred to making time for study in the post assessment. This suggests that they were switching from simply completing assignments without engaging with the material to making time to reflect on their understanding, as we instructed them to do. Likewise, there was a change in students’ descriptions of how they were “writing to learn”: they referred more often to taking notes in the pre assessment but switched to activities such as brain dumps in the post assessment. This suggests that they followed our guidance to write down everything they knew about a topic so that they could identify gaps in their understanding. Student disdain for concept mapping noted by Belzer et al. (2003) differs from our experience and others’ (Steiner, 2016), which could be due to differences in course content and structure.

    There were a few student habits that did not change. Despite frequent encouragement to use course resources such as office hours and supplemental instruction, the CoC program did not markedly change student use of these opportunities. This may be because students have limited time. Likewise, the biology course we supported had daily quizzes necessitating frequent review, which is the reason we believe that the CoC program did not change the number of days before students reviewed their class notes.

    Instead of teaching students specific active-learning and time management strategies as we did, Chen et al. (2017) asked students to choose the resources they felt would be most effective to study for an upcoming exam from a given list, explain why each resource was useful, and when/where/how they planned to use each resource. After the exam, Chen et al. (2017) asked students to state which resources they had used and reflect on their utility. Chen et al. (2017) found that getting students to be strategic about their learning made their study more effective, which in turn improved their performance. Chen et al. (2017) also found that student value attached to the method (i.e., using the resource that they had planned to use) was important for the positive impact on performance. Despite participation being voluntary in Chen’s study, most students participated for both exams, which suggests that students saw a benefit. Chen et al. (2017) found that student performance correlated with resources that explicitly considered the exam format and fostered learning and understanding of the class material. Likewise, student articulation of when and how resources were going to be used correlated with improved performance. Thus, a brief intervention combining metacognition and planning was sufficient to improve performance, reduce negativity around test-taking, and help students feel more in control of their learning (Chen et al., 2017). Guiding student approach to course work in this way may be particularly important for underprepared students who show differences in metacognition and learning strategies (Kudish et al., 2016). This observation will inform future iterations of this intervention to support biology courses as well as other departments, both STEM and non-STEM, at Ursinus.

    Lessons Learned and Future Directions

    Similar to our study, Kritzinger et al. (2018) showed that low (“at risk”), medium (“murky middle”), and high (“likely to pass”) preparedness can be determined before students even begin course work. Others have also noted lack of preparedness as a significant barrier to success in the sciences (Minchella et al., 2002; Kudish et al., 2016). This is particularly valuable for murky middle students, who are just as likely as at-risk students to attrite but are not detected by traditional metrics used to determine academic risk, which are often concentrated in the first year (Murky Middle Project, 2014; Tyson, 2014; Kritzinger et al., 2018). Despite having the potential to succeed, students who perform poorly in first-year science courses can display dampened interest, confidence, and grit (Minchella et al., 2002; Kritzinger et al., 2018). We hypothesized that providing support before students are struggling would mitigate their risk of failure (Steiner, 2016; Sebesta and Speth, 2017; Kritzinger et al., 2018). The CoC program started the first week of classes, 2 weeks before the first exam. We found that participation in CoC prevented the least-prepared students from failing their first two exams. Students with medium preparedness did not show a significant gain on the first exam, but their subsequent performance was much improved, moving these students from “C’s” to “B’s”. This is undoubtedly one reason these students stayed in the course. We are aware that the least-prepared students did not reach our definition of success (a “C+” or better in the course), but they did move more than a full letter grade up from the first exam (“D−”) to the final course grade (“C”). These findings are proof of principle that the least-prepared students require larger interventions to meet our definition of success, while students with medium preparedness, who face fewer such constraints, respond most strongly (Kritzinger et al., 2018).

    One reason murky middle students are hard to identify is that they display characteristics of both at-risk and likely-to-pass students (Kritzinger et al., 2018). One critical area in which murky middle students resembled at-risk students was effort regulation, or the ability to persevere in the work despite difficulty, distractions, or negative feelings (Kritzinger et al., 2018). Crede and Phillips’ meta-analysis found that effort regulation has the highest correlation with academic performance (cited in Kritzinger et al., 2018). Poor-performing students also struggle with metacognitive self-regulation, specifically planning and discerning what is important to review (Sebesta and Speth, 2017; Kritzinger et al., 2018). Differences in how struggling students approach their course work have implications for skill mastery in the discipline of biology. For example, murky middle students are less likely to apply what they are learning to other course discussions, other course components (e.g., lab), or even other courses, all evidence of deep learning (Kritzinger et al., 2018). They also less readily see the value in working with peers (Kritzinger et al., 2018), which is particularly important in STEM, where collaboration is an integral part of the discipline. We did not see any difference before or after the CoC program in student responses regarding talking to peers, but we did not compare responses between preparedness levels or with students who were not participating in the CoC program.

    Characteristics associated with academic success (peer learning, application and synthesis of concepts, effort regulation) can and should be taught to change students’ behavior (Sebesta and Speth, 2017; Kritzinger et al., 2018). In addition to the skills for academic success, students need to be taught habits of mind, such as time management, resourcefulness, organization, goal setting, and metacognition (Cook et al., 2013; Zhao et al., 2014; Kritzinger et al., 2018). Our program focused on teaching these skills alongside active-learning strategies but lacked content review. We are currently piloting a new model that combines the CoC curriculum with supplemental instruction, peer-facilitated group work sessions intended to pair with historically difficult gateway courses (Arendale, 1994). Based on our past assessment of supplemental instruction combined with our assessment of CoC in this article, we are confident that students will benefit from participation in this program, so we are requiring all students enrolled in Bio101 to participate.

    One of the stated purposes of Belzer et al. (2003) was to move away from lecture and get students more involved with one another and their own learning process. The opportunity for students to articulate their understanding to the instructor and each other, repeatedly review, and practice applying that understanding with low-stakes consequences is invaluable. The benefit in these programs may lie in the fact that they make the academic environment more comfortable for students. “Students feel that a large class was more like a small class, that [they] counted, and that someone was interested in whether or not they understood and were able to learn the content for the course, thereby increasing [their] motivation” (Belzer et al., 2003, p. 38).

    To retain students from historically minoritized groups in STEM, it is imperative that we make the course experience more welcoming, less isolating, and confidence boosting (Malcolm and Feder, 2016; Martinez-Acosta and Favero, 2018; Theobald et al., 2020). Such efforts not only improve student experience, retention, and performance in the corresponding gateway course, but also within the discipline and the college (Minchella et al., 2002; Kudish et al., 2016; Kritzinger et al., 2018; Wienhold and Branchaw, 2018). After taking Minchella’s first-year seminar, students were more positive about faculty instruction, availability, and flexibility, as well as the availability and usefulness of campus resources and opportunities such as research and internships (Minchella et al., 2002). Wienhold and Branchaw (2018) found their first-year seminar not only eased the transition to college as expected, but also gave students a long-standing community within the discipline and raised their awareness of means to engage in the discipline, such as getting involved in research and attending research seminars.

    Likewise, we have found that Ursinus College STEM students who did research early in their college tenure, either the summer before or after the first year, were retained in STEM majors: 90% graduated, as compared with 65% of their peers (Reig et al., 2018). Further, in our JBridge program, a 4-day workshop to build skills for the second semester of first-year biology, 79% of students (n = 19 respondents) agreed that the program made them feel more a part of the department and their cohort and grew their confidence in themselves as scientists; 83% of them have been retained to the level appropriate for their year of matriculation. These programs have in common with CoC that the students work closely with a faculty member and an older student mentor as a role model. This reinforces the perception that the faculty member cares about them and is not trying to weed them out but is “on their side,” which goes a long way in building student confidence (Cavanagh et al., 2018).

    Thus, we recommend a student-centered approach wherein instructors and advisers are trained in evidence-based pedagogies and embed these strategies in their courses. As early intervention is critical for students to reach their maximal potential, this shift is most important for gateway courses that are the entry point for the discipline. The immediate benefit is improved student outcomes in the curriculum, but it also diversifies the discipline and extends the pipeline, because students will stay interested and engaged longer.

    ACKNOWLEDGMENTS

    This material is based upon work supported by the National Science Foundation under grant no. 1458719. We are grateful to all the Bio101 instructors (Robert Dawley, Ellen Dawley, Cory Straub, Simara Price) and Biology faculty (Becky Kohn, Beth Bailey) for partnering with us in this effort. Thank you also to Robert Dawley for the exam autopsy and its abbreviation. We thank Elizabeth Vallen for her counsel on best practices and for sharing resources. We are pleased to see student involvement in the work as well: Sinead O’Callaghan’s exam study plan and Maddy Wert’s idea to rename the exam autopsy as BIOpsy. We thank Cory Straub for his work on the analysis and reporting of supplemental instruction, which informed our discussion of the findings from the CoC program and Figure 1. We are indebted to Ann Breen for giving her time to independently analyze the study strategy data from the pre- and post-assessments. We are grateful for Kevin Guidry’s thoughtful comments during survey development and program implementation.

    REFERENCES

  • Arendale, D. R. (1994). Understanding the supplemental instruction model. New Directions for Teaching and Learning, 60, 11–21. https://doi.org/10.1002/tl.37219946004
  • Bailey, E. G., Baek, D., Meiling, J., Morris, C., Nelson, N., Rice, N. S., ... Stockdale, P. (2018). Learning gains from a recurring “teach and question” homework assignment in a general biology course: Using reciprocal peer tutoring outside class. CBE—Life Sciences Education, 17(2), 1–10. https://doi.org/10.1187/cbe.17-12-0259
  • Ballen, C. J., Wieman, C., Salehi, S., Searle, J. B., & Zamudio, K. R. (2017). Enhancing diversity in undergraduate science: Self-efficacy drives performance gains with active learning. CBE—Life Sciences Education, 16(4), 1–6. https://doi.org/10.1187/cbe.16-12-0344
  • Barnes, W., Slate, J. R., & Rojas-LeBouef, A. (2010). College-readiness and academic preparedness: The same concepts? Current Issues in Education, 16(1), 1–13.
  • Belzer, S., Miller, M., & Shoemake, S. (2003). Concepts in biology: A supplemental study skills course designed to improve introductory students’ skills for learning biology. American Biology Teacher, 65(1), 30–40. https://doi.org/10.2307/4451430
  • Cavanagh, A. J., Chen, X., Bathgate, M., Frederick, J., Hanauer, D. I., & Graham, M. J. (2018). Trust, growth mindset, and student commitment to active learning in a college science course. CBE—Life Sciences Education, 17(1), 1–8. https://doi.org/10.1187/cbe.17-06-0107
  • Chen, P., Chavez, O., Ong, D. C., & Gunderson, B. (2017). Strategic resource use for learning: A self-administered intervention that guides self-reflection on effective resource use enhances academic performance. Psychological Science, 28(6), 774–785. https://doi.org/10.1177/0956797617696456
  • Collier, P. J., & Morgan, D. L. (2008). “Is that paper really due today?”: Differences in first-generation and traditional college students’ understandings of faculty expectations. Higher Education, 55(4), 425–446. https://doi.org/10.1007/s10734-007-9065-5
  • Conley, D. T. (2008). Rethinking college readiness. New Directions for Higher Education, 144, 3–13. https://doi.org/10.1002/he.321
  • Cook, E., Kennedy, E., & McGuire, S. Y. (2013). Effect of teaching metacognitive learning strategies on performance in general chemistry courses. Journal of Chemical Education, 90(8), 961–967. https://doi.org/10.1021/ed300686h
  • Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving students’ learning with effective learning techniques. Psychological Science in the Public Interest, 14(1), 4–58. https://doi.org/10.1177/1529100612453266
  • Frank, T. (2016). 8 Better ways to make and study flash cards. Retrieved August 27, 2017, from https://collegeinfogeek.com/flash-card-study-tips/
  • Heideman, P. D., Flores, K. A., Sevier, L. M., & Trouton, K. E. (2017). Effectiveness and adoption of a drawing-to-learn study tool for recall and problem solving: Minute sketches with folded lists. CBE—Life Sciences Education, 16(2), 1–13. https://doi.org/10.1187/cbe.16-03-0116
  • Higher Education Research Institute. (2010). Degrees of success: Bachelor’s degree completion rates among initial STEM majors. Retrieved September 17, 2020, from https://heri.ucla.edu/nih/downloads/2010-Degrees-of-Success.pdf
  • Hurtado, S., Han, J. C., Sáenz, V. B., Espinosa, L., Cabrera, N., & Cerna, O. (2007). Predicting transition and adjustment to college: Biomedical and behavioral science aspirants’ and minority students’ first year of college. Research in Higher Education, 48, 841–887.
  • Johnson, A. C. (2007). Unintended consequences: How science professors discourage women of color. Science Education, 91(5), 805–821. https://doi.org/10.1002/sce.20208
  • Koch, A. K. (Ed.) (2018). Improving teaching, learning, equity, and success in gateway courses: New directions for higher education. Hoboken, NJ: Wiley.
  • Kritzinger, A., Lemmens, J. C., & Potgieter, M. (2018). Learning strategies for first-year biology: Toward moving the “murky middle.” CBE—Life Sciences Education, 17(3), 1–13. https://doi.org/10.1187/cbe.17-10-0211
  • Kudish, P., Shores, R., McClung, A., Smulyan, L., Vallen, E. A., & Siwicki, K. K. (2016). Active learning outside the classroom: Implementation and outcomes of peer-led team-learning workshops in introductory biology. CBE—Life Sciences Education, 15(3), 1–11. https://doi.org/10.1187/cbe.16-01-0051
  • Malcolm, S., & Feder, M. (Eds.) (2016). Barriers and opportunities for 2-year and 4-year STEM degrees. Washington, DC: National Academies Press. https://doi.org/10.17226/21739
  • Maloof, J., & White, V. K. B. (2005). Team study training in the college biology laboratory. Journal of Biological Education, 39(3), 120–124. https://doi.org/10.1080/00219266.2005.9655978
  • Martinez-Acosta, V. G., & Favero, C. B. (2018). A discussion of diversity and inclusivity at the institutional level: The need for a strategic plan. Journal of Undergraduate Neuroscience Education, 16(3), A252–A260.
  • Mayring, P. (2000). Qualitative content analysis: Basic ideas of content analysis. Forum Qualitative Sozialforschung, 1(2).
  • Minchella, D. J., Yazvac, C. W., Fodrea, R. A., & Ball, G. (2002). Biology resource seminar: First aid for the first year. American Biology Teacher, 64(5), 352–357. https://doi.org/10.2307/4451310
  • Murky Middle Project. (2014). Retrieved June 28, 2019, from www.eab.com/technology/student-success-collaborative/members/white-papers/the-murky-middle-project
  • Prince, M. (2004). Does active learning work? A review of the research. Journal of Engineering Education, 93(3), 223–231. https://doi.org/10.1002/j.2168-9830.2004.tb00809.x
  • Reig, A. J., Goddard, K. A., Kohn, R. E., Jaworski, L., & Lopatto, D. (2018). The FUTURE program: Engaging underserved populations through early research experiences. In Gourley, B. L., & Jones, R. M. (Eds.) (pp. 3–21). Washington, DC: American Chemical Society. https://doi.org/10.1021/bk-2018-1275.ch001
  • Roediger, H. L. (2013). Applying cognitive psychology to education: Translational educational science. Psychological Science in the Public Interest, Supplement, 14(1), 1–3. https://doi.org/10.1177/1529100612454415
  • Round, J. E., & Campbell, A. M. (2013). Figure facts: Encouraging undergraduates to take a data-centered approach to reading primary literature. CBE—Life Sciences Education, 12(1), 39–46. https://doi.org/10.1187/cbe.11-07-0057
  • Rybczynski, S. M., & Schussler, E. E. (2011). Student use of out-of-class study groups in an introductory undergraduate biology course. CBE—Life Sciences Education, 10(1), 74–82. https://doi.org/10.1187/cbe-10-04-0060
  • Sebesta, A. J., & Speth, E. B. (2017). How should I study for the exam? Self-regulated learning strategies and achievement in introductory biology. CBE—Life Sciences Education, 16(2), 1–12. https://doi.org/10.1187/cbe.16-09-0269
  • Steiner, H. (2016). The Strategy Project: Promoting self-regulated learning through an authentic assignment. International Journal of Teaching and Learning in Higher Education, 28(2), 271–282.
  • Theobald, E. J., Hill, M. J., Tran, E., Agrawal, S., Arroyo, E. N., Behling, S., ... Freeman, S. (2020). Active learning narrows achievement gaps for underrepresented students in undergraduate science, technology, engineering, and math. Proceedings of the National Academy of Sciences USA, 117(12), 6476–6483. https://doi.org/10.1073/pnas.1916903117
  • Tobias, S. (1990). They’re not dumb. They’re different. A new “tier of talent” for science. Change: The Magazine of Higher Learning, 22(4), 11–30. https://doi.org/10.1080/00091383.1990.9937642
  • Tyson, C. (2014, September 10). The “Murky Middle.” Inside Higher Ed. Retrieved June 28, 2019, from www.insidehighered.com/news/2014/09/10/maximize-graduation-rates-colleges-should-focus-middle-range-students-research-shows
  • Weston, T. J., Seymour, E., Koch, A. K., & Drake, B. M. (2019). Weed-out classes and their consequences. In Seymour, E., & Hunter, A.-B. (Eds.), Talking about leaving revisited (pp. 197–243). New York: Springer International. https://doi.org/10.1007/978-3-030-25304-2_7
  • Wienhold, C. J., & Branchaw, J. (2018). Exploring biology: A vision and change disciplinary first-year seminar improves academic performance in introductory biology. CBE—Life Sciences Education, 17(2), 1–11. https://doi.org/10.1187/cbe.17-08-0158
  • Xianglei, C. (2013). STEM attrition: College students’ paths into and out of STEM fields (NCES 2014-001). Retrieved September 17, 2020, from https://nces.ed.gov/pubs2014/2014001rev.pdf
  • Zhao, N., Wardeska, J., McGuire, S., & Cook, E. (2014). Metacognition: An effective tool to promote success in college science learning. Journal of College Science Teaching, 43(4), 48–54. https://doi.org/10.2505/4/jcst14_043_04_48