
Current Insights

Recent Progress in Learning Progressions Research

    Published Online: https://doi.org/10.1187/cbe.19-09-0181

    Abstract

    Learning progressions (LPs) are hypothetical models that describe how learning in a domain may unfold over time. Over the past decade, LPs have grown in popularity. At the same time, there have been advances in LP research. In this installment of Current Insights, I bring together three recent articles that examine the validity and utility of LPs as models to guide research and instruction.

    Over the past decade, learning progressions (LPs) have grown in popularity in education research communities, including biology education research. One proposed affordance of LPs is that they are organized around how foundational ideas and practices of disciplines develop over time (National Research Council, 2007, p. 219). Thus, in contrast to discrete topics or standards, LPs have the potential to bring focus and coherence to disciplinary learning. Furthermore, LPs are research based—grounded in theory and empirical evidence. From early on, LP researchers referred to LPs as “hypotheses” or “models” of how learning unfolds in a particular domain (Corcoran et al., 2009; Duncan and Hmelo-Silver, 2009). This second aspect allows for scientific progress as LP researchers grapple with model validation, examine underlying assumptions, and consider the practical utility of LPs for improving learning and instruction.

    In this installment of Current Insights, I bring together three articles that capture some of the recent progress in LP research. The first article, by Jin and colleagues, presents a conceptual framework outlining the validity considerations that arise as researchers develop, evaluate, and ultimately attempt to use LPs in instructional contexts. The second and third articles offer extended treatments of one or more considerations of LP validity and use. Sikorski questions the validity of assumptions that underlie how sophistication is defined during initial LP development. Alonzo and Elby explore the intersection of LP evaluation and use, arguing that despite their limited empirical validity, LPs may still be useful for instructors. I begin with the article by Jin and colleagues, using it to highlight intersections with the subsequent articles.

    A FRAMEWORK TO GUIDE THE LP VALIDATION PROCESS

    Jin, H., van Rijn, P., Moore, J. C., Bauer, M. I., Pressler, Y., & Yestness, N. (2019). A validation framework for science learning progression research. International Journal of Science Education, 41(10), 1324–1346. https://doi.org/10.1080/09500693.2019.1606471

    If LPs are models, then as Jin and colleagues argue, there are questions that LP researchers must address. First, are LP models good representations of how students’ thinking develops? That is, are the models plausible and empirically supported? And second, are LP models appropriate and practically useful? That is, do they reflect the aims and values of the community, and do they help practitioners make progress toward those aims?

    The authors identify five stages—development, scoring, generalization, extrapolation, and use—during which questions about LP validity and utility should be considered and addressed. They summarize the potential concerns raised during each of these stages and discuss how to approach these concerns, often using examples drawn from their own prior research. Here, I will briefly summarize just three of these concerns, chosen to highlight intersections with the other two articles in this set.

    During the initial stages of LP development, one consideration is whether LP assessment items adequately capture students’ thinking. In particular, are the items able to detect students’ beginning ideas? For example, assessment items that hinge on specific vocabulary (e.g., “Which container has more thermal energy?”) will be invalid for novice students, because these items conflate lack of familiarity with terms with lack of relevant thinking. A second concern arising during LP development is whether the most sophisticated level, the “upper anchor” of the LP, represents a valid endpoint. Upper anchors are typically set by LP developers in consultation with disciplinary experts. Sikorski (summarized later) raises concerns about this practice, arguing that, like other aspects of LPs, upper anchors should be understood as conjectural and open to interrogation and critique.

    Generalization refers to the process of using item scores to make inferences about LP levels. Jin and coworkers describe validation at this stage as addressing the extent to which “LP levels should be differentiated from each other, showing that the levels actually exist” (p. 1335). LP assessments are sometimes used to assign an individual student's thinking to a specific level. However, empirical support for this modeling choice is limited. For example, Jin and colleagues have found that, while item responses correlate in the aggregate (Figure 5, p. 1337), individual students do not tend to “fit” a single level (Table 4, p. 1339). The authors summarize these results as indicating that student responses are “coherent to a certain degree” (p. 1338). Alonzo and Elby (the third article in the set) offer a more critical interpretation, arguing that, in general, “LP models of student thinking do not adequately fit empirical data” (p. 3).

    In terms of LP use, Jin and colleagues raise concerns about whether instructor use of LP assessments and associated materials improves student learning (and importantly does no harm!). The authors point to a need for additional research in this area. Alonzo and Elby offer one example of such research: Despite arguing against the empirical validity of LPs, they propose and provide empirical evidence that LPs can be useful to teachers.

    As a whole, the framework laid out in this article provides a helpful way to organize the validity and utility considerations that are part of LP research, summarizing how the community has addressed these issues and pointing the way to future research.

    EXAMINING ASSUMPTIONS OF SOPHISTICATION IN LP RESEARCH

    Sikorski, T. R. (2019). Context-dependent “upper anchors” for learning progressions. Science & Education. https://doi.org/10.1007/s11191-019-00074-w

    Sikorski describes the typical structure of an LP as containing “a set of progress variables describing what students will learn, the starting point or lower anchor for learning, intermediaries, and the upper anchor” representing “the highest degree of sophistication learners are expected to reach” (p. 958). Central to Sikorski's argument is that upper anchors often describe singular, fixed endpoints. For example, an early version of the Daily Celestial Motion LP (Plummer and Maynard, 2014) presents as an upper anchor the ability to use the Earth's rotation to provide “relatively accurate descriptions of the sun, moon, and stars’ apparent motion.”

    Sikorski's argument is “that the notion of a singularly most sophisticated way of reasoning about phenomena is inconsistent with studies of professional science, limits the range of learning pathways that LPs can model, and reinforces problematic assessment practices in science classrooms” (emphasis in original, p. 958).

    Sikorski argues that LP upper anchors, by presenting sophistication as singular and fixed, fail to represent the adaptive and interconnected nature of scientific expertise. Expert scientists do not simply know the correct way to think, devoid of context; they adapt and coordinate their thinking to meet the demands of a situation. By setting a fixed upper anchor, LPs cannot detect potentially productive flexibility in students’ thinking. And when LPs present a singular “most sophisticated” upper anchor to instructors, instructors may interpret that upper anchor as correct and dismiss lower-level ideas or strategies as incorrect and therefore not valuable. This interpretation undercuts the purported utility of LPs in helping teachers interpret and formatively respond to their students’ thinking. Moreover, it may dull teachers’ appreciation of context sensitivity. For example, Sikorski argues that fixed understandings of sophistication make it difficult for LPs to account for “the possibility that students’ incorrect models can be the result of sophisticated modeling practice” (p. 966).

    To address these concerns, Sikorski argues for a conceptual revision that replaces preset upper anchors with context-dependent “upper reaches” that allow multiple ways of knowing to count as sophisticated. Sikorski describes two existing LP structures that incorporate context dependence: “Toolbox” models drop assumptions of progress in favor of articulating different repertoires of thinking that may be useful to students in different contexts. “Cumulative” models maintain a sense of progression but predict that earlier ideas may persist and have utility. Sikorski articulates a need for a third class of LP models that are more explicitly “context dependent.” These LP models would include adaptive thinking—how students coordinate and choose among multiple ideas or approaches to knowing—as part of the definition of sophistication.

    Sikorski's contribution, “in the spirit of model revision,” is to encourage “researchers [to] continue to refine the notion of a learning progression” by opening up a discussion about an element of LP models—the upper anchor—that has otherwise escaped scholarly critique.

    LEARNING PROGRESSIONS: WRONG BUT USEFUL?

    Alonzo, A. C., & Elby, A. (2019). Beyond empirical adequacy: learning progressions as models and their value for teachers. Cognition and Instruction, 37(1), 1–37. https://doi.org/10.1080/07370008.2018.1539735

    All models are wrong, but some are useful.

    George Box

    Alonzo and Elby begin by arguing that LPs have limited empirical validity because a central construct of LP models—levels—does not fit the data. Students’ responses to LP assessments are more inconsistent and more sensitive to context than one would predict if, to use Jin and colleagues’ phrasing, “the levels actually exist” (p. 1335). Alonzo and Elby argue that, even if levels do not exist (that is, even if the notion of a level is an idealization without robust empirical support), we need not reject LP models entirely. Instead, they raise the possibility that, like other highly simplified or idealized scientific models (e.g., Bohr's model of the atom or the Lotka-Volterra model of predator–prey interactions), LPs may be fruitful despite the fact that (or perhaps because) they do not accurately reflect the complex reality of students’ cognitive development.

    To explore this idea empirically, Alonzo and Elby studied how physics teachers used a common LP on force and motion (Alonzo and Steedle, 2009) to reflect on student thinking and to plan instruction. The teachers in the study were provided with LP assessment score reports that presented information about individual students (both at the item level and as an overall level score) and class proportions for various items and levels. The teachers participated in two think-aloud interviews during which they reviewed these materials.

    Five of the teachers were chosen for deeper analysis, because they displayed “sufficiently sophisticated formative assessment practices.” Interview transcripts were examined for evidence of how teachers were 1) interpreting student thinking, 2) drawing instructional implications, and 3) gaining new insights about student thinking.

    Here, I highlight three salient findings. First, while teachers sometimes reasoned about students using a levels model (e.g., “He's at a level 4”), it was more common for teachers to consider students’ ideas at a finer grain size (e.g., ideas about “impetus,” forces without motion, or frictionless situations). Second, teachers were better able to use specific item data, as opposed to levels-based data, to make actionable instructional decisions. For example, reviewing items that revealed specific difficulties related to frictionless surfaces led one teacher to propose an intervention using an air hockey table. Levels-based information tended to be vaguer (e.g., “I have to figure out a way to get eight students over to level 3”). Third, reasoning about the limitations of LP levels was generative for teachers: Noticing inconsistencies between item data and levels promoted a stance of inquiry and led teachers to propose and test hypotheses about students’ thinking.

    Alonzo and Elby end by suggesting that conceptualizing and communicating about LPs as imperfect models (as opposed to “validated” research products) can invite teachers to actively participate in taking a “research-based” approach to understanding their students and designing instruction.

    REFERENCES

  • Alonzo, A. C., & Steedle, J. T. (2009). Developing and assessing a force and motion learning progression. Science Education, 93, 389–421. https://doi.org/10.1002/sce.20303
  • Corcoran, T., Mosher, F. A., & Rogat, A. (2009). Learning progressions in science: An evidence-based approach to reform (CPRE Research Report RR-63). Philadelphia, PA: Consortium for Policy Research in Education.
  • Duncan, R. G., & Hmelo-Silver, C. E. (2009). Learning progressions: Aligning curriculum, instruction, and assessment. Journal of Research in Science Teaching, 46, 606–609. https://doi.org/10.1002/tea.20316
  • National Research Council. (2007). Taking science to school: Learning and teaching science in grades K–8. Washington, DC: The National Academies Press.
  • Plummer, J. D., & Maynard, L. (2014). Building a learning progression for celestial motion: An exploration of students’ reasoning about the seasons. Journal of Research in Science Teaching, 51(7), 902–929.