There are many unique things about the community of researchers in health sciences education. Most obvious, in fact, is that it is a community: not a separate community of psychometricians, another of learning theorists, another of simulation people, another of qualitative researchers, and another of practitioners (clinical teachers, basic scientists, and so on). These subgroups do exist, and many have their own specialty societies with their own journals and conferences. But these are in addition to, not exclusive of, the common activities and outlets: journals like AHSE, Medical Education, and Academic Medicine, and conferences like AMEE, CCME, RIME, and ASME, which accept all kinds of research (although not all kinds of health professions, sad to say).

One need only peruse any issue of Advances to see an apparently random potpourri of research approaches. The last issue contained nine qualitative studies. In this issue, there are seven experimental studies, many of them strongly theory-testing, a point I will return to. Together, the studies represent the full range of research methods and disciplines interested in the science of education.

Further, though heterogeneous, it is a very congenial bunch. As long as I’ve been in the field, I have been amazed by the collegiality we show one another, and by the stark contrast between our real warmth and the backbiting and nastiness I have observed in other disciplines. One might think that such heterogeneity of values and epistemologies would breed real animosity, yet the opposite seems to be the case. Perhaps sociologists have an explanation; I don’t.

This stark contrast between us and others has become increasingly obvious to me in the past year or so. In reorienting my professional life to allow more time for retirement, I’ve been doing more stuff on campus in addition to my usual activities in health sciences (like the couple who downsizes to a bigger house when the kids leave home). I have become aware of two groups that are very active on campus, with very different agendas. One, championed by my colleagues in cognitive psychology, goes under various names like “Science of Learning, Science of Instruction” or “Education and Cognition”, and is heavily engaged in experimental quantitative research applying theory-based strategies like cognitive load, test-enhanced learning, and interleaved learning to learning situations. While the materials may be realistic (for example, statistics problems), typical studies involve first-year undergraduates paid to participate for an hour. And the primary goal is to conduct research that grad students can use for a PhD. Not that the research is not worthy; the designs are elegant and typically the effects are large and robust. But the findings rarely leave the psychology building; in fact, a review of papers I have accrued from this research reveals that, almost without exception, the results are published exclusively in psychology journals.

The second group goes under the acronym SoTL: Scholarship of Teaching and Learning. I initially thought this stood for “Science of…”, but my error was corrected on several occasions. The banner represents a loose confederation of professor-teachers who are passionate about education and eager to think critically about what they’re doing and to put their activities in a larger context. Theirs is a very grassroots and inclusive group. Their view of “scholarship” is much broader, and they take pains to point out that, for example, there are many kinds of evidence. This is not just the old qualitative-quantitative dichotomy; rather, it is intended as a direct counterpoint to the narrow, experimentalist, theory-driven approach of the Ed Cog folks.

It will not be a surprise to find that they publish in different journals, go to different meetings, and so on. As noted earlier, the Ed Cog journals pretty well all have “psychology” in their title; most are mainstream, high-impact journals like Journal of Experimental Psychology, Psychological Science, and Memory and Cognition. And of course, to publish in these journals you have to toe the line: do controlled experiments, test theories, and so forth. The SoTL journals are less likely to be indexed (about 50% in my review are not) and are much more likely to publish “how-to” articles, thought pieces, and so on.

In short, the two groups are like religious fundamentalists, for whom there is only one way to truth, beauty and wisdom, and Unitarians, for whom anything goes (forgive the caricature of both groups). And the polarization is to the detriment of both. For the psychologists, their instructional strategies, which are frequently counter-intuitive and unlikely to be discovered by the classroom teacher (mixed, interleaved practice) yet powerful [Mayer’s instructional interventions have an average effect size near 1 (Mayer 2010)], are unlikely ever to find application outside the psychology department, simply because they will not be discovered elsewhere. Although the leaders, I’m sure, cherish hopes that their basic research will lead to real, demonstrable advances in education over the years, it seems to me the gap is simply too large to span.

There are multiple losses on the other side as well. The SoTLers, however intense their passion for improving education, are unlikely to have the skills needed to apply the scholarly methods of social and behavioural science to their questions. There is no single scientific method, and the methods they use in their own discipline are unlikely to be helpful in addressing educational questions. I speak from the firsthand experience of a former physicist who took many years to acquire the new skills on the job.

Moreover, their research efforts may be no more than an exercise in frustration. All academic research builds on a corpus of prior knowledge, and an understanding of that knowledge is a prerequisite to making a real contribution. So the SoTL scholar must master a second discipline, education, and become familiar with the current state of the art in the particular domain she is interested in, in addition to her own discipline, if her scholarship is to yield tangible rewards.

Somehow health sciences education appears to have avoided this segregation between the teachers and the educational researchers. Perhaps there are historical reasons for this; when medical education began to emerge as a discipline in the health sciences in the 1960s in Buffalo, the “founding fathers” were a close-knit alliance of clinicians (George Miller, Hilly Jason) and a ragtag bunch of education types (Chuck Dohner, Chris McGuire, Steve Abrahamson). Similarly, from my beginnings in the field in the 1970s, the standard modus operandi was teams of clinicians and methodologists. Good questions emerged from the interaction of the two groups, and research quality was ensured by the methodological skills of the PhDs and the content expertise and practical experience of the clinicians. The field has matured in the ensuing four decades, and so has the sophistication of the researchers. On the one hand, we see more PhDs trained in their own discipline whose graduate research was conducted directly in a health discipline. On the other, more and more practitioners are seeking advanced degrees at the master’s and PhD levels.

One consequence of this maturation is that our research programs cut two ways. On the one hand, more and more research is theory-based; indeed, a theoretical framework is becoming a prerequisite for publication in medical education journals. On the other, some of the research sets out to critically test and extend theories into more practical domains, so that the results have real impact on the practice of education in the health sciences.

The six experimental papers in this issue nicely illustrate the range of goals. Three of the studies address the role of simulation in learning. Leung et al. (2014) showed that virtual patients, computer-based cases designed to simulate the interaction between a clinician and a patient, were less well liked and led to less learning than simple case studies. Chan et al. (2015) showed, for the first time, that high fidelity simulation was actually inferior to a CD for learning auscultation skills. Jamison and Stewart (2014) built a simple physiologic simulator to help students interpret spirometry and showed a differential effect on higher-order understanding compared to recall questions. These studies add to an accumulation of evidence, in fields ranging from anatomy (Khot et al. 2013) to crisis management (Norman et al. 2012), that high-technology approaches have marginal or no advantage over simpler approaches.

The studies also contribute to more fundamental questions about the nature of learning. The advantage of low fidelity in learning auscultation in Chen et al.’s (2015) study was interpreted as a consequence of increased cognitive load in the high fidelity simulation. Another study in this issue, by Blissett et al. (2014), directly showed the relation between cognitive load and learning in a similar task, ECG interpretation, contrasting student-generated and expert-generated schemas. Student-generated schemas resulted in increased cognitive load and reduced initial learning (but no long-term differences) compared to expert-generated schemas. A similar manipulation was performed by Chamberland et al. (2014), comparing student-, peer- and expert-generated self-explanations. However, this study examined the value added by expert- and peer-generated explanations. The findings were similar: there was no advantage to expert-generated (and no advantage to peer-generated) explanations. Chan et al. (2015) also used a simulation task to examine the broader question of part versus whole training. In line with similar findings in the general education literature, but directly in contrast to the dominant rhetoric in education, there was a clear superiority for part-task training.

Finally, perhaps the most theory-laden paper, by Kulasegaram et al. (2014), examined the interaction between mixed/blocked practice and single versus multiple contexts (single vs multiple organ systems), for near and far transfer (again, same vs different organ system). Practice with multiple contexts had an overall advantage, but this was most marked for far transfer. Essentially, by seeing the same problem in multiple contexts, students learned to actively disregard the context. Moreover, the deleterious effect of a single context on far transfer was most evident with mixed practice. Thus, as well as providing strategies for concept learning for transfer, the study significantly extends our understanding of the various factors affecting deliberate practice, showing that, under some circumstances, mixed practice can have negative consequences.

All of these studies exemplify, to a greater or lesser degree, Pasteur’s quadrant in Stokes’s (1997) taxonomy, with significant contributions both to the practical aspects of instruction and to theories of learning. It is, I think, no coincidence that the authors of these studies come from both the health sciences and the behavioural science disciplines. They illustrate beautifully the synergistic effects that result from these effective multidisciplinary teams.