In recent years, computerized cognitive training activities have become increasingly popular among the general public, who use them as a form of entertainment and/or with the goal of cognitive enhancement. In addition, cognitive training has become relevant as a topic of scientific research. Although the development of the first techniques to improve human performance and learning dates back hundreds of years, there has been renewed interest in the scientific community in implementing cognitive training approaches in order to investigate the development and malleability of cognitive abilities and skills, as well as to uncover underlying mechanisms of our cognitive architecture (e.g., Strobach and Karbach 2016). At the same time, there have been ongoing debates regarding the usefulness and effectiveness of cognitive training, with most of the controversies focusing on the inconsistent findings that make it difficult to draw general conclusions, but also on the pace at which many commercial companies have adopted and put forward their products, making claims that are based on very little scientific evidence (Simons et al. 2016). Nonetheless, the development of effective interventions, especially for individuals with age-related or clinical impairments, remains highly relevant, and their efficacy has been supported by a number of empirical findings showing significant performance gains on trained and untrained tasks after cognitive training (Weicker et al. 2016).

Although there are many ways to implement cognitive training, this Special Topic focuses on targeted approaches to improving cognitive functions via “process-based” interventions, with the overall goal of improving skills or abilities that go beyond the specific training task itself, i.e., of eliciting transfer effects (e.g., Green and Bavelier 2008; Jaeggi and Buschkuehl 2014). Such transfer effects are frequently reported after cognitive interventions focusing on basic processing capacities, such as working memory or executive functions (see Au et al. 2015; Karbach and Verhaeghen 2014, for meta-analyses). Moreover, playing video games, especially of the “action video game” genre, has been shown to improve a variety of cognitive skills (e.g., Li et al. 2009; see Toril et al. 2014, for a meta-analysis). These process-based approaches stand in contrast to strategic or skill-based approaches that are designed to improve performance in the very specific task being trained, such as mnemonics or arithmetic skills (Ericsson et al. 1980). Furthermore, one of the major goals of this Special Topic was to focus on research that takes a theory-driven approach to understanding and explaining the underlying mechanisms of cognitive training.

The papers included in this Special Topic mostly rely on working-memory interventions as the training vehicle, and a majority of them use lab-based and/or researcher-developed tasks. However, some papers investigate the efficacy of more game-like and/or commercially available “apps” or programs that are more complex than typical lab-based interventions and, as such, might reflect a more ecologically valid approach to investigating the effects of cognitive training. Furthermore, the work presented here incorporates a wide range of approaches, including cognitive, developmental, neuroscientific, and meta-analytic methodologies, with the common goal of elucidating how cognitive training may contribute to cognitive enhancement. Overall, the Special Topic consists of 14 contributions by 62 authors, representing work conducted in various countries and labs. Twelve of the papers are primary research articles, one is a meta-analytic review, and one is an opinion paper.

The overarching topics addressed by the contributions in this Special Topic reflect several key issues currently being discussed in the cognitive training literature, namely, whether there are individual differences that drive some of the effects observed during and after cognitive training, and if so, whether those individual differences might account for some of the discrepant findings often observed between studies and labs. Furthermore, one of the fundamental questions in intervention work concerns the underlying mechanisms of training and transfer, and thus, several papers tackle this issue by specifically manipulating key variables that are thought to determine learning and transfer, and/or by comparing specific variants of the intervention and testing their differential effects on a selection of transfer measures.

Hering et al. (2017) investigated individual differences that might moderate training and transfer effects of a verbal working-memory intervention in older adults. Although the authors found no far transfer to “everyday behavior,” they observed near-transfer effects to non-trained working-memory tasks that were moderated by age and crystallized intelligence; thus, the authors argue that individual differences have to be considered to better understand who benefits from working-memory training.

Similarly, Guye et al. (2017) investigated individual differences in training trajectories using three different working-memory interventions in younger and older adults. However, in contrast to Hering et al. (2017), they found very limited evidence for individual differences as predictors of training outcome. They did find that cognitive performance at baseline was related to training improvement: those with higher abilities seemed to benefit more, indicating a magnification effect, which was especially apparent in their younger adult sample.

Karbach et al. (2017) administered a task-switching intervention in three populations: children, younger adults, and older adults. Their data demonstrated that training those specific executive-function skills led to a reduction of age differences, and furthermore, they observed that baseline abilities predicted both training and transfer gains. However, in contrast to Guye et al. (2017), their results showed that participants with lower abilities improved the most, indicating a compensation effect in that the training was most beneficial for those who had the most room to improve.

Hogrefe et al. (2017) systematically investigated the mechanisms of transfer that might influence and explain working-memory training outcomes. Specifically, the authors investigated the beneficial effects of lower reaction-time variability during visual n-back and consistency n-back training. Compared to a passive control group, the enriched training demands of the consistency n-back training led to more pronounced near-transfer effects, as well as stronger transfer to a structurally different and more complex working-memory task. These effects were promoted by a less task-specific increase in working-memory efficiency. The impact of reaction-time variability is discussed in the context of previous training studies and their behavioral as well as neuronal effects.

Buschkuehl et al. (2017) investigated the effects of training on an adaptive change-detection task. They compared two training conditions differing in key task features (e.g., whether feedback was provided). Participants in both conditions substantially and equally improved their performance on this task over the course of 10 training sessions. However, an exploratory investigation of transfer effects showed that these improvements remained highly task-specific and did not generalize to untrained tasks, illustrating that not all types of working-memory tasks might be suitable as interventions to promote transfer.

Blacker et al. (2017) focused on two commonly used working-memory interventions to investigate their differential impact on near- and far-transfer tasks, as well as on neural activity measured via electroencephalography. In their first experiment, the authors relied on a correlational approach as a proof of concept to demonstrate that relational working memory predicted both n-back performance and matrix reasoning, but not complex span performance, arguing that differences in the requirements to extract and maintain relational information in working memory might account for the differential training efficacy observed in the literature. The second experiment built on the first by using n-back and complex span tasks as interventions to further probe their relationship to relational working memory and reasoning. The results indeed showed differential effects on both brain and behavior as a function of training paradigm. Specifically, compared to complex span training, dual n-back training emerged as the more robust intervention in that there were clear indications of near transfer, and furthermore, n-back training resulted in greater neural changes (i.e., increases in alpha power) than complex span training or a control intervention. Interestingly, and similar to Karbach et al. (2017), they found that individuals with lower working-memory performance at baseline seemed to benefit more from training than high-performing individuals. However, there was no indication of far transfer to matrix reasoning. Furthermore, inconsistencies between the two experiments in terms of the observed relationships between assessments illustrate the need to further clarify the underlying mechanisms of learning and transfer after working-memory training.

In an attempt to go beyond the investigation of differential effects of specific interventions and to test whether combining two approaches might lead to additive effects, Ralph et al. (2017) trained an adolescent population with an intervention that targeted two domains, namely, primary memory and secondary memory. Transfer effects were assessed with free-recall tasks (near transfer), matrix reasoning and verbal inference tasks, and academic performance (far transfer) before, after, and 6 months following the combined training that included both domains. However, compared to a control group that trained only one component, the results showed neither significant near- nor far-transfer effects. Thus, additional research is needed to elucidate whether a combined training can positively impact higher-level cognition.

Another study investigating the effects of combined training focused on a multimodal intervention that included cognitive and aerobic training. In this study, Lai et al. (2017) compared the effects of simultaneous and sequential training formats in older adults. Participants performed 12 sessions of same-day multimodal training under simultaneous (concurrent cognitive dual-task and aerobic exercise) or sequential (cognitive dual-task followed by aerobic exercise) conditions. The sequential training group showed significantly larger improvements in working memory than the simultaneous group. Motivation to engage in cognitive effort as well as baseline aerobic fitness moderated the training gains, again highlighting the role of individual differences in training outcomes.

Focusing specifically on motivational effects related to the training environment, Mohammed et al. (2017) compared 4 weeks of gamified working-memory training (a 3D space-themed “collection” game; Deveau et al. 2015) with working-memory training without motivational features. Participants in the game-training group reported that they enjoyed the intervention more and exerted more effort than the group who trained on the basic version. While they also improved more during training, there were no significant group differences on any of the transfer measures at posttest. The authors conclude that the inclusion of motivational features neither substantially benefited nor hurt broader learning. However, their findings may provide important information for the design of training interventions, especially when it comes to tailoring the training to participants’ interests in order to improve engagement and adherence in intervention programs.

Most of the studies referenced above were primarily interested in the investigation of underlying mechanisms and individual differences, and thus, it is perhaps not surprising that very few of them included outcome measures that might reflect real-world performance (but see Hering et al. 2017; Mohammed et al. 2017; Ralph et al. 2017). Nonetheless, given the common assumption that cognitive training approaches are broadly beneficial, it would be valuable if researchers routinely included measures that serve at least as proxies for everyday life. This issue is specifically addressed in the opinion paper by Söderqvist and Bergman Nutley (2017), which focuses on the issue of transfer using working-memory training as an example. They point out that the current literature often lacks a theoretically driven approach to selecting and justifying the outcome measures used to assess transfer. More importantly, the implementation, or even a discussion, of measures that might be relevant from a real-world perspective, such as academic performance, is often missing. They urge the research community to adopt more ecologically valid methodological approaches by selecting transfer measures that mirror the complexity of everyday behavior in order to further understand the potential benefits and limitations of cognitive enhancement.

That said, several empirical research papers in this Special Topic did specifically assess transfer to outcome measures that are highly relevant for everyday behavior. Rosenbaum et al. (2017) investigated risk-taking in adolescents as a proxy for an important real-world behavior. They tested whether 4 weeks of working-memory training (compared to active controls) increased performance on cognitive control measures and decreased risk-taking in adolescents. They found that working-memory training transferred to short-term memory performance but not to performance on basic cognitive control measures, such as the Go/No-Go task or the Stroop task. In addition, adolescents performed two risk-taking tasks administered after training completion, either with or without an anonymous peer audience. Those who received working-memory training showed reduced levels of risk-taking when observed by peers. Thus, even though the lack of transfer to basic cognitive control measures indicated that improvements in basic control abilities may not mediate the decrease in risk-taking behavior, the study illustrates the potential impact of cognitive training on everyday behavior. At the same time, it clearly shows that more research is needed to understand the nature of these effects.

Kolodny et al. (2017) used a computerized intervention that was previously developed for children with attention deficit hyperactivity disorder (ADHD) to train high-functioning adults with ADHD. The intervention targeted several specific attentional functions and was administered over an 8-week period. Compared to an active control group, the trained group improved their performance on selective and executive attention tasks, and these near-transfer effects were maintained at the follow-up session, which was administered 2–3 months after training completion. Even though the authors did not observe any improvement in self-reported ADHD symptoms, their results nonetheless illustrate that targeted training can be beneficial even for high-functioning adults with ADHD, a population whose cognitive functions might be particularly hard to alter.

Given that the use of “brain training games” has become quite ubiquitous, the systematic evaluation of their effectiveness in an ecologically valid setting is of high interest, especially since the benefits (or lack thereof) of such programs have been the subject of controversy. To address this issue, Strobach and Huestegge (2017) analyzed data from a commercially available program, focusing on a selection of training tasks that all tap specific working-memory processes, namely, capacity and updating, and compared those data with the performance of an active control group who completed an intervention consisting of general knowledge tasks. Participants independently completed both the intervention and the testing sessions at home. Going beyond previous results in this domain, which have mostly been limited to specific effects, the data reported here provide evidence for near-transfer effects; far-transfer effects, however, seem to be more elusive. In addition, similar to the results of Guye et al. (2017), larger improvements were observed in those with higher baseline ability, indicating a magnification effect.

Also focusing on commercial training programs, Tetlow and Edwards (2017) adopted a meta-analytic approach. Their aim was to evaluate the efficacy of commercially available computerized cognitive training programs to improve cognition in older adults and to examine far transfer to untrained tasks that might be relevant for everyday functioning. They report significant small to medium training effects for several cognitive domains, namely, attention, processing speed, and visuospatial memory. Moreover, there was also evidence for far transfer to self-reported measures of everyday function, indicating that commercially available computerized cognitive training programs may improve certain cognitive abilities and aspects of everyday behavior, at least in older age. However, the study also showed that the results were inconsistent across different measures of everyday functioning, and furthermore, these effects were based on very few results, mainly because the majority of studies did not include (proxies for) daily-life behavior. Again, this illustrates the need for more research addressing these issues (cf. Söderqvist and Bergman Nutley 2017).

Interestingly, while some papers in this Special Topic (e.g., Blacker et al. 2017; Karbach et al. 2017) find that training interventions are most beneficial for those with lower baseline abilities, other papers (e.g., Guye et al. 2017; Strobach and Huestegge 2017) find the opposite. As such, it is unclear whether such individual differences emerge as a function of the population (e.g., high-performing young adults vs. individuals with developmental, age-related, or clinical impairments), the type of intervention (e.g., process-based vs. multidomain or strategy-based training), specific features of the training intervention (such as the level of adaptivity in terms of task difficulty, the intensity and spacing of the training sessions, or motivational features of the task environment), or more general individual differences (such as differences in personality, motivation, or genetic predisposition). Furthermore, while the reports here seem to support the broader literature in that they provide rather consistent evidence for near transfer, indications of far-transfer effects seem to be more elusive; and while there is some indication that training effects translate into real-world behavior, the overall evidence for such effects remains limited, not least because such measures are rarely included in study protocols.

Thus, should we simply conclude that cognitive training is “not effective” in terms of far transfer and real-world behavior? We think that the answer to this question should be an emphatic “No.” What the contributions to this Special Topic show is that cognitive and neural plasticity is possible across a wide range of populations and following a diverse set of interventions. They also clearly illustrate that we need more research in order to understand the mechanisms driving training and especially transfer effects, as well as the individual differences that moderate those effects. It becomes apparent that the search for effective cognitive interventions will not benefit from “one-size-fits-all” approaches, and that we need tailored interventions in order to maximize training outcomes. Of course, this probably also means that cognitive training might not work for everyone, but that should not prevent us from investigating how training interventions need to be designed so that those who do benefit can improve cognition in ways that might even translate into everyday life. Thus, the question really should be “what type of training is best for whom?” And even though the field of cognitive training research has been moving in that direction, there clearly is a lot of work to be done. For instance, theoretical models to explain these training and transfer effects (or the lack thereof) are still mostly missing and should be developed and tested in future studies.

Overall, the contributions in this Special Topic target some of the key issues currently being discussed in the cognitive training literature, illustrating the broad interest in this field. Furthermore, these contributions highlight the importance of investing resources into elucidating the underlying mechanisms and individual differences that might determine training outcomes, which will ultimately provide a better understanding of the potential benefits and limitations of cognitive training, as well as of the malleability of human performance and cognitive architecture more generally. As such, the contributions to this Special Topic are not only of interest to the specific field but are also highly relevant for a broad readership interested in cognitive science.