By boldly stating a list of fourteen specific Grand Challenges for Engineering, the influential National Academy of Engineering (NAE) has, in effect, set the stage for shaping the next round of major scientific and technological initiatives. In an open electronic publication (Perry et al. 2008), and on a special web site (http://www.engineeringchallenges.org), the authors eloquently describe their rationale for selecting the Grand Challenges. Importantly, these challenges were formulated entirely on the basis of human needs, rather than on narrower disciplinary or technological considerations. These fourteen challenges will undoubtedly influence science and technology policy around the world for the next decade and set the tone for public scientific discourse. They will likely shape the funding priorities of government and non-government agencies. They will help guide strategic planning at universities and research-driven corporations alike. Most significantly, when these challenges are overcome, they will have a far-reaching, transformational effect on the future of humanity.

The Ninth Challenge: The ninth Grand Challenge is particularly significant for the Neuroinformatics community—it challenges engineers and neuroscientists to work together to reverse engineer the human brain, no less. In video-recorded messages on the above-mentioned website, noted scientists Robert Langer, Bernadine Healy, and NAE President Charles Vest acknowledged the formidable complexity of the task. At the same time, they described the compelling need, and the vast benefits to humanity that would result from a deeper understanding of brain structure and function: the creation of “general-purpose” artificial intelligence, the enhancement of human learning, better forecasting of drug side effects, and computer-assisted methods for diagnosing, treating, and monitoring mental illnesses in a personalized manner. Neuro-prosthetic implants developed using the fruits of overcoming this challenge could help people with dementia to remember, blind people to see, deaf people to hear, speech-impaired people to communicate, and paralyzed people to control their limbs mentally. Evidently, overcoming the ninth Grand Challenge will favorably impact the seventh, eighth, thirteenth, and fourteenth challenges as well. The reverse is also true: advances in virtual reality, for example, will certainly assist us in addressing the ninth challenge. Thus, the ninth challenge should be viewed as part of a synergistic “ecosystem of challenges” posed by the NAE.

At the same time, reverse engineering the brain is also an essential step toward understanding the human mind. It is notable in this respect that reverse engineering the brain is also a central tenet of the Decade of the Mind initiative (Albus et al. 2007), and the focus of the fourth symposium in that series (http://www.dom-4.com), held at Sandia National Laboratories. Understanding the human mind at this level will have profound consequences for technology, health, and society at large. Moreover, reverse engineering the brain and the mind has deep philosophical consequences in addition to its practical implications (Samsonovich and Ascoli 2002, 2005). Some also argue that new theoretical paradigms may be needed to truly understand the brain and its cognitive functions, above and beyond the data-driven task of sufficiently detailed mapping (De Schutter 2008). The combination of sheer intellectual depth, the magnitude of the expected practical results, and the massive blueprint required for the most complex object in the known universe renders reverse engineering the brain the grandest of the NAE’s fourteen challenges.

Assessing the Challenge: To develop a feel for just how great this challenge is, consider the following. Mapping the entire nervous system of the nematode C. elegans, which comprises just 302 neurons, required a multi-year effort by a dedicated team (White et al. 1986). This is an organism for which much is already known, including the entire genome (C. elegans Sequencing Consortium 1998), cell lineage (Sulston and Horvitz 1977; Sulston et al. 1983), gene expression map (Kim et al. 2001), interactome (Li et al. 2004), and anatomy (http://www.wormatlas.org). Yet, the nervous system of this worm remains to be “reverse engineered” (Schafer 2005). It is not yet possible to perform a realistic computational simulation of its entire nervous system, although specific parts can be modeled (White et al. 2007).
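
The end product of such a mapping effort is, at its core, a wiring diagram. As a minimal illustration, the sketch below represents a connectome fragment as a weighted directed graph and queries two basic properties; the neuron names and synapse counts are purely hypothetical, not actual C. elegans data.

```python
# Minimal sketch: a connectome fragment as a weighted directed graph.
# Neuron names and synapse counts below are illustrative placeholders,
# not entries from the real C. elegans wiring diagram.

from collections import defaultdict

class Connectome:
    """Directed graph: connections[pre][post] = number of synapses."""

    def __init__(self):
        self.connections = defaultdict(dict)

    def add_synapses(self, pre, post, count):
        # Accumulate counts so repeated observations of the same
        # pre -> post pair are merged rather than overwritten.
        self.connections[pre][post] = self.connections[pre].get(post, 0) + count

    def out_degree(self, neuron):
        # Number of distinct postsynaptic partners.
        return len(self.connections.get(neuron, {}))

    def reciprocal_pairs(self):
        # Pairs connected in both directions, each reported once.
        pairs = set()
        for pre, posts in self.connections.items():
            for post in posts:
                if pre in self.connections.get(post, {}):
                    pairs.add(tuple(sorted((pre, post))))
        return pairs

c = Connectome()
c.add_synapses("AVAL", "AVAR", 3)   # hypothetical entries
c.add_synapses("AVAR", "AVAL", 2)
c.add_synapses("AVAL", "PVCL", 1)
print(c.out_degree("AVAL"))         # 2
print(c.reciprocal_pairs())         # {('AVAL', 'AVAR')}
```

Even this toy structure makes plain that the hard part is not storing the graph but acquiring and validating its millions (or, for mammals, trillions) of entries.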

As one considers higher life forms, the task becomes dramatically harder, and even less has been accomplished to date. For instance, the tiny fruit fly brain remains to be fully mapped as of this writing, although major efforts are underway. Moving to the mammalian central nervous system, three-dimensional high-resolution electron microscopy is revealing just how densely interconnected cortical neuropil is, when a still “tiny” 600 μm³ sample is examined at a resolution of 2–3 nm (Mishchenko et al. 2008). Yet, even if this feat were scaled up to the whole cortex, the task of reverse engineering the brain would remain incomplete. One reason is that the brain consists of many more active components than just neurons. Local communication among neurons is shaped by astrocytes and microglia, each of which contributes signals that influence neuronal output. These and other non-neuronal elements, including the brain vasculature and meninges, can modulate synaptic strength (Nadkarni et al. 2008; Stevens 2008; Theodosis et al. 2008) or directly alter neuronal activity (Wieseler-Frank et al. 2004). Thus, reverse engineering the brain must encompass not only longer-range communication via electrical conduction, chemical signals, and neuronal function, but also more local cellular events that can affect the initiation or relay of these phenomena.

In engineering parlance, the brain is a system whose behaviors “emerge” from a complex set of interactions among its numerous components. True reverse engineering of the brain requires the demonstration of these emergent behaviors in silico. This is systems biology taken to an extreme: reverse engineering the brain goes well beyond mapping its structure, cellular makeup, and molecular composition, although these are necessary prerequisites. To meet the ninth Grand Challenge, one must take the nontrivial next step and create a successful computational system (combining appropriate hardware and software components) that algorithmically recapitulates all important brain functions (Ascoli 2003).
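
To make the notion of emergence concrete, consider the toy simulation below: a ring of leaky integrate-and-fire units in which stimulating a single unit triggers a wave of activity that visits the whole network, a network-level behavior that no single unit encodes. All parameters are arbitrary choices for the sketch, not biologically calibrated values.

```python
# Toy illustration of "emergent" dynamics: a ring of leaky
# integrate-and-fire neurons. Kicking neuron 0 once launches a wave
# that propagates around the ring, even though each neuron only
# "knows" its single downstream neighbor. Parameters are arbitrary.

def simulate_ring(n_neurons=10, steps=100, leak=0.9,
                  weight=1.5, threshold=1.0, stim=1.2):
    v = [0.0] * n_neurons          # membrane potentials
    spikes = []                    # (time step, neuron index)
    inputs = [0.0] * n_neurons
    inputs[0] = stim               # kick neuron 0 at t = 0
    for t in range(steps):
        fired = []
        for i in range(n_neurons):
            v[i] = leak * v[i] + inputs[i]   # leaky integration
            if v[i] >= threshold:
                fired.append(i)
        inputs = [0.0] * n_neurons
        for i in fired:
            v[i] = 0.0             # reset after a spike
            spikes.append((t, i))
            inputs[(i + 1) % n_neurons] = weight  # excite next in ring
    return spikes

spikes = simulate_ring()
# Every neuron ends up firing: the wave is a property of the network.
print(sorted({i for _, i in spikes}))   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

The point of the sketch is the direction of explanation: the traveling wave can only be understood, and only reproduced in silico, by simulating the interactions, not by cataloguing the components.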

The above discussion is not intended to imply that this challenge is impossible to overcome—quite the contrary. The past century of progress in neuroscience, together with the resulting base of experience with simpler model organisms and a wealth of new tools and relevant technologies, already provides realistic avenues for meeting this challenge. For a start, it is abundantly clear that this project will require a cooperative partnership of researchers from multiple disciplines, including imaging, image analysis, neuronal simulation, network analysis, molecular neuroscience, informatics, and high-performance computing, among others. It is also uncontroversial that this effort will require a level of international cooperation without precedent (Spitzer 2008). Auspiciously, we now live in an age in which successful examples of large-scale multi-disciplinary cooperative projects exist, notably the Human Genome Project (Venter et al. 2001). Not so long ago, terascale computing was posed by the Defense Advanced Research Projects Agency (DARPA) as a Grand Challenge; this level of computing is now available in desk-side computers, and state-of-the-art computer technology has already moved beyond the petascale. Recently developed optical microscopes (e.g., Klar et al. 2001) have demonstrated that resolution finer than the long-standing Abbe/Rayleigh limit can be achieved. In other words, the challenge may be gigantic, but it is nonetheless humanly achievable within our lifetime.

One is naturally led to speculate on just how long it may take for scientists to reverse engineer the brain. Clearly, the answer depends upon the level of resources, the quality and quantity of the workforce that can be dedicated to the task, the quality of project coordination, and the effectiveness of the senior management guiding the project. The answer also depends upon just how much we are able to accelerate progress in key rate-limiting steps, for example, automated imaging systems, computing speed, and storage capacity. It also depends on our ability to partition the problem into smaller modules that can be worked on by multiple teams in parallel (a “divide and conquer” strategy). All told, it is probably fair to speculate that this will be a multi-decade effort that will bear plenty of worthwhile fruit even before it is completed. Perhaps the most important outcome will be a cadre of computational neuroscientists armed with the knowledge and tools to redefine the future of humanity.
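
The “divide and conquer” strategy can be sketched concretely: partition a large data volume into tiles that independent teams or processes analyze in parallel, then merge the per-tile results. Everything in the example below (the tile size, the synthetic volume, and the toy above-threshold “analysis” standing in for real segmentation) is an illustrative assumption.

```python
# Divide-and-conquer sketch: split a 3-D volume into non-overlapping
# tiles, analyze each tile independently (here, in parallel threads),
# then merge the per-tile results. The "analysis" is a toy stand-in.

from concurrent.futures import ThreadPoolExecutor

def make_tiles(shape, tile):
    """Yield (start, stop) index ranges covering a 3-D volume."""
    xs, ys, zs = shape
    tx, ty, tz = tile
    for x in range(0, xs, tx):
        for y in range(0, ys, ty):
            for z in range(0, zs, tz):
                yield ((x, y, z),
                       (min(x + tx, xs), min(y + ty, ys), min(z + tz, zs)))

def analyze_tile(volume, start, stop, threshold=0.5):
    # Stand-in for a real per-tile analysis (e.g., segmentation):
    # count voxels above a threshold.
    count = 0
    for x in range(start[0], stop[0]):
        for y in range(start[1], stop[1]):
            for z in range(start[2], stop[2]):
                if volume[x][y][z] > threshold:
                    count += 1
    return count

# Tiny synthetic volume: 1.0 on the main diagonal, 0.0 elsewhere.
n = 8
volume = [[[1.0 if x == y == z else 0.0 for z in range(n)]
           for y in range(n)] for x in range(n)]

tiles = list(make_tiles((n, n, n), (4, 4, 4)))
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda t: analyze_tile(volume, *t), tiles))
print(len(tiles), sum(results))    # 8 tiles, 8 diagonal voxels found
```

In a real scale-up, the merge step (reconciling structures that cross tile boundaries) is where most of the difficulty lies; the sketch sidesteps it by keeping tiles independent.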

The Central Role of Neuroinformatics: Regardless of the countless organizational details, the resulting scientific activity will catapult the field of neuroinformatics to center stage. Without a doubt, neuroinformatics and computational neuroscience will play an essential integrative role (Bjaalie 2008), given the importance of simulation-based science stressed by Dr. Vest of the NAE in describing this Grand Challenge. Simulation-based neuroscience is the ideal mechanism for integrating data from currently under-connected realms: the traditional “wet” neuroscience laboratory, the world of engineering instrumentation, and the computational sciences. Since the results of computational models can be compared directly to physically recorded data, simulations also provide an effective “language” for identifying and communicating critical gaps in understanding across disciplines. Using this medium, and other related efforts at building common terminologies (Ascoli et al. 2008; Gardner et al. 2008b), we foresee a new virtuous cycle in which the needs of neuroscience rapidly instigate advances in computer science and engineering, and these advances in turn enable a fresh round of advances in neuroscience. We posit that this cycle will culminate in the reverse-engineered brain. When the task is complete, a perturbation of the in silico brain model should result in a change in its activity that is reflected in vivo, and vice versa.
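
One simple instance of this “language” is a quantitative goodness-of-fit score between a model trace and a recorded one. The sketch below computes a Pearson correlation on synthetic firing-rate traces; both the traces and the choice of metric are illustrative assumptions, not a prescription.

```python
# Sketch: scoring how well a simulated trace matches a recorded one
# with a Pearson correlation. Both traces are synthetic examples on
# a common time base; real comparisons use richer statistics.

import math

def pearson(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

recorded  = [0.0, 0.2, 0.9, 0.4, 0.1, 0.0]   # e.g., a firing-rate trace
simulated = [0.1, 0.3, 0.8, 0.5, 0.2, 0.1]   # model output

score = pearson(recorded, simulated)
print(round(score, 3))   # close to 1.0: model tracks the recording
```

A score like this is what makes the cycle described above operational: a model that fails the comparison pinpoints exactly which measurement or mechanism is missing.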

Neuroinformatics is increasingly a multi-faceted discipline touching upon all in silico aspects of neuroscience. Each of these aspects has proven to be nontrivial. For example, the task of representing, sharing, and combining data is by itself considerably challenging (Ascoli 2006). Encouragingly, several large-scale cooperative efforts are already underway within the neuroinformatics community (e.g., Gardner et al. 2008a; Bug et al. 2008; Gupta et al. 2008; Marenco et al. 2007), and earlier national initiatives (Huerta et al. 1993) continue to bear fruit.

The Role of Imaging: Precise mapping of brain structure is a prerequisite to accurate simulation of brain function. This task originated in other communities, notably imaging and computer vision, but its fruits are gravitating toward the field of neuroinformatics, where they can be integrated with other data. For example, there is a growing trend toward integrating efforts aimed at advanced neuroimaging at scales ranging from the whole body down to the level of macromolecules (Pagani et al. 2008; Dolan 2008; Micheva and Smith 2007; Klar et al. 2001). Recent advances in molecular imaging now allow individual neurons to be distinguished in what was until recently a vast connected mass of gray matter (Livet et al. 2007). Recent efforts in computational image analysis (Bjornsson et al. 2008) have emphasized methods that map brain tissue far more completely than earlier approaches, using a combination of multi-spectral imaging and integrative image analysis. An important outcome of such efforts is the emergence of bio-image informatics (Peng 2008) as a discipline in its own right. The unique structural complexity of brain tissue is being described with growing realism by the field of computational neuroanatomy (Halavi et al. 2008; Stepanyants et al. 2008). While the past decade has witnessed impressive advances in each of these fields individually, and recent efforts increasingly demonstrate integration across them, much remains to be done. First, each of these fields must advance its tools to a level of full automation and accurate validation sufficient to become usable on a massively larger scale than currently possible (Ascoli 2008). Second, these fields must be integrated much more seamlessly, greatly strengthening their connection with simulation-based neuroinformatics. This will require the development of data interchange standards, methods to better organize data (Mackenzie-Graham et al. 2008), and extensive sharing of tools (Van Horn and Ball 2008). While these activities are already underway in laboratories around the world, they will need to expand by several orders of magnitude. Such a scale-up will dramatically transform the field of neuroinformatics.
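
As a toy illustration of one image-analysis building block, the sketch below labels connected components in a small binary 2-D image, a stand-in for separating individual stained structures; real pipelines operate on 3-D, multi-spectral data with far more sophisticated methods.

```python
# Toy image-analysis step: 4-connected component labeling of a binary
# 2-D image via breadth-first flood fill. A stand-in for segmenting
# stained structures; real tools work on 3-D multi-spectral data.

from collections import deque

def label_components(image):
    """Return a label grid and the number of 4-connected components."""
    rows, cols = len(image), len(image[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] and not labels[r][c]:
                current += 1                     # new component found
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:                     # flood fill from seed
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and image[ny][nx] and not labels[ny][nx]):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

img = [[1, 1, 0, 0],
       [0, 1, 0, 1],
       [0, 0, 0, 1],
       [1, 0, 0, 0]]
_, count = label_components(img)
print(count)   # 3 separate "structures"
```

Scaling even this elementary step to whole-brain volumes, with validation, is the kind of automation gap the paragraph above describes.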

The Role of Engineers: As noted by Dr. Vest, engineers have a vital role to play in this grand endeavor. They are needed not only to develop the required tools and technologies, but also to advance the very manner in which engineering is practiced and applied. He particularly stressed the importance of simulation-based science and engineering as the way of the future across multiple Grand Challenges. By rising to the challenge of simulating systems with brain-like complexity, engineers will not only help overcome the ninth Grand Challenge, but also address several other human needs. The experience gained from such ultra-scale simulation studies will offer benefits beyond biology, for example, improving our ability to forecast the impact of natural disasters and creating personalized medicines.

To meet this Grand Challenge, there is an acute need for a fresh round of engineering advances in imaging systems, image computation algorithms, and large-scale computing hardware and software, all driven by the needs of brain science. As a case in point, the Internet is simply inadequate for exchanging large datasets: scientists routinely exchange data by copying them onto terabyte-size disk drives or digital video disks (DVDs) and sending them to colleagues by overnight mail. Greatly increased networking speeds will also be needed to access the next generation of supercomputing facilities. Starting with the classic Connection Machine (Hillis and Tucker 1993), supercomputer designs have long derived inspiration from the idea of approximating brain-like connectivity in general-purpose machines. While this field has evolved considerably, as exemplified by the contemporary IBM Blue Gene system (Gara et al. 2005), it still has a long way to go to support the ninth Grand Challenge.
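
A back-of-the-envelope calculation shows why shipping disks can beat the network. Assuming, purely for illustration, a sustained 100 Mbit/s link, moving a single terabyte takes roughly a full day:

```python
# Why overnight mail competes with the network: time to move 1 TB
# over a sustained 100 Mbit/s link. The link speed is an
# illustrative assumption, not a measured figure.

terabyte_bits = 1e12 * 8          # 1 TB expressed in bits
link_bps = 100e6                  # assumed 100 Mbit/s sustained throughput
seconds = terabyte_bits / link_bps
hours = seconds / 3600
print(round(hours, 1))            # 22.2 -- about one overnight shipment
```

At whole-brain data volumes, many petabytes, the gap widens by three to four orders of magnitude, which is why networking itself becomes a rate-limiting engineering problem.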

The Need for Public Support and Understanding: Considering the magnitude of the task, long-term funding from all available sources will be needed. Securing public funding will require citizens to become better informed about the unique nature of neuroscience. A provocative recent article complained that the payoffs from investments in research on brain disorders have been small compared to those for other diseases, such as diabetes and heart disease, and suggested that funding for neuroscience should be reduced (McGee 2007). In response, others have argued the opposite (Roysam 2007): precisely because the brain is dramatically more complex than other organs, funding for neuroscience should be dramatically increased to help researchers grapple with that complexity.

In conclusion, neuroinformatics is the ideal discipline and intellectual substrate for capturing the end product of reverse engineering the brain and presenting it in a usable, actionable form to the scientific community. The reverse-engineered brain will take the form of a versatile in silico model that becomes the basis for simulation-based prediction of brain responses to various inputs and conditions (normal as well as dysfunctional), and sets the stage for the next round of transformational discoveries with far-reaching human impact.