ABSTRACT
Recent advances in the neuroscientific understanding of the brain present a tantalizing opportunity: building synthetic machines that compute in ways that differ radically from traditional von Neumann machines. These brain-like architectures, premised on our understanding of how the human neocortex computes, are highly fault-tolerant: they average results over large numbers of potentially faulty components, yet solve very difficult problems more reliably than traditional algorithms. A key principle of operation for these architectures is automatic abstraction: independent features are extracted from highly disordered inputs and used to build abstract, invariant representations of external entities. This feature extraction is applied hierarchically, yielding increasing levels of abstraction at higher levels of the hierarchy.
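The hierarchical abstraction described above can be illustrated with a minimal sketch (this is not the paper's model; all names, sizes, and thresholds are illustrative assumptions): a layer of thresholded feature detectors followed by a max-pooling layer, so that a higher-level unit responds identically regardless of which lower-level variant of its feature was active.

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_layer(x, weights, theta=0.5):
    # Each unit fires (1.0) when its feature's match to the input
    # exceeds a fixed threshold; otherwise it stays silent (0.0).
    return (weights @ x > theta).astype(float)

def pool_layer(responses, group_size=2):
    # The abstraction step: a pooled unit is active if ANY unit in its
    # group is, so the higher-level representation is invariant to
    # which specific variant of the feature appeared in the input.
    n = len(responses) // group_size
    return responses[: n * group_size].reshape(n, group_size).max(axis=1)

# Level 1: 8 random feature detectors over a 16-element input vector.
w1 = rng.normal(size=(8, 16))
x = rng.normal(size=16)

level1 = feature_layer(x, w1)   # concrete feature responses (8 units)
level2 = pool_layer(level1)     # abstract, invariant units (4 units)

# Two activation patterns that differ only in which variant fired
# within each group map to the same abstract representation.
a = np.array([1.0, 0.0, 0.0, 1.0])
b = np.array([0.0, 1.0, 1.0, 0.0])
assert np.array_equal(pool_layer(a), pool_layer(b))
```

Stacking further feature/pooling pairs repeats the pattern, giving increasingly abstract representations at higher levels, as described above.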
This paper describes and evaluates a biologically plausible computational model of this process and highlights the inherent fault tolerance of the biologically inspired algorithm. We introduce a stuck-at fault model for such cortical networks and describe how it maps to hardware faults that can occur on the commodity GPGPU cores used to realize the model in software. We show experimentally that the model's software implementation intrinsically preserves its functionality in the presence of faulty hardware, without any reprogramming or recompilation. This model is a first step towards developing a comprehensive and biologically plausible understanding of the computational algorithms and microarchitecture of computing systems that mimic the human cortex, and towards applying them to the robust implementation of tasks on future computing systems built of faulty components.
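To make the stuck-at fault model concrete, the following sketch (illustrative only; the unit count, noise level, and averaged readout are assumptions, not the paper's parameters) averages a target value over a redundant population of units and pins a random subset to a constant 0 or 1, mimicking stuck-at faults in the hardware backing those units. The readout degrades gracefully with fault rate instead of failing outright.

```python
import numpy as np

rng = np.random.default_rng(42)

def population_readout(value, n_units=1000, stuck_frac=0.0):
    # Every unit reports a noisy copy of `value`; the readout is the
    # population mean, as in architectures that average results over
    # large numbers of potentially faulty components.
    outputs = value + rng.normal(scale=0.1, size=n_units)
    # Stuck-at faults: a random subset of units is pinned to a
    # constant 0 (stuck-at-0) or 1 (stuck-at-1), ignoring the input.
    n_faulty = int(stuck_frac * n_units)
    faulty = rng.choice(n_units, size=n_faulty, replace=False)
    outputs[faulty] = rng.integers(0, 2, size=n_faulty).astype(float)
    return outputs.mean()

healthy = population_readout(0.8)                    # no faults
degraded = population_readout(0.8, stuck_frac=0.2)   # 20% stuck units
```

Note that the same program runs unchanged on the faulty "hardware": accuracy degrades gradually with the fault rate rather than the computation crashing, which is the kind of intrinsic fault tolerance the experiments measure.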
Automatic abstraction and fault tolerance in cortical microarchitectures. ISCA '11.