Abstract
In a recent paper, Kaplan (Synthese 183:339–373, 2011) takes up the task of extending Craver’s (Explaining the brain, 2007) mechanistic account of explanation in neuroscience to the new territory of computational neuroscience. He presents the model to mechanism mapping (3M) criterion as a condition for a model’s explanatory adequacy. This mechanistic approach is intended to replace earlier accounts which posited a level of computational analysis conceived as distinct and autonomous from underlying mechanistic details. In this paper I discuss work in computational neuroscience that creates difficulties for the mechanist project. Carandini and Heeger (Nat Rev Neurosci 13:51–62, 2012) propose that many neural response properties can be understood in terms of canonical neural computations. These are “standard computational modules that apply the same fundamental operations in a variety of contexts.” Importantly, these computations can have numerous biophysical realisations, and so straightforward examination of the mechanisms underlying these computations carries little explanatory weight. Through a comparison between this modelling approach and minimal models in other branches of science, I argue that computational neuroscience frequently employs a distinct explanatory style, namely, efficient coding explanation. Such explanations cannot be assimilated into the mechanistic framework but do bear interesting similarities with evolutionary and optimality explanations elsewhere in biology.
Notes
In what follows I focus on this particular presentation of the mechanistic approach to computational neuroscience and cognitive science. For work in a similar vein see Piccinini (2007), Piccinini and Craver (2011), Kaplan and Craver (2011) and Piccinini and Bahar (2013). The position I develop here is directed at this general view, to the extent that it presupposes the two mechanistic norms that I outline below (3M and MDB), but for ease of exposition I make the contrast explicit only with Kaplan’s (2011) paper. Note that not all mechanists are committed to the narrow view of mechanistic explanation that is targeted here. See e.g. Bogen and Machamer (2010).
It is not clear whether each of these theses is individually sufficient for computational chauvinism, whether they are jointly sufficient, or whether any of them is necessary. In proposing my own view on the explanatory models in computational neuroscience, I will endorse modified versions of C1 and C3.
Rather tellingly, Kaplan (2011, p. 349) quotes Dayan and Abbott (2001, p. xiii) in order to invoke their distinction between descriptive and mechanistic models; but this third class of models gets no mention whatsoever. In comparison, Craver (2007, p. 162) acknowledges that there may also be non-mechanistic (“non-constitutive”) explanation in neuroscience, but restricts his focus; that is, he accepts the possibility of explanatory pluralism. It might be argued, in a similar vein, that the I-minimal models I discuss below are not counter-examples to Kaplan’s account because they are simply beyond his focus. But that would be to ignore his explanatory monism (the claim that all explanatory models in computational neuroscience are mechanistic).
Piccinini and Scarantino (2010) are careful to distinguish computation from information processing. This distinction is not critical for understanding the scientific material discussed below because these neuroscientists are not associating “neural computation” with digital computation, or any other man-made computational system. Rather, “neural computation” is a catch-all for whatever information processing (i.e. formally-describable input-output transformations) neural systems do, and the theoretical definition of neural computation remains a work in progress.
Crucially, Kaplan and Craver’s (2011, p. 611) version of 3M lacks the qualificatory phrase “to the extent that”. They write that: “In successful explanatory models in cognitive and systems neuroscience (a) the variables in the model correspond to components....”. This can reasonably be interpreted as requiring that all components correspond to mechanism parts, thus requiring that models contain no non-referring mathematical features such as dummy variables or un-interpretable parameters. This is a problematic requirement that Kaplan (2011, pp. 347–348) avoids, stating only that at least one model variable or dependency must correspond to a mechanism component or causal relation.
Unfortunately Kaplan gives no indication of how the “all else equal” clause should be spelled out. Given that he mentions idealization and abstraction only as being there for “computational tractability” or because the details are unknown, I assume that the “all else equal” clause just means that if the choice is between a fully detailed model which is impossible (or very difficult) to implement in your hardware and an elliptical model that works, the elliptical model is better.
As I see it, the primary statement of the MDB assumption is in Craver’s (2007, p. 113) discussion of the “mechanism sketch”, as part of his presentation of the norms for mechanistic explanation: “A mechanism sketch is an incomplete model of a mechanism. It characterizes some parts, activities, or features of the mechanism’s organization, but it leaves gaps. Sometimes gaps are marked in visual diagrams by black boxes or question marks. More problematically, sometimes they are masked by filler terms that give the illusion that the explanation is complete when it is not. .... Terms such as “activate,” “inhibit,” “encode,” “cause,” “produce,” “process,” and “represent” are often used to indicate a kind of activity in a mechanism without providing any detail about exactly what activity fills that role. Black boxes, question marks, and acknowledged filler terms are innocuous when they stand as place-holders for future work....”. In general terms, better explanations arise as research progresses along the axis from mechanism sketches, to mechanism schemata, and finally to complete mechanistic models. On most occasions in which the MDB assumption is in play, a mechanism sketch is held up to unfavourable comparison against an improved, more detailed model. See discussion below of the Hodgkin–Huxley (HH) model, and Kaplan’s (2011) other case studies of progress in model building through de-idealization.
A note on terminology: by ‘abstract’ I mean a model which leaves out much biophysical detail, in other words ‘highly incomplete’; by ‘idealized’ I mean a model which describes a system in an inaccurate or unrealistic way (Thomson-Jones 2005). In criticizing the MDB assumption, abstraction is the more relevant term. However, the literature on models often conflates these two.
Strictly speaking, the term ‘V1’ only refers to primary visual cortex in primates, whereas ‘striate cortex’ is appropriate for primary visual cortex of felines or primates. Below I do sometimes use ‘V1’ to refer to primary visual cortex of both kinds of animals, as is now quite common in the literature.
See Chirimuuta and Gold (2009) for a more detailed review of these topics.
At worst, the HH model has been described by mechanist philosophers of neuroscience as a merely phenomenal and not at all explanatory model (Craver 2008; Bogen 2008), or as a how-possibly model that has been falsified by later investigation and superseded by current how-actually models (Kaplan and Craver 2011, pp. 355–358). See Weber (2005, 2008), Schaffner (2008), Levy (in press), and Woodward (in press) for contrary opinions.
While the model–target distinction sometimes becomes blurred in the neuroscientific literature—sometimes a computation is talked about as if it is just a model, and sometimes it is treated as a function belonging to the neural circuit itself—it is worth delineating it at this point. CNCs are computations performed by neurons and circuits in the brain. Thus the normalization model, e.g. Eq. 1, is a representation of these neural computations.
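The distinction can be made vivid by sketching the normalization model computationally. The following is a minimal illustration in the spirit of Carandini and Heeger’s (2012) standard divisive-normalization form; the parameter names and values (`sigma`, `n`, `gamma`) are chosen for illustration and are not taken from any particular study:

```python
import numpy as np

def normalize(drives, sigma=1.0, n=2.0, gamma=1.0):
    """Divisive normalization: each neuron's driving input is raised to a
    power and divided by the summed activity of a normalization pool plus
    a semi-saturation constant. Parameter values are illustrative only."""
    drives = np.asarray(drives, dtype=float)
    pool = np.sum(drives ** n)
    return gamma * drives ** n / (sigma ** n + pool)

# Cross-suppression: adding a second strong input reduces the first
# neuron's response, even though its own driving input is unchanged.
alone = normalize([10.0, 0.0])[0]
with_mask = normalize([10.0, 10.0])[0]
assert with_mask < alone
```

The model (the function above) is one representation; the CNC is whatever operation the neural circuit itself performs, which may be multiply realised across different biophysical substrates.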
Amongst the many proposed functions of normalization are: maximizing sensitivity of sensory neurons; sharpening the tuning of sensory neurons; decoding distributed neural representations; discriminating amongst stimuli; computing a winner-take-all pooling rule; and redundancy reduction.
Incidentally, Weiskopf (2011a, p. 249) presents a similar idea when he writes, “The interesting functionally defined categories, then, constitute recurrent building blocks of cognitive systems. They explain the possession of various capacities of those systems without reference to specific realizing structures”, though he does not focus there on non-mechanistic explanation.
In a sense A-minimal models can also be said to define a universality class: e.g., all those neurons whose action potentials can be described by the HH model. Here, the explanation of the universality is that the model captures the shared, essential difference-makers across all of these different neurons.
Anderson’s (2010) notion of a “working” would be an example of this hypothetical kind of circuit.
Note that Hubel and Wiesel’s hierarchical feedforward model predicts that for high contrast stimuli simple cells will be less selective about orientation, in conflict with empirical observations.
Similarly, Olsen et al. (2010) have studied normalization in the olfactory system of Drosophila. Quoting Simoncelli (2003), they present a two-part efficient coding hypothesis, stating “(1) each neuron should use its dynamic range uniformly, and (2) responses of different neurons should be independent” (p. 295). They manipulate parameters of normalization in a computer simulation of the fly’s olfactory system and show that normalization decorrelates neuronal responses, as required by (2), and that a similar gain control transformation boosts weak responses to select stimuli, as required by (1). They also show that the simulation findings fit with the empirical observations of decorrelation among the fly’s projection neurons (PNs). It should be noted that the simulation in which the normalization equation is embedded is highly abstract. It is not intended as a realistic, biophysical simulation of these neural systems. Rather, each neuron is modelled by a single number which represents its response to the olfactory stimulus, and the entire population model only simulates 24 out of 50 neuronal types that have been found in the target brain area.
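The decorrelating effect of normalization can be conveyed with a toy simulation; this is not Olsen et al.’s actual model, and the population size, input distributions, and semi-saturation constant below are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_cells = 2000, 10

# A shared stimulus intensity drives every neuron, so raw responses covary.
intensity = rng.uniform(0.5, 2.0, size=(n_stim, 1))
raw = intensity * rng.uniform(0.5, 1.5, size=(n_stim, n_cells))

# Divisive normalization: divide each response by the summed population
# activity for that stimulus (sigma is an illustrative semi-saturation term).
sigma = 0.1
norm = raw / (sigma + raw.sum(axis=1, keepdims=True))

def mean_offdiag_corr(x):
    """Mean absolute pairwise correlation across neurons."""
    c = np.corrcoef(x.T)
    return np.abs(c[~np.eye(c.shape[0], dtype=bool)]).mean()

# Normalization removes the shared gain, reducing pairwise correlations.
assert mean_offdiag_corr(norm) < mean_offdiag_corr(raw)
```

The point of the sketch is that the explanatory work is done by the coding principle (independence of responses), not by any particular biophysical realisation of the division.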
Rieke and Warland’s (1999) Spikes is a classic text on efficient coding and the application of information theory to neuroscience. Other examples of efficient coding explanation are: Laughlin (1981), Srinivasan et al. (1982), Atick and Redlich (1992), van Hateren (1992), Rieke et al. (1995), Dan et al. (1996), Olshausen and Field (1996), Baddeley et al. (1997), Bell and Sejnowski (1997), Machens et al. (2001), Simoncelli and Olshausen (2001), Schwartz and Simoncelli (2001), Vincent et al. (2005), Chechik et al. (2006), Graham et al. (2006), Smith and Lewicki (2006), Borghuis et al. (2008), Liu et al. (2009), and Doi et al. (2012).
References
Allen, C., Bekoff, M., & Lauder, G. (1998). Nature’s purposes: Analyses of function and design in biology. Cambridge, MA: Bradford Books.
Anderson, M. L. (2010). Neural reuse: A fundamental organizational principle of the brain. Behavioral and Brain Sciences, 33, 245–313.
Angelaki, D., Caddick, S., Movshon, T., Reynolds, J., Rust, N., Shamma, S., et al. (2009). Physiology: Systems. In D. J. Heeger et al. (Eds.), Canonical neural computation: A summary and a roadmap. http://www.theswartzfoundation.org/docs/Canonical-Neural-Computation-April-2009.pdf.
Atick, J. J., & Redlich, A. N. (1992). What does the retina know about natural scenes? Neural Computation, 4, 196–210.
Attneave, F. (1954). Some informational aspects of visual perception. Psychological Review, 61, 183–193.
Azevedo, F. A. C., Carvalho, L. R. B., Grinberg, L. T., Farfel, J. M., Ferretti, R. E. L., Leite, R. E. P., et al. (2009). Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain. Journal of Comparative Neurology, 513, 532–541.
Baddeley, R., Abbott, L. F., Booth, M. J. A., Sengpiel, F., Freeman, T., Wakeman, E. A., et al. (1997). Responses of neurons in primary and inferior temporal visual cortices to natural scenes. Proceedings Biological Science, 264, 1775–1783.
Barenblatt, G. I. (1996). Scaling, self-similarity, and intermediate asymptotics. Cambridge: Cambridge University Press.
Barlow, H. B. (1961). Possible principles underlying the transformation of sensory messages. In W. A. Rosenblith (Ed.), Sensory communication. Cambridge, MA: MIT Press.
Batterman, R. (2002). The devil in the details. Oxford: Oxford University Press.
Batterman, R. (2009). Idealization and modeling. Synthese, 169, 427–446.
Beatty, J. (1994). The proximate/ultimate distinction in the multiple careers of Ernst Mayr. Biology and Philosophy, 9, 333–356.
Bechtel, W. (2008). Mental mechanisms: Philosophical perspectives on cognitive neuroscience. London: Routledge.
Bechtel, W., & Mundale, J. (1999). Multiple realizability revisited: Linking cognitive and neural states. Philosophy of Science, 66, 175–207.
Bechtel, W., & Richardson, R. C. (1993). Discovering complexity. Princeton, NJ: Princeton University Press.
Bell, A. J., & Sejnowski, T. J. (1997). The independent components of natural scenes are edge filters. Vision Research, 37, 3327–3338.
Bogen, J. (2008). The Hodgkin–Huxley equations and the concrete model: Comments on Craver, Schaffner, and Weber. Philosophy of Science, 75, 1034–1046.
Bogen, J., & Machamer, P. (2010). Mechanistic information and causal continuity. In P. McKay, F. R. Illari, & J. Williamson (Eds.), Causality in the sciences (pp. 845–864). Oxford: Oxford University Press.
Bonds, A. B. (1989). Role of inhibition in the specification of orientation selectivity of cells in the cat striate cortex. Visual Neuroscience, 2, 41–55.
Borghuis, B. G., Ratliff, C. P., Smith, R. G., Sterling, P., & Balasubramanian, V. (2008). Design of a neuronal array. Journal of Neuroscience, 28, 3178–3189.
Brandon, R. N. (1981). Biological teleology: Questions and explanations. Studies in History and Philosophy of Science, 12(2), 91–105.
Buzsáki, G. (2006). Rhythms of the brain. Oxford: Oxford University Press.
Caddick, S., Carandini, M., Hausser, M., Martin, K., Priebe, N., Reynolds, J., Scanziani, M., et al. (2009). Physiology: Mechanisms. In D. J. Heeger et al. (Eds.), Canonical neural computation: A summary and a roadmap. http://www.theswartzfoundation.org/docs/Canonical-Neural-Computation-April-2009.pdf.
Carandini, M. (2012). From circuits to behavior: A bridge too far? Nature Neuroscience, 15(4), 507–509.
Carandini, M., & Heeger, D. J. (1994). Summation and division by neurons in primate visual cortex. Science, 264, 1333–1336.
Carandini, M., & Heeger, D. J. (2012). Normalization as a canonical neural computation. Nature Reviews Neuroscience, 13, 51–62.
Carandini, M., Heeger, D. J., & Senn, W. (2002). A synaptic explanation of suppression in visual cortex. Journal of Neuroscience, 22(22), 10053–10065.
Chechik, G., Anderson, M. J., Bar-Yosef, O., Young, E. D., Tishby, N., & Nelken, I. (2006). Reduction of information redundancy in the ascending auditory pathway. Neuron, 51, 359–368.
Chemero, A., & Silberstein, M. (2008). After the philosophy of mind: Replacing scholasticism with science. Philosophy of Science, 75, 1–27.
Chirimuuta, M., & Gold, I. J. (2009) The embedded neuron, the enactive field? In J. Bickle (Ed.), Handbook of Philosophy and Neuroscience. Oxford: Oxford University Press.
Churchland, P. S., & Sejnowski, T. J. (1992). The computational brain. Cambridge, MA: MIT Press.
Craver, C. F. (2006). When mechanistic models explain. Synthese, 153, 355–376.
Craver, C. F. (2007). Explaining the brain. Oxford: Oxford University Press.
Craver, C. F. (2008). Physical law and mechanistic explanation in the Hodgkin and Huxley model of the action potential. Philosophy of Science, 75(5), 1022–1033.
Craver, C. F., & Darden, L. (2001). Discovering mechanisms in neurobiology: The case of spatial memory. In P. Machamer, R. Grush, & P. McLaughlin (Eds.), Theory and method in the neurosciences. Pittsburgh: University of Pittsburgh Press.
Cummins, R. (1983). The nature of psychological explanation. Cambridge, MA: Bradford/MIT Press.
Dan, Y., Atick, J. J., & Reid, R. C. (1996). Efficient coding of natural scenes in the lateral geniculate nucleus: Experimental test of a computational theory. Journal of Neuroscience, 16, 3351–3362.
Daugman, J. G. (1985). Uncertainty relation for resolution in space, spatial frequency, and orientation optimized by two-dimensional visual cortical filters. The Journal of the Optical Society of America A, 2(7), 1160–1169.
Dayan, P., & Abbott, L. F. (2001). Theoretical neuroscience: Computational and mathematical modeling of neural systems. Cambridge, MA: MIT Press.
Doi, E., Gautier, J. J., Field, G. D., Shlens, J., Sher, A., Greschner, M., et al. (2012). Efficient coding of spatial information in the primate retina. Journal of Neuroscience, 32(46), 16256–16264.
Fodor, J. A. (1975). The language of thought. Cambridge, MA: Harvard University Press.
Gabor, D. (1946). Theory of communication. Journal of the Institution of Electrical Engineers, 93, 429–459.
Gazzaniga, M. S., Mangun, G., & Ivry, R. (1998). Cognitive neuroscience: The biology of the mind. New York: W. W. Norton.
Giere, R. (2006). Scientific perspectivism. Chicago: Chicago University Press.
Godfrey-Smith, P. (2001). Three kinds of adaptationism. In S. H. Orzack & E. Sober (Eds.), Adaptationism and optimality (pp. 335–357). Cambridge: Cambridge University Press.
Graham, D. J., Chandler, D. M., & Field, D. J. (2006). Can the theory of “whitening” explain the center-surround properties of retinal ganglion cell receptive fields? Vision Research, 46, 2901–2913.
Heeger, D. J. (1992). Normalization of cell responses in the cat striate cortex. Visual Neuroscience, 9, 181–197.
Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology, 117, 500–544.
Hubel, D. H., & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. Journal of Physiology, 160, 106–154.
Izhikevich, E. M. (2010). Dynamical systems in neuroscience: The geometry of excitability and bursting. Cambridge, MA: MIT Press.
Johnson-Laird, P. N. (1983). Mental models: Towards a cognitive science of language, inference and consciousness. New York: Cambridge University Press.
Jones, J. P., & Palmer, L. A. (1987). An evaluation of the two-dimensional Gabor filter model of simple receptive fields in cat striate cortex. Journal of Neurophysiology, 58, 1233–1258.
Kaplan, D. M. (2011). Explanation and description in computational neuroscience. Synthese, 183, 339–373.
Kaplan, D. M., & Craver, C. F. (2011). The explanatory force of dynamical and mathematical models in neuroscience: A mechanistic perspective. Philosophy of Science, 78, 601–627.
Khalifa, K. (2012). Inaugurating understanding or repackaging explanation? Philosophy of Science, 79, 15–37.
Koch, C. (1998). Biophysics of computation: Information processing in single neurons. New York: Oxford University Press.
Laughlin, S. (1981). A simple coding procedure enhances a neuron’s information capacity. Zeitschrift für Naturforschung, 36, 910–912.
Laughlin, S. B. (2001). Energy as a constraint on the coding and processing of sensory information. Current Opinion in Neurobiology, 11, 475–480.
Lennie, P. (2003). The cost of cortical computation. Current Biology, 13, 493–497.
Levy, A. (in press). What was Hodgkin and Huxley’s achievement? British Journal for Philosophy of Science.
Levy, A., & Bechtel, W. (in press). Abstraction and the organization of mechanisms. Philosophy of Science.
Liu, Y. S., Stevens, C. F., & Sharpee, T. O. (2009). Predictable irregularities in retinal receptive fields. Proceedings of the National Academy of Sciences of the United States of America, 106, 16499–16504.
Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking about mechanisms. Philosophy of Science, 67, 1–25.
Machens, C. K., Stemmler, M. B., Prinz, P., Krahe, R., Ronacher, B., & Herz, A. V. (2001). Representation of acoustic communication signals by insect auditory receptor neurons. Journal of Neuroscience, 21, 3215–3227.
Markram, H. (2006). The Blue Brain Project. Nature Reviews Neuroscience, 7, 153–160.
Marr, D. (1982). Vision: A computational investigation into the human representation and processing of visual information. San Francisco: W.H. Freeman & Co. Ltd.
Mayr, E. (1961). Cause and effect in biology. Science, 134, 1501–1506.
Mitchell, S. D. (2002). Integrative pluralism. Biology and Philosophy, 17(1), 55–70.
Mitchell, S. D. (2009). Unsimple truths: Science, complexity, and policy. Chicago: University of Chicago Press.
Nishimoto, S., Vu, A. T., Naselaris, T., Benjamini, Y., Yu, B., & Gallant, J. L. (2011). Reconstructing visual experiences from brain activity evoked by natural movies. Current Biology, 21(19), 1641–1646.
Olsen, S. R., Bhandawat, V., & Wilson, R. I. (2010). Divisive normalization in olfactory population codes. Neuron, 66, 287–299.
Olshausen, B. A., & Field, D. J. (1996). Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381, 607–609.
Olshausen, B. A., & Field, D. J. (2006). What is the other 85 percent of V1 doing? In J. L. van Hemmen & T. J. Sejnowski (Eds.), 23 Problems in systems neuroscience. Oxford: Oxford University Press.
Piccinini, G. (2006). Computational explanation in neuroscience. Synthese, 153, 343–353.
Piccinini, G. (2007). Computing mechanisms. Philosophy of Science, 74, 501–526.
Piccinini, G., & Bahar, S. (2013). Neural computation and the computational theory of cognition. Cognitive Science, 37(3), 453–488.
Piccinini, G., & Craver, C. (2011). Integrating psychology and neuroscience: Functional analyses as mechanism sketches. Synthese, 183(3), 283–311.
Piccinini, G., & Scarantino, A. (2010). Computation vs. information processing: Why their difference matters to cognitive science. Studies in History and Philosophy of Science, 41, 237–246.
Rieke, F., Bodnar, D. A., & Bialek, W. (1995). Naturalistic stimuli increase the rate and efficiency of information transmission by primary auditory afferents. Proceedings Biological Sciences, 262, 259–265.
Rieke, F., Warland, D., de Ruyter van Steveninck, R. R., & Bialek, W. (1999). Spikes: Exploring the neural code. Cambridge, MA: MIT Press.
Rust, N., & Movshon, T. (2005). In praise of artifice. Nature Neuroscience, 8, 1647–1650.
Salinas, E. (2008). So many choices: What computational models reveal about decision-making mechanisms. Neuron, 60, 946–949.
Schaffner, K. F. (2008). Theories, models, and equations in biology: The heuristic search for emergent simplifications in neurobiology. Philosophy of Science, 75, 1008–1021.
Schwartz, O., & Simoncelli, E. P. (2001). Natural signal statistics and sensory gain control. Nature Neuroscience, 4, 819–825.
Sejnowski, T. J., Churchland, P. S., & Koch, C. (1988). Computational neuroscience. Science, 241, 1299–1306.
Shagrir, O. (2010a). Computation: San Diego style. Philosophy of Science, 77, 862–874.
Shagrir, O. (2010b). Marr on computational-level theories. Philosophy of Science, 77(4), 477–500.
Simoncelli, E. P. (2003). Vision and the statistics of the visual environment. Current Opinion in Neurobiology, 13, 144–149.
Simoncelli, E. P., & Olshausen, B. A. (2001). Natural image statistics and neural representation. Annual Review of Neuroscience, 24, 1193–1216.
Smith, E. C., & Lewicki, M. S. (2006). Efficient auditory coding. Nature, 439, 978–982.
Srinivasan, M. V., Laughlin, S. B., & Dubs, A. (1982). Predictive coding: A fresh view of inhibition in the retina. Proceedings Biological Sciences, 216, 427–459.
Sterratt, D., Graham, B., Gillies, A., & Willshaw, D. (2011). Principles of computational modelling in neuroscience. Cambridge: Cambridge University Press.
Strevens, M. (2004). The causal and unification accounts of explanation unified—Causally. Nous, 38, 154–176.
Strevens, M. (2008). Depth: An account of scientific explanation. Cambridge, MA: Harvard University Press.
Thomson-Jones, M. (2005). Idealization and abstraction: A framework. In M. Thomson-Jones & N. Cartwright (Eds.), Idealization XII: Correcting the model (pp. 173–217). Amsterdam: Rodopi.
Tolhurst, D. J., To, M. P. S., Chirimuuta, M., Lovell, P. G., Chua, P. Y. & Troscianko, T. (2010) Magnitude of perceived change in natural images may be linearly proportional to differences in neuronal firing rate. Seeing and Perceiving, 23, 349–372.
Trappenberg, T. (2010). Fundamentals of computational neuroscience. Oxford: Oxford University Press.
van Hateren, J. H. (1992). Theoretical predictions of spatiotemporal receptive fields of fly LMCs, and experimental validation. Journal of Comparative Physiology A. Neuroethology, Sensory, Neural, and Behavioral Physiology, 171, 157–170.
Vincent, B. T., Baddeley, R. J., Troscianko, T., & Gilchrist, I. D. (2005). Is the early visual system optimised to be energy efficient? Network, 16, 175–190.
Wainwright, M. J., Schwartz, O., & Simoncelli, E. (2001). Natural image statistics and divisive normalization: Modeling nonlinearities and adaptation in cortical neurons. In R. Rao, B. Olshausen, & M. Lewicki (Eds.), Statistical theories of the brain. Cambridge, MA: MIT Press.
Weber, M. (2005). Philosophy of experimental biology. Cambridge: Cambridge University Press.
Weber, M. (2008). Causes without mechanisms: Experimental regularities, physical laws, and neuroscientific explanation. Philosophy of Science, 75(5), 995–1007.
Weisberg, M. (2007). Three kinds of idealization. Journal of Philosophy, 104(12), 639–659.
Weiskopf, D. A. (2011a). Models and mechanisms in psychological explanation. Synthese, 183, 313–338.
Weiskopf, D. A. (2011b). The functional unity of special science kinds. British Journal for the Philosophy of Science, 62, 233–258.
Willmore, B. D. B., Bulstrode, H., & Tolhurst, D. J. (2012). Contrast normalization contributes to a biologically-plausible model of receptive-field development in primary visual cortex (V1). Vision Research, 54, 49–60.
Woodward, J. (2003). Making things happen. New York: Oxford University Press.
Woodward, J. (in press). Explanation in neurobiology: An interventionist perspective. In D. M. Kaplan (Ed.), Integrating psychology and neuroscience: Prospects & problems. Oxford: Oxford University Press.
Zucker, S. W. (2006). Which computation runs in visual cortical columns? In J. L. van Hemmen & T. J. Sejnowski (Eds.), 23 Problems in systems neuroscience. Oxford: Oxford University Press.
Acknowledgments
I would like to thank participants in the Pitt ‘Representations, Perspectives, Pluralism’ seminar, in particular my co-teachers Sandra Mitchell and Jim Bogen, for early discussions of this material. I am also grateful to audience members at the Pitt Center for Philosophy of Science where this paper was originally presented. I am indebted to Jim Bogen, Carrie Figdor, Arnon Levy, Peter Machamer, Collin Rice and James Woodward for comments on the manuscript, and also to the anonymous referees for many helpful suggestions. Finally, I would like to thank David Heeger for permission to use Figure 2.
Cite this article
Chirimuuta, M. Minimal models and canonical neural computations: the distinctness of computational explanation in neuroscience. Synthese 191, 127–153 (2014). https://doi.org/10.1007/s11229-013-0369-y