Agent Above, Atom Below: How Agents Causally Emerge from Their Underlying Microphysics

Wandering Towards a Goal

Part of the book series: The Frontiers Collection (FRONTCOLL)

Abstract

Some physical entities, which we often refer to as agents, can be described as having intentions and engaging in goal-oriented behavior. Yet agents can also be described in terms of low-level dynamics that are mindless, intention-less, and without goals or purpose. How can we reconcile these seemingly disparate levels of description? This is especially problematic because the lower scales at first appear more fundamental in three ways: in terms of their causal work, in terms of the amount of information they contain, and in terms of their theoretical superiority in model choice. However, recent research bringing information theory to bear on modeling systems at different scales significantly reframes the issue. I argue that agents, with their associated intentions and goal-oriented behavior, can actually causally emerge from their underlying microscopic physics. This is particularly true of agents because they are autopoietic and possess (apparent) teleological causal relationships.


References

  1. Calvino, I.: Invisible Cities. Houghton Mifflin Harcourt (1978)

  2. Davidson, D.: Mental events. Reprinted in Essays on Actions and Events, 1980, 207–227 (1970)

  3. Kim, J.: Mind in a Physical World: An Essay on the Mind-Body Problem and Mental Causation. MIT Press (2000)

  4. Bontly, T.D.: The supervenience argument generalizes. Philos. Stud. 109(1), 75–96 (2002)

  5. Block, N.: Do causal powers drain away? Philos. Phenomenol. Res. 67(1), 133–150 (2003)

  6. Castiglione, F.: Agent based modeling. Scholarpedia 1(10), 1562 (2006)

  7. Adami, C., Schossau, J., Hintze, A.: Evolutionary game theory using agent-based methods. Phys. Life Rev. 19, 1–26 (2016)

  8. Skinner, B.F.: The Behavior of Organisms: An Experimental Analysis (1938)

  9. Schlichting, C.D., Pigliucci, M.: Phenotypic Evolution: A Reaction Norm Perspective. Sinauer Associates (1998)

  10. Kahneman, D.: Maps of bounded rationality: psychology for behavioral economics. Am. Econ. Rev. 93(5), 1449–1475 (2003)

  11. Conway, J.: The game of life. Sci. Am. 223(4), 4 (1970)

  12. Tegmark, M.: Answer to the Annual Edge Question (2017). https://www.edge.org/annual-questions

  13. Fodor, J.A.: Special sciences (or: the disunity of science as a working hypothesis). Synthese 28(2), 97–115 (1974)

  14. Hoel, E.P., Albantakis, L., Tononi, G.: Quantifying causal emergence shows that macro can beat micro. Proc. Natl. Acad. Sci. 110(49), 19790–19795 (2013)

  15. Hoel, E.P.: When the map is better than the territory (2016). arXiv:1612.09592

  16. Laplace, P.S.: Philosophical Essay on Probabilities: Translated from the Fifth French Edition of 1825, vol. 13. Springer Science & Business Media (2012)

  17. Pearl, J.: Causality. Cambridge University Press (2009)

  18. Bateson, G.: Steps to an Ecology of Mind: Collected Essays in Anthropology, Psychiatry, Evolution, and Epistemology. University of Chicago Press (1972)

  19. Tononi, G., Sporns, O.: Measuring information integration. BMC Neurosci. 4(1), 31 (2003)

  20. Shannon, C.E.: A mathematical theory of communication. Bell Syst. Tech. J. 27(4), 623–656 (1948)

  21. Fodor, J.A.: A Theory of Content and Other Essays. MIT Press (1990)

  22. Maturana, H.R., Varela, F.J.: Autopoiesis and Cognition: The Realization of the Living. Boston Studies in the Philosophy of Science. Dordrecht (1980)

  23. England, J.L.: Statistical physics of self-replication. J. Chem. Phys. 139(12), 09B623_1 (2013)

  24. James, W.: The Principles of Psychology. Holt and Company, New York (1890)

  25. Ashby, W.R.: An Introduction to Cybernetics (1956)

  26. Marshall, W., Albantakis, L., Tononi, G.: Black-boxing and cause-effect power (2016). arXiv:1608.03461

  27. Kullback, S.: Information Theory and Statistics. Courier Corporation (1997)

  28. Frisch, M.: Causal Reasoning in Physics. Cambridge University Press (2014)

  29. Sperry, R.W.: A modified concept of consciousness. Psychol. Rev. 76(6), 532–536 (1969)

  30. Ellis, G.: How Can Physics Underlie the Mind? Springer, Berlin (2016)

  31. Noble, D.: A theory of biological relativity: no privileged level of causation. Interface Focus 2(1), 55–64 (2012)
Acknowledgements

I thank Giulio Tononi, Larissa Albantakis, and William Marshall for our collaboration during my PhD. The original research demonstrating that causal emergence is possible [14] was supported by Defense Advanced Research Projects Agency (DARPA) Grant HR 0011-10-C-0052 and the Paul G. Allen Family Foundation.

Corresponding author

Correspondence to Erik P. Hoel.

Appendix A. Technical Endnotes

A.1 Scales and Interventions

To simplify, only discrete systems with a finite number of states and/or elements are considered in all technical endnotes. The base microscopic scale of such a system is denoted \(S_m\), which via supervenience fixes a set of possible macroscales \(\{S\}\), where each macroscale is some \(S_M\). This is structured by some set of functions (or mappings) \( \varvec{M}:S_{m} \to S_{M} \), which can be over microstates in space, time, or both.

These mappings often take the form of partitioning \(S_m\) into equivalence classes. Some such macroscales are coarse-grains: every macrostate is projected onto by one or more microstates [14]. Other macroscales are “black boxes” [25]: some microstates don’t project onto the macrostate, so only a subset of the state-space is represented [15, 26]. Endogenous elements at the microscale (those not projected onto the macroscale) can either be frozen (fixed in state during causal analysis) or allowed to vary freely.

Causal analysis at different scales requires separating micro-interventions from macro-interventions. A micro-intervention sets \(S_m\) into a particular microstate, \(do(S_m = s_m)\). A macro-intervention sets \(S_M\) instead: \(do(S_M = s_M)\). If the macrostate is multiply realizable, then a macro-intervention corresponds to:

$$ do\left( S_{M} = s_{M} \right) = \frac{1}{n} \sum_{s_{m,i} \in s_{M}} do\left( S_{m} = s_{m,i} \right) $$

where \(n\) is the number of microstates \(s_{m,i}\) mapped into \(s_M\).
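As an illustrative sketch (the 4-state transition matrix and the partition below are hypothetical examples, not taken from the chapter), a macro-intervention on a discrete Markov system can be computed as the uniform average of the micro-interventions that realize the macrostate:

```python
import numpy as np

# Hypothetical 4-state microscale TPM: row i is the transition
# distribution resulting from do(S_m = s_i), so each row sums to 1.
tpm_micro = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 1.0, 0.0],
])

# Mapping M: group microstates {0, 1} -> macrostate A, {2, 3} -> macrostate B.
partition = {0: [0, 1], 1: [2, 3]}

def macro_intervention(tpm, micro_states):
    """do(S_M = s_M): uniform average of the micro-interventions in s_M."""
    return tpm[micro_states].mean(axis=0)

effect_of_A = macro_intervention(tpm_micro, partition[0])
# effect_of_A is a distribution over *micro* states at t+1; projecting it
# through the partition gives the effect distribution over macrostates.
macro_effect = np.array([effect_of_A[partition[k]].sum() for k in partition])
print(macro_effect)  # [1. 0.] : macrostate A deterministically leads to A
```

Note that the micro-level effect of intervening on macrostate A is spread over two microstates, yet at the macroscale the transition A → A is deterministic.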

A.2 Effective Information and Causal Properties

Effective information (EI) measures the result of applying some intervention distribution \(I_D\), itself composed of probabilities \(p(do(s_i))\), each of which sets some system \(S\) into a particular state \(s_i\) at some time \(t\). Applying \(I_D\) leads to some probability distribution of effects \(E_D\) over all states in \(S\). For systems with the Markov property, each member of \(I_D\) is applied at \(t\) and \(E_D\) is the distribution of states transitioned into at \(t+1\). For such a system \(S\), the EI over all states is:

$$ EI\left( S \right) = \frac{1}{n} \sum_{s_{i} \in S} D_{KL}\left( \left( S_{F} \,\middle|\, do\left( S = s_{i} \right) \right) \,\middle\|\, E_{D} \right) $$

where \(n\) is the number of system states, \(D_{KL}\) is the Kullback-Leibler divergence [27], and \(\left( S_F \,\middle|\, do(S = s_i) \right)\) is the transition probability distribution at \(t+1\) given \(do(S = s_i)\). Notably, if we are considering the system at the microscale \(S_m\), EI would be calculated by applying \(I_D\) uniformly (\(H_{max}\), maximum entropy), which means intervening with equal probability (\(p(do(s_i)) = 1/n\)) by setting \(S\) into all \(n\) possible initial microstates (\(do(S = s_i)\ \forall i \in 1 \ldots n\)). However, at a macroscale \(I_D\) may not be a uniform distribution over microstates, as some microstates may be left out of \(I_D\) (in the case of black-boxing) or grouped together into a macrostate (coarse-graining).
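A minimal sketch of this calculation (the function names and the example TPM are illustrative assumptions, not from the chapter). It weights each term by its intervention probability, which reduces to the \(1/n\) average above when \(I_D\) is uniform:

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D_KL(p || q) in bits (with 0 log 0 := 0)."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def effective_information(tpm, intervention_dist=None):
    """EI(S): average D_KL between each intervention's effect distribution
    and the overall effect distribution E_D. With no distribution given,
    I_D is the maximum-entropy (uniform) one used at the microscale."""
    n = tpm.shape[0]
    if intervention_dist is None:
        intervention_dist = np.full(n, 1.0 / n)
    e_d = intervention_dist @ tpm  # effect distribution E_D at t+1
    return float(sum(intervention_dist[i] * kl(tpm[i], e_d)
                     for i in range(n) if intervention_dist[i] > 0))

# Hypothetical example: a deterministic but fully degenerate 2-state system
# in which both states transition to state 0.
tpm = np.array([[1.0, 0.0],
                [1.0, 0.0]])
print(effective_information(tpm))  # 0.0 bits: interventions make no difference
```

By contrast, a deterministic and non-degenerate 2-state system (the identity matrix) yields the maximum 1 bit of EI.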

Notably, EI reflects important causal properties. The first is the determinism of the transition matrix, or how reliable the state-transitions are, which for each state (or intervention) is:

$$ D_{KL}\left( \left( S_{F} \,\middle|\, do\left( S = s_{i} \right) \right) \,\middle\|\, H_{max} \right) $$

The degeneracy of the entire set of states (or interventions) is \( D_{KL}\left( E_{D} \,\|\, H_{max} \right) \). Both determinism and degeneracy, when normalized by the maximum possible divergence \(\log_2 n\), are values in [0, 1], and if one takes the average determinism, the degeneracy, and the size of the state-space (\(\log_2 n\)), then: \( EI = \left( determinism - degeneracy \right) \times size \).
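This decomposition can be checked numerically. The sketch below (a hypothetical TPM, with size taken as \(\log_2 n\) and the normalized determinism and degeneracy expressed via entropies, which is equivalent to the divergence-from-\(H_{max}\) form under a uniform \(I_D\)):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits (0 log 0 := 0)."""
    mask = p > 0
    return float(-np.sum(p[mask] * np.log2(p[mask])))

def ei_decomposition(tpm):
    """Return (determinism, degeneracy, EI) for a uniform I_D,
    using EI = (determinism - degeneracy) * size with size = log2(n)."""
    n = tpm.shape[0]
    size = np.log2(n)
    e_d = tpm.mean(axis=0)  # E_D under uniform interventions
    determinism = 1.0 - np.mean([entropy(row) for row in tpm]) / size
    degeneracy = 1.0 - entropy(e_d) / size
    return determinism, degeneracy, (determinism - degeneracy) * size

# A fully deterministic, non-degenerate permutation of 4 states:
# determinism = 1, degeneracy = 0, so EI = log2(4) = 2 bits.
tpm = np.eye(4)
det, deg, ei = ei_decomposition(tpm)
print(det, deg, ei)  # 1.0 0.0 2.0
```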

A.3 Scales as Codes

The capacity of an information channel is \( C = max_{p\left( X \right)} I\left( {X;Y} \right) \), where \(I(X;Y)\) is the mutual information \(H(X) - H(X|Y)\) and \(p(X)\) is some probability distribution over the inputs \(X\). Shannon recognized that the encoding of information for transmission over the channel could change \(p(X)\): therefore, some codes use the capacity of the information channel to a greater degree.

According to the theory of causal emergence there is an analogous causal capacity for any given system: \( CC = max_{I_{D}} EI\left( S \right) \).

Notably, for the microscale \(S_m\), \(I_D = H_{max}\) (each member of \(I_D\) has probability \(1/n\), where \(n\) is the number of microstates). However, a mapping \(\varvec{M}\) (described in A.1) changes \(I_D\) (see A.2) so that it is no longer flat. This means that EI can actually be higher at the macroscale than at the microscale, for the same reason that the mutual information \(I(X;Y)\) can be higher after encoding. Searching across all possible scales leads to \(EI_{max}\), which reflects the full causal capacity. EI can be higher from both coarse-graining [14] and black-boxing [15].
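A small worked sketch of this effect (the toy system below is a hypothetical example in the spirit of, but not taken from, [14]): a degenerate 4-state microscale, in which three states wander uniformly among themselves, is coarse-grained into a 2-state macroscale whose EI is higher:

```python
import numpy as np

def kl(p, q):
    """D_KL(p || q) in bits (0 log 0 := 0)."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

def ei(tpm):
    """EI under a uniform intervention distribution."""
    e_d = tpm.mean(axis=0)
    return float(np.mean([kl(row, e_d) for row in tpm]))

def coarse_grain(tpm, partition):
    """Build the macro TPM: average the micro-interventions within each
    macrostate, then sum the resulting effect probabilities per macrostate."""
    k = len(partition)
    macro = np.zeros((k, k))
    for i, group in enumerate(partition):
        effect = tpm[group].mean(axis=0)
        for j, target in enumerate(partition):
            macro[i, j] = effect[target].sum()
    return macro

# Degenerate microscale: states 0-2 transition uniformly among themselves,
# while state 3 is a fixed point.
tpm_micro = np.array([
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [1/3, 1/3, 1/3, 0.0],
    [0.0, 0.0, 0.0, 1.0],
])
tpm_macro = coarse_grain(tpm_micro, [[0, 1, 2], [3]])

print(round(ei(tpm_micro), 3))  # 0.811 bits at the microscale
print(ei(tpm_macro))            # 1.0 bit: the macroscale beats the microscale
```

The coarse-graining turns a noisy, degenerate micro-dynamics into a perfectly deterministic and non-degenerate macro-dynamics, so EI rises from roughly 0.81 bits to 1 bit.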

A.4 What Noise?

If the theory of causal emergence is based on thinking of systems as noisy information channels, one objection is that real systems aren’t actually noisy. First, it’s worth noting that causal emergence can occur in deterministic systems that are degenerate [14]. Second, in practice nearly all systems in nature are noisy due to things like Brownian motion. Third, any open system receives some noise from the environment, like a cell bombarded by cosmic rays. If noise can only be eliminated by refusing to treat any subsystem as a system in its own right, then noise is indeed eliminated, but at the price of eliminating all notions of boundaries or individuation. Fourth, how to derive a physical causal microscale is an ongoing research program [28], as is physics itself. However, it is worth noting that if the causal structure of the microscale of physics is entirely time-reversible, and the entire universe is taken as a single closed system, then it is provable that causal emergence for the universe as a whole is impossible. Yet, as Judea Pearl has pointed out, if the universe is taken as a single closed system then causal analysis itself breaks down, for there is no way to intervene on the system from outside of it [17]. Therefore, causal emergence is in good company with causation itself in this regard.

A.5 Top-Down Causation, Supersedence, or Layering?

To address similar issues, others have argued for top-down causation, which takes the form of contextual effects (like wheels rolling downhill [29]), or of how groups of entities can have different properties than individuals (water is wet but individual H2O molecules aren’t). Others have argued that causation has four different Aristotelian aspects and that different scales fulfill the different aspects [30]. It’s also been suggested that the setting of initial states or boundary conditions constitutes evidence for top-down causation [31], although one might question this because those initial states or boundary conditions can themselves also be described at the microscale.

Comparatively, the theory of causal emergence has so far remained relatively metaphysically neutral. Admittedly, its goal is to be intellectually useful first and metaphysical second. However, one ontological possibility is that causal emergence means the macroscale supersedes (or overrides) the causal work of the microscale, as argued originally in [14]. A different metaphysical option is that scales can be arranged like a layer cake, with different scales contributing more or less causal work (the amount irreducible to the scales below). Under this view, the true causal structure of physical systems is high-dimensional, and different scales are mere low-dimensional slices.

Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

Cite this chapter

Hoel, E.P. (2018). Agent Above, Atom Below: How Agents Causally Emerge from Their Underlying Microphysics. In: Aguirre, A., Foster, B., Merali, Z. (eds) Wandering Towards a Goal. The Frontiers Collection. Springer, Cham. https://doi.org/10.1007/978-3-319-75726-1_6
