Article

Hierarchical Dynamical Information Systems With a Focus on Biology

John Collier
Philosophy, University of Natal, Durban 4041, South Africa
Entropy 2003, 5(2), 100-124; https://doi.org/10.3390/e5020100
Submission received: 20 February 2003 / Accepted: 26 February 2003 / Published: 26 June 2003

Abstract
A system of a number of relatively stable units that can combine more or less freely to form somewhat less stable structures has a capacity to carry information in a more or less arbitrary way. I call such a system a physical information system if its properties are dynamically specified. All physical information systems have certain general dynamical properties. DNA can form such a system, but so can, to a lesser degree, RNA, proteins, cells and cellular subsystems, various immune system elements, organisms in populations and in ecosystems, as well as other higher-level phenomena. These systems are hierarchical structures with respect to the expression of lower level information at higher levels. This allows a distinction between macro and microstates within the system, with resulting statistical (entropy driven) dynamics, including the possibility of self-organization, system bifurcation, and the formation of higher levels of information expression. Although lower-level information is expressed in an information hierarchy, this in itself is not sufficient for reference, function, or meaning. Nonetheless, the expression of information is central to the realization of all of these. ‘Biological information’ is thus ambiguous between syntactic information in a hierarchical modular system, and functional information. However, the dynamics of hierarchical physical information systems is of interest to the study of how functional information might be embodied physically. I will address 1) how to tighten the relative terms in the characterizations of ‘information system’ and ‘informational hierarchy’ above, 2) how to distinguish between components of an information system combining to form more complex informational modules and the expression of information, 3) some aspects of the dynamics of such systems that are of biological interest, 4) why information expression in such systems is not sufficient for functional information, and 5) what further might be required for functional information.

1. Introduction

Information is a multiply ambiguous notion. Since it is unlikely that information is just one thing, it is more useful to characterize various aspects of information and its use in ways that are open-ended and general enough to allow linking together information and information-related concepts in various disciplines and specialties. I will focus in this paper on certain aspects of genealogical information expression in DNA, genes, organisms and populations as an example of a physical information system with functional characteristics. The issue is to get clear about how far we can go towards understanding these aspects of information from the structure and dynamics of the information system alone. The treatment, insofar as it goes, should be extendible to other biological information systems to which the notion of a code or formal language is applicable. I will assume that all biological information expression is a dynamical process, embodied in physical phenomena that can be understood in terms of forces and flows. I will further assume that all biologically relevant properties of biological information expression are dynamical properties. This approach puts no special restrictions on biological information, allowing it to be emergent and to have laws at its own level, but it does impose a sort of discipline that forbids classifications without regard to interactions and embodiment. The core of this account is a dynamical approach to information expression that should be applicable wherever information is expressed, from natural laws to works of art, though the details will be significantly different in each application. The biological case is especially interesting because it requires attention both to underlying physico-chemical processes and to notions of function and interpretation. Even if biological information is only a sort of proto-information, it can serve as a model for the dynamical embodiment of richer notions of information.
The central idea of this paper is a physical information system, in which the information bearing capacity rests on the dynamical possibilities of the elements of the system and on the closure conditions of the system itself. I will argue that such a system is minimally required for the expression of information, as opposed to merely being subject to the application of one or another information theoretic formalism. Information expression itself is a dynamical (and perhaps semiotic) process that cannot be fully described in purely formal terms. This is because formal descriptions can apply just as well to accidental relations, which do not convey information, as they can to dynamical connections that can convey information.
The great tragedy of formal information theory is that its very expressive power is gained through abstraction away from the very thing that it has been designed to describe. This has led to debates about the scope of applicability of the theory: too liberal an interpretation allows the theory to apply to merely nominal classes that have no underlying reality, while too strict an interpretation applies only to what we already know without dissent to be information, and risks ignoring important connections that could be discovered with the guidance of a broader interpretation of the formalism. Investigation based on interactions and closure conditions at least deals with things that can be tested by interventions into the system, and requires that classifications be along the same dynamical lines with which we interact with the world. This helps in correcting prejudices grounded in habitual classifications or overly optimistic expectations, but more importantly allows for a more concrete interpretation of what it is to be able to bear and express information. It also allows us to apply concepts of information theory without prejudice as to how restrictive the notion of information really is, without allowing any subjective regularity or classification to be informative. In particular, a dynamical approach requires that all information concepts applying to a system be defined in terms of dynamical properties of the system itself, making them internal properties of the system, rather than classifications that are imposed from the outside. This in itself does not resolve issues of functionality and meaning of this “information”. However, it can help us to come clean on what is dynamically involved in functionality and meaningfulness, which in turn can help to place them in the world. I will address these issues in the final section.
First, though, I will place my area of focus in this paper in the broadest context of information theory. Next I introduce some central systems concepts in dynamical terms, and then apply them to the definition of a physical information system. I will discuss some properties of such systems, especially their capacity for hierarchical organization that is capable of development and evolution. The dynamics of information expression turn out to be central to these processes. Next, I will describe some aspects of biological systems that match these properties, and that have been referred to in terms of information. Only then will I raise questions of meaning and function.

2. Varieties of Information

In its broadest sense, as Donald MacKay put it, information is a distinction that makes a difference [23]. Formal information theory comes, broadly, in three guises: the Shannon approach [42], the algorithmic complexity of Kolmogorov [30,31] and Chaitin [10], and the more recent information flow approach developed by Barwise and Seligman [2]. All are based on the existence of distinctions among elements within some unspecified system. The different approaches emphasize different aspects of the logic of distinctions, and they complement each other rather than being competitors.
Turning from formal approaches to the embodiment of information, there are a number of nested views, each inheriting the logical and ontological commitments of the containing views, but differing in what they take genuine information to be. I don’t think that this disagreement is constructive. It is more important to recognize what each view is committed to, and how each must be left open in order to allow an integrated approach to information across the range of views.
The most liberal and inclusive view is the “It From Bit” view. It has originated independently from so many people that it is pointless to attribute an origin, though it probably goes back to Leibniz’s view that the world has a logical structure in terms of sensations based in the ability to discriminate. The term is due to John Wheeler, and the view has recently been powerfully if controversially championed by Stephen Wolfram. On this view, any causally grounded distinction makes a difference. It might be called a God’s eye perspective, or the view from nowhere. On this view information is objective, and there is nothing else.
The negentropy view of information is a restriction on the It From Bit view. Only those Its that are capable of doing work (organizing and using energy, or sorting things) count as information. The rest is disorder. This view is due to Schrödinger [41], though the groundwork was done by Szilard. The motivation for this view is that work is required for control, and the information in microstates beyond that in macrostates is hidden from view. Negentropy measures the capacity for control (in bits, the number of discriminations that a system can make).
The next view is a restriction of the negentropic approach to particular levels of a physical hierarchy, so that information is relativized to a cohesive level of an object, such as an organism or a species. The view is due to Brooks and Wiley [4,46], Collier [11] and Smith [43]. The idea is that not all negentropy is expressed at a given level, and the “Its” available are level relative. This information is a measure of the constraints on the objects within the level; because of their connection to biological and cognitive form, Collier [13] calls this expressed information enformation to distinguish it from other forms of negentropy (for example, disordered information due to nonequilibrium conditions is sometimes called intropy). Lila Gatlin [28] called this information stored information, but this name is somewhat misleading, as it does not reflect its dynamical and often active nature. This sort of information will be the focus of this paper.
Restricting further we have functional information, which is the expressed information that is functional. I will address this sort of information in this paper, but I will have little to say about it. Some functional information is meaningful. The nature of meaning is the great object of desire for information theory. Within the scope of meaningful, or semantic information, is intentional information, or cognitive content. At the next level of restriction is social information, though some authors hold that cognitive content depends on language, which is a social activity. I will not discuss these levels further here, which is not to say that they are unimportant, or are in some sense reducible to the information forms that I do discuss.

3. Preliminary Concepts

The central concepts are clarifications of the notions of system, cohesion, component, level and hierarchy that I used [10] in introducing physical information systems to explain the information dynamics presumed by Wiley and Brooks [4,46] in their call for a unified approach to biology, grounded in a statistical dynamics of entropy production in both energetic and informational processes. The general principles, however, have a much broader application, and the characterization of the central dynamical concepts is a selection of the relevant notions from a much broader set drawn up by Collier and Hooker (especially in [26], but see also [25]) in the context of a general discussion of reducibility and emergence in complex systems. This framework has been adopted by the Brooks & Wiley school [5,6,7,9].
Holistic aspects that resist formulation in precise terms characterize many organized systems. Explicit definitions for central concepts concerning complexly organized systems are often not just impossible to provide; attempts to provide them can be quite misleading. The reason for this is simple: explicit definitions place the defined term on only one side of the definition, so that all explicitly defined concepts are in principle eliminable. For example, if bachelors are unmarried adult males, by definition, we need not suppose that there are these things, bachelors, in addition to unmarried adult males. Requiring explicit definitions of irreducible phenomena implies that the concepts of these phenomena, at least, can be reduced to the concepts in their definitions. If the concepts refer to dynamically irreducible phenomena, and the definitions are in dynamical terms, then the definitions presuppose dynamical reducibility. A requirement of explicit definitions for all dynamical phenomena in terms of simpler phenomena would rule out, a priori, nonreducible complex phenomena.
Robert Rosen [37] has suggested that all irreducibly complex systems are impredicative, in the sense that they cannot be given definitions that clearly separate defined from defining terms.1 Impredicative structures are well known in mathematics, and such common structures as the counting numbers and the continuum are now known to resist fully formal explicit definition. The issue of impredicativity was a central focus of much discussion at the turn of the 19th to 20th Centuries [35]. Attempts to reduce all mathematical definitions to explicit definitions in logical terms alone failed, due, we now know, to limits on the scope of purely formal methods even when applied to purely formal structures. Recognizing and accepting impredicativity has been helpful in understanding some age-old paradoxes in logic and language [1]. Impredicative definitions are implicit, and their intended interpretation can be determined only through use. This does not rule out a high degree of precision, but the precision can never be complete if the defined structures are not reducible. The problem is to find characterizations of concepts that are useful for the study of complexity and precise, but not so precise that they rule out irreducible complexity. The reader should not confuse different aspects of a characterization of a complex concept with different explicit definitions. As Rosen [37,38] pointed out, there is no single model of an impredicative structure that captures all of its properties, so no single definition can be complete. These structures are irreducibly complex. Unfortunately, there are therefore no simple examples, though the example of the hyperset, x = {x}, can be stated quite simply (Barwise and Etchemendy [1] use it as a paradigm of an impredicative structure).
The following definitions, then, as they are intended to refer to both reducible and non-reducible structures, are not explicit. Impredicative aspects of the definitions are noted where especially relevant.
System: A dynamical system is a set of interacting components that is characterized and individuated from other systems by its cohesion. It is therefore a natural object. Its properties must be discovered, and its models must be tested.
Cohesion: Cohesion refers to the cause of the dynamical stabilities that are necessary for the continued existence of a system or system component as a distinct entity. Cohesion has been an integral component of the Brooks-Wiley formulation from the beginning [4,46] and continues to be central [5,6,7,11,12,17,19,21,22,23,24,25,26,27]. These stabilities arise from the constraints which dynamical interactions within a system impose on the dynamics of its components. Since stability in even relatively simple cases resists penetration by traditional methods (see any text on non-linear systems for examples), we should not assume that an account of cohesion requires mechanism, decomposability, or reductionistic diagnosability. The basic form of cohesion is a dynamical property of a system that is insensitive to local variations in the system components (e.g. thermal fluctuations, vibrations or collisions), including those (non-linear) interactions that formed it, and to external influences [12]. Cohesion is invariant under conditions that do not destroy it. For example, a framed cloth kite has noticeable lift in a wind because the cohesion of its cloth molecules integrates the impulses produced by collisions with individual air molecules and transfers the result to the frame, and then to the kite’s cord, where the kite flyer experiences the lifting force as a tug. By contrast, an uncontained gas has no cohesiveness because it has no characteristic properties which interactions among its component molecules stabilize.
Several aspects of cohesion are worth explicit notice. These divide into basic properties of cohesion, which derive from its basic nature, and derived aspects of special interest, which are consequences of the manifestation of the basic properties in specific kinds of systems. The basic properties are (from [26]):
B1: The first basic property of cohesion is that it comes in degrees.
This is a direct consequence of its being grounded in forces and flows, which come in varying kinds, dimensions and strengths. Cohesion, then, must also accommodate kinds, dimensions and strengths. Secondly, and following on from the first property together with the individuating role of cohesion,
B2: Cohesion must involve a balance of the intensities of centrifugal and centripetal forces and flows that favors the inward, or centripetal.2
Last, this balance cannot be absolute, but need only be likely across the boundaries of the cohesive entity. Just as there are intensities of forces and flows that must be balanced, there are, due to fluctuations, propensities of forces and flows that show some statistical distribution in space and time (or other relevant dynamical dimensions).
B3: Cohesion must involve a balance of propensities of centrifugal and centripetal forces and flows that favors the inward, or centripetal.
Note that the asymmetry of the balances in B2 and B3 implies a distinction between inner and outer, consistent with the role of cohesion in individuating something from its surroundings.
The derived aspects of cohesion now follow from the basic properties as they apply to specific systems with many properties. From B1, only some properties are relevant to cohesion. Thus, A1: in general, a dynamical system will display a mix of cohesive and non-cohesive properties. Next, from B2 and B3, A2: cohesion is not just the presence of interaction. Whence, A3: a property is cohesive only where there is appropriate and sufficient restorative interaction to stabilize it. From A1, A4: cohesiveness is perturbation-context dependent, with system properties varying in their cohesiveness as perturbation kinds and strengths are varied.
Given the characterization of cohesion as a condition of a certain form of balance, A5: the interactive cohesive support of nominally system properties may extend across within-system, system-environment and within-environment interactions. Following from this, cohesion is not to be confined to the stability of first order properties like the shape of a rock or a kite; rather, A6: cohesion characterizes all properties, including higher order process properties, that are interaction-stabilized against relevant perturbations. While the kite’s cohesion is primarily expressed as a structural stability of a first order property, that of a bird flock is expressed primarily as process stability: flocking through flight path changes. Here changes in flying direction and/or flying conditions, the momentary appearances of obstacles and adversaries, etc. all fail to disrupt the flocking process. The interactive cohesion which maintains this process includes perceptual communication among the component birds (e.g. calls, visual distance) together with all those cohesions in individual birds that make those interactive capacities possible plus those that constitute their capacity to fly. The import of this for information expression is that information can be expressed in interactions. Living systems are primarily characterized in terms of their process organization. Their structures may change, and must change somewhat whenever their adaptability is manifested; the more organized their adaptability, the higher order the cohesive processes that characterize them. See [11,12,16,20,21,22,23,24,25,26] for further definitions and applications of the cohesion concept that involve interactive closure.
Interactive closure between the system and environment, together with an organizational imbalance between system and environment that favors the system, allows the definition of system cohesion in terms of the organization of forces and flows, rather than simply in terms of their intensities and propensities. This gives rise to a new sort of system that can exist only if both organized and complex. Organizational stability is grounded in forces and flows, but resides in what we might naturally call the control of those forces and flows. This control is itself grounded in and realized through forces and flows, and is thus based in dynamical processes. Therefore, organizational cohesion fits our general characterization of cohesion. Organization is a higher order dynamical property in that it concerns not just the forces and flows, but the way in which they are inter-related. A focus on the intensities and propensities of flows alone tends to obscure the more subtle possibility of organization-based stability and the new possibilities to which it gives rise. Organization does not exhaust the possibilities of new forms of cohesion, which include higher order organizations of various sorts, the sort depending on the organized substrate. For example, the flocking of birds mentioned in the last paragraph is in fact grounded in sensorimotor organization of the autonomous birds. Similar flocking phenomena can be simulated with computer models in which the process is grounded quite differently (e.g., Reynolds’ BOIDS, [36]).
Cohesion as defined here gives the individuation conditions for dynamical entities, telling us what closure conditions must be satisfied in order to bind something together, and to distinguish it from other dynamical objects. The conditions can be energetic differences, kinetic differences, or organizational differences, and may be either entirely internal, or may be routed through the environment. Also, cohesion may be the sum of local effects (such as molecular bonds in a crystal), or else a non-localizable property (such as the organizational closure of an organism). As we shall see, physical information systems allow the expression of information even across non-localizable cohesive boundaries.
Levels: A cohesive system level, or dynamical level, is a dynamically grounded constraint (structural or process) in a system that occurs when (and only when) cohesion exists. This definition is impredicative, and cannot be replaced with an explicit definition except in cases in which the level itself is subject to explicit reduction. Any attempt at a fully explicit definition of level would beg the question concerning the reducibility of levels. An example of multiple levels is found in the kite: the assembled kite represents a cohesive supra-component level, and its component cloth covering, framing rods and twine are each themselves cohesive supra-molecular levels. In this case one of the levels is contained within the other, but they need not be. Nonetheless, levels are typically nested by physical scale, forming a partial ordering. This sort of structure is easily confused with a classification. A typical example of such a confusion that many biology texts warn of is that between classifications in biological systematics and phylogenetic trees. The former are classifications, while the latter represent historical processes. The latter can be used as a basis for the former, but the two are distinct. While all phylogenetic distinctions are particulars (at least ideally), biological classes above the species level are abstractions. Levels, unlike classes, must be cohesive.
Since levels are forms of cohesion, they have the same three basic properties B1-B3 and obey the same six principles A1-A6 set out above. In particular, structural and process levels may co-exist, and in general must co-exist and inter-twine. Process levels may occur across structural levels, such as the respiratory process which occurs across the sub-cellular, multi-cellular organ and multi-organ levels, correlating sub-cellular activity with both pulmonary and cardio-vascular activities, and conversely.3 Furthermore, levels may be more or less transitory, especially in living and similar non-stationary systems where structures and lower order process constraints are continually changing to suit the context. Thus we must reject any simple picture of a system as a single series of universal, permanent levels, like floors of a building, and recognize instead a web of partial levels, each level holding only within some domain, itself perhaps a function of system state (including history).
There is a somewhat broader notion of level that mixes dynamical and classificatory notions. This broader notion puts all immediate components of a given level in our sense at the next lower level, and allows us to talk broadly of the physical, chemical, cellular, etc. levels. This usage must not be confused with the notion of dynamical levels. It is purely descriptive, and has no independent dynamical reality (though it often does involve dynamical properties).4
Organization: Intuitively, an organized system is one exhibiting distinct but inter-related and coordinated component behaviors. Machines and living things are organized because their parts are relatively independent and each plays distinctive, systematic and essential roles in the whole. The overall integration of an organized system implies that there are non-localizable properties of the system in addition to local properties and interactions. An organized system thus displays a global ordering relation that has a high order of redundancy, unlike the low-level redundancy of the ordering relations of crystals, but more ordered than a random arrangement.
Components, parts and modules: Issues of decomposability and localizability inevitably concern the nature of the local units into which a system might be decomposed. Dynamical realism requires that ultimately these units are dynamical. The dynamical elements of some systems will be components, that is, dynamically stable, separately identifiable sub-systems. Components may inter-penetrate, as do a car’s body and feature systems, or the cardio-vascular and hormonal subsystems of the human body, so long as they remain dynamically stable and identifiable (within the modeling criteria). A common form for components is parts, that is, spatially bounded and distinct dynamically stable sub-systems. Thus, most machines (as we currently construct them) have parts as their component elements at some sufficiently fine-grained spatial scale. It is natural to think of parts as modules, but the idea of a module could also apply to functional components that may not be spatially independent, such as the lift and propulsion functions of a bird’s wing.
However, some aspects of systems are best specified in terms of processes rather than components. This raises two issues: First, processes need not have a clear bottom, or atomic level, from which they are composed. The combination of processes can lead to a net product that is not decomposable into a sum of the effects of the two processes. (Robert Rosen discusses this implicitly in [37: Chapter 6], though this result is not obvious from his discussion because of an unnecessary formulation of the issue in terms of his modeling relation). The simplest example is the case of two mechanical systems that, when combined, produce a nonmechanical system. Thus if all components of a particular system are processes, there need be no fundamental components. Second and more generally, the closure of processes need not match the boundaries of parts, and an analysis into constituent processes need not match a decomposition into parts.
Dynamical hierarchies: A dynamical hierarchy is a concrete particular system with a dynamically determined partial ordering. Whenever there is a constraint asymmetry we shall speak of hierarchy relationships, with the direction of the hierarchy being the direction of control. Commonly among living and human engineered systems a hierarchical structure is a cohesive combination of components, e.g. organs from cells and bodies from organs. Since any hierarchy is a partial ordering of levels, it must have a number of levels and a constraint asymmetry. The same levels (the same cohesive objects, but perhaps under different descriptions) can be the subject of more than one constraint asymmetry, and may therefore be members of more than one hierarchy. As noted by Salthe, nondegenerate hierarchies will always have at least three levels [39,40], or else one of asymmetry or transitivity will not hold, except trivially. Unfortunately, Salthe fails to use a concept equivalent to cohesion, and there is no guarantee that his hierarchical levels correspond to dynamical objects. In his scalar hierarchy [40], scale could include any arbitrary collection. In his specification hierarchy [40], the levels are classes, and are not objects at all. In a dynamical hierarchy, or hierarchical structure, the levels are dynamical levels, and the constraint is also dynamical. Salthe does not discuss this sort of hierarchy, but it is the kind found in instances of organized living systems, as discussed by myself in [11], in clarifying the work of Brooks and Wiley [4]. Unfortunately, failure to understand the distinction between dynamical organizational hierarchy instances and various abstractions that do not focus on the notion of cohesion as the sine qua non of a dynamical object has led to numerous misrepresentations of the Brooks and Wiley approach to information.
A dynamical hierarchy H, then, has n levels, x_1, x_2, x_3, ..., x_n, together with a relation R_H between the levels such that R_H is transitive (for all distinct i, j, k: x_i R_H x_j and x_j R_H x_k → x_i R_H x_k), asymmetric (for all distinct i, j: x_i R_H x_j → ~(x_j R_H x_i)), and a dynamical constraint between the x_i. There are two informal terms in this definition, ‘level’ and ‘dynamical constraint’. Formally, any two hierarchies with the same relational structure are identical, but they may be dynamically distinct. A straightforward case of a hierarchy is defined by the part-whole, or ‘component of’ relation, assuming that this is a dynamical constraint. This relation is an intensive constraint since it applies to each member relation without regard to any other members. The part-whole relation depends solely on the constraints that determine cohesion, and it is fundamental for understanding ontological relations between hierarchy members. However, there are interesting dynamical hierarchies that are not based on the component relation.
Extensive constraints apply in virtue of the relations between more than one member that are R_H related to the same member of the hierarchy. An important sort of extensive constraint is additive. For example, an energy hierarchy is constrained at each higher level by the sum of the energy of the members at each of the lower levels; likewise for information hierarchies and organization hierarchies. The R_H in such cases constrains the relationship between levels such that the extensive property value of the higher level is the sum of the values of the extensive property of the next lower levels. Formally, if E_{i,n} is the value of the extensive property of the nth component at the ith level, then R_H(x_{i,n}, x_i) = {<x_{i,n}, x_i>: E_i = ∑_{m∈N} E_{i,m}, n ∈ N, and m ∈ N if and only if <x_{i,m}, x_i> ∈ R_H}. Note that, given the additive nature of an extensive property, the constraint is asymmetric, and it is transitive between levels. This definition is obviously impredicative. To spell this out, consider energy. The central notion of the energy constraint is that the energy of the higher level is the sum of the energies of the members of the next lower levels. So the constraint is a relation between the higher level and each member of the lower level such that the sum of the energies of the members of the lower level equals the energy of the higher level, and the member of the lower level in each case is such that the constraint holds. This might appear awkward. Why not just define the constraint as requiring that each member of a level is such that the sum of the energies at that level is equal to the energy of the next highest level? The answer is that the relation between the levels is defined only if the constraint is defined. Therefore we can speak of the relation between levels only in terms of the place in the hierarchy, which requires that the constraint is given. Energy summation between levels is asymmetric: the sum is equal to the components of the sum only if there is only one component, but then the cohesion conditions of the lower and upper level are the same, and there is only one level. It is also obvious that energy summation is a transitive constraint. Note that transitivity is a logical property of the hierarchy constraint relation, and does not imply that anything dynamical is passed between levels (such as energy in this case), although something that is passed could be the basis of transitivity.
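To make the additive case concrete, the following sketch (my own illustration, not drawn from the paper; all names and values are hypothetical) builds a three-level hierarchy in which each composite's energy is constrained to be the sum of the energies of its immediate components, so that the constraint between top and bottom levels follows transitively from the nesting, and the 'component of' relation is asymmetric by construction.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    """One level member; composites never appear among their own components (asymmetry)."""
    name: str
    energy: float = 0.0                       # extensive property E for leaf elements
    components: List["Node"] = field(default_factory=list)

    def total_energy(self) -> float:
        # Leaf nodes carry their own energy; composites are constrained to the
        # sum over their immediate components (the additive R_H of the text).
        if not self.components:
            return self.energy
        return sum(c.total_energy() for c in self.components)

# Bottom level: elements with fixed (illustrative) energies
a, b, c, d = Node("a", 1.0), Node("b", 2.0), Node("c", 0.5), Node("d", 1.5)
# Middle level: two composites
m1, m2 = Node("m1", components=[a, b]), Node("m2", components=[c, d])
# Top level: the whole system
top = Node("top", components=[m1, m2])

# Transitivity of the additive constraint: the top's energy is also the sum
# over the bottom-level elements, two levels down.
assert top.total_energy() == sum(x.energy for x in (a, b, c, d))
print(top.total_energy())   # 5.0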
Hierarchies based on extensive properties need not involve a strictly additive constraint. The constraint could be partial. This is the case for the hierarchies of information expression discussed by Brooks and Wiley [4,5,6,7,11] and in the next section. The expressed information is some part of the sum of the partial information of the components. There are also extensive organizational hierarchies in which higher level organization is just a part of lower level organization, or, more often, vice versa. Abstracting to just these extensive aspects, however, ignores any dynamical role of the defining relation of the hierarchy in its dynamical structure. Typically, what determines the summations and partitions of an extensive hierarchy is a much more significant issue than the existence of the extensive hierarchy itself. This is especially true in more complex extensive hierarchies, in which the extensive property is a more complex function than (partial) summation. Averages are a good example. Statistical mechanical properties of macrostates are averages of component properties. Typically an average defined with respect to constraints at one level does not bear a simple relation to averages defined at another level. Carrying averages across levels without regard to dynamics is called the averaging fallacy. It has been prevalent in discussions of group selection in biology [44], but it is widespread in statistical science.
One of the advantages of restriction to dynamical properties, and paying attention to the closure requirements for the cohesion of levels with respect to specific dynamical processes, is that, if properly executed, it does not permit the averaging fallacy. A common consequence of the averaging fallacy is a mistaken claim of reduction of a higher level property to a lower level property, or of higher level phenomena to lower level phenomena. Sober and Wilson give a number of examples from the controversy over group selection. Information, such as genetic information, that is expressed as some average or more complex extensive function can be selected as an average, and not merely as a consequence of the components of the average. Determining which is occurring requires a dynamical account of information expression.

4. Physical Information Systems

One of the first attempts to give a hierarchical account of information in living systems is due to Lila Gatlin. Gatlin [28] calls the Shannon-Weaver information “potential information”, since it is a measure of the capacity to carry real information. She measures the expressed information, which she calls “stored information”, so that it equals the product of the redundancy and the maximal information content, noting that it should be negentropic. Her notion is similar to the stored information of Brooks and Wiley [4], who based much of their initial discussion of biological information and information expression on Gatlin’s work. Their potential information, however, is quite different, since it is a) constrained by the actual probabilities of combination, b) defined as not being stored, and c) physically present.
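As a purely numerical illustration of Gatlin's quantities (the sequence and the uniform four-letter baseline are my own assumptions, not data from the paper), the following sketch computes the redundancy of a short nucleotide string and the stored information as the product of redundancy and maximal information content, which reduces to the difference between the maximal and the observed entropy.

import math
from collections import Counter

def shannon_entropy(seq: str) -> float:
    """Shannon entropy (bits per symbol) of the observed symbol frequencies."""
    counts = Counter(seq)
    n = len(seq)
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

seq = "ATGGCGATTGCAATGGCGTTA"          # illustrative sequence, not real data
h_max = math.log2(4)                   # maximal (potential) information per symbol
h_obs = shannon_entropy(seq)           # observed Shannon-Weaver entropy
redundancy = 1 - h_obs / h_max         # Gatlin's redundancy
stored = redundancy * h_max            # stored information per symbol (= h_max - h_obs)
print(f"H_max={h_max:.3f}  H_obs={h_obs:.3f}  "
      f"R={redundancy:.3f}  stored={stored:.3f} bits/symbol")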
In order for something to carry information, it must have a variety of possible states that are distinct from each other. A physical system that has this property must be made of components that are sufficiently distinct, and can combine in a variety of ways. This implies a dynamical distinction between relatively stable components, or elements of the system, and cohesive combinations that can preserve information long enough for us to say that the information has been carried. Strictly, these components need not be parts. For example, the wavelengths of a waveform might be the components of an information system, though they aren’t spatially distinct. In many cases, though, the components will be parts, such as the nucleotides of DNA and the codons of the genetic code. One must be wary, however, of assuming that information system components must be spatio-temporally delimited parts. The spatio-temporally delimited spikes found in neurons, for example, were widely thought to encode information in the nervous system, but now it appears that the rate of the spikes and other more complex functions of the spikes are the actual informational components. In any case, a physical information system must have two levels, one corresponding to elements that can combine in different ways, and another representing the combinations. The elements have only a potential for carrying information, which is realized only if they are combined to express some inter-related whole.
A physical information system can bear information whose properties depend only on properties internal to the system. The information is like Shannon-Weaver information, except that it is not an abstraction. It exists whenever there are relatively stable (cohesive) structures which can combine lawfully. The information carried by an element cannot be greater than the difference between the actual entropy of the element and its entropy if all its internal dynamical constraints (its cohesion) are released. This is equal to what Brillouin [3] calls the bound information. If it were more, then either lawfulness or the second law of thermodynamics would be violated. The actual value, however, is determined by its likelihood of combination with the other elements. The information content of a physical combination of elements (an array) is the sum of the contributions of the individual elements. For example, the nucleic acids have a structure which contains a certain amount of bound information (they are not just random collections of atoms), and can interact in regular ways with other nucleic acids (as a consequence, but not the only one, of their physical structure). The information capacity of a given nucleic acid sequence is determined by the a priori probability of that sequence relative to all the permitted nucleic acid sequences with the same molecules. The bound information, which will be greater, is determined by the probability of the sequence relative to all the random collections of the same molecules. (Nucleic acids, of course, have regular interactions with other structures, so the restriction of the information system to just nucleic acid sequences is questionable. We can justify singling out these sequences because of their special role in ontogeny and reproduction.) The lawful (regular) interactions of elements of an information system determine a set of (probabilistic) laws of combination, which we can call the constraints of the information system (see [42] for a simple example of constraints). Irregular interactions, either among elements of the information system or with external structures, represent noise to the information system.
The elements of an information system, since they are relatively stable, have fixed bound information. It is therefore possible to ignore their bound information in considering entropy variations. The elements are the “atoms” of the system, while the arrays are the states. The stored information of an array is a measure of its unlikelihood given the information system. The entropy (sensu Brillouin) of this unlikelihood equals the entropy of the physical structure of the array (see, e.g., [29] for an account of the entropy of biomolecules) minus the entropy of the information system constraints. This value is negative, indicating that the stored information of an array is negentropic. Its absolute value is the product of the redundancy of the information system and the Shannon-Weaver entropy. This is just Gatlin's stored information. Array entropy so calculated reflects more realistically what can be done with an information system than the Shannon-Weaver entropy. In particular, random alterations to an array make it difficult to recover the array.
This definition of array entropy is inadequate, since it is defined in terms of properties not in the system, namely the entropies of the constraints and the structure constituting the array. The entropy of a system is usually defined in terms of the likelihood of a given macrostate. Two microstates belong to the same macrostate if they have the same effect at the macro level (ignoring statistical irregularities). If we assume that all states must be defined internally to the system, the above analysis of arrays does not allow any non-trivial macrostates; each macrostate has just one microstate. This forces a definition of entropy in terms of elements not in the system, or else a “cooked” definition, like Shannon-Weaver entropy. A satisfactory definition of array entropy must be given entirely in terms of the defining physical properties of the information system elements. Such a definition can be given by distinguishing between actual and possible array states.
By assumption, the elements of the system are relatively stable and combine lawfully to form arrays. Possible maximal arrays of elements are the microstates. The macrostates are the actual array states. The microstates of an array are the possible maximal arrays of which it is a part. The information and entropy of a macrostate are defined in the usual way in terms of probabilities of microstates. In abstract information systems this definition degenerates, since arrays can be arbitrarily large. In realistic information systems, though, there is an upper limit on possible array size (though it might be somewhat vague). In organisms the maximum array size is restricted largely by the lengths of the chromosomes. In species it is restricted to the maximum number of characteristics of a member. (There must be such a maximum, since the amount of genetic information is finite.) The array information is a form of bound information, but also has an entropy defined only in terms of the information system characteristics. The external entropy of the null array is the entropy of the constraints on the information system. The external entropy of a maximal array is the base line from which the internal entropy can be measured. It can be called the entropy of the information system. The size of the information system is the difference between these two entropies:
Size = H_constraints - H_system.
The external entropy of an array is the internal entropy plus the entropy of the information system, equal to the entropy of the constraints minus the array information:
H_external = H_internal + H_system = H_constraints - I.
The internal entropy of information systems is an extension of the classical statistical entropy of thermodynamic systems. It treats information systems as closed with respect to information but open to matter and energy, whereas mechanical systems are closed if they allow energy to flow in and out of the system, but not matter. The internal entropy of an array is determined by the physically possible ways it could be realized, just as the entropy of a thermodynamic state is determined by its possible microstates. The internal entropy is no less physical than the thermodynamic entropy, unlike the sequence or configurational entropy of Shannon-Weaver information. Array information is a special case of message information, just as bound information is a special case of abstract information. In this sense it is not anthropomorphic to speak of a biological code or a chemical message.
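The following toy sketch is one way to make this bookkeeping computable; it is my own construction, not the paper's. It assumes that the permitted maximal arrays are equiprobable strings over a small alphabet with a simple combination constraint (no immediate repeats), and that each element has a fixed number of indistinguishable lower-level realizations, which supplies a nonzero entropy for the information system. Under those assumptions the identities Size = H_constraints - H_system and H_external = H_internal + H_system = H_constraints - I can be checked directly.

import math
from itertools import product

ALPHABET = "ABC"   # illustrative element types
L = 6              # assumed maximal array length
K = 4              # assumed indistinguishable lower-level realizations per element

def permitted(arr) -> bool:
    # toy combination constraint: no element immediately repeats itself
    return all(x != y for x, y in zip(arr, arr[1:]))

maximal = [''.join(p) for p in product(ALPHABET, repeat=L) if permitted(p)]

H_system = L * math.log2(K)                          # external entropy of one maximal array
H_constraints = math.log2(len(maximal)) + H_system   # external entropy of the null array
size = H_constraints - H_system                      # Size = H_constraints - H_system

def entropies(prefix: str):
    n = sum(m.startswith(prefix) for m in maximal)   # possible maximal arrays containing the array
    H_internal = math.log2(n)                        # internal entropy of the array
    H_external = H_internal + H_system
    I = H_constraints - H_external                   # array information (its unlikelihood in bits)
    return H_internal, H_external, I

H_int, H_ext, I = entropies("ABAB")
assert abs(H_ext - (H_constraints - I)) < 1e-9       # H_external = H_constraints - I
print(f"Size={size:.2f}  H_internal={H_int:.2f}  H_external={H_ext:.2f}  I={I:.2f} bits")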

5. Information and Entropy in Hierarchies

Information in components is potential information (more carefully, information capacity) that can be expressed in an array, though typically only part of the total information of the elements will be expressed. This sets up a somewhat degenerate two level hierarchy based on an extensive property. Using the Gatlin-Brooks-Wiley terminology, the distinguished set of higher level messages contains the stored information of the information system, while the variants contain the potential information. The stored information is what distinguishes a system from other systems. The cohesion of higher levels in physical information systems is the basis of their individuation. It also constrains the expression of potential information. Implicit in this two level hierarchy, however, are the external and system constraints, which are based on the dynamical properties of the elements, and represent dynamical constraints on the system. If we assume that the larger system constraints are also cohesive (required for its individuation as a real system), then the two level hierarchy is embedded in a larger informational hierarchy. We can multiply levels by adding further intermediate levels of cohesion, in which each lower level provides potential information that can be expressed at the next higher level.
There are at least four levels in the hierarchy of biological information. (In fact there are many intermediate levels based in a complex heterarchy, but I grossly oversimplify for the sake of some clarity.) The lowest level is chemical, containing, among other things, DNA, RNA and proteins. The elements are nucleic acids, amino acids and the like. The arrays are macromolecules. At the next level, genetic information is stored in the macromolecules, but not all macromolecules are involved in storing genetic information. Some are involved only in “housekeeping” activities which maintain the chemical system. Others might do nothing. These two groups make up the potential information at the genetic level.
The next clear-cut level is the phenotype. The genetic information determines its characteristics. Not all genetic information is expressed. A recessive allele in a heterozygous individual is an example. The expressed genetic information is the stored information at this level. The potential information is the unexpressed genetic information.
In Brooks and Wiley [4], the information expressed at species level is the set of characters common to all members of the species. This is not quite right, since all members of a species might share some character just by accident. Note that this information is still expressed information of the organisms that are members of the species, and can be potential with respect to the species level, but careful attention to the level at which information is integrated into cohesion disallows accidental regularities in the sum of the information of the members of a species being species information. A biological species is individuated by being closed under the possibility of successful interbreeding. The information expressed in the species, then, is the set of characteristics required to allow closure under the possibility of successful interbreeding. All other characteristics which are expressed are part of the potential information of the species. The species information literally defines the species, but the potential information indicates that it has the capacity to redefine itself. Note that the species level information in this case is defined in terms of the cohesion (due to interbreeding possibilities and resulting gene flow) of the species. Thus the definitions of cohesion, level and dynamical hierarchy determine the relevant sort of information expression, but also enforce how this information can be expressed dynamically. Any change in the species information, whether originating through self-organization or externally imposed, requires a change in cohesion, with possible resulting changes in levels (species splitting to become a family of species) or hierarchical structure (e.g., involuted distinct sub-populations within a species).
Although the information content of the lower levels is irrelevant to defining the properties of the information system at the higher levels, there is a clear sense in which the higher level information depends on the information at the lower levels: if the lower level information did not exist, neither would the higher level information. The possibility of the higher level information system depends on the maintenance of the stability of the lower level. Variations at the higher level are created by changes at lower levels. The physical processes which maintain the information system can occur at a low level, and as long as they yield relative stability at the higher level, they can largely be ignored. One important constraint, however, is that all higher level processes must be physically compatible with the lower level processes they depend on. In particular, any entropy and information produced at the higher level must be equal to or less than the entropy and information produced at the lower level (viewed externally).

6. Evolving Information Systems

The strong analogy between microstates and macrostates in statistical mechanics and in physical information theory suggests that the creation of new information through the addition of elements to a physical information system can increase the internal information capacity of an array, just as the expansion of a thermodynamic phase space can increase the negentropy, if the expansion is faster than the rate of relaxation.
It is fairly easy to demonstrate this. New information can come either from a new element incorporated into an array, or from new constraints on the array elements. By assumption, the constraints are a consequence of the physical properties of the elements, so the latter case could occur only through replacement of one array element by another. This is equivalent to the deletion of an element and the addition of a new one. The addition of a new element to an array can occur either through the addition of an element already in the information system, or through the addition of an element new to the information system. The former case involves a reduction of entropy, and can occur regularly only if this entropy loss is compensated by increases elsewhere in the system. The latter case involves an entropy increase unless the new element is constrained to occur at only one location in the array, in which case the entropy remains constant. The entropy increase arises from the possibility of the new element substituting for others in the permitted possible maximal arrays. Deletion is the inverse case of addition.
Replacement will increase both entropy and information if the deleted element remains in the system, the added element is new to the system, and the added element has greater information content than the old. In summary, addition increases both entropy and information if a new type of element is added to the system; replacement increases both if the replaced element is of a type which remains in the system, the replacement is of a new type, and the replacement contributes more information to the array than the replaced element; deletion can never increase both entropy and information. Information new to the system is just noise which becomes incorporated into the system. Information and entropy both increase due to the incorporation of this noise.
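A small numerical check of the addition case, using the same illustrative constraint as the earlier sketch (equiprobable fixed-length arrays with no immediate repeats; the alphabets and length are my own choices): adding a new element type to the system raises both the entropy over permitted maximal arrays and the information carried by any one fixed array, since that array becomes less likely relative to the enlarged system.

import math
from itertools import product

def permitted_arrays(alphabet: str, length: int):
    # permitted maximal arrays under the toy no-immediate-repeat constraint
    return [''.join(p) for p in product(alphabet, repeat=length)
            if all(x != y for x, y in zip(p, p[1:]))]

L = 4
for alphabet in ("ABC", "ABCD"):        # before and after adding element type "D"
    arrays = permitted_arrays(alphabet, L)
    H = math.log2(len(arrays))          # entropy over permitted maximal arrays
    I = math.log2(len(arrays))          # information of one fixed maximal array
                                        # (a maximal array has exactly one consistent microstate)
    print(f"alphabet={alphabet}  H={H:.2f} bits  I(fixed array)={I:.2f} bits")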
Information expressed at a given level must satisfy more constraints at that level than the potential information, so it seems that potential information should increase faster than stored information, yet we observe that variation within a species is lower than this might lead us to expect. The answer is to be found in reproduction. In order to understand this phenomenon, we must distinguish between absolute and distributed information [3]. Information can be redundant in two different ways [28]. Redundancy can result from constraints on the combination of elements, called structural redundancy, or else from the repetition of combinations, called repetitive redundancy. Both are similar in that they protect the system from error due to noise. Absolute information is a measure of structural redundancy. It can be distributed repetitively among a number of identical arrays. The distributed information is the absolute information content multiplied by the degree of repetitive redundancy. Reproduction increases the repetitive redundancy.
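A minimal arithmetic sketch of the distinction (the numbers are invented): repetitive redundancy from reproduction multiplies the distributed information without changing the absolute information of any single array.

absolute_bits = 12.0        # absolute information of one array (structural redundancy), illustrative
copies = 500                # degree of repetitive redundancy from reproduction
distributed_bits = absolute_bits * copies
print(distributed_bits)     # 6000.0; losing copies reduces this, not the absolute content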
Information expressed at the species level has a high repetitive redundancy, since it must be duplicated throughout the species. A deletion of stored information through variation in a species member (interference by noise) would interfere with successful breeding, thereby eliminating the variants from the species. Such variants might be able to reproduce, in which case a new species would appear. The degradation of stored information at the species level results, then, either in the casting off of the information entirely, or else in a speciation event. If the degradation is too fast, the species disappears. In order to avoid this, the reproduction rate must be higher than the production rate of variants to the stored information. Furthermore, potential information at the species level is unnecessary for reproduction, so it can degrade without destroying the species. Consequently, stored information will tend to be preserved, while variation will be maintained at a more or less constant level.
This effect at the species level has ramifications for the lower levels as well. Genetic potential information cannot contribute to reproduction, therefore it will tend to be in equilibrium with mutations. Likewise for potential chemical information (except inasmuch as it is required for maintenance of the information system). Stored chemical and genetic information will tend to be preserved to the extent that they contribute to stored information at the species level, whether in previously existing species or in species produced by new variation. Species act as filters which preserve some information and cast off (dissipate) other information. By far the larger number of variations will be cast off, due to the fairly stringent constraints on stored information. The production and maintenance of species feeds on the production of information at lower levels; only that which is reproduced survives. Species are informational dissipative structures, since they preserve themselves by storing information repetitively and produce themselves by capturing variant information produced at lower levels, increasing their internal entropy. In the process they cast off information which does not contribute to these processes.
High repetitive redundancy is inherently unstable, since more individual arrays are likely to vary. The species is protected by high redundancy, but is also more susceptible to variation. Most individual stored information variations will be cast off, but some will reproduce. The more repetition, the more likely this is. High repetitive redundancy of stored information, therefore, tends to produce speciation.
Species cohesion can be explained fairly easily using the notions of potential information and repetitive redundancy. When cohesion is high, there is little tendency for sub-species to form which have a high degree of internal repetitive redundancy of potential information. This becomes more likely as cohesion is lost, since characteristics peculiar to a sub-species are then less likely to be propagated through the species as a whole. The higher repetitive redundancy of the potential information makes it more likely that a loss of stored information can be replaced by the redundant potential information in the sub-species, resulting in speciation. This is particularly obvious in the case in which characteristics are the cause of the lack of cohesion, since the very characteristics which make it more difficult for the sub-species to interbreed with the rest of the population are exactly those which make it more likely that loss of stored information can be replaced by potential information. Loss of stored information is not even required in these cases; the potential information can be converted directly into stored information if it becomes a prerequisite for breeding. Although cohesion increases the stored information entropy of the species by tending to equalize the probability of its microstates, it is itself entropic since it tends to produce isolated sub-systems. The two are balanced at constant entropy, but can increase together when stored information is added.
What is interesting about this sort of process is that it can occur away from equilibrium. If it does, then in principle structures analogous to dissipative structures can form and be maintained. There is nothing special about species cohesion in this respect; any level within a physical information hierarchy is capable of self-organization. This is true at the genetic level, in development to form phenotypes, and at various finer levels in between. We might expect, then, that processes of information expression also involve the generation of new macro-level information that is only potential in the lower levels, and that is not specifically determined by those levels, though it is permitted by them. One example of how this might happen in adaptive space is given in [12], and a discussion of how this might happen in semiotic processes is given in [17]. Brooks and Wiley [4,6] give a number of other processes in biology that fit this pattern, and the idea has been applied in botanical development as well [33].
This gives some idea of how an information system can evolve according to its own dynamics. External dynamics are also important. In biology, perhaps the most important external factor is selection. Since selection works only through elimination, its function is to remove information or, more accurately, to reduce the size of the information system. This is true of selection at any level, whether it works on populations of organisms, within an organism during development, or in other selection regimes, such as the immune system. This removal of information through selection can be added directly onto the sort of information dynamics described above. The most important thing to observe about selection is that it is not itself creative; other dynamics are required to explain the appearance and increase of information capacity. This suggests that a full account of meaningful information in biology cannot be limited to selection processes and the basis that they give for adaptation.

7. Meaning and Function

It is tempting to assume that the expression of information automatically makes the expressed form the referent of the potential information. For example, traits are often taken to be not just the expression of genes; the information in the genes is taken to be about the traits. Sometimes this is implicit, as in metaphors of the genes as blueprints or programs for the whole organism. Aside from problems with the need for additional information and the nontrivial passage from potential to expression, the relation of “containing information about” is symmetrical: just as the genes contain information about traits, the traits contain information about the genes. Expression, on the other hand, is an asymmetrical relation, so perhaps meaningfulness arises through the act of expression. By a similar argument, but going a bit further up the scale to the behavioral expression of genetic information through coordinated interactions with the environment, we might say that the genes encode information about the environment. Note that this is similar to what we think when we say that words express the information in our ideas, but in that case we are more inclined to say that the words encode the ideas than the other way around. Adding in the usual biological function-generating ideas of selection for adaptive traits and self-maintenance through organization does not help with this apparent disanalogy. If genes are like a code, then they do not express information about phenotype or environment in the same way that words express information about our ideas, at least not on the surface.
The difficulty seems to me to arise through an impoverished view of reference, in which coordination is assumed to be sufficient for reference. If we reflect back on language, we can observe that words function within a linguistic context, and that they get their meaning not merely through coordination with some object or property, but through mediation by contextual connections with other words. This is a feature of C.S. Peirce's semiotics, in which reference requires a tripartite relation that includes the Interpretant, an irreducibly triadic relation. Peirce also emphasized the necessity of habits, or regular forms that systems tend to take. Recalling that expression of information involves a binding of basic components of the information system, it seems natural to look on this as a sort of habit, or triadic structure. If we assume this, then we could try on the idea of the expressed information as a Peircean Interpretant. If we do so, then these expressions of information are not themselves signs, but they mediate the relations of signs and their objects. In this case, we could see the phenotypic information, for example, as mediating the relation between the information in the genes and that in the environment. This would give a basis for seeing particular genes as representing certain environmental features. There is a coordination established, but only in the context of the mediating phenotype. The simple coordination of genes with environmental features, even given selection history, is not enough to establish the meaning of the genes. Similarly, we can relate the genes to traits through the mediation of the phenotype, but not in isolation. A more fine-grained treatment would probably have to augment each of these accounts with a discussion of the relevant origins of the phenotypic information, adaptive selection in the first case, and development in the second. This has the attractive result that adaptation produces the meaning of the genes in the context of selection, whereas development produces the meaning of the genes in the context of the blueprint model. The role of the phenotype in this case is to act as a sort of final cause that gives a directed or intentional character to the expression of genetic information. It also explains a widespread reluctance to reduce functionality in genetics to mere gene-trait or gene-environmental feature relations. The phenotype plays an essential role in providing context for this functionality. Furthermore, the expression of information in the phenotype has the appropriate organism-centered character to allow us to refer to information for the organism in terms of its mediation, and it satisfies the requirement that information must involve some sort of choice or selection among alternatives.
Despite its attractiveness from a Peircean perspective, the view as presented above fails the requirement of asymmetry. Just like the purely coordinative views, it allows that traits, in the context of the phenotype, might mean certain genes, or that environmental features, in the context of the phenotype, might also mean certain genes. This suggests that the purely informational is insufficient, and that more is required in order to give an adequate theory of meaning for biological information. As it stands, there is a complete parity between supposed sign and object, allowing them to be reversed. What we need is a way to fix what functions as a sign. This leads naturally to the question of “sign for what?” Typically, signs serve as vicariants, or stand-ins, for the objects they represent within some context of other objects and signs. In order to achieve this, we need an account of sign function, which requires an account of functionality in general. Unless we have built this into our account of information from the beginning, either implicitly or explicitly, this takes us beyond the theory of information systems that is the subject of this paper. I will remark, though, that adaptation alone, or development alone, ensures only coordination, and neither gives a basis for the information asymmetry that we require, unless there are further assumptions hidden in our theories of these processes. If there are, I suggest that these assumptions go beyond information theory, and take us into the science of semiotics, or semiology. What might be surprising, especially to both pansemioticists and paninformationists, is that the resolution of the parity problem requires an appeal to function. So far this is absent from both approaches.
Information expression within information systems can give us some necessary conditions for biological meaning, such as a possible interpretant, sign and object, but it lacks the finesse to take us through the door of meaning. Nonetheless, it helps to put some constraints on a justifiable biological semiotics. It also helps to clarify a major lapse in both paninformational and pansemiotic approaches to the issues of information and meaning.
Brooks and McLennan [8,9] discussed this issue with respect to biological signaling. They concluded that most signaling done by organisms is signaling to oneself, both about itself and about its conditions. Whether this signaling is intentional or not is moot. Some of this self-conversation may produce changes in the organism detectable by other organisms. The meaning that those other organisms place on the “signaler” is not caused by the intentions of the signaler, relieving us of the burden of having to postulate a causal link between intention and meaning, or function of the sign. Breaking this link permits signals and meanings to evolve in a purely Darwinian manner (i.e., accidentally, with both costs and benefits, so long as the benefits outweigh the costs by at least a tiny bit). So, for example, a male stickleback turns red as a result of biochemical changes related to testosterone levels. The color change is a by-product of an internal chemical signal from the animal to itself, telling it that it is ready to breed. In that sense, the color change is completely unintentional. However, the color change does occur and as a sign may have different functions for different receivers in the environment (“mate” to a female stickleback, “dinner” to a heron). And so long as “mate” benefits are slightly greater than “dinner” costs, the system will continue.
Brooks and McLennan’s work shows that we have a level of pure syntactic information that can be treated without reference to meaning: the by-products of the chemical signals (which may themselves require further semiotic analysis), which have no function per se, but are merely coordinated with those signals. Such things are really pure syntax, but their correlation with functional signals at the chemical level gives them a role as a sign of the functional activity within the male. The female can then use this information, though without necessarily having intentions, and without requiring knowledge of what it means in terms of the precise function it represents in the male. This indicates that it is much too hasty to jump in at the level of intention, or even to assume the transitivity of functionality across signs. The nature of information expression, its functional nature (or not) in itself, and its function as a sign must all be understood before making any judgments about intentions or representative meaning. This is the flip side of the gap mentioned above between information expression and full-blown representational meaning.

Acknowledgements

I am grateful to Steve Kercel for discussions on impredicativity, which I have incorporated, and to Stan Salthe for discussions of hierarchies, which I have not used, but which made clear to me why his hierarchies are not useful for my purposes. Stan also made many editorial suggestions that I have incorporated. Dan Brooks has made some useful suggestions concerning his more recent work with Debbie McLennan. Any errors or omissions that remain are, of course, my own. I am also grateful to INTAS and to the Konrad Lorenz Institute for Evolution and Cognition Research for support during the preparation of this document.

References

  1. Barwise, J.; Etchemendy, J. The Liar; Oxford University Press: Oxford, 1987. [Google Scholar]
  2. Barwise, Jon; Seligman, Jerry. Information Flow: The Logic of Distributed Systems; Cambridge University Press: Cambridge, 1997. [Google Scholar]
  3. Brillouin, L. Science and Information Theory, 2nd edition; Academic Press: New York, 1962; pp. 265–266. [Google Scholar]
  4. Brooks, D.R.; Wiley, E.O. Evolution as Entropy: Toward a Unified Theory of Biology, 2nd edition; University of Chicago Press: Chicago, 1988. [Google Scholar]
  5. Brooks, D.R. The nature of the organism: life takes on a life of its own. Proceedings of the New York Academy of Science 2000, 901, 257–265. [Google Scholar] [CrossRef]
  6. Brooks, D.R. Evolution in the Information Age: Rediscovering the Nature of the Organism. SEED. 2001. http://www.library.utoronto.ca/see/pages/SEED%20journal%20library.html.
  7. Brooks, D.R. Taking evolutionary transitions seriously. SEED. 2002. http://www.library.utoronto.ca/see/pages/SEED%20journal%20library.html.
  8. Brooks, D.R.; McLennan, D.A. Biological signals as material phenomena. Revue de la pensée d’aujourd’hui 1997, 25, 118–127. [Google Scholar]
  9. Brooks, D.R.; McLennan, D.A. The nature of the organism and the emergence of selection processes and biological signals. Semiosis, Evolution, Energy: Towards a Reconceptualization of the Sign, Taborsky, E., Ed.; Bochum Publications in Semiotics New Series, Shaker Verlag: Aachen, 1999; 3, 185–218. [Google Scholar]
  10. Chaitin, Gregory J. Algorithmic Information Theory; Cambridge University Press: Cambridge, 1987. [Google Scholar]
  11. Collier, John. Entropy in Evolution. Biology and Philosophy 1986, 1, 5–24. [Google Scholar] [CrossRef]
  12. Collier, John. Supervenience and Reduction in Biological Hierarchies. Philosophy and Biology: Canadian Journal of Philosophy Supplementary, Matthen, M., Linsky, B., Eds.; 1988; 14, 209–234. [Google Scholar]
  13. Collier, John. Intrinsic Information. In Information, Language and Cognition; Hanson, Philip P., Ed.; Oxford University Press: Oxford, 1990. [Google Scholar]
  14. Collier, John. Information originates in symmetry breaking. Symmetry: Culture & Science 1996, 7, 247–256. [Google Scholar]
  15. Collier, John. Information Increase in Biological Systems: How Does Adaptation Fit? In Evolutionary Systems; van de Vijver, Gertrudis, Salthe, Stanley N., Delpos, Manuela, Eds.; Kluwer: Dordrecht, 1998; pp. 129–140. [Google Scholar]
  16. Collier, John. Autonomy in anticipatory systems: significance for functionality, intentionality and meaning. Computing Anticipatory Systems. CASYS'98 - Second International Conference, Dubois, D. M., Ed.; American Institute of Physics: Woodbury, New York, AIP Conference Proceedings 465. 1999, 75–81.
  17. Collier, John. The Dynamical Basis of Information and the Origins of Semiosis. Semiosis, Evolution, Energy: Towards a Reconceptualization of the Sign, Taborsky, Edwina, Ed.; Bochum Publications in Semiotics New Series, Shaker Verlag: Aachen, 1999; 3, 111–136. [Google Scholar]
  18. Collier, John. Causation is the Transfer of Information. In Causation, Natural Laws and Explanation; Sankey, Howard, Ed.; Kluwer: Dordrecht, 1999; pp. 279–331. [Google Scholar]
  19. Collier, John. Information Theory as a General Language for Functional Systems. In Anticipatory Systems: CASYS'99 - Third International Conference; Dubois, D. M., Ed.; American Institute of Physics: Woodbury, New York, AIP Conference Proceedings; 2000.
  20. Collier, John. Autonomy and Process Closure as the Basis for Functionality. Closure: Emergent Organizations and their Dynamics, Jerry, L.R., van de Vijver, Gertrudis, Eds.; Volume 901 of the Annals of the New York Academy of Science. 2000; 280–291. [Google Scholar]
  21. Collier, John. Dealing With the Unexpected. In Anticipatory Systems: CASYS 2000 - Fourth International Conference; Dubois, D.M., Ed.; American Institute of Physics: Woodbury, New York, AIP Conference Proceedings; 2000.
  22. Collier, John. What is Autonomy? In International Journal of Computing Anticipatory Systems: CASYS 2001 - Fifth International Conference, 2001; Dubois, D.M., Ed.;
  23. Collier, John; Burch, Mark. Order From Rhythmic Entrainment and the Origin of Levels Through Dissipation. In Symmetry: Culture and Science Order / Disorder, Proceedings of the Haifa Congress; 1998; 9, pp. 165–178. [Google Scholar]
  24. Collier, John; Burch, Mark. Symmetry, Levels and Entrainment. In Proceedings of the International Society for Systems Sciences; 2000. [Google Scholar]
  25. Collier, John; Hooker, C.A. Complexly Organised Dynamical Systems. Open Systems and Information Dynamics 1999, 6, 241–302. [Google Scholar] [CrossRef]
  26. Collier, John; Hooker, C.A. Reduction in Complex Systems, Manuscript.
  27. Collier, John; Muller, S.J. The Dynamical Basis of Emergence in Natural Hierarchies. In Emergence, Complexity, Hierarchy and Organization, Selected and Edited Papers from the ECHO III Conference, Acta Polytechnica Scandinavica, MA91; Farre, George, Oksala, Tarkko, Eds.; Finnish Academy of Technology: Espoo, 1998. [Google Scholar]
  28. Gatlin, L.L. Information Theory and the Living System; Columbia University Press: New York, 1972; pp. 49, 70. [Google Scholar]
  29. Holzmüller, W. Information in Biological Systems: the Role of Macromolecules; Cambridge University Press: Cambridge, 1984; pp. 92–94. [Google Scholar]
  30. Kolmogorov, A.N. Three Approaches to the Quantitative Definition of Information. Problems of Information Transmission 1965, 1, 1–7. [Google Scholar] [CrossRef]
  31. Kolmogorov, A.N. Logical Basis for Information Theory and Probability Theory. IEEE Transactions on Information Theory 1968, 14, 662–664. [Google Scholar] [CrossRef]
  32. MacKay, Donald M. Information, Mechanism and Meaning; MIT Press: Cambridge, MA, 1969. [Google Scholar]
  33. Maze, Jack; Robson, Kathleen; Banerjee, Satindranath. Studies into abstract properties of individuals: An empirical study of emergence in ontogeny and phylogeny in Achnatherum nelsonii and A. lettermanii. S.E.E.D. 2001. http://www.library.utoronto.ca/see/pages/SEED%20journal%20library.html.
  34. Pattee, H. H. (Ed.) Hierarchy theory: the challenge of complex systems; Braziller: New York, 1973.
  35. Picard, J.R.W.M. Impredicativity and turn of the century foundations of mathematics: Presupposition in Poincaré and Russell. MIT dissertation in philosophy, 1993. [Google Scholar]
  36. Reynolds, Craig. Boids. 2001. http://www.red3d.com/cwr/boids.
  37. Rosen, R. Life Itself; Columbia University Press: New York, 1991; p. 44. [Google Scholar]
  38. Rosen, R. Essays on Life Itself; Columbia University Press: New York, 2000. [Google Scholar]
  39. Salthe, S.N. Evolving Hierarchical Systems; Columbia University Press: New York, 1985. [Google Scholar]
  40. Salthe, S.N. Development and Evolution: Complexity and Change in Biology; MIT Press: Cambridge, MA, 1993. [Google Scholar]
  41. Schrödinger, E. What is Life? Reprinted in What is Life? and Mind and Matter; Cambridge University Press: Cambridge, 1944. [Google Scholar]
  42. Shannon, C.E.; Weaver, W. The Mathematical Theory of Communication; University of Illinois Press: Urbana, 1949; p. 38. [Google Scholar]
  43. Smith, J.D.H. Canonical ensembles, evolution of competing species, and the arrow of time. In Evolutionary Systems: Biological and Epistemological Perspectives on Selection and Self-organization; van de Vijver, G., Salthe, S.N., Delpos, M., Eds.; Kluwer: Dordrecht, 1998; pp. 141–153. [Google Scholar]
  44. Sober, Elliott; Wilson, David Sloan. Unto Others; Harvard University Press: Cambridge, MA, 1999. [Google Scholar]
  45. Ulanowicz, R.E. Ecology, the Ascendant Perspective; Columbia University Press: New York, 1997. [Google Scholar]
  46. Wiley, E.O.; Brooks, D.R. Victims of history: a nonequilibrium approach to evolution. Syst. Zool. 1982, 31, 1–24. [Google Scholar] [CrossRef]
Notes

  1. Steve Kercel, on the Complexity-L mailing discussion list, has emphasized this aspect of complex systems, and was instrumental in drawing my attention to its importance. See http://views.vcu.edu/complex/discussion/dlist.htm.
  2. The notion of centripetality comes from Ulanowicz [45]. The notions of centrifugality and balance are implicit in his account, as he mentioned to me privately.
  3. Salthe [39,40] describes cross-level processes as unlikely because processes within one level are screened off from other levels. In the case of respiration, however, a single flow of energy crosses the organism boundary and cell boundaries to be involved in individual molecular processes at that classification level. This continuity requires a single trans-level process. Information expression, as a process, can also cross levels; in fact all emergent processes will span levels. Contrary to Salthe, such trans-level processes are common in biology. Salthe's oversight seems to me to stem from his attempt to classify objects and processes by classification levels, imposing an artificial separation that depends on the classes of things involved rather than on observable and manipulable dynamical connections in specific systems, such as a single organism. Maturana and Varela make a similar error throughout their work [22].
  4. Examples of such levels are found in various papers in [34], as well as in Salthe's hierarchies [39,40]. The mixing of classification with dynamical considerations is common throughout systems theory, perhaps because of a misguided view that we can only deal with things as they are classified. Some systems are real, and we interact with them, not just with our ideas of them. (For some examples of classification/dynamics confusions, see http://www.isss.org/hierarchy.htm.) In any case, since I deal here with dynamical levels as concrete particulars, and with concrete particular hierarchical structures rather than abstract hierarchies, I warn the reader of the difference. It is important for understanding the concept of a dynamical hierarchy below.
