CEPA eprint 2722

Autopoiesis and self-organization

Andrew A. M. (1979) Autopoiesis and self-organization. Journal of Cybernetics 9(4): 359–367. Available at http://cepa.info/2722
Table of Contents
Status of SOS Studies
The impact on SOS studies
Modeling self-preservation
The body schema
Consideration is given to the relevance of recent discussions of autopoiesis to the study of self-organizing systems. Mechanisms that could underlie the physical realization of an autopoietic system are discussed. It is concluded that autopoiesis does not, by itself, provide the essential ingredient whose omission has prevented SOS studies from being more productive. Two other important missing ingredients are discussed.
Maturana (1975) and his coworkers, notably Varela (1975), have introduced the term autopoiesis to designate what is arguably the most fundamental characteristic of living organisms. It refers to the capacity that living systems have to develop and maintain their own organization, the organization which is developed and maintained being identical with that which performs the development and maintenance.
Maturana seems to imply that the new point of view renders obsolete many of the older attempts to theorize about living organisms and particularly their nervous systems. Varela is less dogmatic, but also claims that the autopoietic viewpoint brings valuable new insight. The aim of the present paper is to consider how the viewpoint represented by some of the earlier attempts must be modified in light of the new ideas. The older viewpoint considered is that usually indicated by the term self-organizing systems (SOS). In particular, the attempt is made to decide whether autopoiesis provides the essential ingredient whose omission has prevented SOS studies from being more productive than they have.
The treatment confirms the view that autopoiesis provides new insight and may be regarded as contributing a missing ingredient of the SOS approach. Reference is made, however, to two other important missing ingredients which are not subsumed in the autopoietic viewpoint.
Status of SOS Studies
Interest in SOS was much in evidence in the 1950s and early 1960s, a number of conferences being devoted to the topic (Yovits and Cameron, 1960; Von Foerster and Zopf, 1962; Yovits, Jacobi, and Goldstein, 1962; Garvey, 1963). The essential idea is, and must remain, somewhat intuitive; some reasons for this have been discussed in previous publications (Andrew, 1973). In the first place, any term that refers to the self-modification of a system must be observer-dependent, since the same system may be described either as self-modifying or as a fixed system. In the terminology used by Glushkov (1966) a self-organizing system is decomposable into an operative automaton and a learning automaton. Whether a given system is judged to be self-organizing depends on whether an observer feels that such a decomposition provides an appropriate description.
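Glushkov's decomposition can be made concrete in a short sketch (the class names, the threshold rule, and the learning rule here are all illustrative assumptions, not taken from Glushkov's text): an operative automaton maps stimuli to responses under one fixed parameter, while a learning automaton adjusts that parameter from a training signal. The observer-dependence noted above then amounts to a choice of description for the combined machine.

```python
import random

class OperativeAutomaton:
    """Fixed stimulus-response machine; its behavior is set by one parameter."""
    def __init__(self, threshold):
        self.threshold = threshold

    def act(self, stimulus):
        return 1 if stimulus > self.threshold else 0

class LearningAutomaton:
    """Adjusts the operative automaton's parameter from a training signal."""
    def __init__(self, operative, rate=0.1):
        self.operative = operative
        self.rate = rate

    def adapt(self, stimulus, desired):
        error = desired - self.operative.act(stimulus)   # +1, 0, or -1
        self.operative.threshold -= self.rate * error

# The pair can be described either as a single fixed automaton over the
# joint state (stimulus, threshold) or as a self-modifying system; which
# description is appropriate is the observer's judgement.
random.seed(0)
op = OperativeAutomaton(threshold=0.9)
learner = LearningAutomaton(op)
for _ in range(200):
    s = random.random()
    learner.adapt(s, desired=1 if s > 0.5 else 0)
print(round(op.threshold, 2))   # the threshold has drifted to the 0.5 boundary
```

The operative automaton on its own is a fixed system; only the pair, viewed together, invites the description "self-organizing."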
The idea of SOS is also intuitive because it depends on a subjective judgement as to whether the changes that occur in the system as it interacts with its environment are sufficiently fundamental to warrant the term self-organizing as opposed to, say, self-optimizing. There are, in addition, some constraints on the means by which these changes may arise; the elements of the system must behave autonomously. That is to say, the changes should be determined locally within the system and not in response to the commands of an “adaption center” able to view the whole system.
The main objections that have been made to the SOS idea, however, have not been on grounds of imprecision of the concept. A number of workers have followed Ashby (1962) in suggesting that SOS studies are unprofitable because there is nothing to study. The implication is that there is nothing more which need, or can, be said once an adaptive system has been described as a state-determined system whose operating point drifts through a phase-space until stability is reached. Such a mode of description is valid but likely to be impracticable for nontrivial systems. Ashby has presented a somewhat different point of view in another publication (1960) where he talks about “Amplifying Adaptation.”
For at least the last decade, most workers in artificial intelligence (AI) have kept well clear of SOS ideas. Various reasons have been advanced but the main point is that SOS studies have been disappointingly unproductive.
By abandoning any attempt to make systems which self-organize, workers in AI have been able to forge ahead and produce spectacular results. Their programs, however, depend heavily on heuristic principles derived by manually-directed experimentation and use of the programmers’ own insight into the problems to be solved. The artificial intelligence of the systems produced is strongly laced with natural intelligence.
The autopoietic viewpoint emphasizes the essential circularity of the living organism. The idea of circularity, subsumed in that of feedback, is an old one in cybernetics. The new viewpoint pays particular attention to the circular process whose variables are the internal variables of the organism. A sharp distinction is drawn between this circular process, which constitutes the essential organization of the organism, and other circular processes associated with it. These other processes, which operate through input and output interfaces with the organism’s environment, are held to be part of the structure associated with the essential organization.
This partitioning of the organism into organization and structure has some surprising consequences. This is because the word organization must be interpreted in an extremely restricted sense and structure in a correspondingly expanded one. One consequence is that any action taken by the organism to protect itself from an impending danger must be initiated and executed purely by the structure. The capacity to make anticipatory responses has been claimed by Sommerhoff (1950), as directive correlation, to be the essential feature that distinguishes living from nonliving systems. It is surprising to find this feature relegated to mere structure by the treatment in terms of autopoiesis.
Nevertheless the partitioning-off of the organization, in this very restricted sense, can be defended on either of two grounds. One is the argument that such organization corresponds to an intuitive idea of what essentially constitutes a living organism. This is in conflict with Sommerhoff, who has demonstrated that living organisms as we know them invariably employ directive correlation. On the other hand, the recognition of a system as living is not dependent on the operation of any one specific form of directive correlation, so might persist in the absence of any.
The other justification for the separation of a restricted organization within a living system is that it is necessary for precise formulation of the autopoiesis hypothesis. Clearly, those parts of the organism which learn, or otherwise adapt to the environment, cannot be said to be maintained constant by a circular process. It is necessary either to let “maintained” be interpreted in a weaker sense than “maintained constant” or to whittle down the part of the system to which the circular process is held to apply.
Whether the nature of living systems is such that it is legitimate to partition them into organization and structure as postulated is a matter for debate. Even if it is allowed, the problems are not completely resolved; the organization cannot be the result of such severe whittling down that it remains absolutely constant. If it were, it could not implement the circular process which maintains it. This is the crux of the problem of self-reference which has stimulated Varela’s elegant extension of Brown’s Laws of Form (1975) and his later work (1978) related to Scott’s treatment of fixed points of algebraic expressions.
The impact on SOS studies
In SOS studies systems are discussed in relation to the pursuit of goals. In the context of natural systems the goal-seeking behavior has to be a descriptive expedient only, or in other words a construct formed by an observer. It is, however, a form of description which fits many aspects of observed behavior so closely that it is not to be abandoned lightly. To be rigorous, a living system should be described, not as goal-seeking but as behaving as though it were pursuing some goal.
The autopoietic hypothesis could be rephrased in SOS terms by saying that living systems behave as though pursuing the goal of their own survival. Reformulating it in this way does not bypass any of the difficulties referred to in connection with the Maturana-Varela treatment; these still arise in the attempt to define “survival.” To behave as though pursuing the goal of its own survival a living system must also behave as though it distinguishes its own interior from its environment.
It could be argued, in terms of Varela’s paper (1978), that the above reformulation is a desperate attempt to depart minimally from the traditional Fregean viewpoint. However, the reformulation does have value in emphasizing the fact that the organization of a living system, even in the most restricted sense possible, has to be seen in conjunction with its environment, or ecological niche. The circular process that maintains the organization is able to nullify some types of perturbation due to the environment, but it must depend on some “rules of the game” which are invariant. The Varela-Maturana-Uribe model (1974), for example, maintains an enclosing “membrane” despite random effects that break it up. However, the membrane would not be maintained if random effects rendered the catalyst ineffective. An undue emphasis on a Brownian approach seems to obscure the essentially empirical nature of living systems.
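The dependence on invariant "rules of the game" can be seen in a drastically simplified, non-spatial caricature of the membrane-and-catalyst situation (the rate constants and the aggregation into a single link count are invented for illustration and are not taken from the 1974 paper): links are produced at a rate set by the catalyst and disintegrate at random, so the population is maintained while the catalyst works and decays away when it is rendered ineffective.

```python
import random

def simulate(steps, production, decay=0.05, links=0.0, seed=1):
    """Toy membrane: `production` is the number of links the catalyst
    contributes per step; `decay` is the per-link probability of random
    disintegration each step. Returns the final link count."""
    rng = random.Random(seed)
    for _ in range(steps):
        links += production                       # the catalyst's contribution
        links -= sum(rng.random() < decay for _ in range(int(links)))
    return links

# With the catalyst active the link population is maintained despite the
# random breakage (it settles near production/decay = 40); with the
# catalyst rendered ineffective the same membrane simply decays away.
with_catalyst = simulate(500, production=2.0)
without_catalyst = simulate(500, production=0.0, links=40.0)
print(round(with_catalyst), round(without_catalyst))
```

The random disintegration is the "Brownian" element; the production rule it cannot touch is the invariant rule of the game on which maintenance depends.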
Modeling self-preservation
Many well-known artifacts embody Sommerhoff’s directive correlation, which can also be termed feed-forward or anticipatory control. It can in fact be argued that servo-mechanisms embody it; if the control action has a derivative component (or even a proportional component whose magnitude depends on previous knowledge of the controlled system) the control action is to some extent anticipatory and therefore an implementation of directive correlation.
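The point about derivative action can be made concrete in a minimal sketch (the gains and the numbers are illustrative): a proportional-plus-derivative controller responds to the rate of change of the error, and so acts, in effect, on where the error is heading rather than only on where it is.

```python
def pd_step(error, prev_error, dt, kp=2.0, kd=0.5):
    """One step of proportional-plus-derivative control; the derivative
    term responds to the error's trend, which is what makes the action
    anticipatory rather than purely corrective."""
    return kp * error + kd * (error - prev_error) / dt

# Same current error, opposite trends: the controller pushes harder
# against an error that is growing than against one already shrinking.
rising = pd_step(error=1.0, prev_error=0.5, dt=0.1)
falling = pd_step(error=1.0, prev_error=1.5, dt=0.1)
print(rising, falling)
```

A purely proportional controller would return the same value in both cases; the difference between the two outputs is precisely the anticipatory component.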
Autopoiesis can also be realized in artifacts, as shown by the Varela-Maturana-Uribe model. Hence, both of these characteristics intended to distinguish living systems from nonliving ones are in fact making a distinction which allows certain nonliving systems to fall into the same category as do living systems. These nonliving systems are artifacts that have inherited some lifelike characteristics from their designers; they are manifestations of life though not themselves alive.
The autopoietic viewpoint does, however, draw attention to one important difference between an artificial system pursuing a goal (and perhaps improving its performance by self-organization) and a subsystem of a living organism behaving in essentially the same way. The subsystem of the living organism is linked to its parent organism in two ways; in the first place it is operating to achieve its particular goal and secondly it is recognized by the organism as an integral part of itself to be maintained by autopoiesis. As stated earlier, it is a corollary of the autopoiesis hypothesis that a living system must behave as though it can distinguish its own interior from its environment. It was presumably this corollary that prompted Varela to look to Brown’s calculus, with its basis in a simple distinction between inclusion and exclusion, for a formal treatment of autopoiesis.
The dual nature of a living subsystem, which is simultaneously a goal-seeking system and part of an autopoietic system, is something which most artificial systems fail to model. One system which does combine the two roles in an interesting way is the proposal of Svoboda (1960) for a model of the instinct of self-preservation. The model consisted of a small computer mounted on a self-propelled trolley, the steering and drive of the trolley being determined by commands from the computer. The computer was so programmed that it would seek a location in which some physical property of the environment was minimized. The method used to seek the minimum involved the mapping of values of the physical property on an internal representation in the computer of the area within which it was free to move. Search strategies of considerable sophistication could be embodied in a scheme of this kind.
What made Svoboda’s system specially interesting in the present context, however, was that there were no special transducers to measure the physical property which was to be minimized. Instead, the computer was designed to exploit redundancy to achieve automatic error-correction at all stages of the computation. (Schemes for automatic error-correction of messages are treated by Peterson (1961) and were discussed by von Neumann (1956). The work of Winograd and Cowan (1963) is particularly interesting because the redundancy is in the computational structure rather than the messages.) It was arranged that the error-correcting system incorporated in Svoboda’s computer provided an indication of its own level of error-correcting activity, and it was this level which constituted the physical property the system sought to minimize.
Svoboda’s system allows a computer to seek a location in which it is minimally affected by unknown external effects (of which radioactivity could be one) tending to cause computational malfunctions but not actually damaging the computer. The same principle could be extended to allow the minimization of unknown effects actually damaging the computer. This could be done either by letting the property to be minimized be the time-derivative of the error-correction rate or by letting the system embody a facility for self-repair rather than one for error-correction. For the latter alternative the property to be minimized would be the level of repair activity. Schemes of this kind seem to be worthy of study because their operation corresponds in an interesting way to the dual nature of living subsystems as already discussed. Most proposals for SOS do not have this feature.
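Svoboda's scheme can be caricatured in a short simulation (the disturbance field, the triple-redundancy arrangement, and the greedy search are all invented for illustration, and the greedy neighbour-sampling is a simplification of Svoboda's internal mapping of the area): each location has an unknown disturbance level, the "computer" runs its computations in triplicate and counts how often a copy had to be corrected, and the trolley moves toward wherever that count is least.

```python
import random

def corrections_at(x, y, rng, trials=200):
    """Run triplicated computations at (x, y); the unknown disturbance
    there corrupts each copy with probability p, and any trial in which
    some copy was corrupted counts as one error-correction event."""
    p = 0.02 * (abs(x - 1) + abs(y - 1))    # disturbance is lowest at (1, 1)
    return sum(any(rng.random() < p for _ in range(3)) for _ in range(trials))

def seek_quiet_spot(start, steps=25, seed=3):
    """Greedy local search: repeatedly move to whichever adjacent location
    (or stay put, if staying is best) shows the least measured level of
    error-correcting activity."""
    rng = random.Random(seed)
    x, y = start
    for _ in range(steps):
        candidates = [(x, y), (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        x, y = min(candidates, key=lambda c: corrections_at(c[0], c[1], rng))
    return x, y

spot = seek_quiet_spot(start=(5, 5))
print(spot)   # the trolley drifts toward the quiet spot near (1, 1)
```

The essential feature survives the simplification: no transducer measures the disturbance directly; the system minimizes an indication of its own corrective activity, and in doing so behaves as though preserving itself.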
The body schema
Most human beings are conscious of a strong desire to avoid damage to their own bodies, and are in no doubt about the interface between the system to be preserved and the environment. It is often said that a person “looks after his own skin.”
Nathan (1969) makes some interesting observations on the nature of the body schema, which he says is “the basis of our feeling that our bodies are us, that they are placed in such or such a way in the environment, and that the parts of our bodies make up a whole.” The existence of a body schema apparently depends on the functioning of certain parts of the nervous system. Damage to the cortex of the parietal lobes of the brain can lead to part of the body being excluded from the schema. It is not necessary that the peripheral innervation of a part of the body be intact in order that the part be included in the schema; the schema can include “phantom limbs” corresponding to members that have been amputated.
The discussions of Svoboda’s model and of the neurologists’ concept of a body schema give different impressions of mechanisms that could subserve autopoiesis. The body schema appears to depend on special nervous mechanisms, whereas Svoboda’s model requires very little circuitry dedicated to autopoiesis. There is, of course, no reason to suppose that autopoiesis must invariably depend on any one type of mechanism.
While it is true that the autopoiesis hypothesis has a strong bearing on SOS studies, it does not seem to be the one essential ingredient needed for progress. There are many features of living systems which are difficult to describe otherwise than in terms of the pursuit of goals. They behave as though pursuing what might be termed meta-goals, or goals whose achievement facilitates the achievement of lower-order goals. It can be argued, for instance, that living systems seek succinct or economical representations of their internal information, and that successful structural features are often replicated.
The evolution of higher forms of intelligent behavior requires some selective mechanism favoring succinct representation. In a somewhat different form, this idea has been emphasized by Pask (1962), who discusses the emergence of meta-languages within living systems. The terms of a meta-language represent concepts not having a succinct representation in the lower-order language. In a recent discussion by Andrew (1978) it is argued that succinct representation can be considered at two levels, namely that corresponding to information stored in the learning automaton and that corresponding similarly to the operative automaton. It is, in fact, not necessary to restrict the number of levels to two. Glushkov, in introducing these two types of automaton, considers the possibility of hierarchies of them. The operative automaton would be at the bottom of the hierarchy, which could have numerous levels.
Much work on learning systems for pattern recognition has been carried out without regard to succinct representation of the information acquired by learning. This work has been highly successful in achieving particular objectives (see, for example, the masterly review by Kohonen, 1977) but has an inherent limitation because there is no tendency to form succinct representations.
In all living structures, particular features are replicated many times over. In the context of nervous systems this is clearly illustrated by the studies of Lettvin et al. (1959) and of Hubel and Wiesel (1968) on visual systems of different animals. Replication is needed to match a succinct representation of a data-transformation process to the nonsuccinct input from the real world.
Future SOS studies must certainly be devised with regard to the lessons of autopoiesis, but also (since we cannot hope to run our simulations long enough to let them be evolved) to the embodiment of at least two meta-goals, namely those of seeking succinct representations and of replicating existing successful structural features.
References
Andrew, A. M. 1973. Significance feedback and redundancy reduction in self-organizing networks. In Advances in cybernetics and systems research, vol. I, ed. F. Pichler and R. Trappl, p. 244. London: Transcripta.
Andrew, A. M. 1978. Succinct representation in neural nets and general systems. In Applied general system research: Recent developments and trends, ed. G. J. Klir, p. 553. New York: Plenum.
Ashby, W. R. 1960. Design for a brain, (2nd ed.), p. 231. London: Chapman and Hall.
Ashby, W. R. 1962. Principles of the self-organizing system. In von Foerster and Zopf, p. 255.
Garvey, J. E. 1963. Self-organizing systems. Washington: Office of Naval Research.
Glushkov, V. M. 1966. Introduction to cybernetics. New York: Academic Press.
Hubel, D. H., and Wiesel, T. N. 1968. Receptive fields and functional architecture of monkey striate cortex. J. Physiol. 195:215.
Kohonen, T. 1977. Associative memory. Berlin: Springer.
Lettvin, J. Y., Maturana, H. R., McCulloch, W. S., and Pitts, W. H. 1959. What the frog’s eye tells the frog’s brain. Proc. I.R.E. 47:1940.
Maturana, H. R. 1975. The organization of the living. Int. J. Man-Mach. Stud. 7:313.
Nathan, P. 1969. The nervous system. New York: Penguin.
Pask, G. 1962. The logical type of illogical evolution. In Information processing 1962, p. 482. Amsterdam: North Holland.
Peterson, W. W. 1961. Error-correcting codes. New York: Wiley.
Sommerhoff, G. 1950. Analytical biology. New York: Oxford University Press.
Svoboda, A. 1960. Un modèle d’instinct de conservation. In Proceedings, 2nd International Congress on Cybernetics, p. 866. Namur: International Assoc. of Cybernetics.
Varela, F. J., Maturana, H. R., and Uribe, R. 1974. Autopoiesis: the organization of living systems. Bio. Syst. 5:187.
Varela, F. J. 1975. A calculus for self-reference. Int. J. Gen. Syst. 2:5.
Varela, F. J., and Goguen, J. A. 1978. The arithmetic of closure. In Progress in Cybernetics and Systems Research, (vol. 3), ed. R. Trappl, G. J. Klir, and L. Ricciardi, pp. 48-64. Washington: Hemisphere.
von Foerster, H., and G. W. Zopf, eds. 1962. Principles of self-organization. Oxford: Pergamon.
von Neumann, J. 1956. Probabilistic logics and the synthesis of reliable organisms from unreliable components. In Automata studies, ed. C. E. Shannon and J. McCarthy. Princeton: Princeton Univ. Press.
Winograd, S., and Cowan, J. 1963. Reliable computation in the presence of noise. Cambridge: M.I.T. Press.
Yovits, M. C., and Cameron, S. 1960. Self-organizing systems. New York: Pergamon.
Yovits, M. C., Jacobi, G. T., and Goldstein, G. D. 1962. Self-organizing systems 1962. Washington: Spartan.