
Autonomy: What is it?

Boden M. (2008) Autonomy: What is it? BioSystems 91: 305–308. Available at http://cepa.info/3899
Autonomy is a buzz-word in current A-Life, and in some other areas of cognitive science too. And it is generally regarded as a Good Thing. However, neither the spreading use nor the growing approval has provided clarity.
This Special Issue of BioSystems is devoted to clarifying the concept, and to showing how it is being used in various examples of empirical research. Given the obscurity that still attends the concept, however, we should not expect to find that all the ‘clarifications’ are equivalent – or even mutually consistent. Similarly, we should not expect to find that the notion is applied identically in all the research that is reported here. So this preliminary sketch of the conceptual landscape may be helpful.
Very broadly speaking, autonomy is self-determination: the ability to do what one does independently, without being forced so to do by some outside power. The “doing” may be mental, behavioural, neurological, metabolic, or autopoietic: autonomy can be ascribed to a system on a number of different levels.
This does not rule out the possibility of the doing’s being affected, even triggered, by environmental events. To the contrary: work in ‘autonomous’ (i.e. situated) robotics, and in most computational neuroethology (CNE), focusses specifically on a creature’s reactive responses to environmental cues. Even research that is based on the theory of autopoiesis, which stresses the system’s ability to form (and maintain) itself as a functioning unity, allows that a cell, or an organism, is closely coupled with its environmental surround – so much so, that from one point of view they can be regarded as a single system (Maturana and Varela, 1980). So autonomy is not isolation. But it does involve a significant degree of independence from outside influences.
One major “outside influence” which A-Life enthusiasts have in mind to deny is the alien hand of the programmer – and, for autopoietic theorists (though not for situated roboticists), even the engineer/designer. (The Designer in the sky is eschewed too, of course – in favour of biological evolution by natural selection.) The explanatory focus is on the specifics of the system’s inherent structure and ‘intrinsic’ properties, not on any instructions that happen to be imposed on it by an outside agency. Only thus can the system’s autonomy be respected, or even posited.
Moreover, the traditions of situated robotics and autopoiesis both deny the role of internal/cerebral representations – not only in computer models, but in living organisms too. Some pioneering, and influential, work in CNE posits such representations (e.g. Arbib, 1981, 1987, 2003; Boden, 2006: 14.vii). But most, perhaps because it deals with insect behaviour, does not.
As a result, most workers in A-Life and CNE (including the authors of the papers in this Special Issue) reject GOFAI-based models of mind and behaviour – wherein programs are imposed on general-purpose machines, and internal representations are stressed. All too often, however, this rejection is expressed as scorn, more ideology than science. The editor of a professional newsletter in cognitive science has bemoaned the “frankly insulting” names commonly used by researchers for approaches different from their own, complaining that “The lack of tolerance [between different research programmes in AI/A-Life] is rarely positive, often absurd, and sometimes fanatical” (Whitby, 2002). Readers will find that no such insults have crept into the papers presented here. That is all to the good. For, quite apart from professional etiquette, one important example of autonomy is – in my view – best understood in largely GOFAI terms (see below).
Autonomy is a problematic concept partly because it can seem to be close to magic, or anyway to paradox. Self-determination is all very well – but how did the “self” (the system) get there in the first place? If the answer we are offered is that it spontaneously generated itself, this risks being seen as empty mystification. A key contribution of some current research on autonomy is that it disarms this paradox. But paradox is not the only source of difficulty here. The concept of autonomy is problematic also because there are various types, and varying degrees, of independence.
Three aspects of a system’s behaviour – or rather, of its control – are crucial here (Boden, 1996). (The “system” in question may be a whole organism or a subsystem, such as a neural network or metabolic cycle, or a computer model of either of these.) However, the three aspects do not necessarily run alongside each other, nor keep pace with each other even when they do.
The first is the extent to which response to the environment is direct (determined only by the present state of the external world) or indirect (mediated by inner mechanisms partly dependent on the creature’s previous history). The second is the extent to which the controlling mechanisms were self-generated rather than externally imposed. And the third is the extent to which any inner directing mechanisms can be reflected upon, and/or selectively modified in the light of general interests or the particularities of the current problem in the environmental context. In general, an individual’s autonomy is the greater, the more its behaviour is directed by self-generated (and idiosyncratic) inner mechanisms, nicely responsive to the specific problem-situation yet reflexively modifiable by wider concerns.
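To make these three gradations concrete, consider the toy sketch below (in Python). It is purely illustrative and not drawn from any model cited here: the agent classes, the one-dimensional stream of stimuli, and the 'interest' parameter are all invented for the purpose. The first agent's response is fixed by the current stimulus alone; the second's is mediated by an inner variable shaped by its own history; the third monitors that inner variable and selectively resets it in the light of a standing concern.

```python
import random

class ReactiveAgent:
    """Aspect 1, direct control: the response is fixed by the current stimulus."""
    def act(self, stimulus):
        return 1 if stimulus > 0 else -1

class HistoryAgent:
    """Aspect 1, indirect control: an inner variable, shaped by the agent's own
    past, mediates between stimulus and response (aspect 2: self-generated)."""
    def __init__(self):
        self.expectation = 0.0
    def act(self, stimulus):
        # The response depends on how the stimulus compares with a learned
        # expectation, which is itself a product of the agent's history.
        response = 1 if stimulus > self.expectation else -1
        self.expectation = 0.9 * self.expectation + 0.1 * stimulus
        return response

class ReflectiveAgent(HistoryAgent):
    """Aspect 3: the agent monitors its own inner mechanism and selectively
    modifies it in the light of a wider concern (a crude standing 'interest')."""
    def __init__(self, interest=0.5):
        super().__init__()
        self.interest = interest
    def act(self, stimulus):
        # Self-monitoring: if the learned expectation has drifted too far from
        # the agent's standing interest, reset (modify) the inner mechanism.
        if abs(self.expectation - self.interest) > 1.0:
            self.expectation = self.interest
        return super().act(stimulus)

if __name__ == "__main__":
    world = [random.uniform(-2, 2) for _ in range(20)]
    for agent in (ReactiveAgent(), HistoryAgent(), ReflectiveAgent()):
        print(type(agent).__name__, [agent.act(s) for s in world])
```

Even so crude a sketch shows why the three aspects need not keep pace with one another: the purely reactive policy could equally have been self-generated (say, evolved), and the reflective one externally imposed.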
Clearly, then, autonomy is not an all-or-nothing property. And – even more confusing – the senses in which autopoietic systems or self-organizing networks are autonomous differ from each other, and from the sense in which situated robots are autonomous.
The confusion is compounded because, as the brief remarks above imply, autonomy is closely related to two other notoriously slippery notions: self-organization and freedom. No member of this problematic conceptual trio can be properly understood without also considering the other two.
Let us turn first to self-organization (Boden, 2006: 15.i.b). This is the central feature of life. Not only is it commonly listed as a defining characteristic of life, but all the other properties that are so listed are special cases of it. These vital properties are emergence, autonomy (sic), growth, development, reproduction, evolution, adaptation, responsiveness, and metabolism.
Self-organization may be defined as the spontaneous emergence (and maintenance) of order, out of an origin that is ordered to a lesser degree. It concerns not mere superficial change but fundamental structural development, which can occur on successive levels of organization. And it is spontaneous, or autonomous, in that it results from the intrinsic character of the system (often in interaction with the environment) rather than being imposed by some external force or designer.
It is commonly, though not always, assumed that the generation and the functioning of a self-organized system are holistic. In other words, they cannot be explained as being due to interactions between independently definable sub-parts. Whereas a classical AI program, or a car engine, can be analyzed into separate pieces (procedures, mechanical parts), a self-organized system cannot. Each ‘part’ is to some degree dependent on other ‘parts’ for its very existence, and for its identity as a ‘part’ of the relevant type. Theoretical approaches based on autopoiesis are especially likely to stress this aspect of self-organized systems.
In the early days of A-Life, long before the field had received its name, the concept of self-organization was viewed with mistrust by some pioneers even while they were studying the phenomenon in illuminating ways. William Ross Ashby is a case in point. His Homeostat was a major advance in the theory, and modelling, of self-organizing systems (Ashby, 1947, 1948; Boden, 2006: 4.viii.c-d, 15.xi.a). Nevertheless, he suggested that the term “self-organization” should be avoided. To be sure, he sometimes used it, and “self-coordinating” too (Ashby, 1960: 10). But he also complained that such phrases were “fundamentally confused and inconsistent” and “probably better allowed to die out” (Ashby, 1962: 269). He saw them as potentially mystifying because they imply that there is an organizer when in fact there is not. Some modern researchers agree, carefully avoiding ‘self-organization’ and referring instead to organization (or metaorganization): the ability to act as a unified whole (e.g. Pellionisz and Llinas, 1985: Section 4.1).
Yet the concept of self-organization has not died out. It is employed today by many workers in A-Life and neuroscience – not to imply some mysterious ‘inner organizer,’ but to focus on the spontaneous origin and development of organization at least as much as on its maintenance. As used in the papers collected below, the term normally carries that bias. The mystification, if not the marvelling, has lessened largely because computer models of various types of self-organization – from flocking (Reynolds, 1987), through neural networks (von der Malsburg, 1973; Linsker, 1988, 1990), to the formation of cell-membranes (Zeleny, 1977; Zeleny et al., 1989) – now exist. Clearly, none of these works by magic.
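Reynolds's flocking model offers a convenient illustration of order arising without any central organizer. The sketch below is a bare-bones Python rendition of his three steering tendencies (cohesion, alignment, separation); the particular weights, neighbourhood radius, and time-step are arbitrary choices made here for brevity, not Reynolds's published parameters.

```python
import random

N, RADIUS, DT = 30, 0.2, 0.1
W_COHERE, W_ALIGN, W_SEPARATE = 0.01, 0.05, 0.1   # illustrative weights only

# Each 'boid' is a position and a velocity, encoded as complex numbers (x + iy).
pos = [complex(random.random(), random.random()) for _ in range(N)]
vel = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(N)]

def step():
    """One update: every boid steers using only its local neighbours."""
    global pos, vel
    new_vel = []
    for i in range(N):
        near = [j for j in range(N) if j != i and abs(pos[j] - pos[i]) < RADIUS]
        v = vel[i]
        if near:
            centre = sum(pos[j] for j in near) / len(near)
            heading = sum(vel[j] for j in near) / len(near)
            v += W_COHERE * (centre - pos[i])        # cohesion: approach the group
            v += W_ALIGN * (heading - vel[i])        # alignment: match headings
            for j in near:                           # separation: avoid crowding
                if abs(pos[j] - pos[i]) < RADIUS / 2:
                    v += W_SEPARATE * (pos[i] - pos[j])
        new_vel.append(v)
    vel = new_vel
    pos = [p + DT * v for p, v in zip(pos, vel)]

for _ in range(100):
    step()
print("mean speed after 100 steps:", sum(abs(v) for v in vel) / N)
```

No boid consults the flock as a whole, and nothing in the program represents the flock: whatever flock-level order appears emerges from the purely local interactions.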
As for human freedom, commonly regarded as the epitome of autonomy, this too – like the vital properties listed above – is a special case of self-organization. A-Lifers, who concern themselves with organisms well below Homo sapiens on the phylogenetic scale, rarely mention it explicitly. Occasionally, they admit that their work does not cover it (e.g. Bird et al., 2006: 2.1). But sometimes their words seem to imply that they confuse it with autonomy as such. That is a mistake. The examples of autonomy considered in A-Life show varying degrees of independence from direct outside control. But none has the cognitive/motivational complexity that is required for freedom (remember the third aspect of autonomy listed above).
That is why the often-scorned GOFAI has got closer to an understanding of freedom than A-Life has done. Freedom is best understood in terms of a particular form of complex computational architecture (Dennett, 1984; Boden, 2006: 7.i.g-i). It requires psychological resources wherein reasoning, means-end planning, motivation, various sorts of prioritizing (including individual preferences and moral principles), analogy-recognition, the anticipation of unwanted side-effects, and deliberate self-monitoring can all combine to generate decisions/actions selected from a rich space of possibilities. (In the paradigm case, the choice is largely conscious. But an action may be termed “free” because, given the computational resources possessed by the person in question, it could have been consciously considered by them, and the decision could have differed accordingly.)
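No one, of course, has implemented freedom, and the following sketch does not pretend to. It merely indicates, schematically and with invented names and rules throughout, the kind of architecture meant: candidate actions are screened by anticipating their side-effects, filtered by standing principles, ranked by preferences, and the decision is recorded so that it remains open to deliberate self-monitoring.

```python
def choose(candidates, predict, principles, preferences, log):
    """Pick an action from a space of possibilities, subject to principles and
    preferences; record the decision for later self-monitoring."""
    admissible = []
    for action in candidates:
        outcome = predict(action)                    # anticipate side-effects
        if all(rule(action, outcome) for rule in principles):  # moral screening
            admissible.append((preferences(outcome), action, outcome))
    if not admissible:
        return None
    score, action, outcome = max(admissible)         # prioritize among survivors
    log.append((action, outcome, score))             # deliberate self-monitoring
    return action

# Toy instantiation (all content invented): three ways of getting money.
predict = lambda a: {"work": "paid, tired",
                     "steal": "paid, victim harmed",
                     "borrow": "paid, in debt"}[a]
principles = [lambda a, o: "harmed" not in o]        # forbid harmful outcomes
preferences = lambda o: {"paid, tired": 2, "paid, in debt": 1}.get(o, 0)
log = []
print(choose(["work", "steal", "borrow"], predict, principles, preferences, log))
print("decision record:", log)
```

The point of the sketch is architectural: the choice is not forced by the stimulus but selected, under several interacting constraints, from a space of possibilities that the system itself has evaluated.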
Compromises of freedom occur (for instance) in the clinical apraxias, in hypnosis, and when someone obeys hallucinated instructions from ‘saints’ or ‘aliens’. All these phenomena, wherein a person’s autonomy is significantly lessened, have been helpfully theorized and/or modelled in partly GOFAI terms (Boden, 2006: 7.i.h-i; 12.ix.b).
In hypnosis and hallucination, for example, a certain type of high-level self-monitoring is inhibited, leaving the person at the mercy of directives imported from outside or internally generated in an unconsidered way (Dienes and Perner, 2007). As for apraxia, a brain-damaged patient may be unable to plan a simple task, or to perform the relevant sub-tasks in the correct order; or they may be constantly diverted onto a different task while trying to carry out the first one. These debilitating syndromes involve the inappropriate activation and/or execution of hierarchical action-schemas (Norman and Shallice, 1980/86; Cooper et al., 1995, 1996). Such schemas may malfunction in various ways, and/or they may be triggered irrelevantly by pattern-recognition mechanisms that divert control of the action onto unwanted paths. In short, apraxias are being modelled by hybrid systems, implementing both GOFAI and connectionist computations.
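The flavour of such models can be suggested by a cartoon of contention scheduling, in the spirit of (though vastly simpler than) the Norman-Shallice account; it is not the Cooper-Shallice implementation, and all schema names and numbers below are invented. Each action-schema accumulates activation from environmental triggers and from a supervisory bias toward the intended goal; removing that bias lets environmentally triggered schemas capture behaviour, as in the diversion errors just described.

```python
import random

SCHEMAS = ["make-tea", "make-coffee", "wash-cup"]

def run(goal, environment, supervision, steps=5):
    """On each step the most active schema fires (contention scheduling)."""
    random.seed(1)                                   # reproducible toy run
    actions = []
    for _ in range(steps):
        activation = {}
        for s in SCHEMAS:
            a = random.uniform(0, 0.2)               # background noise
            if s in environment:
                a += 0.5                             # environmental triggering
            if s == goal:
                a += supervision                     # supervisory bias to the goal
            activation[s] = a
        actions.append(max(activation, key=activation.get))
    return actions

cues = ["make-coffee", "wash-cup"]                   # salient objects in view
print("intact:  ", run("make-tea", cues, supervision=1.0))
print("impaired:", run("make-tea", cues, supervision=0.0))
```

With the supervisory bias intact, the intended schema wins every cycle; with it removed, behaviour is captured by whichever schemas the environment happens to trigger.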
These theories/models of human freedom, and of its impairments, are relatively broad-brush. They are joined by a wide variety of A-Life models that seek to show even more precisely how autonomy, of various kinds, can occur.
CNE has provided some highly detailed explanations of certain aspects of insect behaviour, for example. The computer models concerned include ‘virtual’ simulations (e.g. Cliff, 1991a, b) and robots (e.g. Webb, 1996; Webb and Scutt, 2000; Beer, 1990). Research scattered across A-Life, connectionism, and neuroscience has offered many intriguing suggestions about spontaneous self-organization from a random base, and has provided demonstrations of this phenomenon too (see Boden, 2006: 12.ii, 12.v, 14.vi, 14.viii.c, and 15.vii-viii). Some of these results are highly counterintuitive (e.g. von der Malsburg, 1973; Linsker, 1988, 1990). Further examples are described/cited in the new papers that follow.
One thing is for sure: autonomy is marked on our intellectual map. And the ambiguity and unclarity that attend the concept have not swamped the excitement. There is plenty of that, here.
References
Arbib M. A. (1981) Visuomotor coordination: From neural nets to schema theory. Cogn. Brain Theory 4: 23–39.
Arbib M. A. (1987) Levels of modelling of visually guided behavior. Behav. Brain Sci. 10: 407–465.
Arbib M. A. (2003) Rana computatrix to human language: Towards a computational neuroethology of language. Philos. Trans. R. Soc. Lond. A 361: 2345–2379 (Special issue on ‘Biologically Inspired Robotics’).
Ashby W. R. (1947) The nervous system as a physical machine: With special reference to the origin of adaptive behaviour. Mind 56: 44–59.
Ashby W. R. (1948) Design for a brain. Electron. Eng. 20: 379–383.
Ashby W. R. (1960) Design for a brain: The origin of adaptive behaviour, second ed. revised. Chapman & Hall, London.
Ashby W. R. (1962) Principles of the self-organizing system. In: von Foerster H., Zopf G. W. (eds.) Principles of Self-Organization. Pergamon Press, New York: 255–278.
Beer R. D. (1990) Intelligence as adaptive behavior: An experiment in computational neuroethology. Academic Press, Boston.
Bird J., Stokes D., Husbands P., Brown P., Bigge B. (2006) Towards Autonomous Artworks, unpublished working paper: COGS/CCNR, University of Sussex. (Available from jonba@sussex.ac.uk.)
Boden M. A. (1996) Autonomy and artificiality. In: Boden M. A. (ed.) The Philosophy of Artificial Life. Oxford University Press, Oxford: 95–108.
Boden M. A. (2006) Mind as machine: A history of cognitive science, vol. 2. Oxford University Press, Oxford.
Cliff D. (1991a) The computational hoverfly: A study in computational neuroethology. In: Meyer J.-A., Wilson S. W. (eds.) From animals to animats: Proceedings of the First International Conference on Simulation of Adaptive Behavior. MIT Press, Cambridge, Mass.: 87–96.
Cliff D. (1991b) Computational neuroethology: A provisional manifesto. In: Meyer J.-A., Wilson S. W. (eds.) From animals to animats: Proceedings of the First International Conference on Simulation of Adaptive Behavior. MIT Press, Cambridge, Mass.: 29–39.
Cooper R., Fox J., Farringdon J., Shallice T. (1996) Towards a systematic methodology for cognitive modelling. Artif. Intell. 85: 3–44.
Cooper R., Shallice T., Farringdon J. (1995) Symbolic and continuous processes in the automatic selection of actions. In: Hallam J. (ed.) Hybrid Problems, Hybrid Solutions. IOS Press, Oxford: 27–37.
Dennett D. C. (1984) Elbow Room: The Varieties of Free Will Worth Wanting. MIT Press, Cambridge, Mass.
Dienes Z., Perner J. (2007) The cold control theory of hypnosis. In: Jamieson G. (ed.) Hypnosis and Conscious States: The Cognitive Neuroscience Perspective. Oxford University Press, Oxford: 293–314.
Linsker R. (1988) Self-organization in a perceptual network. Computer 21: 105–117.
Linsker R. (1990) Perceptual neural organization: Some approaches based on network models and information theory. Annu. Rev. Neurosci. 13: 257–281.
Maturana H. R., Varela F. J. (1980) Autopoiesis and cognition: The realization of the living. Reidel, Boston.
Norman D. A., Shallice T. (1980/86) Attention to action: Willed and automatic control of behavior. CHIP Report 99. University of California, San Diego (1980). (Officially published in: Davidson R., Schwartz G., Shapiro D. (eds.) (1986) Consciousness and self-regulation: Advances in research and theory, vol. 4. Plenum, New York: 1–18.)
Pellionisz A., Llinas R. (1985) Tensor network theory of the metaorganization of functional geometries in the central nervous system. In: Berthoz A., Melvill Jones G. (eds.) Adaptive Mechanisms in Gaze Control. Elsevier, Amsterdam: 223–232.
Reynolds C. W. (1987) Flocks, herds, and schools: A distributed behavioral model. Comput. Graphics 21: 25–34.
von der Malsburg C. (1973) Self-organization of orientation sensitive cells in the striate cortex. Kybernetik 14: 85–100.
Webb B. (1996) A cricket robot. Sci. Am. 275 (6): 94–99.
Webb B., Scutt T. (2000) A simple latency-dependent spiking-neuron model of cricket phonotaxis. Biol. Cybern. 82: 247–269.
Whitby B. (2002) Let’s stop throwing stones. AISB Q. 109: 1.
Zeleny M. (1977) Self-organization of living systems: A formal model of autopoiesis. Int. J. Gen. Syst. 4: 13–28. http://cepa.info/1203
Zeleny M., Klir G. J., Hufford K. D. (1989) Precipitation membranes, osmotic growths, and synthetic biology. In: Langton C. G. (ed.) Artificial life: The proceedings of an interdisciplinary workshop on the synthesis and simulation of living systems, held September 1987. Addison-Wesley, Redwood City, CA: 125–139.