For millennia people have wondered what makes the living different from the non-living. Beginning in the mid-1980s, artificial life has studied living systems using a synthetic approach: build life in order to understand it better, be it by means of software, hardware, or wetware. This review provides a summary of the advances that led to the development of artificial life, its current research topics, and open problems and opportunities. We classify artificial life research into 14 themes: origins of life, autonomy, self-organization, adaptation (including evolution, development, and learning), ecology, artificial societies, behavior, computational biology, artificial chemistries, information, living technology, art, and philosophy. Being interdisciplinary, artificial life seems to be losing its boundaries and merging with other fields. Relevance: Artificial life has contributed to philosophy of biology and of cognitive science, thus making it an important field related to constructivism.
Predictive processing (PP) approaches to the mind are increasingly popular in the cognitive sciences. This surge of interest is accompanied by a proliferation of philosophical arguments, which seek to either extend or oppose various aspects of the emerging framework. In particular, the question of how to position predictive processing with respect to enactive and embodied cognition has become a topic of intense debate. While these arguments are certainly of considerable scientific and philosophical merit, they risk underestimating the variety of approaches gathered under the predictive label. Here, we first present a basic review of neuroscientific, cognitive, and philosophical approaches to PP, to illustrate how these range from solidly cognitivist applications – with a firm commitment to modular, internalistic mental representation – to more moderate views emphasizing the importance of ‘body-representations’, and finally to those which fit comfortably with radically enactive, embodied, and dynamic theories of mind. Any nascent predictive processing theory (e.g., of attention or consciousness) must take into account this continuum of views, and associated theoretical commitments. As a final point, we illustrate how the Free Energy Principle (FEP) attempts to dissolve tension between internalist and externalist accounts of cognition, by providing a formal synthetic account of how internal ‘representations’ arise from autopoietic self-organization. The FEP thus furnishes empirically productive process theories (e.g., predictive processing) by which to guide discovery through the formal modelling of the embodied mind.
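The formal core shared by FEP-inspired predictive processing accounts can be illustrated with a deliberately minimal sketch: under Gaussian assumptions, free energy reduces to a sum of precision-weighted squared prediction errors, and perception becomes gradient descent on that quantity. The generative mapping `g(mu) = mu**2`, the prior, and all precisions below are arbitrary toy choices, not parameters from the PP literature.

```python
# Minimal single-level Gaussian predictive-coding sketch (illustrative only).
# The agent holds a belief mu about a hidden cause; its generative model
# predicts the observation as g(mu) = mu**2. Free energy is then a sum of
# precision-weighted squared prediction errors, and perception is gradient
# descent on free energy with respect to the belief mu.

def free_energy(mu, obs, prior=1.0, pi_obs=1.0, pi_prior=1.0):
    eps_obs = obs - mu**2          # sensory prediction error
    eps_prior = mu - prior         # prior prediction error
    return 0.5 * (pi_obs * eps_obs**2 + pi_prior * eps_prior**2)

def perceive(obs, mu=1.0, lr=0.01, steps=2000):
    for _ in range(steps):
        # analytic gradient of free_energy with respect to mu
        grad = -2 * mu * (obs - mu**2) + (mu - 1.0)
        mu -= lr * grad
    return mu

# For obs = 4.0 the belief settles just below sqrt(4) = 2,
# pulled slightly toward the prior of 1.0.
print(f"mu = {perceive(4.0):.3f}")
```

The point of the toy is structural: the "representation" mu is nothing over and above a variable tuned by self-organized error minimization, which is the sense in which the FEP is claimed to bridge internalist and enactive readings.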
The nature of cognition is being re-considered. Instead of emphasizing formal operations on abstract symbols, the new approach foregrounds the fact that cognition is, rather, a situated activity, and suggests that thinking beings ought therefore to be considered first and foremost as acting beings. The essay reviews recent work in Embodied Cognition, provides a concise guide to its principles, attitudes and goals, and identifies the physical grounding project as its central research focus.
The article considers the complexities of thinking about the computer as a model of the mind. It examines the computer as a model of the brain in several very different senses of “model”. On the one hand, the basic architecture of the first modern stored-program computers was “modeled on” the brain by John von Neumann, who also sought to build a mathematical model of the biological brain as a complex system. A similar but different approach to modeling the brain was taken by Alan Turing, who believed that the mind simply was a universal computer and sought to show how brain-like networks could self-organize into Universal Turing Machines. On the other hand, Turing saw the computer as the universal machine that could simulate any other machine, and thus any particular human skill, and could thereby simulate human intelligence. This leads to a discussion of the nature of “simulation” and its relation to models and modeling. The article applies this analysis to a written correspondence between Ashby and Turing, in which Turing urges Ashby to simulate his cybernetic Homeostat device on the ACE computer rather than build a special machine.
This chapter sketches an intellectual portrait of W. Ross Ashby’s thought from his earliest work on the mechanisms of intelligence in 1940 through the birth of what is now called artificial intelligence (AI), around 1956, and to the end of his career in 1972. It begins by examining his earliest published works on adaptation and equilibrium, and the conceptual structure of his notions of the mechanisms of control in biological systems. In particular, it assesses his conceptions of mechanism, equilibrium, stability, and the role of breakdown in achieving equilibrium. It then proceeds to his work on refining the concept of “intelligence,” on the possibility of the mechanical augmentation and amplification of human intelligence, and on how machines might be built that surpass human understanding in their capabilities. Finally, the chapter considers the significance of his philosophy and its role in cybernetic thought.
The concept of “autonomy,” once at the core of the original enactivist proposal in The Embodied Mind (Varela et al. in The embodied mind: cognitive science and human experience. MIT Press, Cambridge, 1991), is nowadays ignored or neglected by some of the most prominent contemporary enactivist approaches. Theories of autonomy, however, come to fill a theoretical gap that sensorimotor accounts of cognition cannot ignore: they provide a naturalized account of normativity and the resources to ground the identity of a cognitive subject in its specific mode of organization. There are, however, good reasons for the contemporary neglect of autonomy as a relevant concept for enactivism. On the one hand, the concept of autonomy has too often been assimilated into autopoiesis (or basic autonomy in the molecular or biological realm) and the implications are not always clear for a dynamical sensorimotor approach to cognitive science. On the other hand, the foundational enactivist proposal displays a metaphysical tension between the concept of operational closure (autonomy), deployed as constitutive, and that of structural coupling (sensorimotor dynamics); making it hard to reconcile with the claim that experience is sensorimotorly constituted. This tension is particularly apparent when Varela et al. propose Bittorio (a 1D cellular automaton) as a model of the operational closure of the nervous system, as it fails to satisfy the required conditions for a sensorimotor constitution of experience. It is, however, possible to solve these problems by re-considering autonomy at the level of sensorimotor neurodynamics. Two recent robotic simulation models are used for this task, illustrating the notion of strong sensorimotor dependency of neurodynamic patterns, and their networked intertwinement.
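The distinction between operational closure and structural coupling that Bittorio is meant to illustrate can be sketched with a generic 1D binary cellular automaton on a ring. This is loosely inspired by Varela et al.'s model, not a reconstruction of it: the rule number, ring size, and perturbation sites below are arbitrary illustrative choices.

```python
# Illustrative sketch, loosely inspired by "Bittorio" (Varela et al. 1991):
# a 1D binary cellular automaton on a closed ring. The update rule depends
# only on the system's own states (operational closure); the environment
# can perturb cell states but cannot dictate the rule (structural coupling).
# Rule 110 and a 16-cell ring are arbitrary choices for illustration.

def step(cells, rule=110):
    n = len(cells)
    out = []
    for i in range(n):
        # encode the (left, self, right) neighborhood on the ring as 0..7
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)  # look up the new state in the rule table
    return out

def perturb(cells, positions):
    # environmental perturbation: flip the states of the named cells
    for p in positions:
        cells[p] ^= 1
    return cells

ring = [0] * 16
ring[8] = 1                          # a single active cell to start
for t in range(8):
    ring = step(ring)                # closed autonomous dynamics
ring = perturb(ring, [0, 3])         # coupling enters only as perturbation
for t in range(8):
    ring = step(ring)                # the same internal rule digests it
print(''.join(map(str, ring)))
```

The design choice worth noticing is that `perturb` can only set states, never rewrite `step`: the environment triggers but does not specify the system's response, which is the sense of closure at issue in the abstract.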
The concept of habit is proposed as an enactivist building block for cognitive theorizing, re-conceptualizing mental life as a habit ecology, tied within an agent’s behaviour-generating mechanism in coordination with its environment. Norms can be naturalized in terms of dynamic, interactively self-sustaining coherentism. This conception of autonomous sensorimotor agency is put in contrast with those enactive approaches that reject autonomy or neglect the theoretical resources it has to offer for the project of naturalizing minds.
Dynamicism has provided cognitive science with important tools to understand some aspects of “how cognitive agents work” but the issue of “what makes something cognitive” has not been sufficiently addressed yet and, we argue, the former will never be complete without the latter. Behavioristic characterizations of cognitive properties are criticized in favor of an organizational approach focused on the internal dynamic relationships that constitute cognitive systems. A definition of cognition as adaptive-autonomy in the embodied and situated neurodynamic domain is provided: a compensatory regulation of a web of stability dependencies between sensorimotor structures, created and preserved during a historical/developmental process. We highlight the functional role of emotional embodiment: internal bioregulatory processes coupled to the formation and adaptive regulation of neurodynamic autonomy. Finally, we discuss a “minimally cognitive behavior program” in evolutionary simulation modeling, suggesting that much is to be learned from a complementary “minimally cognitive organization program”.
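The "minimally cognitive behavior program" mentioned above refers to evolving simple agent controllers for simple tasks and then analyzing the resulting dynamics. A bare-bones sketch of that methodology, in the spirit of such work but not reproducing any published setup (the one-parameter controller, the 1D target-approach task, and all numbers are illustrative assumptions):

```python
import random

# Toy sketch in the spirit of the "minimally cognitive behavior program":
# a population of agents, each controlled by a single evolved gain, is
# selected for approaching a target on a line. This is a bare-bones
# genetic algorithm, not any published CTRNN methodology; all parameters
# are illustrative.

random.seed(0)

def fitness(gain, target=5.0, steps=50, dt=0.1):
    x = 0.0
    for _ in range(steps):
        x += dt * gain * (target - x)   # proportional "sensorimotor" law
    return -abs(target - x)             # closer to the target = fitter

def evolve(pop_size=20, generations=30, sigma=0.2):
    pop = [random.uniform(-1.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]    # truncation selection: keep top half
        # refill the population with Gaussian-mutated copies of the elite
        pop = elite + [g + random.gauss(0.0, sigma) for g in elite]
    return max(pop, key=fitness)

best = evolve()
print(f"best gain = {best:.2f}, fitness = {fitness(best):.4f}")
```

The abstract's complementary "minimally cognitive organization program" would shift the analysis from such evolved behavior to the internal organization that makes the evolved system cognitive in the first place.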
In this article, we propose some fundamental requirements for the appearance of adaptivity. We argue that a basic metabolic organization, taken in its minimal sense, may provide the conceptual framework for naturalizing the origin of teleology and normative functionality as it appears in living systems. However, adaptivity also requires the emergence of a regulatory subsystem, which implies a certain form of dynamic decoupling within a globally integrated, autonomous system. Thus, we analyze several forms of minimal adaptivity, including the special case of motility. We go on to explain how an open-ended growth in the complexity of motility-based adaptive agency, namely, behavior, requires the appearance of the nervous system. Finally, we discuss some implications of these ideas for embodied robotics.
The concept of agency is of crucial importance in cognitive science and artificial intelligence, and it is often used as an intuitive and rather uncontroversial term, in contrast to more abstract and theoretically heavyweight terms like “intentionality”, “rationality” or “mind”. However, most of the available definitions of agency are either too loose or unspecific to allow for a progressive scientific program. They implicitly and unproblematically assume the features that characterize agents, thus obscuring the full potential and challenge of modeling agency. We identify three conditions that a system must meet in order to be considered a genuine agent: a) a system must define its own individuality, b) it must be the active source of activity in its environment (interactional asymmetry) and c) it must regulate this activity in relation to certain norms (normativity). We find that even minimal forms of proto-cellular systems can already provide a paradigmatic example of genuine agency. By abstracting away some specific details of minimal models of living agency we define the kind of organization that is capable of meeting the required conditions for agency (which is not restricted to living organisms). On this basis, we define agency as an autonomous organization that adaptively regulates its coupling with its environment and contributes to sustaining itself as a consequence. We find that spatiality and temporality are the two fundamental domains in which agency unfolds at different scales. We conclude by giving an outlook on the road that lies ahead in the pursuit to understand, model and synthesize agents.
In this article, we would like to discuss some aspects of a theoretical framework for Artificial Life, focusing on the problem of an explicit definition of living systems useful for effectively constructing them artificially. The limits of a descriptive approach will be critically discussed, and a constructive (synthetic) approach will be proposed on the basis of the autopoietic theory of Maturana and Varela.