For millennia people have wondered what makes the living different from the non-living. Beginning in the mid-1980s, artificial life has studied living systems using a synthetic approach: build life in order to understand it better, be it by means of software, hardware, or wetware. This review provides a summary of the advances that led to the development of artificial life, its current research topics, and open problems and opportunities. We classify artificial life research into 14 themes: origins of life, autonomy, self-organization, adaptation (including evolution, development, and learning), ecology, artificial societies, behavior, computational biology, artificial chemistries, information, living technology, art, and philosophy. Being interdisciplinary, artificial life seems to be losing its boundaries and merging with other fields. Relevance: Artificial life has contributed to philosophy of biology and of cognitive science, thus making it an important field related to constructivism.
This chapter sketches an intellectual portrait of W. Ross Ashby’s thought from his earliest work on the mechanisms of intelligence in 1940 through the birth of what is now called artificial intelligence (AI), around 1956, and to the end of his career in 1972. It begins by examining his earliest published works on adaptation and equilibrium, and the conceptual structure of his notions of the mechanisms of control in biological systems. In particular, it assesses his conceptions of mechanism, equilibrium, stability, and the role of breakdown in achieving equilibrium. It then proceeds to his work on refining the concept of “intelligence,” on the possibility of the mechanical augmentation and amplification of human intelligence, and on how machines might be built that surpass human understanding in their capabilities. Finally, the chapter considers the significance of his philosophy and its role in cybernetic thought.
The concept of agency is of crucial importance in cognitive science and artificial intelligence, and it is often used as an intuitive and rather uncontroversial term, in contrast to more abstract and theoretically loaded terms like “intentionality”, “rationality” or “mind”. However, most of the available definitions of agency are either too loose or too unspecific to allow for a progressive scientific program. They implicitly and unproblematically assume the features that characterize agents, thus obscuring the full potential and challenge of modeling agency. We identify three conditions that a system must meet in order to be considered a genuine agent: a) a system must define its own individuality, b) it must be the active source of activity in its environment (interactional asymmetry) and c) it must regulate this activity in relation to certain norms (normativity). We find that even minimal forms of proto-cellular systems can already provide a paradigmatic example of genuine agency. By abstracting away some specific details of minimal models of living agency we define the kind of organization that is capable of meeting the required conditions for agency (which is not restricted to living organisms). On this basis, we define agency as an autonomous organization that adaptively regulates its coupling with its environment and contributes to sustaining itself as a consequence. We find that spatiality and temporality are the two fundamental domains across which agency unfolds at different scales. We conclude with an outlook on the road that lies ahead in the pursuit of understanding, modeling and synthesizing agents.
This Introduction is our attempt to clarify further the cluster of key notions: autonomy, viability, abduction and adaptation. These notions form the conceptual scaffolding within which the individual contributions contained in this volume can be placed. Hopefully, these global concepts represent fundamental signposts for future research that can spare us the mere flurry of modelling and simulation into which this new field could otherwise fall.
Artificial intelligence research has foundered on the issue of representation. When intelligence is approached in an incremental manner, with strict reliance on interfacing to the real world through perception and action, reliance on representation disappears. In this paper we outline our approach to incrementally building complete intelligent Creatures. The fundamental decomposition of the intelligent system is not into independent information processing units which must interface with each other via representations. Instead, the intelligent system is decomposed into independent and parallel activity producers which all interface directly to the world through perception and action, rather than interfacing with each other to any significant degree. The notions of central and peripheral systems evaporate: everything is both central and peripheral. Based on these principles we have built a very successful series of mobile robots which operate without supervision as Creatures in standard office environments.
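The decomposition described above can be illustrated with a minimal sketch of a subsumption-style control loop. This is an illustrative simplification, not code from the paper: the behavior names, the percept dictionary, and the simple priority-ordered arbitration are all assumptions introduced here to show how parallel activity producers couple perception directly to action, with higher layers suppressing lower ones.

```python
# Illustrative sketch of a subsumption-style architecture (hypothetical
# names and percept format; not the authors' implementation).

class Behavior:
    """An activity producer: maps percepts directly to an action, or None."""
    def act(self, percepts):
        raise NotImplementedError

class Wander(Behavior):
    """Lowest layer: always proposes a default action."""
    def act(self, percepts):
        return "move-forward"

class AvoidObstacle(Behavior):
    """Higher layer: produces output only when the world demands it."""
    def act(self, percepts):
        if percepts.get("obstacle_ahead"):
            return "turn-left"
        return None

def arbitrate(layers, percepts):
    """A higher layer subsumes (suppresses) lower ones when it produces output."""
    for behavior in layers:  # ordered highest priority first
        action = behavior.act(percepts)
        if action is not None:
            return action
    return "idle"

layers = [AvoidObstacle(), Wander()]
print(arbitrate(layers, {"obstacle_ahead": True}))   # -> turn-left
print(arbitrate(layers, {"obstacle_ahead": False}))  # -> move-forward
```

Note that no behavior consults a shared world model: each layer reads the percepts and emits an action on its own, which is the sense in which representation-as-interface drops out.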
Research on social cognition needs to overcome a disciplinary disintegration. On the one hand, in cognitive science and philosophy of mind – even in recent embodied approaches – the explanatory weight still falls largely on individual capacities. In social science, on the other hand, the investigation of the interaction process and interactional behaviour is not often brought to bear on individual aspects of social cognition. Not bringing these approaches together has unfairly limited the range of possible explanations of social understanding to the postulation of complicated internal mechanisms (contingency detection modules, for instance). Starting from the question What is a social interaction? we propose a fresh look at the problem aimed at integrating individual cognition and the interaction process in order to arrive at more parsimonious explanations of social understanding. We show how an enactive framework can provide a way to do this, starting from the notions of autonomy, sense-making and coordination. We propose that not only each individual in a social encounter but also the interaction process itself has autonomy. Examples illustrate that these autonomies evolve throughout an encounter, and that collective as well as individual mechanisms are at play in all social interactions. We also introduce the notion of participatory sense-making in order to connect meaning-generation with coordination. This notion describes a spectrum of degrees of participation, from the modulation of individual sense-making by coordination patterns, through orientation, to joint sense-making. Finally, we discuss implications for empirical research on social interaction, especially for studies of social contingency.
Autopoietic enactivism (AE) is a relatively young but increasingly influential approach within embodied cognitive science, which aims to offer a viable alternative framework to mainstream cognitivism. Similarly, in biology, the nascent field of biosemiotics has steadily been developing an increasingly influential alternative framework to mainstream biology. Despite sharing common objectives and clear theoretical overlap, there has to date been little to no exchange between the two fields. This paper takes this under-appreciated overlap as not only a much needed call to begin building bridges between the two areas but also as an opportunity to explore how AE could benefit from biosemiotics. As a first tentative step towards this end, the paper will draw from both fields to develop a novel synthesis – biosemiotic enactivism – which aims to clarify, develop and ultimately strengthen some key AE concepts. The paper has two main goals: (i) to propose a novel conception of cognition that could contribute to the ongoing theoretical developments of AE and (ii) to introduce some concepts and ideas from biosemiotics to the enactive community in order to stimulate further debate across the two fields.
This article deals with the links between the enaction paradigm and artificial intelligence. Enaction is considered a metaphor for artificial intelligence, as a number of the notions which it deals with are deemed incompatible with the phenomenal field of the virtual. After explaining this stance, we shall review previous works regarding this issue in terms of artificial life and robotics. We shall focus on the lack of recognition of co-evolution at the heart of these approaches. We propose to explicitly integrate the evolution of the environment into our approach in order to refine the ontogenesis of the artificial system, and to compare it with the enaction paradigm. The growing complexity of the ontogenetic mechanisms to be activated can therefore be compensated by an interactive guidance system emanating from the environment. This proposition does not, however, resolve the question of the relevance of the meaning created by the machine (sense-making). Such reflections lead us to integrate human interaction into this environment in order to construct relevant meaning in terms of participative artificial intelligence. This raises a number of questions with regard to setting up an enactive interaction. The article concludes by exploring a number of issues, thereby enabling us to associate current approaches with the principles of morphogenesis, guidance, the phenomenology of interactions and the use of minimal enactive interfaces in setting up experiments which will deal with the problem of artificial intelligence in a variety of enaction-based ways.
Morphological computing emerged recently as an approach in robotics aimed at saving robots’ computational and other resources by utilizing physical properties of the robotic body to produce and control behavior automatically. The idea is that the morphology of an agent (a living organism or a machine) constrains its possible interactions with the environment as well as its development, including its growth and reconfiguration. The nature of morphological computing becomes especially apparent in the info-computational framework, which combines informational structural realism (the idea that the world, for an agent, is an informational structure) with natural computationalism (the view that all of nature forms a network of computational processes). Info-computationalism is a constructivist approach that describes morphological computation as a process of continuous self-structuring of information and shaping of both interactions and informational structures.
This article presents a constructivist model of human cognitive development during infancy. According to constructivism, the elements of mental representation, even such basic elements as the concept of a physical object, are constructed afresh by each individual, rather than being innately supplied. A (partially specified, as yet unimplemented) mechanism, the Schema Mechanism, is proposed here; this mechanism is intended to achieve a series of cognitive constructions characteristic of infants' sensorimotor-stage development, primarily as described by Piaget. In reference to Piaget's “genetic epistemology”, I call this approach genetic AI: “genetic” not in the sense of genes, but in the sense of genesis, development from the point of origin. The Schema Mechanism focuses on Piaget's concept of the activity and evolution of cognitive schemas. The schema is construed here as a context-sensitive prediction of what will follow a certain action. Schemas are used both as assertions about the world, and as elements of plans to achieve goals. A mechanism of attribution causes a schema's assertion to be extended or revised according to the observed effects of the schema's action; due to the possible relevance of conjunctions of context conditions, the attribution facility needs to be able to sort through a combinatorial explosion of hypotheses. Crucially, the mechanism constructs representations of new actions and state elements, in terms of which schemas are expressed. Included here is a sketch of the proposed Schema Mechanism, and highlights of a hypothetical scenario of the mechanism's operation. The Schema Mechanism starts with a set of sensory and motor primitives as its sole units of representation. As with the Piagetian neonate, this leads to a “solipsist” conception: the world consists of sensory impressions transformed by motor actions.
My scenario suggests how the mechanism might progress from there to conceiving of objects in space: representing an object independently of how it is currently perceived, or even whether it is currently perceived. The details of this progression parallel the Piagetian development of object conception from the first through fifth sensorimotor stages.
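The core data structure described above, a schema as a context-sensitive prediction that a result follows an action, with attribution revising it from observed effects, can be sketched in a few lines. This is a toy simplification under assumptions introduced here (sets of condition tokens for context and state, a bare success ratio for reliability), not the mechanism as specified in the article.

```python
# Toy sketch of a schema: a context-sensitive prediction that a result
# follows an action. The condition tokens and reliability measure are
# hypothetical simplifications, not the article's specification.

class Schema:
    def __init__(self, context, action, result):
        self.context = context    # set of conditions under which the prediction applies
        self.action = action
        self.result = result      # state element predicted to follow the action
        self.successes = 0
        self.trials = 0

    def applicable(self, state):
        """The schema applies when all its context conditions hold."""
        return self.context <= state

    def record(self, state_before, state_after):
        """Attribution: revise the schema's assertion from observed effects."""
        if self.applicable(state_before):
            self.trials += 1
            if self.result in state_after:
                self.successes += 1

    def reliability(self):
        return self.successes / self.trials if self.trials else 0.0

s = Schema(context={"hand-near-object"}, action="grasp", result="holding-object")
s.record({"hand-near-object"}, {"holding-object"})  # the prediction held
s.record({"hand-near-object"}, set())               # the action failed this time
print(s.reliability())                              # -> 0.5
```

The combinatorial-explosion problem the abstract mentions arises because attribution must consider every conjunction of context conditions as a candidate `context` set; this sketch sidesteps that by fixing the context in advance.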