In a previous paper, an interpretation of spirituality along constructivist lines was proposed (Gash and Shine Thompson, 2002). One line of exploration discussed personal transformation as a possible consequence of experiencing an epiphany – a moment of grace. Epiphanies are, first, grounded in constructivist psychology as moments when a person shifts levels to reach new understandings (Bateson, 1987). They are also moments of insight that open the possibility of personal transformation, and hence are potentially desirable experiences of spiritual growth. In the present paper we outline a series of epiphanies in children’s learning in the context of a project on constructionist learning led by one of us, Deirdre Butler. The purpose of the paper is to make a case for the importance of such moments as opportunities for personal growth, encapsulated in the project’s title, EmpoweringMinds. Relevance: The value of wonder in education; using digital technology in classrooms.
Problem: Is constructivism contradicted by the reductionist determinism inherent in digital computation? Method: Review of examples from the dynamical systems sciences, agent-based modeling and artificial intelligence. Results: Recent scientific insights suggest that constructivism is consistent with what computation is adding to our knowledge of interacting dynamics and of the functioning of our brains. Implications: Constructivism does not necessarily contradict digital computation, in particular computer-based modeling and simulation. Constructivist content: Viewed through the lens of computation, constructivism in many of its aspects appears consistent with what is currently held to be valid in science.
Problem: The inclusion of the observer in scientific observation entails a vicious circle: the observer must be observed as dependent on observation. Second-order science has to clarify how its underlying circularity can be scientifically conceived. Method: Essayistic and conceptual analysis, sporadically illustrated with agent-based experiments. Results: Second-order science – and, by implication, science in general – is fundamentally and ineluctably circular. Implications: The circularity of second-order science calls for analytical methods able to cope with phenomena of complex causation and “synchronous asynchrony,” such as tools for analyzing non-linearly interacting dynamics, decentralized clustered networks and, in general, systems of complex interacting components.
Context: Referring to a recent proposition by Kauffman about the “fundamental nature of circularity in cybernetics and in scientific work in general,” I try to advance this insight with the help of systems-scientific concepts and a computational model. Problem: Circularity is often taken as a metaphor that does not provide a firm epistemological basis for analysis. Method: The methodology builds on mathematics, computer-based modeling, and reasoning. Results: By building on conceptual suggestions for grasping the micro-macro difference of complex systems in terms of computational power, circularity can be conceived of as an emergent macro-level phenomenon. Implications: I show that the seemingly irritating – and traditionally evaded – concept of circularity is a fundamental and ubiquitous phenomenon in complex systems that can be grasped on a firm physical basis open to computational analysis. The proposal could support constructivist reasoning and eventually help to bridge the disconcerting gap between the humanities and the natural sciences. Constructivist content: Circularity is a fundamental principle in the conception of second-order cybernetics, and in particular in the observation of observing systems, as suggested by von Foerster. Setting it on a firm analytical basis could advance the constructivist approach and further support it in becoming the contemporary scientific epistemology it deserves to be.
Context: By proposing to regard objects as “tokens for eigenbehavior,” von Foerster’s seminal paper opposes the intuitive subject-object dualism of traditional philosophy, which considers objects to be instances of an external world. Problem: We argue that this proposal has two implications, one for epistemology and one for the demarcation between the natural sciences and the humanities. Method: Our arguments are based on insights gained from computational models and from reviewing the contributions to this special issue. Results: Epistemologically, von Foerster’s proposal suggests that what is called “reality” could be seen as an ensemble of eigenforms generated by the eigenbehavior that arises in the interaction of multiple dynamics. Regarding science, the contributions to this special issue demonstrate that the concept of eigenbehavior can be applied to a variety of disciplines, from the formal and natural sciences to the humanities. Its universal applicability provides a strong argument for transdisciplinarity, and its emphasis on the observer points in the direction of an observer-inclusive science. Implications: Thinking in terms of eigenbehavior may not only help tear down the barriers between the sciences and the humanities (although a common methodology based on von Foerster’s transdisciplinary approach has yet to crystallize); a better understanding of eigenbehaviors may also have profound effects on our understanding of ourselves. This also opens the way to innovative behavior design/modification technologies.
Context: W. R. Ashby’s work on homeostasis as the basic mechanism underlying all kinds of physiological as well as cognitive functions has aroused renewed interest in cognitive science and related disciplines. Researchers have successfully incorporated some of Ashby’s technical results, such as ultrastability, into modern frameworks (e.g., CTRNNs). Problem: The recovery of Ashby’s technical contributions has left in the background his far more controversial non-technical views, according to which homeostatic adaptation to the environment governs all aspects of all forms of life. This thesis entails that life is fundamentally “heteronomous,” and it is conceptually at odds with the autopoiesis framework adopted by Ashby’s recent defenders, as well as with the primacy of autonomy in human life that most of the Western philosophical tradition upholds. The paper argues that computer simulations focused on the more conceptual aspects of Ashby’s thought may help us recover, extend and consequently assess an overall view of life as heteronomy. Method: The paper discusses computer simulations of Ashby’s original electro-mechanical device (the homeostat) that implement his techniques (double feedback loops and random parameter-switching). Results: Initial simulation results show that even though Ashby’s claims about homeostatic adaptivity need to be slightly weakened, his overall results are confirmed, suggesting that an extension to virtual robots engaged in minimal cognitive tasks may be successful. Implications: The paper shows that a fuller incorporation of Ashby’s original results into recent cognitive science research may trigger a philosophical and technical reevaluation of the traditional distinction between heteronomous and autonomous behavior. Constructivist content: The research outlined in the paper supports an extended constructionist perspective in which agency as autonomy plays a more limited role.
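The double-feedback mechanism mentioned in the abstract can be sketched in a few lines. The following is a minimal toy illustration of Ashby-style ultrastability, not the paper's actual simulations: a primary feedback loop drives the state continuously, while a second loop monitors an "essential variable" and randomly re-draws the system's parameter whenever that variable leaves its viable bounds.

```python
import random

def ultrastable_run(steps=2000, bound=1.0, dt=0.1, seed=0):
    """Toy ultrastable unit: state x follows x <- x + dt*w*x (primary loop).
    Whenever the essential variable |x| exceeds `bound`, the parameter w is
    re-drawn at random (second loop: random parameter-switching), until a
    configuration is found that keeps x within the viable region."""
    rng = random.Random(seed)
    x = 0.5
    w = rng.uniform(-1.0, 1.0)           # initial random parameter
    switches = 0
    for _ in range(steps):
        x = x + dt * w * x               # primary (continuous) feedback loop
        if abs(x) > bound:               # essential variable out of bounds
            w = rng.uniform(-1.0, 1.0)   # random re-parameterisation
            x = max(-bound, min(bound, x))  # clip back into the viable region
            switches += 1
    return x, w, switches

x, w, switches = ultrastable_run()
# after enough steps the system settles on a stabilizing parameter (w < 0)
```

The point of the sketch is Ashby's: the unit does not "know" which parameters are stable; blind switching plus a viability constraint is enough for it to end up adapted.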
This paper introduces an original model that provides software agents and robots with the capacity to learn by interpreting regularities in their stream of sensorimotor experience, rather than by exploiting data that would give them ontological information about a predefined domain. Specifically, the model draws inspiration from: a) the embodied cognition movement, b) the philosophy of knowledge, c) constructivist epistemology, and d) the theory of enaction. Corresponding to these four influences: a) our agents discover their environment through their body’s active capacity for experimentation; b) they do not know their environment “as such” but only “as they can experience it”; c) they construct knowledge from regularities of sensorimotor experience; and d) they have some level of constitutive autonomy. Technically, the model differs from the traditional perception/cognition/action model in that it rests upon atomic sensorimotor experiences rather than separating percepts from actions. We present algorithms that implement this model and describe experiments to validate them. These experiments show that the agents exhibit a certain form of intelligence through their behaviors, as they construct proto-ontological knowledge of the phenomena that appear to them when they observe persistent possibilities of sensorimotor experience in time and space. These results support a theory of artificial intelligence without ontological data about a presupposed reality. Applications include more robust ways of creating robots capable of constructing their own knowledge and goals in a real world that may be initially unknown to them and unmodeled by their designers.
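To make the idea of "atomic sensorimotor experiences" concrete, here is a minimal sketch in the spirit of this approach; the toy environment, the experiment/result names and the valence values are illustrative assumptions, not the authors' algorithms. The agent's primitives are whole interactions (experiment, result) with an innate valence; it records which interaction tends to follow which, and selects the experiment whose anticipated interaction has the highest valence.

```python
import random
from collections import defaultdict

# Innate valences of whole interactions (experiment, result) -- no separate
# percepts and actions; these pairs are the agent's atomic experiences.
VALENCE = {("e1", "r1"): -1, ("e1", "r2"): 1,
           ("e2", "r1"): -1, ("e2", "r2"): 1}

def environment(experiment, step):
    """Hidden toy regularity: e1 yields the pleasant result r2 on even
    steps, e2 yields it on odd steps. The agent never sees this rule."""
    if experiment == "e1":
        return "r2" if step % 2 == 0 else "r1"
    return "r2" if step % 2 == 1 else "r1"

def anticipate(counts, previous, exp):
    """Most frequently observed result of `exp` after `previous`, or None."""
    observed = {res: n for (e, res), n in counts[previous].items() if e == exp}
    return max(observed, key=observed.get) if observed else None

def run(steps=60, seed=1):
    rng = random.Random(seed)
    counts = defaultdict(lambda: defaultdict(int))  # previous -> next-interaction counts
    previous, satisfied = None, 0
    for step in range(steps):
        # choose the experiment whose anticipated interaction has best valence
        choices = []
        for exp in ("e1", "e2"):
            res = anticipate(counts, previous, exp)
            val = VALENCE[(exp, res)] if res else 0   # unknown outcomes are neutral
            choices.append((val, rng.random(), exp))  # random tie-breaking
        exp = max(choices)[2]
        enacted = (exp, environment(exp, step))
        counts[previous][enacted] += 1                # learn the regularity
        satisfied += VALENCE[enacted] > 0
        previous = enacted
    return satisfied / steps                          # fraction of pleasant interactions
```

After a short exploratory phase the agent locks into the alternation that the environment affords, without ever being given ontological information about "even" and "odd": the regularity exists for it only as a recurring pattern in its own stream of experience.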
User experience research in the field of human-computer interaction tries to understand how humans experience interaction with technological artefacts. It is a young and still emerging field that exists in an area of tension: there is no consensus on how the concept of user experience should be defined or on how it should be researched. This paper focuses on two major, competing strands of research in the field, giving an overview of both and relating them to each other. Both start from the same premises: usability (focusing on performance) is not enough, being only part of the interaction with technological artefacts; and user experience is not very different from experience in general. From there they develop quite different accounts of the concept. One focuses on uncovering the objective in the subjective, on the precise and the formal; the other stresses the ambiguous and the human, and suggests living with the subjectivity inherent in the concept of (user) experience. One focuses more on evaluation than on design, the other more on design than on evaluation. One is a model, the other more a framework of thought. Both can be criticised: the model can be questioned in terms of validity, while the results of the other approach do not easily generalize across contexts, so its reliability can be questioned. The need for a unified view in user experience research is sometimes emphasized. While I doubt the possibility of a unified view, I think it is possible to combine the two approaches. This combination has only rarely been attempted and has not been critically reflected upon.
In spite of decades of use of agent-based modelling in social policy research and in educational contexts, very little work has been done on combining the two. This paper reports a proof-of-concept single-case study conducted in a college-level Social Policy course, using agent-based modelling to teach students about the social and human aspects of urban planning and regional development. The study finds that an agent-based model helped a group of students think through a social policy design decision by acting as an object-to-think-with, and helped students better connect social policy outcomes with behaviours at the level of individual citizens. The study also identifies a set of new issues facing the design of constructionist activities or environments for the social sciences.
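To illustrate the kind of model that can serve as an object-to-think-with in such a course, here is a generic Schelling-style segregation sketch; it is an illustrative assumption, not the model used in the case study. It shows the connection the abstract describes: a mild individual preference for similar neighbours produces strong segregation at the city level.

```python
import random

def run_schelling(size=20, threshold=0.3, steps=20000, seed=42):
    """Toy Schelling model: agents of type 'A' or 'B' on a toroidal grid
    (None = vacant cell) move to a random vacant cell whenever fewer than
    `threshold` of their occupied neighbours are of their own type.
    Returns the mean share of like neighbours (a segregation index)."""
    rng = random.Random(seed)
    grid = [[rng.choice(["A", "B", None]) for _ in range(size)] for _ in range(size)]

    def neighbours(x, y):
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    yield grid[(x + dx) % size][(y + dy) % size]

    def unhappy(x, y):
        me = grid[x][y]
        if me is None:
            return False
        occupied = [n for n in neighbours(x, y) if n is not None]
        if not occupied:
            return False
        return sum(n == me for n in occupied) / len(occupied) < threshold

    for _ in range(steps):
        x, y = rng.randrange(size), rng.randrange(size)
        if unhappy(x, y):
            vx, vy = rng.randrange(size), rng.randrange(size)
            if grid[vx][vy] is None:                  # move to a random vacant cell
                grid[vx][vy], grid[x][y] = grid[x][y], None

    # measure the outcome: mean share of like neighbours among occupied cells
    shares = []
    for x in range(size):
        for y in range(size):
            me = grid[x][y]
            if me is None:
                continue
            occupied = [n for n in neighbours(x, y) if n is not None]
            if occupied:
                shares.append(sum(n == me for n in occupied) / len(occupied))
    return sum(shares) / len(shares)
```

Even though each agent tolerates being in a local minority (threshold 0.3), the resulting neighbourhoods end up far more homogeneous than any individual demands, which is exactly the micro-to-macro disconnect such a model lets students think through.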