CEPA eprint 1503 (EVG-216)

The cybernetic insights of Jean Piaget

Glasersfeld E. von (1999) The cybernetic insights of Jean Piaget. Cybernetics and Systems 30(2): 105–112. Available at http://cepa.info/1503
Self-organization is one of the central concerns of cybernetics. In 1937, 12 years before Norbert Wiener’s first book, Jean Piaget published an extensive study of his model of the child as a self-organizing cognitive agent that constructs its own knowledge. In this paper, I shall survey some of the fundamental commonalities of Piaget’s genetic epistemology and cybernetical thinking.
In 1937 Piaget published one of the most important books of his career. This was 60 years ago, and most of you, if you were there at all, had not yet learned to read. Some of you may have read the book since then, but I doubt that you gave it the inordinate amount of attention required to come to a full interpretation of it. The point is that this book requires something that, in those years, had not yet been invented – it requires parallel processing.
The book consists of five chapters, printed one after the other, but in order to get an adequate picture of the model Piaget is presenting, they have to be coordinated so that one comes to see them side-by-side as an integrated whole. The title, The Construction of Reality (Piaget, 1937), suggests this, but to carry it out on the basis of Piaget’s text is a Herculean task. Only in the last chapter are you given the principle that underlies the whole: The mind organizes the world by organizing itself.
Self-organization had, of course, been mentioned before, by von Bertalanffy, Bogdanov, and perhaps others who had not yet been discovered as forerunners of cybernetics. No one, however, had taken self-organization as the basic principle of human knowing. This was exactly what Piaget did.
And he went one step further; he suggested that the notion of the split between self and world, between the knower and the known, arises out of unconscious action. This action, he suggested, was driven by the needs of the organism and constrained by an environment which, at that point in development, is not yet experienced as something independent from the experiencer.
Piaget did not explicitly describe how the self comes to separate itself from the original oneness, but he said enough to suggest an educated guess. In many of his writings he reiterated two basic theoretical principles:
Knowledge invariably springs from action, and rhythm and regularity are its basis.
Unfortunately, we have no way of observing how the child begins to organize an experiential world during the nine months in the womb. It surely does, but we have to wait until it is born. Even at this point, the infant’s range of actions is still limited. But there is one action that is difficult to ignore, because it is rather recurrent. The infant cries. No one who has to do with infants can help noticing it. The science of psychology has been reluctant to investigate the phenomenon experimentally, no doubt for humanitarian reasons. To run experiments in which a baby is deliberately made to cry is considered out of bounds. Behavioral scientists, therefore, provided an ad hoc explanation.
Crying, as a rule, is not random behavior. It is directed, and it ceases when a specific change in the infant’s environment takes place. The caretaker may not always immediately recognize what particular change it requires, but the choice is limited. The infant may have been hungry, wet, too hot or cold, or subject to some other physical perturbation. A mere handful of causes, and so the student of development could assume that there is fixed wiring in the infant that connects the crying to these specific perturbations. The blatant behavior could, therefore, be written off as a kind of reflex, an automatic reaction that needs neither knowledge nor deliberation.
A few months later, however, the infant will cry when a toy is taken from him. It would be rather cumbersome to explain this by yet another specific reflex. It seems a good deal more parsimonious to say that there is a generic reflex that prompts the infant to cry whenever there is a perturbation of any kind. This seems a satisfactory explanation unless one asks how such generalizations could be achieved. But “stimulus generalization” was a powerful cover term for the science of behavior. It was simply taken for granted.
As far as I know, Piaget was the first to provide a model to explain the process of generalization. He saw it as a by-product of assimilation. But what he meant by “assimilation” has been very successfully obscured by innumerable introductory texts that purport to explain Piaget’s theory.
The simplest way to understand his concept of assimilation is to remember the old card-sorting machines that we struggled with in the early days of cybernetics. When we wanted the machine to extract from a deck of cards all those that had a particular constellation of holes punched into them, we provided a prototype of this constellation and told the machine to find all that matched it. The machine was able to do this. Indeed, it did it at a speed that, in those days, seemed miraculous. Part of the secret was that it looked for the particular constellation and for nothing else. The cards it selected might have all sorts of other holes, but this did not matter, because the class was defined by the holes in the given constellation. The machine did not see the other holes – it assimilated the cards to the prototype.
Assimilation is not, as so many textbooks say, a modification of the input. It is a restrictive way of seeing. To assimilate means to perceive only what fits one’s concepts.
The mechanized assimilation in the sorting machine was as powerful a tool in the technology of cybernetics as stimulus generalization was in behavior technology. Both were based on a trick which, many years ago, I called “the disregarding of differences.” The machine disregarded all the punched holes that were not the same as those of the given constellation – and the baby was supposed to be oblivious to the causes of its perturbations.
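To make the analogy concrete, here is a minimal sketch in Python (my own illustration, not anything specified by Piaget or by the designers of those machines), in which each card is represented as a set of punched-hole positions and a card is assimilated whenever the prototype constellation is contained among its holes; every other hole is simply disregarded.

```python
# A minimal sketch of assimilation as prototype matching, in the spirit of
# the card-sorting machine: a card is modelled (hypothetically) as a set of
# punched-hole positions, and it "matches" when the prototype's holes form a
# subset of its own. All other holes are invisible to the selection.

def assimilate(cards, prototype):
    """Select every card whose holes include the prototype constellation."""
    return [card for card in cards if prototype <= card]

if __name__ == "__main__":
    prototype = {3, 7, 12}                 # the constellation we look for
    deck = [
        {3, 7, 12},                        # exact match
        {1, 3, 5, 7, 12, 20},              # match; the extra holes are disregarded
        {3, 7},                            # no match: constellation incomplete
        {2, 4, 6},                         # no match at all
    ]
    selected = assimilate(deck, prototype)
    print(f"{len(selected)} of {len(deck)} cards assimilated to the prototype")
```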
Behaviorists rarely, if ever, investigated why a slight hunger perturbation will prompt you to look for a restaurant when you are leisurely strolling through a foreign city, whereas you often disregard quite strong signals from the stomach when you are engrossed in interesting work.
The early leaders of cybernetics had aims that differed from those of the behavioral scientists. They were interested in the design of gadgets that could control sequences of events which hitherto had required the supervision of a human perceiver. Consequently, they had to know the specific events and how they could be causally connected. They still did not have to ask how the targets were set relative to which the events were to be controlled. These targets, or reference values, were set by the human designer or user of the gadgets. The question of how the values were chosen was not immediately relevant to engineers. It came to the forefront, however, as soon as cybernetics jumped to the second order.
Because I was interested in the development of knowing from an epistemological perspective, I soon came up against that question of values. Piaget had said that knowledge springs from action, and Maturana complemented this by speaking of “effective action.” The moment we look at something that has no immediate effect on survival, it makes no sense to speak of control, or even of action, unless we posit values. If effectiveness is to be judged, we need criteria and a scale. Of course, I do not mean high-flown values such as liberty, honesty, or justice, but only primitive likes and dislikes.
I would suggest that the simplest form of values becomes manifest in what we call “preference.” The child, for instance, could be said to prefer being with the toy to being without it. To prefer one thing or condition over another does not require a very sophisticated system.
With the help of their favorite contraption, the “T-maze,” the behaviorists have shown that even lowly creatures such as earthworms, when given a choice, turn to wet rather than to dry soil. If you imagine yourself in the skin of a worm, you will appreciate that it’s easier to move through a slippery medium than through a dry and scratchy one. To describe this situation, one might be tempted to say that earthworms prefer moist humus and try to avoid the dry stuff.
The hard-nosed scientists among you will be inclined to say: “There’s no preference – it’s simply the law of least resistance!” Others might say it’s the law of least effort. Both would be appropriate. Indeed, in the case of the earthworm, it is a little fanciful to speak of preference, but it’s only a shade more anthropomorphic than all our other descriptions of the world. We always look at it from our human point of view – even at the quarks that we cannot see.
Let me return to the human animal. There is a startling experiment that has been replicated many times. Shortly after birth you put a contact under an infant’s pillow, so that a bell will ring if the infant turns her head to the right. This has to happen only once by accident, and the infant will ring the bell again and again by turning her head, until she gets bored with the sound.
There is no more resistance against turning a head right than against turning it left, and the effort is exactly the same. The ringing of the bell, however, is something that seems to interest the child. A behaviorist would say the sound of the bell is reinforcing, and leave it at that. But reinforcement is merely an empty cover term when it is applied to things that are not of physical benefit. And there is no physical reason why the baby should prefer the sound to silence. It is simply a preference, and a preference that is not determined biologically entails a value.
The experiment with the baby and the bell also shows another thing. The baby immediately catches on to the repeatability of the sequence |Turn head| → |Bell rings|, and she demonstrates a perturbation if, after three or four successful turns of the head, you switch off the bell, so that it no longer rings. In other words, babies expect regularity. If A has been followed by B several times, B is expected to follow after every A.
There is nothing infantile about this. As adults, too, we expect future experience to honor the rules we have abstracted from the past. As David Hume observed more than 200 years ago, if we did not expect such regularity, we would have no grounds whatever for making inferences or predictions.
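The following rough sketch (purely my own illustration, not a formalism from Piaget, Maturana, or Hume) shows this inductive expectation in miniature: once A has been followed by B a few times, B is expected after every A, and an expected B that fails to arrive registers as a perturbation. The class name and the threshold of three repetitions are assumptions made for the example.

```python
# An illustrative sketch of inductive expectation: after a few repetitions of
# "A is followed by B", B is expected after every A, and its absence counts
# as a perturbation. Threshold and names are assumptions for this example.

class ExpectationTracker:
    def __init__(self, threshold=3):
        self.successes = 0          # how often A has been followed by B so far
        self.threshold = threshold  # repetitions needed before B is expected

    def observe(self, a_occurred, b_followed):
        """Record one episode and report how it is experienced."""
        if not a_occurred:
            return "nothing to expect"
        expected = self.successes >= self.threshold
        if b_followed:
            self.successes += 1
            return "expectation confirmed" if expected else "regularity forming"
        return "perturbation" if expected else "no expectation yet"

if __name__ == "__main__":
    baby = ExpectationTracker()
    # Turn head -> bell rings four times; then the bell is switched off.
    for bell_rings in [True, True, True, True, False]:
        print(baby.observe(a_occurred=True, b_followed=bell_rings))
```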
The babies’ preferences and expectation of regularity are obviously not conscious in the sense we usually give that word. Yet, as Maturana (1970) wrote 27 years ago:
A living system, due to its circular organization, is an inductive system and functions always in a predictive manner: what happened once will occur again. Its organization (genetic and otherwise) is conservative and repeats only that which works. (Biology of Cognition, 1970)
This describes very well what one observes with the baby and the bell, and it fits innumerable other observations. But it does not reveal how the baby distinguishes what works from what does not work. Such a distinction requires an evaluation and a preference.
I have no idea what kind of agency could carry out an evaluation on the preconscious level, but then I do not (nor, it seems to me, does anyone else) have a viable model of the agency that, a little later in life, allows us to make conscious evaluations. It is an integral part of what we call our self, and I shall leave it at that.
Here, I merely want to suggest that catching on to regularities is sufficient to construct the dichotomy of self and world within a sensorimotor manifold that has no such split to begin with. I have borrowed the term “manifold” from Kant, because it neatly fits the totality of nervous impulses that whirr about in the closed nervous system. William James called it “a blooming, buzzing confusion.”
When an infant’s hand touches one of its legs, there are two simultaneous sequences of sensory impulses – one from the hand, the other from the leg. When it touches the blanket or the cot, there is only one. This is a difference that regularly recurs in a great variety of situations and is apt to supply a first elementary distinction between self and world. It is not only human infants that can make this distinction.
As I have mentioned elsewhere, when a kitten plays with its litter mates, it soon learns that biting its own tail is different from biting another’s.
When motion in the visual field is considered, similar linkages of sensory sequences permit a similar distinction. So does any self-initiated motion, whenever it is compared with sensory changes that are not self-initiated.
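As a toy illustration of the regularity just described (the two-flag representation below is my own assumption, not Piaget’s), a contact could be classified as belonging to the self whenever it yields two simultaneous sensory streams, and to the world whenever it yields only one.

```python
# A toy sketch of the double-touch regularity: contact with one's own body
# produces two simultaneous sensory streams (touching and being touched),
# contact with the blanket or cot produces only one.

def classify_contact(touching_stream, touched_stream):
    """Return 'self' for a double sensory stream, 'world' for a single one."""
    if touching_stream and touched_stream:
        return "self"     # hand touches own leg: both hand and leg report
    if touching_stream:
        return "world"    # hand touches the blanket: only the hand reports
    return "no contact"

if __name__ == "__main__":
    print(classify_contact(touching_stream=True, touched_stream=True))   # self
    print(classify_contact(touching_stream=True, touched_stream=False))  # world
```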
On the strength of these regularities in the domains of sensorimotor experience, the self places itself, as Piaget says in the last section of the book I mentioned, into a world that it considers independent of its own actions.
What gives this theory its cybernetic character is the fact stated in the initial quotation. It is the mind itself that generates the organization of self and world, and it does this on the basis of sensorimotor experience, without the traditional crutch of external information.
The principle that knowledge springs from action is no less foundational on the level of conceptual construction. In Piaget’s theory, anything “logical” or “mathematical” originates from action, when a pattern of acting is abstracted from the material with which it was performed.
The famous phenomenon called “the conservation of number” is a good example. Thousands of experiments and replications have been run to demonstrate that children are not born with a fixed conservation routine. They invariably begin by thinking that in order to know how many items there are in a collection of pebbles or poker chips, the collection has to be counted each time its spatial arrangement has been changed. Only gradually do they come to realize that the number remains the same as long as nothing is added or taken away.
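What the child eventually grasps can be put in a few lines of code (again only my own illustration, with arbitrary chip labels): the count is an invariant of the counting activity, and no rearrangement of the items changes it as long as nothing is added or taken away.

```python
# A small sketch of the conservation of number as invariance under
# rearrangement: counting the same collection after shuffling its spatial
# arrangement yields the same result.

import random

def count(collection):
    """The counting activity: step through the items one by one."""
    n = 0
    for _ in collection:
        n += 1
    return n

if __name__ == "__main__":
    chips = ["red", "blue", "green", "yellow", "white"]
    before = count(chips)
    random.shuffle(chips)      # change the spatial arrangement only
    after = count(chips)
    print(before == after)     # True: the number is conserved
```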
The trouble with most of the educational applications of these studies is not that the teachers know that the number will remain the same, but that most of them believe it is a perceptual property of the counted objects. Consequently, they do not help their students to become aware of the fact that the sameness underlying conservation lies in their own counting activity and does not reside in the specific objects. The pebbles or chips matter only insofar as they are countable.
The principle of conservation, be it applied to number, energy, or the concept of ourselves, is the foundation that allows us to build a relatively stable world of experience. William James called it a “Denkmittel” – an instrument of thought, and that is a good characterization.
Without conservation there would be no regularity, no rules, and no control – and control, after all, is what a large part of cybernetics is about. I could go on and show, as I have done elsewhere, that Piaget’s learning theory is based on the notion of feedback in the context of schemes of action and operation, but I have said enough to rest my case.
As cyberneticians, then, I think we should be grateful to Piaget for having provided a model of how, with the help of assimilation, we may construct regularities and rules that to some extent give us the possibility of predicting and controlling our experience. And let us not forget another cybernetic principle inherent in Piaget’s work; namely, that a model, no matter how well it works for our purposes, is the embodiment of a possibility – not the embodiment of a truth about an independently real world.
References
Maturana H. R. (1970) Biology of cognition. Biological Computer Laboratory (BCL) Research Report BCL 9.0. University of Illinois, Urbana. Available at http://cepa.info/535
Piaget J. (1937) La construction du réel chez l’enfant. Geneva: Delachaux et Niestlé.