CEPA eprint 1333 (EVG-043)
Cybernetics and cognitive development
Glasersfeld E. von (1976) Cybernetics and cognitive development. Cybernetics Forum 8: 115–120. Available at http://cepa.info/1333
In this paper I shall briefly discuss some points concerning the generation of invariances and rules that seem to be much the same for theories of cognitive development and for cybernetic control theory. The stress should be on “some”: given the time limit, much will have to be left out. Nevertheless I hope to show that there are certain logical and epistemological aspects to the construction of invariances and the making of rules that cannot be disregarded in any study of cognitive functions or intelligent self-regulation.
One of the characteristics shared by theories of cognitive development is, as William Kessen (1966) observed, that they employ, explicitly or implicitly, the notion of rule. Without entering into the discussion about what a rule is and how that term should be defined (cf. Harré, 1974; Toulmin, 1974), I think we can agree on one point: A rule, no matter what it says or does, posits an actual or projected regularity in our experience, something that is experienced more than once, something that is repeated and thus in some sense an invariance.
If the cognitive developmentalist must examine and explain the generation and use of rules, his task is not so different from that of a cyberneticist who hypothesizes (and perhaps even attempts physically to construct) the algorithms of a mechanism that is intended to model the functions of some black box. In both cases the main difficulty springs from the fact that the invariances and rules that govern the observed organism’s “output” or “behavior” are themselves not part of the observer’s experience. This difficulty was perhaps given a little too much weight by those behaviorists who decided never to consider what might be going on inside the organism they were observing. Such a reaction seems exaggerated, given the success other sciences have had in hypothesizing the unobservable. At the other extreme, however, I feel that the difficulty has been given too little weight by some cognitivists, who seem to take for granted that whatever regularities and rules they manage to glean from their own experience are necessarily those that any other organism should arrive at.
The conception of intelligence as the creator of constancies and invariances is the one factor that lends cohesion to the many levels and branches of Piaget’s theory of development. Invariances become manifest in the child’s construction of “permanent” objects, in the various types of “conservation”, and, most generally, in the ubiquitous activity of “assimilation” (Piaget, 1967a; 1967b; 1969; 1970). In all these cases it is the active organism that constructs an invariant item out of two or more experiences by holding on to certain parts of the experience and discarding others (von Glasersfeld, 1976). The creation of a permanent object is perhaps the most illuminating example. In order to attribute continuous existence to an item, the child must posit that it is one and the same item regardless of the angle from which it happens to be perceived (which may change the item’s shape), regardless of the distance (which will change its size), regardless of the light (which may change its color), and regardless of any number of other contextual differences. In short, what the child has to learn to hold constant is a rather flexible constellation of diverse characteristics each of which has its own range of variability.
The point that interests us here, however, is not the complexity of such an object concept but the fact that it has to be produced by the organism’s own construction regardless of whether we believe that the concept corresponds to a thing-in-itself that exists “out there” independent of any organism’s experience. Even if we do believe that perception is mere replication of an objective world, we cannot consider the concept of an object a simple “given” because the organism’s sensory experience of the item will never be exactly the same twice. From the point of view of the experiencer, there can be no object until he himself coordinates several experiences and thus constructs the invariant concept of the object.
When it comes to invariances of the type that we would call “rules” the situation is no different. It is again the organism that must do the coordinating, and the only things an organism can coordinate are his own sensory signals and the constructs already made out of them. The simplest rule of the form “If A, then B” presupposes that the organism has already coordinated certain sensory signals to form two recognizable invariant patterns A and B and that these two patterns have recurred frequently enough in the same sequential order for the organism to take that order as invariant.[Note 1] To “take something as invariant” is the prerequisite for the inferred expectation that that something will be experienced again. “All inferences from experience suppose, as their foundation, that the future will resemble the past,” said Hume (1748/1963, p.47), and since “the past” can be no more than the organism’s record of had experience, it should be clear that what an organism predicts by means of his inferences is not really the course of a world but rather the course of his own experience in terms of proximal data.
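The taking of a recurrent sequential order as invariant can be given a minimal computational sketch. The function below, with its pattern names and recurrence threshold, is my illustrative assumption and not anything the paper specifies: it scans a stream of already-recognized patterns and posits a rule “If A, then B” once the pair has recurred often enough.

```python
from collections import Counter

def infer_rules(stream, threshold=3):
    """Posit a rule "if A, then B" for every adjacent pair of patterns
    that has recurred at least `threshold` times in the stream."""
    pairs = Counter(zip(stream, stream[1:]))  # adjacent-pattern sequences
    # A recurrent order is "taken as invariant" once it passes the threshold.
    return {a: b for (a, b), n in pairs.items() if n >= threshold}

# A stream of experience in which the pattern B reliably follows the pattern A:
experience = list("ABXABYABZAB")
print(infer_rules(experience))  # {'A': 'B'}
```

As Note 1 observes, the threshold itself depends on the kind of experience had: for a strongly aversive A terminated by B, a threshold of one may already suffice to establish the rule.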
In the view I have presented here, knowledge – at least the kind we call “rational” or “scientific” knowledge – is no more and no less than the possession of conceptual invariances and rules with which to explore, order, and predict experience. I say “experience” because what I have been discussing is not affected by the question whether or not this experience reflects an absolute reality. That question (and, indeed, any question a philosopher would call “ontological”) is of no relevance in theories of cognitive development. Seen from the point of view of the developing organism at the beginning of his cognitive career, it makes no difference at all whether regularities, invariances, and rules “exist” in a “real” world or not, since the only place where the organism can find them, learn them, and know them is in his own experience.
A simple feedback-control system has very few parts (e.g. Powers, 1973). It has a sensory organ of some kind that can send signals to a comparator, i.e. a central unit that compares the signals received from the sensor with a reference value that has been previously fixed. As long as the sensory signal and the reference value are the same, the system does nothing. If they cease to be the same, the comparator generates an error-signal that is conveyed to the effector organ of the system and there triggers an activity. The thermostat in a refrigerator is perhaps the most common example: When the temperature rises above a set reference value, the cooling machinery is switched on; when the temperature sinks below the reference value, the cooling machinery is switched off again. The refrigerator has one sense organ and one activity, and thanks to the engineer who designed it, the activity has a fairly reliable effect on the signals sent by the sensor, i.e. the cooling machinery rarely fails to reduce the sensed temperature to the reference value. The refrigerator did not have to learn the rule “If error-signal, then switch on cooler”; it was designed to enact it. The homeostatic devices that automatically keep constant the internal environment of living organisms are of very much the same kind (Cannon, 1932/1963) and they seem to have been “designed” by the evolutionary method of selection by survival. But there are also innumerable rules of the same form that living organisms can learn individually.
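The refrigerator loop just described can be sketched in a few lines; the function and variable names are illustrative, not drawn from any actual controller:

```python
def run_thermostat(sensed_temperatures, reference=4.0):
    """Simulate the comparator: whenever the sensed temperature exceeds the
    reference value, an error-signal switches the cooler on; when the
    temperature sinks back to the reference value, the cooler goes off."""
    cooler_states = []
    for sensed in sensed_temperatures:
        error = sensed - reference       # comparator: sensory signal vs. reference
        cooler_states.append(error > 0)  # error-signal triggers the activity
    return cooler_states

# Temperature rises above, then sinks back to, the reference value of 4.0:
print(run_thermostat([3.5, 4.5, 5.0, 4.0, 3.8]))
# [False, True, True, False, False]
```

The rule “If error-signal, then switch on cooler” is built into the code here, just as it is built into the refrigerator by its designer rather than learned.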
“A living system,” says Humberto Maturana (1970), “is an inductive system and functions always in a predictive manner: what happened once will occur again. Its organization (genetic and otherwise) is conservative and repeats only that which works.”[Note 2] The inductive principle of repeating that which works is, of course, the very principle Ross Ashby (1970) incorporated in his project of a learning homeostat. This was to be an organism in which there was no longer only one error-signal and one activity, but rather several of both, as well as a small computing device that would gradually make appropriate connections by learning on the basis of its own experience which activities helped to reduce which error-signals. What such a system has to learn is, indeed, a set of rules of the kind: “If error-signal a, carry out activity 1; if error-signal b, carry out activity 2;” etc. The organism starts out with the very general rule (“genetic disposition”): “When there is an error-signal, carry out some activity.” It keeps a record of error-signals, ensuing activities, and consequent changes in the error-signal, and connects and disconnects on the basis of “what works.” Under the most favourable circumstances, the organism will end up with as many rules specifying one particular activity as there are error-signals.
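How such a set of rules might be acquired by “repeating that which works” can be sketched as follows. Everything here, from the names to the toy environment, is an illustrative assumption and not Ashby’s actual design: the system starts with the general disposition to try some activity whenever an error-signal occurs, and connects or disconnects signal–activity pairs according to whether the activity reduces the signal.

```python
import random

def learn_rules(error_signals, activities, effect, trials=200, seed=0):
    """effect(s, a) -> True if activity a reduces error-signal s.
    Returns the learned rules as a mapping error-signal -> activity."""
    rng = random.Random(seed)
    rules = {}
    for _ in range(trials):
        s = rng.choice(error_signals)  # some error-signal occurs
        # Repeat an established connection, or try some activity at random:
        a = rules[s] if s in rules else rng.choice(activities)
        if effect(s, a):
            rules[s] = a          # connect: this activity "works" for this signal
        else:
            rules.pop(s, None)    # disconnect what does not work
    return rules

# Hypothetical environment in which activity i happens to cancel error-signal i:
works = {"a": 1, "b": 2, "c": 3}
print(learn_rules(["a", "b", "c"], [1, 2, 3], lambda s, a: works[s] == a))
```

Under favourable circumstances the system ends up, as described above, with one specific rule per error-signal, and it learns them purely from the record of its own signals.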
Again, what interests us here is not the technical intricacy of such an organism but the epistemological lesson we can learn from it. The cyberneticist calls any feedback-control system a “closed loop” because he considers the system to be circular in its action. The environment impinges upon the sensor, the sensor sends a signal; if the signal does not match the reference value the comparator generates an error-signal, the error-signal triggers the system’s activity, and the activity modifies the environment so that the sensor sends a modified signal. This, clearly, is an observer’s theory or model of an observed organism, and from the observer’s point of view the least problematic part of the loop is the effect of the organism’s activity on its environment. It is a simple if-A-then-B rule coordinated within the observer’s experiential field. What precisely constitutes the organism’s sensory signal, and the reference value to which the organism compares it, is not accessible to the observer in terms of his own sensory signals but remains part of his hypothesized model of the observed black box.
Seen from inside the organism, however, there is a radical difference: there is no obvious environment at all. There are sensory signals, error-signals, and proprioceptive or reafferent signals concerning the activities. With the possible addition of reference values that is all the experience there can possibly be. Whatever rules the system learns can be based only on the cumulative experience that a given activity is usually followed by the reduction or elimination of a particular error-signal; and that is to say, whatever “knowledge” such a system may be said to acquire will always be knowledge that is derived exclusively from (and concerns nothing but) sequences, regularities, and ultimately patterns of the system’s internal signals.
Though psychologists by and large have not been very anxious to come to grips with questions concerning the relationship between the observer and the observed, these questions can hardly be disregarded when the area under investigation is cognitive development. In studies of cognition the relationship is an intricate one. At first sight it might appear to be rather circular: The observer attempts to isolate and explain, in the observed subject, the acquisition and perfection of the very functions and processes which he, the observer, is using in observing and explaining his subject. In terms of the kind of rules I have mentioned before, this would mean that the cognitive developmentalist is really trying to find out how the child comes to acquire the rules that he, the observer, is using.
The surface appearance of circularity, however, disappears if we apply to this relationship what we have earlier said about regularities and rules. To observe any particular item usually means that attention is focused on a particular area of one’s field of experience, and the item, whatever it may be, has already been discriminated from the rest of the experiential field. But the item is nevertheless part of the observer’s experience, which is to say it is something the observer has put together using the invariances, rules, and concepts he has constructed for himself in the course of his past experience. To observe further means looking for interactions between, on the one hand, the part of the observer’s experience that he considers an organism and, on the other hand, the rest of his experiential field, which he now considers that item’s environment. This division between the observed organism and its environment is both legitimate and useful, provided we remain aware of the fact that what we now call the observed organism’s environment is still part of the observer’s own experiential field (von Foerster, 1970). This awareness alone can help us avoid two traps that have caused considerable confusion in the past. First, there is the tempting but logically erroneous idea that what we rightly call “environment” relative to an organism when both the organism and its environment are being observed by us, must also be our environment and can, therefore, be held causally responsible for what we ourselves experience. Second, there is the mistaken belief that that “environment” which is part of our experiential field has to be identical with the experiential field of the observed organism.
The first of these traps may be, as some would say, a “purely epistemological” affair and not really worth worrying about if you want to get on with psychology. I don’t agree, but that discussion would lead far beyond the goal for today.
The second trap is less abstruse and better known. Early in this century, von Uexküll (1928) drew attention to the fact that, for any organism, the “effective environment” is no more and no less than what that organism can perceive. This conception of a relative environment has had a certain influence on the growth of ethology, but as far as developmental psychology is concerned it has not yet had the impact it should have. This may be because the doctrine is often stated in a diluted form that invites misunderstanding. Be that as it may, I am here stating it in a way that, if anything, goes further than what von Uexküll had in mind.
Given what was said above about the individual generation of regularities, invariances, and rules, and about the essential difference between the observer’s and the observed subject’s fields of experience, it should be clear that the environment of, for instance, a small child must necessarily have a structure that is very different from the structure of an adult’s environment. By this I do not merely mean that the child’s environment may contain fewer items than the adult’s, or that it may be less “sophisticated” but compatible. No, I mean that the child’s environment is made up of invariances, rules, and concepts, many of which are substantially different from those of the adult. There is plenty of support for this idea, from conservation tests to Furth’s (1975) study of children’s conception of certain social institutions.
This essential difference, and often incompatibility, of concepts lies at the heart of Piaget’s theory of developmental stages. Progress from one stage to another is not implemented by an addition, but by a change, a reorganization, a shift in the point of view that is very like what Kuhn (1962/1970) has called a paradigm shift. Conceptual shifts in the cognitive development of the child as well as in the history of science can be understood only from within, because such shifts are not determined by changes in any observable environment but by changes in the way of experiencing. That is to say, they are not observable as such but only through their effects. Hence, the observer faces the well-known problem of having to infer competence from performance, a problem with which cybernetics is probably more familiar than any other discipline. Every black box constitutes the challenge to infer internal invariances, rules, and algorithms from input-output relations. The kinds of black boxes cognitive developmentalists are concerned with present the added problem that we can never be quite sure as to what their effective input is. As observers, we are nearly always more than one paradigm shift away from the children we observe and we are therefore constantly prone to explain their performance in terms of our competence.
If we accept the notion that rational knowledge involves the generation and use of invariances and rules, we cannot help asking how these invariances are generated and what they concern. I have argued that an examination of how a developing organism may come to have cognitive structures leads inevitably to the conclusion that all these structures, be they the simplest invariance, a permanent object, or a complex rule of action or prediction, must be the result of the organism’s active coordination of his own experience. Even if we postulate a fully structured, pre-existing world, the growing organism can build his representation of that world from no other material than the proximal data of his experience. This proposition holds also for any theoretical or actual feedback-control model. Though to the observer of such a model the system may seem closed through what the observer calls the system’s environment, the system itself has no possible access to the environment beyond its own internal signals. Such regularities and rules as the system may construct and learn to use, therefore, have to be derived exclusively from proximal data.
In trying to devise models of complex organisms and complex activities, cyberneticists have come to realize that the behavior of organisms, their rules and their knowledge, can be determined only by the world the organisms themselves construct out of their own experience, and their experience may indeed be very different from the observer’s.
Though cognitive psychology has begun to adopt the view that the child constructs his reality, it still sees development only in terms of the socially accepted adult constructs and consequently sees “progress” only where there is a change towards the particular construction of invariances that we, the adults, have come to accept as the reality of our world. I would suggest that we do not have to look far afield today in order to suspect – with Voltaire – that the way in which we customarily structure our experience may not be conducive to the best of possible worlds.
Cybernetics deliberately leads us inside the organism and demonstrates that there are many ways of computing invariances and that the formation of concepts is constrained only by the invariances and concepts we have already formed. If cybernetics were to achieve no more than that, the confirmation of this insight alone would be an invaluable contribution to the cognitive development of mankind.
I am indebted to John Richards for his helpful comments on an earlier version of this paper.
Ashby, W. Ross (1970) Learning in a homeostat. Symposium on Artificial Intelligence, University of Tennessee, Tullahoma, April 1970.
Cannon, Walter B. (1963) The Wisdom of the Body, New York: Norton (Originally published, 1932).
Furth, Hans (1975) What is so revolutionary for psychology about Piaget’s theory? Fifth Annual Symposium of the Jean Piaget Society, Philadelphia, PA., June, 1975.
Harré, R. (1974) Some remarks on ’rule’ as a scientific concept. In Theodore Mischel (Ed.) Understanding Other Persons, Oxford, England: Blackwell.
Hume, David (1963) An Enquiry Concerning Human Understanding. New York: Washington Square Press (Originally published, 1748).
Kessen, William (1966) Questions for a theory of cognitive development. Monographs of SRCD 31: 55-70.
Kuhn, Thomas S. (1962) The Structure of Scientific Revolutions. Chicago, IL: University of Chicago Press (2nd edition, 1970).
Maturana, Humberto R. (1970) Biology of Cognition (Report No. 9.0). Urbana, IL.: Biological Computer Lab., University of Illinois.
Piaget, Jean (1967a) Biologie et Connaissance. Paris: Gallimard.
Piaget, Jean (1967b) Six psychological studies. New York: Vintage Books.
Piaget, Jean (1969) Psychologie et pédagogie. Paris: Denoël.
Piaget, Jean (1970) Le Structuralisme. Paris: Presses Universitaires de France.
Powers, William T. (1973) Feedback: Beyond behaviorism. Science 179: 351-356.
Toulmin, Stephen (1974) Rules and their relevance for understanding human behavior. In Theodore Mischel (Ed.), Understanding Other Persons. Oxford, England: Blackwell.
von Foerster, Heinz (1970) Thoughts and notes on cognition. In Paul Garvin (Ed.), Cognition: A Multiple View. New York: Spartan Books.
von Glasersfeld, Ernst (1976) The construct of identity or the art of disregarding a difference. Fourth Biennial Southeastern Conference on Human Development, Nashville, Tennessee, April 1976.
von Uexküll, Jakob J. (1928) Theoretische Biologie. Berlin, Germany: Springer.
Note 1: What is “frequently enough” depends on the number and type of experiences had. If A comprises a strong aversive sensation and B successfully terminates that sensation, one occurrence of the A-B sequence may be sufficient to establish it as a rule for the future. Once burned, twice shy.
Note 2: This inductive principle appears in psychology camouflaged as the “Law of Effect.”