CEPA eprint 1352 (EVG-062)
The concept of equilibration in a constructivist theory of knowledge
Glasersfeld E. von (1980) The concept of equilibration in a constructivist theory of knowledge. In: Benseler F., Hejl P. M. & Koeck W. K. (eds.) Autopoiesis, communication, and society. Campus, Frankfurt/New York: 75–85. Available at http://cepa.info/1352
At the end of my talk at the symposium in Paderborn, Humberto MATURANA raised the question of “goals.” The discussion that followed was not conclusive and the question, I felt, was left hanging in the air. Since it is an important question and particularly relevant to any theory of equilibration, I shall try to begin this written version of my talk by explaining the connections I see between the concepts of perturbation, equilibration, and goal-directedness.
A rock resting on the bottom of the sea is in stable equilibrium. So, we assume, are the houses we live in, although we know that our trust may be shattered by tornados or earthquakes that raze them to the ground. In those cases, external forces maintain or destroy the equilibrium, and neither rock nor house can push one way or the other. If something upsets their equilibrium we could not observe anything that would prompt us to attribute an active equilibration to the rock or to the house, and we would consider them wholly passive victims of forces that act on them.
When we walk or ride a bicycle, or when a bird perches on a telegraph wire on a windy day, we have a different situation: a labile equilibrium is being maintained in the face of all sorts of hostile forces or perturbations. The perturbed item itself is constantly acting, and the equilibrium will be maintained only as long as the perturbations can be compensated by counteractions.
By and large, the ability to counteract perturbations is a characteristic of living organisms – and of artificial homeostatic devices. In fact, any cybernetic theory of systems or organisms explicitly or implicitly refers to such abilities and that is why Norbert WIENER put the word “control” in the title of the book with which he launched cybernetics (1948). The concept of ‘control’ was still somewhat fuzzy with WIENER, and so was that of ‘perturbation.’ Both have been crystallized by the work of William POWERS (1973). Control always involves at least three things:
(1) a possible state of equilibrium, defined as a constant relation between an input variable and a reference variable;
(2) perturbations that tend to modify the relation between the input and the reference values; and
(3) an activity by means of which the system can, at least potentially, restore the equilibrium.
As POWERS has long maintained and recently demonstrated in a series of experiments (POWERS 1978), once we adopt the model of the control system, we can give up the traditional idea that living organisms act in response to external stimuli which an observer can observe. The model allows us to say that organisms act as a result of perturbations – and perturbations are not just inputs but only such inputs as upset the organism’s equilibrium. In a control system, an upset of the equilibrium generates an error signal and this, in turn, triggers an activity (1). If the activity “causes” the input variable to change towards the value that, relative to the reference value, constitutes the equilibrium, then and only then is the system a viable one and can be said to be capable of control and equilibration. I put “cause” in quotation marks to indicate that it is not intended in any absolute sense; rather, it indicates an inferential connection and therefore, strictly speaking, it requires an indication also as to who has made the inference. From my point of view, that is an extremely important consideration and wholly compatible with MATURANA’s fundamental statement that “everything said is said by an observer.” – Causal connections are always made by an experiencer and they connect items he has isolated in his experience. The connections arise, as Hume said, through association, and association is the result of some form of recurrent contiguity in experience (2).
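The control loop described here can be condensed into a few lines of code. What follows is a minimal illustrative sketch, not anything from POWERS’ own work; the function names and the gain value of 0.1 are my assumptions.

```python
# Minimal sketch of a control loop in POWERS' sense (illustrative only;
# the names and the gain of 0.1 are assumptions, not POWERS' terms).

def control_step(input_value, reference):
    """One cycle: the mismatch between input and reference values is the
    error signal; the returned activity is proportional to it."""
    error = reference - input_value
    return 0.1 * error

# A perturbation has pushed the input away from the reference value;
# repeated activity "causes" the input to drift back toward equilibrium.
input_value, reference = 25.0, 20.0
for _ in range(100):
    input_value += control_step(input_value, reference)
# input_value is now very close to the reference: equilibrium restored
```

The point of the sketch is merely that the activity is triggered by the error signal, not by the input as such: when input and reference values coincide, the error vanishes and the system does nothing.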
In the homeostatic devices that have been built, the causative connection is always previously established by the designer who then exploits it in the design of the device. In a thermostat, for instance, the input variable may be the height of the column of mercury in a thermometer (provided a mercury thermometer is used) because the designer has established a causal connection between the temperature of the surrounding air and the height of mercury in the thermometer. Hence there are fixed connections in the thermostat, such that, if the mercury rises beyond a set height, a cooling machine is switched on, and if the mercury sinks below the set height, a heater is activated. The thermostat does not have to learn these connections, they are given to it by the designer and may therefore be considered similar to innate reflexive connections in a living organism. The simple thermostat, for that very reason, is not the most interesting model to apply to living organisms: It can be used only as a model for reflex-like equilibrations or the kind of homeostatic phenomena that were described by Walter CANNON (1963).
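The fixed connections of such a thermostat amount to nothing more than a pair of designer-given rules. A sketch, with illustrative names of my own choosing:

```python
def thermostat_action(mercury_height, set_height):
    """Designer-given, fixed connections: no learning, hence no ontogeny."""
    if mercury_height > set_height:
        return "cooling"   # mercury has risen beyond the set height
    if mercury_height < set_height:
        return "heating"   # mercury has sunk below the set height
    return "idle"          # no error signal, hence no activity
```

The question of what works and what does not never arises for such a device, because the selection of effective operations was made before it was built.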
Far more interesting would be a learning homeostat that establishes its own causal connections and uses them in a self-regulatory fashion, which is to say, constructs its own methods of equilibration. This idea was first suggested (in terms we would now call cybernetic) by Kenneth CRAIK during the Second World War (CRAIK 1966). The principle on which it is based is the principle that Hume formulated as Inductive Inference. He presented two propositions:
(1) “I have found that such an Object has always been attended with such an Effect” and
(2) “I foresee, that other Objects, which are, to Appearance, similar, will be attended with similar Effects.” (p. 44)
Although Hume’s scepticism sprang from his realization that there is, as he said, no possible chain of reasoning that could justify inferring (2) from (1), he observed that, in spite of the logical gap, we constantly make that kind of inference. The inference is based on the simple assumption that what we experienced with some regularity in the past, will be experienced with much the same regularity in the future – for no other reason than that we have, so far, fared quite well with that assumption and continue to experience many useful regularities. A homeostatic system, of course, does not have to know any of that. It may simply be built or have evolved in such a way that “its organization (both genetic and otherwise) is conservative and repeats only that which works” (MATURANA 1970, 15-16).
Simple though it sounds, “to repeat only that which works” requires a good many things, if the knowledge of what works is not built in. In phylogeny there is no problem: any organism that does not work (i.e., survive individually or reproduce sufficiently to keep the species going) will be eliminated. That is in fact a tautology, because in the context of evolution, “to work” can mean no more than not to be eliminated (cf. E. von GLASERSFELD 1979). Hence, whatever organisms we find living at the moment when we survey our world, are organisms that have so far worked or, as I prefer to say, are viable. Repetition, in that context, can be nothing but reproduction and survival.
In ontogeny, however, there is one big difference: responses, behaviors, activities (or whatever we want to call it when we credit an organism with acting as a subject) that do not work, are not necessarily fatal. In other words, it is not a binary situation of either survival or elimination, but, as MATURANA (1978a, 40) puts it: “It is the differential effectiveness of the actual operation … that constitutes the process of selection in living systems.” There is a parallel to evolution, in that “effective” operations survive the selective process and thus are viable; but the parallel breaks down on the negative side, because ineffective operations (thank God) do not always kill the organism, nor are they necessarily eliminated as potential ways of operating – they may merely be given up in the particular context in which they proved ineffective (3).
That at once raises the question: What does it mean for an operation to be effective if an ineffective one does not kill the organism? In a control system, the answer to that question is very simple: An operation is effective if it neutralizes a perturbation and thus eliminates an error signal and helps to restore equilibrium. For the thermostat I mentioned before, there is no problem, because there is no selection of activities. The thermostat need not and could not establish that switching on the heater is an effective operation when the mercury is below the reference mark, but an ineffective operation when the mercury is above the mark. The operations that are respectively effective in neutralizing the two kinds of perturbation that occur in the thermostat have been selected a priori by the designer. It would be a meaningless statement, therefore, to say that the thermostat is “conservative and repeats only that which works” – the question as to what works, and what not, never arises, because the selection of effective operations was made before the thermostat was born and its connections are pre-established and fixed. One might say the thermostat has no ontogeny.
However, we can conceive of a thermostat that does not have fixed connections between error signals and compensating activities, a thermostat that must make those connections itself on the basis of its own experience. As Kenneth CRAIK (and others since then) realized, such a system will have to be much more complex and sophisticated. The main problem it has to solve is precisely the problem of distinguishing, for each of the error signals, which of the available operations is effective and which not. Having no pre-established “knowledge” but the built-in conservative or inductive attitude that what has worked once will work again, it must have an opportunity to establish what works. That is to say, it must have an opportunity to try, and it must operate in an environment that will not kill it at the first wrong move. Also, it must be inclined to try – which, in this context, means that when there is a perturbation, it must act. But that is still not enough. To be conservative, to repeat what has worked before, the system must have some trace or record of which operation was effective in previous occurrences of a particular perturbation. If the system is set up in that way, and if the world in which it exists remains stable to the extent that the system’s heater will successfully move up the column of mercury and the system’s cooling mechanism will bring it down, then it will not take long before the system operates as though it had fixed rules for the neutralization of temperature perturbations (i.e., rules of the kind: If mercury is low, then switch on the heater). These rules, no matter how they might be implemented, are purposive rules. They were established inductively, by trial and error, because they were effective as means to neutralize perturbations and thus to restore equilibrium, and they remain in action as long as they are effective.
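Such a learning homeostat can be sketched in the spirit of CRAIK’s suggestion: it has no pre-established connections, only the conservative, inductive disposition to repeat what has worked. Every name and number in this sketch is an illustrative assumption of mine, not CRAIK’s design.

```python
import random

class LearningHomeostat:
    """A hypothetical homeostat that must discover, by trial, which
    operation neutralizes which kind of perturbation."""

    def __init__(self, operations):
        self.operations = operations   # available activities, e.g. {"heat": +1.0}
        self.memory = {}               # sign of error signal -> effective operation

    def act(self, error):
        """Repeat a remembered operation for this kind of perturbation,
        or, lacking any record, try one at random."""
        sign = 1 if error > 0 else -1
        if sign in self.memory:        # "repeat only that which works"
            return self.memory[sign]
        return random.choice(list(self.operations))

    def observe(self, error_before, operation, error_after):
        """Keep a trace of any operation that reduced the error signal."""
        if abs(error_after) < abs(error_before):
            self.memory[1 if error_before > 0 else -1] = operation

random.seed(0)
system = LearningHomeostat({"heat": +1.0, "cool": -1.0})
reference = 20.0
for _ in range(200):
    temperature = random.uniform(10.0, 30.0)        # a perturbation arrives
    error = reference - temperature
    op = system.act(error)
    temperature += system.operations[op]            # the tried activity
    system.observe(error, op, reference - temperature)
# system.memory now holds purposive rules of the kind:
# "if the temperature is low, switch on the heater"
```

After enough trials the system behaves as though it had fixed rules, yet every rule was selected inductively because it once proved effective.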
I can see no reason why, in that context, one should not speak of equilibration as a “goal” and of the system as acting towards it. If equilibration is defined as a relation between input and reference values, and perturbation is defined as an alteration of that relation, then any activity such a system carries out can be said to be triggered by a perturbation, and it will be selected as an “effective” activity on the basis of the success it has had in counteracting that perturbation. That is to say, operations or activities come into use and remain in use because of the results they have procured in the system’s prior experience. Since these results lie in the past, the present use of the activities that produced them cannot be considered “teleological” in the traditional sense: the future is not involved. But that does not make the activities any less “goal-directed.” Whenever they are performed, they are performed in order to eliminate a perturbation, and this goal-directedness springs from nothing but the experiential “fact” that, hitherto, they have done just that and have been “selected” because of it.
To conclude these remarks on “goals,” I want to point out that much of the confusion that surrounds processes of selection stems from the fact that we use the word “selection” for two conceptually different situations. In evolution we must conceive of selection as negative, in that Nature or the environment eliminates all that does not work, so that what looks like the “selection” of organisms that are viable and do work is actually not due to their effectiveness, but to the ineffectiveness of those that are eliminated. On the other hand, when we speak of processes that follow the pattern of inductive inference, we conceive of selection as positive in that the operation, the strategy, or the construct that is selected for repetition is always one that has proven successful in past experience.
The concepts of “perturbation” and “equilibration” are of great importance also in PIAGET’s theory of cognition, a theory that was derived from the observation of children rather than from the building of homeostatic devices, and led to remarkably similar epistemological conclusions. These conclusions are quite incompatible with the traditional theory of knowledge and, what makes them more difficult still, they run counter to the ideas of “knowledge” and “knowing” implicit in our ordinary use of language. Our languages, whether we like it or not, have incorporated the common sense view of the world according to which the knower comes into a ready-made “reality” which he then gets to know (more or less) in that he somehow acquires a picture of it. Piaget calls himself a “constructivist” and as such he rejects that iconic theory of knowledge. A cognitive organism, he holds, acquires knowledge through action and the optimal relation between knowledge and the real world is not one of “truth” but of “adaptation.”
I have elsewhere argued that “adaptation” is a deceptive term (von GLASERSFELD 1979; 1981) because it can be, and often is, taken to indicate some kind of correspondence between the adapted item and the world to which it is or has adapted; I therefore prefer the term “viability,” which merely indicates that whatever the item is or does allows it to remain in existence.
Piaget’s position in that regard is somewhat ambiguous. In a recent discussion he says that it is, indeed, an act of projection when the physicist or the child attribute their operations (sensory-motor or logico-mathematical) to physical objects in an attempt to understand them. If the cognizing subject did not have these operations, it could never understand the object. “D’autre part l’objet se laisse faire.” The object, that is, allows the operation or submits to it (INHELDER et al. 1977, 64). I interpret that as tantamount to the concept of viability as I have developed it – indicating that whatever operations the physicist or the child have come to consider constitutive of an object, their experience allows them to use these operations recurrently without contradiction. But Piaget then expands his idea and says:
“But if one finally comes to have true theories, it is because the object has allowed one’s doing; which amounts to saying that it contained something analogous to my operations. This in no way means that my operations come from the object: I am limited to reconstituting it and I cannot move outside myself to enter the object…” (loc. cit. 64; my translation.)
This introduction of analogy is, from my point of view, gratuitous and misleading. If objects, or reality in general, allow us to operate in certain ways, that does not justify the conclusion that there must be an analogy between what we do and what there is. The fact that a sieve allows sand to fall through, does not constitute an analogy between it and the falling sand – it could be characterized only in terms of items it obstructs (4).
Though this difference of view profoundly affects the way one sees knowledge related to reality, it does not necessarily affect one’s model for the process of cognitive development or the acquisition of knowledge. And it is there that Piaget’s ideas of equilibration and the two activities that constitute it are an invaluable contribution.
The kind of equilibrium we are interested in, as I suggested at the beginning, is never a fixed, immobile state, but rather a point of balance between two or more forces that continually neutralize one another. The activities that achieve that neutralization are assimilation and accommodation.
All the recent textbooks on developmental psychology in the United States have at least a paragraph on Piaget’s use of the two terms, but the explanations given tend to simplify to the point of distortion. By and large, what they say is this: The cognizing organism has certain physical and conceptual structures which it imposes on its input in the acts of knowing or recognizing. The organism thus modifies the input to fit its structures, and that modification is called “assimilation.” If the discrepancy between input and structures is too great, the organism has to modify its structures or come up with new ones, and that is called “accommodation.” – In a very generic, superficial way, that explanation is not all wrong. (5) What is wrong, however, is the suggestion of a simple and direct interaction between the organism and its environment. The explanation implies that, on the one side there is the organism with its structures, on the other an environment that provides input, and by and by the organism’s structures become more like the structures that are outside in the environment. Since the translators of Piaget’s writings, as a rule, seem to have little or no familiarity with the problems of epistemology, they trust their common sense view of the world and (involuntarily, I’m sure) proceed to give everything he says a far more realist direction in English than it has in the original French. (6) Piaget is thus turned into a more or less straightforward “interactionist” (which makes it easy and comfortable for conventional psychologists to disregard what he intends to say and to find fault with his experimental methodology). But, as I read Piaget, the interactions he has in mind are of a very different kind.
Piaget’s conception of assimilation and accommodation remains incomprehensible unless it is placed within the framework of his theory of knowledge and, specifically, into the context of what he calls scheme. “Schemes” are basic sequences of events that consist of three parts. The first is an initial part that serves as trigger or occasion. In schemes of action, this roughly corresponds to what behaviorists would call “stimulus,” i.e., a sensory-motor pattern. The second part, which follows upon it, is an action (“response”) or an operation (conceptual or internalized activity). These two are, as a rule, explicitly mentioned when schemes are discussed. The third part is often only implied, but that does not make it any less important: it is what I call the result or sequel of the activity (and here, again, there is a rough and only superficial correspondence to what behaviorists would call “reinforcement”).
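The three-part structure of a scheme can be rendered as a simple data structure. The field names below are my assumptions, chosen to mirror the description above; nothing here is Piaget’s own notation.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Scheme:
    trigger: Callable[[Any], bool]   # part 1: the pattern that occasions the scheme
    action: str                      # part 2: the action or operation
    expected_result: str             # part 3: the result or sequel

    def applies_to(self, situation):
        """Assimilation, minimally: does the situation fit the trigger?"""
        return self.trigger(situation)

# The innate sucking scheme of the infant, rendered in these terms:
sucking = Scheme(
    trigger=lambda s: s.get("touch") == "cheek",
    action="turn head and suck",
    expected_result="milk",
)
```

The point of singling out the third part is that it is there, and only there, that a discrepancy between expected and actual result can arise and act as a perturbation.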
The organism, in Piaget’s view, does not start out as a blank slate. It has a certain number of schemes built in before it begins its cognitive career. They are the reflexes, or fixed action patterns, that are genetically predisposed as the result of evolution. If, for instance, one touches the cheek of a normal human new-born, it will turn its head, try to take whatever touched it in its mouth, and begin to suck. From an observer’s point of view, that makes good evolutionary sense. Given the way infants used to be held, what frequently touched their cheek could supply their food. Hence, those infants that did not readily turn towards it, lessened their chances of survival (which were slim enough in any case). But mother’s nipple was not all that touched their cheek. Sometimes it could be their own thumb, and if they had not already done it in the womb, they would start to suck it then. In Piaget’s terms, that is a good example of assimilation. The thumb is assimilated as object of the sucking activity. The crucial point, here, is that from the infant’s point of view, both nipple and thumb fit the sensory pattern that triggers the sucking action and, at that level, they are therefore indistinguishable. It is at that point that the third part of the scheme becomes relevant, because the result of the scheme is now no longer uniform: sometimes the sucking action neutralizes that gnawing perturbation (that we call hunger), at other times it does not. This discrepancy in the sequence of experiential events is a powerful disturbance for an organism whose entire functioning is based on, and has evolved through, the exploitation of regularities and the repetition of what works. Hence the organism soon establishes a new experiential regularity by discriminating the sensory pattern that initiates the sequence of events that ends with milk, from the sensory pattern that initiates the sequence of events that ends without milk. 
The organism can do that because the visual, tactual, and proprioceptive sensory elements available in the case of the nipple are different enough from those available in the case of the thumb. They were available all along, but until the scheme was upset by the irregularity in its results, there was no perturbation – and without perturbation there is nothing to initiate a change in the system.
Any such change that leads to splitting one scheme into two, or to the constitution of a new scheme, is an accommodation. New schemes function like the old, in that experiential elements are assimilated to them until there is again some discrepancy that triggers a change or rearrangement.
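Accommodation by splitting can be sketched as the differentiation of one triggering pattern into two, once a previously disregarded sensory element is taken into account. The situations and feature names below are made-up illustrations, not an account of what actually happens in the infant.

```python
# Sketch of accommodation as scheme-splitting: a newly attended-to
# sensory element differentiates one trigger into two narrower ones.

def split_trigger(trigger, discriminator):
    """Return two triggers: one that also requires the discriminating
    feature, and one that requires its absence."""
    def with_feature(situation):
        return trigger(situation) and discriminator(situation)
    def without_feature(situation):
        return trigger(situation) and not discriminator(situation)
    return with_feature, without_feature

# Before accommodation, nipple and thumb both fit the sucking trigger;
# afterwards, a visual element discriminates the two schemes.
touches_cheek = lambda s: s.get("touch") == "cheek"
looks_like_nipple = lambda s: s.get("visual") == "nipple"

nipple_trigger, thumb_trigger = split_trigger(touches_cheek, looks_like_nipple)
```

Note that the discriminating elements were available all along; the split is occasioned not by them but by the irregularity in the scheme’s results.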
I have, of course, drastically simplified the infant’s sequence of experiences and thus the processes of assimilation and accommodation; but the example should nevertheless serve as an illustration of the basic epistemological points I want to make. These points are three. First, there is indeed interaction between schemes, sensory elements, and events, but the organism does not get to “know” the environment in the sense that its own schemes and structures come to resemble or in any sense reflect structures as they might be in the outside world. What I have called sensory elements or experiential events are always and exclusively elements and events within the system that constitutes the organism. Hence it is not the kind of interaction that an observer might speak of, who has both the observed organism and its environment in his experiential field. Second, the process of assimilation does not discover recurrent sensory patterns but it imposes them by disregarding differences. For instance, as long as thumb and nipple are being assimilated to one and the same scheme of action, they are for the infant recurrences of one and the same item, because none of the sensory elements that might serve to discriminate the one from the other are taken into account. Third, accommodation can take place only when there is an irregularity or disturbance in the functioning of an established scheme, and not otherwise. This feature of the Piagetian model, as I see it, constitutes its main basis as a constructivist theory of cognition in which “knowledge” is no longer a true or false representation of reality but simply the schemes of action and the schemes of operating that are functioning reliably and effectively.
If equilibrium is understood not merely as a stable state, but also dynamically as the regularity of recursive processes, then any irregularity will constitute a perturbation and any action taken to eliminate a perturbation will be an act (or at least attempt) of equilibration. One area in which the assimilation-accommodation-equilibration model is particularly easy to apply is the acquisition of language. Let us assume a child that has grown to the age of two or three years in a house where there is also a poodle. First it has learned to call the poodle “Tessa.” Experientially, that is a formidable task. As a compound of sensory-motor elements in the child’s nervous system, the poodle has to be isolated as a different pattern from a different background almost every time it is seen. Somehow a common pattern has to be derived from a number of different experiential situations, such that all further Tessa experiences can be assimilated to it. Although we have little idea how that might be done (and consequently have not got far in programming computers for pattern recognition), children and adults are doing it all the time. Next, our child has come to know that there are occasions when Tessa may be called something else. When the family is ready to go somewhere, father almost invariably says: “Do we really have to take the dog?” The Tessa pattern, therefore, has been associated also with “dog”; a relatively minor kind of accommodation that has to be made for almost every word of the language, and often more than once. – At that point the neighbours enter the story. One day they come and proudly announce that they, too, now own a dog. What then walks into the room may be spectacularly unlike Tessa. It could be a dachshund or a Great Dane. This time the accommodation the child has to make is almost inconceivable. Nevertheless, children demonstrate every day that it can be made.
A structure of sensory elements has to be compiled such that both Tessa and the new animal can be assimilated to it; at the same time, the structure that was associated with “Tessa” has to be maintained, but, from now on, separate from the new structure that is associated with “dog.” As experience widens, as St. Bernards, Pekinese, and wolfhounds enter the field, there will be many more perturbations and ensuing accommodations concerning the structure that constitutes “dogness” because ultimately it ought to be such that all breeds of dog can be assimilated to it. Indeed, where canines are concerned, this process of “adaptation” may continue far beyond childhood because, from the sensory point of view, they are a class of rather varied items. Epistemologically, however, one of the interesting aspects of this development is this: The cognizing organism has his knowledge of dogness at every stage and it remains good, viable knowledge, in the sense that he successfully assimilates sensory patterns to it in a state of equilibration, until there is some perturbation, which is to say, there is a malfunction and some scheme that involves the dogness pattern produces an irregular result.
In principle that is analogous to what continues to happen to us when we are adult and read books. As we read the first twenty or thirty pages of a novel, for instance, we build up a structure of things, places, people, and relationships. At every moment this structure is our interpretation of the text and we add to it as we go along. Occasionally, however, we may come across something that suddenly jars. A word or a sentence – or rather, our interpretation of it – does not fit into the structure we already have, but is a misfit that shakes the balance and creates a perturbation. In some way we have to re-establish equilibrium if we want to go on. If we are honest, interested readers, and have some faith in the author, we have to accommodate and either change our interpretation of that last word or sentence, or rearrange something in the structure we already have built up. To “understand,” we must find a non-perturbed, i.e., non-contradictory interpretation. There are, of course, occasions where we lack motivation and simply do not care enough to go to any trouble; or we may prefer to think that the author made a mistake. But that does not detract from the principle I stressed above: As far as we are concerned, our interpretation is a viable interpretation until we run into something that upsets it.
Earlier in this symposium there was a reference to Thomas KUHN’s Structure of Scientific Revolutions (1962). The basic pattern he draws, his analysis of the history of science, seems wholly compatible with the concepts of equilibrium, perturbation, and viability I have presented here. Scientific hypotheses, theories, and laws, I contend, are all interpretations of experience. One thing that makes them different from each other is the frequency with which they have been experientially tested and “confirmed” – i.e., the frequency with which experiential situations have been successfully assimilated to them. Another aspect that differentiates them is the degree to which they can be integrated with scientific structures and schemes in other areas of experience. On the other hand, hypotheses, theories, and laws remain viable only as long as no experience or experiment involving them creates a discrepancy, a perturbation. If that happens, then (as for the perturbed reader of a novel) there are always two ways of possible equilibration: the scientist may modify the interpretation of the experiment in order to make it assimilable to the theory; or he may change the theory in order to accommodate to the new experiential situation.
As KUHN has tried to show, taking examples from the history of science, there is always some resistance against modifying a theory that has survived for some time, and there is strong resistance against scrapping it in favour of a new one. That should not surprise anyone who proceeds on the assumption that organisms operate inductively and are, as MATURANA said, “conservative and repeat only that which works.” If something has worked for a long time, it is necessarily linked to many other things and has wide-spread ramifications. In fact, the longer an idea or theory has been effective in maintaining an equilibrium, the wider will be the range of that equilibrium and the greater the effort required for re-equilibration if that idea or theory has to be replaced by another. In times like ours, where theories are no longer mere ways of thinking but are materialized in immensely costly apparatus and instruments for the assimilation of experience, the material investment inevitably drives scientists towards conservatism, and theories towards becoming dogma.
One contemporary author who is battling against that development of scientific dogma is Paul FEYERABEND. He put his finger on an extremely important point, when he said that science should adopt the slogan “anything goes” (1975). Insofar as science is the business of solving new problems – providing equilibration in the face of perturbation – it is no advantage to restrict it to traditional, conventional, dogmatic ways of providing solutions. If problems are seen to arise when a scheme does not lead to the accustomed result, the failure (perturbation) can always be reduced to an excessive assimilation. The excess of assimilation may be in the categorization of the result, of the activity that was supposed to lead to it, or of the initial situation that triggered the scheme. The more conventional a science becomes, the more vigorously it will assimilate experiential situations to its “proven” schemes. It achieves this by a more and more rigorous restriction of the observational procedures that establish the initial situations (cf. the definition of “scientific method” MATURANA has presented in this symposium). The point that FEYERABEND emphasizes is that whatever we consider the “problem” may look very different and may become amenable to quite different schemes if we change the observational and categorial restrictions and thus open the way to assimilating the experiential situation to other structures.
FEYERABEND’s argument is valid and timely, given the dominant view of the world today. Hence I agree with “anything goes” as a slogan. But epistemologically it is no more acceptable than any other slogan or statement that purports to be absolute. In the last analysis it amounts to saying that every piece of experience should be taken as a novelty and thus as the beginning of a new construction. From my point of view, that constitutes a contradiction. It would be one thing to say, as the mystics do, that experience should be taken as experience, enjoyed, suffered, lived, or whatever, but not cut up into pieces, compared, categorized, and built into schemes. If we follow that direction, we give up constructing knowledge. That may well be the one and only path to permanent equilibrium – the equilibrium of total detachment from our consciousness of active selves. But if we assume that position, we can no longer speak of science. For science, like any rational world we construct for ourselves, is built on the concept of regularity, and regularity can be established only by cutting experience into pieces, comparing them, and creating equalities by assimilation, i.e., by disregarding whatever differences there might be. Unless we cut, compare, and establish equivalences and identities, we can have no elements, relations, structures, or schemes, and we can have no inferences of any kind. Indeed, if we ceased to carry out these basic operations, we would cease to be observers and, to reverse MATURANA’s fundamental maxim, there would be nothing to be said.
As I have said quite a lot, I want to reiterate that I have said it as observer – and to me that means an active agent who tentatively cuts up his experience, constructs relations, objects, and theories, in order to maintain a modicum of equilibrium, but certainly not to describe a world as it might be before it is experienced.
CANNON, W. B.: 1963, “The Wisdom of the Body” (rev. and enl. ed.; originally published 1932), New York: Norton.
CRAIK, K. J. W.: 1966, “The Nature of Psychology,” (Papers written between 1939 and 1945, ed. by S. L. Sherwood), Cambridge: University Press.
FEYERABEND, P. K.: 1975, “Against Method: Outline of an Anarchistic Theory of Knowledge,” London: NLB; Atlantic Highlands: Humanities Press.
GLASERSFELD, E. von: 1974, “‘Because’ and the Concepts of Causation,” in: Semiotica 12, 2, 129-144. http://cepa.info/1321
GLASERSFELD, E. von: 1979, “The Concepts of Adaptation and Viability in a Radical Constructivist Theory of Knowledge” (paper presented at the Piaget Society, Philadelphia, 1977), in: SIGEL, I./GOLINKOFF, R./BRODZINSKY, D. (eds.), 1979, New Directions in Piagetian Theory and their Application to Education, Hillsdale, N.J.: Erlbaum, in press. http://cepa.info/1357
GLASERSFELD, E. von: 1981, “An Epistemology for Cognitive Systems,” in: ROTH, G./SCHWEGLER, H. (eds.), Self-Organizing Systems, Frankfurt: Campus, 121-131.
INHELDER, B./GARCIA, R./VONÈCHE, J.: 1977, “Épistémologie génétique et équilibration,” Neuchâtel: Delachaux et Niestlé.
KUHN, Th. S.: 1962, “The Structure of Scientific Revolutions,” Chicago: University of Chicago Press.
MATURANA, H. R.: 1970, “Biology of Cognition,” Report No. 9.0, Biological Computer Laboratory, Department of Electrical Engineering, University of Illinois, Urbana, Ill.; German translation (Biologie der Kognition) by W. K. KÖCK/G. ROTH, Paderborn: FEoLL, 1974. http://cepa.info/535
MATURANA, H. R.: 1978a, “Biology of Language: The Epistemology of Reality,” in: MILLER, G. A./LENNEBERG, E. (eds.), 1978, 27-63. http://cepa.info/549
McCULLOCH, W. S.: 1948, “Through the Den of the Metaphysician,” reprinted in: McCULLOCH, W. S., 1970, Embodiments of Mind, Cambridge, Mass.: M.I.T. Press, 142-156. http://cepa.info/2817
POWERS, W. T.: 1973, “Behavior: The Control of Perception,” Chicago: Aldine.
POWERS, W. T.: 1978, “Quantitative Analysis of Purposive Systems: Some Spadework at the Foundations of Scientific Psychology,” in: Psychological Review 85, 417-435.
SELIGMAN, M. E. P.: 1975, “Helplessness,” San Francisco: Freeman.
WIENER, N.: 1948, “Cybernetics,” Cambridge, Mass.: M.I.T. Press.
POWERS’ model also demonstrates that activities can be triggered without input perturbation, namely when an error signal is generated by a change in the reference value. When that happens, an observer may call it “spontaneous action.”
I have provided a tentative model that specifies some of the experiential requirements of “contiguity” and the operational steps that lead to the concepts of causation (E. von GLASERSFELD 1974); that model, however, does not take into account the function of analogy which, I have come to believe, plays an important role in the construction of causal connections.
This is well known to all of us. Frequently, when some problem-solving method breaks down, we revert (regress) to methods that were discarded long ago in other contexts. The most striking demonstration, however, can be found in the work on “learned helplessness” (SELIGMAN 1975), i.e., the phenomenon of an animal ceasing to act altogether when all its responses turn out to be ineffective; put into a kinder environment, the depressed animal must rediscover that activity is again effective, but it does not have to relearn the activities themselves.
Compare the statement by Warren MCCULLOCH (1948): “To have proved a hypothesis false is indeed the peak of knowledge.”
Note that this textbook explanation creates an unsolvable problem: If the act of cognizing were always assimilatory in that simple fashion, the organism could never “know” of discrepancies between the input and its own structures, and there would, therefore, never be any reason to accommodate.
As I have never examined the German translations of Piaget, I cannot say whether they manifest the same distortions. Among the English ones there are, of course, exceptions. Those prepared by philosophically trained authors, such as Wolfe Mays and Eleanor Duckworth, are as reliable as any translation can be.