CEPA eprint 2459

The question of cybernetics

Glanville R. (1987) The question of cybernetics. Cybernetics and Systems: An International Journal 18: 99–112. Available at http://cepa.info/2459
Table of Contents
Prologue
Introduction
First-order cybernetics
Second-order cybernetics
Concerns and consequences
The question of cybernetics
References
The recently developed notions of second-order cybernetics bring into focus a number of traditional concerns (e.g., self-reference). Here, a simple derivation of some of these notions is made from a brief characterization of the initial cybernetics ideas developed by N. Wiener and others. Matters of concern and consequence are then stated, and are summarized as facets of what should be the major area of research in the field: the question of cybernetics.
Prologue
The style of this paper has been kept, intentionally, as simple as possible. Nevertheless, I ask that even the expert cybernetician should read the introductory sections, for they provide the background for raising the main point of this paper: “the question of cybernetics.”
Heinz von Foerster tells a story of Ross Ashby’s first lecture series at the Biological Computer Lab at the University of Illinois, Urbana. After three or four sessions, the students came to see Heinz and said that they were upset by Professor Ashby, because everything he said was so simple and obvious. Heinz, worried, suggested they give Ross time to settle in. When next he saw Ross, Heinz tried, tactfully, to convey the students’ concern. Instead of being offended, Ross burst into an enormous guffaw of delighted laughter. He said, “It’s taken me 20 years to make it simple!”
Introduction
In the course of the last 15 years, there has been a profound change in the understanding held by cyberneticians of the philosophical basis of the field – the fundamental concepts – and of what cybernetics has to teach philosophy. The vastness of this change can be gauged from the doubts cast, in the new view, on the validity of such concepts as “basic” and “fundamental,” and even on the validity of the concept of “validity.”
These changes may be adumbrated under the term “Second-Order Cybernetics” (also known as the cybernetics of cybernetics and the new cybernetics).
This paper is organized, therefore, in four main sections: an introduction to early cybernetics; the development to second-order cybernetics; concerns and consequences; and finally, the question itself.
First-order cybernetics
Cybernetics, although an old discipline, based on some concepts commonly used also in other fields (e.g., steering, feedback, stability), only became formally expressed as a field as a result of discussions in the 1940s at MIT, which were summarized by Norbert Wiener in his 1948 book Cybernetics. This was subtitled Control and Communication in the Animal and the Machine. The subtitle, preceded by “The Science of,” was the definition of the subject’s area and style. Although for a time I felt this was an inadequate and even downright misleading definition, I have now come to see its subtlety, for it allows much more than Wiener and his colleagues could at that time achieve. I would, however, change “Science” to “Study” (unless an honest approach is made to the nature of science), for reasons that will become apparent (see also Wiener’s (1950) The Human Use of Human Beings, which is addressed to a more general audience and is far more philosophical in approach).
In the definition there is a certain redundancy, for it is hard to conceive of any one thing controlling any other without some form of communication. So the central area of investigation is control in animal and machine.
The classical notion of control is of command. For instance, Newtonian mechanics describes a world where, if I hit something, it moves. Thus, I have controlled its motion (and, if sensate, its pain quotient). This is the causality effect: such-and-such an action causes such-and-such an effect. It is highly mechanical, and is a very good (though outdated) description of a particular universe of discourse. It can, for instance, be used to describe many human experiences: unprotected love-making resulting in pregnancy; oratorical dictators driving the crowds; generals engaging armies in battle; teachers and lecturers imparting information to their classes.
But it’s not true. Sex does not always produce babies even when they are most wanted, and teachers and lecturers do not absolutely command the behavior of their classes. There is a nice illustration of this. A fable has it that the behaviorist psychologist B. F. Skinner was once giving a lecture before a class that had decided earlier to smile if he moved to the right and frown if he moved to the left. He ended up lecturing immobile from the right-hand corner of the room. The students wore frozen grins.
What is involved here is a process called feedback. Skinner is sending a verbal and also a locational message to his students, and they are responding to at least the locational message by sending a message of their own back, which affects Skinner’s location, which … in the end a stable state is arrived at within the universe concerned – the room: Skinner is in the right-hand corner, the students are smiling, and all remains in these two stable states.
Three important concepts for early cybernetics arise here: feedback, stability, state. Indeed, for those for whom cybernetics is not robotics, or science fiction, or suspended animation, or computing (preferably the mad professor variety), it is usually taken to be synonymous with feedback.
But feedback is, perhaps, not at the base, since feedback is a mechanism for achieving a goal (cybernetics is concerned with purposiveness and goal orientation), and a goal is a stable (predesignated) state. Hitler, for instance, wanted to get the German people into a particular state and then to keep them there. The wish to manipulate people is a common feature of dictators – and closely analogous to the “classical” notion of control, whether of policing crowds or running a fully automated factory.
This introduces another concern: dynamics and change. To achieve a particular state there must be change, yet the thing to be changed must remain, in other respects, unchanging, so it must be at once dynamic and stable. And there is a further point: using inertia as an analogy, how do you stop all at once in the required state? Stafford Beer (1975) gave a very clear description of this while discussing the inherent dangers of instability during the changes Allende made in Chile, and the possibility that the dynamics of achieving the change might lead to the terminal destruction of the identity of Chile; as we know, it did.
So there are states, and goals for other states to be achieved, and dynamics to change between goals, stability, and feedback. There is also another concept to be included here: the idea that models may not be exact and that there is something that Shannon and Weaver (1949), in the slightly different area of Information Theory, called (as the term is interpreted in cybernetics) “noise” – i.e., the unwanted and unexpected that causes unpredicted behavior, sometimes outside not only the predicted behavior of the state model but even the structure of the model itself.
All these were thoroughly examined and developed in both theory and practice, making cybernetics, by the middle of the 1960s, a powerful and wide-ranging tool (and usually used as such by workers in other fields, without acknowledgement). It was the body of the science of (communication and) control – always of the classical, observed systems, animal and machine.
About this time it became apparent that maybe there was something a little odd and unexplored about the nature of control and of the relationship of the observer who describes all this to the observed systems.
Second-order cybernetics
The conventional and traditional notion of control is, as pointed out, essentially fascist! The idea is that there is a controller who has the power to make the controlled do exactly what the controller wants, without the controller being in any way affected.
Let us look at this in terms of a very commonplace device: the thermostat. This is a system designed to pump heat into spaces to maintain a desired temperature. The system consists of a boiler (furnace) for making heat, radiators (convectors) for distributing heat in the spaces, a heat sink (the cold weather outside), a pump for distributing the heat-carrying medium (water) to the radiators, and a heat-activated switch, positioned in the space, that is set to a desired temperature. When the temperature falls below the chosen temperature, the boiler and pump are switched on and heat is transferred to the radiators, which heat the space until the temperature just exceeds the chosen one, whereupon the switch turns off the boiler and pump. When the temperature falls again, the cycle repeats. It is hardly possible to conceive of a more traditional, classical, causal system. The switch is the controller, causing heat to be delivered to cold spaces, which regulates the temperature.
Yet the switch changes state, and it is reasonable to ask what causes this. The answer, of course, is the temperature, which is modified by the heat sink and by the radiators, which in turn are modified by the boiler and pump. So what controls the switch? Everything else. The controller (switch) is just as controlled by the controlled (everything else) as it controls them. So the classical notion of control doesn’t hold up: what is the controller and what is the controlled is a matter of role; each is each to the other, and the control producing the required stability exists in the loop connecting them all together, as even this most simple of examples shows.
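To make this circularity concrete, here is a minimal sketch, in Python, of the loop just described; the set point, heat gain, and heat-loss figures are illustrative assumptions of mine, not measurements of any real installation. The point is only that the temperature sets the state of the switch just as the switch sets the state of the boiler.

```python
# A minimal sketch of the thermostat loop (illustrative numbers only).

SET_POINT = 20.0   # temperature the switch is set to (degrees C)
OUTSIDE   = 5.0    # the heat sink: cold weather outside
HEAT_GAIN = 0.8    # temperature rise per step while boiler and pump run
LOSS_RATE = 0.05   # fraction of the indoor/outdoor difference lost per step

def step(temperature: float, boiler_on: bool) -> tuple[float, bool]:
    """One pass around the loop: the switch 'controls' the boiler, but the
    temperature (made by boiler, radiators, and heat sink) equally 'controls'
    the switch; controller and controlled are roles, not fixed positions."""
    temperature -= LOSS_RATE * (temperature - OUTSIDE)  # room loses heat to the sink
    if boiler_on:
        temperature += HEAT_GAIN                        # radiators add heat
    boiler_on = temperature < SET_POINT                 # the temperature sets the switch
    return temperature, boiler_on

temperature, boiler_on = 12.0, False
for _ in range(100):
    temperature, boiler_on = step(temperature, boiler_on)

print(round(temperature, 1))  # settles into a small oscillation around the set point
```

Run from any starting temperature, the loop settles into much the same small oscillation around the set point: the stability belongs to the whole loop, not to the switch alone.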
Another, roughly analogous, shorter example is that of a typical elementary scientific experiment in optics to find the focal length of a lens. The scientist (observer) assembles the experiment – lens, light source, screen – and places them on a metered board. Except by the most extraordinary chance, the image on the screen will not be in focus, so he will move his experiment around, observing all the time, until the image, according to his observation, is in focus and he can take his readings. But what made him move the elements in the experiment? The results he was getting, of course, while they were not what he wanted! His behavior was being controlled by (his perception of) the experiment’s behavior, which can be said, using the word in the nonvisual sense, to have been “observing” him (remember, a machine could do what our scientist does: I am not trying to anthropomorphize by giving life-like or animist qualities to the equipment any more than I would to a machine observer).
Thus observer and observed are as circular, as role defined, as are controller and controlled.
This is the profound, disturbing, and yet peculiarly simple result of inquiries into what controlling and observing involve. The insight thus born is what second-order cybernetics is based on, bringing with it a few concerns and consequences that need to be examined before we arrive at the question of cybernetics.
Although the examples are mine, deriving from my Theory of Objects (1975, 1978, 1980) and from my interest in the black boxes of Maxwell and Ashby (1979a, b), I do not mean to credit myself with these discoveries exclusively. Several people, loosely connected but working largely independently and in different fields, produced similar findings by about 1975. They include: Heinz von Foerster in epistemology (1973, 1974, 1976a); Gregory Bateson in anthropology (1973, 1979); Gordon Pask in education and cognition (1970, 1975, 1980); Francisco Varela, Humberto Maturana, and Ricardo Uribe in biology (1974); and Stafford Beer in management (1975).
There were many others concerned in the discussions who undoubtedly contributed enormously, e.g., Gotthard Gunther, Lars Loefgren, Stuart Umpleby, and Jean Piaget, and many more have taken much of the early work much farther. The list above is meant as a sampler, and is not exclusive.
Concerns and consequences
From the way control was discussed, it becomes apparent that the process of control is circular; that which is (given the role of being) “controlled” also controls that which is (given the role of being) the “controller.” This is a profoundly revolutionary discovery that means that control (and hence, naturally, power) exists only through the mutual interaction of the so-called controller and controlled.
We are, in common experience, aware that the use of control systems, such as the thermostat, provides for a stable state (or, at least, one minimally oscillating around some “fixed” point, such as some desired temperature). We have normally considered such a stable state to be static, yet our examination shows this is not so; stability in a control system (and cyberneticians will, in extremes, argue all systems are control systems) is the result of a process of interaction between the “controller” and the “controlled.” This is what, in terms of my discussion of black boxes [Glanville (1979a, b)], I refer to as a “functional description,” which exists between two boxes (black to the other, white to themselves), mutually “observing” each other. See Fig. 1.
Figure 1
It is essentially dynamic: the stable value is not a preset goal but the result of the circular, iterative process of control.
As it happens, there is a class of iterative (recursive) mathematical functions that, when continuously applied to their own output, produce a stable (to the external observer) output, called Eigen-functions [see von Foerster (personal communications, 1976b, 1981)]. They provide attractive illustrations. For instance, take any number, N, divide it by 2 and add 1 to the result to give R (that is, R = N/2 + 1). Now let R be the new value of N and repeat the operation; the result will progressively converge towards 2 (unless N = 2, when it will remain 2). This can be summarized as a chain of operations, as shown in Fig. 2. It is easy to write a computer program to perform this convergence. Indeed, it is more a computational than a mathematical function.
Figure 2
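Since such a program is indeed easy to write, here is a minimal sketch in Python; the starting value is an arbitrary choice of mine, made only for illustration.

```python
# A minimal sketch of the Eigen-operation described above: divide by 2, add 1.

def operation(n: float) -> float:
    return n / 2 + 1

n = 37.0                  # any starting number will do; this one is arbitrary
for i in range(20):
    n = operation(n)
    print(i, round(n, 6))

# The printed values close in on 2, the stable (Eigen) value of the operation:
# a value produced by the recursion itself, not preset as a goal.
```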
The result is that stability in control systems is dynamic and dependent on a mutual interaction. This is important not only because it requires us to redesign some of our conceptual apparatus; equally, it brings up questions as to how the sort of role-absolutism that permits fascism can be allowed, and how it can work. But these are ethical matters and will not be considered here. (Hint: compare coding with conversation as means of communicating meanings.)
Concomitant with this revised idea of control is the need to reconsider causality. Causality can be seen as the operation of a control system (that is, the cause controls the behavior of some system to give the effect). For instance, Newton’s First Law of Motion – “Every body continues in its state of rest or uniform motion in a straight line except in so far as it is compelled by external forces to change that state” [Uvarov and Chapman (1951)] – is a cause (external force acting) and effect (change of motion) law, and can be rewritten in terms of control: “the application of an external force controls the change in motion of an object.” It becomes apparent that, if control must be considered circular and stability dynamic, then causality is also circular; that is, we have to modify our notion of causality just as we modified our notion of control. I believe Heinz von Foerster was the first to introduce this notion, in about 1970, but I cannot find the reference. Thus, the cause is the effect to the effect’s cause (compare the earlier description of the optical experiment). So (straight-line) causality loses its position of mechanistic dominance, just as it lost its historically based predictive power through Wittgenstein (1966). This, in turn, raises problems concerning the nature of truth: because control is mutual, the result of interaction between two objects (in my technical sense – Glanville (1975, 1978, 1980)), it is a product of the “self” of the whole control system, and an “other” observing this “self” must make its own “functional description,” called a Behavior, through mutual interaction between its “self” and the “self” of the whole control system, which two may not be the same (see below).
Furthermore, we are now talking about process, i.e., action. Stability is an activity (note how this relates to Piaget’s (1955) idea of the concept development needed in the conservation of objects in children). Thus, we are concerned with a universe of action, not of fact; of knowing, not of knowledge. Among those who have proposed formulations of this sort are von Foerster (1973, 1974, 1976a), who proposed cognition as the “computations of”; Pask, Scott, and Kallikourdis (Pask, 1975; Pask et al., 1975), who discussed cyclic entailment meshes and the local cyclicity of their topics; Varela, Maturana, and Uribe (Varela et al., 1974) and Zeleny (1980), with the autopoietic formulation; and my own objects (of attention) (1975, 1978, 1980), which exist through a phase shift, being self-observing and self-observed. That this list is largely a reiteration of the earlier one is no coincidence!
We have seen that stability in a control system derives from the interaction of the two components, conventionally given the roles of controlled and controller, and is a dynamic property of the whole system, i.e., the “self.” In the sense that the system is a “self” that has a “dynamic” stability, the result of a process, the system may be said to be self-(re-)producing. It is normally considered that, for a system to be self-(re-)producing, it must have a complete, precise, and consistent self-description built into it that permits the (perfect) (re-)production of the system including its self-description. We are familiar with such a system in life; our cells self-produce and self-(re-)produce, and that is how life works [see Varela et al. (1974), Zeleny (1980)].
However, there is a problem here, of the greatest importance. For while we experience life and continue as cellular self-(re-)producing entities, the most powerful of our cognitive, manipulative tools, logic, tells us this is impossible, and refers to it as the problem of self-reference [things that self-(re-)produce do so by reference to their own self-description, which is (re-)produced in the (re-)production].
The reason logicians are so adamant about self-reference is that by token of its vicious circularity it produces paradox; so it is undecidable. Consider the statement:
This sentence is false
Then, if the statement is true it is false, and if it is false it is true; so is it true or false? [For a delicious collection see Hughes and Brecht (1976).] It is interesting that in other belief systems paradox is often considered most valuable, and is used to help in the attainment of enlightenment. Compare this with our own Christian medieval embodiment of paradox, the ouroboros, a snake eating its own tail, which was a symbol of ultimate evil! See Fig. 3.
Figure 3
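In contrast to the Eigen-operation sketched earlier, the same iterative treatment applied to the liar sentence never settles. A minimal sketch (again in Python; the iteration scheme is my own illustration of the paradox, not a decision procedure) feeds each assumed truth value back into “this sentence is false” and simply oscillates:

```python
# "This sentence is false": evaluating it under an assumed truth value
# forces the opposite value, so iteration oscillates and no stable value exists.

def liar(assumed: bool) -> bool:
    # if the sentence is true, what it says holds, so it is false;
    # if it is false, what it says fails, so it is true
    return not assumed

value = True
for i in range(6):
    value = liar(value)
    print(i, value)   # False, True, False, True, ... : never stable, undecidable
```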
The problem, however, is not just a logical game, but has to do with the validity of the knowledge systems we create. For instance, at around the turn of the century, when several non-Euclidean geometries had been developed (e.g., Riemann’s) and were then shown by Einstein to better fit his finer description of the universe in curved space-time, there was a great deal of insecurity in the mathematical world, for apparent and incontrovertible “truths” had been contravened! So the question arose, what value has mathematics – can it stand on its own two feet? This is, in effect, asking whether mathematics is mathematical. Here is the self-reference problem: can all mathematical problems be solved purely mathematically, which is normally taken to be equivalent to asking whether mathematics is simultaneously complete, consistent, and decidable? [For an excellent discussion of this see Hodges (1983).] The answer to this question, first posed by Hilbert, was shown, first in the case of arithmetic, to be a resounding “No!” [Gödel (1931), Nagel and Newman (1959)]. Several later and independent demonstrations in related areas [e.g., Turing (1937) and Church (1936)] further support this view.
So we have the peculiar difficulty that logic disowns self-reference, yet we as living beings, as well as in our current cybernetic understanding of control (the cybernetics of cybernetics), find ourselves constantly drawn to self-(re-)production made by reference to complete and consistent self-descriptions that are integrated in the whole “self.” What, then, do we believe – our own experience, or the strictures of the most powerful intellectual tool we have yet developed?
There is one final group of concerns and consequences that should be brought up here, and that is the nature of enclosure vis-à-vis the “self.” Conventional wisdom would have it that the systems we refer to as our selves are constituted of parts, themselves made up of other parts organized in a hierarchy that terminates in the fundamental particles of physics (whatever they are). Furthermore, our selves are parts of larger entities (families, clubs, societies, classes, religions, nationalities, etc.), so the hierarchy extends up from ourselves as well as down.
We have also organized our knowledge to reflect this, into subjects that slice layers through this hierarchy (and combination subjects, such as biochemistry, which knit these layers together). The whole of our culture is deeply imbued with the concept of hierarchy, and of going up and down it. Yet this is not what our current understandings of cybernetics lead us to believe, for we can no longer accept that the controller (or the observer) is “above” that which is controlled (or observed), since the process is circular. The controller (or observer) is described as such as a matter of psychological convenience, just as, when one of Pask’s entailment meshes (Pask, 1975; Pask et al., 1975) is cut at certain points to make it accessible to a learner, it appears to be a hierarchy (or, more probably, several hierarchies). Thus, levels (and hierarchy) are delusions produced by psychology; they are cognitive creations that allow us to handle the environment in chunks. It is interesting to note that designers often jump in “scale” between large concerns of planning and fine details of construction [Glanville (1975)]. (For a detailed discussion of levels and boundaries see Glanville, 1979c, 1986, and Glanville and Varela, 1980.)
The question of cybernetics
So far, I have described the initial, formal development of cybernetics, and its further development into second-order cybernetics (the cybernetics of cybernetics), and have highlighted a few concerns and consequences of that later development. The reader may well ask, “where are we, where are we going, and why should it be of any interest?”
In the characterization I have given there are, I believe, a number of problems that lead me to an answer to the “where are we, where are we going, and why should it be of any interest,” for they profoundly concern our understanding of how we understand (the understanding of understanding).
Why should the insights I have claimed for cybernetics, leading to the above questions, matter? In one view, one would claim that nothing to do with cybernetics matters at all – and not all the proponents of this view are abstruse scientists and philosophers. In another view, cybernetics lets us better understand control, thus providing us with a worthwhile repository of techniques (a view surprisingly common amongst cyberneticians).
Clearly, I do not subscribe to either of these views; if I did I should not have traced the initial ideas that constitute the original formulation of Cybernetics: Control and Communication in the Animal and the Machine, and shown how progressively more refined and rigorous analyses of the processes of control have led to the development of a second-order cybernetics in which control is circular, and controller and controlled are seen as psychological rather than logical roles.
This is where the question of cybernetics appears. For these concerns and consequences are very peculiar ideas. In general I have proposed them by using conventional logical argument. In doing so I have produced a number of paradoxical contradictions. I have, for instance, both hinted and indicated, by giving precedence to logic, that the psychological must take precedence over logic (to which I gave precedence); and in demonstrating the psychological I have established a logical description of “things as they must be, out there in reality.” I have used logic, as well as experience (or paradox) – a need in itself paradoxical, because logical systems may not contain paradoxes. So what I have done, generally and yet in several specific instances, is to demonstrate, paradoxically, the need for paradox in systems from which it is prohibited.
This, in its turn, is worrying, for if it is so, we need to radically change our conception of knowing, knowledge, laws of nature, reality, causality, and so on. And if not, then the basis, not only of cybernetics but of all in which we place our faith in knowing, is untenable, and so we must, nevertheless, change. I have argued logically for the unlogical, which leaves me unsure where to stand. How can I justify the unlogical, logically – especially when what I need is the unlogical, logically, and hence, unlogically, I need the logic?
Which leads us to the question of cybernetics: how to effect this change, how to be confident that the change will be productive, and what the change will mean to us as cognitive entities?
Can we resolve the question, reformulate it so that it is cured (it disappears), or by-pass it altogether? Or must we change our expectations, especially about knowing, certainty, and communication? Or do we turn a blind eye, either discarding it or just hoping it will go away?
I cannot answer the question, or say how the community of cyberneticians will react to it. But it seems to me a serious enough question to be critical. And I believe that we owe ourselves, our subject, and in the end the art of knowing, the honor of tackling it seriously and urgently.
You know,
They’re growing mechanical trees:
They go to their full height,
And then, they chop themselves
Down.
Anderson (1985)
References
Anderson, L. (1985) “Sharkey’s Day,” Warner Brothers Records.
Bateson, G. (1973) Steps to an Ecology of Mind, London, Paladin.
Bateson, G. (1979) Mind and Nature – A Necessary Unit, London, Wildwood House.
Beer, S. (1975) Platform for Change, London, Wiley.
Church, A. (1936) A Note on the Entscheidungsproblem, J. Symbol. Logic.
von Foerster, H. (1973) On Constructing a Reality, in Preiser, W. (ed.), Environmental Design Research, vol. 2, Stroudsburg, Dowden Hutchinson and Ross.
von Foerster, H. (1974) The Cybernetics of Cybernetics, BCL 73.38, University of Illinois, Urbana.
von Foerster, H. (1976a) Notes on an Epistemology for Living Things, in Morin, E. (ed.), L’Unité de l’Homme, Paris, Éditions du Seuil.
von Foerster, H. (1976b) Objects: Tokens for (Eigen) Systems, paper presented for Jean Piaget’s 80th birthday, personal communication.
von Foerster, H. (1981) Notes on Eigen Operators, personal communication.
Glanville, R. (1975) The Object of Objects, the Point of Points, or, Something about Things, unpublished Ph.D. thesis, Brunel University, Uxbridge.
Glanville, R. (1976) Is Architecture just a Hollow Space, or is it the Empty Set, Arch. Assoc. Q., vol. 8, no. 4.
Glanville, R. (1978) What Is Memory That It Can Remember What It Is?, in Trappl, R. (ed.), Progress in Cybernetics and Systems Research, vol. 4, Washington, D.C., Hemisphere.
Glanville, R. (1979a) The Form of Cybernetics – Whitening the Black Box, in Proc. 24 S. G. S. R. Meeting, Louisville, S.G.S.R.
Glanville, R. (1979b) Inside Every White Box There Are Two Black Boxes Trying To Get Out, Behavioural Science.
Glanville, R. (1979c) Beyond the Boundaries, in Proc. S.G.S.R. Silver Jubilee Meeting, London, Springer.
Glanville, R. (1980) Consciousness, and So On, in Trappl, R. (ed.), Progress in Cybernetics and Systems Research, vol. 10, Washington, D.C., Hemisphere.
Glanville, R. (1985) The One Armed Bandit, in Powell, J., Cooper, I., and Leera, S., Design for Building Utilisation, London, Spon.
Glanville, R. (1986) Levels and Boundaries of Problems (one of 2 papers under the general title of Dis Appearing Knowledge), in de Zeeuw, G. (ed.), Proc. Conf. on Problems of Disappearing Knowledge, Utrecht, Systemica.
Glanville, R., and Varela, F. (1980) Your Inside Is Out and Your Outside Is In, in Lasker, A. (ed.), Applied Systems and Cybernetics, vol II, Oxford, Pergamon.
Gödel, K. (1931) Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, Monats. Math. Phy.
Hodges, A. (1983) Alan Turing – the Enigma of Intelligence, Hemel Hempstead, Unwin.
Hughes, P., and Brecht, G. (1976) Vicious Circles and Infinity, London, Cape.
Maturana, H. (1979) The Biology of Cognition, BCL 9.0, University of Illinois, Urbana.
Nagel, E., and Newman, J. (1959) Gödel’s Proof, London, Routledge and Kegan Paul.
Pask, G. (1970) The Meaning of Cybernetics in the Behavioural Sciences, in Rose, J. (ed.), Progress in Cybernetics and Systems Research. vol. 1, London, Gordon and Breach.
Pask, G. (1975) Conversation Theory, London, Hutchinson.
Pask, G. (1980) The Limits of Togetherness, in Lavington, S. (ed.), Information Processing, New York, North Holland.
Pask, G., Scott, B., and Kallikourdis, D. (1975) The Representation of Knowables, Int. J. Man Machine Stud.
Piaget, J. (1955) The Child’s Construction of Reality, New York, Basic Books.
Shannon, C., and Weaver, W. (1949) Mathematical Theory of Communication, Urbana, University of Illinois Press.
Turing, A. (1937) On Computable Numbers, with an Application to the Entscheidungsproblem, Proc. London Math. Soc.
Varela, F., Maturana, H., and Uribe, R. (1974) Autopoiesis, Bio Systems, vol. 5.
Wiener, N. (1948) Cybernetics, Cambridge, M.I.T.
Wiener, N. (1950) The Human Use of Human Beings, New York, Houghton Mifflin.
Wittgenstein, L. (1966) Tractatus Logico Philosophicus, (2d ed.), London, Routledge and Kegan Paul.
Zeleny, M. (ed.) (1980) Autopoiesis, New York, Elsevier North Holland.