How to be Universal: Some Cybernetic Strategies, 1943–70
Bowker G. (1993) How to be Universal: Some Cybernetic Strategies, 1943–70. Social Studies of Science 23(1): 107–127. Available at http://cepa.info/2938
Table of Contents
An Anatomy of the Claim to Universality
A Universal Language for a New Age
Arguing a New Economy of the Sciences
Cybernetic Strategies – Obligatory and Distributed Passage Points
The new discipline of cybernetics expanded exponentially in the period 1943–70. Unlike traditional scientific disciplines, its practitioners claimed (to varying degrees) that they were producing a new universal science. This paper anatomizes the claim to universality, details some rhetorical strategies used to support that claim, and examines some practical consequences for the general economy of the sciences argued by cyberneticians. It concludes by characterizing cybernetic strategies in terms of a form complementary to the obligatory passage point – the ‘distributed passage point’.
Some scientific disciplines can be characterized by the application of a particular set of research tools to a restricted body of data. Their practitioners frequently argue for a given hierarchical picture of the sciences not much changed since the days of Comte: with biology ultimately reducing to chemistry, thence to physics, and finally to mathematics. Although it is possible analytically and empirically to challenge any given characterization of tools, data sets and hierarchies of the sciences, these formulations remain relatively robust. Sociologists and historians of science have subjected the range of intellectual and practical strategies deployed within such disciplines to extensive examination. Less attention has been paid, however, to the strategies used by those working within a ‘universal discipline’ – one whose practitioners recommend a reordering of the traditional hierarchy of the sciences, a new set of universal tools and often a new set of funding possibilities.
Cybernetics constitutes such a universal discipline, and in this paper I shall examine the rhetorical and practical tools used by cyberneticians in order to win credence and gain support for their new discipline. I shall argue first that the claim for universality was supported by a new reading of human history; second, that this new reading was bolstered by the development of a new universal language; third, that this language was in turn used to suggest the validity of a new division of labour within the sciences (one which had already developed during World War II); and, finally, that attempts were made to institute this new division of labour by deploying a disciplinary strategy complementary to Latour’s obligatory passage points – the strategy of ‘distributed passage points’.
An Anatomy of the Claim to Universality
The cybernetics we will be looking at is largely the emergent universal discipline that effloresced in the massive period of growth in America of pure and applied scientific research during, and in the twenty-five-year period following, World War II. It is difficult to say precisely where and how cybernetics began. Indeed, although circumscribed disciplines have well-defined foundational acts (Lavoisier’s textbook defining modern chemistry, for example), universal disciplines have much more diffuse origins. Thus many cyberneticians have argued that cybernetics is a style of analysis that has been with us from time immemorial (just as Lyotard and Derrida have argued that postmodernism is a ‘moment’ in all modernism). Further, cybernetics has had very different institutional and intellectual histories in different countries: its French, Russian and American practitioners (for example) each trace a different intellectual genealogy. Whatever its origins, the new discipline rose to prominence very swiftly in postwar America. After World War II, Norbert Wiener, John von Neumann, Warren McCulloch and other key figures got together in a series of conferences (sponsored by the Macy Foundation), whose charged, ebullient atmosphere has been beautifully depicted by Steven Heims.[Note 1] When Wiener wrote his popular Cybernetics in 1948,[Note 2] the subject became a cult one for a wider audience. Michael Apter has charted an exponential increase in cybernetics publications in Britain and the United States from then until 1970.[Note 3]
For the purposes of this paper, we can say that the seminal article of the new interdiscipline was that written by Rosenblueth, Wiener and Bigelow in 1943.[Note 4] Drawing on Wiener and Bigelow’s wartime work using feedback loops to chart missile trajectory, this article developed a general, unified description of behaviour, purpose and teleology in animals, machines and people. In 1983, Allen Newell wrote of the origin of cybernetics:
If a specific event is needed, it is [this] paper … which puts forth the cybernetic thesis that purpose could be formed in machines by feedback. The instant rise to prominence of cybernetics occurred because of the universal perception of the importance of this thesis.[Note 5]
For a paper of such a philosophical bent, this work had a curiously martial beginning. Two key characters in scientific research development in the United States in the 1940s played a role. The first was Vannevar Bush. He was head of the Office of Scientific Research and Development (OSRD) during the war, and was a colleague of Wiener at MIT. The second was Warren Weaver. Weaver led the OSRD’s fire control section during the war. He had done much to lay the foundations for systems science, by directing the policies of the Rockefeller Foundation during the 1930s ‘away from the physiological functionalist psychobiology of men like Yerkes and toward the application of theory and techniques from physical science to biology … ‘.[Note 6] In 1949, Weaver published a popular account of Shannon’s mathematical theory of communication in a tract containing Shannon’s original article.[Note 7] This latter was the other defining paper of the interdiscipline of cybernetics in the United States.
At the start of the war, Wiener wrote to Bush, to alert him to the potential military value of his computer work. Bush contacted Weaver, who assigned Bigelow and Wiener the problem of gunnery control. The problem here was one typical of those the new computing technology dealt with best: the high-speed calculation of a long series of simple equations. When tracking a plane zigzagging across the sky through variable winds, you needed to make some very fast ‘real-time’ calculations in order to determine optimal firing times. As it turned out, Bigelow and Wiener’s work had little practical effect. It is clear from their initial military reports, however, that they already perceived the wider application of the concept of negative feedback that they had deployed in order mathematically to model evasion techniques.[Note 8]
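The feedback principle at the heart of this work can be loosely illustrated. The sketch below is not Wiener and Bigelow’s actual mathematics (which rested on statistical extrapolation of flight paths); it is a minimal toy, with invented names and a simple proportional correction, showing how a negative-feedback loop lets a tracker converge on an observed target.

```python
# Toy illustration of negative feedback in target tracking.
# A gun director repeatedly compares its aim to the target's
# observed position and corrects by a fraction of the error.

def track(target_positions, gain=0.5):
    """Return the aim point after each observation."""
    aim = 0.0
    aims = []
    for observed in target_positions:
        error = observed - aim      # discrepancy fed back ...
        aim += gain * error         # ... as a corrective signal
        aims.append(aim)
    return aims

# A target holding roughly steady around 10; the aim settles toward it.
path = [10, 12, 9, 11, 10, 10, 10]
print(track(path))
```

With a gain between 0 and 1, each correction removes a fixed fraction of the remaining error, so on a steady target the aim converges geometrically.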
It is interesting that Weaver and Wiener’s trajectories crossed at a highly abstract level (how to produce a maximally general definition of information) and at a highly concrete one (how to calculate a maximally accurate missile path). Cyberneticians repeated time and again statements like ‘today’s and tomorrow’s military system involves a very close man – machine relationship highlighted by the short time constants and quick reactions required …. Advanced performance demands self-adaptive systems and an extremely close coupling of man and machine. Through knowledge of living prototypes, perhaps we can evolve new logic pertaining to this relationship’.[Note 9] Or again, to cite the gnomic Gordon Pask, engineers were forced by material exigencies ‘to make computing and control devices elaborate enough to exhibit the troublesome kinds of purposiveness already familiar in biology’.[Note 10] These later examples illustrate a connection that was already clearly made in the 1943 paper.
The paper’s basic strategy was to produce a taxonomy of different kinds of behaviour – for example, whether they were predictive or not, involved feedback or not, were purposeful or not, and were active or passive. It was then shown that animals and machines could be found on both sides of every divide. The authors concluded that ‘…a uniform behavioristic analysis is applicable to both machines and living organisms, regardless of the complexity of the behavior’.[Note 11] However, despite the fact that behaviourally they were equivalent, the authors were quick to note that machines and organisms were built very differently. Organisms were characterized as colloidal and protein (made up of large, anisotropic molecules). Machines could be described as being made of a great number of simple molecules. Effects in an organism were created by massive iteration in space: compare the 6.5 million cones in an eye and the single cone in a television set. Machines created their effects by iteration in time: the television’s spatial singularity was made up for by the speed with which the screen was refreshed, and, in general, machines could operate at frequencies of up to one million cycles per second. Thus what made it possible to say that machines and organisms were behaviourally and in information terms ‘the same’ was to say that space for an organism was time for a machine. The use of very fast times, predicated by computing technology, was the key move.
By extension, in former ages machines and organisms were not so equivalent. It was only at this historical moment that they had become so. Ulric Neisser gave expression to this idea in 1966, when discussing the historical fate of the machine metaphor:
So, ordinary men did not take the metaphor of the machine seriously, although it provided fuel for philosophical debate. Recently, two facts have entered to change the situation. On the one hand, devices have been built that (so it is said) are more like men than the old machines were. Modern computers can be programmed to act unpredictably and adaptively in complex situations. That is, they are intelligent. On the other hand, men have behaved in ways that (so it is said) correspond rather well to our old ideas about mechanisms. They can be manipulated, ‘brain washed’ and apparently controlled without limit. With this sharp increase in the number of properties that men and machines seem to have in common, the analogy between them becomes more compelling.[Note 12]
Early Soviet reaction to cybernetics was that men and machines were being made to be equivalent in the capitalist system:
Cybernetics: A reactionary pseudo-science arising in the USA after the Second World War and receiving wide dissemination in other capitalistic countries. Cybernetics clearly reflects one of the basic features of the bourgeois worldview – its inhumanity, striving to transform workers into an extension of the machine, into a tool of production, and an instrument of war. At the same time, for cybernetics an imperialistic utopia is characteristic – replacing living, thinking man, fighting for his interests, by a machine, both in industry and in war. The instigators of a new world war use cybernetics in their dirty, practical affairs.[Note 13]
It is here, at this moment in time, that the new science could come into its own. The historical specificity of this new universality is illustrated by the fact that cyberneticians frequently announced the dawning of a new age. Thus Pierre Auger declaimed: ‘Now, after the age of materials and stuff, after the age of energy, we have begun to live the age of form’.[Note 14] Cybernetics as the science of form could, then, replace materialism as the philosophical avatar of the political economy. Samuel Butler’s dark vision from nowhere was everywhere brightly adopted by cyberneticians:
If it is an offence against our self-pride to be compared to an ape, we have now pretty well got over it; and it is an even greater offence to be compared to a machine. To each suggestion in its own age there attaches something of the reprobation that attached in earlier ages to the sin of sorcery.[Note 15]
Where the previous age was one of matter and the diachronic facts of biological descent (Marx and Darwin), this would be an age of form and the synchronic structure of information. Michael Arbib developed the new age theme in classic Hegelian fashion, arguing a general sequence of alterations to the human self-image at different historical moments:
Copernicus challenged our human self-image by showing that the Earth was not the center of the universe; Darwin by showing that humans were not God’s special creation; and Freud by showing that we are not rational animals, but that much of our thoughts and behavior is rooted in biological drives and unconscious processes. The present volume contributes a fourth reshaping of the human self-image, as we see that much that is human can, at least potentially, be shared by machines.[Note 16]
Cyberneticians directly appropriated both religious and political discourse, arguing that their science spoke best to the concerns of the new age. In the religious dimension of cybernetic writing, it was often stressed that we are living in a particularly dangerous age, one where we have powers equal to what were once thought to be God’s. These powers came in two varieties: the ability to create new life and the ability to destroy the world. Each could be best managed by the science of cybernetics. For the creation of new life, F.H. George argued that:
Since … there is no reason to doubt the possibility of artificially constructing a human being, we must assume that the final stage in Cybernetics research will be concerned with precisely this.[Note 17]
Muses proposed a very similar conflation of ‘man’ and God:
What has become historically evident as man’s dominating aim is thus the replication of himself by himself by technological means. The form of this dominating aim becomes hence a super-machine, self-operating, self-instructing, and man-controlled, though this latter process may be reduced to a minimum in the sense of metalinguistic program information initially imported or in-built. Although the technical form of man’s fundamental historical aim is a machine, the psychological and human content of that aim is control, mastery, the ability to impose his whims at will upon as much of the rest of the material universe as possible.[Note 18]
Encapsulated here is the patriarchal vision of man as author of the new creation.[Note 19] This new goal for science would be mirrored by the new form of interdisciplinarity:
Just as all the categories of knowledge merge implicitly in the human being, just so a fortiori must all scientific disciplines, which are after all but the systematic reflection of these categories, merge in anthropo-simulation in its completest sense; that is, a necessary condition for man’s artificial replication of himself is clearly the convergence of all scientific disciplines.[Note 20]
As in much Christian theology, humanity reaching its moral end coincided with the earth itself coming to an end. The danger was popularly seen to be double. First was the end of the world through nuclear destruction (a theme covered at length by Wiener); and second, the end of humanity through subjugation to the machines that we have created. In those dark days following the end of World War II, both themes were widely aired in political and cultural debate. In one 1948 report, the New York Times conflated the two, saying that mechanical brains would ‘do all man’s work for him, but also solve such problems as the control of the atomic bomb and how to reconcile East and West. All that would be left for man to do would be to devise ways to stop the machine from destroying him’.[Note 21] A decade later, the newspaper returned to a not dissimilar variation on the theme. An editorial discussing a report on a cybernetic conference picked up on the prospect of sexual congress between (and thence independence of) technical automata: ‘Before this happens, let us hope, outraged mankind will smash the thinking machines and take to the woods and caves. When Elmer and Elsie fall in love and produce a dear little Elmer II it will be time to act – and decisively’.[Note 22]
Thus a matrix of problems that had been faced by a Christian God and by Christianity was now being faced by cyberneticians and cybernetics. Norbert Wiener made the parallel clear in his God and Golem, Inc. Here he asserted from the outset that ‘there are many questions concerning knowledge, power and worship which do impinge some of the recent developments of science, and which we may well discuss without entering upon these absolute notions’.[Note 23] In the book, he attempted to examine certain situations ‘which have been discussed in religious books and have a religious aspect, but possess a close analogy to other situations which belong to science, and in particular to the new science of cybernetics, the science of communication and control, whether in machines or living organisms’.[Note 24] The problem of machines which learn resonated for Wiener with the Book of Job and with Paradise Lost: how can God play a fair game against the Devil, one of his own creations?[Note 25] He claimed that it is possible to play with machines, because they can learn. However, the fact that machines can learn rendered humans functionally equivalent to God. This brought its own responsibility: ‘There is a sin, which consists of using the magic of modern automatization to further personal profit or let loose the apocalyptic terrors of nuclear warfare. If this sin is to have a name, let that name be Simony or Sorcery’.[Note 26] Through their insight into the nature of feedback control, cyberneticians could prevent both these dangers. Society could be safely managed,[Note 27] and nuclear warfare prevented, through cybernetics.[Note 28] Cyberneticians could deal simultaneously with the dark politics and bright theology of the new age.
In general, cyberneticians argued for the new age both conjuncturally (in terms of the current state of technology and warfare) and ideally (in terms of the grand unfolding of Ideas about humanity). The ability to shift between these two registers was a powerful tool.
For cyberneticians, then, the meeting of Weaver and Wiener at the abstract and the concrete levels reflected the way that the World and the world were.
A Universal Language for a New Age
Central to cyberneticians’ claim to speak to this new age was the assertion that the discipline could develop a language that would mirror its central concerns. As the old dualism between mind and body broke down historically and philosophically, features of mind could, as in the ‘seminal’ article, be found distributed (spilled?) all over nature.
Abstraction was for cyberneticians a concrete feature of the real world, not tied to consciousness. Thus Gregory Bateson (who considered the Macy Foundation conferences ‘one of the greatest events of my life’)[Note 29] argued that even a perpetually befuddled drunk was capable of great abstraction:
Alcoholics are philosophers in that universal sense that all human beings (and all mammals) are guided by highly abstract principles of which they are either quite unconscious or unaware that the principle governing their perception and action is philosophic. A common misnomer for such principle is ‘feelings’…. This misnomer arises naturally from the Anglo-Saxon tendency to reify or attribute to the body all mental processes which are peripheral to consciousness.[Note 30]
The classic article by Lettvin, Maturana, McCulloch and Pitts about what the frog’s eye tells the frog’s brain furnishes another example of the removal of the process of abstraction from its traditional seat. The authors worked on moving edge detectors in the frog’s eye, and discovered a fibre that:
… responds best when a dark object, smaller than a receptive field, enters that field, stops, and moves about intermittently thereafter. The response is not affected if the lighting changes or if the background (say a picture of grass and flowers) is moving and is not there if only the background, moving or still, is in the field. Could one better describe a system for detecting an accessible bug?[Note 31]
They concluded that ‘the eye speaks to the brain in a language already highly organized and interpreted, instead of transmitting some more or less accurate copy of the distribution of light on the receptors’.[Note 32] Arbib summarized that they proved the frog could deal in universals like ‘prey’ and ‘enemy’.[Note 33]
It was not only that the principle of abstraction was located physically outside the conscious brain. Abstract reasoning processes like scientific induction were also represented by cyberneticians as a feature of the material world. Thus when Fogel, Owens and Walsh from General Dynamics summarized progress on projects funded by the Office of Naval Research and the Goddard Space Flight Center, they used a definition of the evolutionary process that made it an abstract one:
The key to artificial intelligence lies in automating an inductive process which will generate useful hypotheses concerning the logic which underlies the experienced environment. The creatures of natural evolution are just such hypotheses, survival being the measure of success… In essence, the scientific method is an essential part of nature. It is no wonder, then, that its overt exercise has provided mankind with distinct benefits and now permits even its own automation through the artificial evolution of automata.[Note 34]
Thus something quintessentially abstract, of the mind (the ability to make hypotheses) became for the cyberneticians a physical fact of nature. Newton might not make hypotheses, but nature certainly did.
Indeed, many other aspects of behaviour crossed over their traditional ontological borders. Rosenblueth, Wiener and Bigelow attempted to demonstrate that purposive behaviour was a function of negative feedback – and thus not confined to beings with intentions. Ross Ashby argued that memory was not a feature of consciousness or of the human brain, but an epiphenomenon that could be explained away physically once the Markovian chain governing the (human or non-human) system that seemed to exhibit it had been completely specified.[Note 35] Ross Ashby also argued that other forms of biological activity could be found in the material world. Thus he inventoried fifteen kinds of non-biological reproduction – including cows reproducing holes in the road by stepping round an initial hole, creating a second with their new tracks, and stepping round it to create a third …. He concluded:
Reproduction has, in the past, often been thought of as exclusively biological, and as requiring very special conditions for its achievement. The truth is quite otherwise: it is a phenomenon of the widest range, tending to occur in all dynamic systems, if sufficiently complex.[Note 36]
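Ashby’s argument about memory can be given a small worked illustration. The sketch below is my own construction, not an example drawn from Ashby: a device whose output seems to ‘remember’ the previous input is exhaustively captured by a Markov-style transition table, and once the state is part of the description nothing separately called ‘memory’ remains.

```python
# A system that appears to 'remember' its last input. Described
# as input -> output it looks like memory; described as a full
# state-transition (Markovian) table, it is plain determinism.

transitions = {
    # (state, input) -> next state; the state just IS the last input
    ('A', 0): 'A', ('A', 1): 'B',
    ('B', 0): 'A', ('B', 1): 'B',
}

def run(inputs, state='A'):
    """Trace the system's states under a sequence of inputs."""
    history = [state]
    for i in inputs:
        state = transitions[(state, i)]
        history.append(state)
    return history

# The 'memory' of the last input is simply the current state.
print(run([1, 1, 0, 1]))
```

Once the transition table and current state are given in full, asking where the system ‘stores’ its memory adds nothing – which is the sense in which Ashby could describe memory away.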
These border crossings had to be managed in some way, if they were not to become totally chaotic. The alphabet and grammar of that management would be the language of cybernetics.[Note 37] This was the language of negative feedback, control mechanisms and systems science. This new language had clear heuristic value, leading to conceptual breakthroughs in the analytic treatment of complex systems; indeed, I believe that it is still an under-utilized resource today. For the purposes of this paper, however, I am directly concerned only with its strategic use in the work of establishing a universal discipline. Accordingly, I will now concentrate on this strategic use.
A chief feature of the new language was that it operated as a kind of legitimacy exchange. Star has called this sort of use a ‘triangulation’ effect.[Note 38] An isolated scientific worker making an outlandish claim could gain rhetorical legitimacy by pointing to support from another field – which in turn referenced the first worker’s field to support its claims. The language of cybernetics provided a site where this exchange could occur. Uttley’s ‘conditional probability machine’ is a good example of this.[Note 39] He used mathematics to support his physiology and physiology to support his mathematics, using cybernetic terminology to spiral between the formal properties of classification machines and the nature of the brain. Complementary to this rhetorical use is the use of the language of cybernetics for the discontinuous transmission of ideas: conceptual tools could be yanked out of one context (philosophy of mind) and plugged into another (automata theory), with the translation into the language of cybernetics doing the work of glossing the discontinuity.
As a limiting case, even if the language were presumed to have no content whatever, it could still provide an opportunity to use interesting words and so make useful associations. Cyberneticians occasionally commented ironically on this feature:
Sometimes the spell of a word or expression is untainted by any clear and stable meaning, and through all the period of its currency its magic remains secure from commonplace interpretation. Tao, elan vital, and id are, I think, examples of this. I don’t believe that cybernetics is quite such a word, but it does have an elusive quality as well as a romantic aura…. Indeed, cybernetics is a very useful word, for it can help to add a little glamour to a person, to a subject, or even to a book. I certainly hope that its presence here will add a little glamour to this one.[Note 40]
Indeed, this elusive quality itself bolstered the universality of the language, making cybernetics a general approach to the world – as in the following passage, where ‘control theory’ and ‘cybernetics’ are used interchangeably:
Control theory, like many other broad theories, is more a state of mind than any specific amalgam of mathematical, scientific or technological method … as mathematicians, physiologists and engineers explore the subtle difficulties of dealing with large scale systems, it seems less and less likely that any single ‘cybernetic’ theory will serve all purposes. Nevertheless, for those who want to understand both modern science and modern society, there is no better place to start than control theory.[Note 41]
Further, even if one were successfully seduced into learning the language and did not find anything of value, a kind of Parsons effect (named after the abstruse Talcott Parsons) could come into operation. Learning a body of highly technical concepts requires a heavy emotional and intellectual investment; and those who made the effort would be tempted to use the language regardless.[Note 42]
To summarize. Cyberneticians argued that we were now at an historical conjuncture where machines were becoming sufficiently complex and the relationship between people and machines sufficiently intense that a new language was needed to span both: the language of cybernetics. They also argued that with this new language they were breaking down the false dichotomies between mind and matter, human and non-human – dichotomies that the new information-based language would show never to have been true. Cybernetics was universally and eternally true because its Moment had arrived. In turn, the advantage for scientists from whatever discipline of being given a remit by cybernetics to use interesting and glamorous words was clear. Anyone tapping into the network of words used by cybernetics would be tapping into the network of problems that cyberneticians were aiming to solve. These, by definition (since cybernetics was the science of the current conjuncture) were at the cutting edge of military and industrial research. Grant applications could follow. Thus a cybernetics possibly without content could operate (via legitimacy exchange) together with a cybernetics possibly with content (via discontinuous transmission of ideas and the transgression of traditional ontological boundaries) to create a new universalism tied to a particular conjuncture in military and industrial development. Both dimensions (content-rich and content-free) were integral to the success of the new universal language.
Arguing a New Economy of the Sciences
Cybernetics, through its universal language, described what could in the broadest sense be called ‘a new economy of the sciences’. By this I mean that it sought to order the sciences in a different manner from other universal disciplines by simultaneously offering new ways in which they could cognitively interact with each other, and establishing new sources of funding to facilitate these interactions. Thus at a conference in 1960 on bionics – a discipline closely related to cybernetics – Saley, from the Air Force’s Office of Scientific Research, began his introduction by invoking a new source of funding for biological science:
The Air Force, along with other military services, has recently shown an increasing interest in biology as a source of principles applicable to engineering. The reason clearly is that our technology is faced with problems of increasing complexity. In living things, problems of organized complexity have been solved with a success that invites our wonder and admiration.
This is the logic of the claim to conjunctural universality discussed above. Saley immediately continued with a new model of scientific organization mediated by a new universal language, called in this case ‘control systems theory’:
Studies on insect vision have already shown the kind of unexpected payoff that can come from an analytical approach to what might seem at first to be a trivial problem. Dr Hassenstein and Dr Reichardt at the Max-Planck-Institut in Tübingen, Germany, have spent several years studying the response of a beetle to moving light patterns. This team consists of a zoologist, a physicist, an electrical engineer, and a mathematician; and the skills of each of these disciplines were required to formulate and carry out the series of experiments that explained the beetles’ behavior. When the results were expressed in the language of control systems theory it appeared that the beetle could derive velocity information from a moving randomly shaded background. The special mathematical formulation for the required autocorrelation had to be derived before the investigators could be convinced of their theory. The pay-off is that these workers have initiated the design of a ground-speed indicator for airplanes which works just like the beetle eye and is based directly on the compound eye of this insect. Other insects have more highly developed eyes and appear to have pattern recognition and color vision as well.[Note 43]
This new economy of the sciences challenged the traditional hierarchy, which reduced all knowledge epistemologically to physics and saw in physics research the ultimate solution to military and social problems. The clearest example of this comes in a paper by the great physicist Niels Bohr which was clearly influenced by cybernetics. Bohr argued both the conjunctural and eternal universality of a new discipline whose language would allow the transfer of ideas from biology into physics – a heretical movement in the old economy of the sciences. He began with a history that recognized the prior dominance of physics and the current dissolution of that dominance. In Antiquity, he said, all matter was seen as vital. Then along came classical physics, whose great divide between mind and matter rendered the mathematization of phenomena possible. Now, with the new physics, no such divide was possible.[Note 44]
This ideal ‘realization’ was complemented by a conjunctural change in the language available to science:
Recent advances in terminology, and especially the development of automatic control of industrial plants and calculation devices, have given rise to renewed discussion of the extent to which it is possible to construct mechanical and electrical models with properties resembling the behavior of living organisms.[Note 45]
The new language, with these industrial, technological roots, was the sign that a new economy of the sciences had developed:
The gradual development of an appropriate terminology for the description of the similar situation in physical science indicates that we are not dealing with more or less vague analogies, but with clear examples of logical relations which in different contexts are met with in wider fields.[Note 46]
By means of this language, biological ideas could be imported into physics: ‘the result of any interaction between atomic systems is the outcome of a competition between individual processes …’. Equally, ideas from the new physics like complementarity could be introduced into biology.[Note 47] Bohr postulated that the genetic code could be treated as a language capable of coupling physics and biology – a kind of development being explored by cybernetician Quastler before his untimely death.[Note 48] Thus the new language simultaneously described the current state of the art in industry and the natural meeting of physics and biology in the genetic code – and so marked an attempt to reorder the economy of the sciences.
In this new economy (as understood, for example, by Gordon Pask), chemists were empowered to do things that normally only engineers could do. And they could do it because they could tap into biological ideas through the mediation of cybernetics. Pask made an attempt to grow ‘an active evolutionary network by an electrochemical process’.[Note 49] This worked as follows. Take a shallow perspex dish holding a moderately conductive acid solution of a metallic salt (aqueous ferrous sulphate, or an alcoholic solution of stannous chloride), with inert platinum wire electrodes denoted A, B and X. If A is energized, highly conductive threads of metal, or dendrites, form between X and A; at the same time there is an acid back-reaction, with which the dendrite must keep pace. If you now energize B, there is growth towards A or X or both. If B is then disconnected, you get a new dendrite, which is not the same as the one that would have formed had B never been energized. This, then, constitutes a form of memory.
Pask argued that these threads of metal were all that was needed to form complex electronic objects. One could apply the principle of competition for scarce resources (the motor of biology) as a principle of competition for energy, leading to the development of a computer. Pask worked with Addison at the University of Illinois to try to do just this, using ‘total energy inflow, or, in the recent model, concentration of free metal ions as the reward variable 0’. He claimed that it was possible ‘to select those systems which have an acceptable electrical behavior and reject others’. This chemical work meant that you would not need an engineer to wire up a computer; it would be self-wiring:
Suppose we set up a device that rewards the system if, and only if, whenever a buzzer sounds, the buzzer frequency appears at the sensory electrode. Now a crazy machine like this is responsive to almost anything, vibrations included (components are made to avoid such interference) so it is not surprising that occasionally the network does pick up the buzzer. The point is this. If picking it up is rewarded, the system gets better at the job, and structures develop and replicate in the network which are specifically adapted as sound detectors. By definition, intent and design this cannot occur in an artifact made from well-specified components.[Note 50]
He summarized: ‘Given some approximation to a distributed energy storage, which is difficult but possible, a disk of solution on its own will give rise to the entire evolutionary network – connections and active devices’.[Note 51]
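Pask’s selection principle – reward the behaviour you want and let rewarded structures replicate while the rest dissolve – can be sketched as a toy loop. This is my illustrative analogue, not Pask’s electrochemistry: each ‘thread’ is reduced to a single resonant frequency, and the population size, mutation spread and buzzer frequency are all invented parameters.

```python
import random

def evolve_detectors(target=440.0, population=30, generations=40, seed=1):
    """Toy analogue of Pask's reward-driven selection.

    Threads whose resonant frequency lies nearest the buzzer
    frequency are 'rewarded': they survive and replicate with
    small random variation; unrewarded threads dissolve.
    """
    rng = random.Random(seed)
    threads = [rng.uniform(0.0, 2000.0) for _ in range(population)]
    for _ in range(generations):
        # reward = proximity to the buzzer frequency
        threads.sort(key=lambda f: abs(f - target))
        survivors = threads[: population // 2]
        # rewarded threads replicate with variation, replacing the rest
        threads = survivors + [f + rng.gauss(0.0, 10.0) for f in survivors]
    return threads

final = evolve_detectors()
# after selection, the network has adapted itself to the buzzer
assert min(abs(f - 440.0) for f in final) < 20.0
```

No thread is designed as a sound detector; detectors emerge because detection is what gets rewarded – which is Pask’s contrast with an artifact made from well-specified components.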
Just as you could replace electrical engineers with biologists and chemists, so you could replace wires with animals:
A further possibility, amusing in its own way, is an animal computer, which could be valuable for slow speed, essentially parallel data processing. Skinner once used pretrained pigeons as pattern recognizing automata in a guidance mechanism, and they have also been used in industry. Working along somewhat different lines, Beer and I have experimented with responsive unicellulars as basic computing elements which are automatically reproducing and available in quantity.[Note 52]
Thus just as concepts such as ‘mind’ and ‘purpose’ transgressed their hitherto natural boundaries, so did the scientists who dealt in those concepts.
As spokespeople for the interdiscipline which could facilitate this transgression, cyberneticians employed two closely related strands of imperialist rhetoric. First, they argued that the new universal discipline should subsume all others. Thus the Macy conferences were filled with directions for the organization of other disciplines’ research programmes. Breakthroughs that had been solidly rooted in the history of one particular discipline became cybernetic insights that should be dealt with at the general level of communication and control. A good example of this comes in Donald MacKay’s famous 1945 paper on the quantal aspects of information. MacKay summarized this work as follows:
The chief aim of the present paper has been to present … an effort to isolate the abstract concept which represents the real currency of scientific intercourse, from the various contexts in which it appears …. It has been seen … that scientific information is inherently quantal in its communicable aspects; and that the various uncertainty relations of physics, though arising in different ways, are basically expressions of this one fact. Analogies in physics emerge as identities between basic structures ….[Note 53]
Thus physics, the dominant discipline from which others had learned directly or by analogy throughout the history of science, was to become a subordinate discipline which dealt in analogies (shades of Plato’s cave!) to the real science of communication – later called cybernetics.
The second form of imperialist rhetoric functioned at an infrastructural level. It was maintained that cybernetics could play the same structural role as mathematics in the quantifiable sciences – or statistics in many of the social sciences. Thus, during a conference on biophysics and cybernetic systems in 1965, sponsored by the Office of Naval Research and the Allan Hancock Foundation, it was argued that:
It is characteristic of biology that research progresses simultaneously at various levels. To a large extent such concurrent inquiry is appropriate in view of the relative absence of quantitative theories which afford a unification of knowledge over the various levels …. In a very real sense the purpose of cybernetics is to provide a gestalt over the various levels of enquiry.[Note 54]
This support aspect often had a techno-determinist underpinning, one that has since proved extremely powerful, but has not been restricted to cybernetics. This was the argument that the computer provided a new technology that spanned all of knowledge, and that cybernetics was the disembodiment of that technology. I use the word disembodiment deliberately – it was not that the computer embodied prior cybernetics: rather, the argument tended to go that cybernetics disembodied prior technology. Thus Gordon Pask pointed out that ‘ … precisely the same arrangement of parts in the computer can represent the spread of an epidemic, the spread of rumors in a community, the development of rust on a piece of galvanized iron, and diffusion in a semi-conductor’.[Note 55] This was precisely the sort of heterogeneous list that came to be associated with much cybernetic writing.
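Pask’s heterogeneous list trades on the fact that one difference equation can be read many ways. A toy nearest-neighbour spreading update (mine, not Pask’s) makes the point: the same arrangement of parts computes ‘heat’, ‘rumour’ or ‘rust’, depending only on how we label the variables.

```python
def spread(state, rate):
    """One step of a nearest-neighbour spreading process on a 1-D
    lattice: each interior site moves towards the average of its
    two neighbours. Boundary sites are held fixed."""
    new = state[:]
    for i in range(1, len(state) - 1):
        neighbour_avg = (state[i - 1] + state[i + 1]) / 2
        new[i] = state[i] + rate * (neighbour_avg - state[i])
    return new

heat = [0.0] * 10; heat[5] = 1.0    # read the values as temperature
rumor = [0.0] * 10; rumor[5] = 1.0  # or as the fraction who have heard
for _ in range(20):
    heat = spread(heat, 0.5)
    rumor = spread(rumor, 0.5)

assert heat == rumor    # identical dynamics, different interpretation
assert heat[4] > 0.0    # the disturbance has spread outward
```

Nothing in the update rule fixes its referent; the computer’s ‘arrangement of parts’ is indifferent to whether it is simulating epidemics, rumours or rust.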
Thus cybernetics could operate either as the primary discipline, directing others on their search for truth, or as a discipline providing analytic tools indispensable to the development and progress of others. At both the superstructural and infrastructural level, the rhetoric held that cybernetics was unavoidable if one wanted to do meaningful, efficient science.
Cybernetic Strategies – Obligatory and Distributed Passage Points
The concept of the obligatory passage point[Note 56] has been a key one for the field of social studies of science. In the canonical rendition, Pasteur says to the French government: you can restore French pride following the 1870 Franco-Prussian war only by going through my laboratory and funding my research.[Note 57] The ‘obligatory passage point’ operates through a translation strategy predicated on an incommensurability between the two languages concerned. The state could not speak the language of microbes, and scientists in their discourse refused to use the language of the state. Latour’s recent Nous n’avons jamais été modernes uses Shapin and Schaffer’s work to trace the incommensurability of language between society and science back to Boyle’s creation of the modern laboratory.[Note 58] In the foundation of laboratory science, Boyle excluded all trace of social interest from the laboratory, and denied that he was talking about anything other than nature.
The obligatory passage point thus relies on a double move: the creation of an ‘inside’ for science which is understood only by scientists and contains no traces of the state, and the operation of the scientist as translator or go-between shuttling back and forth between science and society. This successful strategy was frequently employed by cyberneticians. Cybernetics received, and has continued to receive, extensive funding from the military in the United States through just this means.[Note 59] However, in this paper we have seen that cyberneticians in the period 1943-70 also employed another strategy. They constantly vaunted theirs as a cutting-edge science, which was simultaneously proving itself in all spheres (physical, social, chemical, political, microbiological … ) and proving the analytic conflation of those spheres. Cyberneticians, like other interdisciplinarians, directly appropriated both religious and political discourse, and argued that their science produced the most faithful possible description of society. Where traditional sciences operated behind the walls of the laboratory, cybernetics was everywhere you went. Where traditional sciences repudiated all possible mention of society, cybernetics proclaimed that it could produce the best possible description thereof, and that its universal truth was immediately tied to this historical conjuncture. In place of the obligatory passage point, this was the strategy of the ‘distributed passage point’.
In the field of science studies, we have often taken laboratory science with its strategies to be the canonical science. I have suggested here that we should also look to other forms of scientific practice and strategy. All sciences have gained enormous legitimacy from erecting barriers between ‘inside’ and ‘outside’.[Note 60] However, there is also an effective strategy of making the inside and the outside converge – making people more like machines and machines more like people, in Neisser’s example. Instead of the laboratory being barricaded off from the world, the world will become a laboratory.[Note 61] If the strategy succeeds, then it appears that a new universal science has been discovered. We see the truth of this universal science everywhere: in politics, in religion and in nature. The obligatory passage point is inescapable because we need to draw on the services of a translator and his or her black-boxed set of tools in order to get where we want to go. The distributed passage point is inescapable because wherever we do go (the farthest reaches of the human mind, the depths of the jungle) we will find the new universal science. Cyberneticians in the period 1943-70 created just such a science.
Steven J. Heims, The Cybernetics Group (Cambridge, MA: MIT Press, 1991) and John von Neumann and Norbert Wiener: From Mathematics to the Technologies of Life and Death (Cambridge, MA: MIT Press, 1984).
Norbert Wiener, Cybernetics: or, Control and Communication in the Animal and the Machine (Cambridge, MA: Technology Press, 1948).
Michael J. Apter, ‘Cybernetics: A Case Study of a Scientific Subject-Complex’, in Paul Halmos (ed.), The Sociology of Science, Sociological Review Monograph No. 18 (Keele, Staffs.: Keele University, 1972), 93-115.
N. Wiener, J. Bigelow and A. Rosenblueth, ‘Behavior, Purpose and Teleology’, Philosophy of Science, Vol. 10 (1943), 18-24.
A. Newell, ‘Intellectual Issues in the History of Artificial Intelligence’, in F. Machlup and U. Mansfield (eds), The Study of Information: Interdisciplinary Messages (New York: Wiley, 1983), 187-229, at 192. See also the historical account in Michael Arbib, Brains, Machines and Mathematics (New York: Springer Verlag, 2nd edn, 1987), Chapter I.
D. Haraway, ‘Signs of Dominance: From a Physiology to a Cybernetics of Primate Society: C.R. Carpenter, 1930-1970’, Studies in the History of Biology, Vol. 6 (1983), 129-219, at 179.
C. Shannon and W. Weaver, The Mathematical Theory of Communication (Urbana, IL: University of Illinois Press, 1949).
P.R. Masani, Norbert Wiener 1894-1964 (Basel: Birkhäuser Verlag, 1990), 182-86.
John E. Keto, ‘Bionics – New Frontiers of Technology through Fusion of the Bio and Physio Systems’, in Bionics Symposium (reproduced by the Armed Services Technical Information Agency, Arlington Hall Station, Arlington 12, VA: Wright Air Development Division, 1960), 7-12, at 12.
Gordon Pask, An Approach to Cybernetics (London: Hutchinson, 1961), 14.
Wiener, Bigelow & Rosenblueth, op. cit. note 4, 22.
U. Neisser, ‘Computers as Tools and Metaphors’, in C.R. Dechert (ed.), The Social Importance of Cybernetics (New York: Simon & Schuster, 1966), 71-94, at 74-75. For a general account of machines and biology in this period, see Steven J. Heims, ‘Encounter of Behavioral Sciences with New Machine-Organism Analogies in the 1940s’, Journal of the History of the Behavioral Sciences, Vol. 11 (1975), 368-73. For an account of various resultant technologies, see Corinne Jacker, Man, Memory and Machines: An Introduction to Cybernetics (New York: Macmillan, 1964).
The official Soviet Short Philosophical Dictionary (1954), cited in Masani, op. cit. note 8, 261.
Pierre Auger, in Proceedings of the First International Conference on Cybernetics (Namur, 1956), Introduction: ‘Voici maintenant qu’après l’âge des denrées et des matières, après celui de l’énergie, nous avons commencé à vivre celui de la forme’ (‘Now, after the age of commodities and materials, and after that of energy, we have begun to live in the age of form’).
Samuel Butler, Erewhon (London: 1872); Norbert Wiener, God and Golem, Inc.: A Comment on Certain Points where Cybernetics Impinges on Religion (London: Chapman & Hall, 1964), 52.
Michael Arbib, The Metaphorical Brain 2: Neural Networks and Beyond (New York: Wiley, 1989), 409.
F.H. George, Cybernetics and Biology (London: Oliver & Boyd, 1965), 8.
C.A. Muses, ‘The Logic of Biosimulation’, in Muses (ed.), Aspects of the Theory of Artificial Intelligence: Proceedings of the First International Symposium on Biosimulation, Locarno, 29 June – 5 July 1960 (New York: Plenum, 1960), 115-64, at 116.
See J. Culler, On Deconstruction (London: Routledge & Kegan Paul, 1983), and M. Seltzer, Bodies and Machines (New York: Routledge, 1992), for discussions of this theme.
Muses, op. cit. note 18, 117.
New York Times (19 December 1948), IV, 9.
New York Times (28 November 1958), 26.
Wiener, op. cit. note 15, 52.
Stafford Beer, for example, was invited to Chile to help the Allende government manage that society cybernetically.
Spencer Weart, Nuclear Fear: A History of Images (Cambridge, MA: Harvard University Press, 1988), Chapter 6, gives a good general account of views on science in this period.
Cited in David Lipset, Gregory Bateson: The Legacy of a Scientist (Englewood Cliffs, NJ: Prentice Hall, 1980), 180.
Ibid., 268: from ‘Cybernetics of Self: a Theory of Alcoholism’, written in 1968.
J.Y. Lettvin, H.R. Maturana, W.S. McCulloch and W.H. Pitts, ‘What the Frog’s Eye Tells the Frog’s Brain’, Proceedings of the Institute of Radio Engineers, Vol. 47 (1959), 1940-51, at 1941.
Michael Arbib, Brains, Machines and Mathematics (New York: McGraw Hill, 1964), 32-33.
Lawrence J. Fogel, A.J. Owens and M.J. Walsh, ‘Artificial Intelligence through a Simulation of Evolution’, in M. Mansfield, A. Callahan and L.J. Fogel (eds), Biophysics and Cybernetic Systems: Proceedings of the Second Cybernetic Sciences Conference (Washington, DC: Spartan Books, 1965), 131-55, at 148.
W. Ross Ashby, An Introduction to Cybernetics (New York: Wiley, 1956).
W.R. Ashby, ‘The Self-Reproducing System’, in Muses (ed.), op. cit. note 18, 9-18, at 18.
The phrase ‘alphabet and grammar’ comes from Lyell’s characterization of his own tool of uniformitarian analysis, which he saw as founding a new science of geology: C. Lyell, Principles of Geology (London, 1831-33), Vol. 3, 33.
S.L. Star, Regions of the Brain (Stanford, CA: Stanford University Press, 1990), 96-117. Contrast this with Levins’ classic definition of scientific truth as ‘the intersection of independent lies’: R. Levins, ‘The Strategy of Model Building in Population Biology’, American Scientist, Vol. 54 (1966), 21-31, at 21.
A.M. Uttley, ‘Conditional Probability Computing in the Nervous System’, in National Physical Laboratory Symposium No. 10, Mechanization of Thought Processes (London: HMSO, 1959), 119-47.
J. R. Pierce, Symbols, Signals and Noise: the Nature and Process of Communication (New York: Harper, 1961), 208 and 228.
Richard Bellman, ‘Control Theory’, Scientific American (September 1964), 186-200, at 186 and 200.
My thanks to Leigh Star for this point.
Harvey E. Saley, ‘Air Force Research on Living Prototypes’, in Bionics Symposium, op. cit. note 9, 41-48, at 41-43.
Niels Bohr, ‘Quantum Physics and Biology’, in Society for Experimental Biology, Symposia of the Society for Experimental Biology, Vol. 14 (Cambridge: Cambridge University Press, 1960), 1-5, at 3-4.
H. Quastler, The Emergence of Biological Organization (New Haven, CT: Yale University Press, 1964).
Pask, op. cit. note 10, 105.
Ibid., 110. For interesting developments of similar lines of thought, see G. Pask, ‘A Proposed Evolutionary Model’, in H. von Foerster and G.W. Zopf, Jr (eds), Principles of Self Organization: Transactions of the Illinois Symposium (New York: Harper, 1961), 229-54.
D.M. MacKay, ‘Quantal Aspects of Scientific Information’, Philosophical Magazine, Vol. 41 (1950), 289-311, at 309. Compare this with his later The Science of Communication: A Bridge between Disciplines (Keele, Staffs.: Keele University Press, 1961).
Mansfield, Callahan & Fogel (eds), op. cit. note 34, Foreword, v.
Pask, op. cit. note 10, 32. See also the many heterogeneous lists in Ashby, op. cit. note 35.
I am grateful to one of the anonymous reviewers of this paper for their careful reading (and rereading!) of this passage, and their suggestion of the interplay between obligatory and distributed passage points.
B. Latour, in Les Microbes: Guerre et Paix (Paris: Metailie, 1984), develops this point throughout.
B. Latour, Nous n’avons jamais été modernes (Paris: La Découverte, 1991); S. Shapin and S. Schaffer, Leviathan and the Air-Pump (Princeton, NJ: Princeton University Press, 1985).
See the discussion by Paul Edwards in ‘Technologies of the Mind – Computers, Power, Psychology and World War 2’, Working Paper No. 2 (Silicon Valley Research Group, University of California at Santa Cruz, 1986) and ‘Border Wars: The Science and Politics of Artificial Intelligence’, in Radical America, Vol. 19 (1985), 39-52.
W. Bijker and J. Law (eds), Constructing Stable Technologies (Cambridge, MA: MIT Press, 1992).
This takes a great deal of infrastructural work on both science and society – a point explored at some length in G. Bowker, Science on the Run: Information Management and Industrial Geophysics at Schlumberger, 1920-1940 (Cambridge, MA: MIT Press, forthcoming, 1993).