Andrew A. M.
What is there to know?
Cite as: Andrew A. M. (1994) What is there to know? Kybernetes 23(6/7): 104–110.
The emergence and early development of cybernetics, in the 1940s, 50s and early 60s, was largely stimulated by the desire to understand the working of the nervous system, especially the brain. This is acknowledged by Masani[1] in his masterly biography of Norbert Wiener, where he attributes the origin of cybernetic ideas to intellectual collaboration with Warren McCulloch and Walter Pitts and a number of other workers including Rosenblueth, Bigelow and von Neumann. All of these had a strong bias towards biological, and particularly neurological, investigations.
A feature of this quest was a great deal of speculation about possible mechanisms of self-organization in the nervous system. This is evident from the proceedings of a number of conferences with “Self-organizing Systems” as the nominal topic (see Andrew[2] for a review), as well as much other published material at the time. The first edition of Wiener’s famous book[3] does not mention self-organizing systems as such, but the later edition has two extra chapters with the respective titles: “On Learning and Self-reproducing Machines” and “Brain Waves and Self-organizing Systems.”
For a time it was widely believed that the development of self-organizing networks of neuron-like elements would be the route by which intelligent artefacts might be created. The term “Artificial Intelligence” (with capitals) was only introduced in the mid or late 1950s, and initially implied rejection of the neural-net approach in favour of a computer-based, strongly-symbolic alternative.
As is well known, in relatively recent years there has been a resurgence of interest in artificial neural nets. With this, and partly underlying it, has come a revival of interest in self-organization and in the benefits it could offer in showing how to achieve effective massively-parallel operation of computer hardware. From an AI viewpoint, the resurgence can be seen as confirmation that the limitations of the strongly-symbolic “classical” approach have become apparent.
Although one of the avowed aims of neural-net studies is elucidation of the working of real nervous systems, most of the work on artificial neural nets does not entail precise modelling of the living prototype. It is entirely possible that, even if model neurons had not been invented and introduced in the biological context by McCulloch and Pitts[4], [5], networks of such elements with variable input “weights” might well have been devised as a purely technological expedient. Such nets provide a useful way of implementing learning systems, essentially because they allow successive small parameter adjustments to accumulate and eventually to modify discrete patterns of response.
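The way successive small adjustments can accumulate until a discrete response pattern changes may be sketched in modern terms as a perceptron-style update rule. The sketch below is purely illustrative and is not drawn from any of the papers cited; the learning rate and training regime are arbitrary assumptions.

```python
# Illustrative sketch of learning by accumulated small weight adjustments
# (a simple perceptron rule; hypothetical, not from the cited papers).

def step(x):
    """Discrete all-or-none response of the unit."""
    return 1 if x >= 0 else 0

def train(samples, lr=0.1, epochs=50):
    """samples: list of ((x1, x2), target) pairs with binary targets."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), t in samples:
            y = step(w[0] * x1 + w[1] * x2 + b)
            err = t - y
            # Each correction is small; the discrete response pattern
            # changes only after many adjustments have accumulated.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the logical OR function from examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train(data)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])  # -> [0, 1, 1, 1]
```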
It is unfortunate that, at least in certain quarters, the main context in which the names of McCulloch and Pitts are remembered is their introduction of these simple model neurons. They were well aware that the model neurons were only a crude representation of a small subset of the properties of real neurons. The aim of the study reported in the 1943 paper[4] was to examine, purely as an aid to further speculation, the computational capabilities of networks of neurons of this unrealistic kind. At the time this should have been a useful exploratory step.
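The kind of unit treated in the 1943 paper can be paraphrased, in modern illustrative terms, as a threshold element with fixed weights. The code below is such a paraphrase and an assumption on my part, not anything appearing in the original formal calculus:

```python
# A McCulloch-Pitts-style unit: binary inputs, fixed weights, fixed threshold.
# Illustrative sketch only; the 1943 paper is a formal calculus, not a program.

def mp_unit(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of binary inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# With suitable fixed weights and thresholds the unit realizes Boolean gates,
# which is the sense in which such nets have computational capabilities.
AND = lambda a, b: mp_unit((a, b), (1, 1), 2)
OR  = lambda a, b: mp_unit((a, b), (1, 1), 1)
NOT = lambda a:    mp_unit((a,),  (-1,),  0)

print(AND(1, 1), OR(0, 1), NOT(1))  # -> 1 1 0
```

Note that nothing in the unit itself changes with experience: the weights and threshold are fixed, consistent with the absence of synaptic variability in the original theory.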
Strictly, the original theory did not allow for any kind of synaptic variability or other mechanism for self-modification of the networks, though the possibility of some such extension was mentioned in the discussion. The companion paper[5] gives a modified theory which is arguably slightly more realistic insofar as it is applicable to non-synchronous nets, but it still embodies very simple model neurons. It must be emphasized that McCulloch and Pitts were well aware of the shortcomings.
It has been argued[2] that the general approach usually indicated by reference to self-organization is potentially fruitful despite being loosely defined. The current literature abounds with reports of applications of artificial neural nets, particularly in image processing, and their value stems mainly from their capacity for improving their performance by experience. Even if the correspondence to biological nets is tenuous, it is likely that the resulting insight into the nature of the recognition task, and how performance can be improved by self-organization, will be of indirect value to biologists.
At the same time, it has to be acknowledged that the subject-area is rife with problems associated with definitions of terms. For a start, it is not easy to say precisely what should be understood by “organization,” nor what is meant by claiming to “understand” a neuronal process. It has in fact been claimed by Ross Ashby[6] that the term “self-organizing system” serves no useful purpose and should be dropped from use. (It can however also be argued that the alternative approach he favours in this particular paper is less precise than at first appears.)
In the previously-mentioned book[2] the methodological, or semantic, difficulty was acknowledged by saying that it was necessary to proceed in the spirit that someone (the late Oliver Wells, I think) characterized in ringing terms as that of:
Modern science, which not only seeks to know, but also asks what is there to know, and what do we mean by knowing?
Although the warning was sounded, its implications were not properly examined. The question “what do we mean by knowing?” has been treated in a separate article[7], and will be briefly reviewed here.
What Do We Mean by “Knowing”?
Thought processes are commonly seen from at least two distinct viewpoints that are difficult to reconcile. On the one hand, all biological processes are generally considered to result, essentially, from Darwinian evolution.
The word “essentially” has been included because the course of development has probably been influenced by factors and occurrences that are not immediately suggested by the term “evolution.” These include events that are readily seen as accidental, for example the emergence of sexual reproduction as an apparent legacy of a form of parasitism[8], [9]. Also, certain evolved features have been in the nature of heuristic rules that have been applied outside their domain of origin. For a simple example, it can be seen that curiosity is a manifestation of such a heuristic rule, since in general it is useful to collect information about the environment (but subject to some selection criteria) even when the information is of no immediate material value. The heuristic of curiosity has been generalized and is the motivation for a great deal of scientific research that is well removed from any prospect, at least in the short term, of material advantage, for example in cosmology.
Subject to these qualifications, however, there seems to be no reasonable alternative to the view that the brain is a product of evolution. Ross Ashby was fond of saying that the brain should be viewed as a “specialized organ of survival.”
This view of mental activity is not easily reconciled with the feeling of certainty with which we adhere to particular views, nor with the confidence with which we employ certain principles of logical deduction, whether expressed formally (as modus ponens or the like) or embedded in everyday use of language.
The fact that our mental capabilities have allowed successful study of a wide range of phenomena does not give them absolute validity. Even the rules governing our ratiocination have to be seen as suspect since they were evolved subject to a criterion of value to survival. At the same time, our evolution has encouraged us to see these rules as totally valid and unquestionable, and this is the basis of all scientific and philosophical discussion.
The uncertainty about the absolute validity of the rules of thought has something in common with the well-known uncertainty mentioned by Masani[10], arising from the unprovability of the assumption that the future will be like the past. There is no way of resolving either uncertainty, and the only way to proceed is to acknowledge their existence and perhaps to be a little more cautious in making assertions, but otherwise substantially to ignore them.
There is in fact very good reason for refusing to be unduly influenced by this negative, sceptical view of what can be meant by “knowing.” The inner certainty that we feel about the appropriateness of certain ways of reasoning is the basis of all science and culture. The arguments presented by Ashby[6], as well as in the book of Plotkin[11], in Darwin’s writing, and in this article, depend on ratiocination – without it we are lost.
In a certain sense, any evolutionary or adaptive process acquires knowledge of its environment. It is often said, notably by Craik[12], that a primary function of a nervous system is to form a model of its environment. However, the meaning of “model” is not precise, and there is no sharp distinction between a system that operates by forming and refining a recognizable model of a part of its environment, and one that merely learns by trial how best to interact with it. Human learning is obviously best described as operating in terms of a model if the subject is able to visualize the relevant part of the environment while remote from it. However, human learning, especially of the variety referred to as manual-skill acquisition, can certainly be highly effective without the formation of any obvious visual model. For example, many of the skills that are relevant to driving a car can be acquired without any detailed picture of the mechanisms involved. Decisions regarding optimal gear-shifts, for example, can come to be made skilfully by an operator with little idea of the internal workings of the engine or gearbox.
It has been shown by Andrew[13] that, in the case of a continuous self-optimizing controller, a device that forms an explicit model and then operates on it to derive optimal responses may be exactly equivalent to one that operates to optimize the control policy directly.
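The equivalence can be illustrated by a toy example. The following sketch is entirely hypothetical and is not the controller analysed in the cited paper: a cost J(u) = (u - u*)^2 with an unknown optimum u* is minimized either by fitting an explicit model and solving it, or by adjusting the policy u directly; both routes arrive at the same answer.

```python
# Toy illustration (hypothetical, not the controller of the cited paper):
# minimize an unknown cost J(u) = (u - u_star)**2 in two equivalent ways.

u_star = 3.7                      # hidden optimum, unknown to the controller
J = lambda u: (u - u_star) ** 2   # the cost can only be sampled, not inspected

# (a) Model-forming controller: probe the cost at two points, fit the
# assumed quadratic form explicitly, then solve the model for the optimum.
u0, u1 = 0.0, 1.0
u_model = (J(u0) - J(u1) + u1**2 - u0**2) / (2 * (u1 - u0))

# (b) Direct policy adjustment: descend the sampled cost itself,
# never forming an explicit model.
u, lr = 0.0, 0.1
for _ in range(200):
    grad = (J(u + 1e-5) - J(u - 1e-5)) / 2e-5   # numerical gradient estimate
    u -= lr * grad

print(round(u_model, 3), round(u, 3))  # both converge on 3.7
```

The model-forming route reaches the optimum in one algebraic step because it assumes the correct form of the cost; the direct route needs no such assumption but takes many small adjustments, which mirrors the trade-off discussed in the text.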
Any adaptive process can be seen as forming a model, in that its adapted state embodies a reflection of aspects of the environment. It is therefore likely that the laws of symbolic thought, or ratiocination, have a very firm basis. Their evolution reflects the fact that whatever real world, or real universe, is “out there,” these laws have proved effective in dealing with it and have been derived from it during evolution. If “knowing” is interpreted in a slightly weaker sense than that of absolute certainty, we know a lot about our environment. However, the laws of logic have to be seen as being derived empirically from the environment; the latter is such that objects and properties can usefully be given names, and these can usefully be manipulated according to the rules of language and ratiocination, and this has been “discovered” by evolution.
The emphasis on utility is probably implicit in any reference to Darwinian evolution, where utility can obviously be related to “fitness.” An interesting commentary is by Rosenbrock[14], [15] who indicates an unwarranted bias towards causal formulations of natural laws, as opposed to alternatives in terms of variational principles. Although it is not the main point Rosenbrock wants to make, the preference for causal formulations is consistent with the idea that laws are judged by their potential utility to the living agent. The alternative formulation suggests something purposive and external to the living agent, and hence appears to be less useful to the agent.
It is easy to feel tempted to encapsulate the above arguments by saying that we really know nothing, but this is readily seen as a contradiction, or paradox, since the assertion itself is a claim to know something. The fact that such a contradiction is seen as unacceptable emphasizes the strong faith in ratiocination ingrained in all of us.
Since the basis of scientific conviction is itself mysterious and essentially empirical, scientists are presumptuous when they lump the alternatives under the heading of “mysticism,” with a distinctly derogatory implication. Not all scientists take the same view; Stafford Beer[16], [17] is a notable exception. In a recent paper, Charles Musès[18] defends the attention to aspects commonly referred to as mystical by saying that only “a superficial materialist-mechanist bent of mind” would use the term as one of opprobrium in describing the profounder aspects of the contributions of Wiener, McCulloch and Beer.
I find it difficult to relate this view to the published works of Wiener and McCulloch, in which assertions are defended in essentially the terms of ordinary scientific argument. This is not to say that the viewpoints were not influenced by studies of the classics and of alternative religions and philosophical systems. I can remember Warren McCulloch saying of one such system: “It’s wrong, of course, but it’s gloriously wrong and should be preserved.”
At the same time, there can be no doubt that abstract principles of religion are accepted by particular individuals with a degree of conviction that must rival or exceed the belief of the scientist in what is understood by “scientific method.” Martyrs have gone to the stake rather than renounce quite abstract beliefs. Also, the most blatant of agnostic materialist-mechanist scientists will usually admit that there are certain principles of a humanitarian nature that he or she holds dear, although they cannot be derived by argument.
There is also an element of something akin to religious faith in certain tacit assumptions by scientists about the consistency of the natural universe they study. A rather clear statement of this appears in Einstein’s semi-popular account of Relativity[19] in which, in introducing the General Theory, he comments that two apparently similar systems should behave similarly; if they do not there is something to be elucidated. He illustrates the point by referring to two pans of water on a gas range; if one is boiling and the other not, most observers will cease to be surprised if they find a gas flame under one and not under the other, even if they have never seen gas flames nor boiling water previously. However, if the two pans are behaving differently with no observable difference in conditions, there is something to be investigated. (The argument is used in explaining the need to extend the theory so as to unify the treatment under accelerating and non-accelerating frames of reference.)
Scientific Method
Most scientists would contend that accepted scientific methodology has a reassuring set of built-in safeguards that seem to be absent from religions. For example, in analysing the results of experimental studies it is customary to insist that certain well-established statistical tests are employed. This means that the criteria by which the significance of the result will be judged are agreed on before the result is actually obtained. It cannot be argued that this eliminates subjectivity, especially since the nature of probability is controversial, but it imposes a useful constraint on the experimenter.
One reason that it is important to place a constraint on the experimenter is that he has, inevitably, a wish to find a surprising, or “breakthrough” result which will justify the time, effort and resources that he has expended. Without the help of statistics it is virtually impossible to avoid being influenced by this, either by being too ready to accept results as significant, or by recognizing the danger and over-compensating by being unduly cautious.
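The discipline of agreeing on the criterion before the result is obtained can be made concrete with a small hypothetical example: an exact one-sided binomial test in which the significance level is fixed in advance, and only then is the p-value of the observed data computed. The particular numbers are invented for illustration.

```python
# Hypothetical example of a pre-specified statistical criterion: the
# significance level is fixed BEFORE the experiment is run; the data
# are then judged against it, not the other way round.
from math import comb

ALPHA = 0.05  # criterion agreed on before any data are collected

def binomial_p_value(successes, n, p0=0.5):
    """One-sided exact binomial test: P(X >= successes) under H0: p = p0."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(successes, n + 1))

# Invented experiment: 16 successes in 20 trials against a chance rate of 0.5.
p = binomial_p_value(16, 20)
print(p < ALPHA, round(p, 4))  # -> True 0.0059
```

Because ALPHA was chosen first, the experimenter cannot drift towards a more lenient (or, through over-compensation, a more severe) criterion after seeing the result.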
There is probably no unique “good scientific method” in any field of investigation, but there would probably be general agreement over examples of poor, or sloppy, science. (Kenneth Clark[20] made a somewhat similar comment in one of his broadcasts, saying that he could not precisely define “civilization,” but he could indicate regimes that would be readily accepted as counterexamples.)
Regarding the areas denoted by religion and mysticism, it is probably fair to say that the misgivings of many scientists arise from the apparent lack of any generally-agreed criteria by which to separate the sincere from the sloppy, or even from the mendacious.
Goethe as Scientist
An amusing and instructive account of a clash of interpretations is in a little book by Sir Charles Sherrington[21]. The poet Goethe saw himself as a student of science, more even than as a poet, and yet he was unable to accept Newton’s discovery that white light could be decomposed into the colours of the spectrum, and that the colours could be combined to form white light. The poetical idea of white light as elemental was so strong that Goethe could not bring himself to see it as divisible.
Of course, if Goethe had confined his comment to the psychological, or even the physiological, effects of the light as perceived, his view was probably defensible. Unfortunately he failed to make the distinction, and went to enormous lengths to discredit Newton’s experimental findings, even accusing Newton of dishonesty. Perhaps the lesson is that scientific method and emotional reactions each have their place, but mixtures of the two can be dangerous.
Conclusion
The discussion began as an attempt to be precise about the general idea of self-organization, especially in its application to nervous activity. There is a worrying circularity in the fact that what we see as interesting kinds of organization is conditioned by our own evolution and experience, in which we are participants in a similar process of self-organization. It is not clear that brains can ultimately understand brains, though the question clearly hinges on the meaning attached to “understand.” Traditional scientific methods have successfully unravelled much of the working of the lower levels of sensory analysis in the brain, and there is no obvious barrier to the continuation of the study some way into higher levels.
Since what we regard as scientific method, and as ratiocination, have a strong subjective content, it has been necessary to consider their relationship to other studies, particularly those subsumed under religion and mysticism.
Although little direct reference has been made to the views of Norbert Wiener, it is a compliment to the subject-area that he formally inaugurated that it is seen as the appropriate forum for the discussion of these profound issues that are basic to all science.
References
1. Masani, P.R. (1990) Norbert Wiener 1894-1964, Birkhäuser Verlag, Basel.
2. Andrew, A.M. (1989) Self-organizing Systems, Gordon and Breach, New York, NY.
3. Wiener, N. (1948) Cybernetics, John Wiley & Sons, New York, NY, Reprinted in 1961.
4. McCulloch, W.S. and Pitts, W. (1943) “A Logical Calculus of the Ideas Immanent in Nervous Activity,” Bulletin of Mathematical Biophysics 5: 115-133.
5. Landahl, H.D., McCulloch, W.S. and Pitts, W. (1943) “A Statistical Consequence of the Logical Calculus of Nervous Nets,” Bulletin of Mathematical Biophysics 5: 135-137.
6. Ashby, W.R. (1962) “Principles of the Self-organizing System,” in von Foerster, H. and Zopf, G.W. (Eds), Principles of Self-organization, Pergamon, Oxford, pp. 255-278.
7. Andrew, A.M. (forthcoming) “What Do We Mean by ‘Knowing’?,” Cybernetica (Namur).
8. Andrew, A.M. (1991) “Continuity and Artificial Intelligence,” Kybernetes 20(6): 69-80.
9. Rose, M. and Doolittle, F. (1983) “Parasitic DNA – The Origin of Species and Sex,” New Scientist 98: 787-789.
10. Masani, P.R. (1992) “The Illusion that Man Creates Reality – A Retrograde Trend in the Cybernetical Movement,” Kybernetes 21(4): 11-24.
11. Plotkin, H. (1994) The Nature of Knowledge: Concerning Adaptations, Instinct and the Evolution of Intelligence, Allen Lane, The Penguin Press, London.
12. Craik, K.J.W. (1943) The Nature of Explanation, Cambridge University Press, Cambridge.
13. Andrew, A.M. (1967) “To Model or Not to Model,” Kybernetik 3(6): 272-275.
14. Rosenbrock, H. (1990) Machines with a Purpose, Oxford University Press, Oxford.
15. Andrew, A.M. (1992) “Machines with a Purpose” (review), Kybernetes 21(2): 69-71.
16. Beer, S. (1993) “Requiem,” Kybernetes 22(6): 105-108.
17. Andrew, A.M. (1993) “Stafford Beer – Personal Reminiscences and Reflections,” Kybernetes 22(6): 60-73.
18. Musès, C. (1994) “Recollections of Norbert Wiener, Warren McCulloch and Stafford Beer,” Kybernetes 23(2): 58-62.
19. Einstein, A. (1944) Relativity: The Special and General Theory, 13th ed., Methuen, London.
20. Clark, K. (1969) Civilisation, BBC and John Murray, London.
21. Sherrington, C. (1949) Goethe on Nature and on Science. Cambridge University Press, Cambridge.