CEPA eprint 2915

Warren McCulloch’s search for the logic of the nervous system

Arbib M. A. (2000) Warren McCulloch’s search for the logic of the nervous system. Perspectives in Biology and Medicine 43(2): 193–216. Available at http://cepa.info/2915
Table of Contents
Introduction
Towards a Biography
The Formative Years
The Emergence of a Theory of the Logic of the Brain
The MIT Years
Immanuel Kant and the A Priori
A Selection of Key Papers
A logical calculus of the ideas immanent in nervous activity
How we know universals: The perception of auditory and visual forms
What the frog’s eye tells the frog’s brain
Redundancy of potential command
Reliable computation by unreliable neurons
Does the brain have a logic?
References
Introduction
The 1940s saw the beginning of the computer age, building on the logical investigations of the 1930s, and leading to the development of artificial intelligence in the 1960s and of neurally inspired paradigms for adaptive, parallel computing in the 1980s and 1990s [1]. Among the key players in these developments was the neurologist Warren Sturgis McCulloch (1898-1968). With Walter Pitts [2], he showed how to formalize the brain as a network of neurons viewed as logical processing elements – a key element in the definition of the classical stored-program computer architecture devised by John von Neumann, and in the cybernetics of Norbert Wiener. With the addition of learning rules built on the ideas of Donald Hebb and Frank Rosenblatt, this formalization also led to the resurgence of artificial neural networks as a new computing technology from the mid-1980s onwards, a resurgence coupled with that in the computational modeling of the brain, thus closing the circle back to McCulloch’s original source of inspiration.
Certainly, McCulloch was concerned with computer technology, as is reflected in his concerns for reliable computing from unreliable neuron-like elements and for redundant, distributed computing by larger modules. However, throughout his life he was driven less by the demands of technology than by the quest to understand how we think. Specific experimental techniques were always secondary to the basic questions: what is the logic of thought? what is a person? what is a man that he may know a number? As a young man worrying about the fundamental questions of philosophy, metaphysics, and epistemology, McCulloch set himself the goal of developing an “experimental epistemology”: how can one really understand the mind in terms of the brain? More particularly, he sought to discover “A Logical Calculus of the Ideas Immanent in Nervous Activity” [3]. The present paper will seek to provide some sense of McCulloch’s search for the logic of the nervous system, but will also show that his papers contain contributions to experimental epistemology which provide great insight into the mechanisms of nervous system function without fitting into the mold of a logical calculus. Moreover, McCulloch was not only a scientist but also a storyteller, poet, and memorable “character.” I will thus interleave a number of characteristic anecdotes into the more objective attempts at scientific history that follow.
Towards a Biography
Before providing a biographical sketch of Warren McCulloch, let me say a few words on sources for this material. The primary source is my own experience. I read McCulloch’s papers as an undergraduate at Sydney University; worked with McCulloch’s group while I was a graduate student at MIT from January 1961 to September 1963; met McCulloch a number of times thereafter; know and have worked with many of his colleagues; and have reflected on his work in making my own contributions to cybernetics and computational neuroscience. Several publications have proved particularly useful in supplementing my personal perspective. Heims’s book describes the Macy Foundation meetings on cybernetics and devotes many pages to biographical information concerning McCulloch, who was chairman of the series [4]. McCulloch’s reflections on his intellectual development and its relation to the development of cybernetics provide an autobiographical view [5]. His wife Rook and his protégé and long-time friend Jerry Lettvin add important personal perspectives to the biography [6-8]. In addition, I benefited from many conversations with, and presentations by, friends and colleagues of McCulloch who took part in the 1995 meeting in Gran Canaria entitled “An International Conference in Honor of W. S. McCulloch 25 Years After His Death.” I wish to record here my debt to Roberto Moreno-Diaz for organizing this conference and for his hospitality in Las Palmas. In what follows, if no source is given, the material is either my own, or is from Heims or McCulloch [4], [5]; other sources will be cited as appropriate. The reader seeking a selection of McCulloch’s scientific papers, as well as general essays and even poems, may turn to the collection Embodiments of Mind [9].
The Formative Years
McCulloch’s first college was Haverford, and in talking to the philosopher Rufus Jones there, he posed the question that would shape much of his life work: “What is a number that a man may know it and a man that he may know a number?” [10] When Rufus heard this question, he said, “Friend, thee will be busy as long as thee lives” [5]. And, indeed, McCulloch was!
However, Haverford was a Quaker college, and when America joined World War I, McCulloch, given a family history of patriotism, wanted to join the Navy. He therefore moved to Yale University, where he joined the Officers’ Training Program. There he divided his time between officers’ training courses and service on a ship, combining “marlin spike sailing” and signaling by semaphore. Perhaps some of his ideas about coding in the nervous system were shaped by his concern for coding messages and transmitting them from ship to ship. Another idea from the World War I Navy, to which we will return, was what he refers to as “redundancy of potential command.” In a naval battle, there are many ships widely separated at sea, and normally command rests in the ship with the Admiral. But if some fighting breaks out or some crucial information becomes available locally, then, temporarily, the ship that has that information is the one in command. This notion of redundancy of potential command, rooted in McCulloch’s experience in World War I, came in the 1960s to yield the view that the nervous system is not to be seen as a pure hierarchy but rather operates by cooperative computation.
After the war, McCulloch continued at Yale, majoring in philosophy with a minor in psychology. The study of philosophy was crucial for McCulloch’s quest. Descartes was very important to him. McCulloch was lucky in that he read, not the usual edition of Descartes which leaves out the physiology, but rather the version that included the results of eight years of dissection that Descartes had carried out. Descartes elaborated the idea that nerve fibers were hydraulic tubes, controlled by valves in the brain to carry commands down to the muscles, but that inside each hydraulic tube there was a little string so that when the muscle had contracted sufficiently, it could pull the string to signal back to the brain to close the valve. As McCulloch noted, that is perhaps the earliest example of feedback in the nervous system [5]. Descartes also had the idea that the eye would send back pictures to the brain over many hydraulic fibers in parallel, but he was concerned that there would not be enough fibers to carry everything about the picture at once – and so came up with the idea of temporal coding of impulses. So already in his study of Descartes, McCulloch found some very important ideas to feed into experimental epistemology.
Kant and Leibniz were also important to McCulloch’s early studies and later thinking, especially Kant’s notion of the synthetic a priori. To oversimplify: according to Kant, we make sense of the world in terms of Euclidean geometry because this knowledge is necessary (a priori) to any form of understanding and reason, rather than being something gained through experience. Although he does not draw the distinction sharply, Kant distinguished the analytic a priori (perhaps co-extensive with the tautologies of logic) from the synthetic a priori (all other a priori knowledge, such as the truths of Euclidean geometry). The question remains: even equipped with this a priori knowledge, how do you connect that knowledge via the senses to your everyday experience? This question continued to inform McCulloch’s thinking. (I shall offer a somewhat more responsible gloss on Kant’s ideas, especially that of the schema, in a later section, and will show how the idea of the “biological synthetic a priori” emerged as part of experimental epistemology.)
Leibniz had written the Monadology and – perhaps most interestingly for those who know about debates on artificial intelligence – in writing about perception, had asked the reader to imagine magnifying the insides of the head more and more until it was so large that one could walk right through it as through a mill (in the sense of a place for grinding flour). The 17th paragraph of the Monadology begins:
Moreover, it must be confessed that perception and that which depends upon it are inexplicable on mechanical grounds, that is to say, by means of figures and motions. And supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception. ([11] provides the English translation)
We may see that Leibniz anticipated John Searle’s Chinese room paradox by many centuries [12] (see [13] for a critique).
Another important influence was Jay Hambidge of the Yale Art School, of whom McCulloch states that “in 1923, I took my Master’s degree in psychology with a thesis on which I had started out of incredulity of Hambidge’s assertion that root rectangles and the Golden Section are aesthetically preferred by most people” [5]. Hambidge also got McCulloch counting the pips in pinecones. If you count the way the pips form consecutive rows around the pinecone, they form a Fibonacci series. McCulloch says that this is what taught him to count. And thereafter, he would count many things, such as how many threshold functions there are of a certain kind, how many neurons there are with so many inputs and outputs, and so on.
Given all this background, it is hardly surprising that McCulloch determined to equip himself to understand the physical nature of perception and thought, and so he went to the College of Physicians and Surgeons in New York, getting his M.D. in 1927, precisely so that he could begin to learn the physiology needed to understand how indeed a man could know a number. He went from medical school to a residency at Bellevue, and there saw the complexities of what can go wrong with the brain. At Bellevue he treated people with terrible skull injuries, spinal damage, etc., and a few years later at Rockland State Hospital, he worked with the insane. He felt that anybody who wishes to understand the brain should work with the insane – understanding the limits of the mind to better challenge one’s search for the underlying brain mechanisms.
As another stage in McCulloch’s search for the logic of the nervous system, he began thinking about loops in the nervous system. It was his claim (probably mistaken) that if you looked through Cajal’s anatomy, there was no definite proof of loops in the nervous system. In any case, ideas about reflexes certainly dominated analysis of the nervous system at that time. By contrast, there were various neurological conditions he came upon, such as the constant pain of causalgia, for which he felt reverberating loops of neurons provided the best explanation. It was at about this time that Lorente de No came up with his studies of cerebral cortex – post-Cajal, but very much in the spirit of Cajal – which made explicit the possibility of loops in the nervous system, an idea that today we take for granted. And “in spite of the busy life of the intern, I was forever studying anything that might lead me to a theory of nervous function. My fellow intern, Samuel Bernard Wortis, accused me of trying to write an equation for the working of the brain. I am still trying to!” [5]
In the depths of the Depression, money rather than scientific choice dominated the search for employment. In the years 1929 to 1931, McCulloch studied mathematics and mathematical physics, and taught physiological psychology at Seth Low Junior College in Brooklyn, New York. He next worked at Rockland State Hospital “to earn money,” and there he met Eilhard von Domarus. With von Domarus, he learned how to think about madness, language, and consciousness.
With this, we may conclude our view of the first phase of McCulloch’s intellectual development, the formative years before his real career as a research scientist began. We can already see here his questing mind, as he equipped himself through his philosophical explorations as well as through medical school and residency to understand how to map the fundamental questions of epistemology and metaphysics onto the function of the brain.
The Emergence of a Theory of the Logic of the Brain
The next phase is the one where the McCulloch of his most famous publications begins to emerge. In 1934, he finally got the real research job that he wanted. It was back at Yale with Dusser de Barenne, who had come to Yale from Holland. In fact, Holland played a very important role in McCulloch’s later life: he had many friends there, and he began to claim that not only was he a Scot (as well as a patriotic American), but also that he was, by adoption, half Dutch. In de Barenne’s lab he mastered a very interesting approach to doing neuroanatomy in a functional way. De Barenne’s technique of strychnine neuronography was to put a very small amount of strychnine in a particular place in the brain, which would yield a sequence of volleys originating in that part of the brain, and then pick out the targets of these volleys with a galvanometer. Lettvin notes that the technology of the time was just bad enough to make this method work [14]. Many secondary and antidromic volleys would be set off by the strychnine, and if the equipment were too good, it would show the targets of all these volleys. But, in fact, the equipment of the 1930s was just good enough to show the primary volley resulting from the strychninization and thus yield very reliable maps of how different parts of the monkey brain are connected. These connectivity data remain valid today – although, of course, with current tracing and double labeling techniques and so on, neuroanatomists can now perceive more exquisite details of connectivity than strychnine neuronography could reveal.
In addition to the work in functional neuroanatomy, McCulloch was continuing to pursue philosophical questions, such as “what could be the logic of the brain?” There were many seminars on philosophy of science and logic going on at Yale at that time. Turing’s paper on computable numbers and what we now call universal Turing machines came out in 1936 [15], and there was much discussion of Gödel’s result [16], a critique of the Principia Mathematica of Whitehead and Russell – itself an attempt to show that the concept of number could be reduced to set theory and that set theory could be reduced to logic. Gödel showed that you could not get all the truths about numbers as theorems in any Principia Mathematica-like system. Still the question, of course, that really intrigued McCulloch was this: if we have in Principia Mathematica an attempt to reduce all the complexity of number down to logical propositions, could one then see how to map logical propositions into the brain? Fitch, who was the expert on mathematical logic at Yale at the time, was not able to meet that challenge, but McCulloch kept analyzing layers of neurons, and kept worrying about loops of neurons because he had not yet got the concept of delay – if you negate something as you go around the loop then the input must equal its negation, and how can that be?
In 1941, McCulloch moved to the Neuropsychiatric Institute of the University of Illinois in Chicago, where he remained until 1952. I will not detail here his concerns as director of the lab there, since for us the important thing is that at the University of Chicago at that time was the Committee on Mathematical Biology, run by a red-bearded Russian named Nicolas Rashevsky. McCulloch came to Rashevsky’s seminar to present his three-quarters-baked ideas about the logic of the brain. And there he met an amazing young man named Walter Pitts, who was at that time a protégé of the eminent mathematical logician Rudolf Carnap. (See [2] for a view of the life of Walter Pitts and how he came to the University of Chicago.) Pitts immediately saw how one might apply Carnap’s formalism to McCulloch’s ideas about the logic of the nervous system.
Walter Pitts was, in 1941, an incredibly brilliant, incredibly mixed-up adolescent, essentially a runaway from a family that could not appreciate his genius. Warren and Rook McCulloch took Walter into their home, as they did many other people, including Jerry Lettvin who came to live with the McCullochs at the same time. There followed endless evenings sitting around the McCulloch kitchen table trying to sort out how the brain worked, with the McCullochs’ daughter Taffy sketching little pictures which later illustrated “A Logical Calculus of the Ideas Immanent in Nervous Activity,” published in 1943 in Rashevsky’s journal The Bulletin of Mathematical Biophysics [3]. Taking account of the delay in each neuron’s response to its inputs – and thus resolving McCulloch’s concerns about the possible temporal paradoxes in neural networks containing loops – was a key ingredient in the eventual development of a logical calculus of neural activity. In the second most famous paper of their collaboration, Pitts and McCulloch gave a study of “How We Know Universals,” a theoretical construction of neural networks for pattern recognition that showed how visual input could control motor output via the distributed activity of a layered neural network without the intervention of executive control [17].
World War II was a truly epochal time in the history of American science – a time in which Warren McCulloch was certainly one of the important players. Much of the cream of European science flowed to the United States from a Europe shattered by the growth of Nazism in the 1930s. They became part of an incredible outgrowth of science and engineering, catalyzed by the funding of applied research during the war that established American leadership in science, and also established the idea of federal sponsorship of research. One of the things that came out of learning how to move people back and forth between basic research, applied research, and the war effort was an outburst of interdisciplinary research. Quite an amazing administrator, Frank Fremont-Smith, worked for the Josiah Macy Jr. Foundation, where he used his position to advance science by bringing together people from different disciplines. Before pursuing this strand further, however, we must introduce two key players in the history of cybernetics, John von Neumann and Norbert Wiener, among the very few mathematicians of such pre-eminence as to be honored posthumously with memorial issues of the Bulletin of the American Mathematical Society.
Jerry Lettvin moved from Chicago to do his residency in neurology in Boston and there got to know Norbert Wiener, who had been putting a lot of effort into trying to relate his wartime work on control and communication to the brain. Lettvin told Wiener about Pitts, and eventually Pitts moved to MIT to work with Wiener. Wiener and McCulloch both attended a meeting arranged in the winter of 1943-1944 to bring together engineers and biologists. The idea of the meeting was to see what biologists could teach the technologists about the notions of signal processing, computation, and communication. The first stored program computer had not yet been built, but the participants already had a lot of experience with amplifiers, feedback, stability, and so on. McCulloch recalls that at this meeting:
Lorente de No and I, as physiologists, were asked to consider the second of two hypothetical black boxes that the allies had liberated from the Germans. No one knew what they were supposed to do or how they were to do it. The first box had been opened and exploded. Both had inputs and outputs, so labeled. The question was phrased unforgettably: “This is the enemy’s machine. You have to find out what it does and how it does it. What shall we do?” By the time the question had become that well defined, Norbert was snoring at the top of his lungs and his cigar ashes were falling on his stomach. But when Lorente and I had tried to answer, Norbert rose abruptly and said: “You could of course give it all possible sinusoidal frequencies one after the other and record the output, but it would be better to feed it noise – say white noise – you might call this a Rorschach.” Before I could challenge his notion of a Rorschach, many engineers’ voices broke in. Then, for the first time, I caught the sparkle in Johnny von Neumann’s eye. I had never seen him before and I did not know who he was. He read my face like an open book. He knew that a stimulus for man or machine must be shaped to match nearly some of his feature-filters, and that white noise would not do. There followed a wonderful duel: Norbert with an enormous club chasing Johnny, and Johnny with a rapier waltzing around Norbert – at the end of which they went to lunch arm in arm. The later part of this meeting was spent listening to engineering and mathematics appropriate to these problems. We all agreed that there should be an interdisciplinary gathering of this kind more often. [5]
The result of all this experience came to be a series of 10 meetings set up by the Macy Foundation, for which McCulloch served as chairman. The first five were not published; the last five were. They typified the successes as well as the difficulties of bringing together biology and technology. Out of such interactions grew cybernetics – the study of control and communication in animal and machine – which was officially christened when Wiener published his book of that name in 1948, and which then gave its name to the last five of the Macy meetings (see, e.g., [18]). In reminiscing about this, McCulloch says that at the first few meetings, because people were in different disciplines, they did not know how to talk to each other and some people would leave the meeting crying: “Margaret Mead said she broke a tooth and did not notice it.” These meetings continued until 1953. Some idea of how interdisciplinary the mix was is given by the fact that the roster included not only McCulloch, Pitts, Wiener, and von Neumann, but also the anthropologists Margaret Mead and Gregory Bateson, Claude Shannon (the father of information theory), Wolfgang Köhler (the gestalt psychologist), and Heinrich Klüver (well known for the Klüver-Bucy syndrome, and for his book seeking to deduce brain mechanisms from the effects of taking mescaline). An exhaustive analysis of these meetings together with much interesting biographical detail is provided by Heims [4].
Another member of the Macy group was Lawrence Kubie, who entered the life of McCulloch in two different ways. One is that he published in Brain a theoretical paper on the possible importance of reverberating loops and thus set part of the stage for the 1943 theory [19]. The other is that he was the proponent of psychoanalysis at the Macy meetings, and McCulloch hated psychoanalysis with a passion. He felt that we should build a psychiatry based on drugs used in accordance with an understanding of the physical basis of mental disorder, and that all this talk about Freud was just pseudoscience. He ended up publishing a diatribe against psychoanalysis whose title, “The Past of a Delusion” [20], was a parody of Freud’s title “The Future of an Illusion.” This upset Kubie, who immediately began to psychoanalyze what problems in McCulloch’s psyche had led him to publish so misguided a work. (Heims offers a lengthy discussion of Kubie’s role in the Macy meetings and his relationship with McCulloch [4].)
The MIT Years
Now we come to the last period of McCulloch’s life, which takes us from the move to MIT’s Research Laboratory of Electronics in 1952 to his death in 1968. Let me start with a story of the breakup with Norbert Wiener which occurred at about the time of the move to MIT. I arrived in 1961 at MIT to do my Ph.D., and I very quickly found myself with a research assistantship from McCulloch but doing a Ph.D. thesis with Norbert Wiener. McCulloch warned me, “Do not tell Norbert that you are working with me,” and he told me an imaginative tale as to why Wiener had broken with him. A couple of years went by, my Ph.D. was completed, and as I was about to leave MIT I went to say good-bye to various people, including Wiener. As I sat in Wiener’s office for the last time, he said, “What else have you done while you were at MIT?” and I foolishly thought “Oh, it can’t do any harm,” and said “I spent quite a bit of time with Warren McCulloch’s group.” He got absolutely apoplectic with rage: “That man, that wretched man, that drunkard! Why, if I had the money I would buy him a case of Scotch whisky so he could drink himself to death!” That reaction seemed to me out of proportion with McCulloch’s story, so I asked various people if they knew what had “really” happened. Finally, Pat Wall, who was a member of McCulloch’s group, doing neurophysiology on the pain system, came up with the following story, which has very much the ring of truth as to the character of the protagonists, whether or not it is true as to the actual cause of the breakup. It went like this:
Norbert Wiener, flushed with the success of his book on Cybernetics, decided it was time to get really serious and go beyond general observations about feedback in the nervous system and so on, and make a serious mathematical model of the brain. So he goes to his friend Warren McCulloch and says, “McCulloch, tell me what you know about the brain.” Now, McCulloch was both a great scientist and a great storyteller, and he was not going to let the facts spoil a good story. So when he told Norbert about the brain, it was a mixture of what was known to be true and what McCulloch thought should be known to be true. If you were a naive person, this could be very dangerous. If, on the other hand, you understood the nature of the man, this was tremendously stimulating, because then you realized that it was your responsibility to figure out what was known and what was provocative speculation. It was then your challenge to do the new stuff. But, unfortunately, Norbert Wiener was “emotionally challenged.” He had been a child prodigy but, poor fellow, whenever he got a brilliant idea, his father would not say “Norbert, you are brilliant,” but “This proves my pedagogical theory. I can make even you do something brilliant.” The result of all this was that Wiener was “tone deaf” to nuances of human personality, and had an absolutely pathological need for praise throughout his life, even when established as one of the world’s great scientists. So he had no way of reading this man, McCulloch, and took everything he said about the brain to be true. He then, according to Pat Wall, spent three years of his life creating the theory that explained it all, went to a physiology congress to present the theory, and had it shot down. And again, because he was no judge of character, he thought that McCulloch had set him up – and thus the fury I experienced a decade later. I think McCulloch actually would have been happy to get the case of Scotch whisky, since he found a bottle each night a fine lubricant for his side of the conversation. McCulloch characterized himself as a cavalier to Norbert Wiener’s roundhead.
Whatever the truth of the reasons behind it, this breakup was very sad, not only because it ended the fruitful interactions between Wiener and McCulloch, but also because of its terrible effect on Walter Pitts, as movingly described by Smalheiser [2]. I experienced its effects 10 years later when, newly arrived at MIT in 1961, I went to ask Pitts some questions about the 1943 paper. Pitts was so far into delirium tremens that after about two sentences he was shaking uncontrollably, and I had to withdraw from the conversation. A tragic end to what had started out as so wonderful a collaboration.
However, in the late 1950s, Jerry Lettvin and Walter Pitts were still collaborating, so the transition in Pitts’s behavior must have been a protracted one. They took ideas from the “How We Know Universals” paper [17] and applied these to analyze the retina of the frog. The result of this work – which also involved McCulloch and the Chilean neuroanatomist Humberto Maturana, and which built on earlier insights of Hartline and Barlow – was the understanding that the retina does not just relay information to the brain, but is already processing very subtle features of the visual input that were, in some way, related to the ethology – the behavior – of the frog, with sets of these features sent in retinotopic layers to the visual tectum of the midbrain. I think this is a more important and exciting paper than the Hubel-Wiesel paper on cortical edge detectors, published about the same time, which initiated the research program that later led to David Hubel and Torsten Wiesel sharing a Nobel Prize [21]. However, Lettvin bizarrely refused to follow through with a series of well-quantified studies: once he claimed that he had just accepted a grant from NSF only on the condition that he would not be expected to publish any results – his condition, not theirs. Even though the story in their first classic paper was in some ways less exciting than that of Lettvin, et al., Hubel and Wiesel followed through and built upon it in a way which encouraged many others to follow them in building a truly impressive edifice of information about the mammalian visual system.
In the 1960s, McCulloch continued to work on a number of projects. Of these, the questions of reliability in the nervous system and the embodiment of redundancy of potential command in the brainstem reticular formation reached full published treatment (see a later section for further material), whereas the search for a logic of triadic relations did not.
Much of this work was conducted in McCulloch’s office in Room 26-027 at MIT, a large basement room with one door and no windows. There were four desks pushed together in the center of the room, while filing cases and bookcases filled with books such as the collected works of Charles Sanders Peirce and the complete Oxford English Dictionary lined all the walls, except for a small gap which contained a packing crate on which people would stand to chalk their ideas on a three-foot square blackboard. People from all over the world eventually came to this room to stand up on the box and draw their symbols on this minute blackboard. Whatever members of McCulloch’s group happened to be there at the time would sit around the desks to comment and ask questions. It was a great way to learn about science!
Warren McCulloch was indeed serious in his attention to the deep epistemic and metaphysical questions that he pursued all his life, first through the medium of clinical work, then through detailed laboratory experiments, and finally through theoretical investigation. Through all that time, there was a sparkle in the eye and concern for doing science for the fun of it. Some of his papers were very hard to read because he enjoyed literary expression – and writing a scientific paper using the diction of The Anatomy of Melancholy does not make for easy reading. He also wrote poetry. He gave me a copy of The Natural Fit [22], a collection of his poetry, with the inscription “To Arbib. For the fun of it! Warren.” He prefaces the poems as follows:
Three characters have worn the carcass you see before you, each for a score of years. Aside from an interest in communication they have little in common. I could tell you next to nothing of the first, or prospective theologian, for I doubt I would recognize him. I need say nothing of the third, your familiar scientific friend. The second was born [aged twenty-one] of a war to make the world safe for democracy and was lost in a world without song. I remember him well …
Having thus seen McCulloch define himself as a trinity, I recall one more McCulloch anecdote. One day, with piercing eyes and dramatic white beard, he was striding down the street of a Southern town when a small boy stopped him and asked “Are you the Lord?” I do not know Warren’s answer. Anyway, he was a great man.
Immanuel Kant and the A Priori
The next section will discuss a number of McCulloch’s key papers. As background, I first review a number of the ideas in Immanuel Kant’s Critique of Pure Reason, since we have seen that Kant’s philosophy was one of the key elements in McCulloch’s intellectual development. Kant considered, for example, the distinction between one’s perception of a particular wheel and the pure geometrical concept of a circle. How is it that we may intuit the pure concept of the circle from experience with a particular wheel? Kant posits that the “transcendental schema” makes possible the application of the category to the appearance. In modern terms, one might be tempted to think of such a schema as a pattern recognition device: input a visual pattern to the “circle schema” to receive a binary output – “yes, the object is circular” or “no, it is not.” But this falls vastly short of what Kant intends, in several ways:
- Simply labeling something as circular does not entail understanding circularity in, say, the sense in which the properties of circles may be richly characterized by Euclidean geometry.
- Kant distinguishes the image as “a product of the empirical faculty of reproductive imagination” from the schema as “a product … of pure a priori imagination, through which, and in accordance with which, images themselves become possible” [23]. In a famous passage, he observes that “No image could ever be adequate to the concept of a triangle in general. It would never attain that universality of concept which renders it valid of all triangles, whether right-angled, obtuse-angled, or acute-angled; … The schema of the triangle can exist nowhere but in thought. It is a rule of synthesis of the imagination, in respect to pure figures in space” [23, p. 182].
- Kant’s schemas include far more than schemas for “universals” like circles and triangles or even dogs, as we can see from such passages as: “the pure schema of magnitude (quantitas), as a concept of the understanding, is number” (p. 183); “[t]he schema of substance is permanence of the real in time” (p. 184); and “[t]he schema of cause, and the causality of a thing in general, is the real upon which, whenever posited, something else always follows” (p. 185).
- For Kant, knowledge is grounded in a priori principles: “Even natural laws, viewed as principles of the empirical employment of understanding, carry with them an expression of necessity, and so contain at least the suggestion of a determination from grounds which are valid a priori and antecedently to all experience. The laws of nature, one and all, without exception, stand under higher principles of understanding” (pp. 195-96).
We shall see resonances of Kant when we look at the contributions made in “How We Know Universals” and “What the Frog’s Eye Tells the Frog’s Brain.” In terms of current neurophysiology, we can explore the form of understanding involved in the linkage of perception and action, but we must for now relinquish the studies of such schemas as those of magnitude, substance, and cause to a psychology little constrained as yet by the data of neuroscience. However, the real disagreement with Kant is not over the shifting divide between what we can and cannot neuralize, but rather over Kant’s central notion of the a priori. Where an 18th-century philosopher could see the postulates of Euclidean geometry as a set of a priori truths, in the 20th century we see them as providing a compact basis for the inference of many facts about the geometry of limited regions of space, but from the work of Einstein and others we know them to be inadequate to describe many spatial phenomena of the physical universe. We can thus entertain the idea of Euclidean geometry as a convenient approximation to the properties of space, rather than an a priori set of truths. Moreover, we now understand that much, if not all, of the spatial behavior of animals is controlled by their brain-body-environment interactions, and that if these rest on both “nature” and “nurture,” then the nature is not one of a priori structure, but rather a contingent structure shaped by evolution through natural selection to fit a particular ecological niche. Such an “innate, species-specific nature,” moreover, is not directly expressed in adult behavior but, rather, sets a developmental pathway whose unfolding may be more or less influenced by the experience of the organism (cf. Waddington’s notion of the epigenetic landscape for the embryological perspective [24]). This leads us to look at schemas not as immutable objects expressive of a priori principles, but rather as biologically rooted entities which evolve and develop to better adapt the behavior of the animal, and the thought of the human, to its world.
We see resonances here with one of the best-known uses of the term schema, that of the Swiss developmental psychologist Jean Piaget. At the basis of Piaget’s work is a concern with action: “Any piece of knowledge is connected with an action … [T]o know an object or a happening is to make use of it by assimilation into an action schema … [namely] whatever there is in common between various repetitions or superpositions of the same action” [25, pp. 6-7]. As we act on the basis of an action schema, we do so with the expectation of certain consequences. When you recognize something, you “see” in it things that will guide your interaction with it – but there is no claim of infallibility, no claim that the interactions will always proceed as expected. Piaget talks both of assimilation, the ability to make sense of a situation in terms of the current stock of schemas, and of accommodation, the way in which the stock of schemas may change over time as the expectations based on assimilation to current schemas are not met. To the extent that our expectations are false, our schemas can change, and we learn. Piaget traces the cognitive development of the child, starting from reflexive or instinctive schemas that guide her motoric interactions with the world. Piaget sees the child starting with schemas for basic survival like breathing, eating, digesting, and excreting, as well as such basic sensorimotor schemas as suckling, grasping, and rudimentary eye-hand coordination. Objects are secondary to these primary schemas, and such schemas pave the way for more global concepts such as the schema for object permanence – the recognition that when an object disappears from view, the object still exists and is there to be searched for. This schema develops to allow the use of extrapolation to infer where a moving object that has passed from sight is likely to reappear. Piaget argues that such schemas lead to further development until the child has schemas for language and logic – for abstract thought – which are no longer rooted in the sensorimotor particularities. The later stages bring to the child schemas such as those of magnitude, substance, and cause posited by Kant, but they are now the outcome of a developmental process rather than the direct embodiment of the a priori. For this reason, Piaget has referred to his work as genetic epistemology (to which we may contrast McCulloch’s brain-centered study of experimental epistemology). Elsewhere, I have discussed “schema theory” as a language in which one may analyze the full range of mental function, whether or not it can be related to neuronal function [26]. While showing that schema theory is well developed in cognitive psychology, where thought and behavior are viewed “from the outside,” it emphasizes an approach to schema theory which builds on the work of Warren McCulloch to bridge from the external characterization of function to the interactions of brain regions and the details of neural circuitry [27], [28].
A Selection of Key Papers
With this biographical and philosophical background, we now provide a brief review of the ideas and contributions of a small selection of McCulloch’s papers.
A logical calculus of the ideas immanent in nervous activity
In 1943, McCulloch and Pitts published “A Logical Calculus of the Ideas Immanent in Nervous Activity” [3]. In 1936, Alan Turing had published a formal model of computation [15]. Essentially, Turing’s machine was a box with a finite program in it which controlled how it would interact one square at a time with an indefinitely extendable tape which carried the code for both the input and, if and when the computation halted, the output of the computation. Besides proving several important theorems, Turing provided an informal argument supporting the claim that anything that could be done by a human following a set of rules could be done by the kind of machine he described. (In 1936, a computer was still a human using a calculating machine and following a set of rules, rather than a machine.) We now say “anything that is effectively computable can be done by a Turing machine.” The 1943 paper presented nets of formal neurons, showing how different patterns of excitatory and inhibitory connections among neurons, including connections that form loops, could realize a great variety of different functions. Whenever there was a loop, the loop passed through a neural cell body which had a delay in it, and so the paradox that had kept McCulloch awake back in the 1930s was solved. The paper is very difficult to read because Pitts had used an almost impenetrable notation borrowed from Carnap, and there are some errors in the paper, though the overall results are correct. Basically, McCulloch and Pitts proved that any finite-state control box for a Turing machine could be replaced by a network of their formalized neurons. And so, in a way, one can say that Turing provided the “psychology of the computable” whereas McCulloch and Pitts provided the “physiology of the computable.”
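To make the 1943 formalism concrete, here is a minimal sketch in Python (the gates and the one-neuron loop, and all the names, are illustrative choices rather than constructions taken from the paper): a formal neuron fires if and only if no inhibitory input is active and the number of active excitatory inputs reaches its threshold, and every neuron imposes one time step of delay – the delay that dissolves the apparent paradox of loops.

```python
# A minimal sketch of a McCulloch-Pitts formal neuron, assuming absolute
# inhibition and unit delay as in the 1943 formalism; the gates and the
# one-neuron loop below are illustrative, not taken from the paper.

def mp_neuron(excitatory, inhibitory, threshold):
    """Fire (1) iff no inhibitory line is active and the count of
    active excitatory lines reaches the threshold."""
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Logic gates realized by single neurons:
AND = lambda a, b: mp_neuron([a, b], [], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [], threshold=1)
NOT = lambda a:    mp_neuron([1], [a], threshold=1)   # a constant input gated by inhibition

# A loop with unit delay: a neuron that excites itself latches a transient
# input "on"; the one-step delay is what removes the apparent paradox of a
# signal that would otherwise have to equal its own negation.
def latch(inputs):
    state = 0                        # activity of the looping neuron at time t
    history = []
    for x in inputs:
        state = mp_neuron([x, state], [], threshold=1)   # its activity at time t+1
        history.append(state)
    return history

print(AND(1, 1), OR(0, 1), NOT(1))   # -> 1 1 0
print(latch([0, 1, 0, 0, 0]))        # -> [0, 1, 1, 1, 1]
```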
This work, together with that of Turing and others who charted “effective computability,” laid the basis for the new field of “automata theory,” which found its first full expression in the 1956 book Automata Studies, edited by Claude Shannon and John McCarthy [29]. One of the contributors to this volume was Marvin Minsky, who had done a Ph.D. thesis on neural networks under the supervision of John von Neumann. Minsky is best known as a pioneer of AI, but he has always acknowledged his debt to McCulloch’s work and his quest for an experimental epistemology.
How we know universals: The perception of auditory and visual forms
Pitts and McCulloch’s study of “How We Know Universals: The Perception of Auditory and Visual Forms” is a classic in the study of pattern recognition [17]. This paper extended the network-building ideas of 1943 in three ways: the first was to go back to the neuroanatomy and see that the brain is not a random network, but is structured and layered; the second was to think more subtly about perception; while the third was to show how visual input could control motor output via the distributed activity of a layered neural network without the intervention of executive control, perhaps the earliest example of “cooperative computation.”
Gertrude Stein said “A rose is a rose is a rose,” but Pitts and McCulloch were more concerned with “A triangle is a triangle is a triangle.” This problem of universals brings us back to the Kantian concern with how a particular triangle could be subsumed under the “universal” category of triangles. Recall the earlier quote from Kant: “No image could ever be adequate to the concept of a triangle in general. It would never attain that universality of concept which renders it valid of all triangles, whether right-angled, obtuse-angled, or acute-angled; … The schema of the triangle can exist nowhere but in thought.” Pitts and McCulloch showed, rather, how the “schema for a universal” could exist in the brain, in specific neural circuitry rather than abstract thought. They came up with two basic approaches.
The first approach generalizes the fact that you can move your gaze to center on the middle of a visual pattern. Changing location is just one example of a “group of transformations,” and the task of centering the gaze is one example of finding the right transformation in a group (which might include rotations and magnifications as well as translations) to bring a pattern to “standard form.” In seeking a neural basis for the centering of gaze on the middle of a pattern, Pitts and McCulloch turned to the then recent work of Julia Apter [30]. Using electrodes (this was a few years before microelectrodes) to record from cat superior colliculus, she was able to show that there was a retinotopic map of visual information. By carefully placing strychnine locally on the colliculus to induce volleys that would affect the oculomotor muscles, and seeing where the eyes went, she showed that there was also a retinotopic motor map – and that the two maps were in register. On this basis, Pitts and McCulloch posited that each point on the superior colliculus receives stimulation from the point on the retina which is stimulated by a visual target in a particular position, and that activity at this point of the colliculus causes firing of just those oculomotor neurons that will cause muscle contractions which turn the eye in the direction needed to fixate the corresponding visual target. We may thus regard each point on the retina as casting a vote that is registered in the superior colliculus and then relayed to the appropriate muscles. The result is a control system with the property that the eye will only stop moving if it is gazing at the center of mass of the pattern on the retina – a primary example of a sensory input controlling motor output by distributed activity in a layered neural network without the intervention of executive control. Thus, we do not have to think of the brain in terms of a pyramid leading up to the pinnacle of “executive control,” which would then communicate with the mind and send commands down another pyramid towards motor control. Rather, distributed interactions could play an important part in relating sensation to behavior. Note that the claim is not that Pitts and McCulloch developed the definitive model of the role of the superior colliculus in the control of saccades (see [31] for a review of current models). Rather, the claim is that they introduced concepts of enduring value in cognitive neuroscience and computational neuroscience.
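A toy numerical sketch may make this distributed voting concrete (the discretized retina, the gain parameter, and the iterative settling below are illustrative assumptions, not the 1947 construction itself): every active retinal point votes for an eye movement toward itself, and the gaze comes to rest only when it fixates the pattern’s center of gravity.

```python
# A toy sketch of the gaze-centering idea: each active retinal point casts a
# "vote" for a movement toward itself; with no executive anywhere, the gaze
# settles at the pattern's center of gravity. Arrays and gain are illustrative.
import numpy as np

def settle_gaze(image, gaze, gain=0.5, steps=50):
    ys, xs = np.nonzero(image)               # positions of active retinal points
    points = np.stack([ys, xs], axis=1).astype(float)
    gaze = np.array(gaze, dtype=float)
    for _ in range(steps):
        votes = points - gaze                # each point votes for a move toward it
        gaze += gain * votes.mean(axis=0)    # the pooled vote drives the eye muscles
    return gaze                              # converges to the center of mass

image = np.zeros((9, 9))
image[2, 2] = image[2, 6] = image[6, 4] = 1  # three active points
print(settle_gaze(image, gaze=(0.0, 0.0)))   # -> approximately [3.33, 4.0]
```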
The other idea of the 1947 paper was to hypothesize that a universal can be found by extracting features of an image, and then averaging them over all possible elements of a group of transformations relevant to this universal. The average would then be an invariant measure which would be the same for all triangles, say, but different from the value that characterized all circles. Pitts and McCulloch were concerned that a large group of transformations might require too large a brain, and so they came up with the idea of “exchanging space and time” by scanning up and down across the layers of the brain at the rate of the alpha rhythm:
At the tenth and final Macy meeting McCulloch reported on the status of his and Pitts’s theory of how we recognize shapes and musical chords. He reported Lashley’s strong arguments against the specific mechanism they had proposed. Moreover, Donald MacKay [one of the founders of the British school of information theory] … had tested the role of scanning in McCulloch’s laboratory and the result again refuted the detailed mechanism. McCulloch cheerfully concluded from this that we can invent mechanisms, make hypotheses, and disprove them. Thus we are right to regard our work as scientific epistemology. [4, p. 241]
But what was set up here beautifully, perhaps for the first time, was the idea that we should analyze perception in terms of layers of feature detectors.
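Although the specific scanning mechanism did not survive experimental test, the group-averaging idea itself is easy to illustrate. The sketch below is a deliberately tiny example with an arbitrary local feature (none of it is taken from the 1947 paper): a feature is evaluated for every element of a group of rotations and cyclic translations and then averaged, giving a descriptor that does not change when the input pattern is itself shifted or rotated.

```python
# A toy sketch of computing a group-averaged invariant: average an arbitrary
# local feature over all rotations and cyclic translations of a small binary
# image. The feature and the tiny images are illustrative choices.
import numpy as np

def invariant_descriptor(image, feature):
    values = []
    h, w = image.shape                       # assumed square, so rotations fit
    for k in range(4):                       # the group: four 90-degree rotations ...
        rotated = np.rot90(image, k)
        for dy in range(h):                  # ... combined with all cyclic translations
            for dx in range(w):
                shifted = np.roll(np.roll(rotated, dy, axis=0), dx, axis=1)
                values.append(feature(shifted))
    return float(np.mean(values))            # the average is invariant under the group

# An arbitrary local "feature detector": a vertically adjacent pair of active points.
pair_feature = lambda im: float(im[0, 0] * im[1, 0])

shape = np.zeros((5, 5)); shape[1:4, 2] = 1          # a short bar
moved = np.roll(np.rot90(shape), 2, axis=1)          # the same bar, rotated and shifted
print(invariant_descriptor(shape, pair_feature),
      invariant_descriptor(moved, pair_feature))     # the two values are equal
```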
What the frog’s eye tells the frog’s brain
One of the classics of single-cell neurophysiology, the 1959 paper “What the Frog’s Eye Tells the Frog’s Brain” combined the physiology of Jerry Lettvin with the anatomy of Humberto Maturana to develop the ideas of “How We Know Universals” [32]. Even though the processes found in frog tectum are not those predicted for mammalian cortex, the 1959 paper did confirm that:
- An important method of coding information in the brain is by topographically organized activity distributed over layers of neurons.
- Computation may be carried out in a distributed way by a collection of neurons without the intervention of a central executive.
- The retina begins the process of transformation that extracts from the visual input information that is relevant to the action of the organism (in this case, the frog’s need to get food and evade predators no matter how bright or dim the world is about him).
More specifically, the physiologist Lettvin and the anatomist Maturana showed that if you measure the activity of frog retinal ganglion cells sending their fibers back along the optic nerve, then there are basically four kinds of feature detectors – dimming detectors, small moving object detectors, and so on – and that these indeed end in different layers of the optic tectum of the midbrain. We thus see here a prime example of the importance of distributed computation mediated by topographically organized activity distributed over layers. Their work also suggested that the retina was not a “general purpose” device, but rather that the retina processes the information in a species-specific way relevant to specific jobs like turning to catch flies and avoiding enemies. We now know that the story is not that simple – the frog needs the rest of its brain, too – but this paper adds tremendously important ideas to our thinking about the brain.
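To make the notion of layered, retinotopic feature maps concrete, here is a crude sketch; the two “detectors” below are caricatures invented for illustration, not the operations that Lettvin and colleagues actually characterized. Each operator is applied at every retinal location, and each resulting map keeps the retinotopic coordinates it would carry to its own layer of the tectum.

```python
# A caricature of the idea that the retina delivers several retinotopically
# organized feature maps rather than a raw image; the detectors are invented
# for illustration only.
import numpy as np

def dimming_map(prev_frame, frame):
    """Respond wherever the local light level has just decreased."""
    return np.maximum(prev_frame - frame, 0.0)

def small_dark_object_map(frame, surround_level=0.8):
    """Respond to a dark point surrounded mostly by light."""
    h, w = frame.shape
    out = np.zeros_like(frame)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            surround = frame[y-1:y+2, x-1:x+2].sum() - frame[y, x]
            if frame[y, x] < 0.2 and surround / 8.0 > surround_level:
                out[y, x] = 1.0
    return out

prev_frame = np.ones((8, 8))
frame = np.ones((8, 8)); frame[3, 4] = 0.0        # a fly-sized dark spot appears
layers = {"dimming": dimming_map(prev_frame, frame),
          "small dark object": small_dark_object_map(frame)}
for name, m in layers.items():
    print(name, np.transpose(np.nonzero(m)))      # each map keeps retinotopic coordinates
```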
Lettvin, et al., conclude by saying that:
By transforming the image from a space of simple discrete points to a congruent space where each equivalent point is described by the intersection of particular qualities in its neighborhood, we can then give the image in terms of distributions of combinations of those qualities. In short, every point is seen in definite contexts. The character of these contexts, genetically built in, is the physiological synthetic a priori.
This view of “the physiological synthetic a priori” brings us back to Kant’s idea in a form adapted to the needs of late 20th-century neuroscience. This physiological synthetic a priori is not something given “magically,” but rather it is given by circuitry provided by the evolution of the nervous system (or, though this is not a feature of the 1959 paper, through self-organization).
Redundancy of potential command
This transfer of Kant’s a priori into the physiological realm provides an important realization of one of the ideas that set McCulloch on his quest for an experimental epistemology back in his student days at Yale. In the 1960s, McCulloch worked with Bill Kilmer to model the reticular formation as a place that might execute redundancy of potential command, thus bringing another formative concept from the Yale student days (recall the discussion of Naval command in World War I) into the physiological realm.
The so-called RETIC model had its roots in two sorts of neural data. On the one hand there was the work of Magoun and others on the reticular activating system: you can transform a person from sleep to wakefulness by appropriate patterns of activity in the reticular core of the brainstem [33]. The other ingredient was from the husband and wife team at UCLA, Marge and Arne Scheibel, who, in looking at the anatomy of the reticular system, saw axons crossing in a rostral-caudal direction, with dendrites of neurons in the reticular core basically orthogonal to this axis [34]. To get a simpler view of the very complex anatomy, the Scheibels suggested that we view the reticular formation as a stack of “poker chips,” so that each poker chip comprised neurons with overlapping or adjacent dendrites sampling, hopefully, roughly the same information.
McCulloch and Kilmer used these ideas to ground a formal model, RETIC, which contains an array of modules corresponding to the poker chips, each getting a different sample of the sensory input, with some communication up and down the neuraxis [35]. To get from there to the full model required going beyond the available empirical data. But McCulloch’s point was that “I do not like to experiment when I have no hypothesis to disprove” [5], and in this he was the opposite of Dusser de Barenne, who worked very systematically around the brain, placing small doses of strychnine and carefully recording what happened to map out these connections – a classic example of somebody who refuses to use theory in the design of his experiments. McCulloch did both experiment and theory at different stages, sometimes together, but always thought of a good theory as the key to a well-designed experiment. In any case, the RETIC model, while grounded in basic empirical observations, added to them to suggest how such a network could implement redundancy of potential command.
Imagine that the basic commitment of the organism could be to one of several overall modes of behavior, such as waking, sleeping, fighting, fleeing, mating, and so on. Different types of sensory information are available, and so any small sample of the information might have different effects. Imagine that you are standing on the grass wanting to go into the beautiful cool water at the beach, and then you take a step and your feet feel the burning hot sand, and you get discordant information: to advance or to retreat. How can this be resolved? In the RETIC model, each module/poker-chip, with its limited sample of sensory input, makes a tentative initial “vote,” setting a confidence level for each mode. The modules then communicate in such a way that, if the votes that a particular module has for the different modes of behavior are roughly similar, they have relatively little effect on its neighbors, but if, on the other hand, a module has a much stronger vote for one mode than another, then the neighbors it talks to accept that as meaningful information and shift their votes in that direction. Kilmer showed by computer simulation, with the help of programmer Jay Blum, that, no matter what inputs were given to the system, RETIC would usually and eventually converge so that a majority of the modules would commit the organism to the same mode.
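A toy simulation may help convey the flavor of this voting scheme; the update rule, coupling, and parameters below are illustrative guesses, not the equations of the Kilmer-McCulloch model. Each module sees a different noisy sample of the input, forms confidences over the behavioral modes, and is pulled toward the opinions of decisively committed neighbors until most modules commit the organism to one mode.

```python
# A toy sketch of RETIC-style consensus: modules with different noisy samples
# of the input exchange confidences with neighbors; decisive modules pull
# undecided ones toward their preferred mode. All parameters are illustrative.
import numpy as np
rng = np.random.default_rng(0)

MODES = ["sleep", "wake", "flee", "fight"]
true_drive = np.array([0.1, 0.2, 0.6, 0.1])           # the environment favors "flee"

n_modules = 12                                         # the stack of "poker chips"
votes = np.abs(true_drive + 0.4 * rng.normal(size=(n_modules, len(MODES))))
votes /= votes.sum(axis=1, keepdims=True)              # each module's initial confidences

for step in range(200):
    new_votes = votes.copy()
    for i in range(n_modules):
        for j in ((i - 1) % n_modules, (i + 1) % n_modules):   # neighbors on the neuraxis
            decisiveness = votes[j].max() - np.median(votes[j])
            new_votes[i] += decisiveness * votes[j]            # decisive neighbors pull harder
    votes = new_votes / new_votes.sum(axis=1, keepdims=True)

# Each module's final commitment; the modules usually converge on a single mode.
print([MODES[k] for k in votes.argmax(axis=1)])
```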
This model shows how to build on data concerning anatomy and function to obtain a scheme for interacting modules which determine overall motor behavior through the cooperative computation of modules. The important idea here goes back to that WWI form of naval command: the module (or ship or person) which has the crucial current information temporarily has command. We are thus encouraged to think of the brain not as a hierarchy, but rather as a heterarchy, in which many different modules communicate with each other, such that temporary coalitions dominate the overall behavior as appropriate.
Reliable computation by unreliable neurons
One of the issues that John von Neumann and McCulloch discussed was reliability in the brain. One version of the story was that McCulloch got a 3:00 AM phone call from von Neumann to say, “I have just finished a bottle of creme de menthe. The thresholds of all my neurons are shot to hell. How is it I can still think?” (In other versions of the story, von Neumann was called by McCulloch, and the drink was whisky.) Three answers came to that question. First, von Neumann devised explicit ways of building redundancy into model nervous systems, with one reliable neuron replaced by a bank of many unreliable neurons; the strategy was to keep taking majority votes after each bank of similar neurons so that even if many neurons were unreliable, the overall ensemble would be reliable [36]. Second, McCulloch’s idea was to build circuits using neurons whose function would not change with moderate shifts in threshold [37]. The diagrams here are very similar to those of the 1943 classic, but now the “logical calculus” has the additional twist that network function must be relatively stable in the face of threshold fluctuations. Finally, Jack Cowan and Shmuel Winograd, working in McCulloch’s group, took Shannon’s theory of reliable communication in the presence of noise [38] and showed how to use the redundancy that Shannon had come up with for encoding messages to recode neural networks so as to provide sufficient redundancy for reliable computation in the presence of noise [39].
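A tiny simulation of the first of these answers may be helpful (the error rates and bank size are illustrative, and the scheme below is only the simplest majority-vote caricature of von Neumann’s multiplexing construction): replacing one unreliable unit by a bank of independently noisy copies followed by a majority vote drives the error rate of the ensemble far below that of any single unit.

```python
# A caricature of majority-vote redundancy: one unreliable unit versus a bank
# of noisy copies followed by a majority vote. Error rates are illustrative.
import random

def noisy_unit(correct_output, flip_probability):
    """An unreliable neuron: returns the wrong bit with some probability."""
    return correct_output ^ (random.random() < flip_probability)

def redundant_unit(correct_output, flip_probability, bank_size):
    """A bank of noisy copies of the unit, followed by a majority vote."""
    fires = sum(noisy_unit(correct_output, flip_probability) for _ in range(bank_size))
    return int(fires > bank_size // 2)

def error_rate(compute, trials=20000):
    return sum(compute(1) != 1 for _ in range(trials)) / trials

p = 0.1
print("single neuron:", error_rate(lambda y: noisy_unit(y, p)))          # about 0.10
print("bank of 15   :", error_rate(lambda y: redundant_unit(y, p, 15)))  # far smaller
```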
Does the brain have a logic?
The 1943 paper is the most famous paper by McCulloch and Pitts, and it has been immensely influential, due to its basic idea that, given any “finite enough” specification of what is to be done, you can build a McCulloch–Pitts network to do it.
One line of influence was through its shaping effect on John von Neumann, who developed the logic of the computing machines that emerged as one of the new technologies of World War II. Von Neumann designed the first stored-program digital computer, and in doing so was informed by what he had learned from McCulloch and Pitts (cf. [40]). In the von Neumann tradition, techniques of switching theory were developed to design, as needed, a McCulloch–Pitts network that would do a particular job as part of a computer. Here, the framework is one of “logical design.”
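As an illustration of what “logical design” means here, the following sketch wires threshold units, in the spirit of the 1943 formal neuron, into a small two-layer network computing exclusive-or. The particular weights and thresholds are a standard textbook choice of mine, not a circuit drawn from the paper.

def mp_unit(inputs, weights, threshold):
    # A McCulloch-Pitts-style unit: fires (1) if the weighted sum reaches threshold.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def xor_net(x1, x2):
    h1 = mp_unit([x1, x2], [1, -1], 1)   # fires for "x1 and not x2"
    h2 = mp_unit([x1, x2], [-1, 1], 1)   # fires for "x2 and not x1"
    return mp_unit([h1, h2], [1, 1], 1)  # OR of the two hidden units

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # -> [0, 1, 1, 0]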
The other line of influence stems from the challenge taken up by Frank Rosenblatt [41], who built on ideas of Donald Hebb [42] and laid the basis for the modern topic of “connectionism” (see, e.g., [43]). In this approach, given some “finite enough” job, rather than explicitly designing a logical circuit to do it, one lets the connections between the formal neurons change automatically, by some learning rule, on the basis of experience, so that a sufficiently large network can be trained to perform the given job increasingly well over time. Here, notions of “approximation” and “optimization” seem more important than whether or not neurons can be described by a logical calculus.
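The contrast with logical design can be made concrete with an equally small sketch of Rosenblatt-style learning: the weights are not designed but adjusted by the perceptron rule until the unit performs the job (here, logical OR, which is linearly separable, so the rule converges). The learning rate, epoch count, and task are illustrative choices of mine.

def train_perceptron(samples, epochs=20, lr=1.0):
    # Rosenblatt-style rule: nudge weights by (target - output) * input on each error.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0
            err = target - y
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn logical OR from its truth table.
or_samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(or_samples)
print([1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0 for x, _ in or_samples])  # -> [0, 1, 1, 1]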
In this way, the 1943 paper has been influential in both the history of computation and the development of connectionism. On the other hand, many a computational neuroscientist may lament the paper’s simplicities [44], since neurons have complex dendrites and neuromodulation and calcium channels and many other properties. But McCulloch knew that. He never claimed that the 1943 model exhausted the richness of individual neurons, and even our brief sample above shows that much of his later work was devoted to coming up with more subtle (but certainly not biophysical or neurochemical) models of the neuron. Nonetheless, he and Pitts had shown that “anything finite enough” that could be logically conceived could be done by a neural network. They had killed dualism. Until then, there was no convincing argument for the idea that logical thought could be done by a brain, since the working out of neurophysiological detail had been mostly at the level of reflexes. Theirs was a tremendous achievement.
Once, in the late 1960s, I flew from Boston to Chicago with John Eccles, the Australian Nobel Laureate in neurophysiology. Eccles really disapproved of Warren McCulloch, because whereas Eccles, nearing 70, was still engaged in painstaking experiments, McCulloch’s work with de Barenne was long behind him, and his science was done in his office at MIT with his feet up on the desk, talking to people about brains and machines and mathematics. But here is a very interesting contrast: Eccles had kept to the dualism of his Catholic boyhood and spent a lot of his time trying to prove that there must be a nonphysical mind that complements what the “machine” – the brain – can do; whereas McCulloch was the person who, I think for the first time, provided a compelling demonstration that thought could be expressed just through the interaction of neurons alone – without a mind or soul to somehow influence the brain through the pineal gland or supplementary motor area or the quantum lattice of the synapse.
In this sense, Warren McCulloch’s search for the logic of the nervous system was successful. Yet, in some ways, the 1943 paper was the apogee of that search, expressing neural activity in a logical calculus. McCulloch’s later work showed that such a logic could be maintained even in the face of threshold fluctuations [39]. And to his dying day McCulloch was convinced that there needed to be an adequate logic of triadic relations – relations of the kind “A gives B to C” or “A perceives B to be C,” with three terms in them – to allow one to really understand what the brain does (see [45] for a current analysis of this work). However, I am not convinced that the search for a new logic was the right way to elucidate the mysteries of brain function. My reason for this (with which others may certainly disagree) is in part autobiographical. When I look at my own career, it is only those studies that fall squarely within automata theory – a branch of computer science rather than neuroscience – that rest firmly on the notion of a logical calculus, whereas my forays into cognitive and computational neuroscience (e.g., [46]) are greatly inspired by such contributions as “How We Know Universals,” “What the Frog’s Eye Tells the Frog’s Brain,” and “A Model of the Vertebrate Central Command System,” in which the organism’s problems of interacting with a complex and ambiguous world are solved by neural networks using a variety of concepts – feature detectors, averaging, decision-making without executive control, etc. – which are not directly rooted in a specific logical calculus. In summary, the present paper has not only given some sense of McCulloch’s search for the logic of the nervous system, but has also shown that his papers contain contributions to experimental epistemology which provide great insight into the mechanisms of nervous system function without fitting into the mold of a logical calculus. In my opinion, these “alogical” contributions have the most to say for the development of computational neuroscience, while the “logical calculus” has the greater philosophical importance but has contributed more to computer science than to neuroscience.
In an age in which we are beginning to integrate much more strongly the different levels of analysis of the brain into a cognitive neuroscience, McCulloch’s concern for basic questions – what is the logic of thought? what is a person? what is a man that he may know a number? – becomes very timely. Of course, each scientist must master a certain palette of techniques, whether empirical or theoretical. Nonetheless, the idea of going beyond technique to answer fundamental questions remains crucial if the fruits of those techniques are to transcend mere data collection. Some neuroscientists worry more about theory or cognition; others focus on anatomy, neurophysiology, or neurochemistry. But the array of talents and techniques marshaled by a community of scholars who know how to communicate with each other creates, as it were, a redundancy of potential command for taking control of this incredible question: how does the brain work?
References
[1] Arbib, M. A. A historical perspective. In Brains, Machines, and Mathematics, 2nd ed. New York: Springer-Verlag, 1987.
[2] Smalheiser, N. R. Walter Pitts. Persp. Biol. Med. 43(2): 217-26, 2000.
[3] McCulloch, W. S., and Pitts, W. H. A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5: 115-33, 1943.
[4] Heims, S. J. The Cybernetics Group. Cambridge: MIT Press, 1991.
[5] McCulloch, W. S. Recollections of the many sources of cybernetics. ASC Forum 6(2): 5-16, 1974.
[6] McCulloch, R. Foreword. In Collected Works of Warren S. McCulloch, edited by R. McCulloch. Salinas, CA: Intersystems Publications, 1989. 1-6.
[7] Lettvin, J. Y. Introduction to vol. 1. In Collected Works of Warren S. McCulloch, edited by R. McCulloch. Salinas, CA: Intersystems Publications, 1989. 7-20.
[8] Lettvin, J. Y. McCulloch and Walter. In Collected Works of Warren S. McCulloch, vol. 3, edited by R. McCulloch. Salinas, CA: Intersystems Publications, 1989. 514-29.
[9] McCulloch, W. S. Embodiments of Mind. Cambridge: MIT Press, 1965.
[10] McCulloch, W. S. What is a number that a man may know it and a man that he may know a number? Gen. Sem. Bull. 26-27: 7-18, 1961. (Reprinted in Embodiments of Mind.)
[11] Leibniz, G. W. The Monadology of Leibniz, translated by H. W. Carr. Los Angeles: USC Press, 1930.
[12] Searle, J. R. Minds, brains, and programs. Behav. Brain Sci. 3: 417-57, 1980.
[13] Arbib, M. A. In Search of the Person: Philosophical Explorations in Cognitive Science. Amherst: U of Massachusetts Press, 1985.
[14] Lettvin, J. Y. Strychnine neuronography. In Collected Works of Warren S. McCulloch, vol. 1, edited by R. McCulloch. Salinas, CA: Intersystems Publications, 1989. 50-58.
[15] Turing, A. M. On computable numbers with an application to the Entscheidungsproblem. Proc. London Math. Soc. Series 2. 42: 230-65, 1936.
[16] Gödel, K. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme. I. Monats. Math. Phys. 38: 173-98, 1931.
[17] Pitts, W. H., and McCulloch, W. S. How we know universals: The perception of auditory and visual forms. Bull. Math. Biophys. 9: 127-47, 1947.
[18] Von Foerster, H., ed. Cybernetics: Circular, Causal, and Feedback Mechanisms in Biological and Social Systems. Trans. 7th Conference, 24-25 March 1949. New York: Josiah Macy Jr. Foundation, 1950.
[19] Kubie, L. S. A theoretical application to some neurological problems of the properties of excitation waves which move in closed circles. Brain 53: 166-77, 1930.
[20] McCulloch, W. S. The Past of a Delusion. Pamphlet. Chicago: Chicago Literary Club, 1953. (Reprinted in Embodiments of Mind.)
[21] Hubel, D. H., and Wiesel, T. N. Receptive fields of single neurons in the cat’s striate cortex. J. Physiol. 148: 574-91, 1959.
[22] McCulloch, W. S. The Natural Fit. Pamphlet. Chicago: Chicago Literary Club, 1959. (Reprinted in Embodiments of Mind.)
[23] Kant, I. Critique of Pure Reason, translated by N. K. Smith. 1781, 1787. London: Macmillan, 1929.
[24] Waddington, C. H. The Strategy of the Genes. London: Allen & Unwin, 1956.
[25] Piaget, J. Biology and Knowledge. Edinburgh: Edinburgh UP, 1971.
[26] Arbib, M. A. Schema theory: From Kant to McCulloch and beyond. In Brain Processes: Theories and Models. An International Conference in Honor of W. S. McCulloch 25 Years After His Death, edited by R. Moreno-Diaz and J. Mira-Mira. Cambridge: MIT Press, 1995. 11-23.
[27] Arbib, M. A. Perceptual structures and distributed motor control. In Handbook of Physiology: The Nervous System. II. Motor Control, edited by V. B. Brooks. Bethesda, MD: American Physiological Society, 1981. 1449-80.
[28] Arbib, M. A. Schema theory. In The Encyclopedia of Artificial Intelligence, edited by S. Shapiro. New York: Wiley-Interscience, 1992. 1427-43.
[29] Shannon, C. E., and McCarthy, J., eds. Automata Studies. Princeton: Princeton UP, 1956.
[30] Apter, J. T. Eye movements following strychninization of the superior colliculus of cats. J. Neurophysiol. 9: 73-85, 1946.
[31] Van Gisbergen, J., and Van Opstal, J. Collicular visuomotor transformations for saccades. In The Handbook of Brain Theory and Neural Networks, edited by M. A. Arbib. Cambridge: Bradford Books/MIT Press, 1995. 206-10.
[32] Lettvin, J. Y., Maturana, H., McCulloch, W. S., and Pitts, W. H. What the frog’s eye tells the frog’s brain. Proc. IRE 47: 1940-51, 1959.
[33] Magoun, H. W. The Waking Brain, 2nd ed. Springfield, IL: Charles C. Thomas, 1963.
[34] Scheibel, M. E., and Scheibel, A. B. Structural substrates for integrative patterns in the brain stem reticular core. In Reticular Formation of the Brain, edited by H. H. Jasper, et al. Boston: Little, Brown, 1958. 31-68.
[35] Kilmer, W. L., McCulloch, W. S., and Blum, J. A model of the vertebrate central command system. Int. J. Man-Machine Stud. 1: 279-309, 1969.
[36] Von Neumann, J. Probabilistic logics and the synthesis of reliable organisms from unreliable components. In Automata Studies, edited by C. E. Shannon and J. McCarthy. Princeton: Princeton UP, 1956. 43-98.
[37] McCulloch, W. S. Agatha Tyche of nervous nets: The lucky reckoners. In Mechanization of Thought Processes. London: Stationery Office, 1959. 611-25.
[38] Shannon, C. E. The mathematical theory of communication. Bell System Tech. J. 27: 379-423 and 623-56, 1948. (Reprinted with an introductory essay by W. Weaver as Shannon, C. E., and Weaver, W. Mathematical Theory of Communication. Urbana: U of Illinois Press, 1949.)
[39] Winograd, S., and Cowan, J. D. Reliable Computation in the Presence of Noise. Cambridge: MIT Press, 1963.
[40] Von Neumann, J., Burks, A., and Goldstine, H. H. Planning and Coding of Problems for an Electronic Computing Instrument. Princeton: Institute for Advanced Study, 1947-48. (Reprinted in Von Neumann’s Collected Works 5: 80-235.)
[41] Rosenblatt, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychol. Rev. 65: 386-408, 1958.
[42] Hebb, D. O. The Organization of Behavior: A Neuropsychological Theory. New York: John Wiley, 1949.
[43] Rumelhart, D. E., and McClelland, J. L., eds. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. 2 vols. Cambridge: Bradford Books/MIT Press, 1986.
[44] Perkel, D. H. Logical neurons: The enigmatic legacy of Warren McCulloch. Trends Neurosci. 11: 9-12, 1988.
[45] Moreno-Diaz, R., and Mira-Mira, J. Logic and neural nets: Variations on themes by W. S. McCulloch. In Brain Processes: Theories and Models. An International Conference in Honor of W. S. McCulloch 25 Years After His Death, edited by R. Moreno-Diaz and J. Mira-Mira. Cambridge: MIT Press, 1995. 24-36.
[46] Arbib, M. A., Erdi, P., and Szentagothai, J. Neural Organization: Structure, Function, and Dynamics. Cambridge: MIT Press, 1998.