CEPA eprint 1436 (EVG-148)

Das Ende einer grossen Illusion [The end of a grand illusion]

Glasersfeld E. von (1992) Das Ende einer grossen Illusion [The end of a grand illusion]. In: Fischer H. R., Retzer A. & Schweitzer J. (eds.) Das Ende der grossen Entwürfe. Suhrkamp, Frankfurt, Germany: 85–98. Available at http://cepa.info/1436
Table of Contents
The Classical Impasse
Metaphysical Confusion
Conceptual Construction
A Theory of Fictions
Obstinate Tradition
The Cybernetic Parallel
Conclusion
References
Our title speaks of “the end of grand designs.” For psychotherapists, I am sure, the phrase has a rather special meaning. As an obstinate outsider who has so far managed to avoid annexation by any discipline, I am immediately prompted to ask: What, indeed, are the grandest designs in the twenty-five hundred years of our intellectual history?
Since this is a rhetorical question, I am going to answer it myself. The grandest designs – and I say this without the least hesitation – are the philosophers’ schemes to find out what the world “in itself,” the world apart from the human knower, might be like. And since I do not belong to any discipline and do not have to defend a particular dogma, I would add that these grand schemes have finally run out of steam.
No one, as far as I know, has yet tried to work out how many man-hours of laborious thought and conceptual computation have been devoted to this task, but anyone who has looked around in a philosophical library will, I am sure, agree that it has been an enormous endeavor. Let me say at once that I consider it to have been unsuccessful – a wild goose chase, but nevertheless a glorious endeavor. It produced a wealth of inspired writing and, what is more important, it was quite indispensable. If our history of ideas were not littered with these noble ruins, we could not see what we are seeing today. Ross Ashby said a long time ago that one should never disparage false starts and failures, because to find out that a particular approach to a problem does not work is a gain in any field where the number of choices is finite: it reduces the manifold of possible approaches that remain to be tried (Ashby, 1963).
The Classical Impasse
Had the grand epistemological projects not been pursued with all the ingenuity and all the stamina of the great philosophers, the lesson to be learned today would still be out of reach. Hence it is well worth our while to look back and, if we can, put into focus the point at which they turned into a blind alley. From my perspective, this point lay at the very beginning.
Given the earliest records we possess of investigations into what knowledge is and how we come to have it, it seems inconceivable that the formulation of these two questions should not have raised a third – namely, how we might decide whether our knowledge is really true.
These earliest records, as far as the Western world is concerned, stem from the 6th century BC, the time of the Pre-Socratics. The extraordinary thinkers of that period were as universal in their interests and as versatile in their accomplishments as the famous elite of the Italian Renaissance. Among them there were also some who clearly saw the inevitable impasse of the epistemological venture they had embarked on.
Xenophanes of Colophon, for instance, said that even if someone happened upon knowledge that represented the world exactly as it is, he could never tell that this was the case (cf. Diels, Fragment 34). This remarkably concise statement is based on the logical fact that, in order to check the “truth” of any representation, one must have access to what it is supposed to represent. In the case of knowledge that purports to be knowledge of the “real” world, a check would be possible only if one could step outside the field of one’s knowing. This, indeed, is what in one way or another the sceptics have been reiterating ever since.
With the beginning of the Christian era, however, the focus of interest shifted to knowledge which was to be acquired from the scriptures and revelation rather than by rational consideration of actual experience. But here, too, a logical difficulty was raised by 3rd-century theologians in Byzantium. If God was omnipotent, they said, if He was omniscient and present in every place of the universe, then He had to be fundamentally different from anything we could encounter in our experiential world; and if this was the case, there was no way to grasp His essence in human concepts. The Byzantine school became known as apophatic or negative theology and, for obvious reasons, was quickly suppressed as heresy by the church.
Though these insurmountable logical obstacles to the acquisition of ontologically true knowledge were clearly seen early on in our history of ideas, and the sceptics, throughout the ages, never tired of drawing attention to them, the quest for “true” or “objective” knowledge of a world posited as independent of the knowing subject was pursued by almost all great thinkers. Somehow they hoped that reason would find a way in spite of the logical impasse. They were driven on, above all, I think, by two deep-rooted feelings. First, the reluctance to acknowledge that, while human thought had obviously managed to solve a great many problems, there were mysteries to which there was no human approach. Second, the age-old mystical conviction that Descartes much later expressed when he said, in the context of perception, that God could not have been so mean as to equip us with an insufficient reason and deceptive senses.
Metaphysical Confusion
At this point, it should be clear that I am concerned exclusively with what we call rational knowledge. Mystics may well have a way of resonating to a world that lies beyond our experience. However, their resonating does not involve the workings of reason but rather what Giambattista Vico aptly put under the heading of “poetic wisdom.” What mystics say about their visions is couched in private symbols whose formation and interpretation are, and necessarily remain, in a domain of subjective invention that lies beyond the reach of rational assimilation. It may, indeed, be more important than anything reason produces, but it must be grasped intuitively because it eludes prosaic expression in logical terms.
The source of the trouble is that, in the works of the great philosophers of the Western world, the distinction between the rational and the mystical became blurred and they freely larded their epistemological investigations with implicit metaphysical assumptions that could not be rationally grounded in human experience.
The first time a sharp distinction between the two kinds of knowledge was suggested was at the birth of modern science. Cardinal Bellarmino, who had been charged to conduct the prosecution of Giordano Bruno, was an extremely cultivated gentleman who appreciated the lure of intellectual explorations. When he heard of the accusations against Galileo, he sent him a warning: Galileo should be prudent, and present the heliocentric theory as a hypothesis that could serve to explain and to predict certain experiences. This would not be considered heresy. But on no account must he present that theory as a description of God’s world. True knowledge of that world was the domain of the Church, and science must not meddle with it.
Against the logical arguments of philosophers who denied the possibility of true knowledge, the church could always pit the contention that its access to knowledge was through revelation. Now, however, when scientists produced empirical facts that flatly contradicted the sacred dogma, another line of defense was needed.
Hence Bellarmino reinstated the distinction the Greeks had made by contrasting doxa with gnosis. The first was to refer to experiential knowledge and could never get beyond the status of “opinion” because it was derived from the necessarily limited observations in the world of actual living. The second was the knowledge of the soul, acquired directly and without contamination by everyday praxis.
But Galileo would not accept such a division. Though he was apparently deeply troubled by the conflict between his scientific work and the religious dogma, and though he recanted – as any reasonable man would have – when he was shown the Vatican’s instruments of torture, he did not want to give up the belief that he was uncovering the real workings of the universe.
In retrospect, this was a strange conceit. Yet most of the scientists that followed – and most of the teachers of science today – have taken the same stance. To them, science is the way that leads to knowledge of the world as it really is. It is a strange conceit, because the stupendous advance of science since the days of Galileo is, after all, based on his brilliant gambit of calculating the behavior of observable physical objects by relating it to “laws of physics” which physical objects were never observed to follow exactly. Nowhere, for instance, was a physicist able to demonstrate that objects of different weight fall at the same rate, and nowhere could anyone observe that the motion of balls rolling down an inclined plane is uniform and infinitely continuous. Such laws could not be observed in actual experiments; they had to be invented.
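To make this concrete in modern notation (which is, of course, not Galileo’s own): the textbook law of free fall states that the distance d a body falls in time t is d = ½ · g · t², where g is a constant of roughly 9.81 m/s² at the earth’s surface. The weight of the body does not occur in the equation at all, so bodies of any weight are predicted to fall alike; yet no measurement made in air, with friction and instrumental error, ever yields this relation exactly. The equation describes an idealized motion that observation can approximate but never exhibit.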
Conceptual Construction
In spite of the spreading myth that science provides absolute knowledge, there were scientists, often the most imaginative and successful ones, who were at least occasionally aware that it was they themselves who had constructed the conceptual framework that supports the scientific picture.
From Galileo’s student, Torricelli, we have this remarkable statement:
Whether the principles of the doctrine de motu be true or false is of very little importance to me. Because if they are not true, one should pretend that they are true, as we supposed them to be, and then consider as purely geometrical, not as empirical, all the other speculations that we derived from these principles. … If this is done, I say that there will follow everything Galileo and I have said. Then, if the balls of lead, of iron, or of stone do not behave according to our computation, too bad for them, we shall say that we were not talking of them.[Note 1]
Some two hundred years later, in the middle of the 19th century, the extension of scientific know-how and the technological wonders that were being achieved led to a mood of unbounded confidence. There were scientists who not only thought but also said that all important mechanisms of the universe had been explained and soon no mysteries would be left for science to tackle.[Note 2] Yet, a brief generation later, some students of those overly enthusiastic scientists began to see through the optimistic illusion. The continual widening of the horizon and a growing awareness of how science had been dealing with the increasing wealth of observations led them to realize that they were managing experience rather than explaining the universe. In 1887, Thomas Henry Huxley wrote:
Any one who has studied the history of science knows that almost every great step therein has been made by the “anticipation of Nature,” that is, by the invention of hypotheses, which, though verifiable, often had very little foundation to start with; and, not infrequently, in spite of a long career of usefulness, turned out to be wholly erroneous in the long run. (Huxley, 1887/1948; p.56)
At much the same time, Hermann von Helmholtz, in a postscript to his pioneering paper on the conservation of energy of forty years earlier, formulated the fundamental insight that “the principle of causality is in fact nothing but the presupposition of lawfulness in all the appearances of nature.”[Note 3]
In physics, most of the prominent actors in the revolution brought about by the theory of relativity and quantum mechanics have at some time or other taken a similar stance and acknowledged the fact that they had first constructed a theory and only then looked for observational results to fit into it. But the textbooks from which students are to acquire “the scientific method” are still perpetuating the earlier dogma according to which the regularities and “laws of nature” are discovered by observation (cf. Brush, 1974).
A Theory of Fictions
Although the authors of modern physics were at least occasionally aware of the breach they had caused in the epistemological tradition, they had no time to tie this breach to particular strands in the history of philosophy that would have helped to substantiate it as a generally viable position. While Helmholtz was undoubtedly familiar with Kant’s assertion that “reason can see only what she herself has brought forth according to her design,”[Note 4] neither he nor the later emancipated scientists said anything about how it might come about that invented scientific theories can turn out to be so eminently useful in the actual praxis of living. Yet this is the question that has to be answered by anyone who suggests that the grand design of knowledge as a representation of the real world should be replaced by a more modest paradigm.
The first attempt to justify the use of fictions – where legitimate “fictions” are understood as useful, not merely fanciful inventions – was made by Jeremy Bentham, an 18th-century prodigy who was admitted to Oxford at the age of twelve and a half (cf. Ogden, 1932). Bentham provided some truly seminal analyses of concepts such as ‘matter,’ ‘form,’ ‘quantity,’ and ‘space.’ For me, however, the fundamental insight he provided is this: relational concepts cannot be absolute, because they can be known only when an operating subject assembles them in experiential time.
Because I consider this insight fundamental, let me try to explain it in simple words. When we relate, we are obviously dealing with more than one unitary thing. To “relate” means that we have one focus of attention, move our focus of attention to something else, and then look at the way we moved from the one to the other. Only by operating in some such way, can we specify a relation. Hence it requires the attention of an observer, someone who does it or, as I would say today, someone who constructs it by operating in a particular way.
This insight should give pause to anyone who claims to be a “realist.” One may disagree with some details of Bentham’s conceptual analyses, but it would be difficult to ignore the general principle they embody, namely that most of our indispensable concepts are not given to us by the senses but are the result of our mental operations and our creative reflection and abstraction.
With regard to this “constructivist” approach, Bentham is in agreement with Locke, who introduced the notion of “the perception of the operations of our own mind within us” (Locke, 1690; Book II, chapter 1, §4). But whereas Locke was still anxious to maintain a correspondence between the subject’s conceptual world and an objective reality, Bentham’s construction of concepts was guided only by the notion of utility. This gave his opponents the opportunity to disparage his work as tainted with an unworthy “utilitarianism.”
This criticism is essentially the same as that leveled more than a century later against the pragmatists, who promoted the maxim: True is what works. Both in the case of Bentham and in that of the pragmatists, usefulness or workability was tacitly understood to refer to “practical values” in a material world, and to those alone. What was lacking was an unequivocal statement that, in this approach, practical utility was secondary to a far more important use: providing a new way of explaining the structure of the experiential world, the only “reality” we can rationally comprehend.
Even when this view is expressed with all the required care and clarity, it still takes a long time to counteract the inveterate belief that the structure of the experiential world is nothing but a more or less hazy and partial reflection of a real world that lies beyond it and whose exploration, therefore, is a worthier goal for philosophy. The sceptics’ cogent arguments have not been sufficient to discourage the illusory quest in twenty-five hundred years. I have earlier mentioned emotional reasons for this remarkable persistence. Now I want to suggest a more practical one.
Obstinate Tradition
If, in the domain of science, a problem is approached from all conceivable angles and still resists solution, it does not take long before someone suggests that there may be something wrong in the way the problem is conceived. Then, it may happen that a concept, considered fundamental until that moment, is dismantled and replaced by a novel one that opens a new perspective in which the problem either disappears or can be solved.
The concept of space, in the switch from the Newtonian to the Einsteinian view of the universe, is a recent case in point.
In epistemology, no such thing has happened since its inception in the 6th century BC. The conceptual approach has not changed in all that time. The great failing of the sceptics was that they never seriously tried to go beyond the demonstration that “true” knowledge was impossible. They never questioned the original cognitive scenario in which knowledge had to be a representation of a reality independent of the knower.
Only about a hundred years ago did an idea crop up in the theory of knowledge that could eventually lead to a new scenario.
In an essay published two decades before the turn of the century, William James suggested that one could apply the basic notions of the biological theory of evolution to the evolution of concepts and conceptual structures (James, 1880). The same idea was presented with great clarity and detail a few years later by Georg Simmel (1895), and when Hans Vaihinger came out with his Philosophy of As If (1911), it became clear that he, too, had been thinking along that line since 1876.
Today, “evolutionary epistemology” is quite the fashion. Well-known biologists are vigorously sponsoring it, but they have also managed to direct it back into conventional channels. Knowledge, this school holds, evolves through adaptation brought about by a process analogous to natural selection – and this is the original idea James, Simmel, and others suggested at the turn of the century. But then, for instance, Konrad Lorenz jumps to the conclusion that, because concepts such as ‘space’ and ‘time’ have evolved and are successful, we are justified in assuming that they correspond to characteristics of an ontologically real world (Lorenz, 1977; p.21ff).
In my view, this is no more rationally warranted than the belief that God would not have created us without giving us the means to see the world as it “really” is. The assumption springs from an implicit over-extension of the biological notion of adaptation. In the realm of living organisms, to be adapted means no less, but also no more, than to have found a way of surviving and reproducing under the present environmental circumstances. In other words, all organisms found to be alive at a particular moment of evolutionary history are adapted – but this means no more than that they have not succumbed to the obstacles and perils the environment has so far put in their path. The ways and means they have developed to avoid these obstacles and perils cannot be taken to reflect properties of the environment. All an organism might conclude from the fact that it has survived is that it happened to find one among the countless ways and means that do not happen to come into conflict with the environment’s constraints or, in other words, that it is still viable.
To use a simple metaphor, natural selection in evolution works like a sieve: what passes through, passes through. Successful passage provides no clues about what might have been an impediment, no clues about the character of the sieve. Thus, the fact that the concepts of space and time are useful in the management of our experiential world, entails no more than that these concepts are among the possibilities the real world leaves open to organisms with our conceptual capabilities. And I would add that, given the success of the theory of relativity, it is now clear that, even in our experience, there are reaches where our ordinary concepts of space and time are no longer viable.
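The metaphor can be made tangible with a small computational sketch (my own illustration, assuming nothing beyond the metaphor itself; the numeric “organisms” and the particular constraint are arbitrary inventions). Inspecting the survivors alone, one could not reconstruct the sieve, since many different sieves would have passed the very same individuals:

import random

def hidden_constraint(x):
    # The 'sieve': an environmental constraint that the organisms
    # cannot inspect. (Chosen arbitrarily for this illustration.)
    return x % 7 != 0 and x < 900

# Random 'organisms': plain integers standing in for heritable traits.
population = [random.randrange(1000) for _ in range(10000)]

# Natural selection as a sieve: what passes through, passes through.
survivors = [x for x in population if hidden_constraint(x)]

print(f"{len(survivors)} of {len(population)} organisms proved viable")
# Every survivor is 'viable', yet the list of survivors does not reveal
# the shape of the constraint that eliminated the others.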
I have gone into this at some length because “evolutionary epistemology” has in fact blocked the most promising perspective that was opened at its beginning: the revolutionary view suggested by Jean Piaget’s programmatic statement that knowing is an adaptive function (Piaget, 1967). If we take seriously the premise that cognition is an instrument of adaptation, we have to replace the traditional concept of knowledge as representation. Instead of thinking of knowledge as corresponding to an independent “objective” reality, we have to think of it as the collection of conceptual structures that have so far not clashed with the constraints of our experiential world. That is to say, knowledge does not have to match an ontological reality; it merely has to fit into the world we experience. It is well to remember that this is what “empiricism” is about: to attribute an ontological value to “empirical evidence” is the old misconception used in the attempt to turn science into unquestionable dogma.
The Cybernetic Parallel
It is worth mentioning that Piaget, in his later years, found himself in agreement with many of the epistemological fragments produced by cyberneticians. In retrospect, this is not surprising.
As Bateson pointed out, cybernetics differs from the traditional scientific procedure in that it operates by means of constraints rather than causal connections and that, consequently, Darwin’s theory is a cybernetical one, because it explains evolution as the result of nature’s “restraints” on the random variations of organisms (Bateson, 1972). In this context, it is tempting to adapt Paul Feyerabend’s most shocking statement and to say: Anything that goes, goes.[Note 5]
Since cybernetics is mainly concerned with self-regulation and the gaining and maintaining of internal equilibrium, cyberneticians who become interested in the process of knowing will try to see it as a process of self-regulation. I have elsewhere tried to show that there are sound philosophical arguments for the cyberneticians’ contention that cognizing organisms cannot receive anything that could reasonably be called “information” from an external world, and that such “knowledge” as they are able to acquire can be constructed by no one but themselves (Glasersfeld, 1981).
I want to emphasize that this autonomous construction of knowledge should not be considered a form of solipsism, because it is by no means a “free” construction. All its conceptual elements and the structures into which they are built have to prove viable in the flow of experience. On the physical level, their viability is an empirical question in the very sense of the empiricist philosophers. On the conceptual level, on the other hand, viability is a question of logical coherence in the rationalist sense. On both levels, the construction is subject to constraints which separate what is viable from what is not, but the nature of the constraints is inaccessible to the constructing subject, because there is no way of telling whether a failure is due to a flaw in the constructive operations or to an obstacle in the ontological world.
Conclusion
Coming to the end, one might ask: Is it actually the case that the “grand epistemological design” of the Western world has failed? There is still, after all, a majority of people who firmly believe that the knowledge we have gathered through the centuries is representative of an objective world beyond our experiential interface and that it is this and only this correspondence with an ontological reality that makes it true. Those who suggest that this belief is based on an illusion are often considered to be spoilsports or, by the more virulent defenders of the status quo, dangerous heretics.
In fact, the intellectual mood is reminiscent of the 16th century, when the majority of thinkers were still struggling to perpetuate the notion that, being the crowning achievement of God’s creation, mankind’s place in the center of the universe could not be doubted. It took much more than a hundred years before the geocentric myth was generally relinquished. But this change did not in any way weaken what I consider to be another popular myth: the belief that our reason should be sufficient to grasp at least an outline of a world as it really is. Unlike the earlier one, this myth is not kept alive by mere human vanity but rather by the fear of what would follow if it were given up.
As long as we cling to the notion that parts of our experience reflect an objective world that is independent of our knowing, we are not compelled to feel responsible for that world.
When it would be uncomfortable, laborious, or painful to change certain things, we can simply escape by saying: There is nothing we can do, because this is how it is; or, on the personal level: I cannot help it, this is how I am.
Thus, I want to suggest that the profound emotional reactions against the notion that it is ourselves who construct our experiential reality spring from the desire not to acknowledge that no one but ourselves can be held responsible for what we know and what we do.
If my exposition was at all intelligible, it should now be clear that I cannot possibly claim to have presented a “truth” in the traditional sense of that term. I have no intention of claiming such a thing. I present my view as a possible way of thinking, and I do this because I strongly feel that, by relinquishing the ontologically ambitious “grand designs” in epistemology, and replacing them with the conception of knowing as a powerful instrument for achieving a viable fit with our experiential world, we may have a better chance of saving that world before it is too late.
References
Ashby R. W. (1963) Induction, prediction, and decision-making in cybernetic systems. In H. E. Kyburg & E. Nagel (Eds.), Induction: Some current issues (pp. 55–66). Middletown, Connecticut: Wesleyan University Press. (Reprinted in Mechanisms of Intelligence: Ross Ashby’s writings on cybernetics, edited by R. Conant. Seaside, California: Intersystems Publ., 1981.)
Bateson G. (1972) Cybernetic explanation. In Steps to an ecology of mind, (pp. 399–410). New York: Ballantine.
Bernal J. D. (1954) Science in history, Vol. 2: The scientific and industrial revolutions. Cambridge, Massachusetts: MIT Press, 1971. (First published 1954.)
Brush S. G. (1974) Should the history of science be rated X? Science, 183, 1164–1172.
Ceccato S. (1951) Il linguaggio con la tabella di Ceccatieff. Paris: Hermann.
Diels H. (1957) Fragmente der Vorsokratiker. Hamburg: Rowohlt.
Feyerabend P. (1975) Against method. Atlantic Highlands, New Jersey: Humanities Press.
Glasersfeld E. von (1981) An epistemology for cognitive systems. In: G. Roth & H. Schwegler (Eds.), Self-organizing systems (pp.121–131). Frankfurt/New York: Campus.
Helmholtz H. von (1878) Tatsachen in der Wahrnehmung. In P. Hertz & M. Schlick (Eds.), Hermann von Helmholtz: Epistemological writings (M. F. Lowe, Trans.). Dordrecht, Holland: Reidel, 1977.
Huxley T. H. (1887) The progress of science. In A. Castell (Ed.), Selections from the Essays. Northbrook, Illinois: AHM Publ. Co., 1948.
James W. (1880) Great men, great thoughts, and the environment, The Atlantic Monthly, 46, 441–459.
Kant I. (1787) Kritik der reinen Vernunft (2nd edition). Akademieausgabe, Vol. 3.
Locke J. (1690) An essay concerning human understanding.
Lorenz K. (1977) Behind the mirror. New York: Harcourt Brace Jovanovich. (German original, 1973).
Ogden C. K. (Ed.) (1932) Bentham’s theory of fictions. London: Kegan Paul.
Piaget J. (1967) Biologie et connaissance. Paris: Gallimard.
Simmel G. (1895) Ueber eine Beziehung der Selectionslehre zur Erkenntnistheorie. Archiv für systematische Philosophie, 1, 34–45.
Vaihinger H. (1911) Die Philosophie des Als Ob. Presented at the IV. International Congress of Philosophy, Bologna; published in Berlin by Reuther & Reichard, 1913. (Reprinted by Scientia Verlag Aalen, 1986.)
Endnotes
1. I owe this quotation to Silvio Ceccato, who used it forty years ago in a treatise I translated for him (Ceccato, 1951).
2. This has been documented by many authors, e.g. by Bernal, 1954/1971, Vol. 2, p. 665.
3. Helmholtz wrote this as an addition to his famous paper Ueber die Erhaltung der Kraft: Eine physikalische Abhandlung, the 2nd edition of which was not published until 1899; a less concise expression of the same thought, however, can be found in his essay on perception of 1878.
4. Kant, Kritik der reinen Vernunft, B XIII.
5. Feyerabend used the phrase “anything goes” in Against method (1975), p. 23.