CEPA eprint 1330 (EVG-040)
Glasersfeld E. von & Silverman P. (1976) Man-machine understanding. Communications of the ACM 19(10): 586–587. Available at http://cepa.info/1330
The recent series of letters about understanding [1, 2, 3] has raised many questions but it has done little toward defining, even tentatively, what we have in mind when we say we understand someone. Wilks [2] somewhat pessimistically states “we have no idea about it at all … in a scientific sense,” and he denies that “there really is some definitive process or feeling called understanding or being understood.” On the other hand, he admits that “we have rules of thumb for everyday life.” Since many, if not most, of our interactions with people seem to involve some form of understanding and are probably governed by these rules of thumb, we suggest that, rather than discard them unseen as “unscientific,” we should have a look at them. Much of science, after all, is no more than rules of thumb with their statistical base made explicit.
There are many kinds of understanding: understanding a language, understanding the structure and function of a mechanism, understanding the path and purpose of a process, etc. Though the understanding McLeod and the other correspondents are getting at may comprise some of these, it seems to be a kind in its own right that is involved, for instance, in the common complaint that parents don’t understand their children. To understand someone in that sense means to have constructed an inferential model of the goals and values that motivate his behavior at that time. This presupposes that we possess a cognitive apparatus that is at least as complex as what we are trying to model. Being understood, on the other hand, means crediting the understander both with the capacity to model one’s own goal structure and with having correctly inferred the current goals.
Minot’s [4] stress on “background experience” is a hint in the right direction: our background is undoubtedly responsible for part of our goal structure, and it is an experiential fact that, in hypothesizing other people’s goals, we are unlikely to diverge very far from our own. Indeed, when we say we understand someone, we are implying that, under different circumstances, his goals could conceivably be ours. And that works both ways: in order to feel understood, we must have been given some indication that the other can conceive of our goals as though they were his.
Whether we would be inclined to say that a machine understands us depends on its responses and the context in which they occur. They would have to be such as to warrant our belief in three things: (1) that we are dealing with an organism that is purposive in the active, self-regulating sense that Pask [5] calls “purpose for” rather than “purpose of”; (2) that its goal structure is sufficiently complex to be at least partially analogous to ours; and (3) that the organism has actually mapped our present goal setting. The problem is presented quite correctly by Arthur C. Clarke in his Space Odyssey [6]: Hal, the computer, acquires personality, i.e. becomes understandable as a person, when its behavior begins to imply an individual goal structure (destroying the crew of the space ship). As Bowman, the human partner, comes to understand that goal structure, he credits Hal with his own understanding capabilities and consequently takes care to mask his intentions.
All this, of course, is the result of inductive inference and therefore never certain. Whether or not an organism (person or machine) operates toward a goal can mostly be determined by testing its ability to adjust its behavior to disturbances, and its efforts to overcome obstacles. Even without verbal communication, an organism can behave in such a way that we conclude it is making predictions about our behavior, and that is the beginning of being understood. Language then adds another fertile field for inference. Every child in his early cognitive development must learn to infer the nature of other people’s goal structures as well as their particular intents. The task is made difficult by the fact that people can always change their goals. Hence our understanding of another’s goal structure may at any moment be invalidated by unanticipated behavior, and if it is, our behavior may inform the other that he was not adequately understood. All too often, in retrospect, the understanding with which we credited another turns out to have been a misinterpretation, or we discover that what we believed to have been understanding was an illusion created by the other’s deliberate tricks. In the first case further experience may lead to correct interpretation; in the second there is no remedy because tricks, once they are discovered, imply deceit (i.e. an ulterior motive that has been camouflaged), and that instantly turns the feeling of being understood into the feeling of being abused. This distinction, which we have learned to make in our interactions with people, may preclude our feeling understood by a machine until we are dealing with machines that we can credit with a wholly autonomous goal structure.
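The disturbance test described above — attributing purposiveness to an organism when it compensates for perturbations and still reaches its goal — can be sketched in code. The following Python fragment is our illustration, not anything from the letter; the agents and parameters are hypothetical, chosen only to contrast a self-regulating (goal-directed) mover with a ballistic (merely goal-coincident) one.

```python
# A minimal sketch of the disturbance test for goal-directedness.
# An agent counts as purposive ("purpose for") if, after being pushed
# off course, its subsequent behavior compensates and it still reaches
# the goal; a ballistic agent, whose motion merely happened to point
# at the goal, does not recover.

def goal_seeking_agent(position, goal):
    """One control step: act to reduce the error between state and goal."""
    error = goal - position
    step = max(-1.0, min(1.0, error))  # bounded corrective action
    return position + step

def ballistic_agent(position, goal):
    """One fixed step regardless of error: no self-regulation."""
    return position + 0.2

def survives_disturbance(agent, start, goal, disturb_at, disturbance, steps=50):
    """Run the agent, inject a disturbance partway, and check recovery."""
    position = start
    for t in range(steps):
        if t == disturb_at:
            position += disturbance  # push the agent off course
        position = agent(position, goal)
    return abs(position - goal) < 1e-6

# The self-regulating agent overcomes the obstacle; the ballistic one does not.
print(survives_disturbance(goal_seeking_agent, 0.0, 10.0, 10, -5.0))  # True
print(survives_disturbance(ballistic_agent, 0.0, 10.0, 10, -5.0))     # False
```

The point of the sketch is only that the attribution is behavioral and inductive: we infer a goal from observed compensation, and, as the letter notes, such an inference can always be invalidated by later behavior.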
[1] McLeod, D.J. Letter, Comm. ACM 18, 9 (Sept. 1975), 546.
[2] Wilks, Y. Letter, Comm. ACM 19, 2 (Feb. 1976), 108.
[3] Weizenbaum, J., and McLeod, D.J. Letters, Comm. ACM 19, 6 (June 1976), 362–363; Wilks, Y. Letter, Comm. ACM 19, 7 (July 1976), 422–423.
[4] Minot, O.N. Letter, Comm. ACM 19, 7 (July 1976), 422.
[5] Pask, G., cited in Bateson, M.C. Our Own Metaphor. Knopf, New York, 1972, p. 231.
[6] Clarke, A.C. 2001: A Space Odyssey. Signet Books, New York, 1968.