CEPA eprint 1330 (EVG-040)

Man-machine understanding

Glasersfeld E. von & Silverman P. (1976) Man-machine understanding. Communications of the Association for Computing Machinery 19(10): 586–587. Available at http://cepa.info/1330
The recent series of letters about understanding [1], [2], [3], [4] has raised many questions but it has done little toward defining, even tentatively, what we have in mind when we say we understand someone. Wilks [2] somewhat pessimistically states “we have no idea about it at all … in a scientific sense,” and he denies that “there really is some definitive process or feeling called understanding or being understood.” On the other hand, he admits that “we have rules of thumb for everyday life.” Since many, if not most, of our interactions with people seem to involve some form of understanding and are probably governed by these rules of thumb, we suggest that, rather than discard them unseen as “unscientific,” we should have a look at them. Much of science, after all, is no more than rules of thumb with their statistical base made explicit.
There are many kinds of understanding: understanding a language, understanding the structure and function of a mechanism, understanding the path and purpose of a process, etc. Though the understanding McLeod and the other correspondents are getting at may comprise some of these, it seems to be a kind in its own right that is involved, for instance, in the common complaint that parents don’t understand their children. To understand someone in that sense means to have constructed an inferential model of the goals and values that motivate his behavior at that time. This presupposes that we possess a cognitive apparatus that is at least as complex as what we are trying to model. Being understood, on the other hand, means crediting the understander both with the capacity to model one’s own goal structure and with having correctly inferred the current goals.
Minot’s [4] stress on “background experience” is a hint in the right direction: our background is undoubtedly responsible for part of our goal structure, and it is an experiential fact that, in hypothesizing other people’s goals, we are unlikely to diverge very far from our own. Indeed, when we say we understand someone, we are implying that, under different circumstances, his goals could conceivably be ours. And that works both ways: in order to feel understood, we must have been given some indication that the other can conceive of our goals as though they were his.
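A toy sketch may make the notion of an inferential goal model concrete. The following Python fragment is purely our illustration and is not proposed in any of the letters; the names (Goal, Agent, infer_goals) are hypothetical, and projecting one's own goal structure onto observed behavior is only the crudest rendering of "inferring by analogy to oneself."

```python
# Purely illustrative sketch: "understanding" as constructing an inferential
# model of another's goals, by projecting one's own goal structure onto the
# behavior actually observed. All names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Goal:
    name: str
    priority: float  # relative weight within the agent's own goal structure


@dataclass
class Agent:
    goals: list[Goal] = field(default_factory=list)

    def infer_goals(self, observed_actions: set[str]) -> list[Goal]:
        """Hypothesize the other's goals: keep those of our own goals
        that would account for the actions we have observed."""
        return [g for g in self.goals if g.name in observed_actions]


# We "understand" the other only insofar as the goals we ascribe could,
# under different circumstances, be our own.
me = Agent(goals=[Goal("avoid_harm", 0.9), Goal("finish_task", 0.6)])
print([g.name for g in me.infer_goals({"finish_task", "take_a_walk"})])
# -> ['finish_task']
```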
Whether we would be inclined to say that a machine understands us depends on its responses and the context in which they occur. They would have to be such as to warrant our belief in three things: (1) that we are dealing with an organism that is purposive in the active, self-regulating sense that Pask [5] calls “purpose for” rather than “purpose of”; (2) that its goal structure is sufficiently complex to be at least partially analogous to ours; and (3) that the organism has actually mapped our present goal setting. The problem is presented quite correctly by Arthur C. Clarke in his Space Odyssey [6]: Hal, the computer, acquires personality, i.e. becomes understandable as a person, when its behavior begins to imply an individual goal structure (destroying the crew of the spaceship). As Bowman, the human partner, comes to understand that goal structure, he credits Hal with his own understanding capabilities and consequently takes care to mask his intentions.
All this, of course, is the result of inductive inference and therefore never certain. Whether or not an organism (person or machine) operates toward a goal can mostly be determined by testing its ability to adjust its behavior to disturbances, and its efforts to overcome obstacles. Even without verbal communication, an organism can behave in such a way that we conclude it is making predictions about our behavior, and that is the beginning of being understood. Language then adds another fertile field for inference. Every child in his early cognitive development must learn to infer the nature of other people’s goal structures as well as their particular intents. The task is made difficult by the fact that people can always change their goals. Hence our understanding of another’s goal structure may at any moment be invalidated by unanticipated behavior, and if it is, our behavior may inform the other that he was not adequately understood. All too often, in retrospect, the understanding with which we credited another turns out to have been a misinterpretation, or we discover that what we believed to have been understanding was an illusion created by the other’s deliberate tricks. In the first case further experience may lead to correct interpretation; in the second there is no remedy because tricks, once they are discovered, imply deceit (i.e. an ulterior motive that has been camouflaged), and that instantly turns the feeling of being understood into the feeling of being abused. This distinction, which we have learned to make in our interactions with people, may preclude our feeling understood by a machine until we are dealing with machines that we can credit with a wholly autonomous goal structure.
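The disturbance test mentioned above can be given a minimal operational reading. The sketch below, in Python, is our own illustration rather than anything stated in the letters: a system is counted as goal-directed if, after an imposed disturbance, its subsequent behavior still reduces the distance to the goal. The function name and the numerical details are arbitrary assumptions.

```python
# Minimal sketch, assuming a toy one-dimensional system: an organism is
# judged goal-directed if, after a disturbance, its behavior still reduces
# the distance to the goal. All names and numbers here are arbitrary.
def is_goal_directed(step, state: float, goal: float,
                     disturbance: float = 5.0, trials: int = 20) -> bool:
    """step(state, goal) -> new state. Perturb the system, let it respond,
    and check whether it has compensated (overcome the obstacle)."""
    state += disturbance                      # impose the disturbance
    error_before = abs(goal - state)
    for _ in range(trials):                   # observe the response
        state = step(state, goal)
    return abs(goal - state) < error_before   # did it adjust toward the goal?


# A purposive (self-regulating) system closes the gap; an aimless one
# changes state without reference to the goal.
purposive = lambda s, g: s + 0.5 * (g - s)
aimless = lambda s, g: s + 1.0
print(is_goal_directed(purposive, state=0.0, goal=10.0))  # True
print(is_goal_directed(aimless, state=0.0, goal=10.0))    # False
```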
References
[1] McLeod, D.J. Letter, Comm. ACM 18, 9 (Sept. 1975), 546.
[2] Wilks, Y. Letter, Comm. ACM 19, 2 (Feb. 1976), 108.
[3] Weizenbaum, J., and McLeod, D.J. Letter, Comm. ACM 19, 6 (June 1976), 362–363; Wilks, Y. Letter, Comm. ACM 19, 7 (July 1976), 422–423.
[4] Minot, O.N. Letter, Comm. ACM 19, 7 (July 1976), 422.
[5] Pask, G., cited in Bateson, M.C. Our Own Metaphor. Knopf, New York, 1972, p. 231.
[6] Clarke, A.C. 2001: A Space Odyssey. Signet Books, New York, 1968.