CEPA eprint 1343 (EVG-053)

Some problems of intentionality [Commentary on Haugeland’s “The nature and plausibility of cognitivism”]

Glasersfeld E. von (1978) Some problems of intentionality [Commentary on Haugeland’s “The nature and plausibility of cognitivism”]. The Behavioral and Brain Sciences 1(2): 252–253. Available at http://cepa.info/1343
Haugeland’s paper will be welcomed by seasoned as well as incipient cognitivists as an eminently helpful and stimulating contribution. If it should not convert staunch followers of the behaviorist gospel to the view that “the Cognitive approach to psychology offers … a science of a distinctive form,” it will probably not be Haugeland’s fault. His lucid exposition of the three types of explanation cuts through much of the fog that has been created by enthusiastic but often inaccurate promulgation of cybernetics, systems theory, and structuralism – all of which imply the discrimination between “morphological” and “systematic” coordination. Haugeland should also be congratulated on his unequivocal statement of the fact that explanatory reductions do not “supplant the explanations they reduce” and on his courageous frontal approach to the problem of intention. Using chess as an example in his analysis of how one may come to interpret a black box as intentional is a good didactic simplification. I would suggest, however, that the very element whose exclusion makes that example simpler than other, less conventional activities might lead us to take a somewhat less positivistic stance than does the author in his later evaluation of Cognitivist theory.
Condition (iii) for interpreting an object as an IBB is that the object’s outputs “consistently make reasonable sense.” Haugeland is aware of the problems this expression raises, but he says that “it is seldom hard to recognize in practice” how “reasonable sense” should be defined. I think we have to be more explicit about this. In the context of several games of chess it will, as a rule, be easy to decide whether or not a presumed player’s moves, on the whole, make sense. There will even be occasions when this can be decided about a single move. This is so because within the context of chess we know a priori what a player’s goal has to be, and there will be no doubt whatsoever about recognizing it when it is achieved. It is, indeed, a matter of accepted rules, and a person or box that has no conception or a deviant conception of what constitutes “mate” will not be considered a chess player at all. However, when we come to consider other activities that are not so obviously governed by a set of explicit conventional rules, the situation is much more obscure because we have no a priori knowledge of the observed subject’s goals in terms of which his or her actions could be judged to make sense and, hence, to be intentional.
Jurists, who are frequently faced with the problem of deciding whether or not a person’s action was intentional, have created the rather powerful maxim: A person will be presumed to intend the natural, probable consequences of his acts. This works well enough in court, because there it is tacitly assumed that people have much the same ideas (knowledge) as to what are natural, probable consequences of the acts under consideration. But if, as philosophers or scientists, we are faced with a black box, a wild-living chimpanzee, or a person from a significantly different cultural background, we are in no way justified in making that tacit assumption, because we simply do not know what they believe to be natural, probable consequences of acts. Therefore, as long as we remain passive observers, we cannot be sure whether or not their acts are intentional (cf. von Glasersfeld & Silverman, 1976).
Fortunately, as Hofstadter (1941) has suggested, there are ways and means for an observer to test hypotheses about an observed organism’s intentions or goals, by creating obstacles or, more generally, disturbances for the organism (cf. Powers, 1973, for tentative implementations). Epistemologically, such tests are equivalent to any other hypothesis tests, in that they may tell us whether or not our hypothesis remains a viable explanation for our observations, but not whether this explanation is “true” or “false” in any absolute ontological sense.
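A minimal sketch of the logic of such a test might look as follows (in Python; the observe/disturb interface, the simulated “organism,” and the numerical criterion are illustrative assumptions of mine, not anything specified by Hofstadter or Powers):

import random
import statistics

def test_controlled_quantity(observe, disturb, hypothesized_reference,
                             trials=50, tolerance=0.2):
    # Apply disturbances and check whether the observed quantity nevertheless
    # stays near the hypothesized reference value. If it does, the goal
    # hypothesis remains viable; if not, it is discarded -- in neither case
    # is it shown to be "true" in an ontological sense.
    deviations = []
    for _ in range(trials):
        disturb(random.uniform(-1.0, 1.0))           # create an obstacle/disturbance
        deviations.append(abs(observe() - hypothesized_reference))
    return statistics.mean(deviations) < tolerance

# Illustrative "organism" that counteracts disturbances to a quantity x:
state = {"x": 5.0}
def disturb(d): state["x"] += d
def observe():
    state["x"] += 0.9 * (5.0 - state["x"])           # the organism compensates
    return state["x"]

print(test_controlled_quantity(observe, disturb, hypothesized_reference=5.0))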
There is yet another epistemological aspect to be considered. The intentions that an observer attributes to an observed organism are necessarily determined in a limiting sense by the set of goals that are conceivable to the observer as well as by his beliefs concerning rules, methods, and activities that are likely to lead to the attainment of these goals. This observer-dependent “conjectural” character of explanations, however, can hardly be said to be unique or discrediting to the Cognitivist approach to “systematic explanation.” It seems to be the character of all scientific explanation.
Another fairly important issue is involved in the hypothetical example of the brain hologram. The proposed “fixed” association between an “important substructure in chess positions” and “powerful or dangerous” moves would have to be considered either innate or acquired. If we decided that it was innate, we would have to discard any intentional explanation because, as long as we believe the theory of evolution, we would be obliged to say that the association was the result of accidental variation and unintentional selection. If it was acquired, however, it would have to have been “holographed” into that brain at some prior time by someone’s goal-directed, intentional selection. If the brain now shows no evidence of making new selective associations of that kind, it would be classified as an IBB whose goals have been set by some IPS that decided which positions were to be associated with which moves. If, on the other hand, the brain were still making new selective associations, the information-processing and decision-making capability would have to reside within it (at least if we exclude hypnosis or telepathy) and, qua whole brain, it would be classified as IPS. This distinction, I hasten to add, is obviously based on the distinction Pask (1969, p. 23) made between “purpose of” and “purpose for.”
That point is relevant also to the problem Haugeland raises with regard to skills. I can see no reason why an IPS of a certain level of complexity should not have the capability of “deliberately and thoughtfully” building up, say, a sequence of motor acts under appropriate perceptual feedback control for a given recurrent purpose, and then “automating” the whole arrangement by giving it direct access to the sensory signals that were originally used for the control of the activity in the central processor. Since we can build a thermostat that perfectly realizes the purpose of maintaining, without consciousness on its or on our part, the room temperature we set (purpose for), there seems to be no logical obstacle to our automating (after deliberate compilation) the general motor pattern for hitting a ping-pong ball in such a way that only those parameters that determine where the ball will go remain under our conscious control. People who have to learn to double-declutch when shifting gears (e.g., in competitive sports car racing) seem to do exactly that. The movements of foot on clutch and hand on gear lever come relatively quickly under autonomous control; gauging the intermediary jab on the gas pedal according to perceptual signals indicating engine rotation and actual speed of the car, however, takes very much longer to become “automatic” and probably never does so entirely. The salient feature in all this is that, experientially, we are all aware of the fact that there are many quite complicated motor activities whose control, after a period of more or less conscious supervision, can be relegated to an unconscious level. The application of control theory to these cases seems promising because it supplies a conceptual model for the strange observation that, while the execution of the motor sequence seems wholly unconscious, its direction (i.e., the setting of particular goals or reference values that determine each individual execution) remains under conscious control.
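To make the two levels concrete, here is a minimal sketch (in Python; the thermostat stands in for any automated motor pattern, and the class, names, and numbers are illustrative assumptions of mine, not anything given in the commentary or in Powers, 1973):

class Thermostat:
    def __init__(self, reference, gain=0.5):
        self.reference = reference     # the goal set from outside: "purpose for"
        self.gain = gain

    def set_reference(self, reference):
        # The only parameter left under deliberate, "conscious" control.
        self.reference = reference

    def step(self, sensed_temperature):
        # Automated execution: reduce the error without any further decision.
        error = self.reference - sensed_temperature
        return self.gain * error       # heat output (negative values would cool)

room = 15.0
controller = Thermostat(reference=21.0)
for _ in range(20):
    room += controller.step(room)      # the loop runs "unconsciously"
controller.set_reference(18.0)         # re-setting the goal, not re-planning the execution

The point of the sketch is only that changing the reference value changes what the loop achieves without touching the machinery that achieves it.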
It is to be hoped that Haugeland’s paper, because it supplies a number of very clear methodological definitions and draws attention to problems that psychologists, by and large, would rather avoid, will not only be discussed but also acted upon. There are, I believe, good reasons to predict that both empirical and theoretical investigation of the “hurdles” he mentions will show the Cognitivist’s systemic approach to be rather more powerful and fertile than his conservative evaluation might lead one to expect.
References
Hofstadter A. (1941) Objective teleology. Journal of Philosophy 38(2): 29-39.
Pask G. (1969) The meaning of cybernetics in the behavioural sciences. In: Rose J. (ed.) Progress of cybernetics. Gordon and Breach, New York: 15-44.
Powers W. T. (1973) Behavior: The control of perception. Aldine, Chicago.
von Glasersfeld E. & Silverman P. (1976) Man-machine understanding. Communications of the Association for Computing Machinery (Forum) 19(10): 586-587. http://cepa.info/1330