CEPA eprint 1335 (EVG-045)

Linguistic communication: Theory and definition

Glasersfeld E. von (1977) Linguistic communication: Theory and definition. In: Rumbaugh D. M. (ed.) Language learning by a chimpanzee. Academic Press, New York: 55–71. Available at http://cepa.info/1335
Table of Contents
Introduction
Language and speech
What is communication?
The restoration of purpose
From sign to symbol
The semantic aspect of syntax
The recognition of language
References
Introduction
Until about 20 years ago, there seemed to be no urgent need to specify very rigorously what we meant when we used the word “language.” It was an accepted fact that language belonged exclusively to humans and that there could be no serious contender for that monopoly. Then, in the 1950s, came the “information explosion” and with it the sudden interest in computing machines. Would these machines ever be able to “think” and to handle human language? The suggestion caused a good deal of indignation, and debates tended to become fierce among both laymen and scientists (Taube, 1961; Armer, 1963; Minsky, 1963). The debate may still flare up today, but the fury has diminished. The kinds of task that computers have been celebrated for doing during the intervening years are not likely to threaten man’s monopoly of “thinking.”
However, a different contender has appeared on the scene. The publication of work with chimpanzees by the Gardners (1971), Premack (1971), Fouts (1973), and ourselves (Rumbaugh, von Glasersfeld, Warner, Pisani, Gill, Brown, & Bell, 1973) once more called into question the assumption that Homo sapiens is the only organism that can handle language. Oddly enough, these publications have not stirred up indignation among laymen. On the contrary, there is a great deal of rather benevolent interest in just how well the various chimpanzees are doing. Only among scientists have there been some signs of alarm. But in contrast to the debate about “thinking,” this time attempts are made to define what the contested term means. That is just as well, for if we do not state what it is we are arguing about, we could go on arguing forever. Thus it is important to provide an acceptable definition of “language,” and to outline some criteria by which we can recognize language when we find it.
In what follows I shall try to formulate criterial characteristics of the phenomenon we have in mind when we use the term “language,” and then to suggest how those criteria might be applied to determine whether or not a given organism is using a linguistic system of communication. I would like to point out that, although language also plays an important role in the cognitive development and functioning of individual organisms, we shall be concerned here only with its communicatory function.
Language and speech
The fact that “language” has always implied human language has led to a confusion of two terms, “speech” and “language,” which, although certainly related, are not at all the same. Because human language is presumed to have existed as speech long before the development of other channels such as gesture, hieroglyphs, or ideograms, it was taken for granted that all language had to be spoken. Many linguists thus came to consider “speech” and “language” as quasi-synonyms, and consequently felt justified in studying language by investigating the acoustic, physical manifestations of speech. I am not suggesting that phonology is not an interesting area of study, but the isolation and classification of speech sounds will not get us very far in an attempt to define language. Sapir, one of the fathers of American linguistics, stated this point quite clearly in 1921:
A speech-sound localized in the brain, even when associated with the particular movements of the “speech organs” that are required to produce it, is very far from being an element of language. It must be further associated with some element or group of elements of experience, say a visual image or a class of visual images or a feeling of relation, before it has even rudimentary linguistic significance [p. 10].
Yet, in the four decades that followed, linguistics focused mainly on speech sounds and disregarded linguistic function and meaning (see, e.g., Bloomfield, 1933; Greenberg, 1954; Hockett, 1954). This approach was rather like attempting to study computational operations by analyzing transistors and all the electronic devices that flip and flop in a computer, while disregarding the programming. The program in this analogy corresponds to what we call “language,” and the mechanisms that implement it (transistors, etc.) correspond to “speech.” Computational operations can typically be carried out by very different sorts of machines, electric, mechanical, or hydraulic. Electronic machines are preferred mainly because they are faster and because it is much easier to solder wires than to make gears and hydraulic valves. Similarly, the acoustic channel of speech is perhaps the most efficient and practical way to implement a linguistic system for living organisms as we know them, but this does not mean that language must be implemented through speech.
The emphasis on phonology and the concomitant lack of interest in semantics (the very aspect without which a system cannot have communicatory function) sprang from Bloomfield, whose theoretical stance was close to Watson’s stark “behaviorism,” and who held that a science of semantics could begin only when it was reduced to a “mechanistic psychology,” which, in turn, must be reduced to physiology (Ullman, 1959, p. 7; Wells, 1961, p. 274). Clearly, very rigid behaviorists could have nothing to do with what Sapir had called “elements of experience,” “visual images,” or “feelings of relation.” Yet, if language does function as a means of communication, an analysis of it will have to take the direction pointed out by Sapir (1921):
The mere sounds of speech are not the essential fact of language, which lies rather in the classification, in the formal patterning, and in the relating of concepts. Once more, language, as a structure, is on its inner face the mold of thought. It is this abstracted language, rather more than the physical facts of speech, that is to concern us in our inquiry [p. 22].
What is communication?
So long as we speak as ordinary people to other people, we are fairly sure that we know what the word “communication” means. Even when we hear of “communication gaps,” we have a good idea of what is involved. This feeling of confidence evaporates, however, when we begin to read what psychologists, especially animal psychologists, have to say about communication. In their usage, the term “communication” is, as Thomas Sebeok (1975) has said, “an undefined prime.” It is not that attempts at definition have not been made; on the contrary, there have been quite a few. But the trouble is that, to date, these attempts have not helped us to separate what we know is not communication from the manifestations that we feel are communication. When we read, for instance: “Communication can be said to occur whenever the activities of one animal influence the activities of another animal [Alexander, 1960, p. 38],” we do not have to think long to conjure up examples that meet this criterion and are still far from anything we would normally call communication. Alexander’s definition may apply to “interaction,” but it is far too general to isolate events of communication. Under this definition, pushing, pulling, fighting, wounding, and even eating another animal would all qualify as “communication.” Its only useful feature is its implicit stipulation that there must be at least two animals for the kind of transaction we are interested in. That point must be made, as there exists a legitimate use of the term “communication” to describe the interaction within one organism, say, between the eye and the brain, or the brain and the muscle.[Note 1]
The very insufficiency of any attempt to define “communication” in terms of action and reaction draws attention to some of the aspects of the phenomenon that need to be specified. First, as has been clear for a long time (cf. Rosenblueth, Wiener, & Bigelow, 1943; Haldane, 1955), communication is not a mechanical phenomenon, in that the amount of energy involved in the sender’s transmission does not determine what the receiver does. In other words, in communication there is no thermodynamically calculable relationship between the energy change that constitutes the signal and the energy expended as a result of receiving the signal. Instead, in communication there is always an exchange of “information,” and it is the informational character of all communication processes that sets them apart from mechanical or chemical interaction. Of course, the information aspect has to do with meaning, but the connection is not as simple as it may seem to the ordinary language user.
In the more than three decades that have passed since the publication of the pioneering paper by Rosenblueth et al. (1943), the differences between mechanical and informational processes have been made very clear. We now have two complementary theories that allow us to define “communication” unequivocally. On the one hand there is the mathematical theory of communication (Shannon, 1948; Wiener, 1948a, b), which deals with the transmission of signals, their recognition by a receiver, and the code by means of which they are translated into messages. On the other hand, there is information theory (MacKay, 1954, 1969), which deals with the sending and interpreting of messages. In the first, “information” is a purely quantitative concept; in the second, a qualitative aspect is added. Strictly speaking, therefore, what we ordinarily call “meaning” plays a part only in the second theory. The first theory contains only a weak counterpart to “meaning” in the processes of encoding and decoding. Wiener (1948a) gives an admirably clear example of those processes:
If I send one of those elaborate Christmas or birthday messages favored by our telegraph companies, containing a large amount of sentimental verbiage coded in terms of a number from one to one hundred, then the amount of information which I am sending is to be measured by the choice among the hundred alternatives, and has nothing to do with the length of the transcribed “message” [p. 202].
The “encoding” takes place when the telegraph employee gives me the list of one hundred messages, and instead of spelling out my choice, I merely say, “Number 55.” Since there is a fixed, conventional connection between each of the messages listed and some number between one and one hundred, another employee, say one in Honolulu, who receives the code number 55 can at once “decode” it by selecting the printed form number 55 that carries the message I have chosen. Note that there is, of course, a second coding and decoding involved in this transaction, namely the transformation (according to Morse code) of the number 55 into the corresponding electric impulses that do the actual traveling to Honolulu and which, in technical terms, constitute the signal. This second transformation is analogous to the first, since it, too, is done according to a preestablished fixed coding system. In both these transformations the code, i.e., the total set of fixed correspondences, must be known to both the transmitter and the receiver, and it must be the same code at both ends of the channel. The code lists the conventionally established significations of “artificial signs,” sometimes referred to as “meaning” (e.g., a certain sequential pattern of impulses means “55,” or the number 55 means “Many happy returns!”). To use the term in this sense is confusing, however, because this sort of “meaning” is always fixed; that is, it must have been agreed on by the code users. Used in this sense, “meaning” would refer in speech, for example, to the rules that govern the conversion of a word or sentence into speech sounds according to the phonetic conventions of the language being used. These rules have nothing to do with what we ordinarily call the “meaning” of a word or sentence. Thus I may have satisfactorily decoded someone’s speech sounds and be quite sure of what he has said to me, yet, at the same time, I may be quite unsure as to what he meant by it.
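Because the fixed-code sense of “meaning” described above is nothing more than a shared lookup table, it can be made concrete in a few lines of code. The sketch below is purely illustrative and not part of the original text: the code book, the greeting texts, and the function names are hypothetical, but the logic mirrors Wiener’s example, including the point that the quantity of information is determined by the choice among one hundred alternatives rather than by the length of the decoded message.

```python
import math

# Illustrative sketch (hypothetical code book, not from the original chapter):
# encoding and decoding with a fixed, conventional code shared by both ends.
CODE_BOOK = {n: f"Prefabricated greeting #{n}" for n in range(1, 101)}
CODE_BOOK[55] = "Many happy returns!"  # the message used in the example above

def encode(message: str) -> int:
    """Sender side: replace the chosen message by its agreed-upon number."""
    for number, text in CODE_BOOK.items():
        if text == message:
            return number
    raise ValueError("message is not in the shared code book")

def decode(signal: int) -> str:
    """Receiver side: select the listed message linked to the received number."""
    return CODE_BOOK[signal]

signal = encode("Many happy returns!")  # -> 55
print(decode(signal))                   # -> "Many happy returns!"

# In the purely quantitative sense, the information transmitted is the choice
# among 100 equally likely alternatives: log2(100) ≈ 6.64 bits, regardless of
# how long the decoded text happens to be.
print(f"{math.log2(len(CODE_BOOK)):.2f} bits")
```

Nothing in such a table captures what the sender meant by the greeting; that interpretive step belongs to information theory, which the following paragraphs take up.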
Thus, decoding a signal and interpreting a message are alike in one respect: In both cases one makes a selection. In decoding, one selects the item (i.e., the message) that has been conventionally linked to the signal at hand. In interpreting, one selects the meaning of the message. The difference is that in interpreting, the possible meanings are not preestablished in a conventionally fixed list. There are several reasons – logical, epistemological, and psychological[Note 2] – why this is so, and they all substantiate the general observation that a message cannot be treated as a detached, independent item the way a signal can. In order to interpret a message, the receiver must take into account a “context,” i.e., aspects of his own present state, aspects of the sender’s state, and above all an implicit or explicit hypothesis as to why the message was sent. Information theory has provided a model by means of which we can approach this very complex process of interpretation (MacKay, 1954, 1969, 1972).
The messages in information theory as well as the signals in the theory of communication are goal directed, in that the source sends them in order to achieve a certain result. This is not to say that there must always be an individual addressee; “to whom it may concern” messages are just as purposive as addressed messages. Finally, it is important to realize that information theory is the more comprehensive of the two models, since it covers not only information conveyed via artificial signs in communicatory messages, but also information obtained via the perception of natural signs that enable the perceiver to make inductive inferences.
The distinction between “natural” and “artificial” signs was made by Susanne Langer (1948, p. 59) on the basis of her logical analyses and independently of the technical theories. I have shown elsewhere (von Glasersfeld, 1974) how closely her ideas match those of the cyberneticist and how necessary her distinction is if we want to define “communication.” Charles Hockett, in his description of human language by means of design features, approaches the characteristic of artificiality by his conditions of “arbitrariness” and “traditional transmission” (Hockett, 1960a, b, 1963; Hockett & Altmann, 1969). An “arbitrary” sign, in Hockett’s definition, is one that has no “iconic” relation (perceptual analogy) to what it signifies, and therefore anyone who does not yet know such an arbitrary sign cannot derive its meaning from the physical characteristics of the sign itself. It can be acquired only by learning, either through “traditional transmission” from the older generation, or, if it happens to be a newly created sign, by agreement with the other users. So far as communication is concerned, however, it is not the arbitrariness of a sign that makes it communicatory but its artificiality. The well-known road sign that consists of a capital Z is certainly “iconic,” in that it is a stylized picture of the curve to which it refers, but it is communicatory because it is artificial (since it is not part of an actual curve) and because it was put there for the specific purpose of warning approaching motorists.
The concept of “purpose” is essential for the definition of communication, and the purpose has to be on the side of the source or sender (Burghardt, 1970; Thorpe, 1972, pp. 46-47). If someone whom I do not want to offend has cornered me and is telling me an interminable story, I may reach the point where I can no longer suppress a yawn. If he notices it, he may, in spite of my efforts to look interested, infer that I am bored. This interpretation, I maintain, must not be called “communication.” It is no different from the inductive inferences we draw from practically everything we perceive; to make such inferences simply requires experience with the phenomena in question. If we see smoke, we infer that there is a fire; if we see a flash of lightning, we infer that there will be thunder (and vice versa); and if someone yawns, we infer that he is either tired, bored, or both. Returning to the example, if the person telling me the boring story happens to be someone whom I do not need to indulge, I may react differently. I may simply say, “That’s the third time you’ve told me that story,” or “Let’s talk about something else.” In either case I would be communicating to him that I would like him to stop boring me. There could be no doubt on my side about the purposiveness of my communication.
The restoration of purpose
For a long time, any mention of “purpose” was considered taboo by many scientists. The reason for this lay in the fact that the concept of purposive action or goal directedness had been tied up with final causes and what was considered to be Aristotelian teleology. It was believed that talk of purpose inevitably entailed the belief that something in the future could determine what was taking place now or had happened in the past. Ayala (1970) has questioned whether such an inversion of the cause-effect sequence was, in fact, what Aristotle had in mind. However that may be, there can be no doubt that the authorities who, in this century, proscribed the use of “purpose” in scientific explanation firmly believed that it involved some such unscientific conception. Ralph Barton Perry (1921), perhaps the most astute behaviorist philosopher, expressed it very clearly:
That a reference to the future as in some sense governing the act, is an essential feature of the traditional conception of purpose appears from the commonest terms of the teleological vocabulary, such as “for the sake of,” “in order to,” “with a view to,” “in fear of,” “in the hope of,” “lest,” etc. [p. 103].
Perry’s attempts to avoid “reference to the future” in his theory of behavior are remarkable because with his formulation of “determining tendencies” (which are based on past experience) he comes so very close to the cyberneticist’s definition of “purpose.” The fact that he did not quite get there was probably due to the lack of a functional model that could demonstrate how an experientially acquired condition or set of conditions could take over the very function that the traditional teleologists ascribed to the future.
Twenty years later, when Hofstadter (1941) published his brilliant analysis, the lack of a functioning model was certainly the reason why he was careful to ascribe descriptive but not explanatory power to his definition of “objective teleology.” It is worth looking at his summary of that definition because it foreshadows with remarkable precision the cybernetic model that was built in the years that followed.
Thus the unitary attribute of the teleological actor is not the possession of end alone, or sensitivity alone, or technique alone, but of all three in inseparable combination. However, it is also true that although they can not be separated in the unitary attribute they may nevertheless be analyzed out independently by the use of a plurality of acts of the same agent [italics in original, pp. 34-35].[Note 3]
If we substitute the modern cyberneticist’s terms (e.g., Powers, 1973) for the three components, we have reference value (for “end”), sensory function (for “sensitivity”), and effector function (for “technique”). We know that in an actual feedback-control system these three components can never be separated because the operation of the system is dependent on their circular arrangement. This arrangement is such that no one point can be isolated as initial “cause” and no one point can be isolated as terminal “effect.” The system operates as a unit and, to use the fashionable term of general systems theory, as “a whole.” Nevertheless, as Hofstadter accurately foresaw, the observer can assess the characteristics of the three components by considering the system’s behavior over time and with regard to different disturbances.
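To make the circular arrangement just described more tangible, here is a minimal sketch of a negative-feedback loop with the three components named above: a reference value, a sensory function, and an effector function. The numbers and function names are assumptions chosen for illustration only; the point is that the loop is closed, so that no single step can be singled out as initial cause or terminal effect.

```python
# Minimal illustrative sketch (assumed values, not from the chapter) of a
# negative-feedback loop: reference value ("end"), sensory function
# ("sensitivity"), and effector function ("technique") in a closed circle.
def run_control_loop(reference: float, environment: float, steps: int = 20) -> float:
    gain = 0.5  # effector strength; an arbitrary, assumed value
    for _ in range(steps):
        perceived = environment        # sensory function: perceive the current state
        error = reference - perceived  # compare perception with the reference value
        environment += gain * error    # effector function: act to reduce the discrepancy
        # The consequences of acting feed back into the next perception,
        # closing the loop: cause and effect cannot be separated pointwise.
    return environment

print(run_control_loop(reference=21.0, environment=15.0))  # converges toward 21.0
```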
It is surely one of the most intriguing aspects of the intellectual history of Western civilization that whereas functioning implementations of the principle of “negative feedback” had been designed and built since the third century B.C. (Mayr, 1970), the principle itself with all its implications for the behavior of living organisms and the nervous system was formulated only about three decades ago (Rosenblueth et al., 1943). Since then, we have seen the proliferation of “goal-seeking” devices that incorporate purposes in no uncertain terms; but at the same time, we still have influential people who consider “purpose” one of the fictitious “perquisites of autonomous man,” a creature whose “abolition has long been overdue” (Skinner, 1971, pp. 13, 191). To rant against the concept of goal-directed or purposive behavior at a time when automatic pilots can keep an aircraft on a preestablished course and missiles can home in on a moving target is hardly a sign of objectivity. With the advent of cybernetics and control systems, the concept of purpose has been given not only scientific but also practical technological validity, in that we can use it to construct functioning mechanisms. That, to my knowledge, is the best validation of a scientific principle we can ask for.
I do not intend to suggest that all the behavior of living organisms must necessarily be purposive. Reflexive behavior, for instance, cannot be classified as purposive because it is, by definition, a linear cause-effect chain not under the control of a closed feedback loop. Hinde and Stevenson (1970) have suggested that behavior must be analyzed extremely carefully and, above all, with regard to how the consequences of an activity affect that activity. This is a crucial question for the observer who wants to decide whether or not an organism’s behavior is goal directed. But the question is not quite as simple as it seems. In an organism of a certain complexity, no one feedback loop is an independent entity with a constant reference value or “goal.” As MacKay (1966, 1967), Powers (1973), and in an elementary way Miller, Galanter, and Pribram (1960) have indicated, an organism must be viewed as a hierarchical system of control loops, in which the reference value of one unit is itself controlled by another.
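The hierarchical arrangement attributed above to MacKay, Powers, and Miller, Galanter, and Pribram can be sketched in the same illustrative spirit. In the toy fragment below (hypothetical names and gains, not any author’s actual model), the higher-order unit never acts on the environment directly; its only output is a new reference value for the lower-order loop.

```python
# Toy sketch of hierarchical control (assumed gains and names): the outer
# loop's "action" is simply to set the reference value of the inner loop.
def inner_loop(reference: float, perceived: float) -> float:
    """Lower-order unit: return an action that reduces its own error."""
    return 0.5 * (reference - perceived)

def outer_loop(goal: float, perceived_outcome: float) -> float:
    """Higher-order unit: return a fresh reference value for the inner loop."""
    return perceived_outcome + 0.8 * (goal - perceived_outcome)

world, goal = 0.0, 10.0
for _ in range(30):
    inner_reference = outer_loop(goal, world)    # higher loop adjusts the lower loop's goal
    world += inner_loop(inner_reference, world)  # lower loop acts on the environment
print(round(world, 2))  # approaches the higher-order goal of 10.0
```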
The consequences of a communicatory act are important for the sender. It is on the basis of the consequences that he assesses whether or not his message has achieved the desired effect. But that is not the essential criterion for the observer who wants to decide whether or not a sign is to be considered purposive. A general purposiveness is inherent in the artificiality of the sign or signs used, and does not depend on the effect the sign may or may not have on a single receiver. The warning Z sign alongside the highway is a purposive communicatory sign regardless of how many motorists pass it without reducing their speed.
From sign to symbol
If we accept the basic idea that, in order to be communicatory rather than inferential, signs have to be in some sense artificial, [Note 4] we can at once state the first of three criteria for the characterization and recognition of “language.” Although communicatory signs taken alone cannot be considered a linguistic system, they do constitute a prerequisite for such a means of communication. Thus we can say that before there can be “language” there must be a set or lexicon of communicatory signs. The size of the lexicon, i.e., the number of signs it contains, is theoretically irrelevant, provided there are enough discrete and individually differentiated signs to allow for the combinatorial patterning required by my third criterion, which will be explained later.
The second criterion is that these signs be used symbolically. This characteristic was approached by Hockett’s concept of “displacement,” in an attempt to reconcile the manifest freedom of topic provided by human language with the psychological dogma according to which he was trying to explain behavior, including “verbal behavior,” as response to a stimulus. By “displacement” Hockett meant that: “We can talk about things that are remote in time, space, or both from the site of the communicative transaction [Hockett & Altmann, 1969, p. 63].”
We can hardly deny that this is so. But as a criterial condition this kind of displacement is not quite enough. The reason for the insufficiency is clearly delineated in Hockett’s original explanation: “Any delay between the reception of a stimulus and the appearance of the response means that the former has been coded into a stable spatial array, which endures at least until it is read off in the response [Hockett, 1960a, p. 417].”
This describes what goes on in signaling which, we may say, is always bound to a more or less immediate experiential context, i.e., to things that have happened or are happening. But language allows us to talk not only about things that are spatially or temporally remote, but also about things that have no location in space and never happen at all. The very fact that we can make understandable linguistic statements about space and time, right and wrong, Humpty-dumpty, and the square root of minus one demonstrates rather incontrovertibly that language can deal with items that have nothing to do with “observable stimuli” or with the “referents” of the traditional theory of reference in linguistic philosophy.
In order to become a symbol, the sign has to be detached from input. What the sign signifies, i.e., its meaning, has to be available, regardless of the contextual situation.[Note 5] In other words, symbolic meaning has to be conceptual and “inside” the system. That is precisely what Ogden and Richards (1923) illustrated by means of their famous triangle in which the word is directly tied to a conception only, and such links as it may have to things (i.e., items of perceptual experience) are imputed. In other words, these links are indirect and arise through epistemic connections between the concepts and perceptual experience. Susanne Langer (1948) clearly differentiated the use of symbols from that of signs:
Symbols are not proxy for their objects, but are vehicles for the conception of objects. To conceive a thing or a situation is not the same as to “react toward it” overtly, or to be aware of its presence. In talking about things we have conceptions of them, not the things themselves; and it is the conceptions, not the things, that symbols directly “mean” [italics in original, p. 61].
The role of symbolicity in the characterization of linguistic communication can be illustrated by taking as an example the famous “language of the bees.” There is no longer any doubt that the bees’ “dance” constitutes an elaborate system of communication (von Frisch, 1974; Gould, 1975). It has a lexicon of artificial signs, [Note 6] and it certainly has combinatorial patterning and “displacement,” since the messages are made up of distance indication and directional indication, and always refer to items that are themselves remote from the dance site. But precisely because these messages are always produced with reference to a specific target location from which the sender has just returned and to which the recruits are to go, they cannot be said to have symbolicity. To qualify as language, the bees’ dance would have to be used also without this one-to-one relation to a behavioral response (e.g., in comments, proposals, or questions concerning a foraging location), and this has never been observed. In short, a communication system that allows for imperatives only – no matter how sophisticated and accurate they might be – should not be called a language.
The semantic aspect of syntax
The third criterion for the application of the term “language” is combinatorial patterning. This is equivalent to Hockett’s “openness” (Hockett & Altmann, 1969) or “productivity” (Hockett, 1960a, b), which Hockett (1960a) describes as follows:
The language also provides certain patterns by which these elementary significant units (morphemes) can be combined into larger sequences, and conventions governing what sorts of meanings emerge from the arrangements. These patterns and conventions are the grammar of the language [p. 418].
Hockett’s conception of openness or productivity is, of course, closely connected with what linguists call “syntax,” i.e., the conventional rules of the language that govern the sequencing of words. From the communication point of view, however, the crucial aspect, as Hockett pointed out, is that from the sequences or sentences taken as wholes, meanings emerge that are on a level above the meanings of the individual signs or words that compose the sequence. It is, indeed, this conventional, rule-governed combinatorial meaning that makes possible the theoretically infinite openness of language. Whereas in English sequential order is the main device for the expression of combinatorial meaning, in other languages, such as Latin or German, case endings or case-specific articles are equivalent devices. The important point is that language has some way to express certain relations between the items designated by single signs or words. This may be achieved by different means in different languages: by rules of sequential order; by prefixes, infixes, or suffixes; or by specific words, such as prepositions, which have relational meaning only. But language as a system of communication is characterized by the fact that such relations are expressed by word combinations.[Note 7]
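A toy fragment can illustrate how a conventional ordering rule lets relational meaning emerge from a combination of signs, over and above the meanings of the individual signs. The vocabulary, rule, and function below are hypothetical and serve only to show sequential order carrying relational information.

```python
# Illustrative toy grammar (hypothetical lexicon and rule): the same signs,
# ordered differently, express different relations between the designated items.
LEXICON = {"Tim", "Mary", "give", "apple"}

def interpret(sentence):
    """Apply a conventional ordering rule: agent, action, object."""
    agent, action, obj = sentence  # the relation is carried by the order itself
    if not {agent, action, obj} <= LEXICON:
        raise ValueError("unknown sign")
    return {"agent": agent, "action": action, "object": obj}

print(interpret(["Tim", "give", "apple"]))  # Tim is the giver, the apple the given
print(interpret(["Mary", "give", "Tim"]))   # reordering alone assigns different roles
```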
To sum up this brief description of language as a communicatory system, we can say that it has three indispensable characteristics.
1. There must be a set, or lexicon, of artificial signs.
2. These signs can be and are used as symbols; in other words, they are used on the conceptual level and without reference to a particular perceptual or behavioral instance of the item they signify.
3. There must be a grammar, that is, a set of conventional rules that govern the formation of sign combinations that have semantic content in addition to the meanings of the individual signs.
The recognition of language
Given the three characteristics described in the preceding sections of this chapter, we can now ask how we can discriminate communicatory behavior from simple interaction, and linguistic communication from nonlinguistic signaling. Clearly it is one thing to set up criteria for language on the basis of theoretical considerations and quite another to specify observable behavioral manifestations that would indicate that these criteria are satisfied. Among the criteria I have discussed, there are two that seem particularly elusive from an observational point of view: the artificiality of signs and their use as symbols.
If we are observing organisms with whom we have not established a system of communication, it will at times be difficult or even impossible to decide whether a particular behavior is being carried out for its own sake (i.e., to reduce an internal or environmental disturbance) or for the purpose of communication (i.e., to reduce a disturbance by the attempt to modify in a nonmechanical way the behavior of another organism that is creating the disturbance). There is, obviously, no neat demarcation line between the two. Any threat gesture, posture, or vocalization that constitutes part of a larger aggressive or agonistic activity chain cannot be said to be a communicatory sign, so long as it has been observed only in conjunction with the other parts of the chain. But when it occurs without the other parts and functionally replaces them, we should give it communicatory status (cf. Tinbergen, 1952). A single observation or even a few isolated sporadic observations will hardly ever be sufficient to decide whether or not it occurs in isolation. Familiarity with the organism, its whole behavioral repertoire, and above all with the kinds of disturbance that bring it into action will be indispensable.
In the laboratory setting, on the other hand, when a chimpanzee spontaneously makes a conventional gestural sign, picks up pieces of plastic and places them on a magnetic board, or presses specific keys on a keyboard for specific rewards, there can be no doubt about the artificiality of the signs used or about their communicatory function. Even if, as some suggest, this signing or pressing of keys were due to some form of conditioning by means of reinforcement, there would still remain the fact that the chimpanzee discriminates between different reinforcers and communicates, by its choice of a particular sign, which one it wants at the moment. If it can be observed that a request, say for a slice of banana, is coupled with an expectation of a slice of banana and therefore will not be satisfied by the window being opened, then the communicatory function of these signs is established. In this connection it should be remembered that several years before the various communication studies with chimpanzees began, Mason and Hollis (1962) demonstrated that nonhuman primates can not only communicate but also create artificial communicatory signs for the cooperative solution of a problem.
To prove that an organism in the wild is using signs symbolically may be very difficult indeed unless the observer can communicate with the organism. A sign is turned into a symbol and thus acquires “symbolicity” when it is used without any connection to a perceptual or behavioral instance of its signification. Delay in time and distance of location are not sufficient. When witnessing a sign for the first time, the observer can never be sure that the sign was not the response to an instance of its referent in the remote past, or that it will not lead to a behavioral response on the part of the receiver in some distant future. On the other hand, the experienced observer who is aware of the theoretical distinctions between the symbolic use of signs and signaling should be able to decide that question at least tentatively on pragmatic grounds. I have no doubt that an observer studying the communicatory dance of the bees, for example (which in its symbolic nature seems to come closer to “language” than anything that has been observed in the great apes), could decide before long whether or not bees display situation-detached, and, therefore, symbolic dancing. It would be necessary only to look for the existence of dance events that are neither directly tied to foraging behavior nor given as imperatives to the recruits. Among nonhuman primates, play has frequently been observed together with specific signs that indicate to the partner that a given act is not to be taken seriously. “The messages or signals exchanged in play are in a certain sense untrue or not meant, [and] that which is denoted by these signals is nonexistent [Bateson, 1972, p. 183].” This definition comes very close to symbolicity. In fact, Piaget has always maintained that play and imitation are crucial elements in the development of symbolic capabilities (see, e.g., Piaget, 1967).
In a laboratory or experimental environment, the issue can be decided much more easily. Once some communication has been established so that the organism can be asked questions, we can test whether or not the subject has a representational conception of a given item. Any novel question concerning the character or use of an item that is not perceptually present at the moment can be answered correctly only if the subject has a sufficiently clear and detailed representation of the item named. That, after all, is what we mean when we speak of conception.[Note 8]
Finally, there is the question of combinatorial rules, or grammar. Once we know at least some of the signs by means of which a given species communicates, it should not be impossible to find out whether combining two or more signs adds something to their meaning and whether these combinations are recurrent and rule governed. The issue, however, has been thoroughly befogged by the general confusion concerning descriptive rules and rules of action. It is one thing to acquire a rule-governed behavior and quite another to isolate and specify the rules that govern it. In early infancy, for example, we all acquire the skill of crawling. As Piaget has shown, up to the age of 5 to 6 years, children invariably give erroneous descriptions of the sequence of arm and leg motions involved in crawling, even when they have just performed it. Only about a third of children aged 8-10 can specify the rules correctly; and even in a test of adults, by no means all subjects gave the correct description (Piaget, 1974, pp. 14-15). So it is with our use of language. Unless we happen to be linguists, “we always operate within a framework of living rules [italics in original, Sellars 1967, p. 315].” That is to say, we operate according to rules to which we have adapted without having formulated them. In this sense we are said to “know” the rules when our performance conforms to them. This is the only way we should speak of rules in the linguistic attempts of nonhuman primates. If they produce a substantial number of novel sign-combinations that conform to the grammar of the language they are using, then they must be credited with rule-governed productivity and “knowledge” of the grammar.
To conclude these notes on how to recognize manifestations of language, I repeat that this recognition will be more difficult and will require a much subtler approach with organisms in their natural habitat than in an experimental situation. The artificiality of signs, and thus their communicatory status, may be established only after prolonged observation in the wild. In the laboratory it is a foregone conclusion that the signs are artificial since they are manmade and, as such, not part of the subject’s original behavioral repertoire. The use of signs as symbols by free-living organisms could be established only after a minute examination of their sign repertoire and the activities preceding and following the occurrence of specific signs. In the experimental situation, on the other hand, how the signs are being used can be discovered by certain questions, i.e., by means of the very system of communication to which the subject is being introduced. The question of conformity to grammatical rules of sign combination is a good deal easier to decide. Provided there is a preestablished grammar, there will be no problem in determining whether the subject’s novel combinations are grammatical. With a free-living species, of course, the grammar would first have to be discovered by the observer. That discovery may be difficult since the human observer is necessarily anthropocentric and will always tend to look for a grammar that resembles the grammar of his human language.
References
Alexander R. D. (1960) Sound communication in orthoptera and cicadidae. In: W. E. Lanyon & W. N. Tavolga (eds.) Animal sounds and communication. Washington D. C.: American Institute of Biological Sciences.
Armer P. (1963) Attitudes toward intelligent machines. In: E. A. Feigenbaum & J. Feldman (eds.) Computers and thought. New York: McGraw-Hill.
Ayala F. J. (1970) Teleological explanations in evolutionary biology. Philosophy of Science 37(1): 1–15.
Bateson G. (1972) Steps to an ecology of mind. New York: Ballantine.
Bloomfield L. (1933) Language. New York: Henry Holt.
Brown G. S. (1969) Laws of form. London, England: Allen & Unwin.
Burghardt G. M. (1970) Defining “communication.” In J. W. Johnston D. G. Moulton, & A. Turk (eds.) Communication by chemical signals. New York: Appleton.
Fouts R. (1973) Acquisition and testing of gestural signs in four young chimpanzees. Science 180: 978–980.
Gardner B. T. & Gardner R. A. (1971) Two-way communication with an infant chimpanzee. In: A. M. Schrier & F. Stollnitz (eds.) Behavior of nonhuman primates, Vol. 4. New York: Academic Press.
Glasersfeld E. von (1974) Signs, communication, and language. Journal of Human Evolution 3: 465–474. http://cepa.info/1322
Glasersfeld E. von (1976) The development of language as purposive behavior. Annals of the New York Academy of Sciences 279: 212–226. http://cepa.info/1329
Gould J. L. (1975) Honey bee recruitment: The dance-language controversy. Science 189: 685–693.
Greenberg J. H. (1954) Concerning inferences from linguistic to nonlinguistic data. In: H. Hoijer (ed.) Language in culture. Chicago: Univ. of Chicago Press.
Haldane J. B. S. (1955) Animal communication and the origin of human language. Science Progress 43: 383–401.
Hinde R. A. & Stevenson J. G. (1970) Goals and response control. In: L. R. Aronson E. Tobach D. S. Lehrman, & J. S. Rosenblatt (eds.) Development and evolution of behavior: Essays in memory of T. C. Schneirla. San Francisco: Freeman.
Hockett C. F. (1954) Chinese versus English: An exploration of the Whorfian hypothesis. In: H. Hoijer (ed.) Language in culture. Chicago: Univ. of Chicago Press.
Hockett C. F. (1960a) Logical considerations in the study of animal communication. In: W. E. Lanyon & W. N. Tavolga (eds.) Animal sounds and communication. Washington D. C.: American Institute of Biological Sciences.
Hockett C. F. (1960b) The origin of speech. Scientific American 203(3): 88–96.
Hockett C. F. (1963) The problem of universals in language. In: J. H. Greenberg (ed.) Universals of language. Cambridge, Massachusetts: M. I. T. Press.
Hockett C. F. & Altmann S. A. (1969) A note on design features. In: T. A. Sebeok (ed.) Animal communication. Bloomington: Indiana University Press.
Hofstadter A. (1941) Objective teleology. Journal of Philosophy 38(2): 29–39.
Langer S. K. (1948) Philosophy in a new key. Cambridge, Massachusetts: Harvard University Press. (1942)
MacKay D. M. (1954) Operational aspects of some fundamental concepts of human communication. Synthese 9(3–5): 182–198.
MacKay D. M. (1966) Cerebral organization and the conscious control of action. In: J. C. Eccles (ed.) Brain and conscious experience. New York: Springer.
MacKay D. M. (1967) Ways of looking at perception. In: W. Wathen-Dunn (ed.) Models for the perception of speech and visual form. Cambridge, Massachusetts: M. I. T. Press.
MacKay D. M. (1969) Information, mechanism, and meaning. Cambridge, Massachusetts: M. I. T. Press.
MacKay D. M. (1972) Formal analysis of communicative processes. In: R. A. Hinde (ed.) Non-verbal communication. Cambridge, England: Cambridge Univ. Press.
Mason W. A. & Hollis J. H. (1962) Communication between young Rhesus monkeys. Animal Behavior 10(3–4): 211–221.
Mayr O. (1970) The origin of feedback control. Cambridge, Massachusetts: M. I. T. Press.
Miller G. A., Galanter E. & Pribram K. H. (1960) Plans and the structure of behavior. New York: Holt.
Minsky M. (1963) Steps toward artificial intelligence. In: E. A. Feigenbaum & J. Feldman (eds.) Computers and thought. New York: McGraw-Hill.
Mounin G. (1974) Review of R. A. Hinde’s “Non-verbal communication.” Journal of Linguistics 10: 201–206.
Ogden C. K. & Richards I. A. (1923) The meaning of meaning. New York: Harcourt. (Harvest paperback, 1946)
Perry R. B. (1921) A behavioristic view of purpose. Journal of Philosophy 28(4): 85–105.
Piaget J. (1967) Six psychological studies. New York: Random House, Vintage.
Piaget J. (1974) La prise de conscience. Paris: Presses Universitaires de France.
Powers W. T. (1973) Behavior: The control of perception. Chicago: Aldine.
Premack D. (1971) On the assessment of language competence in the chimpanzee. In: A. M. Schrier & F. Stollnitz (eds.) Behavior of nonhuman primates, Vol. 4. New York: Academic Press.
Premack D. (1975) Overview. Conference on origins and evolution of language and speech. New York: New York Academy of Sciences.
Rosenblueth A., Wiener N. & Bigelow J. (1943) Behavior, purpose, and teleology. Philosophy of Science 10: 18–24. http://cepa.info/2691
Rumbaugh D. M., Glasersfeld E. von, Warner H., Pisani P. P., Gill T. V., Brown J. V. & Bell C. L. (1973) A computer-controlled language training system for investigating the language skills of young apes. Behavioral Research Methods and Instrumentation 5(5): 385–392.
Sapir E. (1921) Language. New York: Harcourt. (Harvest paperback, 1965)
Sebeok T. A. (1974) Semiotics: A survey of the state of the art. In: T. A. Sebeok (ed.) Current trends in linguistics, Vol. 12. The Hague: Mouton.
Sebeok T. A. (1975) The semiotic web: A chronicle of prejudices. Semiotica, in press.
Sellars W. (1967) Language, rules, and behavior. In: S. Hook (ed.) John Dewey: Philosopher of science and freedom. New York: Barnes & Noble.
Shannon C. E. (1948) A mathematical theory of communication. Bell System Technical Journal 27: 379–423, 623–656.
Skinner B. F. (1971) Beyond freedom and dignity. New York: Bantam/Vintage.
Taube M. (1961) Computers and common sense – the myth of thinking machines. New York: Columbia Univ. Press.
Thorpe W. H. (1972) The comparison of vocal communication in animals and man. In: R. A. Hinde (ed.) Non-verbal communication. Cambridge, England: Cambridge University Press.
Tinbergen N. (1952) ‘Derived’ activities: Their causation, biological significance, origin, and emancipation during evolution. Quarterly Review of Biology 27(1): 1–32.
Ullman S. (1959) The principles of semantics. 2nd edition. Oxford: Blackwell. (First edition 1951)
von Frisch K. (1974) Decoding the language of the bee. Science 185: 663–668.
Wells R. (1961) Meaning and use. In: S. Saporta (ed.) Psycholinguistics. New York: Holt.
Wiener N. (1948a) Time, communication, and the nervous system. Annals of the New York Academy of Sciences 50: 197–219.
Wiener N. (1948b) Cybernetics. Cambridge, Massachusetts: M. I. T. Press.
Endnotes
1. Norbert Wiener (1948b) made this distinction by speaking of “intercommunication” when sender and receiver were two separate organisms, and of “communication” when sender and receiver were two parts of one organism. Since this usage is not likely to become general, I use the term “communication” in the context of this chapter to refer to communication between two organisms.
2. Logical and epistemological, because if signs are used as symbols, their definition has to be an intensional one, i.e., a more or less subjective construct for both sender and receiver. Since there is no way for one user to match another user’s constructs directly with his own, his interpretation of a message frequently has to take into account inferences drawn from prior communication experience with the particular sender. The “psychological” indeterminacy springs from the fact that one and the same linguistic message can be sent for widely varying purposes.
3. I am indebted to Thomas Sebeok for having drawn my attention to Hofstadter’s essay.
4. I deliberately say “in some sense artificial,” because in nonhuman species signs are clearly not invented and agreed on as they are among humans. Nevertheless there are communicatory signs and they are artificial; for an analysis of this artificiality derived from Tinbergen’s (1952) concept of “incipient movement,” see von Glasersfeld, 1974, 1975.
5. Georges Mounin, in a recent book review, states that the condition of “displacement” is “not well formulated; one should perhaps speak of messages outside a situation [Mounin, 1974, p. 205, my translation].”
6. That the signs are produced by dancing and thus cannot be part of the flight they designate makes them artificial, and the fact that they are also somewhat iconic (because direction relative to the sun is indicated by direction relative to gravity or an artificial light) does not affect that artificiality.
7. Note that relations are always the result of some conceptual operation on the part of the perceiver; they are not inherent in what is perceived but in the way of perceiving. That this is so even with the simplest spatial relations expressed by prepositions was shown by Spencer Brown (1969). One could say, therefore, that any communication system that provides for the expression of conceptual relations (which as such are never perceptually present) must have “symbolicity.” But since this way of arguing involves basic epistemological considerations, I have here chosen the simpler approach to “symbolicity” via the perceptual and behavioral absence of referents.
8. This point was recently made also by Premack (1975) with regard to his work with the chimpanzee Sarah.