CEPA eprint 3054

Signs and minds: An introduction to the theory of semiotic systems

Fetzer J. H. (1988) Signs and minds: An introduction to the theory of semiotic systems. In: Fetzer J. (ed.) Aspects of artificial intelligence. Kluwer, Dordrecht: 133–161. Available at http://cepa.info/3054
Table of Contents
1. Peirce’s theory of signs
2. Abstract systems and physical systems
3. Causal systems and semiotic systems
4. The varieties of semiotic systems
5. Symbol systems and causal systems
6. Symbol systems and semiotic systems
7. The symbol-system and the semiotic-system hypotheses
8. What about humans and machines?
9. What difference does it make?
Acknowledgment
References
Perhaps no other view concerning the theoretical foundations of artificial intelligence has been as widely accepted or as broadly influential as the physical symbol system conception advanced by Newell and Simon (1976), where symbol systems are machines – possibly human – that process symbolic structures through time. From this point of view, artificial intelligence deals with the development and evolution of physical systems that employ symbols to represent and to utilize information or knowledge, a position often either explicitly endorsed or tacitly assumed by authors and scholars at work within this field (cf. Nii et al., 1982 and Buchanan 1985). Indeed, this perspective has been said to be “the heart of research in artificial intelligence” (Rich 1983, p. 3), a view that appears to be representative of its standing within the community at large.
The tenability of this conception, of course, obviously depends upon the notions of system, of physical system, and of symbol system that it reflects. For example, Newell and Simon (1976) tend to assume that symbols may be used to designate “any expression whatsoever,” where these expressions can be created and modified in arbitrary ways. Without attempting to deny the benefits that surely are derivable from adopting their analysis, it may be worthwhile to consider the possibility that this approach could be perceived as part of another more encompassing position, relative to which physical symbol systems might qualify as, say, one among several different sorts of systems possessing the capacity to represent and to utilize knowledge or information. If such a framework could be constructed, it might not only serve to more precisely delineate the boundaries of their conception but contribute toward the goal of illuminating the theoretical foundations of artificial intelligence as well.
Not the least important question to consider, moreover, is what an ideal framework of this kind ought to be able to provide. In particular, as their “physical symbol system hypothesis” – the conjecture that physical symbol systems satisfy the necessary and sufficient conditions for “intelligent action” (Newell and Simon, 1976) – itself suggests, the conception of physical symbol systems is intended to clarify the relationship between symbol processing and deliberate behavior, in some appropriate sense. Indeed, it would seem to be a reasonable expectation that an ideal framework of this kind ought to have the capacity to shed light on the general character of the causal connections that obtain between mental activity and behavioral tendencies to whatever extent they occur as causes or as effects of the production and the utilization of symbols (or of their counterparts within a more encompassing conception).
The purpose of this paper, therefore, is to explore this possibility by examining the issues involved here from the point of view of a theory of mind based upon the theory of signs (or “semiotic theory”) proposed by Charles S. Peirce (1839–1914). According to the account that I shall elaborate, minds ought to be viewed as semiotic systems, among which symbolic systems (in a sense that is not quite the same as that of Newell and Simon) are only one among three basic kinds. As semiotic systems, minds are sign-using systems that have the capacity to create or to utilize signs, where this capability may be either naturally produced or artificially contrived. The result is a conception of mental activity that promises to clarify and illuminate the similarities and the differences between semiotic systems of various kinds – no matter whether human, (other) animal or machine – and thereby lend its support to the foundations of artificial intelligence, while contributing toward understanding mankind’s place within the causal structure of the world.
1. Peirce’s theory of signs
It is not uncommon to suppose that there is a fundamental relationship between thought and language, as when it is presumed that all thinking takes place in language. But it seems to me that there is a deeper view that cuts into this problem in a way in which the conception of an intimate connection between thought and language cannot. This is a view about the nature of mind that arises from reflection upon the theory of signs (or “semiotic theory”) advanced by Peirce. The most fundamental concept Peirce elaborated is that of a sign as a something that stands for something (else) in some respect or other for somebody. He distinguished between three principal areas of semiotic inquiry, namely, the study of the relations signs bear to other signs; the study of the relations signs bear to that for which they stand; and, the study of the relations that obtain between signs, what they stand for, and sign users. While Peirce referred to these dimensions of semiotic as “pure grammar,” “logic proper,” and “pure rhetoric,” they are more familiar under the designations of “syntax,” “semantics,” and “pragmatics,” respectively, terms that were introduced by Morris (1938) and have become standard.
Within the domain of semantics, Peirce identified three ways in which a sign might stand for that for which it stands, thereby generating a classification of three kinds of signs. All signs that stand for that for which they stand by virtue of a relation of resemblance between those signs themselves and that for which they stand are known as “icons.” Statues, portraits and photographs are icons in this sense, when they create in the mind of a sign user another – equivalent or more developed – sign that stands in the same relation to that for which they stand as do the original signs creating them. Any signs that stand for that for which they stand by virtue of being either causes or effects of that for which they stand are known as “indices.” Dark clouds that suggest rain, red spots that indicate measles, ashes that remain from a fire are typical indices in this sense. Those signs that stand for that for which they stand either by virtue of conventional agreements or by virtue of habitual associations between those signs and that for which they stand are known as “symbols.” Most of the words that occur in ordinary language, such as “chair” and “horse” – which neither resemble nor are either causes or effects of that for which they stand – are symbols in this technical sense. (Compare Peirce, 1955, esp. pp. 97-155 and pp. 274-289; and Peirce, 1985.)
There is great utility in the employment of symbols by the members of a sign-using community, of course, since, as Newell and Simon (1976) recognize, purely as a matter of conventional agreement, almost anything could be used to stand for almost anything else under circumstances fixed by the practices that might govern the use of those signs within that community of sign users. Thus, the kinds of ways in which icons and indices can stand for that for which they stand may be thought of as natural modes, insofar as relations of resemblance and of cause-and-effect are there in nature, whether we notice them or not; whereas conventional or habitual associations or agreements are there only if we create them. Nevertheless, it would be a mistake to make too much of this distinction, because things may be alike or unalike in infinitely many ways, where two things qualify as of a common kind if they share common properties from a certain point of view, which may or may not be easily ascertainable.
The most important conception underlying this reflection is that of the difference between types and tokens, where “types” consist of specifications of kinds of things, while “tokens” occur as their instances. The color blue, for example, when appropriately specified (by means of color charts, in terms of angstrom units, or whatever), can have any number of instances, including bowling balls and tennis shoes, where each such instance qualifies as a token of that type. Similar considerations obtain for sizes, shapes, weights, and all the rest: any property (or pattern) that can have distinct instances may be characterized as a kind (or type), where the instances of that kind (type) qualify as its tokens. The necessary and sufficient conditions for a token to qualify as a token of a certain type, moreover, are typically referred to as the intension (or “meaning”) of that type, while the class of all things that satisfy those conditions is typically referred to as its extension (or “reference”). The distinction applies to icons, to indices and to symbols alike, where identifying things as tokens of a type presumes a point of view, which may or may not involve more than the adoption of a certain semiotic framework.
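To fix ideas, the intension/extension distinction can be given a small computational illustration (a sketch only, in Python, where the predicate, the domain, and all names are assumptions of the example rather than anything in the original account): the intension of a type is represented by the conditions a token must satisfy, and its extension is whatever portion of a given domain satisfies those conditions.

```python
# Illustrative sketch only: a "type" is modeled by its intension -- a predicate giving
# the conditions for being a token of that type -- and its extension is computed as the
# class of things in some domain that satisfy those conditions.

def is_blue(thing):                       # intension of the type "blue"
    return thing.get("color") == "blue"   # the conditions a token must satisfy

domain = [
    {"name": "bowling ball", "color": "blue"},
    {"name": "tennis shoe",  "color": "blue"},
    {"name": "frying pan",   "color": "black"},
]

extension_of_blue = [x for x in domain if is_blue(x)]  # the class of all tokens of the type
print([x["name"] for x in extension_of_blue])          # ['bowling ball', 'tennis shoe']
```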
The importance of perspective can be exemplified. My feather pillow and your iron frying-pan both weigh less than 7 tons, are not located on top of Mt. Everest, and do not look like Lyndon Johnson. Paintings by Rubens, Modigliani, and Picasso clearly tend to suggest that relations of resemblance presuppose a point of view. A plastic model of the battleship Missouri may be like the “Big Mo” with respect to the relative placement of its turrets and bulwarks, yet fail to reflect other properties – including the mobility and firepower – of the real thing, where whether a specific set of similarities ought to qualify as resemblance depends upon and varies with the adoption of some perspective. And, indeed, even events are described as “causes” and as “effects” in relation to implicit commitments to laws or to theories. Relations of resemblance and causation have to be recognized to be utilized, where the specification of a complex type can be a rather complex procedure.
In thinking about the nature of mind, it seems plausible that Peirce’s theory of signs might provide us with clues. In particular, reflecting upon the conception of a sign as a something that stands for something (else) in some respect or other for somebody, it appears to be a presumption to assume that those somethings for which something can stand for something (else) in some respect or other must be “somebodies”: would it not be better to suppose, in a non-question-begging way, that these are “somethings,” without taking for granted that the kind of thing that these somethings are has to be human? In reasoning about the kind of thing that these somethings might be, therefore, the possibility arises that those somethings for which something can stand for something (else) might themselves be viewed as “minds.” (Morris, 1938, p. 1, suggests that (human) mentality and the use of signs are closely connected). The conception that I shall elaborate, therefore, is that minds are things that are capable of utilizing signs (“sign users”), where semiotic systems in this sense are causal systems of a special kind.
2. Abstract systems and physical systems
In defense of this position, I intend to explain, first, what it means to be a system, what it means to be a causal system, what it means to be the special kind of causal system that is a semiotic system; and, second, what, if anything, makes this approach more appropriate than the Newell and Simon conception, especially with respect to the field of artificial intelligence. The arguments that follow suggest that Newell and Simon’s account harbors an important equivocation, insofar as it fails to define the difference between a set of symbols that is significant for a user of a machine – in which case, there is a semiotic relationship between the symbols, what they stand for, and the symbol user, where the user is not identical with the machine – and a set of symbols that is significant for use by a machine – in which case, a semiotic relationship obtains between the symbols, what they stand for, and the symbol user, where the user is identical with the machine. The critical difference between symbol systems and semiotic systems emerges at this point.
Let us begin by considering the nature of systems in general. A system may be defined as a collection of things – numbers and operators, sticks and stones, whatever – that instantiates a fixed arrangement. This means that a set of parts becomes a system of a certain kind by instantiating some set of specific relations – logical, causal, whatever – between those parts. Such a system may be functional or non-functional with respect to specified inputs and outputs, where the difference depends on whether that system responds to an input or not. In particular, for a fixed input – such as assigning seven as the value of a variable, resting a twelve-pound weight on its top, whatever – a specific system will tend to produce a specific output (which need not be unique to its output class) – such as yielding fourteen as an answer, collapsing in a heap, whatever – where differences in outputs (or in output classes) under the same inputs can serve to distinguish between systems of different kinds, but only as a sufficient and not as a necessary condition (cf. the distinction between deterministic and indeterministic systems below).
For a system to be a causal system means that it is a system of things within space/time between which causal relations obtain; an abstract system, by comparison, is a system of things not in space/time between which logical relations alone can obtain. This conception of a causal system bears strong resemblance to Newell and Simon’s conception of a physical system, which is a system governed by the laws of nature. Indeed, to the extent to which the laws of nature include non-causal as well as causal laws, both of these conceptions should be interpreted broadly (though not therefore reductionistically, since the non-occurrence of emergent properties – possibly including at least some mental phenomena – is not a feature of the intended interpretation of causal systems; cf. Fetzer, 1981, 1986a). Since systems of neither kind are restricted to inanimate as opposed to animate systems, let us assume that their “physical systems” and my “causal systems” are by and large (“roughly”) the same, without pretending to have conclusively established their identity.
Within the class of causal (or of physical) systems, moreover, two subclasses require differentiation. Causal systems whose relevant properties are only incompletely specified may be referred to as “open,” while systems whose relevant properties are completely specified are regarded as “closed.” Then a distinction may be drawn between two kinds of closed causal systems, namely, those for which, given the same input, the same output invariably occurs (without exception), and those for which, given the same input, one or another output within the same class of outputs invariably occurs (without exception). Systems of the first kind, accordingly, are deterministic causal systems, while those of the second kind are indeterministic causal systems. Whether or not Newell and Simon would be willing to acknowledge this distinction, I cannot say for certain; but because it is well-founded and its introduction begs no crucial questions, I shall suppose they would.
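The contrast between the two kinds of closed causal systems lends itself to a toy illustration (a sketch only; the functions and the outputs below are assumptions of the example, not part of the analysis): for the deterministic system the same input invariably yields the same output, while for the indeterministic system the same input invariably yields one or another member of a fixed class of outputs.

```python
import random

# Illustrative sketch only: a deterministic closed system gives the same output for the
# same input without exception; an indeterministic closed system gives some member of a
# fixed output class for that input, never anything outside that class.

def deterministic_system(x):
    return 2 * x                        # same input -> same output, invariably

def indeterministic_system(x):
    output_class = [2 * x, 2 * x + 1]   # same input -> one or another member of this class
    return random.choice(output_class)

print(deterministic_system(7))          # always 14
print(indeterministic_system(7))        # either 14 or 15, but nothing outside the class
```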
This difference, incidentally, is not the same as that in computational theory between deterministic and non-deterministic finite automata, such as parsing schemata, which represent paths from grammatical rules (normally called “productions”) to well-formed formulae (typically “terminal strings”) for which more than one production sequence is possible, where human choice influences the path selected (Cohen, 1986, pp. 142-145). While Newell and Simon acknowledge this distinction, strictly speaking, the systems to which it applies are special kinds of “open” rather than “closed” causal systems. The conception of abstract systems also merits more discussion, where purely formal systems – the systems of the natural numbers, of the real numbers, and the like – are presumptive examples thereof. While abstract systems in this sense are not in space/time and cannot exercise any causal influence upon the course of events during the world’s history, this result does not imply that, say, inscriptions of numerals – as their representatives within space/time – cannot exercise causal influence as well. Indeed, since chalkmarks on blackboards affect the production of pencilmarks in notebooks, where some of these chalkmarks happen to be numerals, such a thesis would be difficult to defend.
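For comparison, the computational-theory distinction mentioned above can be sketched as follows (an illustration with invented states and alphabet, not drawn from Cohen’s presentation): a deterministic finite automaton assigns exactly one successor state to each state/symbol pair, whereas a non-deterministic automaton may assign several, so that more than one transition sequence can lead to acceptance of the same string.

```python
# Illustrative sketch only: a deterministic finite automaton (DFA) has exactly one next
# state per (state, symbol) pair; a non-deterministic one (NFA) may have several, so
# more than one production/transition sequence can lead to acceptance.

DFA = {("q0", "a"): "q1", ("q1", "b"): "q2"}             # one successor per pair
NFA = {("q0", "a"): {"q0", "q1"}, ("q1", "b"): {"q2"}}   # possibly several successors

def nfa_accepts(string, start="q0", accepting=frozenset({"q2"})):
    states = {start}
    for ch in string:
        states = set().union(*(NFA.get((s, ch), set()) for s in states))
    return bool(states & accepting)

print(nfa_accepts("ab"))    # True: q0 -a-> q1 -b-> q2 is one of the possible paths
print(nfa_accepts("aab"))   # True: q0 -a-> q0 -a-> q1 -b-> q2
```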
For a causal system to be a semiotic system, of course, it has to be a system for which something can stand for something (else) in some respect or other, where such a something (sign) can affect the (actual or potential) behavior of that system. In order to allow for the occurrence of dreams, of daydreams, and of other mental states as potential outcomes (responses or effects) of possible inputs (stimuli or trials) – or as potential causes (inputs or stimuli) of possible outcomes (responses or effects) – and thereby circumvent the arbitrary exclusion of internal (or private) as well as external (or public) responses to internal as well as to external signs, behavior itself requires a remarkably broad and encompassing interpretation. A conception that accommodates this possibility is that of behavior as any internal or external effect of any internal or external cause. Indeed, from this point of view, it should be apparent that that something affects the behavior of a causal system does not mean that it has to be a sign for that system, which poses a major problem for the semiotic approach – distinguishing semiotic causal systems from other kinds of causal systems.
3. Causal systems and semiotic systems
To appreciate the dimensions of this difficulty, consider that if the capacity for the (actual or potential) behavior of a system to be affected by something were enough for that system to qualify as semiotic, the class of semiotic systems would be coextensive with the class of causal systems, since they would have all and only the same members. If even one member of the class of causal systems should not qualify as a member of the class of semiotic systems, however, then such an identification cannot be sustained. Insofar as my coffee cup, your reading glasses and numberless other things – including sticks and stones – are systems whose (actual and potential) behavior can be influenced by innumerable causal factors, yet surely should not qualify as semiotic systems, something more had better be involved: that something can affect the behavior of a causal system is not enough.
That a system’s behavior can be affected by something is necessary, of course, but in addition that something must be functioning as a sign for that system: that that sign stands for that for which it stands for that system must make a difference to (the actual or potential behavior of) that system, where this difference can be specified in terms of the various ways that such a system would behave, were such a sign to stand for something other than that for which it stands for that system (or, would have behaved, had such a sign stood for something other than that for that system). Were what a red light at an intersection stands for to change to what a green light at an intersection stands for (and conversely) for specific causal systems, including little old ladies but also fleeing felons, then that those signs now stand for things other than that for which they previously stood ought to have corresponding behavioral manifestations, which could be internal or external in kind.
Little old ladies who are not unable to see, for example, should now slow down and come to a complete stop at intersections when green lights appear and release the brake and accelerate when red lights appear. Felons fleeing with the police in hot pursuit, by contrast, may still speed through, but worry about it a bit more, which, within the present context, qualifies as a behavioral manifestation. Strictly speaking, changes in external behavior (with respect to outcome classes) are sufficient but not necessary, whereas changes in internal behavior (with respect to outcome classes) are necessary and sufficient. Thus, a more exact formulation of the principle in question would state that, for any specific system, a sign S stands for something for that system rather than for something else if and only if the strength of the tendencies for that system to manifest behavior of some specific kind in the presence of S – no matter whether publicly displayed or not – differs from case to case, where otherwise what it stands for remains the same.
This principle implies that a change that effects no change is no change at all (with respect to the significance of a sign for a system), which tends to occur when some token is exchanged for another token of the same type. Once again, however, considerations of perspective may have to be factored in, since one dime (silver) need not stand for the same thing as another dime (silver and copper) for the same system when they are tokens of some of the same types, but not of others. Although this result appears agreeable enough, the principle of significance being proposed does not seem to be particularly practical, since access to strengths of tendencies for behavior that may or may not be displayed is empirically testable, in principle, but only indirectly measurable, in practice (cf. Fetzer, 1981, 1986a). As it happens, this theoretical yardstick can be supplemented by a more intuitive standard of its kind.
The measure that I have proposed, of course, affords an account of what it means for a sign to change its meaning (what it stands for) for a system, where the differences involved here may be subtle, minute and all but imperceptible. For defining “sign,” it would suffer from circularity in accounting for what it means for a sign to change its meaning while relying upon the concept of a sign itself: it does not provide a definition of what it is to be a sign or of what it is to be a semiotic system as such, but of what it is for a sign to change its meaning for a semiotic system. This difficulty, however, can be at least partially offset by appealing to (what I take to be) a general criterion for a system to be a semiotic system (for a thing to be a mind), namely, the capacity to make a mistake; for, in order to make a mistake, something must take something to stand for something other than that for which it stands, a reliable evidential indicator that something has the capacity to take something to stand for something in some respect or other, which is the right result.
We should all find it reassuring to discover that the capacity to make a mistake – to mis-take something for other than that for which it stands – appears to afford conclusive evidence that something has a mind. That something must have the capacity to make a mistake, however, does not mean that it must actually make them as well, since the concept of a divine mind that never makes mistakes – no matter whether as a matter of logical necessity or as a matter of lawful necessity for minds of that kind – is not inconsistent (Fetzer, 1986b). The difference between mistakes and malfunctions, moreover, deserves to be emphasized, where mistakes are made by systems while remaining systems of the same kind, while malfunctions transform a system of one kind into a system of another. That a system makes a mistake is not meant to imply that its output classes, relative to its input classes, have been revised, but rather that, say, a faulty inference has occurred, the false has been taken for the true, something has been misclassified, and so on, which readily occurs with perceptual and inductive reasoning (Fetzer, 1981).
4. The varieties of semiotic systems
The semiotic analysis of minds as semiotic systems invites the introduction of at least three different kinds (or types) of minds, where systems of Type I can utilize icons, systems of Type II can utilize icons and indices, and systems of Type III can utilize icons, indices, and symbols. Thus, if the conception of minds as semiotic systems is right-headed, at least in general, it would seem reasonable to conjecture that there are distinctive behavioral (psychological) criteria for semiotic systems of these different types; in other words, if this approach is approximately correct, then it should not be overly difficult to discover that links can be forged with psychological (behavioral) distinctions that relate to the categories thereby generated. In particular, there appear to be kinds of learning (conditioning, whatever) distinctive to each of these three types of systems, where semiotic systems of Type I display type/token recognition, those of Type II display classical conditioning, and those of Type III display instrumental conditioning, where behavior of these kinds appears to be indicative that a system is of the corresponding type. Let us begin, therefore, by considering examples of semiotic systems of all three kinds and subsequently return to the symbol system hypothesis.
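The resulting taxonomy can be summarized compactly (purely for illustration; the tabulation below simply restates the text, and the names involved are not part of any formal apparatus):

```python
# Illustrative sketch only: the three types of semiotic systems described above, with
# the signs each can utilize and the kind of learning taken to be indicative of it.

SEMIOTIC_SYSTEMS = {
    "Type I":   {"signs": ("icons",),                       "criterion": "type/token recognition"},
    "Type II":  {"signs": ("icons", "indices"),             "criterion": "classical conditioning"},
    "Type III": {"signs": ("icons", "indices", "symbols"),  "criterion": "instrumental conditioning"},
}

for kind, spec in SEMIOTIC_SYSTEMS.items():
    print(f"{kind}: utilizes {', '.join(spec['signs'])}; indicated by {spec['criterion']}")
```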
Non-human animals provide useful examples of semiotic systems that display classical conditioning, for example, as systems of Type II which have the capacity to utilize indices as signs, where indices are things that stand for that for which they stand by virtue of being either causes or effects of that for which they stand. Pavlov’s famous experiments with dogs are illustrative here. Pavlov observed that dogs tend to salivate at the appearance of their food in the expectation of being fed; and that, if a certain stimulus, such as a bell, was regularly sounded at the same time its food was brought in, a dog soon salivated at such a bell’s sound whether its food came with it or not. From the semiotic perspective, food itself functions as an (unconditioned) sign for dogs, namely, as a sign standing to the satiation of their hunger as causes stand to effects. Thus, when the sound of a bell functions as a (conditioned) sign for dogs, it similarly serves as a sign standing to the satiation of their hunger as cause stands to its effects. In the case of the (unconditioned) food stimulus, of course, the stimulus actually is a cause of hunger satiation, while in the case of the (conditioned) bell stimulus, it is not; but that does not undermine this example, since it shows that dogs sometimes make mistakes.
Analogously, Skinner’s familiar experiments with pigeons provide an apt illustration of semiotic systems of Type III that have the capacity to utilize symbols as signs, where symbols are things that stand for that for which they stand by virtue of a conventional agreement or of an habitual association between the sign and that for which it stands. Skinner found that pigeons kept in cages equipped with bars that released a pellet when pressed would rapidly learn to depress the bar whenever they wanted some food. He also discovered that if, say, a system of lights was installed, such that a bar-press would now release a pellet only if a green light was on, they would soon refrain from pressing the bar, even when they were hungry, unless the green light was on. Once again, of course, the pigeon might have its expectations disappointed by pressing a bar when the apparatus has been changed (or the lab assistant forgot to set a switch, whatever), which shows that pigeons are no smarter than dogs in avoiding mistakes.
Classical conditioning and operant conditioning, of course, are rather different kinds of learning. The connection between a light and the availability of food, like that between the sound of the bell and the satiation of hunger, was artificially contrived. The occurrence of the bell stimulus, of course, causes the dog to salivate, whether it wants to or not, whereas the occurrence of a green light does not cause a pigeon to press the bar, whether it wants to or not, but rather establishes a conventional signal for the pigeon that, if it were to perform a bar press now, a pellet would be emitted. It could be argued that the bell stimulus has now become a sufficient condition for the dog to salivate, while the light stimulus has become a sufficient condition for the pigeon not to press the bar. But Skinner’s experiments, unlike those of Pavlov, involve reinforcing behavior after it has been performed, because of which the pigeons learn means/ends relations over which they have some control.
An intriguing example of type/token recognition that displays what appears to be behavior characteristic of semiotic systems of Type I, at last, was described in a recent newspaper article entitled, ‘Fake Owls Chase Away Pests’ (St. Petersburg Times, 27 January 1983), as follows:
Birds may fly over the rainbow, but until 10 days ago, many of them chose to roost on top of a billboard that hangs over Bill Allen’s used car lot on Drew Street in Clearwater. Allen said he tried everything he could think of to scare away the birds, but still they came – sometimes as many as 100 at a time. He said an employee had to wash the used cars at Royal Auto Sales every day to clean off the birds’ droppings. About a month ago, Allen said, he called the billboard’s owner for help fighting the birds. Shortly afterward, Allen said, two vinyl owl “look alikes” were put on the corners of the billboard. “I haven’t had a bird land up there since,” he said.
The birds, in other words, took the sizes and shapes of the vinyl owls to be instances of the sizes and shapes of real owls, treating the fake owls as though they were the real thing. Once again, therefore, we can infer that these systems have the capacity to take something to stand for something (else) in some respect or other on the basis of the criterion that they have the capacity to make a mistake, which has been illustrated by Pavlov’s dogs, by Skinner’s pigeons, and by Allen’s birds alike. While there do seem to be criteria distinctive of each of these three types of semiotic systems, in other words, these more specific criteria themselves are consistent with and illuminate that more general semiotic criterion.
These reflections thus afford a foundation for pursuing a comparison of the semiotic approach with the account supported by the symbol system conception. Indeed, the fundamental difference that we are about to discover is that Newell and Simon appear to be preoccupied exclusively with systems of Type III (or their counterparts), which, if true, establishes a sufficient condition for denying that semiotic systems and symbol systems are the same – even while affirming that they are both physical systems (of one or another of the same general kind). The intriguing issues that arise here, therefore, concern (a) whether there is any significant difference between semiotic systems of Type III and physical symbol systems and (b) whether there are any significant reasons for preferring one or the other of these accounts with respect to the foundations of artificial intelligence.
5. Symbol systems and causal systems
The distinction between types and tokens ought to be clear enough by now to consider the difference between Newell and Simon’s physical symbol systems and semiotic systems of Type III. The capacity to utilize indices seems to carry with it the capacity to utilize icons, since recognizing instances of causes as events of the same kind with respect to some class of effects entails drawing distinctions on the basis of resemblance relations. Similarly, the capacity to utilize symbols seems to carry with it the ability to utilize indices, at least to the extent to which the use of specific symbols on specific occasions can affect the behavior of a semiotic system for which they are significant signs. Insofar as these considerations suggest that a physical symbol system ought to be a powerful kind of semiotic system with the capacity to utilize icons, indices and symbols as well, it may come as some surprise that I want to deny that Newell and Simon’s conception supports such a conclusion at all. For, it appears to be the case that, appearances to the contrary notwithstanding, physical symbol systems in the sense of Newell and Simon (1976) do not qualify as semiotic systems.
Since I take it to be obvious that physical symbol systems are causal systems in the appropriate sense, the burden of my position falls upon the distinction between systems for which something functions as a sign for a user of that system and systems for which something functions as a sign for that system itself. According to Newell and Simon (1976), in particular:
A physical symbol system consists of a set of entities, called symbols, which are physical patterns that can occur as components of another type of entity called an expression (or symbol structure). Thus a symbol structure is composed of a number of instances (or tokens) of symbols related in some physical way (such as one token being next to another).
Notice, especially, that symbol structures (or “expressions”) are composed of sequences of symbols (or “tokens”), where “physical symbol systems,” in this sense, process expressions, which they refer to as “symbol structures.” The question that I want to raise, therefore, is whether or not these “symbol structures” function as signs in Peirce’s sense – and, if so, for whom.
At first glance, this passage might seem to support the conception of physical symbol systems as semiotic systems, since Newell and Simon appeal to tokens and tokens appear to be instances of different types. Their conceptions of designation and of interpretation, moreover, are relevant here:
Two notions are central to this structure of expressions, symbols, and objects: designation and interpretation.

Designation. An expression designates an object if, given the expression, the system can either affect the object itself or behave in ways depending on the object. In either case, access to the object via the expression has been obtained, which is the essence of designation.

Interpretation. The system can interpret an expression if the expression designates a process and if, given the expression, the system can carry out the process. Interpretation implies a special form of dependent action: given an expression, the system can perform the indicated process, which is to say, it can evoke and execute its own processes from expressions that designate them. (Newell and Simon, 1976, pp. 40-41.)
An appropriate illustration of “interpretation” in this sense would appear to be computer commands, whereby a suitably programmed machine can evoke and execute its own internal processes when given “expressions” that designate them. Notice, however, that portable typewriters (pocket calculators, and so forth) would seem to qualify as “physical symbol systems” in Newell and Simon’s sense – since combinations of letters from their keyboards (of numerals from their interfaces, and so on) appear to be examples of “expressions” that designate a process whereby various shapes can be typed upon a page (strings of numerals can be manipulated, and so on). Other considerations, however, suggest that the sorts of systems that are intended to be good examples of symbol systems are general-purpose digital computers rather than simple systems of these kinds.
A consistent interpretation of Newell and Simon’s conception depends upon an analysis of “symbols” as members of an alphabet/character set (such as “a,” “b,” “c,” and so on), where “expressions” are sequences of the members of such a set. The term that they employ which corresponds most closely to that of “symbol” in Peirce’s technical sense, therefore, is not “symbol” itself but rather “expression.” Indeed, that “symbols” in Newell and Simon’s sense cannot be “symbols” in Peirce’s technical sense follows from the fact that most of the members of a character set do not stand for anything at all – other than that their inclusion within such a set renders them permissible members of the character sequences that constitute (well-formed) expressions. A more descriptive name for systems of this kind, therefore, might be that of “expression processing” (“string manipulating”) systems; but so long as Newell and Simon’s systems are not confused with semiotic systems, there is no reason to dispute the use of a name that has already become well-entrenched.
An important consequence of this account, moreover, is that words like “chair” and “horse,” which occur in ordinary language, are good examples of symbols in Peirce’s sense, yet do not satisfy Newell and Simon’s conception of expressions. These words stand for that for which they stand without in any fashion offering the least hint that the humans (machines, whatever) for which such things are signs can either affect such objects or behave in ways that depend upon those objects: the capacity to describe horses and chairs does not entail the ability to ride or to train them, to build or to refinish them, or otherwise manipulate them. These symbols can function as significant signs whether or not Newell and Simon’s conditions are satisfied, a proof of which follows from examples such as “elf” and “werewolf,” signs that function as symbols in Peirce’s sense, yet could not possibly fulfill Newell and Simon’s conception because they stand for non-existent objects, which can neither affect nor be affected by causal relations in space/time. Thus, “symbol systems” in Newell and Simon’s sense (of string manipulating systems) do not qualify as systems that utilize symbols in Peirce’s sense.
6. Symbol systems and semiotic systems
This result tends to reflect the fact that Newell and Simon’s conception of physical symbol system depends upon at least these two assumptions:
(a) expressions = df sequences of characters (strings of symbols); and,
(b) symbols = df elements of expressions (tokens of character types);
where these “character types” are those specified by some set of characters (ASCII, EBCDIC, whatever). This construction receives further support from other remarks they make during the course of their analysis of completeness and closure as properties of systems of this kind, insofar as they maintain:
(i) there exist expressions that designate every process of which such a system (machine) is capable; and,
(ii) there exist processes for creating any expression and for modifying any expression in arbitrary ways;
which helps to elucidate the sense in which a system can affect an object itself or behave in ways depending on that object, namely, when that object itself is either a computer command or a string of characters from such a set. (A rather similar conception can be found in Newell, 1973, esp. pp. 27-28.)
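On one reading of (a), (b), (i) and (ii) – offered only as a sketch, with every name below an assumption of the illustration rather than anything found in Newell and Simon – such a system amounts to a string-manipulating machine: its “symbols” are characters drawn from a character set, its “expressions” are sequences of such characters, and “interpretation” consists in executing whatever process a given expression designates.

```python
# Illustrative sketch only: a toy "expression processing" system in the spirit of
# conditions (a), (b), (i) and (ii). "Symbols" are characters from a character set,
# "expressions" are strings over that set, and "interpretation" carries out the process
# an expression designates. All names here are made up for the example.

CHARACTER_SET = set("abcdefghijklmnopqrstuvwxyz ")

def is_expression(s):
    """An expression is any sequence of symbols drawn from the character set."""
    return all(ch in CHARACTER_SET for ch in s)

PROCESSES = {                           # expressions that designate internal processes
    "reverse": lambda s: s[::-1],
    "shout":   lambda s: s.upper(),
}

def interpret(command, argument):
    """Carry out the process designated by an expression, if it designates one."""
    if is_expression(command) and command in PROCESSES:
        return PROCESSES[command](argument)
    return argument                     # expressions that designate nothing are left alone

print(interpret("reverse", "symbol structure"))   # 'erutcurts lobmys'
print(interpret("shout", "expression"))           # 'EXPRESSION'
```

Nothing in such a sketch requires that any of its strings possess an intension or an extension, which bears on the argument that follows.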
Conditions (i) and (ii), I believe, are intended to support the restriction of symbol systems to general purpose digital computers, even though they can only be satisfied relative to some (presupposed) set of symbols, no matter how arbitrarily selected. Whether or not these conditions actually have their intended effect, however, appears to be subject to debate. Although programming languages vary somewhat on this issue, such practices as the overloading of operators, the multiple definition of identifiers and the like are ordinarily supposed to be important to avoid in order to secure the formal syntax of languages that are suitable for employment with computers, which tends to restrict the extent to which they can be arbitrarily composed. Moreover, since typewriters and calculators seem to have unlimited capacities to process any number of expressions that can be formulated within their respective sets of symbols, it could be argued that they are not excluded by these constraints, which is a striking result (insofar as few would be inclined to claim that a typewriter “has a mind of its own”).
No doubt, symbol systems in Newell and Simon’s sense typically behave in ways that depend upon certain members of the class of expressions, since they are causal systems that respond to particular computer commands for which they have been “programmed” (in a sense that takes in hardware as well as software considerations). Since computer commands function as input causes in relation to output effects (for suitably programmed machines), it should be obvious that Newell and Simon’s conception entails the result that physical symbol systems are causal systems. For the reasons outlined above, however, it should be equally apparent that their conception does not entail that these causal systems are semiotic systems of Type III. Indeed, if expressions were symbols in Peirce’s technical sense, then they would have to have intensions and extensions; but it seems plain that strings of symbols from a character set, however well-formed, need not have these properties.
Indeed, the most telling considerations of all emerge from inquiring for whom Newell and Simon’s “symbols” and “expressions” are supposed to be significant (apart from the special class of computer commands, where, in fact, it remains to be ascertained whether or not those commands function as signs for those systems). Consider, for example, the following cases:
INPUT              (FOR SYSTEM)          OUTPUT
finger (pushes)    button (causing)      printout of file
match (lights)     fuse (causing)        explosion of device
child (notices)    cloud (causing)       expectation of storm
When a finger pushes a button that activates a process, say, leading to a printout, no doubt an input for a causal system has brought about an output. When a match lights a fuse, say, leading to an explosion, that an input for a causal system has brought about an output is not in doubt. And when a child notices a cloud, say, leading to some such expectation, no doubt an input for a causal system has brought about an output. Yet, surely only the last of these cases is suggestive of the possibility that something stands for something (else) for that system, where that particular thing is a meaningful token for that system (with an intensional dimension) and where that system might be making (or have made) a mistake.
If these considerations are correct, then we have discovered, first, that the class of causal systems is not coextensive with the class of semiotic systems. Coffee cups and matches, for example, are particular cases of systems in space/time that stand in causal relations to other things, yet surely do not qualify as sign-using systems. Since two words, phrases or expressions mean the same thing only if their extensions are the same, causal systems and semiotic systems are not the same thing. We have also discovered, second, that the meaning of “symbol system” is not the same as that of “semiotic system of Type III.” General-purpose digital computers are causal systems that process expressions, yet do not therefore need to be systems for which signs function as signs. Since two words, phrases, or expressions mean the same thing only if their intensions are the same, symbol systems and semiotic systems of Type III are not the same things.
From the perspective of the semiotic approach, in other words, the conception of physical symbol systems encounters the distinction between sets of symbols that are significant for users of machines – in which case there is a semiotic relationship between those signs, what they stand for and those sign users, where the users are not identical with the machines themselves – and sets of symbols that are significant for use by machines – in which case there is a semiotic relationship between those signs, what they stand for and those sign users, where these users are identical with the machines themselves. Without any doubt, the symbols and expressions with which programmers program machines are significant signs for those programmers; and without any doubt, the capacity to execute such commands qualifies those commands as causal inputs with respect to causal outputs. That is not enough for these machines to be semiotic systems of Type III.
If these considerations are correct, then there is a fundamental difference between causal systems and semiotic systems, on the one hand, and between symbol systems and semiotic systems of Type III, on the other. Of course, important questions remain, including ascertaining whether or not there are good reasons to prefer one or another conception, which will depend in large measure upon their respective capacities to clarify, if not to resolve, troublesome issues within this domain. Moreover, there appear to be several unexamined alternatives with respect to the interpretation of Newell and Simon’s conception, since other arguments might be advanced to establish that symbol systems properly qualify either as semiotic systems of Type I or of Type II – or that special kinds of symbol systems properly qualify as semiotic systems of Type III, which would seem to be an important possibility that has not yet been explored. For the discovery that some symbol systems are not semiotic systems of Type III no more proves that special kinds of symbol systems cannot be semiotic systems of Type III than the discovery that some causal systems are not semiotic systems proves that special kinds of causal systems cannot be semiotic systems.
7. The symbol-system and the semiotic-system hypotheses
The conception of semiotic systems, no less than the conception of symbol systems, can be evaluated (at least, in part) by the contribution they make toward illuminating the relationship between the use of signs, on the one hand, and the manipulation of symbols, on the other, in relation to deliberate behavior. Both accounts, in other words, may be viewed as offering characterizations of “mental activity” in some appropriate sense, where these accounts are intended to afford a basis for understanding “intelligent” (or “deliberate”) behavior. Indeed, the respective theoretical hypotheses that they represent ought to be formulated as follows:
(h1) The Symbol-System Hypothesis: a symbol system has the necessary and sufficient means (or capacity) for general intelligent action; and,
(h2) The Semiotic-System Hypothesis: a semiotic system has the necessary and sufficient means (or capacity) for general intelligent action;
where these hypotheses are to be entertained as empirical generalizations (or as lawlike claims) whose truth or falsity cannot be ascertained merely by reflection upon their meaning within a certain language framework alone.
Because these hypotheses propose necessary and sufficient conditions, they could be shown to be false if either (a) systems that display “intelligent” (or “deliberate”) behavior are not symbol (or semiotic) systems; or (b) systems that are symbol (or semiotic) systems do not display “intelligent” (or “deliberate”) behavior. Moreover, since they are intended to be empirical hypotheses (cf. Newell and Simon, 1976, esp. p. 42 and p. 46), these formulations ought to be understood as satisfied by systems that display appropriate behavior without assuming (i) that behavior that involves the processing or the manipulation of a string of tokens from a character set is therefore either “intelligent” or “deliberate” (since otherwise the symbol-system hypothesis must be true as a function of its meaning); and, without assuming (ii) that behavior that is “intelligent” or “deliberate” must therefore be successful in attaining its aims, objectives, or goals, where a system that displays behavior of this kind cannot make a mistake (since otherwise the semiotic-system hypothesis must be false as a function of its meaning). With respect to the hypotheses (h1) and (h2), therefore, “intelligent action” and “deliberate behavior” are synonymous expressions.
A certain degree of vagueness inevitably attends an investigation of this kind to the extent to which the notions upon which it depends, such as “deliberate behavior” and “intelligent action,” are not fully defined. Nevertheless, an evaluation of the relative strengths and weaknesses of these hypotheses can result from considering classes of cases that fall within the extensions of “symbol system” and of “semiotic system,” when properly understood. In particular, it seems obvious that the examples of type/token recognition, of classical conditioning, and of instrumental conditioning considered above are instances of semiotic systems of Types I, II, and III that do not qualify as symbol systems in Newell and Simon’s sense. This should come as no surprise, since Newell and Simon did not intend that their conception should apply with such broad scope; but it evidently entails that hypothesis (h1) must be empirically false.
Indeed, while this evidence amply supports the conclusion that the semiotic-system approach has applicability to dogs, to pigeons, and to (other) birds that the symbol-system approach lacks, the importance that ought to attend this realization may or may not be immediately apparent. Consider, after all, that a similar argument could be made on behalf of the alternative conception, namely, that – depending upon the resolution of various issues previously identified – the symbol-system approach has applicability to typewriters, to calculators, and to (other) machines that the semiotic-system approach lacks, which may be of even greater importance if the objects of primary interest are machines. For if digital computers, for example, have to be symbol systems but do not have to be semiotic systems, that they might also qualify as semiotic systems does not necessarily have to be a matter of immense theoretical significance.
Nevertheless, to the extent to which these respective conceptions are supposed to have the capacity to shed light on the general character of the causal connections that obtain between mental activity and behavioral tendencies – that is, to the extent to which frameworks such as these ought to be evaluated in relation to hypotheses such as (h1) and (h2) – the evidence that has been presented would appear to support the conclusion that the semiotic-system approach clarifies connections between mental activity as semiotic activity and behavioral tendencies as deliberate behavior – connections which, by virtue of its restricted range of applicability, the symbol-system approach cannot accommodate. By combining distinctions between different kinds (or types) of mental activity together with psychological criteria concerning the sorts of capacities distinctive of systems of these different types (or kinds), the semiotic approach provides a powerful combination of (explanatory and predictive) principles, an account that, at least with respect to non-human animals, the symbol-system approach cannot begin to rival.
This difference, however, surely qualifies as an advantage of an approach only so long as it is being entertained as an approach to a certain specific class of problems, such as explaining and predicting the deliberate behavior (the “intelligent actions”) of non-human animals. To whatever extent Newell and Simon did not intend to account for the intelligent actions (the “deliberate behavior”) of non-human animals, it may be said, the incapacity of their conception to accommodate explanations and predictions of such behavior should not be held against them. The strict interpretation of this position would lead to the conclusions that (a) any theory should be evaluated exclusively in terms of its success or failure at achieving its intended aims, goals or objectives, where (b) unintended consequences should be viewed as, at most, of secondary importance, no matter how striking their character or significant their potential. Relative to this standard, the incapacity to accommodate non-human animals not only cannot count against Newell and Simon’s conception but instead ought to support it.
8. What about humans and machines?
A more interesting – and less implausible – position would be for Newell and Simon to abandon their commitment to the symbol system hypothesis and restrict the scope of their analysis to the thesis that, after all, general-purpose digital computers are symbol systems, where it really does not matter whether or not they have captured the nature of mental activity in humans or in (other) animals. There is a sense in which this attitude is almost precisely right, so long as the possibility that they may have captured no sense of mental activity is itself left open. Newell and Simon’s conception, in other words, may be completely adequate for digital computers but remain completely inadequate for other things – unless, of course, the precise processes that characterize symbol systems were the same processes that characterize, say, human or non-human systems that process knowledge or information, a position that is not consistent with the results we have discovered.
Notice, in particular, that the following theses regarding the relationship between symbol systems and semiotic systems are compatible:
(t1) general-purpose digital computers are symbol systems; and,
(t2) animals – human and non-human alike – are semiotic systems;
where, even if not one digital computer heretofore constructed qualifies as a semiotic system, that some digital computer yet to be built might later qualify as a semiotic system remains an open question. Indeed, whether information or knowledge processing in humans and in (other) animals is like that in symbol systems or that in semiotic systems appears to be the fundamental question at the foundations of artificial intelligence.
Strictly speaking, after all, to be a symbol system in Newell and Simon’s sense is neither necessary nor sufficient to be a semiotic system in the Peircean sense. Recall, in particular, that their account of designation presupposes the existence of that which is designated, since otherwise that system could neither affect nor be affected by that thing; yet it is no part of the notion of an icon or a symbol, for example, that the things thereby signified should be open to causal influence by a semiotic system. Moreover – and most importantly – nothing about their conception warrants the conclusion that symbol systems in their sense even have the capacity to utilize signs in Peirce’s sense at all, even though – as I readily concede – there is no reason to deny that they are causal systems.
Artificial intelligence is often taken to be an attempt to develop causal processes that perform mental operations. Sometimes such a view has been advanced in terms of formal systems, where “intelligent beings are … automatic formal systems with interpretations under which they consistently make sense” (Haugeland, 1981, p. 31). This conception exerts considerable appeal, since it offers the promise of reconciling a domain about which a great deal is known – formal systems – with one about which a great deal is not known – intelligent beings. This theory suffers from profound ambiguity, however, since it fails to distinguish between systems that make sense to themselves and those that make sense for others. Causal models of mental processes, after all, might either effect connections between inputs and outputs so that, for a system of a certain specific type, those models yield outputs for certain classes of inputs that correspond to those exemplified by the systems that they model; or else effect those connections between inputs and outputs and, in addition, process these connections by means of processes that correspond to those that are exemplified by those systems that they model.
This distinction, which is not an unfamiliar one, can be expressed by differentiating between “simulation” and “replication,” where, say,
(a) causal models that simulate mental processes capture connections between inputs and outputs that correspond to those of the systems that they represent; while,
(b) causal models that replicate mental processes not only capture these connections between inputs and outputs but do so by means of processes that correspond to those of the systems they represent;
where, if theses (t1) and (t2) are true, then it might be said that symbol systems simulate mental processes that semiotic systems replicate – precisely because semiotic systems have minds that symbol systems lack. There are those, such as Haugeland (1985), of course, who are inclined to believe that symbol systems replicate mental activity in humans too because human mental activity, properly understood, has the properties of symbol systems too. But this claim appears to be plausible only if there is no real difference between systems for which signs function as signs for those systems themselves and systems for which signs function as signs for the users of those systems, which is the issue in dispute.
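The simulation/replication contrast can also be put in a small illustrative form (a sketch only, under the assumption that “process correspondence” may be cashed out as correspondence of intermediate steps; the card-sorting example and all names are inventions of the illustration): both models below agree on every input/output connection, yet only the second proceeds by steps corresponding to those of the system modeled.

```python
# Illustrative sketch only: two causal models of a person sorting a hand of cards.
# Both yield the same outputs for the same inputs (simulation of the input/output
# connections), but only the second proceeds by intermediate steps that correspond to
# the person's procedure of inserting each card into place (a stand-in for replication).

def simulate_sorting(cards):
    return sorted(cards)                 # right outputs, by whatever internal means

def replicate_sorting(cards):
    hand = []
    for card in cards:                   # take up one card at a time
        i = 0
        while i < len(hand) and hand[i] < card:
            i += 1                       # scan for the card's place, as the person does
        hand.insert(i, card)             # and insert it there
    return hand

deal = [7, 2, 9, 4]
print(simulate_sorting(deal) == replicate_sorting(deal))   # True: identical input/output
```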
Another perspective on this matter can be secured by considering the conception of systems that possess the capacity to represent and to utilize information or knowledge. The instances of semiotic systems of Types I, II, and III that we have examined seem to fulfill this desideratum, in the sense that, for Pavlov’s dogs, for Skinner’s pigeons, and for Allen’s birds, there are clear senses in which these causal systems are behaving in accordance with their beliefs, that is, with something that might be properly characterized as “information” or as “knowledge.” Indeed, the approach represented here affords the opportunity to relate genes to bodies to minds to behavior, since phenotypes develop from genotypes under the influence of environmental factors, where phenotypes of different kinds may be described as predisposed toward the utilization of different kinds of signs, which in turn tends toward the acquisition and the utilization of distinct ranges of behavioral tendencies, which have their own distinctive strengths (Fetzer, 1985, 1986b). This in itself offers significant incentives for adopting the semiotic approach.
Yet it could still be the case that digital computers (pocket calculators and the like) cannot be subsumed under the semiotic framework precisely because (t1) and (t2) are both true. After all, nothing that has gone before alters the obvious differences between systems of these various kinds, which are created or produced by distinctive kinds of causal processes. The behavior of machine systems is (highly) artificially determined or engineered, while that of human systems is (highly) culturally determined or engineered, and that of (other) animal systems is (highly) genetically determined or engineered. Systems of all three kinds exhibit different kinds of causal capabilities: they differ with respect to their ranges of inputs/stimuli/trials, with respect to their ranges of outputs/responses/outcomes, and with respect to their higher-order causal capabilities, where humans (among animals) appear superior. Even if theses (t1) and (t2) were true, what difference would it make?
9. What difference does it make?
From the point of view of the discipline of artificial intelligence, whether computing machines do what they do the same way that humans and (other) animals do what they do only matters in relation to whether the enterprise is that of simulating or of replicating the mental processes of semiotic systems. If the objective is simulation, it is surely unnecessary to develop the capacity to manufacture semiotic systems; but if the objective is replication, there is no other way, since this aim cannot otherwise be attained. Yet it seems worth asking whether the replication of the mental processes of human beings would be worth the time, expense, and effort that would be involved in building them. After all, we already know how to reproduce causal systems that possess the mental processes of human beings in ways that are cheaper, faster, and lots more fun. Indeed, when consideration is given to the limited and fallible memories, the emotional and distorted reasoning, and the inconsistent attitudes and beliefs that tend to distinguish causal systems of this kind, it is hard to imagine why anyone would want to build them: there are no interpretations “under which they consistently make sense.”
A completely different line could be advanced by defenders of the faith, however, who might insist that the distinction I have drawn between symbol systems and semiotic systems cannot be sustained, because the conception of semiotic systems itself is circular and therefore unacceptable. If this contention were correct, the replication approach might be said to have been vindicated by default in the absence of any serious alternatives. The basis for this objection could be rooted in a careful reading of the account that I have given for semiotic systems of Type I, since, within the domain of semantics, icons are supposed to stand for that for which they stand “when they create in the mind of a sign user another – equivalent or more developed – sign that stands in the same relation to that for which they stand as do the original signs creating them.” This Peircean point, after all, employs the notion of mind in the definition of one of the kinds of signs – the most basic kind, if the use of indices involves the use of icons and the use of symbols involves the use of indices, but not conversely – which might be thought to undermine any theory of the nature of mind based on his theory of signs.
This complaint, I am afraid, is founded upon an illusion; for those signs in terms of which other signs are ultimately to be understood are unpacked by Peirce in terms of the habits, dispositions, or tendencies by means of which all signs are best understood: “the most perfect account of a concept that words can convey,” he wrote, “will consist in a description of the habit which that concept is calculated to produce” (Peirce, 1955, p. 286). But this result itself could provide another avenue of defense by contending that systems of dispositions cannot be causal systems so that, a fortiori, semiotic systems cannot be special kinds of causal systems within a dispositional framework. Without intimating that the last word has been said with reference to this question, there appears to be no evidence in its support; but it would defeat the analysis that I have presented if this argument were correct.
The basic distinction between symbol systems and semiotic systems, of course, is that symbol systems may or may not be systems for which signs stand for something for those systems, while semiotic systems are, where I have employed the general criterion that semiotic systems are capable of making mistakes. A severe test of this conception, therefore, is raised by the problem of whether or not digital computers, in particular, are capable of making a mistake. If the allegations that the super-computers of the North American Aerospace Defense Command (NORAD) located at Colorado Springs have reported the U.S. to be under ballistic missile attack from the Soviet Union no less than 187 times are (even roughly) accurate, this dividing line may already have been crossed, since it appears as though all such reports thus far have been false. The systems most likely to fulfill this condition are ones in which a faulty inference can occur, the false can be mistaken for the true, or things can be misclassified, which might not require systems more complex than those capable of playing chess (Haugeland, 1981, p. 18) – but this question, as we have discovered, is theoretically loaded.
Human beings, as semiotic systems, display certain higher-order causal capabilities that deserve to be acknowledged, since we appear to have a remarkable capacity for inferential reasoning that may or may not differ from that of (other) animals in kind but undoubtedly exceeds them in degree. In this respect, especially, however, human abilities are themselves surpassed by “reasoning machines,” which are, in general, more precise, less emotional, and far faster in arriving at conclusions by means of deductive inference. The evolution and development of digital computers with inductive and perceptual capabilities, therefore, would seem to be the most likely source of systems that display the capacity to make mistakes. By this criterion, systems that have the capacity to make mistakes qualify as semiotic systems, even when they do not replicate processes of human systems.
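As a hypothetical illustration of the criterion at issue (the threshold and readings below are invented and stand for no actual system), consider a system that classifies noisy sensor readings inductively: its verdict is something that can turn out to be false, so a mistake is possible for it, whereas a device that merely evaluates expressions has no such belief-like state to get wrong.

```python
# Hypothetical sketch of the "capacity to make mistakes" criterion;
# the threshold and readings are invented for illustration only.

def classify_reading(reading, threshold=0.8):
    """Inductively classify a noisy sensor reading as an attack warning.
    The verdict is a belief-like state that can turn out to be false."""
    return reading >= threshold

actual_attack = False               # suppose no attack is actually under way
verdict = classify_reading(0.93)    # sensor noise yields a high reading -> True
print("misclassification occurred:", verdict != actual_attack)   # True: the false taken for the true

# A pure calculation such as 2 + 2, by contrast, simply executes;
# there is nothing the device thereby takes to be the case that could be false.
```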
A form of mentality that exceeds the use of symbols alone, moreover, appears to be the capacity to make assertions, to issue directives, to ask questions and to utter exclamations. At this juncture, I think, the theory of minds as semiotic systems intersects with the theory of languages as transformational grammars presented especially in the work of Noam Chomsky (Chomsky, 1965, and Chomsky, 1966, for example; cf. Chomsky, 1986 for his more recent views). Thus, this connection suggests that it might be desirable to identify a fourth grade of mentality, where semiotic systems of Type IV can utilize signs that are transformations of other signs, an evidential indicator of which may be taken to be the ability to ask questions, make assertions and the like. This conception indicates that the capacity for explicit formalization of propositional attitudes may represent a level of mentality distinct from the occurrence of propositional attitudes as such.
Humans, (other) animals, and machines, of course, also seem to differ with respect to other higher-order mental capabilities, such as in their attitudes toward and beliefs about the world, themselves and their methods. Indeed, I am inclined to believe that those features of mental activity that separate humans from (other) animals occur at just this juncture; for humans have a capacity to examine and to criticize their attitudes, their beliefs, and their methods that (other) animals do not enjoy. From this perspective, however, the semiotic approach seems to classify symbol systems as engaged in a species of activity that, if it were pursued by human beings, would occur at this level; for the activities of linguists, of logicians, and of critics in creating and in manipulating expressions and symbols certainly appear to be higher-order activities, indeed.
A fifth grade of mentality accordingly deserves to be acknowledged as a meta-mode of mentality that is distinguished by the use of signs to stand for other signs. While semiotic systems of Type I can utilize icons, of Type II indices, of Type III symbols, and of Type IV transforms, semiotic systems of Type V are capable of using meta-signs as signs that stand for other signs (one variety of meta-signs, of course, being meta-languages). Thus, perhaps the crucial criterion of mentality of this degree is the capacity for criticism, of ourselves, our theories, and our methods. While the conception of minds as semiotic systems has a deflationary effect in rendering the existence of mind at once more ubiquitous and less important than we have heretofore supposed, it does not thereby diminish the place of human minds as semiotic systems of a distinctive kind.
The introduction of semiotic systems of Type IV and of Type V, however, should not be allowed to obscure the three most fundamental species of mentality. Both transformational and critical capacities are presumably varieties of semiotic capability that fall within the scope of symbolic mentality. Indeed, as a conjecture, it appears to be plausible to suppose that each of these successively higher types of mentality presupposes the capacity for each of those below, where evolutionary considerations might be brought to bear upon the assessment of this hypothesis by attempting to evaluate the potential benefits for survival and reproduction relative to species and societies – that is, for social groups as well as for single individuals – that accompany this conception (cf. Fetzer, 1985 and especially 1986a).
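The conjecture that each successively higher type of mentality presupposes the capacities of those below can be rendered as a simple ordering. The sketch below is schematic and assumes only that ordering; the names and comments are shorthand for the five types distinguished in the text, not a definitive formalization.

```python
from enum import IntEnum

class SemioticType(IntEnum):
    """The five grades of mentality, ordered so that (on the conjecture above)
    each grade presupposes the sign-using capacities of all lower grades."""
    ICONIC = 1            # Type I:   use of icons
    INDEXICAL = 2         # Type II:  use of indices
    SYMBOLIC = 3          # Type III: use of symbols
    TRANSFORMATIONAL = 4  # Type IV:  signs that are transformations of other signs
    CRITICAL = 5          # Type V:   meta-signs that stand for other signs

def presupposed_capacities(grade: SemioticType):
    """Under the presupposition conjecture, a system of a given grade
    commands the capacities of every grade up to and including its own."""
    return [t for t in SemioticType if t <= grade]

# A Type III (symbolic) system would, on this conjecture, also command icons and indices:
print([t.name for t in presupposed_capacities(SemioticType.SYMBOLIC)])
# -> ['ICONIC', 'INDEXICAL', 'SYMBOLIC']
```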
There remains the further possibility that the distinction between symbol systems and semiotic systems marks the dividing line between computer science (narrowly defined) and artificial intelligence, which is not to deny that artificial intelligence falls within computer science (broadly defined). On this view, what is most important about artificial intelligence as an area of specialization within the field itself would be its ultimate objective of replicating semiotic systems. Indeed, while artificial intelligence can achieve at least some of its goals by building systems that simulate – and improve upon – the mental abilities that are displayed by human beings, it cannot secure its most treasured goals short of replication, if such a conception is correct. It therefore appears to be an ultimate irony that the ideal limit and final aim of artificial intelligence – whether by replicating human beings or by creating novel species – could turn out to be the development of systems capable of making mistakes.
Acknowledgment
The original version of this paper was presented at New College on 8 May 1976. Subsequent versions were presented at the University of Virginia, at the University of Georgia, and – most recently – at Reed College. I am indebted to Charles Dunlop, Bret Fetzer, Jack Kulas, Terry Rankin, and Ned Hall for instructive comments and criticism.
References
Buchanan B. (1985) Expert Systems. Journal of Automated Reasoning 1: 28–35.
Chomsky N. (1965) Aspects of the Theory of Syntax. MIT Press, Cambridge MA.
Chomsky N. (1966) Cartesian Linguistics. Harper & Row, New York.
Chomsky N. (1986) Knowledge of Language: Its Nature, Origin, and Use. Praeger Publishers, New York.
Cohen D. (1986) Introduction to Computer Theory. John Wiley & Sons, Inc., New York.
Fetzer J. H. (1981) Scientific Knowledge. D. Reidel, Dordrecht, Holland.
Fetzer J. H. (1985) Science and Sociobiology. In: J. H. Fetzer (ed.) Sociobiology and Epistemology (D. Reidel, Dordrecht, Holland): 217–246.
Fetzer J. H. (1986a) Methodological Individualism: Singular Causal Systems and Their Population Manifestations. Synthese 68: 99–128.
Fetzer J. H. (1986b) Mentality and Creativity. Journal of Social and Biological Structures (forthcoming)
Haugeland J. (1981) Semantic Engines: An Introduction to Mind Design. In: J. Haugeland (ed.) Mind Design (MIT Press, Cambridge MA.): 1–34.
Haugeland J. (1985) Artificial Intelligence: The Very Idea. MIT Press, Cambridge MA.
Morris C. W. (1938) Foundations of the Theory of Signs. University of Chicago Press, Chicago IL.
Newell A. (1973) Artificial Intelligence and the Concept of Mind. In: R. Schank and K. Colby (eds.) Computer Models of Thought and Language (W. H. Freeman and Company, San Francisco.): 1–60.
Newell A. & Simon H. (1976) Computer Science as Empirical Inquiry: Symbols and Search. Reprinted in: J. Haugeland (ed.) Mind Design (MIT Press, Cambridge MA.): 35–66.
Nii H. P. et al. (1982) Signal-to-Symbol Transformation: HASP/SLAP Case Study. AI Magazine 3: 23–35.
Peirce C. S. (1955) Philosophical Writings of Peirce. J. Buchler (ed.) (Dover Publications, New York.)
Peirce C. S. (1985) Logic as Semiotic: The Theory of Signs. Reprinted in: R. Innis (ed.) Semiotics: An Introductory Anthology (Indiana University Press, Bloomington IN): 4–23.
Rich E. (1983) Artificial Intelligence. McGraw-Hill, New York.