CEPA eprint 4207

The new orthodoxy: Humans, animals, Heidegger and Dreyfus

Collins H. M. (2009) The new orthodoxy: Humans, animals, Heidegger and Dreyfus. In: Leidlmair K. (ed.) After cognitivism: A reassessment of cognitive science and philosophy. Springer, Dordrecht: 75–85. Available at http://cepa.info/4207
Table of Contents
Introduction: The New Orthodoxy and its Problems
Socialness
Embeddedness in Society
Language and Embodiment
Socialness, Language, and Artificial Intelligence
Conclusion
References
Introduction: The New Orthodoxy and its Problems
I cannot imagine a better introduction to the mainstream philosophical debate about artificial intelligence than that provided by Hubert Dreyfus in this volume.[Note 1] Dreyfus, as he explains, is now to be included within the mainstream, a position he has achieved after a notoriously unjustified delay of many decades, and by a process which is, to some extent, described in the paper itself (AI students attending his MIT seminar and so forth). Dreyfus, by pulling things together so clearly, has actually made it easier to see what is still wrong even now that he and Heidegger have been grasped to the bosom of AI. What is missing is not, however, what Dreyfus says it is – more of his type of Heidegger. What is missing is any understanding of the distinction between humans and animals.[Note 2]
Well, actually, this problem is partly alluded to on the very last page, where Dreyfus says, ‘If we can’t make our brain model responsive to the significance in the environment as it shows up specifically for human beings, the project of developing an embedded and embodied Heideggerian AI can’t get off the ground’ (Dreyfus’s stress). But, on the evidence presented here and elsewhere, what Dreyfus means by ‘specifically for human beings’ is not so different from what he might mean by ‘specifically for rabbits,’ or ‘specifically for kangaroos’ – that is, just another species of animal.
To lean over backwards to be fair, Dreyfus does mention en passant on that last page that humans have ‘personal and cultural self-interpretation.’ He does not, however, discuss its significance, nor how it makes us radically discontinuous from animals in respect of the project of AI. I will argue that the difference is huge. Thus, if we are concerned with animals alone it is possible to foresee the building of machines that mimic the behaviour of living entities from rabbits and kangaroos to cats and dogs so long as we get better and better at what we do now, but it is not possible to foresee the building of machines that could mimic most of the things done by humans. In sum, we can visualise how we might build artificial rabbits and the like but not how we might build artificial members of a natural language-speaking community.
Dreyfus, above all, understands ‘the frame problem.’ The frame problem is how a creature decides what is going on in a constantly changing world so it can adjust its reactions to it in an appropriate way. The frame problem is described by Dreyfus on page XX: if you try to restrict your computer’s choice of actions to a set of ‘recipes’ appropriate to the frame – at a dinner party bring a bottle of wine – at a restaurant buy a bottle of wine – at the very best you have the problem of deciding which frame you are in at the time and this needs another recipe and so on ad infinitum. It amounts to what, following Wittgenstein, we can call the ‘rules regress’ – each rule for action requires another rule to explain how it is to be applied, and each of those rules requires another rule, and so on. But as you read Dreyfus’s paper it is apparent that the examples of this problem, and the related problems, as they face humans, are all mingled together with the examples of the problems as they face rabbits and other creatures.
The reason humans and animals are mixed up is, I believe, easy to fathom: Dreyfus, and those he invokes, are obsessed with individuals and particularly individuals’ bodies. They say, correctly, that the solution to the frame problem is to be found, not by making models of ever more complicated representations of the world, but by understanding how we actually live and interact with the world itself – using the world as its own representation. But the key examples they provide are always bodily interactions with a physical environment such as Heidegger’s ready-to-hand hammer. No wonder the rabbit and carrot fit in so smoothly. Even when Dreyfus does mention culture he talks of ‘personal and cultural self-interpretation,’ a grudging and awkward formulation which still hankers for the individual.
What I will now do is use a few examples in an attempt to show why this whole new orthodoxy is misplaced because it does not recognise that humans and animals are not continuous in terms of the problems of AI. I will try to show that any treatment that does not separate humans and non-humans at the outset, however Merleau-Pontyish or Heidegger/Dreyfusian it is, is bound, like Wittgenstein’s fly, to keep smashing its head into the glass of the social.
Most of the arguments I want to make are already in print, sometimes in the form of debates with others, including Bert, so here I’ll just outline them and provide references to the more complete treatments. It seems worth going over them again since the arguments below are certainly not part of any AI orthodoxy. In what follows, I first provide a reminder about what is special about humans, then show what is significant in respect of AI about humans’ embeddedness in societies, and then pull together the arguments about the special nature of language in a new way.
Socialness
The overall argument is that humans and animals are different because the former have language and culture whereas the latter do not. Human individuals experience the physical world quite differently depending on the social groups in which they have been brought up. These different collective experiences are ‘embodied’ in natural languages. Even domestic animals such as dogs and cats, whose upbringing has a huge overlap with the upbringing of human children, and whose social experience is as varied as that of their human masters and mistresses, just aren’t expected to have the equivalent degree of differentiation in the way they know the world and act within it. For example, there are no vegetarian cats or dogs. It is whatever it is that allows there to be this kind of variation between groups of human beings, that is not found between groups of cats, dogs and other animals, that makes a crucial difference to AI. Whatever it is, it not only creates differences, it also provides the conditions for certain kinds of competence within groups of human beings that aren’t found in animals. I am going to call that ‘whatever it is’ socialness. As a part of speech, think of socialness as like ‘consciousness.’ Think of it also as having the same role in the understanding of human action as David Chalmers claims for consciousness – a fundamental constituent of the world of the same order as the four forces that enter into physicists’ ‘dreams of a final theory.’[Note 3] I don’t know if Chalmers is right about consciousness but I think what he claims for it is certainly true of socialness.
Incidentally, I don’t know if dolphins and chimps have language (and socialness) – if they do to some extent, then to that extent they can go on the human side. The argument is about entities that have language and socialness, whichever they are. The domain of such entities is either coextensive, or nearly coextensive, with that of humans and I will use ‘humans’ as a short-hand term for such entities and not worry about boundary problems.
At the same time, under my usage, bees do not have a language – what bees do, and what most animals do, is exchange signals. The exchange of signals and the use of language are distinguishable by the fact that the former can be endlessly transformed from one coded form to another and back again without loss, whereas whenever languages are translated they are likely to lose something because meaning is related to the culture in which they are embedded.[Note 4] Exchanges of signals can be understood (‘translated,’ as it is sometimes said, but the correct word is ‘decoded’) by anyone and that is why we can ‘understand’ the ‘language’ of bees (that is, we can ‘decipher the code’). Languages proper can be translated only by those who have a cultural overlap with the entity doing the speaking, and that is why it is so hard to know whether dolphins are speaking a language and why, to support the claim that apes can use language, we have to teach them ours.
Embeddedness in Society
What it is that is afforded by membership of a society has been analysed at length by Martin Kusch and myself in our 1998 book called The Shape of Actions. We divide the domain of human actions into two types. ‘Mimeomorphic actions’ can be copied merely by replicating the externally visible behaviours regularly associated with the action – for example, punching in a predetermined number on a telephone keyboard. Polimorphic actions do not have behaviours regularly associated with them, however, so they cannot be copied just by copying visible behaviours. For example, the action of greeting, if it is to remain ‘greeting,’ rather than saluting, or insulting, or jesting, has to have variation in its behavioural instantiations. To repeat a greeting in just the same way every time would not work as greeting. Furthermore, different polimorphic actions are sometimes instantiated with identical behaviours. An example is signing your name, which might be paying money – as in signing a cheque – putting the finishing flourish to a written declaration of love, or surrendering the future of your country to the domination of a foreign power.
In the case of mimeomorphic actions, understanding the relationship of behaviour to outcome is possible without understanding the society. One could, with enough patience, simply work out the correlation between certain behaviours that one did not understand and certain consequences that one may or may not understand – as those who study bees have come to decipher the dance. One might even repeat those behaviours in order to bring about those consequences – as birdwatchers have learned to use bird-calls. In contrast, in the case of polimorphic actions it is necessary to understand the society in order to interact. Only if the social context in which the action is being carried out is understood can the appropriate behaviour for executing an action in a particular circumstance be generated. Thus, when I greet my beloved after a long absence with the utterance ‘you bastard,’ there is a good chance that she will understand it as a declaration of love indicated by my anger at the misery she has inflicted on me by being apart from me for so long. If I utter the same words on first meeting almost anyone else, things are likely to go wrong. The only way to learn to understand a society that we know of is to become a member of it (at least, temporarily).
This social embeddedness of the majority of our actions makes a difference to artificial intelligence. The book by Kusch and me works this out in considerable detail but the point can be made with a single classical example which is mentioned in passing in Bert’s paper. This is the example of bicycling, first famously invoked by Michael Polanyi to illustrate his concept of tacit knowledge – things we know but which we can’t tell. I quote Bert’s whole paragraph because the context of Heidegger and the hammer is also exactly to the point.
As usual with Heidegger, we must ask: what is the phenomenon he is pointing out? In this case he sees that, to observe our hammer or to observe ourselves hammering undermines our skillful coping. We can and do observe our surroundings while we cope, and sometimes, if we are learning, monitoring our performance as we learn improves our performance in the long run, but in the short run such attention interferes with our performance. For example, while biking we can observe passers by, or think about philosophy, but if we start observing how we skillfully stay balanced, we risk falling over.
I have no doubt that Bert and Heidegger are both right about the fact that we risk degrading our performance if we pay self-conscious attention to the way we execute certain physical actions such as hammering and balancing on a bicycle. But this fact has to do only with the way humans perform such tasks efficiently. The proof that this lack of human self-consciousness when carrying out physical tasks has nothing to do with our ability to make a machine that can do the act is obvious. It is easy to make an artificial bike-riding machine and it has been done. As far as I know it uses gyroscopes and a feedback system. So if one wants to make an artificial bike-rider, the fact that humans do it best when they are not paying attention is neither here nor there. And that is because balancing on a bike is a mimeomorphic action – anything that reproduces the behaviours mimics the action.[Note 5] As a matter of fact it is not even the case that humans can ride bikes only if they do not pay attention. If we had much faster brains, or the equivalent – if we were riding on the surface of an asteroid with very low gravity so that the bike fell extremely slowly – we could ride pretty well by self-consciously following a set of rules or diagrams in rather the same way as we assemble flat-pack furniture. The fact that in our world we have to do it unselfconsciously has to do with the limits to the way our bodies and brains work – our somatic limits.[Note 6]
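To make the argument concrete, the following is a minimal sketch (in Python) of the kind of fixed feedback rule such a balancing machine might embody. The toy lean-dynamics model, the gain values and the function name are illustrative assumptions of mine, not a description of any actual self-balancing bicycle; the point is only that a context-free rule of this kind reproduces the behaviour without anything resembling attention, frames or socialisation.

# Sketch: balancing treated as a mimeomorphic action. A fixed
# proportional-derivative correction (the sort of thing a gyroscope plus a
# feedback controller provides) keeps a toy 'bicycle' upright. The model and
# the gains are illustrative assumptions, not any real robot's design.

def simulate_balance(initial_lean_rad=0.05, dt=0.01, steps=2000,
                     kp=25.0, kd=6.0, gravity_gain=9.8):
    """Return True if the toy bike is still (near) upright after the run."""
    lean, lean_rate = initial_lean_rad, 0.0
    for _ in range(steps):
        correction = -(kp * lean + kd * lean_rate)     # fixed rule: no context, no 'frames'
        lean_accel = gravity_gain * lean + correction  # gravity tips it, the rule rights it
        lean_rate += lean_accel * dt
        lean += lean_rate * dt
    return abs(lean) < 0.01

if __name__ == "__main__":
    print("upright after 20 seconds:", simulate_balance())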
But that is not all there is to bike-riding. There is a polimorphic component to bike-riding that has to do with riding in traffic: when riding in traffic the conventions of the particular society in which one’s journey takes place have to be understood. For example, bike-riding in China is very different to bike-riding in America and requires a different set of behaviours that can be grasped, so far as we know, only through socialization. This grasping of the meaning of bike-riding in different societies, and consequent execution of the appropriate actions, is impossible to mimic by any currently foreseeable machine.
The fact that Bert’s paper does not separate these two elements of bike-riding, or hammering for that matter, but runs them all together with rabbits’ carrot-eating, reveals the problem with the new orthodoxy. It renders the social – the glass of the fly bottle – invisible, and that is why it is destined, sooner or, as it more and more appears, later, to bash its head against it.
Language and Embodiment
The new way of pulling the arguments about language together turns on the role of the body. I want to suggest that when one tries to understand animals the body has one role but when one understands humans it has another. The difference lies in what I have called the ‘social embodiment thesis’ and the ‘minimal embodiment thesis.’[Note 7] The first thesis is about the relationship of the bodily form of the species to the world while the second thesis is about the relationship of an individual’s body to the world. Indisputably, the bodily form of the species affects the way of being in the world of that species and the individual members of it, and here there is no disagreement between my position and that of anyone else; this, to repeat, is the social embodiment thesis. The minimal embodiment thesis is where we start to disagree.
I claim that human individuals can have a way of being in the world that is, in most respects, identical to that of other human beings even if their individual bodily form varies greatly from that of the species (for example, if they have severe congenital abnormalities); this is the minimal embodiment thesis. I have argued that the reason this can be so is that in the case of humans the main determinant of much of the way of being in the world for the individual is not the body but language. One can immediately see why I think the obsession with the body among the new expanded orthodoxy is misplaced.
The logic of the idea can, perhaps, be illustrated by starting with animals. Rabbits (an arbitrary choice) have evolved a behavioural repertoire that is intertwined with the evolution of their bodily form as a species. For example, they are prey animals so they live in burrows where their predators cannot go. They also have powerful legs and terrific acceleration so they can forage outside the burrow and get back to safety in a short time should a predator appear. If a rabbit loses a leg its acceleration will not be so great and it will be easier prey. If it loses two legs it will probably die pretty soon. So rabbits’ way of being is very directly affected by individual bodily form. But to see the logic of how the individual body might not make a difference consider reproduction. A male rabbit with only two legs rather than four can, during the short period it survives, sire a perfectly formed baby rabbit. So, in respect of breeding, individual bodily form has no effect on ‘rabbitness.’ The rabbit case has the following logic: in respect of breeding, an individual rabbit remains completely unchanged so long as it is minimally embodied – i.e., has nothing left of a body except those bits necessary to mate. In all other respects, a severely deformed rabbit is not much like a rabbit.
In humans there is a second respect in which an individual human can survive pretty-well unchanged in spite of having a markedly untypical body. This is in the matter of linguistic fluency. The claim is that just as any damage to the body of a rabbit is completely invisible in a baby rabbit that it sires, so any damage to the body of a human (even congenital damage) is (or at least can be), completely invisible in the language it speaks. Though the language of humans, like the genetic code of rabbits, is structured by the bodily form of the species (a kind of body-centred Sapir-Whorf hypothesis) the language of any individual remains the same as that of the species whatever its body is like so long as the minimal amount is left that is required to enable embedding in the bath of language generated by the rest of the species. This minimal body might well include the brain, the larynx, and the ears or equivalents but it is up for debate.[Note 8]
This claim has been expressed in terms of the ‘strong interactional hypothesis’:
A person with maximal interactional expertise and no contributory expertise will be indistinguishable from a person with both in any test based on verbal interchange alone.
Here, ‘contributory expertise’ is the means and abilities to take part fully in a human activity, while ‘interactional expertise’ indicates linguistic fluency gained through immersion in the linguistic community without any corresponding physical interaction.
The strong interactional hypothesis can be, and has been, experimentally and observationally tested.[Note 9] It has been shown that the colour-blind are indistinguishable from colour perceivers in Turing-test-like situations because they spend their lives surrounded by the talk of colour perceivers; that a sociologist who has been long immersed in the field of gravitational wave physics can pass as a gravitational wave physicist when compared with and questioned by other gravitational wave physicists who knew that only one full-blown physicist was taking part in the test; and it is backed up in a looser way by Sacks’s observations of the linguistic abilities of the famously disabled ‘Madeleine.’[Note 10]
Socialness, Language, and Artificial Intelligence
So what does all this mean for the project of artificial intelligence? It could be said that AI is three different things. The goal of AI-1 is to engineer devices that are useful to humans because they can take over some of the things we normally have to do ourselves (such as grammar- and spell-checking or controlling the washing machine). Whether these devices do the job in just the same way as humans, or even produce an outcome that is exactly the same as that produced by humans, is of no concern so long as the machines are useful. I believe that The Shape of Actions, the book by Kusch and me, provides a framework for putting together recipes for the construction of useful machines under AI-1. With the recipes in hand, progress would be more sure-footed and there would be far less chance of falling foul of the old mistakes caused by lack of understanding of the social in the wider AI community – that is, failure to understand polimorphic actions.
The goal of AI-2 is to reproduce and thereby understand human behaviour and human thought. Those with this goal in mind will certainly have to understand Dreyfus and Heidegger because their ideas are central to understanding the way individual humans interact with the physical environment. There remains the problem of understanding language and socialisation but that problem is common to AI-2 and AI-3, the discussion of which now follows.
The goal of AI-3 is to mimic human actions, or subsets of human actions, exactly, irrespective of the means. As I see it, the goal of AI-3 is not so much to understand the nature of humans as to understand the nature of knowledge. For AI-3, balancing on a bike is a certain type of knowledge the possession of which can be mimicked by a machine, while riding in traffic is a different kind of knowledge that cannot (foreseeably) be mimicked. For the Dreyfusian approach, centred on the body, bike-balancing and bike-riding in traffic are not dissimilar because the way humans do them is equally hard to explicate. The fact that humans tend to learn both in roughly the same way – by guided instruction without self-conscious rule-following at the highest level of achievement – is just a coincidence as far as the knowledge approach is concerned. In principle, one can understand the nature of knowledge by building a machine that has knowledge even if it does not have it in the same way as humans, and has not learned it in the same way as humans. Analogously, humans pull things: one may understand the nature of pulling (AI-3) by examining farm tractors even though humans don’t have diesel engines or wheels; one may not, however, understand how humans use force (AI-2) by examining tractors. Here again, I believe The Shape of Actions established the correct dividing line between what kind of mimicking machines can be built and what kind can’t be built because it concentrates on knowledge rather than bodies.
To exemplify again, one sub-goal of AI-3 is to build machines that can pass the Turing Test irrespective of whether the artificial brain/entity is like the human brain. It has been said that the Turing Test is too easy to be a true test of AI, even AI-3, but this is far from true. No machine has come anywhere near passing unless the judges were unaware that a test was taking place. If the judges do not know it is a test then it becomes a test of hoaxing ability rather than language use; hoaxing is not imitating because the ‘hoaxee’ contributes a great deal to the result whereas in an imitation game, almost the whole contribution must be made by the imitator.[Note 11] Furthermore, a powerful Turing Test is very easy to design. It can be much more straightforward than the test as imagined by Turing. The test need only compare the ability of machine and person to edit small passages of text designed by a competent judge.
The problem of editing is easily explained. Consider the following sentence: ‘My spell-checker will correct weerd processor but won’t correct world processor.’ That is literally true, as is revealed by the jagged red line beneath ‘weerd,’ and the absence of such a line beneath ‘world,’ in the text as it appears on my computer screen as I write this passage. (Try it!) Now, it might be possible to rectify the problem by making a more elaborate spell-checker that checks word-pairings as well as single words. But the point is that the human who edits this piece is going to know that neither word is in need of correction because it is written exactly as intended. To make a spell-checker that can do that would require that it understand the whole of this paragraph and that means being fluent in the language and understanding the argument – a matter of polimorphic actions – not just using look-up tables, however complex.[Note 12]
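The contrast between look-up and understanding can be sketched in a few lines of Python. The miniature vocabulary and word-pair list below are made-up assumptions for illustration, not any real spell-checker’s data: a word-level check flags ‘weerd’ but not ‘world,’ and a pair-level check can also flag ‘world processor’ as an unlikely pairing, yet neither kind of table can know that, in this paragraph, both spellings are exactly as intended.

# Toy illustration of single-word versus word-pair checking. The word list
# and pair list are invented for the example; a real checker's tables would
# be vastly larger, but the limitation is the same.

VOCABULARY = {"my", "spell-checker", "will", "correct", "word", "weird",
              "world", "processor", "but", "won't"}
COMMON_PAIRS = {("my", "spell-checker"), ("spell-checker", "will"),
                ("will", "correct"), ("correct", "word"), ("correct", "world"),
                ("word", "processor"), ("processor", "but"),
                ("but", "won't"), ("won't", "correct")}

def flag_words(text):
    """Flag tokens not in the vocabulary (the ordinary spell-check)."""
    return [w for w in text.lower().split() if w not in VOCABULARY]

def flag_pairs(text):
    """Flag adjacent pairs whose words are individually fine but rarely co-occur."""
    words = text.lower().split()
    return [(a, b) for a, b in zip(words, words[1:])
            if a in VOCABULARY and b in VOCABULARY and (a, b) not in COMMON_PAIRS]

sentence = ("my spell-checker will correct weerd processor "
            "but won't correct world processor")
print(flag_words(sentence))   # ['weerd'] -- the jagged red line
print(flag_pairs(sentence))   # [('world', 'processor')] -- flaggable, yet exactly as intended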
Thus, a machine that could edit well-chosen passages as competently as a human editor would have to mimic the social embeddedness of a human editor. But, as of now, the only way we know how to mimic social embeddedness is to embed in society – to do it the way humans do it. As things stand, then, AI-2 and AI-3 are identical in respect of this problem. To pass a well-designed Turing Test a machine would have to be embedded in society. Such a machine could develop interactional expertise in any domain in which it was embedded. It would no longer be merely mimicking what animals do but mimicking the thing that humans do that is beyond the reach of animals. It would, in other words, be the kind of entity which has socialness and, as a result, could participate in language communities. It would have to come to own the specialist tacit knowledge that pertains to linguistic fluency in a specialist domain. It could do this, as I believe has been shown, without much in the way of a body.
Conclusion
I have argued that the problem of artificial intelligence cannot be solved unless it confronts the central role of socialness in human life. This confrontation will not take place so long as the problem of mimicking animals and the problem of mimicking humans are conflated. Unfortunately, such a conflation is encouraged within the new orthodoxy, which takes it that the body is central to the problem of AI. I can see no reason of principle (there may be lots of technical reasons) why animals should not be mimicked by artificial intelligence techniques. If this is correct, there is also no reason of principle why human abilities that consist of mimeomorphic actions alone should not be mimicked by artificial intelligence techniques.[Note 13] As for machines mimicking polimorphic actions, there may or may not be reasons of principle that prevent it being done. What we can say for certain is that there is no currently foreseeable way to do it. We do not even know how human babies grow up to be human adults, never mind how to make machines embed themselves in human societies. Furthermore, such machines would have to embed themselves in the way that humans embed themselves. The ‘location’ of language and culture, in so far as it is the ‘grey matter,’ is the grey matter in the many human brains that make up language-speaking or cultural communities. As Clark argues, the human mind is extended – but it is extended through other minds, not just artefacts.[Note 14] Individuals do not decide which words or which mannerisms will come into use in society and which will fade away; the collectivity decides. Individuals propose but only the collectivity disposes. An artificial brain would have to be able to propose and judge its proposals according to its judgements of potential success and then accept success or failure just as the human individual does. It is a business that is very hard to understand.
Dreyfus is right to pour scorn on Rodney Brooks’s attempts to model human behaviour by building the robot COG and its successors. Elsewhere I have referred to this as cargo cult science.[Note 15] Just as the Pacific Islanders hoped that building something in the form of a runway would bring cargo, Brooks seems to have hoped that building something with some minor resemblance to a human would bring intelligence. Dreyfus’s reasons and mine for criticising Brooks are different, however. Dreyfus thinks Brooks’s project was hopeless because he did not build anything that resembled a human in terms of bodily abilities. I think the project was hopeless because he did not even begin to think about how COG could be socialised. The idea that some simple reward and punishment regime is equivalent to socialisation is plainly ridiculous because, so far as I understand, even devices with brains and bodies identical to those of humans (human babies) brought up in this equivalent of a Skinner box fail to learn to be social adults. What is needed is to understand socialisation better or work out how to mimic it by some other means. Perhaps this will be more likely to come about if we incline ourselves to study human knowledge rather than the way humans possess knowledge.
References
Chalmers D. J. (1996) The Conscious Mind: In Search of a Fundamental Theory, New York: Oxford University Press.
Collins H. M. (1998) ‘Socialness and the Undersocialised Conception of Society’, Science, Technology and Human Values 23(4): 494–516.
Collins H. M. (2007) Bicycling on the Moon: Collective tacit knowledge and somatic-limit tacit knowledge. Organization Studies 28(2): 257–262.
Collins H. M. (2010 forthcoming) Tacit and Explicit Knowledge, Chicago: University of Chicago Press.
Collins H., Clark A. & Shrager J. (2008) Keeping the Collectivity in Mind? Phenomenology and the Cognitive Sciences 7(3): 353–374.
Collins H. & Evans R. (2007) Rethinking Expertise, Chicago: University of Chicago Press.
Collins H. M. & Kusch M. (1998) The Shape of Actions: What Humans and Machines Can Do, Cambridge MA: MIT Press.
Collins H. & Pinch T. (2005) Dr Golem: How to think about medicine, Chicago: University of Chicago Press.
Sacks O. (1985) The Man Who Mistook his Wife for a Hat. London: Duckworth.
Selinger E., Dreyfus H. & Collins H. (2007) Embodiment and Interactional Expertise. In: H. M. Collins (ed.) Case Studies in Expertise and Experience: Special Issue of Studies in History and Philosophy of Science 38(4): 722–740 (December)
Endnotes
1
Dreyfus H. (2008) Why Heideggerian AI failed and why fixing it would make it more Heideggerian. In: Leidlmair K. (ed.) After Cognitivism. Springer, Dordrecht: 39–73.
2
Evan Selinger has pointed out to me that in so far as Dreyfus concentrates on the embodiment aspect of Heidegger’s philosophy he is not being faithful to Heidegger himself. Heidegger’s overall approach includes a marked discontinuity between humans and animals. Heidegger, then, is not being clasped quite so close to the bosom of AI as Bert’s paper implies. Selinger suggests that, ironically, in this respect the critique advanced here is more Heideggerian than Dreyfus’s paper. My knowledge of Heidegger is minimal, so where I refer to Heidegger in this paper I should really be talking about ‘Dreyfus’s Heidegger’ at least as he appears here and in other works by Dreyfus on AI – that is where I get my Heidegger from. Karl Leidlmair has made similar points about the relationship between Heidegger and Dreyfus’s AI-Heidegger as his introduction to this volume indicates.
3
Chalmers (1996). The argument about socialness is first made in Collins (1998).
4
These definitions are from Collins (2010) forthcoming.
5
It ‘mimics’ the action rather than ‘reproduces’ it because an action always goes with an intention and in the mechanical rider there is no intention.
6
This argument, and the use of the term ‘somatic limit tacit knowledge’ can be found in Collins (2007) and (2010) forthcoming.
7
See Collins and Evans (2007) for the latest use of these terms though they go back some years.
8
For an indication of how the debate might go, or even whether the thesis stands up, see Selinger et al. (2007).
9
Collins and Evans (2007).
10
Sacks (1985). As with many provocative experiments, the interpretation of these has been challenged (Selinger et al., 2007).
11
Collins and Evans (2007) has more on the editing test and on hoaxing vs. imitation games. See also Chapter 2, on bogus doctors, in Collins and Pinch (2005).
12
Very complicated look-up tables have been invented after the style of John Searle’s ‘Chinese Room.’ However ingenious, unless continually updated by humans, such as those who construct the initial entries, they still fail any Turing Test that takes place in a changing world.
13
The domain of mimeomorphic actions is explored in The Shape of Actions (Collins and Kusch, 1998).
14
Collins et al. (2008).
15
Collins et al. (2008).