Abstracts

Introduction: The "New Computationalism"
Matthias Scheutz, University of Notre Dame, University of Vienna, ASoCS
 



Invited Talks

Four Dialectics of Computing
Brian Smith, Indiana University Bloomington, USA
 
Four dialectics underlie all of computing.  The first is the classic relation between mechanism and meaning: how a device that on the one hand is "mere mechanism" can nevertheless mean things, represent the world, carry information or significance.  The second has to do with the relation between the concrete and the abstract: whether computation is fundamentally a physical or material notion, or whether it is more abstract (even mathematical).  The third involves the relation between the static and the dynamic: how stable, relatively passive entities, such as programs and machines, relate to and engender active processes and behaviour.  The fourth involves an interplay between one and the many: how objects that for some purposes are best treated as unitary or single are, for other purposes, better treated as several (one language, many programs; one program, many executions; one procedure, many call sites; one file, many copies; one product, many versions; one document, many editions; etc.).

Different accounts of computing deal with these dialectics in different ways.  Formal symbol manipulation conceptions typically separate meaning and mechanism, treating the latter abstractly.  Dynamical systems approaches treat computing concretely, and take time seriously, but (usually) ignore semantics.  Ontological issues of one vs. many are inscribed in all sorts of traditional distinctions (type vs. token, class vs. instance, set vs. member, variable vs. value, intension vs. extension, etc.), but are rarely theorised together, as variant species of a common genus.
 
No claim is made that these four dialectics are exhaustive.  But characterising computing in their terms, and identifying distinctive characteristics of each, provides a framework in terms of which to assess any candidate theory.



Ideas about a New Computationalism: Its possible limits and its relation to VR-Technologies
Rainer Born, University of Linz, Austria

Why a Closed, Rule-governed, 'Digital' System need not be a Formal System: the Case of the Ancient Game of Go
Adrian Cussins, University of Illinois at Urbana, USA
 



Authentic Intentionality
John Haugeland, University of Pittsburgh, USA

Original (not "intrinsic"!) intentionality is essential to genuine cognition. The paper sketches a positive account of what is required for a "system" to have original intentionality. The main conclusion is that original intentionality presupposes an ability to accept responsibility. Thus, contrary to the assumptions of many researchers, responsibility is an essential topic of cognitive science.



Narrow Versus Wide Mechanism
Jack Copeland, University of Canterbury, NZ

A narrow mechanist holds that the mind is a machine equivalent to a Turing machine. A wide mechanist holds that the mind is a machine but countenances the possibility of information-processing machines that cannot be mimicked by a universal Turing machine, allowing in particular that the mind may be such a machine. Relying on neglected work by Turing, I argue that it is wide mechanism, not narrow, that is the legitimate descendant of the historical mechanism of Descartes, Hobbes, La Mettrie, et al. It is often said that logical work by Turing and Church has shown that mechanism is exhausted by narrow mechanism, but this view is a muddle. Turing himself, a mechanist par excellence, was not a narrow mechanist. Standard arguments for narrow mechanism are vitiated by various closely related fallacies, including the 'equivalence fallacy' and the 'simulation fallacy'.



Symbol Grounding and the Origin of Language
Stevan Harnad, University of Southampton, UK

The Symbol Grounding Problem concerns the question of how to connect meaningless symbols to what they mean, rather than to just further meaningless symbols, all systematically interpretable to an outside mind, but meaningless in and of themselves. Evolution has clearly solved this problem in the case of both natural language and the language of thought. How has it done so? Some very simple artificial-life simulations of the adaptive advantages of symbolic "theft" over sensorimotor "toil" will be presented as a hint of how and why this might have taken place.
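To make the contrast concrete, here is a minimal toy sketch (my own illustration, not the Cangelosi & Harnad simulation cited below): a "toiler" must ground a category through costly trial and error, while a "thief" acquires it from a single symbolic message, provided the message's symbols are themselves already grounded. All numbers (trial counts, costs) are arbitrary assumptions.

    # Toy comparison of sensorimotor "toil" vs. symbolic "theft"
    # (illustrative parameters only).
    import random

    random.seed(1)
    TRIALS = 100        # foraging episodes per agent
    LEARN_AFTER = 30    # episodes of toil needed to ground the category
    MESSAGE_COST = 1.0  # one-off cost of receiving the symbolic message

    def toiler_payoff():
        payoff = 0.0
        for t in range(TRIALS):
            if t < LEARN_AFTER and random.random() < 0.5:
                payoff -= 1.0   # error while the category is still ungrounded
            else:
                payoff += 1.0   # correct foraging once (or when lucky)
        return payoff

    def thief_payoff():
        # The category is acquired instantly from one grounded proposition.
        return TRIALS * 1.0 - MESSAGE_COST

    print("toil: ", toiler_payoff())   # roughly 70 on average
    print("theft:", thief_payoff())    # 99.0

The point of the toy is only that theft dominates toil wherever grounded symbols are available to steal, which is the adaptive advantage at issue.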

Harnad, S., Steklis, H. D. & Lancaster, J. B. (eds.) (1976) Origins and Evolution of Language and Speech. Annals of the New York Academy of Sciences 280.

Harnad, S. (1990a) The Symbol Grounding Problem. Physica D 42: 335-346.

Harnad, S. (1996b) The Origin of Words: A Psychophysical Hypothesis. In Velichkovsky, B. & Rumbaugh, D. (Eds.), Communicating Meaning: Evolution and Development of Language. NJ: Erlbaum, pp. 27-44.

Cangelosi, A. & Harnad, S. (1998) The Adaptive Advantage of Symbolic Theft Over Sensorimotor Toil: Grounding Language in Perceptual Categories. Presented at the Second International Conference on the Evolution of Language, London, April 1998.



The Rumors of its Demise have been Greatly Exaggerated
David Israel, Stanford University, SRI International, USA
 
There has been much talk about computationalism being dead.  But as Mark Twain said of rumors of his own death: these rumors are greatly exaggerated.  Unlike Twain's case, of course, there is room for a good deal of doubt and uncertainty as to what it is exactly that is being claimed to have died.  Whose old conception are we talking about?  Turing's? Fodor's?

I will leave the issues of the computational model of mind to the philosophers and cognitive scientists.  I will address rather some -- or at any rate, one -- of the real shifts of focus in theoretical computer science: away from single-processor models of computation and toward accounts of interaction among computational processes.  I will even address the question as to whether this is a shift in paradigms or simply (?) a quite normal evolution of interests within a paradigm.  Maybe a few philosophical morals will be drawn.



Out of the Box: How AI Got Computers Wrong
Phil Agre, University of California, Los Angeles, USA

Numerous critics of artificial intelligence have argued that AI researchers, by using computers as the basis for their models, have misrepresented human beings.  Yet little attention has been paid to a complementary phenomenon: the ways in which AI research has misrepresented computers.  Anthropomorphic metaphors suggest treating computers in isolation as "boxes" comparable to individual human beings.  But computers, as they have developed historically, are in fact deeply embedded in their institutional environments.  This paper reviews several theories of the institutional embedding of computers, and uses these theories to make sense both of the complaints against AI and of the real successes that the field has had.



Contributed Talks

Transparent Computationalism
Ronald Chrisley, University of Sussex, UK

A distinction is made between two senses of the claim "cognition is computation".  One sense (the opaque reading) takes computation to be whatever is described by our current computational theory and claims that cognition is best understood in terms of that theory.  The transparent reading, which has its primary allegiance to the phenomenon of computation, rather than to any particular theory of it, is the claim that the best account of cognition will be given by whatever theory turns out to be the best account of the phenomenon of computation.  The distinction is clarified and defended against charges of circularity and changing the subject.  Several well-known objections to computationalism are then reviewed, and for each the question of whether the transparent reading of the computationalist claim can provide a response is considered.



Representation for the "New Computationalism"
Tony Chemero, Indiana University Bloomington, USA

The "New Computationalism" that is the subject of this conference requires an appropriate notion of representation.  The purpose of this essay is to recommend such a notion.  In cognitive science generally, there have been two primary candidates for spelling out what it is to be a representation: teleological accounts and accounts based on "decoupling."  I argue that the latter sort of account has two serious problems. First, it is multiply ambiguous; second, it is revisionist and alienating to many of the potential allies of the "New Computationalism".  I also suggest that teleological accounts do not suffer from these problems, making them more appropriate as the foundation of any new computationalism.



Embedding Computation
Georg Schwarz, University of Vienna, Austria

It is sometimes observed that computation in its current form is often disembodied, abstract, or purely syntactic.  I propose to argue that these features are not inherent properties of computation.  Instead, they result from specific applications, such as the computation of abstract functions, the simulation of physical functions, and the emulation of computer programs.  Once we carefully separate abstract from physical functions, simulation from computation, and emulation from implementation, we should be able to identify the conditions for a properly embedded computationalism.



The Indeterminacy of Nondeterminism
Walter Warwick, Indiana University Bloomington, USA

Ideas about computation have come to play an increasingly important role in many contemporary philosophical debates. The intuitions behind these ideas are underwritten by a long tradition in theoretical computer science. In this essay, however, I present an open question from complexity theory where the received theoretical understanding runs counter to some very natural intuitions about algorithms. I point to the nondeterministic Turing machine as the source of these troubles and then sketch a brief history of nondeterminism in theoretical computer science. Finally, I suggest that more recent developments in complexity theory are unlikely to resolve these philosophical tensions.
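As a concrete reminder of where the tension lies, here is a small illustrative sketch (mine, not the paper's): a nondeterministic machine "guesses" a witness in polynomially many steps, but the obvious deterministic simulation must walk the whole binary choice tree, as in this brute-force SUBSET-SUM search.

    # Deterministic simulation of a nondeterministic "guess" for SUBSET-SUM
    # (illustrative sketch). The nondeterministic machine accepts iff some
    # branch accepts; simulating it means trying all 2^n branches.
    from itertools import product

    def nondet_subset_sum(xs, target):
        for choices in product([0, 1], repeat=len(xs)):  # one bit per "guess"
            if sum(x for x, c in zip(xs, choices) if c) == target:
                return choices   # an accepting branch exists
        return None              # every branch rejects

    print(nondet_subset_sum([3, 34, 4, 12, 5, 2], 9))  # -> (0, 0, 1, 0, 1, 0)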



Computationalism - The Very Idea
David Davenport, Bilkent University, Ankara, Turkey

Computationalism is the view that computation, an abstract notion lacking semantics and real-world interaction, can offer an explanatory basis for cognition. This paper argues not only that this view is correct, but that computation, properly understood, would seem to be the only possible form of explanation! The argument is straightforward: To maximise their chances of success, cognitive agents need to make predictions about their environment. Models enable us to make predictions, so agents presumably incorporate a model of their environment as part of their architecture. Constructing a model requires instantiating a physical "device" having the necessary dynamics. A program and the computation it defines comprise an abstract specification for a causal system. An agent's model of its world (and presumably its entire mechanism) can thus be described in computational terms too, so computationalism must be correct. Given these interpretations, the paper refutes arguments that purport to show that everything implements every computation (arguments which, if correct, would render the notion of computation vacuous). It then goes on to consider how physical systems can "understand" and interact intelligently with their environment, and also looks at dynamical systems and the symbolic vs. connectionist issue.



Productivity and the Program/Memory Distinction
Oron Shagrir, The Hebrew University, Israel

In their seminal paper, Fodor and Pylyshyn (1988) argue that Classical, but not Connectionist, architectures support central features of cognition, such as productivity, systematicity and compositionality. Their argument challenges any non-Classical “successor” notion of computation. If the “successor” notion cannot account for these central features of cognition, it cannot serve as an explanatory notion of the mind. In this paper, I examine the arguments concerning productive capacities. I argue that Classical architectures do not support productive capacities, and that, even if they did, Connectionist architectures would fare no worse with respect to productivity.



No Cognition without Representation?
Dynamical Computationalism and the Emulation Theory of Representation
Samir Chopra, CUNY Graduate Center, New York, USA

The dynamical hypothesis threatens computationalism and its commitment to mental representations: the continuous, reciprocally causative nature of cognition renders such an analysis impossible. Andy Clark and Rick Grush have suggested a hybrid approach that models cognitive systems using dynamical systems theory but employs mental representation, via emulation, to aid its explanations. I will call such a view dynamical computationalism, and in this paper I evaluate its prospects.

The proposed 'architecture' of representation, which a cognitive agent must possess in order to be distinguished from a merely adaptive one, uses a notion of partial agent-environment coupling to facilitate its use of representations. In the capacity for emulating the environment for use as an internal representation, dynamical computationalism claims to have laid down a necessary condition for cognition.
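For readers unfamiliar with the emulation idea, a minimal sketch (my own, with made-up linear dynamics) of what such an emulator amounts to: an internal forward model, driven by an efference copy of the motor command, whose predictions can stand in for environmental feedback.

    # Minimal forward-model "emulator" sketch (illustrative dynamics only).
    def plant(x, u):
        # The real agent-environment loop.
        return 0.9 * x + 0.5 * u

    def emulator(x_est, u):
        # The agent's internal model of that loop; in practice only an
        # approximation, here identical for simplicity.
        return 0.9 * x_est + 0.5 * u

    x, x_est = 1.0, 1.0
    for step, u in enumerate([0.2, -0.1, 0.0, 0.3]):
        x = plant(x, u)             # actual (possibly delayed) feedback
        x_est = emulator(x_est, u)  # emulated feedback, available at once
        print(f"step {step}: actual={x:.3f}  emulated={x_est:.3f}")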

I will show that this approach reveals a misunderstanding of the force of the dynamical hypothesis. I also argue against both the plausibility and the necessity of the condition laid down: it is framed in terms of a capacity for emulation, yet its absence is compatible with cognitive behavior. Dynamical computationalism would be better off without such an implausible condition on cognition elevated to the status of a necessity.



Part Binding in a Noisy Environment by Dynamic Binding of Synfire Chains
Gaby Hayon, Moshe Abeles, Daniel Lehmann, The Hebrew University, Israel
 
Part binding is one of the various types of binding in cognitive psychology, where the parts of an object must be segregated from the background and bound together. In noisy environments there is more than one solution to the part binding problem, yet the brain chooses one such solution based on the relations among the parts. One model of the situation may consist of binding the representations of some primitive parts of an image to create a composite object. Expecting to see a specific object may affect which parts will be bound.

A synfire chain is a feed-forward excitatory network consisting of a large number of pools. Synfire chain models may account for the representation of composite objects by dynamic binding (synchronization of activity waves) among such chains. Using synfire chains for part binding calls for some binding-sensitive inhibitory mechanism which controls the total amount of synchronization in the network. Most known solutions assume specific inhibitory connections which are not biologically plausible. To control the synchronization level we introduce a synfire chain with excitatory as well as inhibitory neurons within the pools. We study the main properties of this control mechanism with simple models. Using this control mechanism, we solve a simple part binding problem and demonstrate the effect of priming, using a neural network simulation and a theoretical model.
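A minimal toy sketch of the mechanism described above (my own simplification, not the authors' model): an activity wave propagates along a chain of pools, and inhibitory neurons inside each pool cap the amount of synchronous excitation handed to the next pool. All sizes and weights are arbitrary assumptions.

    # Toy synfire chain with excitatory and inhibitory neurons in each pool.
    import numpy as np

    rng = np.random.default_rng(0)
    N_POOLS, N_EXC, N_INH = 10, 80, 20
    W_EXC, W_INH = 0.015, 0.03   # feed-forward weights (arbitrary)
    THRESHOLD, NOISE = 0.5, 0.05

    def step(exc, inh):
        # Drive on pool k+1: summed excitation minus within-pool inhibition.
        drive = W_EXC * exc.sum() - W_INH * inh.sum()
        potential = drive + NOISE * rng.standard_normal(N_EXC + N_INH)
        spikes = potential > THRESHOLD
        return spikes[:N_EXC], spikes[N_EXC:]

    exc = np.ones(N_EXC, dtype=bool)   # ignite pool 0 synchronously
    inh = np.zeros(N_INH, dtype=bool)
    for k in range(1, N_POOLS):
        exc, inh = step(exc, inh)
        print(f"pool {k}: {exc.sum()}/{N_EXC} excitatory spikes")

With these values the inhibitory population holds the drive just above threshold, so the wave propagates at a regulated level of synchrony instead of running away.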



Posters

Neuromorphic Engineering and a Successor Notion of Computation
Catherine Breslin1)

Introduction
It has been suggested that the construction of computational devices intended for the modelling of neurobiological processes, including cognition, will be unsuccessful unless the issue of physical implementation is addressed. This work focusses on the implications for neuron or cell models. It is assumed that the form and function of individual cells make a non-trivial contribution to overall computational behaviour, and that there are cases in which it is appropriate to make direct use of physical variables by implementation.

Technologies and Equivalences
The choice of substrate for implementation is constrained by the requirement that an equivalence exist between the technology of the substrate and the technology of the cell. Much has been said about the status of various kinds of equivalences: physical, algorithmic, computational (Harnad, 1994; Marr, 1982). It is argued here that the investment of time and resources required to construct a physical implementation is only justified in cases where a physical equivalence exists between the technology of the substrate and the technology of the cell. This means that the physics of the model cell must be identical to the physics of the biological cell.

Neuromorphic Engineering
A cell provides a barrier between the external world and its internal world. An electrical potential difference arises across the barrier and controls the diffusion of particles between extra- and intracellular spaces. The potential alters in response to changes in physicochemical variables. These alterations are the basis for information encoding and transmission in the nervous system. Likewise, a field-effect transistor provides a barrier between source and drain regions. The gate terminal controls the diffusion of particles between these regions. The physics in both systems is described by Boltzmann's distribution. The equivalence between the two systems was first noted by Mead (1989). This means that cellular computation can be constructed from the intrinsic properties of transistors and the materials from which they are composed. Neuromorphic engineering uses this technology to model cells, networks and whole sensorimotor systems. The technology is also capable of providing an interface with the external world and with other biological elements. Cell membranes are used interchangeably with the gates of transistors to create interfaces between the technologies of biology and analogue devices (Fromherz & Stett, 1995).
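The shared Boltzmann form can be made explicit. In standard textbook notation (the classical equations, quoted for illustration rather than taken from the poster), the membrane's equilibrium (Nernst) potential for a monovalent ion and the subthreshold MOS channel current are:

    V_{eq} = \frac{kT}{q}\,\ln\frac{n_{out}}{n_{in}}

    I_{ds} \approx I_0\,\exp\!\left(\frac{\kappa V_g}{U_T}\right),
    \qquad U_T = \frac{kT}{q} \approx 26\ \mathrm{mV}

Here n_out and n_in are the ion concentrations on either side of the membrane, V_g the gate voltage and \kappa the gate coupling coefficient. In both cases the exponential arises from the Boltzmann distribution of charge carriers over an energy barrier, which is what licenses the physical, rather than merely algorithmic, equivalence argued for above.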

Conclusions
The morphology of the cell can affect information encoding and transmission. Physically implementing this morphology makes it possible to explore the relationship between structure and function in a way that has explanatory and predictive ability. Whilst this can be seen to have benefits for cellular neuroscience, it is not obvious how this extends to cognitive neuroscience. It may be necessary to decide whether it is more useful to continue to shape the silicon technology in order to closely model neurobiology, or to allow the properties of the silicon technology to determine the evolution of the silicon cells (Etienne-Cummings et al., 1998; Sarpeshkar, 1997). This decision may be influenced by whether the structural richness and diversity of cells is seen as an explanation for the complexity of cognitive processing or as compensation for other deficits, such as the relatively slow speeds of transmission.

Acknowledgments
This work is funded by the Gatsby Foundation.

References
Etienne-Cummings, R., Van der Spiegel, J. & Mueller, P. (1998). Neuromorphic and digital hybrid systems. In L.S. Smith & A. Hamilton (Eds.), Neuromorphic systems: engineering silicon from neurobiology. Singapore: World Scientific.

Fromherz, P. & Stett, A. (1995). Silicon-neuron junction: capacitive stimulation of an individual neuron on a silicon chip. Physical Review Letters, 75 (8), 1670--1673.

Harnad, S. (1994). Levels of functional equivalence in reverse bioengineering: the Darwinian Turing test for artificial life. Artificial Life, 1 (3).

Marr, D. (1982). Vision: a computational investigation into the human representation and processing of visual information. New York: W.H. Freeman.

Mead, C. (1989). Analog VLSI and neural systems. Reading, MA: Addison-Wesley.

Sarpeshkar, R. (1997). Efficient precise computation with noisy components: extrapolating from an electronic cochlea to the brain. Doctoral dissertation, Computation and Neural Systems Program, California Institute of Technology, Pasadena.

1) Department of Computing Science and Mathematics, University of Stirling
Stirling, FK9 4LA, Scotland



Virtual Reality - Real Virtuality
On the Interaction between Simulation and Reality
Edeltraud Hanappi-Egger1) &  Hardy Hanappi2)

A central feature of social entities is that they are model-builders. This model-building is intentional in the sense that it guides actions in the real world. Since models are intentional, they have to reflect processes in reality, depicting what the entity considers to be essential. In other words, a virtual reality is created. When it comes to computer simulation of this model-building behaviour, the mental models, i.e. virtual realities, have to be translated into a simulation language. In doing so, models of models emerge. Evidently the latter process might involve another social entity with its own way of model-building.
Since this simulated world is a virtual world that nevertheless is induced by real processes and indeed is embedded in them, we could speak of real virtuality.

Social entities E have mental models that might differ from each other. The modeller M, in our case another social entity with his/her own way of model-building, tries to capture the mental representation of a certain individual. Since each model-builder pays attention only to selected facts/phenomena, the various models will partially overlap and partially differ from each other. This is also true for the implemented model - the simulation.

1) Edeltraud Hanappi-Egger
University of Technology, Argentinierstraße 8
Vienna, A-1040, Austria
email: eegger@pop.tuwien.ac.at

2) Hardy Hanappi
University of Technology, Argentinierstraße 8
Vienna, A-1040, Austria
hanappi@pop.tuwien.ac.at


Coding and Subjectivity in Cortical Information Processing
Ken Mogi1)  & Yoshi Tamori2)

In this paper, we discuss two fundamental aspects of cortical information processing. The first aspect concerns the principles of neural coding. We argue that, unlike in today's digital computers, the elementary coding units in the brain should be clusters of bits connected via synaptic interaction (interaction-connected bits) rather than sets of bits which may not be interaction-connected. Specifically, it is not the mapping between patterns of neural firings and some concepts or objects, but the internal relation between the neural activities, that characterises the neural code. In other words, the code should be embedded in the dynamics of the system, rather than in an abstract "lookup" table. In order to establish this point, we review recent research directions in brain science, especially as regards cortical information representation, with an emphasis on the non-local character of the neural code. The second aspect concerns more systematic features of cortical information processing. Specifically, we argue that the neural mechanism underlying "subjectivity" is an essential ingredient of brain-like information processing. In order to establish the second point, we review a psychophysical phenomenon called "binocular rivalry", where non-correlated inputs from the two eyes compete for emergence in our visual awareness. We present some striking instances of this phenomenon, and argue that the brain is able to streamline massively parallel representations of visual information in the context of the consistency and information-content of the visual scene. Finally, we argue that these two aspects of cortical information processing are connected through the non-local character of cortical information processing. We suggest that in order to elucidate these aspects of cortical information processing, we need to take seriously the phenomenological aspects of our perception, such as qualia and intentionality.

1) Sony Computer Science Laboratory
Takanawa Muse Bldg., 3-14-13 Higashigotanda, Shinagawa-ku, Tokyo 141-0022, Japan
Tel +81-3-5448-4380 Fax +81-3-5448-4273
kenmogi@csl.sony.co.jp

2) HISL, KIT, 3-1 Yatsukaho, Matto-shi, Ishikawa 924, Japan
Tel +81-76-274-8255 Fax +81-76-274-8251
yo@mattolab.kanazawa-it.ac.jp