
Observing objects and programming objects

Esposito E. (1996) Observing objects and programming objects. Systems Research 13(3): 251–260. Available at http://cepa.info/3973
Table of Contents
1. Observing operations
2. Programming the lack of programs
3. Polycontexturality in computers
4. Second-order observation in object-oriented programming
References
The paper uses the tools of second-order cybernetics (the theory of the observation of observations) in order to examine the implicit ontology of first-order cybernetics, i.e. of informatics. The starting point is the distinction of operations and observations, which is used to show that computers are machines that operate without the capability to observe, but that have the task of processing observations. This requires a highly complex structure of distinctions and observations, based upon the possibility of programming the lack of programs. The progress of informatic programming and the extension to telematics today impose a very refined (although often unconscious) articulation of observation levels: this is shown using the example of object-oriented programming (OOP).
Key words: constructivism; observation theory; cybernetics; computer languages
The theoretical orientation going back to Heinz von Foerster is generally called the theory of second-order observation (observation of observations), or second-order cybernetics. But why do we speak of cybernetics? The word ‘cybernetics’ refers to the discipline founded by Norbert Wiener, which is oriented towards the construction of machines able to control their own behaviour recursively (feedback mechanisms) – the discipline that gave rise to the development of informatics (Wiener, 1948). But what does observation have to do with computers? And above all: in what sense do we speak of second-order observation (or second-order cybernetics)?
In this paper I will try to give an answer to these questions, starting from Heinz von Foerster’s writings and reflections. I have the impression that, even apart from the specific history of the theory of second-order observation, there is a fundamental connection between this theory and cybernetics, and that the ontology of informatics refers (explicitly, or more often implicitly) to the orientation of the observation of observations – and that this orientation is in fact presupposed by the most elaborate and efficient approaches in the field of informatics (although this connection is not always and not necessarily a conscious one).
We are dealing certainly with a constructivist approach, but with a particularly elaborate and self-conscious constructivism. In order to account for the complexity of the situation it is not enough to say that the objects are constructed by the observers, but it is necessary to go deep into the complexity of the relationships between objects, observers, observations of objects and observations of observations. This complexity is the basis for the approach that Niklas Luhmann calls operational constructivism, distinguishing it from the too naïve ‘radical constructivism’ (Luhmann, 1990, Teoria sociologica, 1993): a constructivism that does not start from the simple distinction between objects and observers, but rather from the distinction between operations and observations – a decision implying a series of consequences that will be discussed in the remainder of this paper.
In my opinion, the point is that the working of machines that have the task of processing meanings[Note 1] requires the ability to distinguish the elaboration process as such from the attribution of a meaning – the operations of the machine from the observations of those using and interpreting the results. But in order to make this distinction the observer must be able to observe the observations as distinct from the operations: he must then be able to accomplish a second-order observation, observing first of all himself as an observer – hence all the paradoxes von Foerster’s work made us familiar with, and also a series of features of informatic programming.
The perspective I start from is that of a sociologist and of an observation theorist. The treatment of the issues of computer science will then not be professional or technical, but focused on the communicational structures rather than on information technology. What I am interested in is primarily to find out the structure of distinctions underlying the developments in this field and to refer it to certain constellations of observations. In order to do this, I will first take into account the role of the distinction of operations and observations in von Foerster’s theory (section 1), and will then move to the observational structure of computer operations (section 2). In section 3 the discourse opens onto the issues of communication and of the plurality of observers, under the guidance of Gotthard Günther’s concept of polycontexturality. Section 4 will finally examine a specific trend of informatic programming, the orientation towards objects (OOP), in order to compare the presuppositions upon which it is based with those of the theory of second-order observation.
The aim of the confrontation cannot obviously be to offer suggestions to programmers or analysts inside informatics, but rather to point out some features of what could be called the implicit ontology of computer science. The suggestion could be – if the analysis looks plausible and one finds a convergence with second-order cybernetics – to make more intensive use of this relationship, looking at the work of Heinz von Foerster and of his school for further ideas and further clarification. Refinement of programming technique does not always actually correspond to an equally refined theoretical consciousness: the operativity of the machine seems to act sometimes as a substitute for theoretical reflection in order to select acceptable proposals and solutions. This does not mean, however, that reference to a theory like von Foerster’s could not help to develop better ideas and to do it sooner, or even to spare the time and effort dedicated to unsuccessful attempts. For those dealing with constructivism and observation theory, on the other hand, informatics offers a particularly relevant example of the fundamental role, the implications and the practical scope of the distinction between operations and observations.
1. Observing operations
As Heinz von Foerster knows very well, second-order cybernetics is a strange discipline arising from the combination of two issues that cannot be combined: that is why it is so creative and why it must always be based on paradoxes. But what are these issues?
On the one hand there is, of course, the whole problem of recursivity: the discovery that in order to have stable forms and not arbitrary behaviours a project is not necessarily required – even less a ‘rational’ project. The order comes about through the hooking of distinctions to former distinctions – through a computation of computations in which the reference to an independent reality disappears (von Foerster, 1976). Order does not come from the adjustment to regularities already present in the world (i.e. from order) but neither does it come from disorder (i.e. it is not merely a statistical regularity): it is much more interesting to remark – with an old formula of Heinz von Foerster – that order comes out of noise (von Foerster, 1960). Order arises first of all from the possibility of creating an irritation, a nuisance, i.e. from the fact that there is something to be irritated. It is necessary to have a system going on with its operations, which can at any time receive or reject an external stimulus. That is why recursivity is crucial: what is needed is actually only a condition of closure making it possible to hook a distinction to former distinctions. Everything else is a consequence. One can then have non-arbitrary behaviours that are nevertheless entirely unpredictable – completely determinate forms that cannot be calculated a priori. This is the side going back to autopoiesis[Note 2] and ultimately to systems theory.
But there is another aspect superimposed upon this, which creates all the most serious problems. Second-order cybernetics is second-order because it is concerned not only with operations but also, and above all, with observations – indeed with observations of observations. And even if observations are themselves (inevitably) operations, this does not erase the distinction. What is at issue here? Recursivity certainly generates a border between inside and outside (between system and environment), but not for the operations themselves – rather for someone who is able to grasp it. It generates the distinction of inside and outside, but not the distinction of self-reference and hetero-reference. In order to have the latter, an observer is required, who must be able not only to distinguish inside and outside (for this, recursivity is sufficient) but also to orient to this distinction and to use it to guide his operations. An observer is required who knows, for example, time, who is able to modalize, who projects possibilities. But these abilities do not follow from closure, even if they obviously presuppose it – even if observation is actually itself an operation.
Let us take the much discussed and very instructive case of the non-trivial machines proposed by von Foerster (see, for example, von Foerster, 1985). These machines are entirely determinate but nevertheless unpredictable, because their behaviour depends at any time on their internal state – an internal state that changes (again in an entirely determinate way) as the operations proceed. Therefore non-trivial machines do not always give the same output when they receive a certain input – unlike ‘trivial’ machines such as cars or mixers (if not broken): any time one turns the key, the car starts. Non-trivial machines, on the contrary, are surprising and forced to learn steadily from their behaviour, which is always different, as are the contingent circumstances: the same input can then lead to different answers at different moments – just because the internal state of the system (or of the ‘machine’) has changed. If, however, one could know the rule connecting the internal state with the operations (if the input is T while the machine is in state I, it turns to state II and the output will be 4; if the input is S, it remains in state I and the output will be 3, etc.), then the non-trivial machine would be re-trivialized, even if in a more complex way: relating input and internal state, one could predict the behaviour of the system. And the procedure can be repeated at each further level of de-trivialization.
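A minimal sketch in Python (mine, not von Foerster’s own formalism) may make the distinction concrete; the T/S rules for state I are the illustrative values mentioned above, while the entries for state II are invented to complete the example.

```python
# Sketch (illustrative only) of the trivial / non-trivial machine distinction.

def trivial_machine(x):
    """Same input, same output: any time one turns the key, the car starts."""
    return {"T": 4, "S": 3}[x]

class NonTrivialMachine:
    """Output depends on the input AND on an internal state that the
    operations themselves keep changing in an entirely determinate way."""

    # (state, input) -> (next state, output): the hidden rule which, once
    # known, would 're-trivialize' the machine.
    TABLE = {
        ("I", "T"): ("II", 4),
        ("I", "S"): ("I", 3),
        ("II", "T"): ("I", 1),
        ("II", "S"): ("II", 2),
    }

    def __init__(self):
        self.state = "I"

    def step(self, x):
        self.state, out = self.TABLE[(self.state, x)]
        return out

m = NonTrivialMachine()
print([m.step(x) for x in "TTSTS"])  # the same input no longer always yields the same output
```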
Is there no ‘essential unpredictability’ then?[Note 3] Are there no machines whose behaviour is definitively unpredictable? How can we account for the case of individuals?
Let us suppose that there is a machine which ‘decides’ its state starting from the input and from the moment, i.e. a machine that is able to orient to the difference of inside and outside and to use this to lead its behaviour. Such a machine can never be re-trivialized, because the rule ‘determining’ its behaviour is dependent on the moment and on its very behaviour. It is then a machine realizing a ‘double closure’ (von Foerster, 1993), a closure at the level of observation that is entirely independent of the ‘first’ recursive closure of operations. At the level of operations one cannot actually know who settled the correspondence of input and internal state, nor can one decide to change it. One has to resort to the observer, i.e. to something independent from the simple performance of the operations.
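One way to gesture at this ‘double closure’ in code, purely as an illustration under my own assumptions and not as von Foerster’s construction, is a machine whose transition rule is itself rewritten by every operation, so that a rule tabulated from outside at one moment no longer describes its later behaviour.

```python
class DoubleClosureMachine:
    """Illustrative sketch only: the rule relating internal state and output
    is itself an outcome of the machine's own behaviour, so tabulating it
    at one moment does not re-trivialize the machine at the next."""

    def __init__(self):
        self.state = 0
        self.rule = {0: 1, 1: 2, 2: 0}   # initial rule: state -> base output

    def step(self, x):
        out = (self.rule[self.state] + x) % 3
        # the operation feeds back on the rule itself ('double closure')
        self.rule[self.state] = (self.rule[self.state] + out) % 3
        self.state = out
        return out

m = DoubleClosureMachine()
print([m.step(1) for _ in range(6)])  # a constant input keeps producing new patterns
```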
Second-order cybernetics knows very well that operations and observations are separate questions, even if they actually almost always overlap: when one is concerned with ‘cognition’ in general, almost all interesting operations are in fact operations of observation. To remind us of the difference is the task of paradoxes, which arise when one confuses the observation levels. To generate a paradox is hence almost always possible – and almost always instructive. It is like a signal presenting in a concentrated form the impossible combination which is at the basis of second-order cybernetics and compels it to be creative. But there is a sphere where the two levels of operations and observations are separate and, not by chance, it is exactly the one from which this whole field of study started: the sphere of ‘first-order’ cybernetics, i.e. the study of computers. That is why it is so difficult to decide if computers are intelligent. I have the impression that von Foerster and Weston’s considerations about artificial intelligence can be read in this light: they highlight clearly what advantages one gains when one comes back to the first order from the second, equipped with the ability to distinguish operations and observations – the distinction of inside/outside and the distinction of self-reference/hetero-reference.
2. Programming the lack of programs
As we said above, what characterizes computers is the fact that they are machines realizing the first but not the second closure: operating but not observing. If this is true, then we must expect that their whole behaviour can be expressed in the form of a paradox – a paradox coming from the fact that one tends to attribute also to their operations the ability to observe, while their entire usefulness is due exactly to the fact that they do not. It seems to me that von Foerster and Weston’s considerations lend themselves to this kind of reading and implicitly prepare it.
With reference to Gordon Pask (1970), the problem of artificial intelligence is defined starting from two questions (and from their combination): the determination of purposes and the orientation to language. Making explicit the underlying paradoxes, I would like to express their conclusion as follows:
(1) Computers do not have their own purposes: they accept purposes from outside (von Foerster and Weston, 1973, p. 356). One says that computers are ‘general-purpose machines’: machines that – unlike cars or cash registers – do not fulfil a specific task. Computers are the only machines that are not able to do anything, and just for this reason they can be used to do the most diverse things. They are machines that can be used for every purpose, just because they do not have purposes of their own. In a strict sense it is not correct to say that they have a ‘general purpose’: computers do not have any purpose. Or rather: the purpose of computers is to have no purpose.
So stated, one could have the impression of dealing with useless machines – and this is obviously not the case. This indeterminacy of computers is made productive by their second feature.
(2) Computers are language-oriented machines (von Foerster and Weston, 1973, p. 359), i.e. machines that can be programmed and can receive at any time the purposes that the situation (or the observer) requires. One could say that they are non-trivial machines because they offer the observer the possibility of settling the relationship of internal states and outputs, and then behave accordingly. They are unpredictable in their behaviour (they are in fact non-trivial) but predictable in their purposes, which must be settled from outside.
Were it only a matter of this, however, the question of machine intelligence would be unlikely to arise. Computers look autonomous and unpredictable; they seem to follow their own path and their own logic, or even to have their own intentions. This happens as a consequence of more elaborate forms of programming, corresponding to the so-called higher-level languages. In von Foerster’s words: it is a matter of the ability to be context-oriented that seems to be observable in expert systems, which apparently ‘remember’ the answers of the users and utilize them to draw deductions in order to orient their behaviour (von Foerster and Weston, 1973, p. 361). The hierarchy of programming languages (machine language, compilers, operating system, higher-level languages) serves just to provide the computer with an internal (linguistic) context mediating its (external) orientation to language.
But let us think of what actually happens. Is a computer provided with an operating system and with the ability to process higher-level languages programmed or not programmed? Does it have a purpose or not? Could one not say that the more the programming is refined, the more the machine is prepared to receive the purposes settled by someone else? That it becomes ever easier to orient its behaviour? The so-called 4GLs (fourth-generation languages) serve just to offer the user more and more structured tools for intervening in the programming of the system: i.e. programs are generated that serve to generate programs. Then, however, it is not simply a matter of an indeterminate machine, but rather of a system arranged in a very refined way precisely so as to be shapable by external intentions.
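The idea of programs that generate programs can be sketched in a few lines of Python (a hypothetical toy of mine, not an actual 4GL): a declarative description supplied by the user is turned into the source of a new program, which is then executed.

```python
# Toy illustration (hypothetical) of 'programs that generate programs':
# a declarative field list is turned into the source code of a small
# report program, which is then compiled and run.

def generate_report_program(fields):
    """Emit the source of a report function from a declarative description."""
    body = [f'    print("{f}:", record.get("{f}", ""))' for f in fields]
    return "def report(record):\n" + "\n".join(body) + "\n"

source = generate_report_program(["name", "amount"])   # the user's 'purpose'
namespace = {}
exec(source, namespace)                                # the generated program comes to exist
namespace["report"]({"name": "Ada", "amount": 42})     # ...and is itself executed
```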
With a more openly paradoxical formulation, one could say that what is programmed is the lack of programs. The computer is used in this case to modify the way it can be used, and this happens self-referentially too: what the system can be made to do depends on what the system can do.
In both cases the paradox comes from the combination of openness with recursivity – or rather from the fact that recursivity at the level of operations (the machine is not trivial) is not combined with closure at the level of observations (purposes and orientation come from outside). The computer creates to a certain extent its own internal context, and its operations cannot in fact be predicted in detail by the one who built the machine, nor by the one who programmed it, and even less by the user. They are always to a certain extent surprising, hence informative. But they are informative for others: computers lack their own distinction of self-reference and hetero-reference – exactly because, not being able to observe, they cannot refer to anything.
That is why computers remain – as somebody said – stupid machines. Luckily so: a really intelligent machine would be of no use. The computer is stupid because it does only what it is programmed to do – but this is actually done, and in such a reliable way that we are able to notice when the machine is broken. One can in fact still establish that the computer does not work, without having to take into account its moods or its preferences. But this does not mean that the computer does only what we tell it to do, and even less that we would be able to do the same: thanks to the hierarchy of programs and to the recursivity of operations, the machine is autonomous enough to make choices and draw deductions. The machine is able to make selections and luckily does not need to be intelligent in order to do so. Resorting once more to paradoxes, we could say that the intelligence of computers relies on their stupidity. The experience with intelligent machines (individuals) teaches how difficult it is to ‘use’ them to achieve one’s purposes.
3. Polycontexturality in computers
To discover paradoxes is interesting only if one does not stop there but shows how they are used in order to give a creative impetus to the operations of the system. They correspond ultimately to the ability to use chance or to orient to the outside world. We must then show how the paradoxes of informatics are resolved. On what level do they display their creativity? What are they good for?
Let us think of what happens when someone uses a computer. One makes the machine work, setting in motion processes one does not control – while retaining, however, control of the operation in a global sense. In other words: the efficiency and interest of the computer lie exactly in the fact that the one using it knows what he wants to do but does not need to know in detail what the machine is doing. As we saw, in fact, the programs are articulated at several mutually independent levels. This means that the logic orienting the programming at each level does not have to be combined with that of every other level: i.e. each level follows its own criteria, which are relatively independent of the criteria of the other levels. The one setting up an operating system, for example, does not have to foresee in detail all the ways it will be used by programs of a higher level. This means that global functioning is removed from any central control: there is no single logic leading it – but it is nevertheless far from fortuitous. The use of computers remains in the sphere of technology, at least because it is still possible to use the distinction it works/it doesn’t work:[Note 5] it is still possible (as we saw) to establish that the machine is broken or that the program has bugs – i.e. to control the lack of control.
Computers are peculiar machines exactly because their functioning is based on the distribution of decisions over several distinct centres: the programmers writing the programs and the programmers using them (possibly also to write other programs). Using Gotthard Günther’s (1979) expression, we could say that computers realize an implicit ‘polycontexturality’: by this term one denotes a situation where one recognizes the existence of several orientations (‘contextures’) that are not necessarily compatible or comparable with others, because each of them is guided by a different distinction. Two-valued logic cannot deal with this kind of situation, i.e. ultimately with the fact that what does not coincide with the positive value of a contexture (for instance true) need not necessarily receive the negative value (false): it can simply refer to a value of a different distinction (for instance beautiful, or right, or morally reproachable). Each contexture is oriented to its own binarity (to its own distinction) without the possibility, or even the need, of a supra-contexture unifying them: the very complexity of the situation would get lost. This plurality of orientations can be observed also in informatic programming: each level follows its own logic, and the efficiency of computers lies exactly in the fact that no global coordination is required. In order to make the computer work the user does not have to know machine language, or C or higher-level languages.
Up to now we have referred to the utilization of a computer by a single user. Having introduced polycontexturality, however, with only one more step we arrive at a further question, which seems to be the natural – and more interesting – outcome of the development of informatics: the extension to telematics and to combined utilization by several users. If informatics is actually based on the possibility of connecting several independent perspectives, who says that we must refer to a single final context? Could we not think instead of a context combining also the perspectives of several users in order to get an even more improbable result? When the need for a unifying supra-contexture is abandoned, nothing forbids us from further broadening the field, including also the contextures of several single users.[Note 6]
From artificial intelligence one turns next to the question of ‘distributed intelligence’ – of course through stupid machines. The function and sense of informatics do not actually lie in the reproduction of thought by artefacts, which is probably impossible and would in any case be entirely useless. They lie rather in the distribution of intelligence over several independent centres, which are more and more uncoupled from one another. We cannot now be surprised to find again a paradox in the very expression ‘distributed intelligence’: one cannot distribute something that, like intelligence, relies on closure. If and how something is intelligent can be decided only with reference to a context, inside which a certain processing is meaningful or not. And how can a context be distributed? But this is apparently what happens with computers: this seems indeed to be the ultimate sense of the whole (by now emptied) question of artificial intelligence. And this too had been anticipated by von Foerster already in 1973: ‘since the recursive loop... may include an arbitrary number of brains and machines to form a single intelligent machine, the question where to locate this intelligence in the system becomes irrelevant’ (von Foerster and Weston, 1973, p. 358).
4. Second-order observation in object-oriented programming
I have the impression that the broadening to telematics highlights exactly the connection of informatic programming with the approach of second-order cybernetics. This connection was signalled recently also by people working in this field: they feel more and more urgently the need for tools at the same level of abstraction as the observation of observations. Nicholas Negroponte (1994) refers explicitly to Gordon Pask when he says that programming in the field of telematics needs second- and third-order models: one needs a model by the computer of our model of it, and our model of its model of our model.
In any case, a turn to a constructivist approach is required, abandoning the reference to a univocal reality and facing the most painful renunciation that this abandonment implies: the univocality of the world had in fact the very useful function of justifying the coordination of all descriptions. From the coherence of the world came automatically the coherence of its various descriptions; they could look divergent because they considered one and the same object from different perspectives, but once combined they had to generate a univocal picture – if this did not happen, there had to be a mistake. This univocality is definitively lost when one turns to polycontexturality, where each context corresponds to its own reference ‘world’; not by chance, the first problem addressed by Gotthard Günther is just that of ‘Vermittlung’, i.e. of the connection between contextures: there is a need for complex operators capable of moving from one contexture to a different one. And also not by chance, telematics seems today to be confronted with the same problem: the very awkward question of conversion.
The underlying problem seems to be, in substance, the ability to become uncoupled from the context, making the informatic tools recyclable and reusable in contexts that are different, unpredictable and not coordinated with each other. Concerning this, we may recall one last time that this question too had been anticipated by von Foerster, when he stated that the aim of informatic programming should be the ability to orient to the context (von Foerster and Weston, 1973, p. 361); because – I would say today – only the one who is able to orient to the context can put himself at a distance and become independent of it.
But what is the situation today? Are there actually tendencies in the field of informatics that recognize these needs and develop this approach? Or is it still only a matter of theoretical speculation? We can remark first that there is a tendency within programming that addresses explicitly the problem of independence from the context and is usually regarded as a real revolution: I refer to so-called ‘object-oriented’ programming (OOP) (see, for example, Coad and Yourdon, 1991; Giustozzi and Polini, 1991; Meyer, 1988; Wiener and Pinson, 1989). The issue is reformulated there as ‘reusability’: whether and how it is possible to use portions of software written at a certain moment for a certain purpose at a different moment and for a different purpose. The primary reason is one of economy: one would like to avoid having to start from the beginning each time, and to offer instead the possibility of building a program starting from already existing ‘modules’ (precisely the ‘objects’). These modules may have been written by the same programmer at an earlier moment or by another programmer – the question does not change: one has, however, to disregard the original context and to adapt to a different one – one has to put to work a form of polycontexturality.
The problem is clearly the management of the renunciation of control. The programmer does not have to build every time the whole software he needs, but relies rather on the use of modules upon which he cannot directly intervene. The objects are actually characterized by a form of ‘closure’. One has to take them as black boxes: each object is able to do certain things (corresponding to the ‘methods’ available to it), only those and only in the way it is able to do them. The programmer who wants to use them can only send ‘messages’ asking to activate those behaviours, but does not have to concern himself with the approach they use or with the logic they follow. The total software, as a consequence, will not be characterized by a unitary logic that the programmer can survey globally – exactly in accordance with the expectations of the approach of second-order cybernetics. And the problem will obviously be the coordination (the ‘Vermittlung’).
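The black-box character of objects can be illustrated with a short Python sketch (class and function names are mine, purely hypothetical): the client code only sends a ‘message’ asking for a behaviour, and reuses the object in a context its author did not foresee, without any access to its internal logic.

```python
# Illustrative sketch (hypothetical names) of an object used as a black box.

class SpellChecker:
    def __init__(self, words):
        self._words = set(words)          # internal state, not visible from outside

    def check(self, word):                # the only 'method' offered to clients
        return word in self._words

def proofread(text, checker):
    """Reuses the object in a context its author did not foresee: it relies
    only on the 'check' message, not on the object's internal logic."""
    return [w for w in text.split() if not checker.check(w)]

print(proofread("observing objets", SpellChecker(["observing", "objects"])))
```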
But if this is true, and if it is true that observation theory can be seen as the reflection and systematization of this kind of problem, one would have to expect that OOP’s position presents (consciously or not) certain parallels with the assumptions of observation theory. I will try now to check this hypothesis with the tools available to a sociologist, examining in a non-technical way some of the features of OOP.
I will restrict myself to a few hints. If one compares the structure of the object-oriented approach with the positions of constructivism, one gets the impression that context independence necessarily requires an approach that not only renounces taking a univocal and independent world as its reference, but must also resort to an articulation of observation levels (here one speaks of description levels) recalling very closely the positions of second-order cybernetics. This comes up particularly clearly in what is unanimously considered the ‘purest’ and most rigorous OOP language: Smalltalk (Xerox Palo Alto Research Center, 1986).
But first of all: what are these ‘objects’? Where are they located in the world (or outside it)? If – as I would like to do here – one refers to constructivism, they can obviously not be mere data, mere reflections of autonomously existing entities. The position of constructivism on this point (coinciding here with that of second-order cybernetics) can be summed up in three assumptions:
(1) All objects are constructions of an observer and should be reduced to the observer’s operations.
(2) Observation is itself an object that can be observed: as a simple object or as observation.
(3) Observation of observations (itself an object and itself observation) is inevitably self-referential: it falls (as observation) within the field of the objects it observes.
Only a theory respecting these three assumptions can consider the link with the actual context (hence also the context independence) – this then measures also the ability of OOP to face these questions.
But is it able to do so? At least in its purest form (that of Smalltalk) it actually seems that the presuppositions of OOP correspond exactly to the three assumptions above, which take the following form inside it:
Smalltalk is a ‘totally integrated programming environment’: this means that all entities inside it must be objects and that nothing exists that is not an object. But the objects are absolutely contingent: they are the result of a construction by someone and can themselves be modified (with all the ensuing risks, and particularly the risk of not accounting for all the relations an object entertains with other objects and for the reciprocal dependencies). The very programming environment can moreover be modified by the user.
There are two kinds of objects: ‘simple objects’ and ‘classes’ (class objects). Classes are descriptions (observations) of objects, and all objects must be allocated to the class describing them. Classes, then, are themselves objects described by further classes (called ‘metaclasses’).[Note 7] Metaclasses (themselves objects and themselves classes) are described as second-order descriptions by the special class ‘metaclass’, which has the peculiarity of also describing itself, with a specific loop – a loop which reminds us of von Foerster’s very famous one, which reduces cognition to computation of computations while the specific contents get lost (e.g. von Foerster, 1973): the class metaclass too, becoming self-referential, is the only class that disregards the contents entirely.
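An analogy of mine (not drawn from the paper, and using Python rather than Smalltalk): Python’s type system exhibits the same structure of descriptions, and the short session below shows where the regress of descriptions stops.

```python
# Analogy (mine, not the paper's): in Python, too, everything is an object,
# every object is an instance of a class, classes are themselves objects
# described by a metaclass, and the regress of descriptions ends because
# the metaclass describes itself.

x = 42
print(type(x))                   # <class 'int'>:  the class describing the object x
print(type(type(x)))             # <class 'type'>: the 'metaclass' describing the class int
print(type(type) is type)        # True: the self-referential loop that closes the hierarchy
print(isinstance(type, object))  # True: even the metaclass is itself an object
```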
The ‘ontology’ of OOP is now complete: even if each description can itself be described, one does not fall into an infinite regress from description to description. The very self-referentiality implicit in second-order observation puts an end to the proliferation of logical entities: each description of a higher order is ultimately a description of descriptions and falls as such within the single class ‘metaclass’. No more is needed – just as each observation of a higher order is itself ultimately an observation of observations (besides being a first-order observation and an object). Complexity increases by combining these three characteristics, with no need to add further categories of objects.[Note 8]
I do not think that the authors of Smalltalk or of the other object-oriented languages[Note 9] referred to the issues of second-order observation, nor that they consciously developed the ‘ontological structure’ I have tried to describe. That is why I spoke above of an implicit ontology (although an extremely consistent and refined one): this is due, in my opinion, to the operational reference from which informatics must necessarily start. The aim of the programmer is to guide the operations of the machine in the most effective way, i.e. in such a way as to lead to a satisfactory elaboration of the observations of the users. The observation theorist, for his part, deals with observations, but has to face the fact that they are themselves operations. I have tried to show that the source of the complexity of the theme is exactly the intertwining of operations and observations. Cybernetics is (implicitly) oriented to the distinction operation/observation, starting from the side of operations. Second-order cybernetics, on the contrary, starts from the other side of the distinction, i.e. from observations. The connections spring from the fact that it is in both cases one and the same distinction.
References
Coad P. & Yourdon E. (1991) Object-oriented Analysis, Prentice-Hall, Englewood Cliffs NJ.
Foerster H. von (1960) On self-organizing systems and their environments. In: Yovits M. C. & Cameron S. (eds.) Self-organizing Systems, Pergamon Press, London: 31–50. http://cepa.info/1593
Foerster H. von (1972) Notes on an epistemology for living things, BCL Report No. 9.3, Biological Computer Laboratory, Department of Electrical Engineering, University of Illinois, Urbana. http://cepa.info/1655
Foerster H. von (1973) On constructing a reality. In: Preiser W. F. E. (ed.) Environmental Design Research (Vol. 2) Dowden, Hutchinson & Ross, Stroudsburg: 35–46. http://cepa.info/1278
Foerster H. von (1976) Objects: Tokens for (eigen-) behaviors, Cybernetics Forum 8: 91–96. http://cepa.info/1270
Foerster H. von (1984) Observing Systems, Intersystems, Seaside CA.
Foerster H. von (1985) Cibernetica ed epistemologia: Storia e prospettive. In: Bocchi G. & Ceruti M. (eds.) La sfida della complessità, Feltrinelli, Milan: 112–140.
Foerster H. von (1993) Für Niklas Luhmann: Wie rekursiv ist Kommunikation?, Teoria sociologica I(2): 61–88.
Foerster H. von & Weston P. E. (1973) Artificial intelligence and machines that understand, Annual Review of Physical Chemistry 24: 353–378.
Giustozzi C. & Polini S. (1991) OOP: Object oriented Programming. La programmazione degli anni ‘90, Technimedia, Roma.
Günther G. (1968) Strukturelle Minimalbedingungen einer Theorie des objektiven Geistes als Einheit der Geschichte, Actes du IIIème Congrès International pour l’Étude de la Philosophie de Hegel, Association des Publications de la Faculté des Lettres et Sciences Humaines de Lille: 159–205. Also in Günther (1980): 136–182.
Günther G. (1971) Life as poly-contexturality. In: Fahrenbach H. (ed.) Wirklichkeit und Reflexion. Festschrift für Walter Schulz, Pfullingen (1973): 187–210. Also in Günther (1979): 283–306.
Günther G. (1976, 1979, 1980) Beiträge zur Grundlegung einer operationsfähigen Dialektik (3 vols) Meiner, Hamburg.
Le Moigne J. L. (1985) Progettazione della complessità e complessità della progettazione. In: Bocchi G. & Ceruti M. (eds.) La sfida della complessità, Feltrinelli, Milan: 84–102.
Luhmann N. (1984) Soziale Systeme, Suhrkamp, Frankfurt.
Luhmann N. (1990) Die Wissenschaft der Gesellschaft, Suhrkamp, Frankfurt.
Maturana H. & Varela F. J. (1980) Autopoiesis and Cognition, Reidel, Dordrecht.
Meyer B. (1988) Object-oriented Software Construction, Prentice-Hall, Englewood Cliffs NJ.
Negroponte N. (1994) Less is more: Interface agents as digital butlers, Wired 11(6): 142.
Pask G. (1970) In: Rose J. (ed.) Progress in Cybernetics, Gordon & Breach, London: 15–43.
Teoria sociologica (1993) I(2), Angeli, Milan.
Wiener N. (1948) Cybernetics, or Control and Communication in the Animal and the Machine, MIT Press, Cambridge MA.
Wiener R. S. & Pinson L. J. (1989) An Introduction to Object-oriented Programming and C++, Addison-Wesley, Englewood Cliffs NJ.
Winograd T. & Flores F. (1986) Understanding Computers and Cognition, Addison-Wesley, Reading MA.
Xerox Palo Alto Research Center (1986) Smalltalk/V: Tutorial and Programming Handbook, Digitalk, Los Angeles.
Endnotes
1
Or information, which – as shown in von Foerster (1972) – is just as subjective as meanings are.
2
The concept was introduced in Maturana and Varela (1980) in order to account for the ‘closure’ of living systems, which reproduce their operations only on the basis of other operations of the same system. The concept has since been broadened to include other kinds of systems as well: see Luhmann (1984).
3
See Paul Valéry’s definition of complexity, cited in Le Moigne (1985, p. 21).
4
Winograd and Flores (1986, p. 90) speak of ‘opacity of representation’.
5
Luhmann defines technology as ‘a kind of observation that considers something from the point of view that it can get broken. The leading distinction here is sound/broken (heil/kaputt) or, if one refers less to repairing and more to learning, faulty/faultless (fehlerfrei/fehlerhaft)’: see Luhmann (1990, p. 263).
6
The ‘general’ in the expression ‘general-purpose machine’ can then be taken to mean either ‘every purpose’ or ‘everybody’s purpose’.
7
Each object, it is said, is an ‘instance’ of a class, and the classes are themselves instances of the class ‘object’ (as objects) and of the corresponding metaclass (as descriptions).
8
A position agreeing ultimately also with Gotthard Günther’s conclusions, even if he contemplates the passage to a higher-order ontology when six distinct logical values are introduced (see Günther, 1968).
9
Where, however, the relationship with operational constructivism is much less direct.