Artificial Life: A Technoscience Leaving Modernity?
An Anthropology of Subjects and Objects

Lars Christian Risan

With a foreword by Inman Harvey

TMV-senteret 1997

AnthroBase.com

To download, print, or bookmark, click: http://www.anthrobase.com/txt/Risan_L_05.htm.
To cite, quote this address and the download date. Not for commercial use.
© 1997 Lars Christian Risan & TMV-senteret. Distributed with permission by www.AnthroBase.com.
Do not remove this notice from digital or paper copies of this text. 

 

Contents

Acknowledgements
Note on style
Preface to report edition
Foreword, by Inman Harvey

 

Introduction

A brief introduction to the field of Artificial Life research
School of Cognitive and Computing Sciences (COGS)
Main theme
Theoretical perspective
Methodological considerations
The reflexivity of studying science scientifically

 

1. Working Machines, Objectivity and Experiments

Technoscience
Social Studies of Science
The simulation
The machine and the trustworthy witness
The subjects of the social sciences

Summary
 

2. The Technology of Artificial Life at COGS

The logos of Artificial Life

An ALERGIC reaction
The information processing paradigm in cognitive science
The cognitive science of ALife at COGS
Biological plausibility

The techne of Artificial Life

From simulated subjects to simulated worlds
Performativity: making and understanding
Genetic algorithms and the production of worlds to explore

Conclusion: anti-Cartesian, yet Boylean?
 

3. Representations of ALife: a Real or a Postmodern Science?

Some basic understandings of science and engineering

Representations of Artificial Life as a real science

A defence of Artificial Life as a natural science
The power of a unified Nature

Representations of Artificial Life as a postmodern science

Creative engineers
Out of control

Conclusion
 

4. Metaphors and Identities of Artificial Life Research

What is a metaphor?
Metaphors and Identities; Everyday Expressions and Scientific Models
Metaphors as more than "flashes of insight"
Contested literalness
The literalness of evolution
The literalness of "brain" and "neurone"
Purifications and Anthropomorphisms
Heretical Engineers

Summary and Conclusion
 

5. Intuitions and Interfaces

Laboratory life
Programming computers and understanding statistics
Running the GA on the network
The legitimacy of talking about skills and intuitions
Fiddling around with the parameters
The mutual definition of skills and tools
Interfaces into worlds in the making
The experienced difference between The Simulation and the I
Limits to thinking in terms of "inside" and "outside"

Conclusion: The emergence of subjects and objects
 

6. The Objectivity and Enchantment of Artificial Life

Artificial Life as Science: The Objectivity of Artificial Worlds

Everyday nature
Scientific nature
Windows and television
Distance

Artificial Life as Art: The Technology of Enchantment

The Technology of Enchantment and The Enchantment of Technology
The enchantment of "High Tech"
A synthesising example
The enchantment of machines with agency
Concluding remarks on ALife as art
 

7. Conclusion

Irony and Engagement
Monstrous technology or letting go of control?
 

References
Notes
 

List of figures:

Figure 1 Callon and Latour's Modern purification versus Non-Modern mixture
Figure 2 "Good Old Fashioned Artificial Intelligence"
Figure 3 Artificial Intelligence-brain versus Artificial Life world
Figure 4 Evolved robot "brain"
Figure 5 Robots finding a white triangle
Figure 6 Degrees of literalness
Figure 7 A Parallel Distributed Network
Figure 8 Making associates
Figure 9 "Excuse me for anthropomorphising"
Figure 10 A "world" of quantities in a graph

 

List of plates:

Plate 1
Plate 2
Plate 3
Plate 4
Plate 5
Plate 6


Acknowledgements

For letting me be part of their work and lives for 8 months during 1994, for great intellectual inspiration, and for many good times, thanks to the employees and the students of the School of Cognitive and Computing Sciences at the University of Sussex. Special thanks to:

Inman Harvey
Michael Wheeler
Seth Bullock
Philip Jones
Paulo Costa
Horst Hendriks-Jansen
Ronald Lemmen
Fred Keijzer
Ron Chrisley
Phil Husbands
Peter de Bourcier
Adrian Thompson
Pranath Fernando
Eevi Beck
Christian Mullon
Matthew Elton
Jim Stone
Dave Cliff
Guillaume Barreau
Stephen Eglen
Geoffrey Miller
Arantza Etxeberria
Robert Davidge
Margaret Boden
Chris Thornton

Special thanks to Henrik Sinding-Larsen and Claus Emmeche for guiding my first steps into the Artificial Life community.

For supporting, inspiring, and critical comments to the many drafts that have led to this thesis, thanks to: Henrik Sinding-Larsen, Geir Kirkebøen, Stefan Helmreich, Maria Guzmán Gallegos, Hendrik Storstein Spilker, Helge Kragh, Leif Lahn, Eevi Beck, and Kari-Anne Ulfsnes.

For being a source of creative ideas, and for providing accurate and patient corrections to a lot of inaccurate and sometimes quite far-fetched writing, special thanks to my supervisor Finn Sivert Nielsen.

Many thanks to Mary Lee Nielsen for her thorough proof-reading of the thesis.

Financial support for my fieldwork was provided by The Norwegian Research Council NFR (102542/530). The Centre for Technology and Culture (TMV-senteret) has provided me with logistic, economic, and moral support. Many thanks.

Special thanks to my parents, Ragnhild Risan and Ernst Olav Risan, for giving me a Real Life with love.


Note on style

One of the themes of this thesis is how researchers use some sort of "quotation marks" when uttering or writing particular words. Emic terms are therefore set in italics, for example: artificial evolution. When the normal practice of the Artificial Life researchers was to add quotation marks to a particular word (possibly mimicking them in the air when speaking), I have marked this by adding quotation marks and italics to the word, for example: the "brain" of a robot. When words are set in quotation marks without italics, I am the one who makes reservations about the literalness of the word, not they.

Transcripts of taped interviews are marked off from regular quotations of written works by indent and italics.


Preface to report edition

This report is a publication of my Cand. polit. thesis in anthropology at the University of Oslo. Some minor corrections have been made, mostly concerning the language. In line with the recommendations of the examining commission, some persons have become more visible in this revised text. For example, some instances of "an ALifer said" have now been given a more proper subject, even if this person may be anonymised. Hopefully, I have in this way avoided some inaccuracies that were perceived as doubtful generalisations. Many thanks to all those who have read the thesis and made valuable comments. These corrections aside, the overall content of this report is similar to the thesis. Thanks to the Norwegian Research Council NFR and to the Centre for Technology and Culture for supporting the publication of this report.

Lars Risan

Oslo, September 1997


Foreword

I come from a small tribe of Artificial Life researchers, our particular village based at the University of Sussex. One day we had a visitor from a different culture, who asked if he could study us and learn our ways. We like to show respect (at least initially...) to strangers, so we agreed; the unspoken bargain was that if he did not respect our culture and our goals then we would put him in the cooking pot and eat him.

Well, Lars Risan stayed many months, and became a friend and a colleague, and we didn't have to eat him. Many of us have travelled in (literally) distant lands and met people whose culture, day-to-day concerns, and language, are apparently so different from ours as to make communication minimal; but so often some shared event or worry or problem shows that we do indeed have common interests that bridge the gap between our cultures. It became clear very soon that some of Lars' concerns as a cultural anthropologist echoed our own as A-Life researchers - in particular, concerns about the reflexivity and objectivity of our respective research programmes - and a dialogue started which is reflected in parts of this book.

A-Lifers have to get to grips with what it means for anything to be alive - anything from a bacterium to a tree, from a human to (potentially) a robot. One theme that some of us use is to relate Life to Cognition (in a broad sense): living creatures know their world, know what has significance for them, through their interactions with it, and without a world of meaning for X, X cannot be alive. As humans, as scientists, we ourselves inhabit a world of words and of theories, where through interchange of ideas we try to find a common language through which we can make sense of our fields of study: common sense, objective or inter-subjective agreement. The language of objectivity usually implies that we can stand apart from our field of study as impartial, godlike observers, but above all here we must recognise that we ourselves, our modes of understanding, are inextricably linked with our subject matter.

Lars, of course, has a comparable problem in applying the cultural norms of anthropological research to the study of different cultures; above all a culture where we have opinions about his task. As Lars says, we were from his perspective often colleagues as well as informants, and the same holds true from our perspective - his research has relevance to our own concerns. I have gained immensely from his thoughtful analysis of the problems of objectivity and reflexivity that we all face.

As an interested participant, I can recommend his study as being extremely fair and insightful regarding the attitudes and beliefs, the conflicts and debates within our community. Yes, it all rings true, Lars understood and respected our ways; I am glad we didn't have to eat him.

Inman Harvey

Sussex, September 1997


Introduction


We are attending a seminar at COGS - School of Cognitive and Computing Sciences - a school at the University of Sussex, where I conducted the major part of the anthropological fieldwork on which this thesis is based. One Artificial Intelligence (AI) researcher defends a characterisation of human beings that he labels "Man-the-Scientist". He claims that human beings, generally, can be understood as a "scaling down of the scientist". Humans, he argues, are fundamentally rational beings. Even in the play of small children we find elements of logical inductions and deductions. If we want to understand intelligence we have to understand these rational, logical patterns. This can be done by making computer programs that exhibit this rational behaviour and that follow the same logical rules as human beings. It can also be done by studying the human brain, the place where these rational processes occur naturally.

The opposing AI researcher argues that this view of intelligence is really a defence of a Judeo-Christian conception of the "supreme man", the unique man securely situated "above" all other living creatures, given his privileged position by the fact that he is "intelligent". Rather, the opponent argues, human intelligence, including our ability to use language and to think logically, is part of a much larger phenomenon - the ability of all animals, or perhaps even of all life, to behave in an "intelligent" fashion, to act as cognitive creatures. These cognitive abilities, the researcher claims, are dependent on the creature's body and its interaction with its environment. Cognition, he says, is embodied and embedded.

The second researcher works in a new scientific field that is known as Artificial Life, called, for short, ALife. The first researcher upholds a view known as Good Old Fashioned Artificial Intelligence, or GOFAI. He defends a philosophical or scientific position that Artificial Life research at COGS was a reaction against. In doing so, he represented the academic "significant other" to the Artificial Life research at COGS.

The ideas and practices of Artificial Life research, and the interactions between these ideas and practices, are the topics of this thesis. How can the study of life, which ALife researchers see as pregiven by Darwinian evolution, be combined with the study of the artificial, which they see as "man made"? What implications does the combination of "artificial" and "life" have for how they practise their science? We will see that this combination makes Artificial Life a blend of a traditional naturalistic science and what they themselves sometimes call a postmodern science.

A brief introduction to the field of Artificial Life research

The term "Artificial Life" was coined in 1987 by an American scientist, Chris Langton, working at Santa Fe Institute, a scientific centre in New Mexico, USA. The field got off to a good start at a workshop that Langton arranged in Santa Fe, and was confirmed two years later by a second workshop, "Artificial Life II", and by the publication of the proceedings of the first workshop; Artificial Life. The proceedings of an interdisciplinary workshop on the synthesis and simulation of living systems (Langton 1989).

In the introduction to these proceedings, Langton defines Artificial Life as "...the study of man-made systems that exhibit behaviours characteristic of natural living systems." (Langton 1989b:1) These "man-made systems" are usually computers and robots.(1) I have called Artificial Life a "field" and not a "discipline" because of the interdisciplinary nature of the research. People come to Artificial Life from such disciplines as computer science, philosophy, psychology, biology and physics. Unlike biology proper, where researchers tend to get involved in minute details within biological sub-disciplines, ALife researchers are interested in life at its most general: its origin, its fundamental properties, its distinction from "not-life". Thus, Artificial Life has much in common with theoretical biology or with what is traditionally known as natural philosophy, the general inquiries into what life is and how it can be understood.

In 1991 Francisco Varela and Paul Bourgine arranged the first European Conference on Artificial Life (ECAL 91). In their introduction to the proceedings of this conference Varela and Bourgine define Artificial Life research as part of a line of research and thought which "...searches the core of basic cognitive and intelligent abilities in the very capacity for being alive." (Varela and Bourgine 1992:xi) According to this definition, Artificial Life research is, strictly speaking, not a study of the general properties of life, but a study of the general properties of "cognitive and intelligent abilities". Varela and Bourgine search for these properties "in the very capacity for being alive." Being "alive" is thus defined as having "cognitive and intelligent abilities"; life processes are also cognitive processes. This is the main message of one of the papers in the proceedings. The paper is called Life = Cognition (Stewart 1992).

In their introduction Varela and Bourgine emphasise that Artificial Life is part of a longer line of thought. Like many other ALife researchers, they trace Artificial Life research back to the advent of Cybernetics in the late 1940's (1992:xi). Norbert Wiener defined Cybernetics as "control and communication in the animal and the machine." (Wiener, 1948) Practitioners of cybernetics in the late 1940's, like ALife researchers now, were occupied both with making life-like or intelligent computers and robots, and with understanding life and cognitive or mental phenomena. The emphasis that the ALifer in the story above placed on seeing cognition as embodied and embedded, that is, as an aspect of a larger system than the human brain, is something that he shares with many cyberneticians. Gregory Bateson, an anthropologist deeply involved in cybernetics, writes that "the mental characteristic of the system is immanent, not in some part, but in the system as a whole" (Bateson 1972:316). During the 1950's the cybernetics movement fragmented. Some social scientists, with Bateson as a leading figure, started to apply the systemic perspective to social systems. This "social cybernetics" became particularly popular within family therapy (see for example Bateson et al. 1956). Within engineering, cybernetics became a technique for making control systems (such as thermostats or goal-seeking missiles). The discipline that combined the human/biological interest with the technical interest of the early cyberneticians became known as Artificial Intelligence (AI), today often referred to, a bit ironically, as GOFAI. The practitioners of this discipline distanced themselves from cybernetics. They rejected the holism of the systemic perspective and emphasised the formal and logical aspects of human cognition. The advent of Artificial Life research at places such as COGS is thus a reintroduction of early cybernetic notions into contemporary artificial intelligence research.

ALife researchers called themselves ALifers and, as a whole, the ALife community. The term "ALifer" was coined, partly as a joke, at the first Santa Fe workshop in 1987. The term "ALife community" derives from the English designation of scientific communities (there is also an "anthropological community"). I first learned that Artificial Life existed from a book on the topic, written by the Danish biologist and philosopher of science, Claus Emmeche(2) (Emmeche 1991). I wrote to Emmeche to find out where the people studying Artificial Life actually were, and the ball started rolling. I soon learned that the ALife community consisted of people at many universities and research institutions in Europe, North America and Japan. These researchers communicated mainly through a channel I at that time knew nothing about, electronic mail. In addition they met a couple of times a year at international conferences on Artificial Life and related topics. Both in order to learn more about Artificial Life and to find a suitable location for anthropological fieldwork, I set out on two journeys in 1993. In May I went to the European Conference on Artificial Life in Brussels, and then, equipped with some more names, addresses, and invitations, I visited the University of Zürich, the University of California at Los Angeles, the Santa Fe Institute, and the School of Cognitive and Computing Sciences (COGS) at the University of Sussex. I also attended a conference on Artificial Intelligence in France and a one-week summer school on Artificial Life in Spain. At COGS I found a small but active group of ALife researchers, several visiting researchers involved in ALife, and, perhaps most important, a number of Ph.D. students of Artificial Life. Here, a part of the ALife community was also observable in the periods between the international conferences. I was generously invited back, and in January 1994 I started my eight-month fieldwork at COGS.

School of Cognitive and Computing Sciences (COGS)

The interdisciplinarity of Artificial Life research is reflected in the way the University of Sussex is organised. The university, as well as the campus, is divided into interdisciplinary schools, each concerned with a broad topic area. People can belong both to a discipline and to a school, but the actual buildings are schools, not faculties. Thus people from one discipline can belong to different schools. For example, there are psychologists in at least three different schools, one of them COGS. COGS has a particularly interdisciplinary orientation: it houses people from both the sciences and the arts.(3) This combination is reflected in the name of the school: people either study something that has to do with computing, that is, they are computer scientists with a mathematical or natural-scientific background, or they study something that has to do with cognition - "thinking" or "mind". That is, they are interested in what is known as philosophy of mind and are philosophers or psychologists. Most people do both. They study both computing and cognition, and particularly the relation between these two. This endeavour is known as Artificial Intelligence or Cognitive Science. It is characterised by a combination of mechanistic/formalistic methods and perspectives, and by an interest in the philosophy of mind. This double commitment of Artificial Intelligence research is symbolised in the acronym "COGS", which has associations with both "cognition" and the mechanics of cogwheels (a cogwheel has "cogs" around its rim).

During my fieldwork there were about 60 people employed at COGS, including researchers and office personnel, about the same number of D.Phil. students, and a couple of hundred undergraduates. The core of the ALife group was the evolutionary robotics group, made up of three researchers: Dave Cliff, Inman Harvey and Phil Husbands. About four or five other researchers and between 10 and 20 Ph.D. students were more or less involved in Artificial Life research.

The members of this group of ALifers had different academic backgrounds, including physics, electrical engineering, philosophy, psychology, biology, and computer science. Depending on the context of relevance, the same researcher could be studying artificial intelligence, cognitive science, computer science or ALife, and they could call themselves computer scientists, cognitive scientists, ALifers, or philosophers. To complicate the picture further, ALifers at COGS sometimes call their science Simulation of Adaptive Behaviour (for short: SAB). Practitioners of this study are part of the SAB community. This "community" meets every second year, at a conference series named SAB90, SAB92, SAB94, and so forth. Both in content and in the people attending, these conferences have a lot in common with the ALife conferences. We will return to what "simulation of adaptive behaviour" means later. I mention this scientific sub-field here to illustrate the large variety of disciplines and research fields that make up the ALife group at COGS.

People at COGS have a relaxed attitude towards disciplinary boundaries. This allows for an open playfulness with ideas and positions: researchers can hold a position and defend it without always having to defend their own personal and/or disciplinary identity. I think this playfulness and interdisciplinarity is one of the most important reasons why ALife research has become such a big thing at COGS. (The playfulness, of course, takes place within certain premises or taken-for-granted frames that define research at COGS. I will later return to both some of the premises and some of the playfulness of the scientific practice at COGS.)

In this thesis I will refer to the people that do ALife research at COGS as "the ALifers at COGS". This is partly justified because this group meets regularly to discuss ALife affairs, and sometimes, even if not very often, its members call themselves ALifers. They do, however, also call themselves and their endeavour a lot of other things besides "ALifers" and "ALife" - notably, perhaps, "cognitive scientists" and "cognitive science". To avoid too much confusion I will stick to the terms "ALifer" (or "ALife researcher") and "ALife", unless other distinctions are of relevance. This means that the term "ALifer" will occur much more frequently in this thesis than it ever did at COGS, and that "the ALifer" appears to be more important in this thesis than he actually was at COGS. I hope this warning will help avoid that possible misconception.

We may talk broadly about two ALife traditions. The first tradition attempts to explain life at its most general, whereas the second attempts to explain cognition at its most general. If, as most ALifers hold, "life equals cognition", then these two endeavours should logically be identical, but there are historical differences. The first tradition, of which the Santa Fe Institute is an exponent, is to a larger degree shaped by people from biology and physics (physicists have been involved in explaining life for the last 50 years), whereas the second tradition is shaped mostly by students of cognitive science, psychology and artificial intelligence. The first tradition has had a biological phenomenon or a biological system - an ecosystem, a social system or a biological cell, etc. - as its starting point. Chris Langton at the Santa Fe Institute, for example, has constructed a computer program that generates a dynamical system (Langton, 1989b). The only thing this system does is to make copies of itself. It is this general aspect of life - its dynamical reproduction of itself - that Langton simulates. The second tradition has had as its starting point an intelligent or thinking individual - a human being or an animal. What do we mean by calling such an individual "intelligent", and what does it mean to be "intelligent"? Artificial Life at COGS belongs to the latter tradition.
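To give a concrete, if highly simplified, impression of what a system "whose only behaviour is to make copies of itself" can look like in a computer, the following sketch may help. It is emphatically not Langton's own model (his self-reproducing structures live on a two-dimensional cellular grid with carefully designed transition rules); it is a minimal toy in Python, written for this text, in which a population of character strings does nothing but reproduce, with occasional copying errors, inside a world of fixed capacity.

    # A toy illustration (not Langton's actual model): a "world" containing
    # nothing but replicators, each of whose only behaviour is to copy itself.
    # Copying is imperfect, so the population slowly accumulates variation.
    import random

    MUTATION_RATE = 0.01          # chance that a copied symbol is altered
    ALPHABET = "abcd"

    def copy(genome):
        """Return an imperfect copy of a genome string."""
        return "".join(
            random.choice(ALPHABET) if random.random() < MUTATION_RATE else symbol
            for symbol in genome
        )

    def step(population, capacity=50):
        """One time step: every replicator makes one copy of itself.
        The world has a fixed capacity, so surplus individuals are
        discarded at random - the only dynamics are reproduction and crowding."""
        offspring = [copy(genome) for genome in population]
        crowded = population + offspring
        random.shuffle(crowded)
        return crowded[:capacity]

    world = ["abab"]              # a single ancestral replicator
    for t in range(10):
        world = step(world)
        print(t, len(world), world[0])

The point of the sketch is only the dynamic that Langton abstracts - self-reproduction as such - not any particular mechanism by which it is achieved.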

I will not discuss these differences within ALife research in general any further, but will primarily focus on how ALife was practised and understood at COGS, even if the international conferences are also important arenas for the Artificial Life research presented here.

Main theme

A central theme of this work is how ALife researchers construct their facts. By the term "construction" I mean more than a philosophical construction in the sense of a Wittgensteinian "language game", or in Berger and Luckmann's sense of people negotiating an agreement (Berger and Luckmann 1966). Philosophy is probably one of the practices which comes closest to being a pure "language game" or "social construction". (Philosophers sometimes jokingly say that they should not let their principal thoughts be affected by reality.) Artificial Life research is also philosophy - in the sense that ALifers share some interests with philosophers, and that it includes philosophers among its practitioners. But Artificial Life researchers add to this philosophy the making of machines. They program computers and build robots. In a very concrete sense they attempt to construct or engineer their machines so that these machines will exhibit what ALifers understand to be life-like or cognitive phenomena. Hence, the "constructions" I study are more than social constructions; they are technical constructions. The interactions that I focus on include more than interactions between humans (social interactions). They include interactions between humans and machines. We will see how the Artificial Life researchers and their machines relate to each other both in the laboratory, when the machines are made (chapter 5), and later, at the conference, when the machines are presented to a larger audience (chapter 6). But we will also see how ALifers talk about, understand, and negotiate meanings about their machines (chapter 4), and we will see various ways in which they reflect upon and discuss their own scientific practices (chapter 3).

There is one general theme that runs through all these different topics. This theme is, speaking generally, the boundary between subjects and objects, between humans (or life in general) and machines, between the subjective and the objective. ALifers both blurred and reproduced these boundaries in their scientific practices, and they contested and discussed them. In their practice they created methodological closeness or distance between themselves (subjects) and the machines they studied (objects). In their ontology they talked about their machines as human beings or as living systems, and they talked about human beings and living systems as machines. We might phrase these themes as questions: As practitioners of a science that studies machines in order to say something about life or cognition, and that even attempts to endow machines with life or intelligence, how do ALifers understand and contest the border between life and machines? Moreover, as a science that largely takes place within institutions of objective, rigorous, experimental science, where the relation between the questioning subject and the questioned object is characterised by objective distance, how and to what degree do ALife researchers reproduce these scientific practices when the nature they study is so obviously constructed? This thesis revolves around these questions.

Theoretical perspective

The main sources that have inspired me in writing this thesis, and that I have used in order to make sense of how ALifers, in their research process, relate to their machines, and how they, in their theories and understandings, relate life (or cognition) to machines, are, first, the writings of Francisco Varela and Gregory Bateson, and, more recently, those of the French sociologists Bruno Latour and Michel Callon. What all these writers have in common is that they are (more or less explicit) expressions of 20th-century continental phenomenology. This tradition is heavily inspired by the writings of Martin Heidegger. A brief introduction to his writings thus follows here.

Continental philosophy since Heidegger is a huge body of philosophy of which I can just scratch the surface. There is one theme in this thinking that is relevant here, namely the way in which a perceiving and acting subject, a self, is related to a world of objects or bodies (including other human beings). Edmund Husserl, Heidegger's teacher, was, with respect to his understanding of the human self, following a philosophical tradition that dates back to Descartes (Lübcke 1982:67). Husserl defended the notion of the transcendental ego. If we understand ourselves as living "in" a stream of consciousness, and if we also understand this to be a stream of ever changing intentions - we direct our intention towards this task in one moment, towards that task the next moment - then we can define the transcendental ego as follows:

The concrete "I" is the continuity in the stream of consciousness [...]. It is this fully developed, concrete I understood as a unity in the stream of intentionality, that Husserl speaks of as the transcendental ego. (Lübcke 1982:65, my translation and italics)

Heidegger rejects this unity of the self (Lübcke 1982:124). Behind the stream of events and actions, of objects perceived and reacted to, there is no unified self, no transcendental ego, or Cartesian soul. The self is its lifeworld; it is, in Dreyfus' translation of Heidegger, a "Being-in-the-World" (Dreyfus 1991). The Being-in-the-World is both one and many (but never two clearly separated parts). It is one because the world and the self cannot be separated, and it is many because it changes all the time; it is in continuous flux, changing with changing contexts.

A certain relativism follows from the rejection of the transcendental ego; the world is dependent on, as it is a part of, the subject. But there also follows a certain realism, because the subject is directly dependent on the world. This latter dependence can be illustrated by taking a look at one of the major philosophical problems of philosophers such as Husserl and Descartes. The problem is known as phenomenological solipsism (Lübcke 1982:66), the absolute loneliness. If there exists an I at a distance from (or as a "transcendental premise to" (1982:66)) the stream of experienced phenomena, then how can this "I" know that the stream of phenomena, including the perception of other people, is not just an illusion? How can "I" know that "I" am not totally alone in a stream of illusions referring to nothing at all "out there"? I am not going to describe how Husserl and Descartes solve this problem. The point is that both of these philosophers deal with the problem extensively (Lübcke 1982:66). In short, it is a problem to them. It is not a problem for Heidegger; it disappears as a problem because he rejects the existence of an I that can occupy the position of solipsist loneliness. The French phenomenologist Merleau-Ponty, following Heidegger, stresses the interdependencies between the self, or the subject, and the world:

The world is inseparable from the subject, but from a subject which is nothing but a project of the world, and the subject is inseparable from the world, but from a world which the subject itself projects. (Merleau-Ponty 1962:430)

Here, solipsist relativism is as absent as objectivist realism.

I am going to make a large jump in the history of ideas, to the contemporary sociology of the French sociologists Bruno Latour and Michel Callon (Latour 1987, 1988, 1993, Callon 1986, Callon and Latour 1992). We will return to their work many times in this thesis. Here I will just give a brief outline of some of their ideas. One of their main projects is, following a Heideggerian philosophy, to question the distinction between the "subjective" and the "objective", the "inner" and the "outer", the separation of humans from things. In what they have called a "network theory" they have developed a vocabulary that does not take the distinction between subjects and objects, the subjective and the objective, for granted. An "actant", for example, is more than a human actor. It may be an automatic door opener (Latour 1988), or it may be scallops in the sea (Callon 1986). In networks of humans, machines, animals, and matter in general, humans are not the only beings with agency, not the only ones to act; matter matters.

Methodological considerations

In Latour's terminology, the epistemologist and the political scientist are the academic "significant others" of the anthropologist of science (Latour 1993:143). The epistemologist is the philosopher who, thinking principally and normatively, tries to find ways in which scientists can represent Nature faithfully. The political scientist is occupied with finding ways in which politicians can represent Society faithfully. Latour's ideal "anthropologist of science", on the other hand, follows a well-trodden path in anthropology. He or she writes a holistic monograph in which one does not take the distinction between "politics", "cosmology", "religion", etc. for granted, but rather describes empirically how the subjects of study themselves make distinctions of these or other kinds. The anthropologist of science, then, should not take the distinction between politics and epistemology, Society and Nature, for granted, but rather explore how these distinctions come to be established. Latour's prescription for an anthropology of science is one that I to a large extent follow in this thesis, but with some reservations.

The particular perspective on science offered here is shaped by a central component of anthropology: fieldwork with participant observation. Rather than, for example, a historian's overview, I will present what Cole has called "a pig's eye view of the world". This, according to Cole, is "the view that researchers get when they leave the office or archive and spend time in the village mud." (Cole 1985:159) My "village" is the ALife laboratories at COGS and the international conferences on Artificial Life. It does, however, also include written materials, as part of the everyday life of my "tribe" is the production of scientific papers. Other important sources of information are the public E-mail lists at COGS. In these lists, discussions were held in an informal, "oral" style, but with the important exception that things "said" could be perfectly captured for the future, and, for example, copied directly into an anthropological dissertation. My focus on local practices, I should note, does not mean that this thesis is unconcerned with general philosophical interests and perspectives, only that the very general is combined with descriptions of minute details.(4)

In Latour's anthropology of science, studying "politics" is as relevant as studying "epistemology". To implement this holism, Latour prescribes a "network analysis". We should follow scientists wherever they may take us - from the workbench to the committee rooms of multinational companies and national governments (Latour 1987). Such a network analysis, if it is to be combined with participant observation, can be quite demanding - Latour's own laboratory study went on for a period of two years (see Latour and Woolgar 1979). Not denying the possible usefulness of such network analysis, the present work is nevertheless more limited in scope. My fieldwork at COGS and at ALife conferences gave me a micro-perspective on the practices and theories of ALifers. Some of the limitations of my perspective can be seen, for example, in the ways in which I did not acquire insight into how ALife was financed. When the UK Science and Engineering Research Council (SERC) visited the ALife group at COGS, I was not invited to participate in the meeting. The ALifers, I sensed rather than was explicitly told, did not want to disturb this important meeting by making it into the public event that the presence of an observing anthropologist would have created. Moreover, if I was to understand the relationship between COGS and SERC, I would also need to know what the people from SERC thought about COGS. This would have taken me out of COGS and into the offices of SERC, a place that I did not have the time and resources to visit.

This thesis, then, is not based on a network analysis in the full sense of the term. I am, however, inspired by Latour's insistence on taking neither Nature nor Society, neither the objective nor the subjective, for granted, but on studying how these distinctions of science are constructed. I will show that these distinctions are central in the scientific enterprise, and I will, from the "pig's eye view of the world", look at how and to what degree these distinctions are reproduced and challenged - both in theory and practice - in the work of a group of scientists who have made it a point to blur the boundaries between the artificial and the natural, the "man made" and the pregiven.

My allegiance to this phenomenology-inspired sociology has certain methodological implications - as some of the ALifers that I write about were also inspired by Heideggerian, post-Cartesian phenomenology.

The reflexivity of studying science scientifically

At the beginning of my fieldwork, at the weekly ALife seminar, I gave a talk where I told the ALifers at COGS what I intended to do. One of my "informants" then told me that there were Heideggerian elements in my ideas. He copied a chapter of the book Being-in-the-World (Dreyfus 1991) for me to read.

The quotation from Merleau-Ponty above is taken from a book by Varela, Thompson, and Rosch that explores the implications of phenomenological (and Buddhist) thought for cognitive science (Varela et al. 1993). Bateson's and Varela's more cybernetic (yet phenomenological) way of questioning the boundary of the self has inspired me and is part of the background for this thesis. But, as we have seen, Bateson was not only an anthropologist. He was one of the first cyberneticians, and "my" Varela is the same Varela who arranged the first European Conference on Artificial Life.

I will later describe the implications that this phenomenological influence has on ALife research. Here I mention this influence in order to address some of its methodological implications. The people that anthropologists write about are always "experts" in the culture or social life that the anthropologist wants to understand. The anthropologist is almost always a rather fumbling "novice", an outsider. Some of my "informants" are experts in the philosophy that shapes the sociological perspective that I apply. That is, I have been informed by my "informants" about the theories on which this thesis is based as well as about the data I present. My "informants" are also my "colleagues". Latour and Woolgar discuss two major ways in which an anthropological (or sociological) work can be legitimated. They write:

One of the many possible schemes designed to meet criteria of validity holds that descriptions of social phenomena should be deductively derived from theoretical systems and subsequently tested against observations. In particular, it is important that testing be carried out in isolation from the circumstances in which the observations were gathered. On the other hand, it is argued that adequate descriptions can only result from an observer's prolonged acquaintance with behavioural phenomena. Descriptions are adequate, according to this perspective, in the sense that they emerge during the course of techniques such as participant observation. (Latour and Woolgar, 1979:37)

Latour and Woolgar (following Marvin Harris) call the first of these criteria etic validation (and the method applied is known as the hypothetico-deductive method). The empirical testing may confirm or falsify a theory, but it is the community of fellow anthropologists who, ultimately, evaluate whether a theory is valid or not. Latour and Woolgar call the second of the criteria of validity emic validation (and the method applied is hermeneutics). When it comes to this validation, "the ultimate decision about the adequacy", Latour and Woolgar write, "rests with the participants themselves." (1979:38) Lévi-Strauss and Geertz may be picked out to represent the two approaches (even if no anthropological work is based exclusively on emic or etic validation), Lévi-Strauss for his emphasis on building a grand, structuralist theory of human mind and culture (testing his theory against a large corpus of comparative data), and Geertz for his interest in approaching local meanings (through the hermeneutic interpretation of public symbols). However, as Latour and Woolgar point out, even the anthropologist who seeks emic validation "remains accountable to a community of fellow observers in the sense that they provide a check that he has correctly followed procedures for emic validation." (1979:38) That is, it is the community of anthropologists who ultimately decide if, for example, Geertz has given a valid description of how the Balinese experience their cock fights.

True emic validation, however, becomes increasingly important as our informants become a literate audience who read what is written about them. This thesis is written in English in order to make it available to ALife researchers, and I am invited back to the ALife seminar at COGS to present my work. Like most anthropologists, I am concerned with achieving both emic and etic validity. But my emic validation will not only be of a kind where my informants check whether I use their technical vocabulary correctly; they will have informed opinions on the validity of the theoretical perspective I apply. This means that we will see examples of common interests between some of my "informants" and me in this thesis. My anthropological position will necessarily be in agreement with some ALifers and in disagreement with others. Taking sides cannot be avoided in some of the controversies I write about. I will attempt to write about these controversies from a "neutral", culture-relativist position. My position, however, will be present throughout the thesis, and particularly in the introductory and final remarks. This position will, to the degree that Artificial Life researchers read this thesis, be a part of the discussions that I write about.

Before ending this introduction I would like to discuss the role that I let the ALifers (my "data", but also my "colleagues") play in this text. This, as we will see later, is of particular relevance because one of the themes of this thesis is the role that the ALifers let their computer simulations play in their science.

There are generally two major ways in which people are visible in anthropological texts, as "(Barth 1968)" or as "one informant said...". People are colleagues or informants. Let us take a look at the difference between these two.

Thirty years ago Radcliffe-Brown referred to one of his colleagues as "Professor Durkheim" (Radcliffe-Brown 1968:123). Today we seldom see the formal title in academic writings. However, we still use our colleagues' surnames, not their first names. We refer to "Durkheim", not "Emile". Following this convention we treat our colleagues as respectable members of our scientific society. This is a public society, not a private sphere (as "Emile" would have suggested), and it is a society of individual subjects. By crediting (and possibly criticising) our colleagues for what they have written, we grant them both their individuality, with their rights and duties, and their membership in our society. We might say that by using the traditional reference, "(Radcliffe-Brown 1968:123)", we make our colleagues into subjects of our society. Now, let us take a look at the other large group of people that anthropologists refer to, our "informants".

Informants often appear in our texts anonymised, as "Marc", "one ALifer", or perhaps as "17% of the population". To anonymise may be necessary for many reasons. It may in extreme cases be a matter of life or death to those involved, for example when writing about people living under oppressive regimes. Not denying its frequent necessity, I am here concerned with one effect of anonymisation which may be problematic: it removes those we write about from the social and moral sphere in which we place ourselves and our colleagues. It does not give them rights and duties as subjects of our (academic) society. So whereas the academic reference subjectifies the human being, the anonymisation does the opposite: it creates distance, it objectifies. This distance is quite visible in two of my above-mentioned examples: "one ALifer" and "17% of the population". The third example, "Marc", is more tricky. Rather than creating distance, it creates intimacy. If "Marc" is written about at length we may get familiar with "Marc"; calling this person by a first name enhances this familiarity and intimacy. But still, it does not include him in the same society of subjects to which "Professor Durkheim" or "(Radcliffe-Brown 1968:123)" belongs.

The ALifers I am writing about will appear in this text in both of the ways mentioned above. I will refer to their publications, for example "(Langton 1989b)", and I will refer to things they have said and done: "...one researcher talked about..." I have, many times, considered not anonymising my accounts of informal events and utterances. The first time I had to consider this was after the first interview I conducted. I told the researcher that I would, if she wanted, anonymise her. She was offended by the very suggestion. She was not afraid of standing up for what she said. Did I think that she had something to hide? She did not want to become some kind of amoral Mrs. X. Her point impressed me. Should I include my "informants" in the scientific society (of responsible individuals) to which the anthropologists belonged? This question was particularly pertinent since I needed and wanted to refer to their publications, and then I would definitely use their real names.

I have decided not to use their real names. There is a lot of me in this thesis, even in the most empirical parts. When telling stories from my fieldwork I have tried to be faithful to what "really" happened, but all storytelling includes a lot of "construction": I have created the contexts for the empirical accounts, and I have selected, during and after the fieldwork, a tiny handful of stories from what is really an enormous number of events in order to illuminate my points.

One of the ways in which scientists make a living and a career is by writing - not merely saying - things that are later referred to and quoted. If I use an ALifer's real name and a third person quotes the ALifer - whose informal talk has been made into something written by me - this quotation may say something that the ALifer him- or herself would never have wanted to publish. Using pseudonyms will remove the possibility of formally referring to the ALifers, even if those who know the people I write about will recognise them. I have used first names as pseudonyms to mark the informal contexts from which the stories I tell are taken. It would have been quite absurd to do otherwise; people at COGS, independent of academic titles, always addressed each other by first names.

However, I have also wanted to show normal academic respect for scientists' work by referring to their publications when I use them. Hence, some of the ALifers at COGS will appear in this thesis as two characters: with their real names when I discuss and refer to things that they have written and published, and with a pseudonym when I refer to events and utterances from more informal settings.(5)


Chapter 1: Working Machines, Objectivity and Experiments


Artificial Life research attracts people from the arts and the humanities, but its basic foundation is in the sciences. It is, to a large degree, practised as a rigorous, experimental science: hypotheses should be tested by setting up an experiment, these experiments should be reproducible, and the results should be statistically significant or prove their validity by enabling the production of new, working technology. This last criterion is of central importance in ALife research. Working machines(6) play a pivotal role in the production of legitimate scientific results. People at COGS - ALifers as well as other researchers and students - ask for results, and by this they mean a working computer or robot which does something that can be classified as an intelligent or life-like behaviour. One of the early accomplishments of the first ALife robot at COGS was to find the centre of the small room in which it moved. Researchers and students who were critical of the ALife projects frequently asked questions of this kind: "Now, it did room-centring, but will it be able to do more advanced things, will it scale up?" The future of ALife research at COGS is linked to the question of whether the robots and computer programs of ALife will perform "more advanced tasks" - of whether they will "scale up" from, for example, room-centring to something which is more clearly an intelligent or life-like behaviour. The ALifers did not contest this frame of reference. To "scale it up" was one of the explicit aims of their research.

Technoscience

The aim of this chapter is first to situate Artificial Life within the larger context of science in general, and then to give a description of this general context. This description will be given from the particular point of view known as social studies of science. Hence, it will be as much a description of this point of view as of science itself.

To indicate Artificial Life research's dependency on working machines I will refer to this science as what Bruno Latour calls a technoscience (Latour 1987:174). This word denotes the same scientific disciplines as "experimental science" does, thus encompassing the natural sciences and some of the social sciences (such as economics and parts of psychology). But "technoscience", giving associations to "technology" and the Greek techne (skills), is also meant to refer to the practical and social contexts, the work, the skills, and the machines, of these sciences and scientists. "I will use the word technoscience", Bruno Latour writes, "to describe all the elements tied to the scientific contents no matter how dirty, unexpected or foreign they seem, ..." (1987:174). I also use "technoscience" rather than just "science" to establish distance between anthropology, "my" science, and Artificial Life, the science that I study. Social anthropology may be a science (though this is sometimes questioned), but it is not a technoscience.

To follow Latour and describe "... all the elements tied to the scientific contents..." might mean studying huge networks involving everything from laboratories to, say, weather satellites, global ecology, and international politics. In describing such networks there are many possible elements of relevance, and there are many perspectives from which they may be described. One of the central elements that I have focused on in studying ALife research is the role of the working machines and of the results that these machines both produce and themselves exemplify. But before discussing this in more detail, I will take a general look at some of the ways in which technoscience can be studied from a social science point of view. I will also consider the motivation and background for my own perspective.

Social Studies of Science

During the last 25 years or so a new academic field has developed. It is known as the field of Science, Technology and Society (STS) or as Social Studies of Science (SSS). This academic field is made up of scholars from disciplines such as philosophy, anthropology, sociology, and history. If we should try to find one common theme in these studies, then the rejection of objectivism is probably a good candidate. The aim of these studies is, in different ways, to explain and describe the construction of scientific facts by embedding these processes of construction in social, cultural, and/or technological contexts. One of the pioneers in this field was the philosopher Thomas Kuhn. In The Structure of Scientific Revolutions (1962) he discussed the influence of social factors - such as changes in generations - on scientific changes. Kuhn, discussing the natural sciences and mainly physics, claimed that these disciplines did not develop gradually, but that they changed in leaps, as one paradigm replaced another.

In the 1970's sociologists, anthropologists, and others began to study the natural sciences empirically. These studies can broadly be divided into two main categories. The first has been called laboratory studies (Knorr-Cetina 1995). I call the second category discipline studies. Based on qualitative fieldwork, the laboratory studies have examined how scientific facts are constructed. They have focused on the diverse practices - social, technical and conceptual - of life in laboratories, in international networks of laboratories, and in networks that exceed what is strictly "science" and include political, commercial and other interests (Latour 1987).

Harry Collins (1975) did pioneering work showing that the replication of facts in physics involved a replication of certain skilful practices. In 1979 the first book-length laboratory study, Latour and Woolgar's Laboratory Life: The Construction of Scientific Facts (1979), was published. Karin Knorr-Cetina's The Manufacture of Knowledge: An Essay on the Constructivist and Contextual Nature of Science (1981) followed two years later. Both of these works are based on participant observation in laboratories (a neuroendocrinology(7) lab and a plant-protein lab respectively). The books, however, are not mainly about the particularities of these biological labs and traditions. They address the practice of technoscience in general.

The general focus of both of these books is exemplified in Latour and Woolgar's account of their first meeting with their field. This account is not written as the story of one concrete ethnographer (Latour) visiting one particular place. It is the story of "An Anthropologist [who] Visits the Laboratory" (the title of Chapter 2). The style is general, and the actual lab Latour visited only figures as an example of how such a meeting might take place. For example, they write: "Our anthropological observer is thus confronted with a strange tribe who spend the greatest part of their day coding, marking, altering, correcting, reading, and writing." (1979:49) Note, however, that even if the focus of this book is general, the method is a study of the particular. In order to explore the general notion that a fact is socially and technically constructed, Latour and Woolgar undertake an extremely detailed analysis of the events that took place in the construction of a single piece of fact, the existence of a particular hormone, which came to be known as the TRF(H) molecule. From reading Latour and Woolgar we learn nothing about the position of this hormone inside the human body, or about its role in a particular, situated understanding of the human body. What we learn is the position of the hormone relative to the technical equipment in the lab and to a network of scientific articles that refer to each other. The TRF(H) molecule is, in Latour and Woolgar's account, not part of a human body or an understanding of the human body; it is part of a technical and social network of laboratory equipment and researchers.

In contrast to these sociological accounts of scientific practices, we find the "discipline studies". Rather than studying "the laboratory" and the skilled, social practice of its inhabitants, these studies probe the actual content of specific scientific traditions. They have a lot in common with studies in the history of consciousness or history of ideas. Such historical studies have shown, for example, how Wittgenstein's thoughts were influenced by the late 19th century Habsburg Vienna (Janik and Toulmin 1973): "Regarded as documents in logic and the philosophy of language," Janik and Toulmin write, "the Tractatus and the Philosophical Investigations stand - and will continue to stand - on their own feet. Regarded as solutions to intellectual problems, by contrast, the arguments of Ludwig Wittgenstein, like those of any other philosopher, are, and will remain, fully intelligible only when related to those elements in their historical and cultural background which formed integral parts of their original Problemstellung." (1973:32)

The discipline studies perform the same kind of contextualisation with present-day scientists and sciences that Janik and Toulmin did with Wittgenstein. They give cultural, and culture-critical, interpretations of disciplines.

Donna Haraway is one of the influential writers in this culture-critical tradition of science studies. In her earlier work she focused, in her words, "on the biopolitical narratives of the sciences of monkeys and apes." (Haraway 1991:2) Her own political position is explicitly present in her works: "Once upon a time, in the 1970s, the author was a proper, US socialist-feminist, white, female, hominid biologist, who became a historian of science to write about modern Western accounts of monkeys, apes, and women." (1991:1) In her study of how biologists understand the great apes, she takes the role of the cultural context further than Janik and Toulmin, who looked only for the motivations behind Wittgenstein's work. Haraway looks at how the very content of science is shaped by social factors:

People like to look at animals, even to learn from them about human beings and human society. People in the twentieth century have been no exception. We find the themes of modern America reflected in detail in the bodies and lives of animals. We polish an animal mirror to look for ourselves. (1991:21)

In her Cyborg Manifesto (1991 [1989]) Haraway invents the feminist cybernetic organism, the Cyborg. In contrast to what she sees as feminist-socialist technophobia, this is an appropriation - a rewriting or contesting rather than a total rejection - of what she sees as the dominant image of the high-tech man, the integrated man and machine, archetypically seen as the independent astronaut moving freely in space. Haraway, as I understand her, wants to use the image of the cyborg to draw our attention to "leaky distinctions" (1991:152) between humans and animals, between organisms and machines and between the physical and the non-physical. In line with Marxist thought she is fighting the naturalisation of social inequalities, and she sees in the Cyborg the possibility of reworking the distinction between Nature and Culture, and hence the distinction between civilised and primitive, man and woman, etc., so that "the one can no longer be the resource for appropriation or incorporation of the other." (1991:151) They will possibly be parts of messy Cyborg-networks rather than parts subsumed under the hierarchical wholes of highly militarised, male-dominated economies. In drawing these alternative images (of which I have just given a sketch) Haraway is openly utopian: "How might an appreciation of the constructed, artifactual, historically contingent nature of simians, cyborgs, and women lead from an impossible but all too present reality to a possible but all too absent elsewhere? As monsters [that is, as liminal objects that demonstrate], can we demonstrate another order of signification? Cyborgs for earthly survival!" (Haraway 1991:4)

In the high energy physics monograph by anthropologist Sharon Traweek - Beamtimes and Lifetimes: The World of High Energy Physicists (1988) - we find a combination of laboratory and discipline studies. Traweek gives us a description, "As thick as it could be," she writes (1988:162), of an American group of high energy physicists (doing so partly by comparing this group to a Japanese group). In her book we become acquainted with a group of American men who are organised into a particular social structure, a structure with its own academic cycles of reproduction and its own hierarchy. These men have specific identities, determined partly by their ancestral history (their professional forefathers, reproduced in textbooks by pictures showing a man, from the waist up, dressed in a suit, sometimes with a tie, other times more casually), partly by their huge and mythologically important machines (like SLAC, the Stanford Linear Accelerator), and partly by their national differences (as they understand these differences themselves).

In the three works mentioned above (Janik and Toulmin's, Haraway's, and Traweek's) the larger culture of which the scientists are a part plays an important role. But the role of this cultural context is different in each of the three works. To Janik and Toulmin, Wittgenstein's Vienna was a context that motivated Wittgenstein's writing, but it did not explain the content of his works. They did not want to reduce the logic of the Tractatus to the social life of 19th-century Habsburg Vienna. Haraway, in her writings on ape research, describes a more profound cultural influence on science: the very content of the biologists' research is shaped by "modern America" (Haraway 1991:21). Traweek follows Janik and Toulmin rather than Haraway. The culture that she describes, a culture which is specific both to the physicists and to American culture as a whole, shapes the "laboratory life" of the physicists, but not demonstrably the content of their physics. Traweek does not try to explain electrons, photons and quarks as determined by the culture of physicists. She leaves physics to the physicists.

These three positions may to some degree reflect different conceptions of what "culture" determines. To look for such theoretical differences is beyond the scope of this presentation. It is, however, quite clear that these differences depend on the scientific disciplines studied: it is much easier for biologists to read human society into a group of interacting gorillas than it is for physicists to do the same with interactions between electrons. Having looked at the differences between these three works, I would like to restate their similarity: all three look at how the culture and society of scientists shape their science.

The present work on Artificial Life research is, on the one hand, a laboratory study. This is a result of the method - the anthropological fieldwork - on which it is based. I did my fieldwork at one particular lab, COGS, and at a number of conferences on Artificial Life and Artificial Intelligence. What I want to depict is the way that Artificial Life was practised at this lab and at these conferences.

On the other hand, I am interested in the particularities of Artificial Life as a discipline, not only as a means of generating general sociological theory (as Latour, Woolgar, and Knorr-Cetina do in their laboratory studies). The specific content of the discipline of Artificial Life is therefore part of the subject matter of this thesis.

Hence, the present work is situated between the two main categories I outlined above. It is a laboratory study, but, rather than describing scientific practice in general, it attempts to describe how the technoscience called Artificial Life is constructed. It is also a study of an academic discipline, ALife, but it aims at explaining this direction of thought by locating it in the social and technical context of laboratory life rather than in the cultural context of the larger society.

In relating the content of ALife research to the practices in laboratories and conferences of ALife, I am not ready to follow Traweek and leave physics to the physicists (or ALife to the ALifers). In this thesis I relate the specific content of ALife at COGS to the specific practice of ALife at COGS.

It is at the meeting point between the scientific content of ALife - the particular "tradition of thought" (Varela and Bourgine 1992:xi) - and the practice of ALife that the working machines with which I introduced this chapter play a pivotal role. We will see that they occupy a central position throughout this thesis. Artificial Life researchers both made machines and studied them. I now turn to an initial description of these machines.

The simulation

Artificial Life researchers mostly study things that go on inside computers. It can therefore be useful to compare Artificial Life with the endeavour that is known as Computer Science. In the Oxford Dictionary the latter is defined as "the branch of knowledge that deals with the construction, operation, programming, and applications of computers." Computer scientists make computers, and they study them in order to make them better. Sometimes Artificial Life research fits such a definition. This is the case when researchers, inspired by biological ideas, attempt to make a program that solves a specific problem better than other programs do. For example, inspired by the Darwinian notion of evolution by natural selection, ALifers made so-called Genetic Algorithms, which they then used to evolve a controller system - or "brain" (here the quotation marks are theirs) - for a robot.
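To give a flavour of what such an algorithm involves, here is a minimal sketch of my own, in Python; it is not any particular ALifer's code, and the genome length, the operators and the toy fitness function are all invented for illustration. A population of bit-string genotypes - each of which could be decoded into the parameters of a robot controller - is repeatedly evaluated, selected, recombined and mutated:

import random

GENOME_LENGTH = 32       # bits per genotype (a hypothetical controller encoding)
POPULATION_SIZE = 30
GENERATIONS = 50
MUTATION_RATE = 0.02

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Stand-in for evaluating an evolved controller in a simulated world;
    # here simply the number of 1-bits, so that the sketch runs on its own.
    return sum(genome)

def crossover(a, b):
    point = random.randrange(1, GENOME_LENGTH)
    return a[:point] + b[point:]

def mutate(genome):
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for generation in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    parents = ranked[:POPULATION_SIZE // 2]          # truncation selection
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POPULATION_SIZE)]

print("best fitness in final generation:", max(fitness(g) for g in population))

In the ALifers' own work the fitness function would of course not count bits; it would typically score how well the decoded controller made a simulated (or real) robot behave.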

More often, however, ALifers did more than engineer useful artefacts. They made biologically inspired programs that produced large sets of phenomena, and then they studied these worlds of phenomena as surrogates, alternative "natures" or, in fact, as "artificial lives". These artificial lives were studied in order to say something general about life or cognition. That is, they made what they called a simulation.

For a human being, to simulate means to pretend: to communicate signs that refer to something other than what one literally is. A person may simulate sickness while really feeling fine. The person signifies sickness. To treat a computer program as a simulation means to treat it as a sign or a system of signs that refers to something that it is not. When a computer program is understood as a simulation it has a referent outside itself. So, when ALifers made simulations they tried to understand something more than the program that they made. They could, for example, use what they learned from running genetic algorithms to say something about biological evolution.

In order to talk about simulations I need to be able to talk about the referent of simulations. I first thought about using ALife researcher Chris Langton's term life-as-we-know-it (Langton 1989b:1). In his definition of Artificial Life, Langton defines life-as-we-know-it by contrast to life-as-it-could-be:

By extending the empirical foundation upon which biology is based beyond the carbon-chain life that has evolved on Earth, Artificial Life can contribute to theoretical biology by locating life-as-we-know-it within the larger picture of life-as-it-could-be. (Langton 1989b:1)

There is, however, a problem with the "we" in this phrase. When ALifers formally (in scientific papers, etc.) relate their simulation to life-as-we-know-it, this life is often life as some other scientific discipline knows it. That discipline is usually biology, but it could also be cognitive psychology or a social science (like economics). Hence, the "we" in life-as-we-know-it would be biologists, psychologists, etc. The referent would be life-as-biologists (etc.)-know-it. In more informal settings life-as-we-know-it could be the life of the researcher. A researcher could use metaphors from his daily life in order to explain some simulated phenomena. Here the referent is "life-as-I-know-it". When ALifers made robots, they often first made a computer simulation of a robot in an environment. They developed a virtual robot in virtual space before they made a real one. In these cases life-as-we-know-it - the referent of the simulation - is another machine.

Not quite satisfied with life-as-we-know-it (as I wanted to use it), I came across a term in Sherry Turkle's recent book about young Internet users (Turkle 1996). One of her informants simply spoke about the life he lived outside the many Internet communities to which he belonged as "RL" - real life. I have adopted his RL, although I will spell it out as "Real Life". Real Life is not meant to refer to some kind of objective reality. It is always the Real Life of someone, and it is relative to Artificial Life (here not understood as the discipline, but as the "life" in the computer simulations). I will return to the Real Life references of Artificial Life in later chapters; here I will turn to another aspect of simulations, namely their role in experiments.

One element that profoundly shapes Artificial Life research is the increasing speed and complexity of computers. They can run more and more complex programs. ALife researchers utilise this speed and power by making simulations that produce unpredictable phenomena. A program that mimics Darwinian evolution, a Genetic Algorithm, for example, may simulate one or more populations consisting of, say, 100 individuals (often talked about as animats, from "animal" and "automata"). These populations may reproduce themselves through, say, 1000 generations. The animats may interact with each other and with a simulated environment. To understand what is going on in such a process is not easy, even for the one who has designed the simulation; it has to be studied. The virtual domain of phenomena that a simulation creates and that the ALifers study is often talked about as a world. There are a couple of features of these worlds that made them appropriate objects of technoscientific study. First, they either produced results or they did not. There is always room for doubt as to what may count as results, but there is also something "objective" about them; we cannot - in Berger and Luckmann's sense of "reality" - "wish them away" (Berger and Luckmann, 1966:13). Related to this objectivity is the fact that these worlds exist behind computer screens. That is, they tend to exist at a distance from the observer. They are "out there", behind a piece of glass, behind a computer screen.
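To make concrete why such a world has to be studied even by the person who designed it, here is another toy sketch of my own - not any ALifer's actual simulation, and only loosely in the spirit of the animat work referred to above; the payoff scheme and all the numbers are invented. A population of 100 animats, each reduced to a single heritable "aggression" trait, meets and reproduces through 1000 generations:

import random
import statistics

POPULATION_SIZE = 100
GENERATIONS = 1000

# Each animat is reduced to one heritable trait: how aggressively it behaves
# when it meets another animat. Winning a contest pays, but fighting is costly.
population = [random.random() for _ in range(POPULATION_SIZE)]
history = []

for generation in range(GENERATIONS):
    scores = []
    for aggression in population:
        opponent = random.choice(population)
        if aggression > opponent:
            payoff = 1.0 - 0.5 * aggression      # win, minus the cost of fighting
        else:
            payoff = 0.5 * (1.0 - aggression)    # losing gently is cheap
        scores.append(payoff)
    # Reproduction: payoff-weighted sampling of parents, with mutation noise.
    total = sum(scores)
    weights = [s / total for s in scores]
    population = [min(1.0, max(0.0, random.choices(population, weights)[0]
                               + random.gauss(0, 0.02)))
                  for _ in range(POPULATION_SIZE)]
    history.append(statistics.mean(population))

print("mean aggression, first three and last three generations:",
      [round(x, 2) for x in history[:3]], [round(x, 2) for x in history[-3:]])

Nothing in the update rules says where the mean aggression of this little world will end up; the designer, like anyone else, has to run the program and inspect the record it leaves behind - watching it, as it were, from the other side of the screen.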

The possibility of producing this objectivity, of creating distance between the observer and the observed, is not only an aspect of ALife research, it is one of the central elements in all technoscience. Therefore, before I look at how the distance between the ALifers and the virtual worlds they study is established, and to what degree this distance and objectivity are established in ALife research, I will turn to a general discussion of the distance and objectivity of technoscience. This discussion will of course be "situated"; it will present a particular perspective on the matter. The perspective I will present is, of course, first and foremost my own, but it is, as we will see, inspired by Michel Callon and Bruno Latour.

The machine and the trustworthy witness

In technoscience the scientist is commonly understood to be a person endowed with theories, feelings, interests, and intuitions; in short, with human subjectivity. As such, he is a member of the scientific community, or, we might say, a member of the Society of Subjects. The objects of study are understood to exist independently of this subjectivity. They are parts of what I call the Nature of Objects. The objectivity of technoscience, then, is based on a separation of the Subject and the Object and of the Society and Nature to which these belong. To understand "objectivity", we also have to understand "subjectivity". Moreover, this separation is based on the construction of particular machines, the experimental apparatus. These machines presumably provide the necessary distance between the one who studies and the thing studied.

Many Western philosophers have argued that there exists an objective (that is, subject-independent) reality, and that human beings through science can achieve an understanding of this reality. These philosophies can be classified under the "-ism" known as "objectivism". In addition to these philosophical speculations, people might try to establish an objective practice. People can, in practice, try to establish a separation of subjects and objects. In the following I will not discuss the objectivism of various philosophers; I will discuss how technoscientific objectivity (to the degree that it exists) has become an established practice of laboratory life.

I will do this by telling a story, a sort of "origin myth" of technoscience, a story that - even if it is just one of many building blocks - metonymically may stand for the larger whole of which it is a part. The story is based on the books of Shapin and Schaffer (1985) and Latour (1993), and it is about the experimental philosopher Robert Boyle, his air pump, and his opponent Thomas Hobbes.

In the 17th century there was a natural-philosophical debate between the vacuists and the plenists. The former argued, philosophically and principally, that there could be space without matter - vacuum. The latter denied this: space is a property of bodies, and without bodies of some sort there cannot be space (seemingly empty space is filled with so-called ether).

Robert Boyle (1627-1691) made a major contribution to this debate. He designed a highly sophisticated mechanism: an air pump and a glass globe. The manually operated air pump could suck the air out of the glass globe, and objects inside the glass globe could be manipulated without opening it. He then experimentally produced a vacuum. But he also refrained from taking sides in the debate between plenists and vacuists. Shapin and Schaffer write: "By 'vacuum' Boyle declared, 'I understand not a space, wherein there is no body at all, but such that is either altogether, or almost totally devoid of air'." (Shapin and Schaffer 1985:46) We might say that Boyle recontextualised the problem: instead of arguing principally for or against "metaphysical" vacuum, he argued empirically by referring to the experiment. He defined an experimental context for the notion of vacuum, parallel to the philosophical debate. Furthermore, rather than appealing to the authority of logical, stringent thought, he invited trustworthy members of his community to witness the production of a vacuum under highly controlled circumstances. Hence, in this process Boyle did more than argue for the existence of experimental vacuum, he also argued for a new authority, made up of the machine and the trustworthy witness. The machine could reproduce the same results time after time, and the machine itself could be reproduced (within a few years there were six "high-tech" air pumps in Europe). The results of these machines were obviously "there", they could not be "wished away" (cf. Berger and Luckmann 1966:13). Boyle argued for the authority of the opinion of the witness by referring to an English legal act, Clarendon's 1661 Treason Act, where it was stated that, in a trial, two witnesses were necessary to convict a suspect (Shapin and Schaffer 1985:327). The opinions of these witnesses could be trusted because they were English Gentlemen, they were trustworthy, and they witnessed a phenomenon that they did not create.

In this double process, making an experiment and invoking a number of witnesses, Boyle (and his followers) performed more than "pure science". In arguing for a new source of legitimation - the trustworthy witness - he performed a political act. This can be seen most vividly in Boyle's interaction with one of his philosopher colleagues, Thomas Hobbes. In civil war-ridden England, Hobbes wanted to create one authority, one Power, under which all people would be united. Protestants and Catholics had been fighting each other for quite a while, so this authority should not be religious; it should be a secular Republic ruled by the Sovereign, a Leviathan. In a social contract the subjects of the Sovereign would give Him the absolute authority to represent them. In presenting his plan Hobbes argued against the Protestants' "free interpretation" of the Bible, using what he saw as a mathematical demonstration. This demonstration was inspired by Euclidean geometry, where one makes logical deductions from a set of basic, self-evident premises. These logical demonstrations could not be refuted (if one accepted the premises) and were not dependent on ambiguous sensory experiences. Everything - Man, Nature, God, and the Catholic Church - could be united in one mathematical and geometrical universe. Hobbes thus argued mathematically for his Leviathan.

However, Hobbes was more than one of the first political scientists; he also argued mathematically (that is, principally and logically) against the possibility of the existence of vacuum. But when Boyle made his air pump, Hobbes did not set up a counter experiment (as physicists might have done today); he denied the very legitimacy of the whole experiment, including the use of witnesses. Hobbes not only argued against the existence of vacuum, he argued against a new authority - the experiment and the opinion of the observers - which would threaten the absolute authority of the Sovereign. The opinions of Boyle's witnesses claimed authority not because they followed logically from a set of first principles, but because they were testimonies of Nature (just as the opinions of the Protestants claimed authority as testimonies of the will of God). So when Boyle and his followers founded the first community for the promotion of "experimental natural philosophy", the Royal Society of London, Hobbes wrote to the king to warn against its activities. Gradually Hobbes got his secular state. Boyle & Co, however, also got their Royal Society, and the first modern scientific authority, based on the machine and the trustworthy witness, was established. This institution discusses matters of Nature independently of the State, the Society or the Subject. Thus, two major, separated domains of authority were constituted: one political, which was in charge of the laws governing the subjects, and one scientific, which administered the laws regulating the objects, the natural laws. The politician became the legitimate representative of Society, the scientist became the authorised representative of Nature.

There is a strange duality in what Boyle did. On the one hand he created a situation - the experiment - where facts could be fabricated by humans in a highly controlled way. On the other hand he invoked the notion of the trustworthy witness, an English gentleman in Boyle's time, who, like the witness of a crime, was observing a phenomenon that he had no responsibility for creating, but was merely describing.

Hence, on the one hand we may ask: Can the vacuum that Boyle produced be reduced to a Nature existing independently of human beings? The answer is clearly no. The whole experiment is a highly specialised and localised social and technical arrangement. It required highly advanced technical equipment that could only be made by the most skilful craftsmen of the time, and it was dependent on an institution of authority, the Royal Society, with its trustworthy witnesses. On the other hand we may ask: Can the vacuum be reduced to a Social Construction? Again the answer is no. It was precisely Boyle's point that whatever metaphysical arguments the plenists (or the vacuists) made about ether or the principal possibility of having space without body, the glass globe sucked empty of air is there to be observed. About this Bruno Latour writes:

Ironically, the key question of the constructivists - are facts thoroughly constructed in the laboratory? - is precisely the question that Boyle raised and resolved. Yes, the facts are indeed constructed in the new installation of the laboratory and through the artificial intermediary of the air pump. The level does descend in the Torricelli tube that has been inserted into the transparent enclosure of a pump operated by breathless technicians. 'Les faits sont faits': "Facts are fabricated," as Gaston Bachelard would say. But are facts that have been constructed by man artefactual for that reason? No: for Boyle, just like Hobbes, extends God's 'constructivism' to man. God knows things because He creates them. We know the nature of the facts because we have developed them in circumstances that are completely under our control. Our weakness becomes a strength, provided that we limit our knowledge to the instrumentalized nature of the facts and leave aside the interpretation of causes. Once again, Boyle turns a flaw - we produce only matters of fact that are created in laboratories and have only local value - into a decisive advantage: these facts will never be modified, whatever may happen elsewhere in theory, metaphysics, religion, politics or logic. (Latour 1993:18, references deleted)

Thus the authority of technoscience is - according to Shapin, Schaffer, and Latour - not solely based on the logic of geometrical, Euclidean thought, but on a description by a trustworthy witness of an event that - even if it only occurs in specially designed circumstances - the witness has not himself created. The bird that demonstrates "a space [...] devoid of air" by suffocating in the glass globe, or the feather that demonstrates the lack of Hobbes' "ether wind" by falling right down, cannot be reduced to "ideas" or "social relations". What is constructed - by humans - in the experiment is, paradoxically, a sort of independence from human factors. It is this independence, or objectivity, which makes it possible to see the observer as a distanced witness and not an accomplice in the event.

If, as Latour writes, Boyle can be said to "extend God's 'constructivism' to man", then Artificial Life research further extends this constructivism. In Boyle's experiment a bird that is not constructed by skilled craftsmen is put into the (constructed) glass globe, and air, also not constructed, can be sucked out of it. In a biological lab an oak leaf may be chemically prepared so that this preparation can be seen in a microscope. The bird, the air, and the oak leaf are, I think to most people with some familiarity with technoscience (most of "us"), related to the world outside the laboratory. There are references to the outside world. These references are to a large degree metonyms. The items brought into the lab are related to the world outside as parts of this larger whole.

In ALife research, however, the birds are "boids" - quasi-birds - that have never been outside the virtual space in which they fly (Reynolds 1992), and the leaves have not been created by trees in the forest but by a recursive computer algorithm (Oppenheimer 1989). These experimental objects are related to Real Life by similarity rather than by metonymy. ALifers understood this similarity either to be identity or to be metaphor. Artificial Life could be (in some respect) identical to Real Life, or it could be metaphorically similar to Real Life. Whether it was one or the other was contested.
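Reynolds' flocking birds are usually described in terms of three local steering rules - cohesion, alignment and separation - applied by each simulated bird to its nearby neighbours. The sketch below is my own minimal, two-dimensional paraphrase of that idea, not Reynolds' code; the names and constants are invented:

import random

NUM_BOIDS = 20
RADIUS = 10.0                                        # how far a boid "sees"
COHESION, ALIGNMENT, SEPARATION = 0.01, 0.05, 0.05   # rule strengths (arbitrary)

boids = [{"x": random.uniform(0, 50), "y": random.uniform(0, 50),
          "vx": random.uniform(-1, 1), "vy": random.uniform(-1, 1)}
         for _ in range(NUM_BOIDS)]

def step(boids):
    updated = []
    for b in boids:
        near = [o for o in boids if o is not b
                and (o["x"] - b["x"]) ** 2 + (o["y"] - b["y"]) ** 2 < RADIUS ** 2]
        vx, vy = b["vx"], b["vy"]
        if near:
            n = len(near)
            # Cohesion: steer towards the local centre of the flock.
            vx += COHESION * (sum(o["x"] for o in near) / n - b["x"])
            vy += COHESION * (sum(o["y"] for o in near) / n - b["y"])
            # Alignment: steer towards the neighbours' average heading.
            vx += ALIGNMENT * (sum(o["vx"] for o in near) / n - vx)
            vy += ALIGNMENT * (sum(o["vy"] for o in near) / n - vy)
            # Separation: steer away from neighbours that are too close.
            vx += SEPARATION * sum(b["x"] - o["x"] for o in near) / n
            vy += SEPARATION * sum(b["y"] - o["y"] for o in near) / n
        updated.append({"x": b["x"] + vx, "y": b["y"] + vy, "vx": vx, "vy": vy})
    return updated

for _ in range(100):                                 # let the flock fly for a while
    boids = step(boids)
print(boids[0])

Every one of these quasi-birds exists only as entries in such a data structure; there is no flock outside the program for them to be parts of.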

Two facts - first, that ALife worlds are constructed in a very literal sense of the word, and, second, that the relationship between Artificial Life and Real Life was understood to be one of similarity and metaphor (rather than one of metonymic "parts of") - fostered, as I will return to later, a particular awareness among ALifers that there was something artifactual about their facts.

The subjects of the social sciences

To explain the objectivity of something - a fact or Nature - is, as we have seen, to focus upon the relation of this something to the observer. But when one focuses upon how a thing is constructed to be "subject-independent" (i.e. objective) one also, by necessity, explains the process by which the observer becomes "object-independent" (i.e. subjective). The objectivity of things and the subjectivity of humans are not independent of each other, because it is the separation of the one from the other that establishes both as independent elements. Therefore, when I ask how, and to what degree, Artificial Life simulations are constructed as modern objects (of technoscience), I also need to ask how and to what degree the Artificial Life researchers in the course of their research become modern, distanced subjects (of society).

However, this modern subject is not only the subject of technoscience, he or she is also the subject of the modern social sciences. The "creation myth" I told above not only recounted the story of how the Royal Society of natural scientists (following Boyle) got their Nature of Objects, but also how the social sciences (following Hobbes) got their Society of Subjects. Thus, the story also concerns anthropology as a discipline. I think it is fair, when talking about technoscientific objectivity, also to take a critical look at anthropological and sociological "subjectivity". It would be a bad mistake to deconstruct the one and take the other for granted. One way to make such a mistake is to say that Nature is a social construction. By saying this one claims that the Social precedes the Natural, that it comes first and is some sort of "first mover", a foundation of the Natural.

I will further illustrate how and why I find the social constructivist view misleading by sketching out one such position, a position held by two central sociologists of science, Harry Collins and Steven Yearley. They argued in defence of their position in a debate in which they criticised the so-called network theory of Michel Callon and Bruno Latour (Collins and Yearley 1992, Callon and Latour 1992). The controversy is famous among people working within the social studies of science, and discussing it will thus also give the reader a deeper understanding of a current topic within these fields of study. Let me begin with a brief introduction to Callon and Latour's theory.

Inspired in part by the story of Hobbes, Boyle, and the air pump, Callon and Latour see the separation - and the stabilising of the separation - of the Subject and the Object, Society and Nature, humans and things, as an important part not only of technoscience, but of Western science and society in general. In order to talk about these distinctions without taking them for granted Callon and Latour have explored a vocabulary that is not based on these differences. I introduced part of their terminology in the Introduction. An "actant", for example, is more than an "actor". Both humans and nonhumans may be actants. An actant may be "enrolled" as an "ally" to give strength to a position. When I quote one ALifer - an "object" of my study - as saying, for example, "we use Genetic Algorithms because evolution works" (see chapter 4), or when I add "(Latour 1993)" - a "subject" in my society of anthropologists - at the end of one of my sentences (as I will do many times), I am in both cases enrolling an actant to support my position. When a biologist argues for the existence of a molecule, the data that prove this existence are enrolled actants. Two Artificial Life researchers write: "For instance, an averagely aggressive animat [a computer-simulated animal] which is within visual range of a fight will tend to pursue the weaker or less aggressive of the two combatants in the aftermath" (de Bourcier and Wheeler, 1994). Here, the behaving animats are "actants" (and here not having to decide a priori if they are "subjects" or "objects" is particularly useful, as the actants talked about are aggressive subroutines of a running computer program, and as the ALifers themselves discussed to what extent these creatures should be seen as cognisers - thinking entities, subjects - or not). Let me give a brief example of what Callon and Latour's analytical language (i.e. "theory") looks like. Latour is discussing the early attempts to use windmills for grinding corn:

How can the wind be borrowed? How can it be made to have a bearing on corn and bread? How can its force be translated so that, whatever it does or does not do, the corn is reliably ground? Yes, we may use the word translation and interest as well, because it is no more and no less difficult to interest a group in the fabrication of a vaccine than to interest the wind in the fabrication of the bread. Complicated negotiations have to go on continuously in both cases so that the provisional alliances do not break off. (Latour 1987:129, my italics)

There is a lot more to be said about Callon and Latour's network theory, of which this analytical vocabulary is a part, but I will not attempt to give a full description of it. We have seen enough to make sense of Collins and Yearley's critique of it.

Collins and Yearley's first premise is what they, following Peter Berger, call "alternation" (Collins and Yearley 1992:301). The sociologist (like the anthropologist and the philosopher) is professionally trained not to adopt, remain within, and specialise forever in one frame of reference, but rather to be able to switch - to alternate - between different frames of reference. Sociologists of scientific knowledge have special training in this alternation. "Where, for example, sociologists must understand the culture of religious believers and of worldly atheists, SSKers(8) must be ready to be convinced by geographical uniformitarianism and catastrophism; now they must know that the universe is filled with gravity waves, now that it is not." (1992:302)(9) Their second premise is that there exists no valid epistemology that gives one specific frame of reference privileged access to reality; not the physicists' frame of reference, nor any of the sociologists' frames; "each is a flimsy building on the plain" (1992:308). Their third premise is laid out in the following passage:

In the absence of decisive epistemological arguments, how do we choose our epistemological stance? The answer is to ask not for the meaning but for the use. Natural scientists, working at the bench, should be naive realists - that is what will get the work done. Sociologists, historians, scientists away from the bench, and the rest of the general public should be social realists. Social realists must experience the social world in a naive way, as the day-to-day foundation of reality (as the natural scientists naively experience the natural world). (1992:308)

Their pragmatic epistemological stance is thus, first, that one needs to be a sort of pragmatic, naive realist in relation to one's object of study. A mathematician who gave a talk at COGS claimed that mathematicians, pragmatically, whether they defended it philosophically or not, were realists. But they were neither social nor natural realists; they related to mathematical "things" - ideal planes, circles, spheres etc. - as real objects "out there". They were Platonists, or, as he put it, "closet Platonists", because defending Plato's notion of an objectively existing sphere of ideas is not considered good form in mathematical circles.(10)

The "pragmatical Platonist" argument of the mathematician highlights the second aspect of the pragmatism of Collins and Yearley, namely that they limit what one can relate to as "reality" to two domains; Society and Nature. They exclude any other kind of pragmatical realisms, for example mathematical realism (Platonism), or, more important in this case, "network realism".

Collins and Yearley's main criticism of Callon and Latour is that using the same language on humans and nonhumans alike gives agency back to nature. This occurs because their language introduces symmetry between humans and nonhumans by anthropomorphising things (as well as objectifying humans). Giving agency back to nature is something that Collins and Yearley see as reactionary: "This backward step has happened as a consequence of the misconceived extension of symmetry that takes humans out of their pivotal role." (1992:322) To understand why they see this "extension of symmetry" as "a backward step", we need to introduce a principle that is fundamental in most social studies of science, known as the symmetry principle (introduced by Bloor 1973).

Collins and Yearley define symmetry thus: "sociologists of scientific knowledge should treat correct science and false science equally; they should analyse what are taken by most scientists to be true claims about the natural world and what are treated by most as mistaken claims the same way." (1992:302) This principle was a reaction to a convention in writings about science, namely to treat failures as if they were the consequence of social or human factors (someone had "misunderstood", "cheated", or perhaps had "commercial interests"), while treating successes as consequences of nature (they reveal things as they really are; the interests of the researchers are uninteresting or a curiosity). Collins and Yearley (and most sociologists of science with them) want to undermine the privileged authority of natural scientists to say what things "really are". One way to do this is by, symmetrically, explaining both failures and successes as a consequence of social negotiations. These social negotiations follow the same pattern, independent of the outcome (success or failure). In this pattern Nature plays no role. Nature is the consequence of social agreement, not its cause.

Collins and Yearley apply their symmetry principle in their critique of Callon's account of a group of French scientists who tried to develop a sea farm of scallops. Scallops were a popular food in France, and the natural population had been in decline due to heavy fishing, so it seemed a good idea to make a sea farm where they could be grown artificially. One of the problems the scientists who attempted this had was to get the scallops to anchor - to fasten themselves - to specifically designed collectors. Callon describes this process as one where the scientists had to negotiate with the scallops in order to get them interested in anchoring themselves.(11) (Callon 1986)

.... [Callon's scallop story] is prosaic because the story of the scallops themselves is an asymmetrical old-fashioned scientific story. A symmetrical SSK-type account would analyze the way it came to be agreed, first that the scallops did anchor, and second - at a later date - that they did not anchor. Into the analysis the question of whether or not the scallops complied would not enter. The informing assumption would be that whether there were more or fewer scallops anchoring early and late in the study did not affect the extent to which the scallops were seen to be anchoring early and late. No SSK study would rely on the complicity of the scallops; at best it could rely on human-centred accounts of the complicity of the scallops. (Collins and Yearley 1992:314-315, my emphasis)

Callon and Latour's way of dealing with the symmetry principle, as they explain it in their reply to Collins and Yearley's criticism (Callon and Latour 1992), and elsewhere (Latour 1993), is not to explain truth and falsehood as determined by Society (negotiations), or as determined by Nature (facts and falsifications). Rather they take one step back and ask: To what extent do people - natural scientists, sociologists, politicians, whoever - invoke Nature and Society as explanatory principles (Bateson 1972) to account for successes and failures? Callon and Latour introduce a new axis. To the degree that people introduce pure Nature or pure Society - pure Objects or pure Subjects - to account for some phenomenon, they purify the Nature:Society distinction. When Collins and Yearley let humans and human negotiations determine Nature, and claim that only humans should be endowed with agency, they purify these poles by letting the one explain the other. The naive sociobiologist does the same, but the other way around; Nature ("genes", "survival of the fittest" etc.) determines Society. To the extent that one is involved in such purification, one also reproduces a central element of modernity. To the extent that one rejects such purifications one adopts a non-modern position. We may draw the following figure (inspired by the figures of Callon and Latour, 1992:346 and 349):

Figure 1 Callon and Latour's Modern purification versus Non-Modern mixture

According to Callon and Latour, people variously invoke Nature ("facts" etc.) and Society ("beliefs" etc.) to account for something. Thus, in their reply to Collins and Yearley's criticism of Callon's way of referring to scallops, they state:

... the scientists Callon portrays are constantly trying to bring the scallops to bear on the debates among colleagues and among fishermen; they simultaneously entertain dozens of ontological positions going from "scallops are like that, it is a fact"; to "you made up the data"; through positions like "this is what you think the scallops do, not what they really do", or "some scallops tend to support your position, others don't"; to "this is your account, not what it is." To pretend that to document the ways scientists bring in nonhumans, we sociologists should choose one of these positions - that scallops do not interfere at all in the debate among scientists striving to make scallops interfere in their debates - is not only counter intuitive but empirically stifling. (Callon and Latour, 1992:352-353)

There is one more element in this controversy that I would like to consider. Collins and Yearley argue that one problem in letting nonhumans, scallops, for example, play a role in scientific controversies is that the sociologist, Callon, has no other access to the nature of scallops than by asking biologists, thereby accepting their authority:

There is not the slightest reason for us to accept his [Callon's] opinions on the nature of the scallops if he is any less of a scallop scientist than the researchers he describes. In fact, we readers would prefer him to be more of a scallop expert than the others if he is to speak authoritatively on the subject. Is he an authority on scallops? Or did he merely report the scientists' view on the matter... (Collins and Yearley 1992:316)

The reason why Collins and Yearley question Callon's account of scallops is that the biologists, with their professional skills and practices of biology, have more authority in speaking about scallops than Callon. Callon can do nothing but rephrase their account of the state of scallops. To this Callon and Latour reply "...the accusation is levelled at us by sociologists ... who claim to explain the very content of science." (Callon and Latour, 1992:357) They go on to ask why Collins and Yearley grant the natural scientists their privileged right to say what nature is, and the answer, they reason, is that it is "because that would mean abandoning their privilege, and that of social scientists in general, of defining the human world, the social world." (1992:358) By granting natural scientists privileged access to Nature through their natural realism, Collins and Yearley can, through their social realism, keep for themselves the position of privileged access to Society.

Collins and Yearley's relativism, their alternation between equally "flimsy buildings on the plain", thus turned out to be sociological foundationalism: natural scientists have the privileged right to decide what Nature is, but what this privilege amounts to is the right to be a member of a scientific society. It is the processes of this society which - to use a Marxist stock phrase - "in the last instance" determine Nature. The sociologist - with his professionally privileged position - is able to explain these social processes.

Collins and Yearley accuse Callon and Latour's perspective not only of being "backward", but also of "impotence" (Collins and Yearley 1992:303). There is some truth in these characterisations that Callon and Latour would probably accept. Their network theory is designed not to be "modern" (hence, it may be said to be "backward"). The "potency" of modern explanations and theories is obvious: in the technosciences it has led, and still leads, to an incredible production of new technology. In the social sciences, the "potency" of the "grand narratives" (Lyotard 1979) has fostered some very strong ideologies, with cold-war Marxism and liberalism as the most obvious examples. By not telling a grand narrative, based on an equally grand reduction, for example the reduction of Nature to Society, one rightly loses some "potency".


Summary

In this chapter I have described the central position of working machines in technoscience. I have indicated that machines also play a central role in ALife research. As one of the critics of ALife said, the fate of this research is tied to whether their machines "will do more advanced things". Producing working machines is also a way of giving objectivity and distance to the research.

I also noted that in this thesis I combine a "lab study" with a "discipline study". The specific way in which I will do this, within the context of COGS, is by relating the scientific content - the theories and thoughts - of ALife to the practice of this research.

In the next chapter I will give these general themes more empirical content. Turning to COGS we will get a more concrete picture of what the theories and thoughts of ALife at COGS are about, what their technoscientific practice consists of, and how these two are related. I will also consider the central role that their machines, their computers and robots, play in all this.

Throughout this thesis my ambition is, following Callon and Latour, to situate myself outside the Nature:Society axis. Rather than attempting to explain one in terms of the other, I will look at how objectivity and subjectivity are delegated, and how the difference between them is purified or blurred.

When ALifers, in making machines, find inspiration in human or animal life, it seems that they subjectify or vitalise machines. When they use these machines to say something about human or animal life they seem to objectify human or animal life. When they discuss whether they should look at their computer simulations, on some level, as identical to Real Life or only as metaphorically similar to it, they are discussing the boundaries between humans, life in general, and machines. How does this affect a science practised within institutions dedicated to a "Boylean" natural philosophy based on the objectivity of the trustworthy witness?

Likewise, when they construct their machines so that the machines appear in a specific relation to themselves as researchers, the boundary between subjects and objects is at the heart of the matter. This time the subject pole is themselves as practising researchers. The object pole is their simulations. In the next chapter I will situate Artificial Life research within the institution of technoscience, an institution based on a distinction between subjects and objects and a "division of labour" between "Boylean" natural scientists and "Hobbesian" social scientists. However, even if Artificial Life is practised within a technoscientific tradition, it does not necessarily reproduce this tradition faithfully, and it most certainly does not perfectly reproduce the particular, yet general, picture of technoscience that I have presented here. To what degree, then, did the ALifers make their machines so that they could situate themselves as distanced, trustworthy witnesses, and to what degree did they engage in a closer relationship to their machines?

In writing about these topics I am inspired by Callon and Latour, particularly their insistence on not taking the Society:Nature - or Subject:Object - distinction for granted, but rather on looking at how these distinctions are constructed. The constructions I will describe are not solely social constructions, even if they involve, for example, the use of language in making identities and metaphors between humans and non-humans (see chapter 4). They are also, and to a substantial degree, technical constructions involving skilled engineering (see particularly chapter 5).


Chapter 2: The Technology of Artificial Life at COGS


In the previous chapter I gave a general introduction to technoscience. I outlined how Artificial Life research is an example of technoscience by the fact that ALifers make and experiment with machines (computer simulations(12) and robots), and by the importance of "results" - understood as working machines - in ALife research. In this chapter I will give a more concrete description of ALife as such a science.

Situating this thesis in relation to other works within the social studies of science, I said that I would combine a "discipline study" with a "laboratory study"; I will look at the content - the logos - of ALife (as a "discipline") in relation to the laboratory practices - the techne - of ALife. The first part of this chapter sketches out the logos of ALife, the body of explicit knowledge or "representational system" that makes up the discipline of ALife at COGS. The second part is concerned with the techne of ALife at COGS. The Greek term techne is normally translated as "art", "craft", "skill", or "technique". According to Carl Mitcham the term was mainly used in this practical, action-oriented sense in non-philosophical Greek writings (Mitcham 1994:117). My own understanding of techne includes the physical objects - the tools and the machines - that are handled artistically or skilfully, because, as I argue in chapter 5, skills and tools define each other mutually. Together, the techne of ALife and the logos of ALife make up the technology of ALife: the subject matter of this thesis.


The logos of Artificial Life

An ALERGIC reaction

One of the major institutions at the School of Cognitive and Computing Sciences for the practice of ALife was the Artificial LifE Reading Group In Cogs. It was established by the first ALifers at COGS, around 1990. This loose group met every fortnight in term time and had expanded rapidly in the last couple of years before my fieldwork to become one of the most popular seminars at the school. The seminar was known by its acronym, ALERGIC. This was also the name of a mailing list in the virtual space of COGS (its computer network). Mailing lists are an important part of life at the school. For students as well as researchers, "working", to a large degree, means sitting in front of a computer screen, making or running computer simulations, writing in a text editor, or communicating on the Internet. On your screen you also have your personal mailbox, and if you choose to connect or "subscribe" to a mailing list, you receive a copy of all the mail sent to this address. Such a mailing list is like a notice board, only more dynamic, or discursive, and it is read more than notice boards on walls. In ALERGIC-the-list the coming events of ALERGIC-the-seminar are announced (they are not announced anywhere else), advice is sought and given, and discussions are held.

The name "ALERGIC" indicates that artificial life at COGS was seen as a reaction, and, as we saw in the introduction, ALife was an (allergic) reaction to the research field known as classical Artificial Intelligence (AI), somewhat ironically referred to - by ALifers as well as by others - as GOFAI (Good Old Fashioned AI).

This paradigm is more neutrally called the information processing paradigm in cognitive science, and will in the following be referred to as the "IP-paradigm" (Kirkebøen 1993a). Within this paradigm, human cognition was seen as a process going on in our brains, a process which essentially involved manipulation of symbols (1993a:10-11). That is, it involved computing.

In short, the difference between ALife and the IP-paradigm was perceived by the ALifers as a difference between a computational, rationalistic view of cognition and a more biologically plausible, embodied, and embedded view of the matter.

I will later outline what ALifers mean by these terms, but first I will briefly sketch out the philosophical "allergen" - GOFAI - the "substance" to which ALife at COGS was a reaction.

The information processing paradigm in cognitive science

To the cognitive scientists and Ph.D. students at COGS (ALifers and others) the basic principles of GOFAI were seldom made explicit. They were part of the taken-for-granted background of their profession. Thus, to present a picture of this paradigm I have turned to the general literature on the topic. My main source is a doctoral thesis published at the University of Oslo (Kirkebøen 1993a). Margaret Boden's The Philosophy of Artificial Intelligence (Boden ed. 1990) has also been an important source on GOFAI and on some of the criticism of it. (Boden was one of the founders of COGS in the early 1960s, a professor at COGS during my fieldwork, and one of the grand old ladies of AI. She was also actively involved in Artificial Life research, with The Philosophy of Artificial Life forthcoming.)

The essential idea of the IP-paradigm in cognitive science is, as I mentioned above, that cognition is information processing, and that this involves logical operations on symbols. This is the essence of a hypothesis set forth by two of the acknowledged founders of AI, Allen Newell and Herbert Simon. Their famous physical symbol system hypothesis runs as follows:

A physical symbol system has the necessary and sufficient means for general intelligent action. By 'necessary' we mean that any system that exhibits general intelligence will prove upon analysis to be a physical symbol system. By 'sufficient' we mean that any physical symbol system of sufficient size can be organised further to exhibit general intelligence. (Newell and Simon, 1990:111)

This physical symbol system, or information-processor, has four main components: memory (or to be more precise, different kinds of memories), a processor, receptors ("sense organs") and actuators ("action organs").

These four components are still the main components of most computers today. Given a set of explicit rules (a program stored in memory) and a set of inputs, computers solve specified problems, that is, they produce an appropriate output. Newell and Simon's point - which was widely accepted in AI/cognitive science, and at the same time the focus of many controversies - was that human minds also work this way: ".... programmed computers and human problem solvers are both species belonging to the genus Information Processing Systems...." (Newell and Simon, quoted in Kirkebøen 1993a:11). This means that on some abstract level of description we are computers, with Newell and Simon's reservation: "...at least when [we] solve problems." (Newell and Simon quoted in Kirkebøen 1993a:99) Human experts, when they solve problems, do so by applying a set of rules to a set of discrete symbols. Both the rules and the symbols exist inside the expert, in the expert's brain (see figure 2).

Figure 2 "Good Old Fashioned Artificial Intelligence"

Several philosophers have criticised the IP-paradigm view of the human mind. Psychologist and sociologist Sherry Turkle, who has worked extensively on the personal, cultural and sociological consequences of using AI-systems and computers in general, sums up this criticism: "Faced with computers that follow rules philosophers see humans as unique because their knowledge is socially and physically situated, not rule based but embodied, concrete and experiential." (Turkle, 1991) In response to the IP-paradigm, she continues, John Searle stressed the importance of a more biologically plausible understanding of the human mind/brain; Hubert Dreyfus emphasised the importance of (phenomenological) embodiment and situated knowledge; and Joseph Weizenbaum stressed that knowledge sometimes is ineffable, it cannot always be formally expressed in rules and discrete symbols (Turkle 1991:249). She continues:

These responses from a professional community are echoed in the popular culture. When confronted with computers that in some way seem to think like a person, people use everyday language to capture their sense of human uniqueness. They talk about feelings, flesh, intuitions, spark, direct experience, or, as Joseph Weizenbaum summed it up, the things we know but cannot say, the wordless glance that a father and mother share over the bed of their sleeping child. (1991:225)

The understanding of both knowledge and experts in the IP-paradigm has also been criticised by sociologists and anthropologists of science; see for example Diana E. Forsythe (1993) and Harry Collins (1990). Collins' main argument against the IP-paradigm is that we can make machines that behave intelligently only in domains where humans have disciplined themselves to behave like machines.

I cannot go into all these arguments here. Criticising the IP-paradigm is not the topic of this thesis. The important point is that this criticism is widely read and discussed by cognitive scientists, and, even more importantly, that the ALifers at COGS agree with the general points of this criticism, raising their own objections as well. One ALifer lent me his copy of Collins' book, Artificial Experts (1990), characterising it as one of the books that had been important in his rejection of classical AI.

In a paper entitled What might cognition be if not computation? an American cognitive scientist raises an alternative to the IP-paradigm (van Gelder, 1992). It was a much quoted and discussed paper at COGS during my time there. I now turn to the ALifers-at-COGS' (and van Gelder's) answer to this question.

The cognitive science of ALife at COGS

In the opening scene of this thesis, which is taken from an ALERGIC meeting, a classical AI researcher defended a theory of human beings as a "scaling down of the scientist". The opposing ALifer characterised this as Judeo-Christian ethnocentrism.

All ALifers at COGS with whom I talked about the concept of "IQ" rejected this concept as a universal measure of intelligence, as well as the rationalistic assumptions behind it. IQ, they argued, measures a specific kind of adaptation to a mathematically and logically oriented culture. A visiting psychologist, who was part of the ALife group at COGS and who had written his doctorate on the evolution of the human brain, examines some traditional biases in theories of human evolution. In his dissertation he writes:

Science aims at intellectual convergence onto an orderly, veridical account of the world through orderly, public methods, not as a diversification of chaotic, entertaining, fictions through chaotic, idiosyncratic methods. According to [the theory of some writers], we scientists tend to make our methods of empirical inquiry into theories about our topic of inquiry. For example, the familiarity of psychologists with the inductive statistics used in our data analyses has driven many to speak about humans as "inductive statisticians". More generally, the cognitive tools that we use in science - deduction of implications from theories, warranted induction from evidence, hypothesis-testing, systematic construction of alternative theories, and controlled experimentation - may tend to color how we view the adaptive functions of the encephalized brain.

There is a strong temptation to validate the scientific enterprise itself, and our human capacity to participate constructively in it, by projecting the scientific spirit backwards onto the "curious Cro-Magnon" [one of our ancestors] who triumphed over the supposedly unintellectual Neanderthal. In the fair fight of brains against brawn, Nature herself, through her intermediary natural selection, supposedly awards the victory of the proto-scientific Cro-Magnon. (Miller 1993:145-146)

As an alternative to the rationalistic intelligence of the IP-paradigm, ALifers claimed that intelligence, or cognition, has to be seen as embodied and embedded. Cognition is embodied: it is not a property of the brain alone, or of some part of the brain, but of an organism, a body. Furthermore, cognition is embedded: it is not a property of the organism or body alone, but of the physical, ecological, and social system of which the organism is only one part. Another common way in which ALifers at COGS talked about this "embedded embodiment" was to say that cognition is adaptive behaviour. To be a cogniser (an intelligent being) is to be able to adapt to some environment successfully. What counts as "success" is defined locally, so when the local environment of some organism (or population) is made up of other organisms who adapt to the first organism, we get what ALifers talked about as a co-evolutionary or dynamical system, in which the parts continuously respond to each other's moves without the system as a whole ever reaching any objective goal. As Varela puts it, the system drifts (Maturana and Varela, 1987). To have knowledge about something is, according to most ALifers at COGS, not to remember a set of discrete "symbols" or "representations". The greater part of what most animals and humans know, they know without knowing that they know it. They know it tacitly. To the ALifers, knowledge consisted to a large extent of embodied intuitions.

In the allegiance to notions such as embeddedness and embodiment, we see a direction of thought which is not unique to ALife. We recognise it from anthropology, for example in Pierre Bourdieu's discussions of how the norms and moral injunctions of a society ("sit up straight!") become internalised in bodies - become a "habitus" - through primary socialisation (Bourdieu 1990:69). As we saw in the introduction, at COGS (as well as in much modern anthropology) these thoughts were often inspired by continental phenomenology as it developed after Martin Heidegger's contributions in the 1930's.

ALifers often described their philosophical position (as Heidegger's position is described; cf. Dreyfus 1991:5) as anti- or non-Cartesian. By anti-Cartesian they also meant anti-GOFAI. The similarities between Cartesianism and GOFAI can be briefly sketched out as follows.

To Descartes the human body (like all bodies) was a machine, while our particular humanness was given us by our transcendent soul. This soul communicated with the world by means of the "input" and "output" machinery of the mechanistic body, and it communicated with this body through the pineal gland. According to GOFAI-metaphysics, central features of our humanness are given us by our internal intelligence. First I would like to note that this intelligence differs from Descartes' soul in an important manner. In line with the central materialistic premises of science, the GOFAI intelligence is thought to be wholly dependent on interactions between parts of the brain. It is thought to be an aspect of the material world. Thus, Paul Churchland, a central defender of classical AI, strongly rejects a dualism such as Descartes' division between res cogitans and res extensa - "mind stuff" and "matter stuff" (Churchland 1993).

However, as ALifers pointed out, both the Cartesian soul and the GOFAI intelligence give humans their particular humanness by existing in a linear chain somewhere in between the body-machinery of perception and action (see figure 2, and E-mail message below). They are in this sense both transcendent, both hidden somewhere "inside" us, separated from the world but communicating with it by means of the perceiving and acting body. ALifers, then, contrary to Churchland's own claims, did not accept that he had really rejected Cartesianism. Rather they thought that he (and the tradition of which he is a part) reproduced it.

The following E-mail, taken from the ALERGIC E-mail list, exemplifies ALifers' rejection of GOFAI (even if this message does not explicitly classify GOFAI as "Cartesian"). The E-mail was written by an ALifer at COGS who often defined debates by positioning himself - on a simplified axis with GOFAI at one end and ALife at the other - on the radical ALife side of that axis. I have called him "Gregory". The E-mail was a response to someone (whom I did not know) who asked:

> Is anybody aware of "Perceptual Control Theory" (PCT)....
> If so, how widespread and how well received is this theory in
> ALife, AI and Cognitive Science ?

Gregory answered:

My general impression is that PCT is little known in the orthodox Cognitive
Science field. Because of lack of recognition, and difficulty in getting papers
published, the people behind the theory have been forced to adopt something
of the nature of a rebel cult centred around Bill Powers, the founder. Their
publications circulate in a different parallel universe to mainstream stuff. I
once went to a talk by Tom Bourbon who is a prominent advocate, but
otherwise have rarely come across them.

My caricature of how their views relate to other views is thus:

(A) Trad Cog Sci, GOFAI etc, assume that cognition is all about arrow X in
this diagram (X is 'inside' the agent):

WORLD -----> PERCEPTION ------X------> ACTION ----> WORLD

(B) Sensible people (...like me...) realise that cognition is actually about
BOTH arrows X and Y in this diagram:

          <------Y--------
   WORLD                    AGENT
          -------X------->

(C) The PCT people realise (A) is stupid, have got half a grasp on diagram
(B), and think the important bit to stress is Y:

AGENT -----> ACTION ------Y-----> via world ---> PERCEPTION

So they talk in terms of Actions determining Perceptions, the opposite to
people (A). [That is, PCT is a constructivistic theory, focusing on how the
world we perceive is determined - constructed - by our own actions.]
They are right to stress the importance of this half (which is unrecognised by
GOFAI), but if you want the whole picture go for (B). The Dynamical
Systems approach, Enactive, whatever you want to call it.

In this E-mail we see the rejection of the view of intelligence as something "inside" us (the "X" in the first figure). But we can also read out of this E-mail another important element in the relation between ALife at COGS and its "allergen", GOFAI. The element I have in mind is the objectivism of GOFAI. This objectivism is implicit in the first figure:

WORLD -----> PERCEPTION ------X------> ACTION ----> WORLD

In this picture of cognition, the world comes before the perception of it, that is, it is pregiven. In his introduction to AI, Paul Churchland expresses this objectivism clearly:

"The red surface of an apple does not look like a matrix of molecules reflecting photons at a certain critical wavelength, but that is what it is. The sound of a flute does not sound like a sinusoidal compression wave train in the atmosphere, but that is what it is. The warmth of the summer air does not feel like the mean kinetic energy of millions of tiny atoms, but that is what it is." (Paul Churchland 1993:15, italics in original)

In this passage phenomena such as colours, sounds, and heat are taken to be "interpretations" or "representations". They belong to the subject. They are representations of an objective, (i.e. subject-independent) reality. However, in order to talk about sounds and colours as "representations" one has to have a reference (the reality, or Nature) of which they can be representations, and in order to have this, one has to have a point of view from where these references can be seen. The point of view from where Churchland sees this reality is a perspective where the world is made up of "photons", "sinusoidal waves", and "mean kinetic energy". That is, Churchland's objective reality (photons, sinusoidal waves, mean kinetic energy) is Nature as it is given to us by technoscience.

ALife at COGS was also a reaction to - and a definite rejection of - objectivisms like the one advocated by Churchland. The quotation of Churchland above is an example of what Gregory with emphasis called "the God-given objectivity of GOFAI". I understood the ALifer to say that GOFAI delegates a special position to some people - those who describe the real world as a place in which colours, smells, etc. are mere representations. These people have to have an authority outside themselves in order to occupy this privileged position, their position is "God given".

In the E-mail above Gregory also presents, implicitly, an ALife alternative to this objectivism. Let us first consider the (stereotypic) "PCT" alternative:

AGENT -----> ACTION ------Y-----> via world ---> PERCEPTION

This, taken alone, is an expression of a position that is in radical opposition to GOFAI. If one focuses solely on how our actions determine our perceptions, then one ends up in a radical constructivism; the world in which we live is entirely made up by ourselves. The ALifers at COGS, generally, adopted some of this constructivism, but, usually, did not take it to the extreme "PCT"-position. This is expressed in the second figure of the E-mail message:

          <------Y--------
   WORLD                    AGENT
          -------X------->

Here there is an interaction between agent and world. They define each other mutually. One of the popular words used to express this interdependence was co-evolution, a word that takes into consideration that the "WORLD" (in the figure above) is normally made up of other "AGENTS".

A position within cognitive science that emphasises the interdependence between "WORLD" and "AGENT" is referred to in Gregory's E-mail message as the enactive approach. Francisco Varela (who co-arranged the first European Conference on ALife) is an exponent of this position. Varela, together with two other authors, begins a book on "enactivism" with the following:

"A phenomenologically inclined cognitive scientist reflecting on the origins of cognition might reason thus: Minds awaken a world. We did not design our world. We simply found ourselves with it; we awoke both to ourselves and to the world we inhabit. [...] Yet it is our structure [as human beings] that enables us to reflect upon this world." (Varela, Thompson and Rosch 1993:3)

The duality in experiencing the world as "given", while admitting that it depends on ourselves as living and knowing beings, is also expressed in the quotation of Merleau-Ponty in the Introduction to this thesis. I repeat it here:

The world is inseparable from the subject, but from a subject which is nothing but a project of the world, and the subject is inseparable from the world, but from a world which the subject itself projects (Merleau-Ponty 1962:430)

With this quotation I have taken us back to continental philosophy and to the blurring of the boundary between the subject and object, self and world, that started with Heidegger.

But ALife at COGS was more than anti-Cartesian phenomenology. Equally important was the increased interest in biology.

Biological plausibility

In Artificial Life, two major trends in biology were mixed. The largest of these trends is known as Neo-Darwinism. This direction is based on a combination of post-World War II molecular genetics and Darwin's theory of evolution by natural selection. Briefly, it focuses on how life, on the molecular level, can be seen as a system of genetic codes. According to the biologists (notably Richard Dawkins) within this tradition, the units that compete for survival are not organisms but genetic codes. These codes use organisms in their struggle for survival. Neo-Darwinism has particularly inspired the construction of Genetic Algorithms, programs that mimic natural selection, and that are widespread within ALife research. (I will present these algorithms in more detail later.)

The second biological tradition that inspires ALife research, and this is a more peripheral tradition within biology, is what we may call systemic biology. Calling it "biology" is somewhat arbitrary; it may also be said to be a direction within cognitive science. It has been inspired by systemic and holistic traditions such as cybernetics and chaos theory, and has some common traits with the phenomenology I outlined above. A central element of these systemic theories is that a system - biological, physical, or social - has certain properties that cannot be reduced to its component parts, for example to genes. These properties are linked to the dynamics of the system, and these dynamics are crucially dependent on the circularity, the "feedback", of the causal chains. Above, I cited cognitive scientist van Gelder's question What might cognition be if not computation? The answer he gives is that it is a dynamical system (van Gelder 1992). His simplest example of such a system is one that is well known in cybernetics: James Watt's steam engine with a governor (see also Bateson 1972, 1979). The governor keeps the steam engine running at almost constant speed. It does so by establishing a feedback loop where lower engine speed makes the governor open up for more steam into the cylinder, thus increasing the speed of the engine and causing the governor to let less steam pass to the cylinder, and so forth. The system fluctuates within certain limits, keeping the speed almost constant by constantly changing the amount of steam that is let through to the cylinder. Dynamical systems are, as Gregory put it, characterised by a constant interaction between slow-moving (the speed of the steam engine) and fast-moving (the changing amount of steam) variables, or, as an anthropologist might put it, between continuity and change. The continuity or reproduction of a system is maintained by a constant flux of change within that system.
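The feedback loop of the governed engine can be made concrete with a small numerical sketch. The following Python fragment is illustrative only - it is not taken from van Gelder or from the COGS researchers, and the constants are arbitrary - but it shows the circular causality described above: the speed follows the steam supply, while the governor adjusts the steam supply in response to the speed.

# A minimal sketch (illustrative only, not from the thesis) of the Watt
# governor as a feedback loop: the engine speeds up when given more steam,
# and the governor reduces the steam supply as the speed rises.

def simulate_governed_engine(steps=40, target_speed=10.0, gain=0.5):
    speed, steam = 0.0, 5.0
    for t in range(steps):
        # Engine: the speed follows the amount of steam let into the cylinder.
        speed += 0.3 * (steam - speed)
        # Governor: open the valve when the engine runs too slowly,
        # close it when it runs too fast.
        steam += gain * (target_speed - speed)
        print(f"t={t:2d}  speed={speed:5.2f}  steam={steam:5.2f}")

simulate_governed_engine()

With these arbitrary constants both variables oscillate and then settle close to the target speed; in a real engine, varying loads would keep the steam supply fluctuating while the speed stays almost constant, which is the point of the example.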

The central tenet of systemic biology is that a living system equals a cognitive system and that both are dynamical systems. Let me take the biological cell as an example in order to compare Neo-Darwinism and systemic biology. Neo-Darwinism focuses on the genes as carriers of genetic information. This focus can be justified because the Neo-Darwinist sees the genes as determining phenotypic traits. Systemic biologists do not deny the importance of genes, but they also pay attention to how genetic information is interpreted by the biological cell. This "interpretation" is part of the dynamics of the cell, and it makes the meaning of the information dependent on the active interpreter of the information (a cell) as much as on the information itself. The biological cell is thus a "cogniser", a thinking, interpreting entity. Another of Gregory's contributions to an ALERGIC E-mail debate - witty yet serious, this time on Adaptive Behaviour - expresses such an opinion:

"Personally I think saying that plants do not behave, or are not cognitive, is just narrow-minded speciesism, or anti-plantism; based largely on the fact that they tend to move on much slower timescales than we do, so we do not recognise their behaviour. We should be studying Plant Cognition, as well as Animal and Human Cognition."

We might say that as ALife research "biologises" cognitive science and AI, it also "cognitivises" biology.

In giving a short and general overview of a "culture", a "logos", or the content of a scientific discipline, such as I have done above, one necessarily has to simplify. Ideas are unequally distributed and accepted within a population. I have organised the ideas that make up COGS' Artificial Intelligence research at large by setting up two poles: GOFAI and Artificial Life. The picture I have given of GOFAI is a conservative position that few cognitive scientists - ALifers or not - would hold now, though many would still defend some kind of GOFAI today. Likewise, the picture that I have given of systemic biology and phenomenology is, if accepted in its entirety, a picture of the "ALERGIC reaction" taken to its radical conclusion. Probably few ALifers would agree with all of it; many would have reservations, modifications, or comments to make. However, "radicals", at COGS as well as elsewhere, existed; I have let Gregory's voice sound above, and their positions - by virtue of being radical - often set important premises for the debates within their research.


The techne of Artificial Life

 From simulated subjects to simulated worlds

The philosophical or theoretical distinction between Artificial Intelligence and Artificial Life research at COGS - rationalism vs. embodied holism - is reflected in the way in which researchers in these two traditions use the computer.

The researchers in the IP-paradigm concentrated on creating the symbol-manipulating subject itself, the part of us in which our intelligence was supposed to reside. The symbols that this intelligence manipulated were often words in some human language, for example English. The researchers could thus be in direct interaction with this computer subject. The researcher and the computer subject communicated with each other in English, using a computer screen and a keyboard. Joseph Weizenbaum's Eliza is a famous GOFAI program, made to simulate a psychotherapist. An interchange with Eliza can look like the following interaction, here with sociologist Harry Collins:

how do you do, please tell me your problem.
I'm feeling lonely.
is it because you are feeling lonely that you came to me?
Yes, that's right.
you seem quite positive.
I just wanted someone to talk to.
why do you need to tell me you just want someone to talk to?
Isn't it obvious?
i'm not sure i understand you fully?
I need someone to care.
what would it mean if you had someone to care?
I would feel happier.
can you elaborate on that?
I need someone.
why do you want someone?
etc.

(Collins 1990:10)
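Dialogues of this kind are produced by simple keyword rules: the program matches a pattern in the user's statement and transforms it into a canned, therapist-like question. The following Python fragment is a minimal sketch in this spirit - it is not Weizenbaum's actual program, and the rules are invented for the example - but it reproduces the flavour of the exchange above.

import re

# Illustrative Eliza-style rules: a regular expression and a response template.
RULES = [
    (r"i'm feeling (.*)", "is it because you are feeling {0} that you came to me?"),
    (r"i just (.*)",      "why do you need to tell me you just {0}?"),
    (r"i need (.*)",      "why do you want {0}?"),
    (r"yes(.*)",          "you seem quite positive."),
    (r"(.*)",             "can you elaborate on that?"),
]

def respond(utterance):
    text = utterance.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())

print(respond("I'm feeling lonely."))       # -> is it because you are feeling lonely ...
print(respond("I need someone to care."))   # -> why do you want someone to care?

Such a program manipulates discrete symbols according to explicit rules, exactly as the IP-paradigm describes; it also works only because the person at the keyboard understands English, which is the point Cliff makes in the paragraph that follows.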

One of the first ALifers at COGS, Dave Cliff, sketched out his reasons for rejecting the IP-paradigm in a paper that he called Computational Neuroethology: A provisional Manifesto (Cliff 1990). In this paper he pointed out that interactions like the one above can only take place if the human being involved in them knows the language that the system has been programmed to "speak" (here it is English). The human being is able both to interpret the output from the computer and to provide meaningful answers to it because he knows English. To the classical AI-researcher, this may not have been a problem. He could focus on the structure of "the intelligence" itself and forget about the relation that this atomistic unit had to its environment. However, with ALife, the relations that organisms have to their environments are the foci of the study of intelligence. Then, as Dave Cliff pointed out in his paper, we come to see that the English-speaking AI-researcher is one of the necessary parts of the interaction that makes the above dialogue seem like an intelligent interaction. If the researcher did not understand English, there would be no interaction going on, and no artificial intelligence to be seen.

To avoid making the intelligent behaviour of the artificial subject dependent on the language capabilities of the researcher, Cliff argued that one ought to, as he writes, "embed ... [the artificial, intelligently behaving organisms] in simulated environments which close the external feedback loop without human intervention" (1990:1). The method that Cliff suggested, which is also the method used by most ALife-researchers, was thus to make an artificial world where the organisms could be in interaction with a simulated physical environment and with each other. Figure 3 illustrates the two ways to make and study artificial intelligence.

Figure 3 Artificial Intelligence-brain versus Artificial Life world

In this figure I have used an arrow pointing in both directions to indicate the interactions that are necessary in order to be able to speak about intelligence, adaptive behaviour, or cognition.

The Artificial Intelligence is engaged in an "intelligent interaction" with the researcher, while the animats in the ALife world interact with each other, showing their ability to adapt to the environment in which they are embedded. The arrow pointing in only one direction indicates that the ALifer, by making a simulated world, has positioned himself at a distance - with a bird's eye perspective - from the interaction he is studying. He is, some ALifers would say, objectively observing the interactions he is studying. I will return to this distance later. First I will attempt to answer the following question: there are many ways in which one, hypothetically, can study interactions between intelligent beings - anthropologists, for example, conduct fieldwork. Why, then, was the making of artificial worlds the sensible thing for ALifers to do? Answering this will tell us something about institutionalised practices at COGS.

Performativity: making and understanding

As I indicated above - and if we think a bit hypothetically about it - making an artificial world is not the only way one could study the interaction between an artificial organism (or a subject) and its environment. One could also study the interactions between a classical artificial intelligence and its maker and/or user. One could make the whole arrangement in figure 3A into one's object of study. This would be a sociological way to proceed, and it has been used, for example, by Collins (1990). However, one would then be abandoning the commitments of an Artificial Intelligence lab and applying the methods and means of sociology and anthropology. This, of course, was not something the ALifers at COGS did. I am by no means implying that I think they ought to have adopted this viewpoint. I am just making it clear that they actually did not study the interaction between artificial subjects and their human environment.(13) Some ALifers at COGS were inspired by sociological or anthropological studies (like Collins' work), but actually doing such studies was not their primary concern. ALifers made and studied machines - artificial worlds, in computers or as specifically sealed off physical spaces where robots could roam about. In the following I will look at a couple of examples of the rationale behind the making of artificial worlds.

Thomas came to COGS as a graduate student in 1987 to take a Master's degree in a branch of AI called Natural Language Processing (NLP).(14) He wanted to make a system which "translated" from a picture to a string [a linear chain] of text. If you pointed a video camera at something, the result should be a few lines of descriptive text. "In doing that", Thomas told me,

I came to realise that the computer vision literature and the computer linguistic literature were almost entirely separate. And this really, I suppose, was because within AI there was the assumption of central mediating representations.(15) And so the assumption of the computer vision people was that you just have to deliver the right representation. And the assumption of the language people was that from the right representations [delivered by the internal processor], we can generate language. There was relatively little work done attempting to assure that you could go all the way from sensors ... to effectors [or actuators, see figure 2]. ... it really, really surprised me that you could find so little work where the representations generated by sensation were actually used by actuation, ... so it made me think about the necessity for these kinds of representations, or whether this was the most sensible way of doing things, or in particular, why it was so difficult.

In this passage Thomas is criticising the IP-paradigm in AI. He is questioning the idea of internal representations or symbols, and thereby the whole concept of information processing. Hence, he is questioning a particular theory about the human mind (or generally about minds), and he does so by asking if "this was the most sensible way of doing things", that is, in this context, making machines, and "why it was so difficult". This is not to say that Thomas did not want to understand "nature" or "man" or the world outside the computer lab at COGS scientifically. He did science, and he wanted to do science, but practising science at COGS is, to a large degree, making machines.

In this example we see what I will call "the performativity of COGS". The truth of a statement or theory is linked to whether it enables the researcher to produce machines that work. Working machines give "strength" to a scientific position. At the beginning of chapter 1 I presented an example of this, when people at COGS asked if the room-centring robot would "do more advanced things"; would it "scale up"? The future of ALife research at COGS is linked to the question of whether their robots and animats will perform "more advanced things".

I am indebted to Lyotard for thinking in terms of "performativity" (Lyotard 1979:41). Lyotard's understanding of how research is legitimated through the performativity of technology is captured in the following sentence: "Technology is [...] a game pertaining not to the true, the just, or the beautiful, etc., but to efficiency: a technical 'move' is 'good' when it does better and/or expends less energy than another." (1979:44) The performativity that I discuss in this section partly fits with Lyotard's view. A working machine is also a machine that works more efficiently with respect to a specified task than another machine. However, later in this thesis we will see that the performativity of ALife technology needs to be understood more broadly than as a mere quest for efficiency. In chapter 6 I will show that the performativity I am discussing here is an aspect of artistic performances. These performances, resembling Turner's "cultural performances" (Turner 1986), speak to a wider aesthetics than the mere performativity of fast computers. They are, we will see, related to the agency of creative artists.

The movement towards biology (and the turning away from rationalist AI) that I discussed in the first part of the present chapter is also related to the commitment to making working machines. If one wants to study an organism's interactions with its environment by simulating this interaction, then one has to settle for something simpler than a simulation of a human being in a world, let alone a society of human beings. At COGS everybody knew that there was neither the know-how nor the computer power to achieve that. The only sensible way to combine the two commitments - focusing on adaptive behaviour and making machines that work - was, according to the main ALife arguments at COGS, to understand and make something simpler than a human, for example an insect. Thus, many of the ALifers at COGS studied the branch of biology which is concerned with animal behaviour, ethology, and the more specialised disciplines neuroethology and behavioural ecology. The latter of these two is a branch of biology which studies the behaviour of animal species in relation to their ecological context. In neuroethology biologists study neural activity in relation to a given animal's behaviour in various contexts. Thus, one does not merely study the activity of neurons (in order to learn how neurons work), or the behaviour of animals in context; one combines two biological directions, correlating specific, situated behaviour with specific neural activities.

Cliff, in his manifesto (1990), defines Computational neuroethology as the study of neuroethology by means of computer simulations. The term bears commitments not only to the domain to be studied, but also to the method applied. Aspects of how flies or bees navigate in their environment, for example, are more conceptually and technically manageable than aspects of human movements. They can be simulated, in robots or in computer worlds. Thus, in turning to biology, the ALifers at COGS did more than extend the definition of "intelligent beings". Biology gave them the means to be faithful to the quest for machine performativity and to the quest for understanding life scientifically.(16)

Genetic algorithms and the production of worlds to explore

I will now take a closer look at how the artificial worlds of ALife are made, and how a "nature" is produced, containing legitimate objects of scientific enquiry. I will start by introducing a particular type of computer program, known as the Genetic Algorithm (GA). In a certain sense, the defining characteristic of the Artificial Life group at COGS was its use of this algorithm. People within the ALife group used the Genetic Algorithm (except for a few ALife philosophers who did not program computers at all); people outside the group did not. ALife at COGS was, by ALifers as well as non-ALifers, to a large degree associated with research on various usages of GAs.

Genetic Algorithms are many things. International conferences on GAs are held regularly, and thick books about how to design and apply them are published. I will give a brief introduction to this kind of program with an example that you can find on the Internet. If you visit this Internet site (maintained by Carnegie-Mellon University),(17) you are presented with 9 abstract computer-graphics pictures. These pictures may be scored according to the aesthetic preference of the visitor, from 1 for the worst to 9 for the best. When ten visitors have voted, the GA at Carnegie-Mellon sums up the scores. Two of the program codes are then mixed ("mated") in a way vaguely similar to the way a pair of chromosomes (those of the "mother" and "father") are combined to make an offspring (a "child-picture"). The program codes of popular pictures are the most likely to be picked out for reproduction. There is no "sex" involved in this process. All 9 pictures can "mate" with all the others. This mating is done 9 times to produce a new generation of 9 pictures. The pictures in the new generation will be a blend of the most popular pictures in the previous generation.

This process is modelled on the idea of "survival of the fittest": the more "fit" pictures pass on their "genes" to the next generation, while the "unfit" ones become extinct. The "selection pressure" - the environment in which these pictures have to survive - is the aesthetic preferences of those who visit the Internet site.
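To make the basic loop of scoring, selection and mating concrete, here is a minimal genetic algorithm sketch in Python. It is illustrative only and not the Carnegie-Mellon program: the genomes are plain bit-strings rather than picture-generating code, and the fitness function is a stand-in for the visitors' aesthetic scores.

import random

POP_SIZE, GENOME_LEN, GENERATIONS = 9, 16, 20   # nine "pictures", as in the genetic-art example

def fitness(genome):
    # Stand-in for the visitors' scores (1 = worst, 9 = best):
    # here we simply reward genomes with many 1-bits.
    return 1 + 8 * sum(genome) / GENOME_LEN

def select(population, scores):
    # Fitness-proportional selection: popular "pictures" are more likely
    # to be picked out for reproduction.
    return random.choices(population, weights=scores, k=1)[0]

def mate(mum, dad, mutation_rate=0.02):
    # Mix two program codes at a random cut point, with occasional mutation.
    cut = random.randrange(1, GENOME_LEN)
    child = mum[:cut] + dad[cut:]
    return [(1 - g) if random.random() < mutation_rate else g for g in child]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    scores = [fitness(g) for g in population]
    print(f"generation {generation:2d}: best score {max(scores):.2f}")
    population = [mate(select(population, scores), select(population, scores))
                  for _ in range(POP_SIZE)]

Over the generations the best score drifts upwards, because the "selection pressure" - here the stand-in fitness function, there the visitors' votes - favours some genomes over others.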

Artificial Life simulations are usually more complicated than Carnegie Mellon's genetic art. An ALife world may consist of one or several populations of animats that interact through many generations. One such "world" - existing in virtual time and space - may be inhabited by thousands of individuals. A central aspect of getting such a system "debugged and up running" is to get it to behave unpredictably. This may be done despite the fact that the program is a deterministic machine. In "chaos-theoretical" terms this is known as "sensitive dependence on initial conditions" or the "butterfly effect".(18) If a GA is programmed right, then the butterfly effect is activated, which means that very small initial changes in the system may make a big difference in the resulting evolution. In such cases there is no way to know exactly where the evolutionary process is going to lead. This unpredictability is a central aspect of the GA's "lifelikeness". It makes the system slightly irrational, slightly autonomous, a bit out of control, to use a common ALife phrase.
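Sensitive dependence on initial conditions can be illustrated with a standard example from chaos theory rather than with an actual ALife simulation: the logistic map. In the sketch below (illustrative only), two trajectories that start with a difference of one part in a million soon follow completely different paths, even though the rule producing them is fully deterministic.

def logistic_trajectory(x, r=4.0, steps=25):
    # Iterate the deterministic rule x -> r * x * (1 - x).
    values = []
    for _ in range(steps):
        x = r * x * (1 - x)
        values.append(x)
    return values

a = logistic_trajectory(0.300000)
b = logistic_trajectory(0.300001)   # a one-in-a-million difference at the start

for t, (xa, xb) in enumerate(zip(a, b)):
    print(f"t={t:2d}  {xa:.6f}  {xb:.6f}  difference={abs(xa - xb):.6f}")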

When a process is unpredictable we may also say that the result of the process is "un-thought-of" or "un-designed". GAs may thus be used to make un-thought-of things that may be useful relative to some specified task. Phil Husbands, one of the founders of ALife at COGS, wrote the first Ph.D. thesis in England on genetic algorithms. He applied a GA to evolve efficient industrial processes. Very briefly, what he did was to evolve more economical ways to put together several products (say, different kinds of TV-sets and video-players) in one factory using the same set of machines. (In such cases one has to work out the order in which to put the different products together so that one avoids a situation where machines are needed simultaneously in several production lines.) The solutions Husbands found were, at least in practice if not in principle, impossible for a human being to design or predict rationally beforehand.

Experience with such problems underlay one of the main hypotheses of much ALife research at COGS, namely that if you want a robot to do something lifelike, there is no way that you can design the controller program (or "brain"). But robot brains can be evolved, and this is what many researchers and Ph.D. students attempt to do at COGS. Two phrases that are often used to distinguish this kind of evolution from rational design are top down versus bottom up.(19)

In the bottom up, uncontrolled aspects of the GA we see another way in which ALifers see themselves as different from the classical AI researchers. Classical AI designed computer brains or "life" top down; ALife attempts to induce these phenomena to emerge bottom up. The term "emergence" is central to the way in which ALifers combined reductionist molecular biology and holistic systemic biology. The dynamics of systems emerge as a result of interactions between their parts, but cannot be directly reduced to these parts. Moreover, as Chris Langton at the Santa Fe Institute often stressed, the emergent properties of a system may define the context in which the single parts act. Hence, one ought to allow for a certain top down causality in artificial life systems as global structures shape the actions of the individual parts, even if these global structures are generated bottom up by interactions between the parts. (It is a bit like the hermeneutical circle, except that in Langton's argument the "whole" is not a text, but a simulated world, and the "parts" are not the words and sentences of the text, but individuals in the simulated world.)(20)

When the result of running a computer simulation is not obvious, not "thought of", then this result may be analysed. One may study the emergent process, the evolution in the simulation, as something that, even if it is constructed by the researcher, nevertheless appears to him as something given, something "out there". The simulation may be set up as an experiment that legitimately may be studied scientifically as a technoscientific "nature".

In figure 3 we saw how the ALifer, by making a simulated world, can study the interactions between animats - their adaptive behaviour and embeddedness - by situating himself outside the interactions to be studied. In doing so he does more than change focus from the "intelligence" itself to its relations with its world; he also produces technoscientific distance. He situates himself as a distanced witness, observing relations which are not "infected" by human language or other social factors. The observed relations, as Latour wrote, "will never be modified, whatever may happen elsewhere in theory, metaphysics, religion, politics or logic." (Latour 1993:18) Dave Cliff (who argued for the making of simulated environments, see page 50) is also aware of this. He concludes his manifesto thus: "Subjectivity will give way to objectivity." (Cliff 1990:21) I have found it illuminating to compare the story told here (the story about the scientific legitimacy of studying artificial worlds) with the following story of how Good Old Fashioned AI was legitimated.

The AI-systems of the 60's and 70's - the information processors that manipulated discrete symbols - could be described using formal logic. They were logical machines. As argued by several authors (Kirkebøen 1993a, Turkle 1991), this legitimated a shift from the behaviouristic psychology of the 1950's to the mentalistic psychology of the 1970's. The human mind of the 1970's could be described in terms of internal states of the mind, not only in terms of observable human behaviour.

The behaviourists of the 1950's (notably Skinner) argued for stimulus-response arcs and against earlier mentalistic psychology by claiming that the latter was based on the introspection of the psychologist. This introspection, it was argued, could not be made intersubjectively available, and, hence, it could not be scientific (Kirkebøen 1993a). However, as it became clear that logical machines - computers - with observable internal states could generate intelligent behaviour (like playing chess), it also became scientifically respectable to describe human behaviour in terms of internal states. One could talk about internal states of the human mind - rational procedures of information processing - without lapsing into unscientific introspection. One had an objective, physical model: the computer. The quality of being a formally describable mechanism - and a logical machine that was actually produced - made the information processor a legitimate scientific model of the human mind (Kirkebøen 1993a).

By rejecting the view that makes cognition formally equal to logical machines, ALifers also reject the rationalistic source of legitimation that followed the GOFAI view of the brain as a rational, inducting, and deducting expert. ALifers do, however, gain new legitimacy, first, by studying interactions between parts that are wholly external to the researchers themselves, and second, by the fact that these interactions produce new, emergent phenomena which researchers do not fully understand and are unable to predict. These phenomena can be analysed with technoscientific rigour. One student of ALife described this difference between AI and ALife in the following way: "When a Classical AI system is debugged and works, then the researcher also understands it, [because as a logical, formal system, once it works, it is also logical, and hence in principle understood] and the system can be sold. Whereas once an ALife system is debugged and is up running, then the problem of understanding it starts." ALife simulations open up new worlds to explore. By creating such worlds to explore ALifers also situate themselves in a position similar to the one Boyle constructed: as a distanced witness of phenomena which, albeit highly controlled, nevertheless emerge to the researcher as "un-thought-of", a nature existing in the machine.


Conclusion: anti-Cartesian, yet Boylean?

In the first part of this chapter I pictured a group of scientists who, in their theoretical speculations, denied that thinking entities should be seen as transcendental, Cartesian subjects who observe an objective world outside themselves. In the second part I have shown that the same researchers, in their practice, seem to reproduce the distance between the acting and perceiving subject and the object of action and perception. It seems as if they actually produce and reproduce the transcendental subject - the Boylean subject of technoscience - which they in their anti-Cartesian philosophies and theories deny.

There are several reasons why this need not be - and actually is not - as paradoxical as I have made it seem here. First, one may very well create distance between the observer and the observed - as part of a methodological pragmatism - without believing that one thereby discovers universal, subject-independent truths. Some ALifers hold such a pragmatic position (some with a good deal of cynicism).

Second, one might ask how "objective" an ALife simulation really is. After all, when the simulation is not running, it is the result of a computer scientist's programming. It is constructed by a human being. I am not asking this, at least not primarily, to critically deconstruct a proposed technoscientific objectivity. My point is to ask to what degree the ALifers themselves question the objectivity of their simulation. To what extent do they really position themselves as distanced witnesses of objective worlds, and to what extent do they change or contest this position? Or, to put it differently: to what extent is there some kind of correspondence between their relativistic philosophy and the role they give themselves as researchers? To what degree do they not only - in their philosophy - question the Cartesian dualism, but also - in their practice - question the Boylean distance?

The answer, I believe, is that to some extent they do. The main way in which they do this is by valuing and appreciating the "techne" part of their "techno-science". Here, the Greek techne should be understood both as "art" and "technique" (or engineering). ALife simulations are, on various occasions, valued not only as producers of an artificial nature which can be scientifically understood, but as aesthetic expressions of art and engineering. ALife simulations are, sometimes, explicitly aesthetic expressions. Through these expressions of engineering and art the researcher comes "closer" to his creation; the separation of the technoscientific witness and the object observed is blurred. How this takes place is one of the topics of the following chapters.

Yet ALifers are also doing "hard science", producing "sober", scientific analyses of their artificial worlds. How this element of technoscientific tradition is mixed with the element of novelty that I outlined above is a question that my further presentation of the theories, practices, and machines of Artificial Life - the rest of this thesis - will address.

The first chapter in the second part of this thesis is concerned with a problem that the ALifers at COGS raised. The problem can be briefly expressed thus: Is Artificial Life a "normal" technoscience? - or, to use emic terms: Is ALife a real or a postmodern science?


Chapter 3: Representations of ALife: a Real or a Postmodern Science?


Artificial Life. There is seemingly a contradiction between the two words that make up this phrase. "Artificial", as I said in the Introduction, means to ALifers, for example, "man-made"(21), constructed or artifactual. "Life", however, is by most ALifers thought to have emerged on Earth about 3 to 4 thousand million years ago, and later to have developed in a process of Darwinian evolution. "Life", in contrast to "artificial", is pregiven. The relations between the "made" and the "found", the artificial and the natural, are in many ways central to ALife research. As the announcement of the European Conference on Artificial Life 1997 (arranged by COGS) states: "This interdisciplinary conference aims to provoke new understandings of the relationships between the natural and the artificial." Let me sum up some of the relations between the "made" and the "found", the artificial and the natural, in ALife research that we have seen so far in this thesis.

In chapter 1 I discussed how technoscience, by means of certain machines (the experimental apparatus), the juridical idea of the trustworthy witness, and the separation of the subjects (of society) and the objects (of nature), constructs phenomena that appear as "pregiven". These phenomena are fabricated in technical and social contexts that also establish distance between the researcher and an object of inquiry that thus appears to be "untouched by human hands".

In chapter 2 I presented how ALifers programmed their computers so that they could conduct technoscientific experiments. Their simulations produce worlds of unpredictable phenomena. The emergent phenomena of these simulations have qualities similar to the objects of technoscience, for example Boyle's vacuum; they appear to the researcher as something that he finds rather than creates, they are "un-thought-of", "autonomous", and distanced. Like Real Life, the emergent phenomena exist as an objective reality, they can not be wished away. Like Real Life, the Artificial Life of the simulations appears as pregiven, as evolved in processes external to human beings. Yet ALifers know that their simulations are constructions of some sort, because they have programmed these worlds themselves.

In chapter 2 we also saw that ALifers, in line with notions from continental phenomenology, reject the objectivism of GOFAI. GOFAI-objectivism is part of a particular understanding of human beings. According to this conceptualisation, cognition is a process that involves subjective representations of an external, pregiven, and subject-independent reality. In contrast, the ALifers hold, an agent and its world define each other mutually, for example in co-evolutionary processes. Thus, the worlds of the agents are never totally "pregiven" to the agents, they also depend on the agents. When ALifers - and cognitive scientists in general - discuss "objectivism" versus "enactivism", when they discuss the relation between the actions and perceptions of an agent, then they are concerned with a theme that has to do with the relation between the constructed and the pregiven, the artificial and the natural. When ALifers describe the relation between an agent and its world as co-evolution, they sometimes also discuss the relation between the artificial and the natural. If we think of ourselves as such adaptive agents, then the worlds in which we live are not entirely pregiven (or natural), nor are they altogether constructed (or artificial). Co-evolving in interaction with our worlds, we are adapting to worlds that adapt to us.

This chapter is concerned with another theme which is related to the tension between "Artificial" and "Life", man made and pregiven. The particular focus is on how ALifers at COGS understand the notions of engineering and science, the making of machines versus the understanding of some kind of nature. In what sense is ALife an engineering enterprise, concerned with constructing artificial agents and worlds, and in what sense is it a scientific enterprise concerned with discovering new truths about nature? How can it be both? The following is a discussion of how ALife researchers understand their own endeavour as a scientific and an engineering enterprise. In this chapter I will not be concerned directly with "science in action" (Latour 1987) or scientific practices; I will discuss how ALifers talk about and value their own practice. We will be presented with "norms" and "representations", rather than with "actions" (cf. Holy and Stuchlik 1983). In the chapters that follow this one, I will present science in action; I will discuss how ALifers in practice understand and construct their artificial worlds. In this chapter my concern is with how ALifers themselves conceptualise their own production of knowledge and machines.

Some basic understandings of science and engineering

A central event that drew my attention towards this topic, was a discussion among ALifers at COGS during my fieldwork. The discussion revolved around the question of what Artificial Life research was and what it ought to be, and it focused especially on whether ALife ought to be a scientific or an engineering enterprise.

At COGS in 1994, there were (among others) two important reasons why the question of whether ALife was, or should be, a scientific or an engineering enterprise was raised. The first reason was a talk given by an ALifer, Terrence, at the ALERGIC seminar. The talk was entitled "Artificial life as theoretical biology: How to do real science with computer simulations." In this talk Terrence argued that Artificial Life ought to be - or become - what he rhetorically called a real science. His definition of such a science fits what I in chapter 1 described as modern technoscience, a rigorous and experimental study of a unified Nature. Some of those who contested this view characterised their ALife research as a postmodern endeavour. They were less concerned with discovering new truths about the nature of things, less concerned with understanding what was pregiven, and more occupied with creations of artificiality.

The second reason why ALifers discussed the terms science and engineering during my fieldwork, I believe, was that, partly inspired by Terrence's talk, I started to ask people how they understood these terms. The visiting anthropologist, asking what Artificial Life research was about, seemed to stimulate a particular awareness of this question among those who became objects of his study. I think one effect of my questions about their understanding of the terms science and engineering was that the difference was further stressed. The difference, I suspect, became more important during my stay than it was both before I came and after I left. In addition, the sharp rhetoric of the talk at ALERGIC - which I am about to present - also contributed to the increased importance of the distinction.

To approach the COGS debate on the scientific and engineering aspects of Artificial Life, I first need to explain two general ways in which the terms science and engineering were used and understood by ALifers and most other people at COGS. The difference in the use of the terms did not seem to me to vary with different people, but rather with different contexts.

In the first, general and common use of the terms science and engineering, the former had to do with understanding aspects of Real Life whereas the latter had to do with making machines. The following quotation is an example of this usage:

"People within AI can be broadly divided into those who are doing science - they are interested in, perhaps, computational models of cognition and intelligent artefacts as an aid to understand how humans and animals behave, perceive and plan; and those who are doing engineering - they are interested in creating useful or intelligent artefacts, and treat ideas from biology, neurobiology, evolution, psychology, etc. merely as means to this end." (Harvey, 1993:2)

In the second, general sense in which these terms were often used, engineering was seen as a type of science. Engineering, in this use, was the application of scientific methods in the making and understanding of machines. Engineering was the science of the machine. When science was opposed to this endeavour, it, too (in a COGS context), meant to make and understand machines. But something was added; this making and understanding should also say something about Real Life, about biological, social, or cognitive processes in general.(22) In science, the constructed and understood machine appeared to the ALifer as a system of signs pointing to something outside itself.

To most computer scientists at COGS (ALifers as well as others), engineering was a quite concrete practice. They would probably recognise themselves better in Lévi-Strauss' "bricoleur" - the scientist "of the concrete" - than in his "engineer" (Lévi-Strauss 1966). ALifers emphasised that their engineering required a set of practical, embodied skills in dealing with the equipment of their laboratories. Following Lévi-Strauss we might say that their computer programs and algorithms, like the elements of myths, were applied and joined together according to the meanings they had acquired through their "history of use" (Lévi-Strauss 1966:16-17) rather than through abstract and formal rules.

In the following I will present two opposing views of what ALife was and what it ought to be. I will call these opinions the "modern" - or, following Terrence, the "real science" - and the "postmodern" positions. I am using the terms "real science" and "postmodern" because these words were used by the ALifers themselves. I should, however, note that these views were not held in any clear-cut way by opposing camps of people. The same researcher could give his or her allegiance to both views, depending on the context of the things said. My distinction between a "modern" vs. a "postmodern" understanding of ALife is therefore an analytical distinction, separating views more than people.


Representations of Artificial Life as a real science

Quite often, when ALifers spoke about engineering and science, the former was normatively subordinated to the latter. Within this frame of reference the making of machines was seen as a means to the more important and respectable end of doing science, that is, of gaining knowledge of the natural world. Engineering, the practice of "merely" making machines, was not as important as making machines that said something about something other than themselves, namely, Real Life. A British ALifer told me that this normative classification was particularly strong in Britain. Here, "engineer" is not a protected title that you have to have an academic degree in order to use. On the continent, he said (and in Norway I may add), engineering is, within academia, a more respectable endeavour. This is symbolised in the formal education needed to become an engineer.

A defence of Artificial Life as a natural science

Terrence, in the talk I mentioned above, emphasised the normative asymmetry of science and engineering. His own announcement of his talk in the ALERGIC computer list gives a good picture of its content:

For scientific purposes, ALife is simply a way of doing 'thought experiments' in theoretical biology, by using computer simulation rather than human imagination and hand written maths. But most ALife research tries to construct amusing, overly complex systems, rather than doing scientific analysis of cause-and-effect relationships through the simplest possible systematic experiments and comparisons. This talk will focus on how to use ALife methods to address real scientific issues in evolutionary biology.

In this announcement Terrence also lists some methodological heuristics for ALife. In short, these heuristics tied ALife to biology. Terrence argues that in order to do real science you should start with a well-known biological problem, do thorough biological scholarship, and get "accepted, published, and critiqued in real biological journals." Terrence ends his announcement with the following:

The goal of this little manifesto will be to initiate some constructive debate about standards of research and scholarship in the ALife field, and to develop some methodological heuristics for making our research more scientific.

In his talk Terrence distinguishes between real science and computer science. This is part of an overhead slide that he shows:

 

Real science:
    Goal:     - knowledge of nature
              - theoretical importance
    Methods:  - analysing existing systems
              - hypothesis testing
    Skills:   - scholarship

Computer science:
    Goal:     - speed & money
              - economic relevance
    Methods:  - build new systems
              - debugging
    Skills:   - technical insight

One of the major problems in making ALife into a real science, according to Terrence, is the computer science influence. People are too concerned with building new, fancy systems, they do poor scholarship, and they do too little to understand nature.

Terrence also gave (a version of) his talk a second time, in 1995, a year after I had ended my fieldwork. (I did not attend the talk, but I received the announcement on E-mail, as I am still on the ALERGIC computer list.) His second announcement ends with the following:

This talk will not address ALife as engineering, entertainment, pedagogy, philosophy of biology, or runaway postmodern cult.

Most ALifers, when I asked them what they thought about Terrence's talk, said that they basically agreed with his criticism. The majority of the ALifers - especially Ph.D. students of ALife - did at some point express their desire to follow Terrence's norms. They did this by reading biological literature, by including biological debates and topics in their dissertations, and by attempting to make their simulations or artificial worlds prove or extend some biological knowledge about Real Life.

One of the major efforts to meet some of Terrence's norms that I observed during my time at COGS was the attempt to get biologists to attend an international conference on ALife (or, more precisely, on adaptive behaviour(23)) that COGS arranged in Brighton. The organisers hoped to convince biologists from the University of Sussex who had had something to do with COGS to attend the conference. One of their great achievements, of which they themselves were proud, was to get John Maynard Smith, a leading post-World War II evolutionary biologist, to give a talk. However, except for this prominent invited speaker, few biologists actually attended.

At the end of the conference there was a session called Discussion of SAB94, planning of SAB96. During this event someone argued that it would be a good thing to get more biologists to attend these conferences. One of the conference chairs of SAB92 replied something like this: "Biologists don't care about our work. We must show them the biological importance of our work to get them here. Several were invited to SAB92, but they all had 'good excuses' [his quotes] for not coming."

I am not telling this to show that biologists did not care about computer-simulated adaptive behaviour or Artificial Life, but to show that ALifers worried about their apparent lack of interest.(24) Their worries point in the same direction as those of Terrence: a worry about not being a "real science" that could contribute to the biological understanding of nature, and that would be part of an - at least relatively - unified science. They shared that worry with the organisers of the third European Conference on Artificial Life, ECAL '95, in Granada, Spain. The introduction to the conference proceedings reads: "It is our opinion, and that of many others, that the future survival of ALife is highly related to the presence of people from biology in the field." (Moran et al. 1995:V, emphasis in original)

The power of a unified Nature

In Terrence's argument we see a sociological claim - in applied, normative form - reminiscent of Collins and Yearley's (1992) argument that Nature is the consequence (and not the cause) of social agreement (see chapter 1). Terrence argued that the goal of real science was "knowledge of nature". An implicit, but important, premise of his talk was that he did not understand "nature" to be plural. He argued, implicitly, for a unified nature. In order to have a unified nature one also has to have a unified scientific community. If science is allowed to come to many diverse and even conflicting agreements, then science will produce many natures. If science fragments, Nature fragments. Two of the most important means of achieving this unity were, as we saw, first to do (biological) scholarship, that is, to read up on biological subjects in order to be aware of current positions and problems in biology, and second, to publish one's results and conclusions in biological journals. Thus ALife would be socialised into biology.

Terrence's insistence on keeping the scientific community (and hence Nature) unified, as I understood him, had to do with maintaining the power of technoscience. Terrence told me that one of the important qualities of Genetic Algorithms - particularly those which showed evolving animats in realistic graphical displays - was their ability to persuade people of the importance of Darwinian evolution. In the USA, he argued, where a large fraction of the Republican Party holds that the Biblical myth of creation should be presented to school children as equally true as (or truer than) Darwinian evolution, the educational role of Genetic Algorithms could be of great importance.

The power of technoscience, as we saw in chapter 1, is dependent to a large extent on the understanding of scientists as trustworthy witnesses of a Nature which is constructed as pregiven. They are spokespersons, representatives of Nature. But if Nature is pluralised, becoming the respective natures of many communities, then it becomes less clear that scientists are representing something independent of themselves. Nature may become the natures - as the religions, economies or political systems - of so many communities. In effect then - if not by intention - by arguing for the unity of the scientific community, Terrence was working to maintain an authority that he and others needed in order, for example, to fight what he saw as undesirable religious fundamentalism.

Finally I should note that the relationship between keeping ALife unified with biology, on the one hand, and securing the hegemonic position of technoscience to represent Nature, on the other, may not have been, and often was not, the reason why ALifers wanted their simulations to make important statements about biology (and thus to biologists). A conservation of the hegemony of technoscience to define Nature is, or may be, an effect of unifying ALife with biology, but it is not necessarily what was intended. Some ALifers were simply interested in biological problems. Others had less dramatic ambitions than fighting religious fundamentalism in the USA. Researchers of Artificial Life who found the views of life and cognition within this field promising - for one reason or another - simply wanted to share this viewpoint with others. As the organisers of ECAL95 write in the introduction to the conference proceedings: "We are convinced that the approaches and methods of ALife are important in understanding life, and we should work to maintain the interest and involvement of biologists, biochemists, geneticists, ecologists, and many other natural scientists in the exciting and important field of ALife." (Moran et al. 1995:VI, emphasis in original)

Whatever people's intentions for inviting biologists may have been, one effect of making ALife into a natural science may be to reproduce the image of Nature as one, and hence to reproduce the privilege of technoscience to represent it.


Representations of Artificial Life as a postmodern science

In his book on Artificial Life, the Danish biologist Claus Emmeche writes (here in English translation, quoted in Helmreich 1995:445):

... artificial life must be seen as a sign of the emergence of a new set of postmodern sciences, postmodern because they have renounced or strongly downgraded the challenge of providing us with a truthful image of one real world, and instead have taken on the mission of exploring the possibilities and impossibilities of virtual worlds (Emmeche 1994: 161)

Helmreich, Emmeche and others within the ALife community (see Helmreich 1995:445-451) have discussed how the making of many artificial worlds may bring about an understanding of truth not as one, describing the one real world, but as many. This may very well be the case. At COGS however, the rejection of objectivism was part of the rejection of GOFAI, a rejection that to a large degree defined the ALife research at COGS. The making of virtual worlds was, as we saw in chapter 2, more a consequence of their relativism than a cause of it, even if speaking of worlds in the plural may continue to develop and change this relativism.

In this section I will focus on two ways in which ALifers at COGS talked about practising ALife, two representations of the ALife endeavour. Both of these were labelled postmodern by ALifers. The first of these representations was a rejection of the value scale in which engineering was seen as a less respectable endeavour than science. The role of the engineer - and the creative process, the process of creation - was given increased value.

Creative engineers

I asked Gregory - the researcher at COGS who most consistently defended the role of the engineer - what he thought about Terrence's recipe for doing real science. He answered: "I avoid that problem by claiming I'm an engineer." He used the word engineering in the sense of making machines (as opposed to understanding nature), and not in the sense of "making and understanding machines by the use of scientific methods". He called himself an engineer, as I understood him, because he did not see himself as discovering anything. He made something. As a part of this picture he described his relation to his evolving robots as a co-evolution. His own understanding of how to make robots developed in interaction with his developing robots, just as his developing - or evolving - robots developed in interaction with his understanding of how to make them. He did not see himself as increasing his subjective "knowledge" about a pregiven, objective world (a knowledge - in the GOFAI sense, see figure 2 - seen as some map, or system of inner "representations" or "symbols" ). He developed his own adaptive behaviour in the context of the robot and the robot lab, and in interaction with the robot's ability to adapt to the presence of Gregory (and the tasks Gregory set the robot to solve).

We will get back to Gregory's engineering later. But first I will turn to another argument that was raised in favour of increasing the value of engineering, or more precisely of increasing the value of the creative process in doing technoscience. This argument was a sociological rejection of Terrence's program for how to unify ALife with biology. The person expressing it - whom I will call John - was an AI researcher at COGS. He was part of the ALife group, in the sense that he was an engaged and regular participant in ALERGIC, but he was also an articulate and provocative critic of ALife at COGS. We might say that John played the role of the devil's advocate in the ALife group. Part of his argument against Terrence - expressed at the ALERGIC seminar after Terrence's talk, and, at my request, later repeated to me while I took further notes - runs something like this:

"I think doing like Terrence wanted - publishing in scientific journals and integrating our work into "real science" - is a lost cause. It's just too many papers, too much chaos. With the photo copier, the computer and the telefax there is no way to bring things together again. [...]"

I ask: "But what should we do, should we give in to the chaos or not?" ["We" here refers to the AI and cognitive science (including ALife) community.]

"There are two things we could do. We could either do as Terrence said, but I don't think it would work. Or we could do as you said, "give in". But we could not admit that. We would get troubles with the funding agencies if we admitted openly that it was a kind of art or self-expression we were doing, and nothing more. So we will have to claim that it is engineering we are doing. We are, and we will have to be, closet artists clothed as engineers."

In John's argument we see a double rejection of Terrence's real science. First he is arguing that researchers at COGS will have to claim to do engineering rather than real science. This is the way to get funding. Integrating into real science is a "lost cause". But he takes his argument further. He is also, implicitly, criticising the view of engineering as the strict application of scientific methods in order to improve the understanding of how to make machines. Doing engineering is a "kind of art or self-expression".

John was surely exaggerating things a bit in the passage above. He wanted to stir up some debate at ALERGIC, and he wanted to give me a good argument against Terrence when I went home to write about this. But he also expressed a serious opinion, and, at least when it comes to the artistic aspect of ALife, I feel that he is touching on a fundamental characteristic of ALife as I knew it. We will later see that not only were ALifers "closet artists" when presenting fancy and/or advanced simulations, but they sometimes explicitly referred to their simulations as art, and to themselves as artists. I will discuss the topic of ALife as art further in chapter 6, which deals with how ALife simulations were presented at conferences. Let me just mention that while agreeing with John about the artistry of ALife, I do not see it as my task to take his side in his controversy with Terrence. Terrence also agreed with John; he referred to this artistry as the construction of "amusing, overly complex systems" and as "postmodern cult". Terrence's point was normative; he did not like it. John's point was pragmatic; we can't avoid it.

Out of control

The second major postmodern way to understand what ALife research was and what it might become was what I will call the (relative) distrust of scientific analysis. This conceptualisation of ALife questioned the scope and role that the scientific method of rigorously analysing a system could play. The distrust was also, and especially, directed towards engineering - here understood as the ability to do scientific analyses of artificial systems (that is, machines).

Some ALifers at COGS thought that an evolved artificial system could, and should, be analysed in retrospect, others were more in doubt as to the usefulness and future possibilities of doing this. To understand this difference of opinion we first need to have a look at what performing such an analysis actually entails.

Figure 4 pictures an evolved, artificial "brain" which controls the movements of a robot. In a population of 30 brains, those that were better able to guide the robot towards the white triangle (see figure 5), and not towards the white rectangle (in an otherwise black room), were more likely to "survive". After about 15 generations the robot finds its way to the triangle. (A schematic sketch of this generational loop is given after the figures below.) The network ("brain") thus evolved is not understood in minute detail.

Figure 4 Evolved robot "brain"

Figure 5 Robots finding a white triangle

(Harvey et al. 1994:400), reprinted by permission of the MIT Press.
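For readers unfamiliar with how such a generational selection scheme proceeds, the following is a minimal sketch, in Python, of the kind of loop described above: a population of candidate "brains" (here reduced to flat parameter vectors), a fitness score for how reliably each one guides the robot to the triangle, and selection with mutation over successive generations. The population size (30) and the number of generations (about 15) are taken from the text; everything else - the placeholder fitness function, the mutation rate, and the representation - is a hypothetical simplification, not the actual COGS implementation.

```python
import random

POP_SIZE = 30        # population of candidate "brains" (from the text)
GENERATIONS = 15     # roughly the number of generations reported
GENOME_LENGTH = 64   # hypothetical: length of the parameter vector encoding a network
MUTATION_RATE = 0.05 # hypothetical: mutation probability per parameter


def fitness(genome):
    """Stand-in for the real evaluation: in the actual experiments the score
    would come from running the robot (or its simulation) and measuring how
    reliably it approaches the triangle rather than the rectangle."""
    return -sum((g - 0.5) ** 2 for g in genome)  # placeholder objective


def mutate(genome):
    """Copy a parent genome, perturbing some parameters at random."""
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]


# Initial population of random "brains".
population = [[random.random() for _ in range(GENOME_LENGTH)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    scored = sorted(population, key=fitness, reverse=True)
    survivors = scored[: POP_SIZE // 2]             # the fitter half "survives"
    offspring = [mutate(random.choice(survivors))   # and reproduces with variation
                 for _ in range(POP_SIZE - len(survivors))]
    population = survivors + offspring
    print(f"generation {generation}: best fitness = {fitness(scored[0]):.3f}")
```

The sketch leaves out crossover and the details of how a genome is decoded into a network; it is meant only to make the shape of the generational loop concrete.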

The exact, causal web linking the different nodes of the network is not understood. Analysing the network means producing such an understanding. A description based on such an analysis may look like the following:

When the signal from receptive field 1 (v1) is high but that from receptive field 2 (v2) is low, the connection from unit 0 to unit 14 generates a rotational movement. When v1 and v2 are both medium high, the inputs from unit 1 to units 12 and 13 tend to cancel each other out whereas unit 14 is strongly activated, again resulting in a rotational movement. When v1 and v2 are both high ... (Harvey et al. 1994)

... and so it continues. The detailed interactions between the different parts of the system (the units or "neurons") are described, and these descriptions of causes and effects are again linked to the movement of the robot. An understanding of the detailed workings of the control system, causally explaining the robot's behaviour in its environment, is the result of a scientific analysis of this kind.
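To make concrete what such an analysis involves, the sketch below steps a small, fully specified recurrent network and records each unit's activation under contrasting input conditions - the kind of node-by-node bookkeeping from which causal descriptions like the one quoted above are assembled. The network here (four units, hand-picked weights, the two "receptive field" inputs v1 and v2) is an illustrative toy of my own, not the evolved controller of figure 4.

```python
import math

# A toy recurrent network: 4 units, 2 external inputs (v1, v2).
# The weights are invented for illustration; the evolved controller in
# figure 4 has more units and connections, discovered rather than chosen.
WEIGHTS = [            # WEIGHTS[i][j] = connection strength from unit j to unit i
    [0.0, 0.4, -0.3, 0.0],
    [0.6, 0.0, 0.0, 0.2],
    [-0.5, 0.1, 0.0, 0.0],
    [0.3, 0.0, 0.7, 0.0],
]
INPUT_WEIGHTS = [      # how strongly v1 and v2 feed each unit
    [1.0, 0.0],
    [0.0, 1.0],
    [0.5, 0.5],
    [0.0, 0.0],
]


def step(state, v1, v2):
    """Advance the network one time step and return the new unit activations."""
    new_state = []
    for i in range(len(state)):
        net = sum(WEIGHTS[i][j] * state[j] for j in range(len(state)))
        net += INPUT_WEIGHTS[i][0] * v1 + INPUT_WEIGHTS[i][1] * v2
        new_state.append(math.tanh(net))   # squashing activation function
    return new_state


# "Analysing the network": run it under different input conditions and log
# every unit's activation, so that claims like "when v1 is high and v2 is
# low, this connection drives a rotational movement" can be checked.
for label, (v1, v2) in {"v1 high, v2 low": (1.0, 0.1),
                        "v1 and v2 medium": (0.5, 0.5)}.items():
    state = [0.0, 0.0, 0.0, 0.0]
    for _ in range(10):
        state = step(state, v1, v2)
    print(label, ["%.2f" % a for a in state])
```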

Some, particularly Gregory (who "avoided the problem [of doing real science] by claiming [to be] an engineer"), questioned the future fate of such analyses. In a not-too-distant future, he guessed, evolved robot brains would be of such size and complexity that no one would be able to analyse them. As he put it: if humans had brains small enough to be analysed, then humans would be too stupid to do the analysing; and if we have brains big enough to analyse a brain (natural or artificial), then the brains we are able to analyse will not be very big (in which case they will be very far from a human brain).

When, and if, such "un-analysable" systems emerge, then this, some ALifers speculated, will also have important consequences for our ability to rationally control our technology. What cannot be understood in detail cannot be controlled in detail. And if we cannot control ALife and other technologies, then we should also give up trying to control them. A Ph.D. student at COGS expressed such a concern when I interviewed him:

"[A few years ago] the Californian telephone exchange went down. It caused lots of stir in the computer world, 'cause the whole of California was cut of, I think for a day or so, and it sent shock waves into California. [...] It took the whole programming staff of AT&T or whatever it was, about 500 programmers, they worked for a whole night, going through, I think, something like a billion lines of [computer] code to find the bug.

"They found it! I am amazed, I'm actually amazed, I would never have found it. I thought it was an impossible task, but they found it. They found the bug, and corrected it, the one line of code, the fault in one line of code, in a billion lines of code, which had brought their entire network down, when the conditions had so occurred, when this one thing that had never been tested or never discovered, caused the lot to collapse. They put the one line of code in, and the thing worked again, when the [specific] condition came back. [...] We don't know how many of these bugs there are. [...]

"Are you now at the complete limit of what a team of programmers can cope with?"

The "complete limit" that the Ph.D. student speculates about, is the limit of rational design and analysis of systems like international telephone networks - where every single line of code is designed with human intention and analysed with scientific rigor. "So, what's the alternative?", the Ph.D. student asks. The alternative he proposes is that we, if we want to develop such systems further, "let go of control". He speculates further:

"The human [can] develop an interdependent relationship with the computer in which the human asks the computer to do things for him, but doesn't say how to do it, and the computer evolves to service what the human asks [...] It is a fundamental human, or moral decision if they [or we] are prepared to let go of the control over computers. [... This may sound] unattractive to telephone companies, because you loose control over it, but it may be that the only way that something like [the telephone network] can grow and grow and grow is that it becomes self organising."

His ideas resonate with what science writer Kevin Kelly writes in his latest book Out of Control:

Until recently, all our artifacts, all our handmade creations have been under our authority. But as we cultivate synthetic life in our artifacts, we cultivate the loss of our command. "Out of control", to be honest, is a great exaggeration of the state that our enlivened machines will take. They will remain indirectly under our influence and guidance, but free of our domination (Kelly 1994:329).

In these ideas about the "loss of our command" we recognise Gregory's endeavour to effect a co-evolution between himself and his robot. In such a co-evolution neither party determines the other in a one-way fashion. The researcher does not determine the robot by designing it "top-down", nor does the robot determine the knowledge of the researcher by imposing its "facts" upon a distanced, passively observing witness. The two parties are, to paraphrase Kelly, "under each other's influence and guidance, but free of unilateral domination." Neither the subject nor the object pole determines the other.

If Gregory - or someone else - succeeds in evolving (and not designing), for example, a robot brain which no one is later able to analyse, then, as I understand him, he will consider it a success. It will be a success granted him by probably the most important source of legitimacy within technoscience: the performance of a working machine. This hypothetical machine will, however, not produce any new, analytical knowledge that could be generalised to an analytical picture of Real Life (or Nature). By working without being designed or analysed, this machine will point to the limits of what, with analytical methods and in a technoscientific context, one can possibly understand. It will, somehow, transcend the technoscientific context in which it is made. Gregory sometimes called his enterprise a postmodern science. His hypothetical robot could become a postmodern machine.


Conclusion

The relation between representations and the represented is like the relation between a map and a territory (Bateson 1972). Representations - maps - make discrete jumps where there is continuity in the territory. I have presented two positions at COGS relevant to what ALife is, ought to be, or may become. I presented these two positions by describing some quite radical expressions of them; from Terrence's rhetorical division between "real science" and "computer science", to the Ph.D. student's speculations on the future of self-organising, evolving telephone networks.

This chapter is a representation of their representations. The two positions here described represent a plurality of positions. Among these positions I could have presented more nuanced views that better represented a nuanced reality. I have, however, not selected positions on what ALife is and ought to be according to whether I think they are "right" or "wrong" representations and norms of ALife. I have selected two positions that, by virtue of being well articulated and quite radical, defined two important poles among many possible positions.

The "real science" and the postmodern representations of ALife indicate two different aspects of the practice of Artificial Life research. In practice, however, the difference between these two aspects is more blurred than in the conceptualisations of them. We will in the following chapters see how ALife research reproduced a modern, real science, even while giving rise to elements of postmodernity. This will be clear in chapter 6 where we will see how ALife simulations - at conferences - were presented both as scientific results and as products of artistry and skilled engineering.

It will also be clear in the next chapter, where I will discuss how practising ALifers - inescapably embedded in a language - relate humans, life, and machines by the means of associations, metaphors and identities. In this use of language, we will see that the distinction between a real scientific language of pure subjects and objects and a postmodern (or a non-modern) language of impure "quasi-subjects" and "quasi-objects" (Latour 1993) is blurred and in some cases even resolved.


Chapter 4: Metaphors and Identities of Artificial Life Research


I would like to introduce this chapter with three examples of the use of metaphor in Artificial Life research. The first example is the following statement:

"There is a bug in my ALife simulation."(25)

Understood literally, the statement may be taken to mean that there is a small insect inside the simulation. This would have been great news for an ALifer, as making artificial insects is one of the aims of Artificial Life research. However, "bug" is the normal way to refer to an error in a computer program. It is part of the everyday language of all computer-literate people and has nothing to do with Artificial Life in particular. It is after the bugs have been removed (debugged) from an ALife simulation (and the program is "up and running") that it may start to produce, say, artificial insects.

My second example is the attempt to evolve robot-controller systems ("robot-brains") at COGS. As we have seen, this project was based on using notions from biology and Darwinian evolution as models for how to make robots. We might say that ideas such as natural selection were moved, in a metaphorical and creative leap, from one context to another, from biology to engineering and robotics. This metaphor, however, was not just a way of talking; it was a way of making a living. It was the set of basic notions that defined one of the research projects at COGS, a project that was funded by the University of Sussex and the UK Science and Engineering Research Council. Whereas bugs had nothing in particular to do with the Evolutionary Robotics Group at COGS - they had to do with computer science in general - evolution had everything to do with this group.

The third example is another one-line statement:

"These guys could be said to be adaptive...."

This was said by a researcher presenting his ALife simulation at a conference. I would like to consider the meaning of the term "guys" in this statement. The term referred to some simple, computer-simulated organisms: a group of animats. But, literally, these animats were not meant to be simulations of guys. They were, explicitly, merely meant to show some special aspects of how simulated organisms can adapt to simulated worlds. These guys, then, was a metaphorical expression. And the audience knew without explanation that it was not meant literally; we all knew that it was just a figure of speech. The phrase, understood as a metaphor, and as it was used and understood there and then, may have had several connotations. To mention two: first, the animats, labelled "these guys", made sense as a group of significant others in opposition to "us guys". It was a peer group, and the phrase stressed the communion between "them" and "us". Second, it may have suggested children. The small animats were like "little guys": little children.

Now, even if there was a shared understanding of the metaphorical character of these guys in the conference hall - it would have been an absurd situation if someone had asked "in what respect do these animats simulate guys?" - the expression is not altogether innocent. Among AI or ALife researchers - and most of the participants of the conference came from AI labs - the more life-like or human-like you can get a machine to behave, the better. Approaching, even if not attaining, the creation of something like "artificial guys" is one of the aims of this research. These guys, then, is somewhere between the two metaphors above. Like bugs, it is part of everyday language. One may refer to almost anything as these guys - say, in a given context, a box of nails. But in the case above the phrase also touches upon some of the central features of the discipline of Artificial Intelligence, as it makes "them" - our peer group of machines - a bit more like "us". Using phrases like these guys may be innocent, informal, or even funny slang one day. A couple of days later it may be more serious, as when, at the same conference, Rodney Brooks from the Massachusetts Institute of Technology (MIT) presented his Cog project, the attempt to make a full-size humanoid: a human-like robot which is meant to be socialised, a machine that is indeed much closer to a "guy".

This chapter deals with how expressions and concepts move from one context to another. I will discuss two kinds of movement. First, I will study how ALifers generalised from the organic world (including human beings) - what I have called Real Life - to the world of computers and robots, and, vice versa, from machines to organisms. So far, I have called these generalisations metaphors. Second, I will study the traffic between informal everyday language and expressions on the one hand, and more formal, scientific models on the other. These two movements questioned - in peculiar ways - some of the boundaries between humans, animals, and machines. For example, when these guys is used to refer to a box of nails we can simply call it an innocent anthropomorphism. But when the same expression is used to refer to robots or animats we suddenly find that the robot in question is actually an attempt to make a human being of some sort. The Cog project is an anthropomorphism in a very literal sense of the word. The innocent anthropomorphism has suddenly become a technoscientific model.

Using the term "metaphor" as I have done above is a bit tricky. In the above examples I have called both the informal language and the more formal, scientific models metaphor. In use, these metaphors are quite different from one another. One of the differences is what we might call their literalness. Talking about evolving robots is more literally meant than talking about these guys. This literalness is central. It can be seen in the difference between evolution and "brain" - where ALifers at COGS normally added quotation marks (sometimes gesticulated) or other reservations on the literalness to the latter word and not to the former. Sometimes the literalness of a metaphor is contested. The most general example of such a controversy is the debate between what is known as strong and weak ALife. The strong position holds that life in computers can "really be life". Life in computers may be different from life outside it, but nevertheless it is, or can be, a realisation of life. The crux of this argument is that the properties that make something alive or intelligent etc., are aspects of how its parts are related, not of the parts themselves. Hence, the organisation - or "dynamic form" (Langton 1989b:2) - that characterises biological life may also be found in computers. This means that one can make a large class of identical systems and give this class a name. In Langton's terminology this identity is life-as-it-could-be (Langton 1989b:1). Biological life - life-as-we-know-it (1989b:1), which just happens to be made up of carbon etc. - and Artificial Life in silicon, are both instances of this general class. Both life-as-we-know-it and Artificial Life are defined by their dynamical form, i.e. their relational nature, not their matter. Given this frame of reference, the unity of life-as-we-know-it and Artificial Life comes first, and studying Artificial Life in simulations is no more based on the use of metaphor than studying biological life is. They are both particular studies of life-as-it-could-be.

The opposing, weak position - or set of positions - holds that computerised or robotic life should be seen as useful simulations of life, but not as realisations of it. The word "simulation" is related to "simile", a figure of speech that equates two different ideas. That is, it is almost synonymous with "metaphor". We might say that a simulation is a metaphor, not expressed in ordinary language, but in the running of a computer program. In effect then, the strong side argues that the relation between biological and artificial life is one of identity, whereas the weak side argues that it is one of metaphor.

At this point I am confronted with a problem. In using the word "metaphor" as I have done so far, it seems as if I have taken the side of the weak ALifers in this debate. I do however want to be able to talk about debates like the one above - there are several - without taking sides in them. Will the word "metaphor" do as an analytical tool, or will I have to find another?

It is time to take one step back, to have a closer look at the notion of metaphor.

What is a metaphor?

Lakoff and Johnson begin their study of metaphors with the following sentence: "Metaphor is for most people a device of the poetic imagination and the rhetorical flourish - a matter of extraordinary rather than ordinary language." (Lakoff and Johnson 1980:3) In this use of the word, "metaphoric" is opposed to "literal". It is this meaning of the word that I applied above, when I discussed strong versus weak ALife. However, Lakoff and Johnson claim that metaphors are not just poetry, they are "pervasive in everyday life, not just in language but in thought and action." (1980:3) "Our ordinary conceptual system", they write, "is fundamentally metaphorical in nature" (1980:3). They continue: "The essence of metaphor is understanding and experiencing one kind of thing in terms of another." (1980:5, emphasis in original) An example of such a metaphor is the one they call argument is war. We talk about arguments as if they were wars. Examples of this include: "Your claims are indefensible" or "He attacked every weak point in my argument." (1980:4) Other metaphors include what they call personifications, for example "Life has cheated me." (1980:33) Here, the abstract notion of life gets a concrete, personified meaning. The statement, "These guys could be said to be adaptive" is another such personification. One of Lakoff and Johnson's points is that these metaphors are not merely a (poetic) way of speech. They are the way to speak. We constantly understand one thing in terms of another, there is often no alternative.

However, calling, for example, argument is war a metaphor presents several problems. In so-called "gunboat diplomacy", or, for example, in the USA's insistence that Saddam Hussein withdraw from Kuwait, the arguments were not only metaphorically a war. Both the threat and the actual use of guns, warships, and armies were meant to carry the argument. Argumentation and warfare had literally, not only metaphorically, become one. Arguments are indeed wars, but we need to distinguish between literal and metaphorical relations of these kinds. We may produce technical definitions of metaphor, extending or altering its everyday use, but we should not try to escape the everyday connotation of the word, namely that it is opposed to the literal.

Karin Knorr-Cetina acknowledges this everyday sense of the word by making a typology of what she calls "similarity classifications". Metaphoric classification is one kind of similarity classification. She writes: "Metaphor can now be seen as the form of similarity classification which involves the greatest distance between the conceptual objects involved, since it would be absurd or false to take the proposed conjunction literally." (Knorr-Cetina, 1981:51) Other similarity classifications include "primary recognition" - the recognition of an occurrence or an object as one thing, as when we recognise a chair as a chair - and "interpretations" which "classify an occurrence as 'actually' an instance of something else. [As when a spot on the horizon is interpreted as a house.]" (1981:51) I mention these two other similarity classifications in order to give a context to the metaphorical classification. My point is, however, not to use this typology to classify associations that ALifers use as either "metaphors", "interpretations", or "primary recognitions". I think such a typology is difficult to use. This is, first, because the boundaries between the different types are fuzzy. If, as I have argued, seeing the Gulf war as an argument is not metaphorical, then what is it, an interpretation or a primary recognition? I do not know. The war and the argument seem inseparable. Second, ALife researchers often disagreed as to whether a particular similarity should be seen as an identity relation or as something more metaphorical. ALifers, as we saw, sometimes contested the literalness of different claims: can computers literally embody life, or will life in computers always be simulations, similes? It is not my purpose to decide - by using Knorr-Cetina's typology - what sort of similarity a classification is, but rather to picture how my informants deal with the literalness of the similarities that they see between their machines and life in general.

Let me sum up this discussion with some definitions. On the most general level, I will speak of similarity associations. I use the word "association" rather than Knorr-Cetina's "classification". This is an adaptation to Latour's network theory. Associations - including similarity associations - make associates. To make an associate is to enrol an ally on your side in a controversy. I will return to this later.

Similarity associations can have different degrees of literalness. They may be treated literally, as identities, or more figuratively, as metaphors. I will call a similarity association metaphorical when the context in which it existed ascribed a not-quite-literal quality to the association. Claiming that ALife programs and robots are simulations of real life is to treat their similarity metaphorically. Saying that they can be realisations of life is to treat the similarity as an identity.

The following figure classifies the examples I have used so far in this chapter:

Figure 6 Degrees of literalness

The term bugs has little to do with Artificial Life research. These guys touches some of its essence. Evolutionary robotics on the other hand, is a serious ALife concern. It defines a research project. Strong ALife and life-as-it-could-be define an identity. In the figure above evolutionary robotics, the Cog project, and life-as-it-could-be occupy one box (the grey area), but I have graded them differently along the axis of literalness. In the following section I will show in greater detail why I have placed scientific models and paradigms on the "more literal" side of the axis. I will use the Cog project as the main example, and return to evolutionary robotics and strong ALife toward the end of the section.

Metaphors and Identities; Everyday Expressions and Scientific Models

As a contrast to the Cog project, consider first the following story: At the SAB94 conference a researcher spoke of the vision (the "eyes") of his animat. In order to illustrate the angle of the vision field - how far around the head the animat could see - the researcher made a movement with his hands from his own eyes and out to the left and right in space. The body of the robot was, there and then, understood as the researcher's own body, and the vision field of the robot was illustrated by the researcher's hand movements. The thing explained or illuminated was the robot, the thing used for illumination was the researcher's own body.

In this case the association was of a pedagogic, illustrative, and clearly metaphoric kind. It was not meant to be very literal: The researcher's point was not to say the robot's eyes were "human", that he, as an engineer, had designed (or evolved) a robot with human vision. And in the audience nobody understood his gestures as saying anything like "my eyes and body are really like the eyes and body of the robot." The association was intersubjectively and tacitly understood as pedagogic and metaphorical.

The aim of the Cog project at MIT is to make a human-like "animat": a full-scale humanoid, called Cog, from the waist up. It will not walk, but it will wave its arms, move its head and saccade (i.e. move rapidly) its eyes. Cog is the first (major) attempt in ALife to do what AI researchers have always done, namely make a machine that can tell us something about human cognition specifically. The relation between explanation and explained in the Cog project is in a sense the opposite of that of the ALifer who explained the vision field of his robot: here the thing to be explained is human beings, not robots, and the thing used for explanation is the robot, not the human body. But the relation is also more complicated: before the machine may be used to explain something human, human physiology (as it is known by psychologists) is used as a source of inspiration to make the machine. The two domains (joined by a similarity association) are explicitly used to illuminate each other.

The associations between human and machine in the Cog project have both literal and metaphorical qualities. To illustrate this, let us have a look at the following paragraph, quoted from the published project proposal (Brooks & Stein, 1993). In the section called Scientific Question, the authors express their scientific interests. They also show the proper, scientific limitations of "work in progress":

"This proposal concerns a plan to build a series of robots that are both humanoid in form, humanoid in function, and to some extent humanoid in computational organisation. While one cannot deny the romance of such an enterprise we are realistic enough to know that we can but scratch the surface of just a few of the scientific and technological problems involved in building the ultimate humanoid given the time scale and scope of our proposal, and given the current state of our knowledge. [...] Our previous experience in attempting to emulate much simpler organisms than humans [six-legged insects] suggests that in attempting to build such systems we will have to fundamentally change the way artificial intelligence, cognitive science, psychology, and linguistics think about the organisation of intelligence. As a result, some new theories will have to be developed. We expect to be better able to reconcile the new theories with current work in neuroscience. The primary benefit from this work will be in the striving, rather than in the constructed artefact." (Brooks & Stein 1993:2)

In writing that they "are realistic enough to know that we can but scratch the surface of just a few of the scientific and technological problems involved in building the ultimate humanoid", they express awareness of the large differences between Cog and a human being.(26) That is to say, their scientific conscientiousness is expressed when they say that we should not take an association too literally. In effect, what they say is that the similarity between a human being and Cog is a bit metaphorical.

However, despite this reservation, they assert that making and studying Cog - the robot - can help us develop "new theories" within such human sciences as linguistics and psychology. This is possible because of their assumption of a fundamental identity between Cog and a human being. On some level they are not only metaphorically similar, they are (at least potentially) identical. This identity is based on the mechanistic and formalistic premises of technoscience and AI in particular. Brooks and Stein would have had to defend this identity if, say, a Roman Catholic claimed that Cog could not possibly tell us anything about what it is to be human because it lacks a soul. In such a situation Brooks and Stein would have had to defend the mechanistic identity more seriously than the researcher who compared his robot's eyes with his own eyes. It is because of this identity, which we, paraphrasing Langton, might call humans-as-they-could-be, that studying Cog-the-robot might give us new cognitive, psychological, and linguistic theories of humans-as-we-know-them. Without assuming this identity, there would be no Cognitive Science (at least as we know it), and certainly no Artificial Intelligence or Artificial Life research. It would then be absurd to ask for funding to make and study a robot in order to learn something about human psychology.

In the two examples above - the pedagogic analogy between the researcher's own body and his robot, and the Cog project - we have seen similarity associations used in different contexts and for different purposes. In the first (pedagogic) case, there was no deep assumption about identity. The "un-literalness", the metaphorical intent, was taken for granted, both by the researcher and his audience. In the second example, the metaphorical character was used explicitly to show scientific soberness and modesty, whereas the literalness, the materialistic identity, was taken for granted (both by the researchers and the audience - I never heard the "soul-argument")(27).

In figure 6 I classified the evolutionary robotics project at COGS as a bit less literal than the Cog project. The reason for this is as follows: The evolutionary robotics group did not attempt to further the understanding of biological evolution in the same way that Brooks and Stein tried to contribute to the understanding of humans. Therefore, they did not have to defend any deeply founded identity between their artificial evolution and biological evolution. In an act of what we might call metaphorical creativity, they simply borrowed ideas from biology and applied them to robotics as what they called a useful technique. Evolutionary robotics at COGS was pragmatically rather than metaphysically oriented. It did not much matter if someone called the use of evolution a "mere metaphor" as long as it was useful in the attempt to produce smart robots. Evolutionary robotics, then, was closer to metaphor than Cog-the-humanoid. Nevertheless, evolving robots was usually taken quite literally at COGS. Later on in this chapter we will see how this literal language was related to the defence (of the legitimacy) of their research project.

At the bottom of the axis of literalness I have placed life-as-it-could-be. This is simply because this definition is not concerned with possible differences between life-as-we-know-it and life in machines. Artificial Life and Real Life become identical with respect to the properties that define life, they are both instances of life-as-it-could-be.

Together with the weaker (more scientifically "sober") positions of the evolutionary robotics group at COGS and the Cog project at MIT, the strong ALife of Langton (and the Santa Fe Institute) - studying life-as-it-could-be - also defines a research project. This is not a new research project. In 1948 Norbert Wiener wrote: "We have decided to call the entire field of control and communication theory, whether in the machine or in the animal, by the name Cybernetics..." (Wiener, 1948:19) Life-as-it-could-be thus defines the ontological domain of cybernetics. Thereby, it also defines the discipline which studies this domain (even if Langton, and many others with him, prefer to speak of Artificial Life rather than of Cybernetics). The weaker positions of the Cog project and evolutionary robotics are more sensitive to the differences within life-as-it-could-be. But these projects are also on some level dependent on the identity of this domain, because this identity also identifies their research projects as legitimate endeavours, worth their time and money, their efforts and funding.

In the following I will further discuss how the defence of the literalness of an identity is a defence of a research project and its underlying models or paradigm. However, as we saw above, similarity associations such as those created by the phrase evolutionary robotics are also related to creativity. So, before I get back to COGS and the defence of evolutionary robotics and artificial evolution in general, I will make a small theoretical detour and have a look at how metaphors are related to creativity.

Metaphors as more than "flashes of insight"

Metaphor is often not just understood as something that is, but also as something that becomes. We do not only have an explanation of "one thing in terms of another", we make an explanation by putting one thing next to another. In making a metaphor, there is a movement, a recontextualisation, going on. A famous example of this in a Norwegian context is related to the Sámi struggle against the Norwegian attempt (that succeeded) to dam up and develop the Alta river for energy production. One of the political events that the Sámi activists arranged was to place a Sámi herdsman's tent outside the Norwegian parliament building. Thuen (1982) explains this event as a process of metonymisation and metaphorisation. The herdsman's tent is metonymically (as a part of a whole) related to Sámi life and culture. The parliament building is similarly related to the Norwegian State. The conjunction of the tent and the building then becomes a metaphor that stands for the relation between the Sámi people and the Norwegian State. This creates a new meaning which is relevant to fourth world problems; the metaphor tells a story about the relation between a nation state and a group of indigenous people. This new meaning is dependent on the old meanings that the herdsman's tent and the parliament building had in their previously established contexts (Sámi life and Norwegian government).

The semiotician C.S. Peirce focused on such processes of recontextualisation in his explanation of the formation of scientific hypotheses. He called this process abduction. Properties of one domain of phenomena are abducted and brought into a new domain of phenomena (see K.T. Fann 1970, and Bateson 1979:89). Knorr-Cetina (1981:49-50) gives an example of such a creative process in science. The example runs like this:

Two molecular biologists are talking about some particular proteins. Discussing the properties of these particles, one of the biologists compares them to sand; "this protein really looks just like sand". The biologist then goes to her laboratory and arranges an experiment. Guided by the Protein is Sand metaphor she tries to demonstrate some of the "sand-like" properties of the protein in question. She does that by experimenting first with pure sand - to decide what sand-like properties really are like - and then with the protein, to see if sand-like properties can be re-found in this context. New meaning, or insight, is produced when "sand" is put next to "proteins" and the properties of the first carry over to the latter.

We might then ask: what is the fate of an abduction, a leap of metaphorical creativity, after this initial movement? First, to make a rough distinction, it may lead somewhere, or it may fail and be forgotten. It is those abductions that lead somewhere, that have a further impact, which interest me here. Victor Turner refers to the philosopher Max Black, who speculates that "perhaps every science must start with metaphor and end with algebra." (Black in Turner 1974:25) Alternatively phrased, this time by Turner himself: "If [the metaphor] is sufficiently fruitful, logicians and mathematicians will eventually reduce the harvest to order." (1974:27) This understanding implies that the thing abducted, for example sand, has connotations to its original context which will eventually be removed, and that what will be left in science is a model free from other meanings than its precise definition and logical deductions. Something like this "mathematisation" may sometimes be the case in some sciences, but it is not an inviolable rule. In ALife research the case is rather that the similarities that an abduction proposes continue to exist, but that they may change from metaphors to identities - and later, possibly, to metaphors again. The following story illustrates a movement from metaphor to identity. (We will later see examples of the opposite movement.)

Seeing biological heredity as a (linear) genetic code has become commonplace in biology. Biological inheritance is understood as sequential. Molecular codons are organised into strings, DNA molecules, that are read, transcribed, and translated into amino acids (the italicised words are part of the common vocabulary of biology). When the physicist George Gamow, in 1953, suggested that biological inheritance had to do with coding, he recontextualised some of the formal notions of codes and coding that had been developed in a context of logic and human communication, for example in telecommunication and in the making and cracking of secret war codes. So, just as Alan Turing, a central person in the invention of electronic computers, helped the British to crack the German secret code during the Second World War, the biological project that started after the discovery of DNA is often described as "the cracking of the genetic code". However, the metaphorical leap from human communication to genetic communication was not abandoned in the fifties to be replaced by a biology independent of this similarity. The similarity - it could be called Life is Code - is still a guiding notion for present-day biology. The Danish biologist Jesper Hoffmeyer, for example, has tried to extend it by exploring how cells and other biological systems can be seen as constituting subjects that (or who) actively interpret the biological codes as context-dependent messages (Hoffmeyer, 1993). The very project of Artificial Life - especially the Santa Fe version - is another example of further explorations of the Life is Code similarity. However, to some ALifers (and to many people outside the ALife community) this similarity is no longer a metaphor, it is an identity. Life is code, it is literally communication, processing of information, dynamical forms, etc., and Langton has classified all these identical "dynamical forms" under the general term of life-as-it-could-be (Langton 1989b:1).

Stressing the importance of metaphors as creative devices, Turner also quotes Robert A. Nisbet who writes:

"Metaphor is, at its simplest, a way of proceeding from the known to the unknown. It is a way of cognition in which the identifying qualities of one thing are transferred in an instantaneous, almost unconscious, flash of insight to some other thing that is, by remoteness or complexity, unknown to us." (Nisbet 1969:4)

Similarity associations play too complicated a role in the sciences of the artificial (ALife and AI) to be seen as mere "flashes of insight" which may later be developed by logicians and mathematicians to become algebra. As I have shown above, they continue to be of importance after the first abduction or metaphorical leap (even if the metaphor may eventually move towards an identity). When Brooks and his colleagues build Cog, they first let knowledge of humans guide the engineering of their robot. Their emerging understanding of Cog is then supposed to throw light back on their understanding of humans. (I am not using "supposed" because I do not think this is possible, but because the project is still in its initial stage and has not yet produced new knowledge about human beings.) This interaction is a continual process of movement back and forth between the two domains. There is no clear-cut distinction between the model and the thing modelled. Both domains illuminate each other; they are both models of and models for each other.(28) This takes place several hundred years after the first "flash of insight" suggested that, perhaps, "man is a machine".

In the example above, similarity associations are more systematically explored than merely as flashes of insight. Moreover, in ALife and AI research, the relations between metaphors and established models were more varied than the movement from metaphor to algebra (that Black pictured) or the movement from metaphor to identity (that I pictured). Some researchers might, for example, try to move a similarity - that others had established as an identity - in the direction of metaphor. In the following I will show some of these movements by using examples from COGS.

Contested literalness

As I have said before, there was a distinct difference in the use of the word evolution, on the one hand, and the words neurone and brain on the other, when talking about computerised versions of these biological processes and objects. In daily life at COGS, evolution was generally used by ALifers without any reservations. ALifers simply talked about evolving robots, not about "evolving" robots. The words neurone and brain, however, were seldom used without adding some reservation as to the literalness of these terms. People gesticulated quotation marks in the air, added "so-called", or avoided the terms altogether by speaking about nodes and networks rather than neurones and brains. In the following I call all such reservations quotation marks.

These words - and their quotation marks - were part of many scientific controversies at COGS. I will first have a look at evolution and at some of the controversy surrounding that word.

The literalness of evolution

In one of the first ALERGIC seminars I attended at COGS, there was a debate between an ALifer and an opponent of ALife. At some point in the debate the ALifer said: "We use GAs [Genetic Algorithms] because evolution works." This statement was replied to (a bit later) in the following way: "Surely evolution works, evolution has produced us. But this is not to say that GA works. We do not understand evolution very well, and we do not know whether we have captured the essential mechanisms of natural evolution in the GA."

The first of these statements, in saying that "evolution works", takes the identity between artificial and real evolution for granted. The second statement makes this assumption explicit and questions it. We might say that the first statement talks about evolution, whereas the second adds quotation marks; perhaps GAs are only "evolution". Let me briefly sketch out a context in which the ALifers' omission of quotation marks makes sense.

In chapter 2 I introduced Genetic Algorithms, the set of computer programs that mimic biological, Darwinian evolution. We saw that most of the researchers and Ph.D.-students at COGS associated with ALife used this algorithm in their research. We also saw that the emphasis placed on artificial evolution was related to the Heideggerian and cybernetic, anti-rationalist philosophy of Artificial Life at COGS. In short, some of these notions may be expressed as follows: Human beings lead their lives only to a limited extent according to plans, rules and (internal) representations (of external objects). This is partly because the ecological and social systems in which we live (and of which we are ourselves examples) unfold over time in ways that are unpredictable, even if they may be deterministic. (This is known as "sensitive dependence on initial conditions", see page 54.) Some non-linear engineering problems are also very hard to solve by rational design. They are better solved by evolving their solutions (as Phil Husbands did with certain industrial processes (Husbands 1993)). It may be that it is equally difficult to design living (or lifelike) creatures rationally, given that the behaviour and structure of life-as-we-know-it have come about as a result of non-linear processes that are unpredictable.

In this brief outline of some ALife ideas I hope I have shown some of the emic centrality of the concept of evolution in Artificial Life. To sum up, evolution, real or artificial, can produce non-linear systems that cannot be designed.

Seen from a more Latourian network-theoretical perspective, equating artificial and real evolution is useful because it enrols a large number of established biological truths and biological scientists behind one's algorithms. Biological evolution is an established fact, given legitimacy by a large body of data and the entire biological community. If a set of computer programs can be shown to be identical (in some respect) to biological evolution, then the legitimacy of the latter can be brought to bear on the former. In some contexts this enrolment is explicit. Inman Harvey in his Doctoral thesis on evolutionary robotics, for example, applies Charles Darwin's notion of natural selection to his Genetic Algorithm. He gives a thorough presentation of different biological conceptualisations of evolution. Darwin's notions are discussed, and he is explicitly referred to in the text, for example as "(Darwin 1859)" (Harvey 1993:33). Through Harvey's discussion of the different ways in which biologists understand evolution by natural selection, his use of "evolution" and "selection" is given credibility. These concepts are supported by a long biological tradition.

Equating artificial and real evolution in an oral debate is a less explicit way to enrol Darwin (and others). Questioning the identity of the two kinds of evolution (as the opponent of ALife did) is an attempt to "dis-enrol" Darwin and the rest. Similarity associations are weakened, and an attempt is made to remove credibility.

I should note that ALifers, of course, often distinguished real from artificial evolution. Many ALife papers discussed this difference, often in attempts to make more "realistic" (and hence efficient) artificial evolutions. Here, however, I deal with the fact that evolution was - in everyday speech among ALifers - strengthened by being discussed without adding quotation marks. On the other hand, during the controversy referred to above as well as on other occasions, the differences were stressed - quotation marks were added - by opponents of ALife in their attempts to weaken the associations of the ALifers.

The literalness of "brain" and "neurone"

The case of the words "brain" and "neurone" - including their quotation marks - is similar to the one above, but with the tables turned. It was generally quite incorrect, both at COGS and elsewhere, to use brain and neurone without some kind of reservation. Rodney Brooks, in writing about the Cog robot, makes an ironic point about this. He writes: "The brain** of Cog is called pb [...] and is a specially built MIMD [a technical abbreviation] machine." And his footnote reads: "** Somehow it seems scientifically unrespectable to use this word." (Brooks 1994:25) There may be many reasons for the caution that Brooks refers to. Generally, people within cognitive science and AI were aware of the many differences between artificial and real brains and neurons. One reason, and an important reason as I understood it, why ALifers at COGS always used quotation marks, was their need to distance themselves from a growing branch of AI - at COGS and elsewhere - known as Parallel Distributed Processing (PDP).

Briefly stated, this research (like much ALife research) attempts to build computerised artificial intelligence by making collections of artificial neurons. The first, biologically inspired, artificial neurons were made by one of the early cyberneticians in the 1950's. However, this approach to AI was almost forgotten for 25 years as the more logically oriented Information Processing paradigm became the leading trend. Then, in the middle of the eighties Parallel Distributed Processing - PDP - had its revival and became respectable AI. (See Dreyfus and Dreyfus 1990, for a vivid account of this story.)

PDP has normally been seen in opposition to the Information Processing (IP) paradigm - or to Good Old Fashioned AI. While the IP paradigm was inspired by logical formalisms, PDP was biologically and empirically inspired. PDP, then, shares some assumptions with Artificial Life at COGS. They are both concerned with biological realism, and they both reject the notion that cognition is a process in which discrete symbols (like "1" and "0") are processed in the brain. However, PDP, as it became known in the eighties, also shares some assumptions and techniques with the IP paradigm that (most of) the ALifers at COGS strongly rejected, and that made most of the ALifers classify PDP as just another case of GOFAI. The most dubious aspect of these networks was the way in which they were related to their "environment". PDP researchers made networks that consisted of some input units, some hidden units, and some output units. Just like researchers in the IP paradigm, PDP researchers provided the input to the machine and made sense of the output. Figure 7 shows such a network. PDP research was just as brain-centred as traditional GOFAI: it studied brains without context. For similarities to GOFAI, or the IP paradigm, compare the figure below to figure 2 on page 40.

Figure 7 A Parallel Distributed Network
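To give a concrete picture of the kind of three-layer network the figure depicts, the following is a minimal sketch, written here in the programming language Python purely as my own illustration; it is not taken from any PDP publication, and the layer sizes and random weights are arbitrary. The researcher supplies the input pattern, the network propagates activation from input units through hidden units to output units, and the researcher makes sense of the output.

# A minimal three-layer network of the kind shown in Figure 7.
# Illustrative only: weights are random, sizes are arbitrary, and the
# "meaning" of input and output is supplied by the researcher.
import math
import random

def sigmoid(x):
    # A squashing function: each unit's activation lies between 0 and 1.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # Each unit sums its weighted inputs and squashes the result.
    return [sigmoid(sum(w * a for w, a in zip(unit_weights, inputs)))
            for unit_weights in weights]

random.seed(0)
n_in, n_hidden, n_out = 4, 3, 2
w_hidden = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
w_out = [[random.uniform(-1, 1) for _ in range(n_hidden)] for _ in range(n_out)]

input_pattern = [1.0, 0.0, 0.0, 1.0]   # provided by the researcher
hidden = layer(input_pattern, w_hidden)
output = layer(hidden, w_out)          # interpreted by the researcher
print(output)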

The ALifers at COGS, although they did make such networks, stressed that these networks should be embodied in some animat which in turn should be embedded in an environment. This larger unit, they argued, is the proper unit of study for the cognitive scientist.

It was to a large degree this philosophical (some ALifers called it paradigmatic or ideological) difference between PDP and ALife (and between "PDP-ers" and ALifers) that was communicated when ALifers at COGS added quotation marks, one way or another, to the words "brain" and "neurone".(29)

At this point the usefulness of calling the general class of similarities I have been talking about similarity associations rather than Knorr-Cetina's similarity classifications should be clear. To attempt to move, say, brain towards identity (more literalness) or towards metaphor (less literalness) is also a way to strengthen or weaken a scientific position by adding or removing associates. Figure 8 shows this.

Figure 8 Making associates

PDP researchers distinguished themselves from GOFAI by emphasising both the similarities a PDP network had with the physiology of the brain, and the similarities that running (or training) such a network had with certain psychological phenomena, for example the fact that they had to be trained. (See for example "The appeal of Parallel Distributed Processing", (McClelland et al. 1986a).) In this way PDP research sought legitimacy in the field of empirical neuroscience and psychology. By enrolling neuroscientific and psychological facts, PDP-researchers gave their research a position of strength. ALifers at COGS weakened this enrolment by stressing the biological implausibility of PDP-nets. They did this partly by adding quotation marks (of some sort) to "brain" and "neurone", moving these concepts away from the model side, and toward the metaphor side on the axis of literalness.

In the next section I will present another set of cases where there is a movement between models and metaphors. This time the focus is not on scientific controversies, but on the interaction between informal, metaphorical everyday language, on the one side, and formal, scientific models on the other. Sometimes these domains are mixed, at other times they are purified by keeping the "dirty" everyday language out of the "pure" scientific models.

Purifications and Anthropomorphisms

At the beginning of my fieldwork a female researcher showed me her Artificial Life simulation, displayed on the computer screen. We saw the emergence of certain self-organising macro-molecules, so-called hypercycles, hypothesised by Eigen and Schuster to have existed on Earth in the prebiotic soup, before the appearance of the first biological cell. (Eigen and Schuster 1979, see also Boerlijst and Hogeweg 1992 in the second ALife proceedings.) The computer simulation presented a molecular scenario of how early life might have emerged on Earth. Explaining to me what was going on, the researcher pointed to differently coloured "dots" on the computer screen. In doing so, she referred to the dot in question as she.

This was quite striking to me, for two reasons. First, because of her use of the feminine gender. This was in contrast to the more commonly used he, which I had heard so far in my fieldwork and which I also heard a lot later. Second, the "individuals" in her simulation were not meant to be simulations of organisms (like animats). Her "individuals" were more like "atoms" or "molecules" in the artificial chemistry of her simulation. Thus, they had, explicitly, very little to do with sex or gender.

I heard ALife agents referred to as she on two occasions. Both times the speaker was a woman (and not the same woman). When he was used - and it was used quite often - the speaker was always a male researcher.(30)

This use of pronouns points to an interesting distinction, the difference between what we might call the subjective 3rd person pronoun - he and she - and the objective 3rd person pronoun - it. The following story illustrates an important, and I think general, difference in use.

On a warm summer day in Brighton two friends were visiting me. We had pancakes with honey and champagne for lunch. However, through the open window came unexpected visitors. A growing number of wasps found their way to the honey can. Having grown up in a garden full of wasps, I have learned not to fear them. So I put some honey on my fingers, closed the honey can, and got the wasps to sit on my hand. I then shook them off outside the window. My friends, being normally afraid of wasps, were "amazed". The wasps, of course, came in again, and I was a bit proud to be able to impress my friends.

During this event I suddenly realised that they were addressing the wasp as "it", while I was using "he"/"him". I tried a couple of times to do as they did, calling a wasp "it", but it did not feel right.

In using the subjective 3rd person pronoun with my own gender, I expressed closeness and familiarity with the wasp. I identified the wasps with myself. My friends expressed distance using the objective 3rd person pronoun.

One common way to speak about the way I referred to the wasps and the ALifer referred to her artificial molecules would be to say that these were anthropomorphisms. In the Oxford Dictionary this word is defined as: "Ascription of human form, attributes, or personality to God, a god, an animal, or something impersonal." The word refers to a process - it is related to the verb to anthropomorphize - the process of ascribing human qualities to something. This means that anthropomorphisms can be understood as a special sort of metaphor. They move "human form, attributes, or personality" from one context to another, addressing one thing in terms of another. The contexts in question are, on the one hand, the subject or the society of subjects, and, on the other hand, the objects or the nature of objects. To understand something as an anthropomorphism, one has to presuppose these two contexts, that is, one has to presuppose the Society:Nature distinction. This means that anthropomorphism is not a human universal. It can only take place in particular times and places where society and nature, subject and object, have first been separated. Moreover, to say that something is an anthropomorphism is to reproduce the Society:Nature distinction because it takes this distinction for granted. It is a way of saying that this use of language is not literally meant, it is "mere metaphor". Thus it is, some might say, closer to poetry than science. It is, as I understand Bruno Latour, this reproduction of the Society:Nature distinction that he calls purification (Latour 1993). Many Western scholars have contributed - and still contribute - to this purification. Lakoff and Johnson (1980), for example, do it when they say that phrases like "Life has cheated me" are, generally, "personifications" and metaphors, ways of "understanding [...] one thing in terms of another". Not to specify the context in which these phrases may be metaphors, but merely to state, categorically, that they are metaphors, is to treat the separation of the two domains (persons and non-persons) as a universal. It is to make this distinction relevant everywhere.(31)

In his book We Have Never Been Modern, Latour sees the Society:Nature distinction (with their "inhabitants" of Subjects and Objects) as a central element of modernity. The reason that we have never been modern is that we always blur this distinction, although we claim that we don't. Lakoff and Johnson show many examples of how these blurred distinctions are necessary parts of our language use. But rather than defining such language use as "metaphoric", thereby making the Society:Nature distinction relevant everywhere and always, we may simply say that this language pattern assumes identities that do not conform to the Society:Nature distinction. This language use is neither particularly "pre-modern", nor is it "post-modern", it is simply not modern. Hence, "we have never been modern". "[The modern] work of purification" (Latour 1993:11) is a process that takes place when someone, in effect or by intention, deals with a non-modern element in such a way that it does not threaten the Society:Nature distinction. In chapter 7 I will discuss how social scientists purify the Society:Nature (and the related Subject:Object) distinction. Now, I would like to present a couple of examples of how ALifers deal with some of the non-modern elements in their language.

Presenting their work at conferences, some ALife researchers were careful not to anthropomorphize (and here the term is theirs). They were careful to refer to their animats as, for example, it. Sometimes researchers who used this objective 3rd person pronoun slipped into referring to an animat or a robot with a gendered term, for example he, only to correct themselves immediately; "... then he, it turns around..." A couple of times I heard researchers explicitly excusing themselves for having made such a slip, for example, by saying "... excuse me for anthropomorphising."

Addressing an animat as he is different from speaking of these guys. The latter is more innocent; there is an explicitness in this anthropomorphism that makes it a bit funny or even ironic. It is more clearly metaphorically meant. Hence, it is less anthropomorphic. The pronouns she and he, used as the woman used them of her artificial molecules or as I used them of the wasps, are less consciously chosen. Even if neither of us would claim that our molecules and wasps were "really" females and males, she and he are nevertheless identifications that are used without the distance of humour or explicitness. The female researcher who repeatedly used she when she addressed her simulated molecules did not, as I understood the situation, choose the female gender as a result of, say, feminist political correctness - any more than I expressed male chauvinism by using he. We simply identified our molecules and wasps with ourselves, giving them our own gender. To some ALifers, such identifications became troublesome, and they corrected and excused themselves.

These excuses are related to the contexts in which the identifications are made. I would never dream of excusing myself for calling the wasp he. It does not matter. To ALifers, however, it quite clearly matters. ALife and AI researchers are, among other things, "professional anthropomorphisers". Their job is to make anthropomorphic machines. Thus it is important, as some expressed it, not to make unjustified anthropomorphisms. Those researchers who corrected or excused their anthropomorphisms had shown an identification with - or a closeness to - their machines that they could not account for scientifically. The everyday identification of using a subjective 3rd person pronoun had interfered with their scientific project in an illegitimate way. The following figure, comparing he with these guys, shows this.

Figure 9 "Excuse me for anthropomorphising"

To the speaker, he has crossed a subjectively felt boundary between everyday language and the (to the speaker) scientifically legitimate language. These guys does not cross any such boundary, and does not need any excuse. That is because the excuse is implicit in the expression itself, given by the irony and explicitness of the anthropomorphism. Both the explicit excuse of he and the irony implicit in these guys are ways of making non-modern expressions harmless to the scientific project. They are both ways of avoiding unjustified anthropomorphisms.

In this thesis, we have seen many examples of how ALife research challenges some important boundaries. By following the formula that "Life = Cognition" (Stewart, 1992), they challenge the boundary between humans and life in general. Studying cognition is more than studying human beings (or some restricted area of human endeavour, like mathematics). It may even include the study of plants. And by making machines that think (or are instances of life-as-it-could-be), they challenge the boundary between humans (or life in general) and machines. This, however, does not mean that the distinction between subject and object becomes unimportant. On the contrary. One of the main aims of this research (particularly the cognitive science part of Artificial Life) is to make and understand "autonomous agents" (or subjects). In doing this, deciding which creatures to include and which to exclude as autonomous agents is of great importance. Generally, cognitive scientists at COGS were ready to include other creatures, be they animals, plants, or machines, in the set of things that could be said to have some sort of "subjectivity". Where to draw the line between those items that had some kind of subjectivity and those that did not was constantly discussed, but they seldom discarded the distinction between subjects and objects altogether.

However, sometimes discussions about the difference between "autonomous agents" and mere objects also involved a questioning of the very boundary between these domains. That is, the very boundary - not only where to draw the boundary - between society and nature or between subject and object was contested. One way to question this boundary was the phenomenological/cybernetic inclusion of an agent's environment as a necessary part of the cognition of the agent, or, as one critic of ALife expressed it, by "smearing the mind out in the environment."

In the following case ALifers discuss the distinction between agents and mere objects. But this discussion, as we shall see, also included a questioning of the distinction between "impure" (or non-modern) everyday language and purified scientific concepts.

Heretical Engineers

In addition to myself, four people from COGS are involved in this story, William, Luc, Gregory and Keith. There has just been an ALERGIC meeting, and, as usual, some of us have gathered at the IDS-bar.(32) It is a warm afternoon, just before mid-summer eve, and we have our pints outside. There is an informal atmosphere in which positions are held more to see where they may lead than to defend the legitimacy of a research project. The discussion takes place between William and Luc on one side and Gregory and Keith on the other. Before presenting the discussion, I will give a brief outline of the people involved in it, as I will use their personal, academic backgrounds to provide a context to this debate.

If their science had been physics rather than cognitive science, then the first two researchers would have been "theorists" and the second two "experimentalists". The first two were writing typically theoretical dissertations on cognitive science. In their daily work they read philosophy such as the works of (among others) Spinoza, Heidegger, and Merleau-Ponty, as well as newer cognitive science and philosophy. They were, like all philosophers at COGS, empirically oriented. But their work related mainly to what others had written, not to experiments they had performed themselves. The second two researchers, in contrast, worked regularly with their robot-simulation in the vision lab at COGS. They programmed and debugged their computers, and they ran experiments. Of course, they were also philosophically inspired, just as William and Luc were to some extent empirically oriented, but the publications of Gregory and Keith consisted largely of presentations of their experimental results. So, even if the distinctions between philosophy, engineering, and science were blurred at COGS, and sometimes also debated (see chapter 3), the following controversy can be understood as a discussion between two philosophers on one side, and two engineers on the other. It is I who make the distinction between philosophy and engineering relevant here, but it is part of the story that the two engineers quite deliberately used their roles as engineers in order to be a bit heretical with respect to some common assumptions of much Western philosophy.

The discussion this evening at the IDS-bar revolves around the distinction between so-called content based versus non-content based behaviour. Content based behaviour is, roughly, a property of "things" whose behaviour is partly determined by their own internal "stuff" (e.g. their thoughts, drives or intentions - the "content" of their minds). Non-content based behaviour is a result of mere mechanistic forces, like balls moved by gravity or electrons in a wire moved by electricity. The main example used in the debate among the four is the so-called "wall following" behaviour of some robots. To design a robot that follows walls is a well-known experiment in AI/ALife. Small ALife robots have been made that do this, and their internal mechanism is not very complicated. If they sense something with their left whisker, they move a bit to the right. If they then sense nothing with the whisker, they move left, and, possibly sensing something again, turn right, and so on. The small robots oscillate in and out along the wall. They seemingly try to follow the wall.
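To make the simplicity of this mechanism concrete, here is a minimal sketch of such a wall-following controller, written in Python purely as my own illustration; it assumes a single left-hand whisker and is not the control program of any actual robot at COGS.

# Sketch of the oscillating "wall following" mechanism described above.
# Hypothetical robot with one left whisker; illustrative only.
def control_step(left_whisker_touching):
    # One rule: veer away from the wall when the whisker touches it,
    # veer back towards the wall when it does not.
    if left_whisker_touching:
        return "turn slightly right"
    return "turn slightly left"

# A robot following a wall on its left oscillates in and out along it:
for touching in [True, True, False, False, True, False, True]:
    print(control_step(touching))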

The philosophers - William and Luc - claim that in calling this behaviour "following" one is invoking a content based description of the robot. The word following assumes an agent who wants to follow the wall, who has the intention of doing so; "following" is a description of a meaningful action, not of just any kind of behaviour. It is a mentalistic term. But can such a robot, with its very simple mechanism, be said to behave intentionally? William and Luc do not think so. Keith and Gregory, however, remain sceptical of their arguments. I ask (a bit naively, and not quite sure whether I am missing some subtle meaning of the English word "follow") if to say that "the ball follows the wall", given that someone actually threw it along the wall, is not quite an acceptable description. Gregory adds another example: If you have a conical basket and you drop a ball into it, then the ball will search towards the centre of the cone. Isn't that content based behaviour? he asks. "Clearly not!", William and/or Luc asserts. This, Luc replies, is a metaphorical use of the word "search". We ascribe content based behaviour metaphorically, he continues, to objects that move merely by mechanical force. A ball has no will. It does not want to reach the centre of the cone; it just ends up there as a consequence of gravity. Keith and Gregory, however, are not satisfied. They want to know - on a scale from balls to humans - where content based explanations stop being metaphorical and start being literal.

It is easy to understand the philosophical problems involved in equating the search of a ball towards the centre of a cone with, say, a human being's search for gold. Aristotle labelled this identity between balls and humans (and everything else) with the general term "the final cause". This causality has been abandoned since the advent of the modern natural sciences. Mathematician and popular science writer Ian Stewart discusses (and demonstrates) the low esteem in which the final cause is held in his description of Galileo Galilei:

He [Galileo] lived in an age that accepted explanations of events in terms of religious purpose. For example rain falls because its purpose is to water the crops; a stone thrown in the air falls to the ground because that is its proper resting place.

Galileo realised that enquiries into the purpose of things give humankind no control over natural phenomena. Instead of asking why the stone falls, he asked for an accurate description of how it falls. (Stewart 1989:30)

The abandonment of the final cause, or teleological explanations in science, is the foundation of the criticism of functionalism in anthropology and sociology, raised among others by Jarvie (1968), for whom the demonstration of the teleology of an explanation is sufficient reason for questioning its validity. The way in which biological evolution is referred to as a purposeful agent has also been much criticised, for example by sociologist Howard Kaye (Kaye 1986). Neither balls, societies, nor biological evolution have purpose, intention, or final causes. They should not be explained teleologically.

In the above discussion, I see William and Luc, as philosophers, defending some version of this "anti-Medieval" or "post-Aristotelian" tradition, and delegating purposeful agency strictly to those things known as subjects.

Keith and Gregory's position also makes sense in relation to their roles as engineers, not philosophers. Building robots and running experiments rather than writing philosophically oriented dissertations, they need not show that much respect for a philosophical tradition. Indeed, one of the major reasons why Gregory called himself an engineer (see chapter 3) was to allow himself the possibility of being a bit disrespectful or even irresponsible towards the received philosophical tradition. During the event described above, I felt that Gregory and Keith quite deliberately played the irresponsible engineer in order to question the boundary between the set of objects that can be explained teleologically (in terms of purpose, content etc.) and the set of objects that cannot. In doing this, they also questioned the boundary between the metaphorical and literal use of content-based, or teleological, descriptions.


Summary and Conclusion

In this chapter I started out talking generally about "metaphors". I restricted this term, first by introducing the general phrase "similarity association", and then by saying that metaphors and identities are subsets of the broader phenomenon of such similarity associations. Metaphors and identities differ with respect to the degree of literalness the people concerned (the ALifers) ascribe to them. The science of Artificial Life (and of AI) consists of systematically exploring similarities between machines and life in general. An important means to preserve - and produce - legitimacy and strongholds is to attempt to situate similarities on the literal side of the axis of literalness.

I have presented two ways to achieve these positions of legitimacy and strength. First, I demonstrated how similarity associations that root (like a "root metaphor") or define a research project were treated more like identities. Materialistic and formalistic identities give legitimacy to a research programme such as the Cog project, where it is asserted that studying a robot can teach us something about human beings. Langton's identification of life-as-it-could-be gives legitimacy to a whole research programme (known as Artificial Life or Cybernetics). We also saw how these identity associations give legitimacy to the development and study of certain kinds of machines (like the Genetic Algorithm) by enrolling other scientists and the facts they administer (such as biologists and biological facts) as associates in scientific controversies. Weakening these associations by moving them toward metaphors is a way to dis-enrol associates.

In conjunction with the movement of research-defining similarities towards the identity pole, we have seen a movement of non-modern similarities of everyday language towards the metaphor pole. This gives legitimacy to a modern technoscience by avoiding unjustified anthropomorphisms. Informal similarities, like these guys or the researcher who used his own body in order to explain something concerning his robot, were tacitly and intersubjectively understood not as literal, but as metaphors. When, occasionally, someone felt that a non-modern mix of subject and object (referring to an animat as he) had been applied in a scientifically inappropriate way, we got a correction or an excuse: Excuse me for anthropomorphising. Anthropomorphisms can be used in times and places where there is an awareness of the separateness of the Subject and the Object. Excuses for making them are purifications that testify to and reproduce the local importance of this separateness. They assure us that what we are witnessing is an objective, Modern science that does not give explanations in terms of Medieval final causes or unjustified anthropomorphisms.

Having thus placed ALife-research firmly within modern technoscience, I must remind the reader of the other aspect of Artificial Life that has been present in this chapter, namely, the blurring of boundaries, both between the subject and the object and between non-modern everyday expressions and purified scientific concepts. We saw this most clearly in the last example, when two researchers at COGS, playing the role that I called "heretical engineers", questioned the boundary between intentional beings and non-intentional things. In doing this, they were exploring a position similar to the one adopted by Bruno Latour and Michel Callon, referred to earlier in this thesis. (In Latour and Callon's network theory humans and non-humans are referred to in the same language.) I expressed sympathy with Latour and Callon's perspective and I have applied some of their theories, for example where I have spoken of similarity associations in this chapter. In chapter 2 I also distanced myself from what (following Latour and Callon) I saw as the social determinist position of Collins and Yearley, where Nature is determined by the agreement of human beings, and where a firm basis is established for everything else: the uniquely human intention - the subject - and the capacity of a collection of subjects (a Society) to come to an agreement about, in the last instance, anything they may wish to agree about.

Having made this normative choice, following Latour and Callon rather than Collins and Yearley, I am also close to expressing sympathy with the "heretical engineers" rather than with the philosophers in the case above. I cannot deny such a sympathy. Here the philosophical position of some of my "informants" approaches the position of some of my "colleagues". Expressing such sympathy, however, does not mean that I think that the two philosophers - William and Luc - were wrong in seeking to establish a difference between humans and stones. There are contexts where not asking for this difference is both possible and desirable. But I do not suggest that one can have, or should have, a cognitive (or social) science - which is deeply founded in analytical thoughts and practices - without having some way of defining a subject as opposed to something that is not subject. I am sure that the "heretical engineers" - Keith and Gregory - would agree to this. I do not think that the above discussion pitted arguments for a fundamental denial of any differences between subjects and non-subjects against an equally serious defence of an essentialistic definition of the subject. It was rather an example of a more "playful", disrespectful, and ironic attitude towards some of the central distinctions of modern technoscience.

Before ending this chapter I will make some remarks on the difference, raised in the previous chapter, between what ALifers referred to as real and postmodern science. (In less emic terms this difference may be phrased as one between modern and non-modern technoscience.)

For a group of, for example, physicists who talk about elementary particles, purifying their language in order to do "real science" is relatively easy. They can simply treat all their anthropomorphisms of particles as "mere metaphors". They may avoid all such anthropomorphisms in their formal models, and they may treat all their informal, everyday expressions as "just a way of talking". Calling a quantum particle "Priscilla" is clearly just for fun. The boundary between, on the one hand, modern, purified models where subjects and objects are clearly separated, and, on the other hand, non-modern everyday expressions can be kept firm and clear (and all traffic of meaning across it can be denied). To the AI and ALife researchers, maintaining this boundary between pure models and impure metaphors is a lot more difficult. Their scientific endeavour consists of endowing machines with human and life-like qualities. They are "professional anthropomorphisers". Thus, there are anthropomorphisms in both their models and metaphors. From the most clearly metaphorical to the most literal anthropomorphism - from "intentional" balls searching for the bottom of a basket to "intentional" humans - there is a continuum. The engineers above asked where the anthropomorphisms stop being metaphorical and start being literal. Cognitive scientists - at COGS and elsewhere - know that cognitive science (including ALife and AI) cannot answer this question conclusively. The boundary between metaphorical and literal anthropomorphisms is fuzzy, contested, and often defined heuristically.

Yet, ALifers did operate with a boundary between the metaphorical and the literal. This boundary varied with people and contexts, but when someone felt that he or she had crossed it illegitimately, he or she would make an immediate excuse or correction; a "he"-animat was corrected to an "it"-animat. These corrections and excuses are purifications that remove non-modern elements from a real, scientific language of subjectivity and objectivity.

According to Latour (1993) modernity is, among other things, characterised by the following three points:

  1. The separation of subjects and objects. Through technoscientific practices this separation has led to:
  2. A large production of machines with an increasing degree of autonomy and agency (from turning windmills, through moving steam engines with their own "governor", to "thinking" computers). These machines have an agency (or a "quasi-subjectivity" (Latour 1993)) that is not second to the agency of most witch doctors' magical objects (see also Tian Sørhaug (1996)).
  3. Yet moderns, contrary to the witch doctors, deny their quasi-subjects (i.e. machines) any subjectivity. Modernity, then, is characterised by an enormous production of machines with some sort of autonomy and agency - machines to which the moderns themselves nevertheless deny any kind of subjectivity.

When ALifers purify their language they fit this definition of modernity (and Terrence's definition of real science): Through technoscientific methods they try as hard as they can to endow their machines with human or life-like qualities while at the same time addressing their animats in a language that objectifies rather than subjectifies them.

When ALifers do not purify their language, their practice fits with a definition of "non-modernity" or (in ALife jargon) postmodernity. This non-modernism, however, does not consist of a total rejection of everything modern. The first two points in the definition above are kept: Postmodern ALifers still make autonomous machines by means of a technoscientific practice that separates the subject (the researcher) and the object of study. Only the third point is skipped: the quasi-subjectivity of the machines is acknowledged.

One important reason why it is hard for ALifers, at least in some contexts, not to be "non-moderns" (or postmoderns) is that the boundary between their scientific model-making (which may be purified) and their everyday language practice (in which ALifers, like the rest of us, have never been fully modern) is impossible to draw.

ALifers, as if they were a group of "witch doctors", have started to acknowledge the subjectivity of their machines. They do not, however, do this by turning to mysticism, but by turning to their own everyday language. In our everyday language we - "moderns" - have always been "non-moderns", "witch doctors"; we do in practice endow our objects with a lot of subjective properties. Unlike, for example, physics, Artificial Life is a technoscience where it is hard to maintain a clear-cut boundary between everyday language and scientific models. That, I think, is an important reason why Artificial Life is about to become a non-modern technoscience.

In the next chapter I will discuss the laboratory practice of creating ALife simulations. The result of this practice is an object separated from the subject who made it (even if the object - the animat or robot - may be endowed with its own quasi-subjectivity). We will, however, see that in the actual process of making objective quasi-subjects, the boundary between the maker (the subject) and the made (the object), is, again, blurred.


Chapter 5: Intuitions and Interfaces


"I've never written ALife-worlds before."
(Ph.D. student of Artificial Life)

This chapter is concerned with the processes of "writing" ALife worlds, or more generally, with the overall practical process of constructing an ALife simulation. The empirical context of the chapter is the laboratories of COGS. I will first give a general introduction to what a laboratory is, then move on to draw an outline of the particular laboratories at COGS, and finally look at the finer details of the skilful, technical processes - in the ALifers' interactions with their simulations-to-be - that take place before the simulations appear as ready-made worlds at conferences and in journals.

As the initial quotation indicates, ALife worlds are written using a keyboard and a screen. Making an ALife program, however, involves more than verbal interactions with the computer; it involves visual/graphical interactions as well. Indeed, many ALifers emphasised that without a good graphical interface for their simulations it would be impossible to make them at all. An "interface" (to give a rough definition) consists of those parts of the running program and computer that are presented to the user. It includes hardware such as the keyboard, the mouse and the screen. And it includes software, such as that which enables the drawing of, say, black letters on a white background. In graphical interfaces the presentation on the screen is a drawing. The Macintosh screen with its "garbage bin" and small pieces of "paper" is an example of a graphical interface.

The ALifers at COGS also emphasised the skills, often referred to as intuitions, needed to make ALife worlds. These intuitions were needed in order to tune the simulation so that it could produce results. The development and application of intuitions (or skills) and graphical interfaces are central to the process of making a simulation. Intuitions and interfaces embody the relationships between the makers and the made (or the knowers and the known). We will see that whenever intuitions and interfaces appear together, they define each other mutually. An interface, a "face between", becomes precisely that - a mechanism between a user and another mechanism - when it is handled skilfully. One can then forget about the interface itself and concentrate on the thing it is a "face" in front of or a "window" into.

The means by which ALife worlds become populated with artificial objects (such as animats) are central to any discussion of the making of such worlds. This amounts to more than merely describing how certain "things" are made. It also includes a description and analysis of precisely how they become "things", how they become objectified. More specifically, we will see how an ALife simulation, in the course of its making, can become a stable technoscientific Nature made up of objects. Seeing this necessarily means seeing how the ALifer becomes a stable member of the Society of subjects (as these are defined in opposition to objects). Hence, we will see how the subjects of a scientific community can become distanced witnesses to an objective world.

Seeing how subjects and objects can become stable, separate entities presupposes a condition in which they are not stable and separate, where the boundary between them (if any) is blurred or moving. The general aim of this chapter, therefore, is first to describe a situation in which the boundary between the (subjective) researcher and the (objective) simulation-in-the-making is blurred and fluid, and then to see how an objective, ready-made simulation results from this situation. In the next chapter we will see how the ready-made simulations were presented at conferences.

Obtaining a detailed impression of what people do when they sit in front of a computer would probably require some sort of mechanical supervision of their actions (e.g. using a video camera). I made no attempt to do that. Instead, this chapter is mostly based on how ALife researchers at COGS, and particularly Ph.D. students of Artificial Life, talked about their skills and practices. At times this was in response to my questioning (which occasionally took place in front of the computer). At other times, these conversations took place in larger groups and were not initiated by me. Yet, the focus of this chapter is less on how people talked about what they did, than on what they actually did. From what people said about what they did, from observing different interfaces into ALife worlds, and from my own superficial experimentation with a Genetic Algorithm, I will attempt to give a picture of the practice of making ALife simulations.

I will start this discussion of the laboratory practices of ALife by outlining the general context in which these practices took place. We will have a look at the laboratories at COGS.

Laboratory life

The laboratories are central in technoscience. In these, facts are produced. These facts may vary greatly - from the vacuum of Boyle and his colleagues, to, say, the safety features of the latest Volvo - or, as we will see below, a marketing firm's experimentally derived knowledge of potential turkey consumers' likes and dislikes with respect to potential turkey products (Lien, 1996). Laboratory experiments are also central to the making of Artificial Life. At COGS, there were two labs in which ALife (and AI in general) was produced. The first was known as the vision lab. In this workshop the robots of ALife were made. The second was known as the D.Phil. lab. In this room there were about 12 computer terminals placed along three walls. Calling this room a "lab" was a bit metaphorical. The work conducted here could equally well have been conducted in front of any of the identical terminals connected to the computer network at COGS. ALife simulations were in fact frequently run from terminals outside the D.Phil. lab. The major laboratory at COGS, then, was not a concrete room; it was the computer network.(33) Nevertheless, the use of the word "lab", as in the phrases D.Phil. lab and vision lab, symbolises the importance of the idea - and the practices - of the laboratory at COGS. One Ph.D. student at COGS pointed out that using the term "lab" gave legitimacy and thus resources to the work of those with access to the lab, at the expense of the work of others. The labs were important sites where resources were invested. For this reason there was some dissatisfaction among non-ALifers at COGS when the ALifers, during my fieldwork, moved into the vision lab. They thus acquired privileged access to expensive resources at COGS. (The vision lab, with its easily stolen equipment, was always locked. Only the users of the lab had a key to its door.)

In giving a presentation of the general notion of a laboratory I will start by looking at an example far away from COGS, in the laboratory of a Norwegian marketing firm (reported by Lien 1996).

Eight women from Oslo, born between 1960 and 1975, are invited to give their opinions of potential new turkey products. They meet in a room in the marketing firm's building, sit down around a table, and are served small pieces of turkey prepared in various ways, together with coffee and sparkling water. They give their opinions of the different products. A video camera in a corner records the event. Lien comments on this experiment:

Each woman participates in an occupation distant from her daily life. Together with seven or eight strangers she is invited to partake of something that is not a meal in the cultural sense of the word, but rather a number of isolated, edible pieces of turkey. The pieces, which are served with coffee and sparkling water, are in many ways as disconnected from their culinary context as the participants are from their social context. (Lien 1996:52, my translation)

In this attempt at realising a technoscientific laboratory we see some of the essential features of the institution. These features include, to sum up Lien's account of Knorr-Cetina (1995): first, that a large population of some sort (e.g. all potential Norwegian turkey consumers) can be represented by a few spokespersons (e.g. 8 women); second, that you will not have to seek the objects of study in their own environment, but that they can be studied in a context that can be set according to your goals (the women are encouraged to give their opinions of what they eat); and third, that in laboratories one does not have to study events when they happen to occur. The researcher can make them happen whenever he or she is present and ready (when the video camera is running and the people outside the "dining room" are watching).

The above description of a laboratory matches the reasons one Ph.D. student of Cognitive Science and Artificial Life at COGS gave for wanting to work with computer simulations. A taped interview with him runs like this:

... I: Why not do biology proper?
Gananath: I'd really want to give back something to ethology. But if I did it at BIOLS(34) I'm afraid I wouldn't do simulation.
I: You could do fieldwork in an ecological setting.
Gananath: I don't know enough ethology to go to, say, Tanzania and study baboons, and I don't know enough robotics to make robots... So, I am really happy to work with simulation. Then I have more control over parameters. You may sit and stare at an animal for six months, and it may just not do what you want, no matter how you try to set them up. Whereas with simulation you've got some ways of manipulating things, such that whatever you are looking for, it will happen. You can say; after I did this, I got what I was looking for. For example, I increased the net [the size of the artificial brain], or ran the simulation longer, or changed the conditions of reproduction...

This passage makes it clear that simulation and robotic experiments have much in common with the laboratory set-up of the marketing firm above. By controlling the parameters of the simulation, that is, by manipulating the context of the set-up, the researcher can make things happen when he is present. If the results are significant, they can be generalised - so that ALife, as Gananath hoped, can "give back something to ethology" (just as the food manufacturer is interested in the Norwegian consumers in general and not in the 8 particular women). We might say, with Latour (1987), that as the simulation is enrolled as an ally in a controversy, it becomes a small representative of a large domain of phenomena.

Making a controllable simulation is geared towards producing results. I have emphasised the importance of producing results earlier. I called it the performativity of COGS (see chapter 2). Results are necessary allies in an argument. No-one at COGS rejected the principles of Darwinian evolution (of some sort), but these principles (and their defenders) would get in trouble within the cognitive science community at COGS unless they produced experimental results, either in the vision lab, or, more generally, from one of the terminals in the computer network.

In order to produce a context in which these results could appear, one needed to have access to a computer, to be able to run it, set up a simulation (or make a robot), and "control the parameters." It is to these skills I now turn.

Programming computers and understanding statistics

When I ask Gananath about his programming skills he tells me that he grew up in a generation where children played with computers as programmable mechanisms. (The first microprocessor-based home computers that hit the market 15 years ago were primarily made to be programmed, either in BASIC or, before that, in machine language.(35)) People older than he is did not have any experience with computers when they were young, and people younger than he - the teenagers and children of today - grow up with ready-made games and word processors with advanced graphics. The mechanisms of these newer computers are hidden. They do not appear as mechanisms, and you do not need to know anything about programming in order to use them.(36) Most of the ALifers at COGS (except for a few seniors) were in their twenties or early thirties in 1994 and therefore belonged to this first generation of computer-literate youngsters. Most of them had had experience programming computers from their childhood or undergraduate years. The ability to program computers, then, was one of the most basic skills of the ALifers at COGS (with the exception of a few philosophers).

A few times during my fieldwork people at COGS (ALifers and others) discussed whether the heavy and rather one-sided emphasis on learning the skills of programming might be a drawback to computer science. People learned a lot about logic, but not so much about the (qualitative or quantitative) methods of the empirical sciences. A young Ph.D. student, Marc, who came directly from his undergraduate studies to study artificial life, is a good example of this. As a preliminary project for his Ph.D., Marc had started to make a program that simulated a car that balanced a pole upright on its roof. The pole could sway along one plane, either "back" or "forth", and the car could drive back or forth. If the pole started falling forward, the car would have to drive forward faster than the pole, to compensate for this movement. But if the car drove forward too fast, the pole would start falling backwards, and the car would have to reverse to catch up. The aim of the simulation was to evolve (using a Genetic Algorithm) a controller system (a Neural Net) for the car's movement so that the movements became as small as possible, keeping the pole in a vertical position as long as possible.(37)

At COGS, I was given a desk in an office that I shared with 3 Ph.D. students. Marc, who sat next to me, told me that he was good at programming. He had been programming quite a lot for the last 5 years (including his undergraduate years). Having decided to study the problem of car-pole-balancing, it took him three weeks to write the simulation. The first week he made the physical set-up, the "world". The car and the pole were given a size and a certain weight. The car was placed on a ground. Newtonian physics endowed the car and the pole with inertia and exposed them to gravity. The next week Marc made the program to control the movement of the car, the Neural Net. It was designed with input nodes (sensing for example the angle of the pole), hidden nodes, and output nodes that controlled the speed of the car. The third week he made the Genetic Algorithm where populations of Neural Nets would evolve their ability to control the car. After these three weeks, a period of tuning and making sense of his simulation started. In this phase of his work, Marc was confronted with a problem: He had never learned statistics. His simulation produced enormous amounts of data, data describing the size and shape of his Neural Nets and correlating this with movements of the car, and data comparing different ways to evolve these nets. He fed his data into one of the statistical programs available on the computer network at COGS. This program produces Cartesian graphs on the basis of rows of numbers. But in order to apply this sensibly, and to make sense of the graphs, Marc had to sit down for a week and study statistics. He also read up on philosophy of science (Popper, Kuhn, Feyerabend) and started writing a paper on scientific methodology. For the next four months Marc ran his program, sometimes correcting "bugs", but spending his time to a large degree on running the simulation and analysing the data that these runs produced. Thus, while the first three weeks - making the simulation - were easy, Marc ran into many difficulties during the following four months.
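The following sketch, again written in Python purely as my own illustration and emphatically not Marc's actual code, shows a skeleton with the same three parts: a deliberately crude simulated "world", a small controller standing in for the Neural Net (here a single layer of weights, without hidden nodes), and a Genetic Algorithm that evolves the controllers by selection and mutation only, leaving out gene crossover for brevity. The fitness of a controller is simply how long it keeps the pole from falling.

# Skeleton of a car-pole simulation of the kind described above.
# Illustrative only: the physics is a caricature, not a faithful model.
import math
import random

random.seed(1)

def simulate(weights, steps=200):
    # World: a pole that gravity topples and a "car" whose push counteracts it.
    angle, ang_vel, cart_vel = 0.05, 0.0, 0.0
    for t in range(steps):
        # Controller: a single layer of weights mapping sensors to a push.
        sensors = [angle, ang_vel, cart_vel]
        push = math.tanh(sum(w * s for w, s in zip(weights, sensors)))
        # Crude physics: gravity increases the tilt, the push works against it.
        ang_acc = math.sin(angle) - 0.5 * push
        ang_vel += 0.02 * ang_acc
        angle += 0.02 * ang_vel
        cart_vel = push
        if abs(angle) > 0.5:        # the pole has fallen over
            return t                # fitness: how long the pole stayed up
    return steps

def mutate(weights, rate=0.1):
    # Each weight has a small chance of being nudged by a random amount.
    return [w + random.gauss(0, 0.3) if random.random() < rate else w
            for w in weights]

# Genetic Algorithm: rank the controllers, keep the best, and refill the
# population with mutated copies of them, generation after generation.
population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
for generation in range(30):
    ranked = sorted(population, key=simulate, reverse=True)
    parents = ranked[:5]
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]
    print(generation, simulate(ranked[0]))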

Marc's problem is captured in Gananath's statement (also referred to in chapter 2) that when a conventional program is up and running, then it is also understood, whereas with ALife programs the problem of understanding begins when the program is up and running. The program creates a whole new domain of phenomena to be studied; a world in ALife jargon. As opposed to the Real World, however, this world did not, sometime in Marc's childhood, appear to him as given. It appeared to him as a result of a process in which he had to 1) learn statistics, 2) apply a program for making graphs, and 3) run repeated evolutions on the computer network at COGS. Let me stress my point by comparing Marc's world with the world I now see outside of my office window. I see trees, a house, the sky, etc. My ability to see all this depends on the process of development I went through during the first years of my life. Given this development - in interaction with my environment - a world appeared to me in my childhood. It appeared to me as given and as "out there". People in general only need their own, developing body in order to perceive the everyday world in which they live. Marc needs a combination of highly specialised skills and tools to be able to both make and experience his simulation, his artificial world.

Running the GA on the network

First of all, Marc needs to be able to run his simulation on the computer network. This often requires considerable time and resources because his Genetic Algorithms need a lot of computer power. He runs his program on the large mainframe.(38) It is a powerful computer, but its computing power is shared by many people, and Marc has to run his GA on low priority, to allow other, more urgent jobs to be done more immediately.(39) So his GA is slow. Then he finds a way to run several parallel copies of his program on the smaller desktop computers in the network. Now, Marc is producing results much faster and is happy, but the system manager, the person in charge of maintaining the network, is not.(40) Marc's GAs are slowing down the whole network, keeping other people from doing their work, so Marc has to find another way to run his program. (A couple of other times, when the computers were slow, it was speculated among the research students at the D. Phil. lab that one of the ALifers had been allowed to run a GA in parallel on the network.) On another occasion, Marc is allowed to use the desktop computer of one of the researchers, who is away for some days. It is a fast computer, a Sparc 10 workstation, and Marc and another Ph.D. student have the computer all to themselves. For another few days, the GA runs at high speed. Most of the time, however, Marc has to settle for running his program at low priority on the mainframe.

When Marc is making arrangements like those sketched above, it is beneficial for him to know people - in order, for example, to get access to a free computer. It is always an advantage to have a good relationship with the system manager, and it is necessary to know, among other things, how to direct the GA through the network. In such situations, the GA exists as a "black box", an item in relation to other items. Marc can see his program by typing a command that makes a list of jobs and their priorities. His program is - as he physically sees it - a number in a list. However, when the GA has finished a run and has produced long files of data, another scene appears. The lists of data are fed into the interface program that produces Cartesian graphs, and a world of phenomena to be studied opens up to Marc. Looking into this world - and making it a world to look into - requires other skills and other tools than those described above. In the following I take a closer look at those skills and tools.

The legitimacy of talking about skills and intuitions

The importance of local skills in practising and reproducing science was first empirically documented by social scientists in the early 1970s (Collins 1975). This discovery went against rationalistic philosophies of science. Karl Popper, for example, claimed that simply from reading in libraries, people - without any previous knowledge of science, any socialisation, or any prior access to or knowledge of machines - could reproduce modern science and civilisation as we know it (Popper 1981:123). This thought experiment seems highly unlikely in the light of empirical descriptions of how science is actually reproduced (Collins 1975). The emphasis on local skills, however, does not necessarily contradict practising scientists' understanding of themselves. It was, at least, well in line with the conceptions the ALifers at COGS had of their own practice.

To a large degree, I got to know about people's skills and intuitions through their own awareness of them. This awareness, quite clearly, had to do with the influence of phenomenological philosophy and the increased emphasis - in the cognitive science milieu at COGS - on embodied knowledge (which is an expression of the same tradition that inspired sociologists to look at scientists' skills). Two influential writers in this tradition are the Dreyfus brothers. In Mind over Machine (1986) they describe the process of becoming an expert as a movement away from formalised knowledge and toward embodied skills. The novice has to sit down and read a textbook in his chosen field; the expert does things right because he or she is guided by long-trained intuition. People at COGS knew the Dreyfus brothers' work well. This means that when they spoke about their intuitions this did not, for them, entail the risk of lapsing into "spiritual insight or perception".(41) To speak of someone's intuitions at COGS was a way of talking about that person's abilities as an expert. This became clear to me when some of the younger Ph.D. students spoke with admiration of the older researchers' experience and intuitions in running Genetic Algorithms.

Fiddling around with the parameters

In order to make a Genetic Algorithm produce results it is not enough to be able to write the program correctly. Once you have a GA, you also have a set of parameters or variables that need to be assigned their appropriate value. Let me give a brief, constructed, and somewhat simplified example, to show what I am talking about.(42)

It is common, in Genetic Algorithms, that when two gene strings are mixed in order to produce a "child", the mixing is deliberately made imperfect. Some of the genes will "mutate". If the offspring inherits, say, "0101" from one of its (or "his"?) parents, the code may end up as "0001" in the genes of the offspring after it has been through the mutation operator (a subroutine of the GA). This is one of the ways in which change is made possible in evolution. The offspring will be a copy, but not a perfect copy, of its parents. Inside the mutation operator is a random generator (another program). If this random generator is set so that one arbitrary bit out of a hundred mutates, then the mutation rate is 1%. The mutation rate is set by the programmer, and this can be difficult. If it is set too high, then the child generation may be too different from the parent generation, and there will be no continuity or evolution (there will only be arbitrary change). Neither will there be any evolution if the mutation rate is set too low. There will then only be continuity and not enough change. The trick is to set the mutation rate so there will be a balance between continuity and change.
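As an illustration only - the following is a minimal sketch in Python, not code from COGS - a mutation operator of this kind can be written in a few lines; the function name and the rates are my own assumptions:

import random

def mutate(genes, mutation_rate=0.01):
    # Flip each bit of the offspring's gene string independently with
    # probability mutation_rate (0.01 = "one arbitrary bit out of a hundred").
    return "".join(("1" if g == "0" else "0") if random.random() < mutation_rate else g
                   for g in genes)

# Example: the child inherits "0101"; after mutation it may come out as "0001".
print(mutate("0101", mutation_rate=0.25))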

The researcher also has to set many other parameters, for example population size and number of generations. These parameters interact with each other; if you have a large population then you can make do with fewer generations. But the parameters also interact with the external environment of the GA; if you increase the population size, then the program will use more computer resources, and (speaking metaphorically) you may get in trouble with the system manager. If you decrease the population size, to get the evolution going faster, you may find that the mutation rate, which you had also changed, is now too high for such a small population. With both lower population size and lower mutation rate, the overall change in the population is so slow that you need many more generations to produce results. But with more generations, the GA again needs more computer power and time, and you need to find an available Sparc 10 in order to get interesting results in time for the deadline of the next ALife conference.
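To make these trade-offs concrete, the following back-of-the-envelope calculation (a sketch of my own; every number in it is an assumption, not a value used at COGS) shows how population size, number of generations, and mutation rate translate into computing time and expected genetic change:

population_size  = 50       # within the 30-100 rule of thumb mentioned below
generations      = 2000
genome_length    = 100      # bits per individual
mutation_rate    = 0.01
seconds_per_eval = 0.5      # assumed cost of one fitness evaluation on a shared machine

total_evaluations   = population_size * generations
mutations_per_child = mutation_rate * genome_length
runtime_hours       = total_evaluations * seconds_per_eval / 3600

print(f"{total_evaluations} evaluations, "
      f"{mutations_per_child:.1f} expected mutations per child, "
      f"roughly {runtime_hours:.0f} hours of computer time")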

The story above shows that when people at COGS were running a program like a GA, many technical variables interacted in complicated (often non-linear(43)) ways, and that tuning them always had to be done within the context of limited time and resources. This tuning process was often described as fiddling around with the parameters. ALifers often stressed the importance of developing a certain intuition or feel for it. I asked Gregory if one could not systematise these intuitions. He answered that there was a lot of work going on to do just that. Many computer scientists study the Genetic Algorithm, not in order to produce artificial life, but simply to find out how this algorithm can be used. For example, they study the effect of different mutation rates. But, he continued, all these studies are restricted to specific contexts.(44) Some general rules of thumb existed. A suitable population size, for example, is normally somewhere between 30 and 100. The ALife researchers used these rules of thumb, but the more formalised knowledge was of little benefit, as the experiments they set up were designed for a special purpose (for example to study aggressive signalling among fish or to develop robot vision). Hence, their work most often did not fit the limited contexts of formalised knowledge on GAs, and non-formalised skills continued to be of great importance.

Having had this initial look at skills needed to run an ALife simulation, I will now focus my discussion still further. To be skilled is to have adapted to some context. This context often includes some items that we call "tools". Tools are means by which we relate to some task or object. When ALifers relate with intuition to their worlds-in-the-making, they treat some tools skilfully. The interface programs (and the related hardware, such as the computer screen) that I mentioned above - producing, as we will see, windows into worlds - are such tools. In order to understand the relations between the ALifer and his or her world-in-the-making, we need to understand the role of these interface programs. This, however, requires a general understanding of tools - as they connect the users of the tools with the objects on which the tools are used.

The mutual definition of skills and tools

Tools occupy a special position in relation to the acting subjects and the objects acted upon. Let the following, imagined example illustrate this. An old, experienced lumberjack chops down a tree. He does not have to pay attention to how to position himself or how to hold the axe, but can devote all his attention to how the axe meets the tree. He sees how every chop changes the cut and how the next chop should best be placed. He sees, feels, and smells the quality of the wood. What does it offer? Will the timber have more or less value? For what purpose can this timber best be used? How has this tree grown for the last 20 years? These are some of the considerations he may make. The axe itself, however, is, in a sense, "uninteresting" to him. It does not play an important part in his conscious attention. The same is true of his own bodily position and his grip on the axe. But then he stops chopping and turns to have a look at the axe. He examines the way in which the shaft is joined to the blade. Is it still tight? The axe is now the object of his attention. Is it well suited for chopping down this tree? The lumberjack, perhaps, decides yes, and the axe again disappears from his attention as he continues chopping.

For the novice, things are different. He has to focus all his attention on how to position himself and how to hold the axe. To him, the axe is not a tool that he can use in order to chop wood. It is a strange thing that needs all his concentration. His problem as a novice, we might say, is that the axe does not disappear from his attention as he starts chopping.

Generalising from this example we can make the following definition. An object becomes a tool by being handled skilfully. Objects are never tools in themselves, only in relation to a skilled user. We also see the opposite dependency: Logging skills are relative to the physical properties of axes (as well as to the type of wood and to forests). Hence skills and tools define each other mutually.

An axe, of course, continues - in the normal sense of the word "tool" - to be a tool when it lies on the ground and is not handled skilfully. In Heideggerian vocabulary (as it is laid out by Hubert Dreyfus, 1991, chapter 4), an axe on the ground is "available" to the lumberjack, whereas, for example, the blue sky only "occurs" to him. Most things have an aspect of both "occurrentness" and "availableness" (Dreyfus 1991:60); they are there to be seen, and they can be used for some purpose. To the skilled lumberjack the axe loses its occurrentness as it becomes "transparent" (Dreyfus 1991:64) in use. The novice can never totally rid himself of the occurrentness of the axe. In the following I am concerned with tools - we might call them "tools-in-use" - as they are to the user when they are transparent. In addition to these tools-in-use, we find "tools" that occur to a novice (but that have not yet become tools-in-use to him or her), and tools that are available to an expert, but not in use. Generally, we may therefore say that an object becomes a tool-in-use by becoming transparent to its user.

In the example above we see that an axe in use is more than a tool for acting, it is also a tool for perceiving. The lumberjack fells the tree and learns about it in the same action, using the same tool. Here the cybernetic input/output model of perception and action breaks down. The lumberjack does not first act and then perceive. The two phenomena are both aspects of the same interaction; the meeting between the person, the axe, and the wood. The world that is revealed to the subject by his action is perceived directly, in the action, not after it (see Gibson 1979).

In the interaction between the blade of the axe and the wood, the axe is, in Heideggerian terms, transparent to its user. In the vocabulary of Henrik Sinding-Larsen (1993) we might say that the perception front and the action front are at the cutting edge of the axe. Seen from the lumberjack's perspective, this is the point at which the interactions that make a subjective difference take place. Objects on the subject side of the perception and action fronts are transparent. The location of both fronts may change from one second to the next, for example, when the lumberjack changes his grip on the axe and thus pays attention to how he holds the axe rather than to how he hits the tree.

How does this apply to computer interfaces? They become precisely that - "faces between" - by being handled skilfully so that they become transparent to the user. The better I master the functions on my word processor - the keyboard, the mouse and the mouse-operated "buttons" on my screen, the keyboard shortcuts that make it easy to write in italics or bold - the more they become true interface functions that I can forget about in order to pay attention to what I really want to do: produce a text.

To make a computer program into an interface, to make it transparent, is, however, not only a matter of the user's skills. It is also a matter of making a good interface. Some programs - intended to be "interfaces" - easily become transparent, that is, they easily become interfaces-in-use. Other programs require a huge amount of specialised skills from the user, or may resist becoming transparent interfaces-in-use altogether. I now turn to look at what a good interface into an ALife simulation is, and at the skills it requires.

Interfaces into worlds in the making

ALife researchers stressed the importance both of having a good interface into ALife worlds, and of creating these interfaces early in the research process. This was also one of the lessons that Eric, a Ph.D. student at COGS, learned from the pilot project of his study.(45) Eric made a Genetic Algorithm that evolved backgammon players. These players played against each other. Those who won games had a better chance of reproducing.

Eric succeeds in getting the evolution going, but is confronted with a problem: The first generations are always quite hopeless backgammon players. Then there is an improvement, but the evolution does not seem to produce really good players. In addressing this problem, Eric is confronted with another problem. His program generates long lists of data about the players, their genetic set-up, their scores etc., but not a picture of a backgammon board where he can see the progress of a game. Eric observes that some strategies survive, while others die out. He can observe this by looking at lists of binary strings - the "genes" - that represent certain behaviours. Eric, however, knows backgammon from playing it, and in order to see what goes on in the evolution, he needs to see the moves of the players "live", not only as codes that generate the game. Making such a backgammon-board interface is a bit too much work for his initial project (partly because Eric did not come to Artificial Life from computer science, and had limited experience with programming). But he has learned the importance of interfaces. Starting on his major Ph.D. project, one of the first things he does is to learn to use a program known as an interface generator.

An interface generator is a program in which other programs can be viewed. It allows a programmer to make a graphical representation of his program in one or several windows while variables in the program are displayed as mouse-operated "buttons" or "sliders" on the screen. In short, it is a program that makes it easier to create interfaces for other programs. It is particularly useful in the development of other programs. Plate 1 (page 127) is an example of what an ALife simulation may look like when it is viewed through an interface generator. Some of the techniques needed in order to make such a window - for example how to draw a slider bar on the screen so that it looks like a real slider bar - have been developed by others. They have, in cybernetic jargon, become black boxed as pieces of programs in the interface generator.(46) The user of these black boxes need not worry about their internal workings. He or she can just feed them with certain data, and out comes the desired result.
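The following is a minimal sketch of the idea, written with Python's standard Tkinter toolkit rather than with the interface generators actually used at COGS (whose commands I do not attempt to reproduce); the parameter names and ranges are invented. It shows how a programmer can declare a few variables and have sliders and an "apply" button drawn without worrying about how the widgets themselves are produced:

import tkinter as tk

# name: (low, high, default, step) - all values are invented for illustration
params = {
    "mutation rate":   (0.0, 0.1, 0.01, 0.001),
    "population size": (10, 200, 50, 1),
}

root = tk.Tk()
root.title("toy interface generator")
sliders = {}
for name, (low, high, default, step) in params.items():
    scale = tk.Scale(root, label=name, from_=low, to=high,
                     resolution=step, orient="horizontal", length=250)
    scale.set(default)
    scale.pack(fill="x")
    sliders[name] = scale

def apply_settings():
    # In a real simulation this would start (or restart) a run with the chosen values.
    print({name: slider.get() for name, slider in sliders.items()})

tk.Button(root, text="apply", command=apply_settings).pack()
root.mainloop()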

Ph.D. student Jean-Pierre had got a bit further in his study than Eric. The way he designed his simulation, as he described it to me while we were sitting in front of his computer, illustrates both the usefulness of having a good interface and the mutual dependency of skills and tools.

First of all, Jean-Pierre has a graphical window. In this window we see what Jean-Pierre refers to as the simulation, that is, we see a two-dimensional world, a square in a bird's eye view. We see a number of small circles - animats - that move around. The simulation is made for studying aggressive behaviour, and on the screen we see animats that do things like, in Jean-Pierre's words, "dominate, squabble, fight....". In a taped interview I asked him about the importance of this window, and he answered:

That's very important. I mean, if you can't see the behaviour, because that's what we are after, ... we're after ... from simple kinds of mechanisms that each of these animats have, between them, as they progress over time, you get behaviours, you get aggression, you get various things that just emerge from it, and of course you have to have a good, clear display to be able to see these things, [...] you want to be able to see the movements. [...] Because it is all completely kind of subjectivist, you watch them and decide it's "aggression" or "fear", [...] I could make figures that prove they are doing certain things, [...] but if you see them on the screen you do get a feel for whether this is what you are looking for, ... (quotation marks were mimicked during the interview)

The ability to see behaviour as behaviour and not as figures was what Eric needed when he evolved backgammon players. The subjectivist argument of Jean-Pierre is a way of stressing the role of the observer. In other contexts Jean-Pierre (and others) also used this subjectivist argument to avoid the unjustified anthropomorphisms I mentioned in chapter 4. (That is, Jean-Pierre does not make any strong claims about his animats "having" feelings like fear or aggression. He is adopting a weak, relativistic position, making reservations about the literalness of the identifications.) In this chapter my point is that Jean-Pierre, when he recognises something as something, is stressing the importance of both the good graphical interface and the way you "get a feel for whether this is what you are looking for" - that is, his intuitions. Jean-Pierre then goes on to talk about his interface (see plate 1):

I used Pop [a programming language developed at COGS], because I had little time and Pop is a very easy language, very forgiving, so you can just throw things together, and then there is Prop Sheet [a kind of interface generator]. [Prop Sheet] is a screen where you can set up slider bars and buttons for all the variables, so for all variables of the system I had buttons and slider bars, so then I could spend days tweaking things a little bit, up and down, just to get it working properly. [...] That's very convenient. Just having a list of numbers is hard to play with. Actually, having sliders you get a feel for the position they are in visually, so you can move them up and down, and remember the setting not from the numbers, but from the position, roughly, where the sliders were.

[...] When you run the program, you get this big screen up with all these slider-bars, and buttons and things, and when you hit "go" with the mouse on the button [called "apply", plate 1], a graphic screen comes up beside it and runs the simulation. (my emphases)

Here we see a simulation's dependency on both tools and skills, interfaces and intuitions. The feel that Jean-Pierre has developed is an intuition for how his simulation works. This intuition is developed in interaction with a specific interface that Jean-Pierre has also developed (even if some of the techniques for making such an interface are already black boxed in a ready-made interface generator). Jean-Pierre's interface and intuitions enable him to pay attention to what he calls the simulation, and not to the interface as such. (Plate 2 shows the simulation as it is seen on a lap-top computer screen.) He does not need to concern himself with how the slider bars get drawn on his computer screen or with how his animats are displayed. But, with respect to his ability both to perceive and to act, he has his intuitions for this simulation as it is seen through the graphical display, the buttons and the slider bars. He knows (intuitively) how to "tweak things a little bit" by having "a feel" for the visual position of his virtual slider-bars. While the perception and action fronts of the lumberjack are at the interaction between the axe and the tree, Jean-Pierre's "fronts" are somewhere "inside" the computer, at the interaction between his skilfully handled interface and his simulation.

Some time later, when Jean-Pierre has his simulation running as he wants it to, he rewrites the simulation in another programming language, called C. C is faster than Pop, and he needs this speed to be able to add more functions (like a Genetic Algorithm) to his simulation. But he does not add a window with the old buttons and slider bars to his new program. This, he says, is because "[It is] not so necessary to play around with it that much anymore." The interface and intuitions that Jean-Pierre developed for setting the variables right are no longer needed, because these variables now have the values that make the simulation work. The old Pop simulation as a whole has now become black boxed as a reliable object that can be used in the new C simulation without Jean-Pierre having to worry about its internal workings. Jean-Pierre now has to play around with his new simulation, to get the Genetic Algorithm to work properly, etc. But that is another story. The previous interaction between a skilled user, tools, and an unstable simulation (in need of getting its parameters set) has been replaced by a stable object that works independently of that interaction. A simulation in the making - existing in interaction with Jean-Pierre's skills and tools - has become a ready-made object, fit to be seen by a large audience who need not handle these tools.

The experienced difference between The Simulation and the I

The relation between a researcher and his simulation can be divided into two "sub-relations": first, the relation between the researcher and his tools (or his interface), and second the relation between the interface and the object of attention (the simulation).

The first of these sub-relations can be further examined by asking the following question. What, or where, is the boundary of the self?

There are (among many possible responses) two radically opposed answers to this question. First, one might say that this boundary resides in some definite place, e.g. (following Descartes) the pineal gland deep inside our skulls. Our body, including our brain and our nervous system, is a totally mindless machine, an input and output device for our soul. This machine informs the soul (or the self) through the pineal gland (Descartes himself used "inform" about this interaction (Kirkebøen 1993b)). All other bodily interactions (internal or external) are purely physical. Selves can be informed, bodily matter only moved by force. The brain-situated, logical self of the Information Processing paradigm (or Good Old Fashioned AI) is related to this Cartesianism (see figure 2, page 40). The boundary of the self in the GOFAI case corresponds to the physical boundary of the brain (or some subsection of it).

In opposition to this Cartesian standpoint we find cybernetics, as voiced by, among others, Gregory Bateson. Sinding-Larsen (1993) notes that Bateson did not want to speak about selves at all. He replaced the notion of the self with a cybernetic notion of "mind". The mind always consists of the larger ecological (including social and technological) systems of which the individual organism is part. As a parallel to this position we find ALife-at-COGS's insistence on studying cognition as embodied and embedded (ecologically and socially).

One of the premises of this thesis (outlined in the introduction and in chapter 1) is an allegiance to post-Heideggerian philosophy. In chapter 1 I argued against giving a transcendental position to the subjects or the society of subjects. Implicitly, I have therefore already rejected Descartes' alternative (and by implication also the GOFAI alternative). But where does the radical opposition to Descartes' soul - Bateson's concept of "mind" - take us? Sinding-Larsen (1993) points out a problem with this position. When we pay attention to something (as we often do) then we experience that there exists an "I" that pays attention, and that the "something" to which we pay attention is "not-I". This difference seems to disappear in Bateson's ecological mind.

Sinding-Larsen proposes to call this something that is "not-I" alterity (from "alter", "other"). This alterity is "the otherness" to which we pay attention. The entity that pays attention is the subject (or the self). Moreover, the boundary between self and alterity lies at the subjectively experienced action and perception front.

This understanding includes the tool as part of the self. When an experienced lumberjack chops wood, then the "I" that acts and perceives extends from the lumberjack into the axe. The alterity to which the "I" relates is the tree.

This means that the interface that Jean-Pierre used - with intuitions (or feel) in order to perceive and act upon what he called the simulation - was a part of him as a subject. When Jean-Pierre spoke about what he did, he did not tell me how he held the mouse or how hard he typed on the keyboard. Nor did he tell me how he made things happen in the interface generator. He told me how he related to the simulation. He used his interface generator in order to manipulate and observe the movements of the animats. Jean-Pierre told me that "I could spend days tweaking things a bit up and down". He did not say that "I" hold the mouse, and the mouse directs an arrow on the screen that tweaks variables. Parts of his interface were thus also parts of the "I" that acted, perceived, and got "a feel for it". Which parts of the interface were included in the "I" at any given moment was, of course, variable, depending on how his attention moved as different variables were "tweaked". Eric (who made the backgammon simulation), however, played around with the interface generator in order to learn to handle this software. He showed me how he made one window with "buttons" and another window, a graphical window, where the animats could move. He also showed me a small program that made an animat move straight across the screen. But he did not, at that stage, study this movement. It was trivial and uninteresting. The point was simply to have something that moved so that he could learn to use the program that displayed this behaviour. For Eric, the Interface Generator was still alterity (even if some interface functions, like the mouse, the keyboard, and the screen had already become transparent tools-in-use for him), whereas Jean-Pierre had passed beyond this stage - the Interface Generator had become transparent, and alterity was now the simulation itself, with, as he said, "animats that dominate, squabble, fight...".

An interface, then, a "face between", may be said to lie "between" the human body and the object to be perceived and acted on, but it does not lie between self and alterity; it is a part of the self.

Now, let us turn to the other "sub-relation" of the larger relation between a researcher and his simulation, namely the relation between the interface and the simulation.

The first thing to note when turning to this relation is that when I discussed the relation between the user and the interface above, I could not avoid treating the relation between the interface and the simulation as well. When the interface has become part of the acting and perceiving self, then the simulation is alterity, it is the object to which the "I" relates.

The general implication here is that the boundary between the interface and the program "behind" the interface is fluid and dependent on the user. It is, as I have defined "interface", not a technical boundary between two pieces of a program, but a subjectively experienced boundary between self and world. It moves as the user changes his or her attention, and it depends on the skills and interests of the user.(47)

The second point to note is, as I also discussed above, that ALifers developed the interface early in the process of making a simulation, and that they stressed its importance. Simulations were made through interfaces; the latter were therefore often developed together with the former. The interface of an ALife simulation was often an integrated part of the simulation itself.

From these two points we see the intimate relation between the interface and the simulation. You cannot have one without the other, and you cannot decide unequivocally where the interface ends and the simulation starts. The interface, or tool - which is only analytically and heuristically distinguishable from the object of attention - is part of the acting subject. We may therefore conclude that when an ALife-world is in the making, the boundary between self and alterity can only be "subjectively" defined. The boundary is the actor's experienced and fluid action and perception front. Thus we cannot draw a clear-cut, "objective" boundary between self and alterity, as, for example, the skin of our bodies. There is no objective subject-object distinction.

Limits to thinking in terms of "inside" and "outside"

I would now like to discuss explicitly a point that has been implicit in much of what I have said above. We saw that Jean-Pierre, who mastered the interface program, was able - by both acting and perceiving - to relate to something that he called the simulation. This entity was made up of animats that chased each other, ran away, "squabbled and fought". I would like to consider the following question: Where is the simulation? I will approach an answer to this question by considering an example with which most of us have some familiarity; the writing of a text using a computer and a word processor. I explained above how I, when I knew intuitively how to handle the interface functions on my word processor, was able to concentrate on writing the text. In such a position, what is my alterity, the thing to which I relate? In some sense it is every single word on my screen, for example this word. But it is also the emerging meanings of this whole sentence, the paragraph, and eventually the whole thesis. The better I master the written, English language, the more I can pay attention to these meanings. Quite often, when I do not need to pay attention to, say, how to spell "anthropomorphism", these meanings are my alterity.

But where is this alterity, where are these meanings? Are they in the text file called "THESIS.DOC"? This would amount to saying that the meaning of a text is objectively present in the text. We all know that this is not the case. The meaning of a text depends in part on the reader. But could the opposite be true? Is the meaning to which I relate while writing, following Jean-Pierre, "totally subjectivist"? Is it all in the eyes of the beholder, "inside" the entity called "me"? If this were the case then the transparent interface-in-use which is a "window" into the alterity would have to be a window into something that goes on inside my own head (or body, or body + interface). "I" would end up being on both sides of the perception and action front. This makes the subjectivist position just as problematic as the objectivist position. My point is that the emerging meanings that my skilled use of this word processor allows me to express, are neither subjective nor objective. My skilled handling of the word processor creates a new being, a new "I" - made up of my body and my computer tools. It also constructs a new world to which this being relates. We might say, with Heidegger, that it creates a new meaningful "Dasein". Dreyfus introduces Dasein thus:

"Dasein" in colloquial German can mean "everyday human existence," and so Heidegger uses the term to refer to human being. But we are not to think of Dasein as a conscious subject. Many interpreters make just this mistake. They see Heidegger as an "existential phenomenologist", which means to them an edifying elaboration of Husserl. [...] Heidegger however, warns explicitly against thinking of Dasein as a Husserlian meaning-giving transcendental subject: "One of our first tasks will be to prove that if we posit an 'I' or subject that is primarily given, we shall completely miss the phenomenal Dasein".

In 1943 Heidegger was still trying to ward off the misunderstanding of Being and Time dictated by the Cartesian tradition. He reminds the reader that he was from the start concerned with being [how human beings are living in a world], and he then continues:

But how could this ... [concern] become an explicit question before every attempt had been made to liberate the determination of human nature from the concept of subjectivity. ... To characterize with a single term both the involvement of being in human nature and the essential relation of man to openness ("there") of being as such, the name of "being there [Dasein]" was chosen. ... Any attempt, therefore, to rethink Being and Time is thwarted as long as one is satisfied with the observation that, in this study, the term "being there" is used in place of "consciousness." (Dreyfus 1991:13)

I take "Dasein" to mean a human-being-in-a-world. It is to this being-in-a-world, neither the subject nor the object pole of it, that the meanings of a text belong.

In the imagined case of the lumberjack we saw something similar. The skilled handling of the axe made it possible for the lumberjack to relate to the tree. But what he felt, smelled, and saw using his axe was not simply a physical tree. It was a set of affordances (Gibson 1979) such as "good value timber", "fire wood", or perhaps "building materials". These affordances are neither purely subjective nor, I may add, purely Cultural. Nor are they objectively given as Natural. They emerge in the interaction between a skilled body, a tool, and a given object.

So, too, with the ALife simulation as a meaningful object. The squabbling animats are neither subjective nor objective. As Jean-Pierre rightly points out, the squabbling is meaningful behaviour that cannot be understood independently of the observer who recognises it. But it is also seen "through" skilled use of an interface. So it cannot be reduced to something "inside" the subject. The simulation, like a text, can be a meaning. It, too, can be part of an ever-emerging Dasein - made up of a skilled researcher, his tools and his alterity.

When the meaning that someone (writing a text or tuning a simulation) is paying attention to is not on the other side of the perception and action fronts, we have to rethink these latter concepts. When Jean-Pierre tells me about his feel, his interfaces and the simulation, he is talking about an I and a simulation (the latter is alterity). In such a state of mind there is a subject, an object, and a "front" between them (even if, in adjusting a simulation, this front lies somewhere "inside" the computer and cannot be physically tracked down). But, I suggest, when an ALifer is immersed in, say, squabbling and fighting animats and in his fiddling around with the parameters, as when I am immersed in my writing, then neither he nor I are self-aware, nor are we directing our attention toward an entity "out there". We are, in those moments, selfless and not intentional.

Many anthropologists have written about this selflessness. I have mentioned Bateson's attempts to describe it in cybernetic terms. Victor Turner's notion of "communitas" is perhaps the most famous anthropological attempt to grasp selflessness (Turner 1974). Communitas is opposed to structure, sometimes more specifically social structure. We can see expressions of communitas when a group of people express their unity (as opposed to their differences). When the ski-jump hill "Holmenkollbakken" in Oslo is packed with people with painted faces and waving Norwegian flags - and applauding a good jump even if the jumper is not Norwegian - then communitas is expressed. (Whereas the difference between the King's tribune and the communal tribune is an expression of social structure within that communitas.)

Anthropologists have noted that communitas is often expressed in rituals (Turner 1974, Kapferer 1984). I think the correspondence between communitas and rituals is related to the fact that both require a large amount of (embodied) expertise on the part of the participants. You cannot become part of either a communitas or a ritual by reading what to do from a textbook. Eugen Herrigel, who worked as a teacher in Japan in the 1930's, has described the dependencies between skills, rituals and selflessness (or communitas) in his book on his own practice of the highly ritualised Zen art of archery (Herrigel 1971). His six years of practice in this art consisted of a gradual removal of his attention and will. After long practice he was at last able to draw the bow and let go of the arrow without thinking consciously about it or even wanting it. Only at this point did he start aiming the arrow at the bull's eye of the target. But as he practised hitting the target he was repeatedly told not to try to hit it. He practised for years - partly by embedding the archery within a larger meditative ritual - at not being an "I" (including bow and arrow) that was in opposition to an alterity (the target).

Buddhism is one of the traditions in which one intentionally (by sitting down for meditation or starting a practice like archery) aims toward arriving at an unintentional state of mind. But Buddhist practices are not the only practices in which people actually are unintentional. Unintentionality and selflessness are, I think, the mark of all devoted, skilled practices and states of immersion. Robert Pirsig has explored the relation between Zen-selflessness and technical skills in his Zen and the Art of Motorcycle Maintenance (Pirsig 1974), and ALifer David Goldberg follows Pirsig in his paper Zen and the Art of Genetic Algorithms (Goldberg 1989). Contrary to Goldberg's suggestive title, I should note that I do not see the art of Genetic Algorithms (and ALife simulations in general) described in this chapter as "Zen" (that is, "meditative"); it is not necessarily an awareness of selflessness. When practised it is just selfless, a communitas between those elements that become "I" and "the simulation" when talked about afterwards.

Foreshadowing one of the themes of the next chapter, I would like to briefly draw attention towards the relation between ALife simulations, communitas, and performances. In chapter 2 I argued that the performativity of ALife needs to be seen as more than a "game pertaining [...] to efficiency" (Lyotard 1979:44). Rather, the presentation of a working simulation is something like a "cultural performance" (Turner 1981). A performance can be seen as a kind of ritual. It works when there is a communion between the artist (or the artistry) and the audience. This communion is achieved, in part, when there is a shared body of skills between the "artist" (an ALifer presenting a simulation at a conference) and the audience; they "know how difficult it is". ALifers meet at conferences in a communitas of skilled, intuitive computer engineering.


Conclusion: The emergence of subjects and objects

I cannot provide intersubjective proofs that a practising, skilled, and immersed ALifer is selfless and unintentional. What we have seen is, first, that when there is a self, tools-in-use are part of this self. In the case of Jean-Pierre the parts of his program that I called "interface" were part of the I that related to the simulation. Moreover, the simulation - seen as the alterity of the subject, as a physical process "out there", a running program - is only analytically and heuristically distinguishable from the interface, and hence the self. From this I concluded that if there is a subject (or self) opposed to an object (or alterity) when making an ALife simulation, the boundary between them is fluid, it varies with varying attentions and interests. We cannot define objectively where one ends and the other starts.

Second, we have seen that the simulation as something meaningful can neither be located in the subject nor in the object. It is part of Dasein.

Now, meanings are also part of Dasein when the simulation is a ready-made world, displayed on a large screen in a conference hall, and observed by a large audience. However, as the simulation becomes a ready-made product something important happens. We saw this when Jean-Pierre rewrote his simulation in the faster programming language C: The old Pop simulation became black boxed, and the interface and intuitions needed to fiddle around with it became redundant. As this happens, or to the degree that it happens, the simulation also loses its dependence on Jean-Pierre. It seems that its objectivity becomes clearer. In the same movement, the subjectivity of the observer is stabilised. The fluid condition disappears as the skill-dependent interface-in-use becomes redundant. Hence, the technoscientific distance of the witness is established. During the process of making an ALife simulation as here described, there is no clear distinction between subjects and objects; neither is stable. However, the result of the process is, or I should rather say may be, a stable Nature (of objects) and a stable Society (of subjects). Observing the "communitas of the conference", a communitas that includes the simulations, one may say: "there, behind the screen, is the stable (if artificial) Nature of objects, here, in the conference hall, is the ALife community."(48)

In chapter 1 I introduced Donna Haraway's notion of the cyborg, the messy mix of the machinic and the organic. The laboratory condition I have described here is such a cyborg. When ALife is in the making, the researchers and their machines are entwined as messy cyborgs. When ALife is ready-made, however, the researchers have become autonomous subjects and the machines have become autonomous objects with agency; they have become robots. Thus, the movement here described is one from a world of messy cyborgs to one of clearly separated human subjects and robots.

In the next chapter I turn to the ALife conference and the presentation of ready-made simulations. We will see how, and to what degree, these ready-made simulations fill the role of the Nature of technoscience - and hence, to what degree the ALifers occupy the role of the distanced witness of the scientific community.


Chapter 6: The Objectivity and Enchantment of Artificial Life


In chapter three of this thesis I discussed two opposing emic representations of what Artificial Life research is and ought to be. First, I discussed how ALifers at COGS understand their endeavour as, and want it to be, a real science. I then presented some alternative notions - held by ALifers at COGS - of what ALife is and may become. These latter notions were classified, both by me and by ALifers themselves, as postmodern understandings of ALife.

The present chapter is organised along the same division. But whereas chapter 3 is concerned with how ALifers at COGS talked about - or represented and normatively judged (Holy and Stuchlik 1983) - their enterprise, this chapter deals with a similar dichotomy in the practice of ALife. The arenas for the practices I have in mind are the international conferences on Artificial Life. In this chapter we will move out of the laboratories of COGS and into the larger ALife community. I will begin with a brief look at what such a conference is.

Between 200 and 400 people from Europe, Japan, and North America normally gather at an ALife conference. 50-60 reviewed papers are presented. The presentations are quite short, 20-25 minutes long, with 5 or 10 minutes set aside for discussion afterwards. The conferences may be organised into one session lasting 4 or 5 days, or be divided into two parallel sessions over a somewhat shorter period. A few prominent, invited speakers are given more time, both to give their talks and for the discussions afterwards. One afternoon is normally reserved for the "poster and demonstration session". In the entrance hall and the other rooms of the conference centre, robots, computers, and short papers ("posters") pinned on cardboard walls are exhibited.

The normal presentation at an ALife conference differs from that at social science conferences in one important respect: practically no one reads a manuscript. Talks are guided by reference to overhead transparencies or slides. Quite often video recordings of computer simulations or moving robots are shown on large screens. On these occasions there may be two large screens behind the researcher, one showing the computer simulation, another showing overhead slides. The speakers also use the slides as visual cues to guide themselves through what they want to say.

To a student of anthropology, with only limited training in computer science and math, such a conference can be quite a tough experience. People often discuss topics far beyond my understanding. This means that I missed a lot of the details of the content of what was said, content that was meaningful to the people involved, and that was important in order for them to further their comprehension of how to make and understand computers, robots, life, or cognition.

However, my lack of professional expertise had the advantage of making me more aware of the context of these presentations. Certain aspects of this context are the topic of this chapter. More specifically, the chapter deals with some specific "meanings" of the presentation of computer simulations. These meanings are not the actual content of the presentations (that is ALife proper), nor are they reflections of how general, cultural images appear in the simulations. The "meanings" I am looking for, and I am not sure if "meanings" is a good word, are those aspects of a presentation that defined or helped define the relationships between the audience and the presentation. How was a simulation presented so that it became a legitimate object of scientific study and the audience became distanced witnesses of this object? And how was it presented so that it became something more than merely a distanced object - possibly a piece of artistry or a product of skilled engineering - thus placing its inventor in the role of "closet artist clothed as engineer" (as the ALifer at COGS put it, see chapter 3), and the conference attendants in the position of enchanted spectators rather than trustworthy, distanced witnesses of technoscience?

We might say that the meanings I am talking about are the overall contexts of the presentations of computer simulations. I am looking for those contexts that made a simulation a good simulation. This chapter can thus be seen as the answer to the following questions: First, within the context of the conference, what computer simulations were "good simulations"? And second, how did these simulations define, or help define, the relationship between the presented simulation and the audience?

My first answer to both of these questions is that good computer simulations are simulations that are presented as objective worlds. I now turn to show and explain this.


Artificial Life as Science: The Objectivity of Artificial Worlds

In chapter 2 we saw that ALifers (at COGS) construct artificial worlds - simulations of social or ecological systems where several animats interact with each other and with a (simulated) physical environment - in order to be able to study life or cognition (or simulations of these phenomena) in machines. These worlds make it possible to study intelligence or cognition as a social or ecological phenomenon - as adaptive behaviour in relation to some context - within a technoscientific frame. We also saw that these created worlds allow the relations between an agent and its environment to be studied with objective distance. The relations that make up the adaptive behaviour are wholly external to the researcher. (See figure 3, page 50.) The artificial agent does not have to adapt to a context that includes the researcher, for example his ability to speak English. In the previous chapter we saw that this objective distance is a property of a well-tuned simulation. In the early stages of making a simulation, of tweaking it or fiddling around with the parameters, there is no such distance. The distance I am talking about here is therefore the distance of a ready-made, well-tuned simulation.

An important element that allows the relationships within a well-tuned simulation to be studied scientifically is the development or evolution in the simulated world. This evolution produces so-called emergent properties, properties that could not have been predicted beforehand. If there had been no emergent properties in the system, then the researcher would not have been able to read more out of the system than he himself had programmed into it. His or her science would thus have been tautological. There would have been nothing in the "data" that the running program produced that was not present in the premises of the designed program. As such, his science would have been more like mathematics than an experimental technoscience. (Good Old Fashioned AI was, as I argued in chapter 2, legitimated by logic rather than by the experiment. We might say that GOFAI was more like mathematics than experimental science.) But as we have seen, by studying interactions between objects that 1) are wholly external to the researcher, and 2) produce emergent phenomena, these objects (or agents or animats) become legitimate objects for a technoscientific study.

Making artificial worlds is a way to construct what ALifers see as an important aspect of life and cognition, namely some sort of autonomy. There is an inherent tension between the autonomy of an agent and its dependency on its environment (discussed by ALifer Margaret Boden in a lecture at COGS (Boden 1994)). Nevertheless, the idea of autonomy is important among ALifers; life is autopoietic (Maturana and Varela 1987), it creates itself; intelligent beings are autonomous agents. This autonomy is related to the practice of, as one ALifer put it, "getting the humans out of the loop", of making worlds in which animats will interact independently of human beings.

Here I argue that the relative autonomy of artificial life simulations is an important reason - by effect if not by intention - why the animats and their interactions could be "enrolled" by scientists as "allies" (Latour 1987) in scientific controversies. If we, staying within the juridical metaphor, think of the researcher not only as a witness of observed phenomena, but as an advocate defending a certain scientific position, then the proofs that he enrols for his case will have to be independent of himself. The proofs will have to have a certain autonomy in order to count as anything other than the scientist's own wishful thinking. (And, to complete the picture, when the allies he enrols are his own data, then he is a witness in his own case.) So, when ALifers produce autonomy in computers they do more than reproduce in computers what they see as an important aspect of life and cognition, they also produce legitimate scientific allies.

In the following I will look at the context that defines or helps define a phenomenon - or a person - as a legitimate ally. Let me begin with a familiar example. When I add "(Latour 1987)" or "(Lyngør 1990)" after a sentence I enrol a writer to give strength to my position. There is additional strength in the first reference: Many will know that (Latour 1987) is written by Bruno Latour, a Professor of sociology. But also the second reference works, even if no one has heard of Lyngør (which, in fact, is a small village in Norway). The reason why this reference works is not its content, but its form; (Name Year). This form has one important meaning. It tells us that the enrolled ally is a subject of the academic community. (That is why "(Lyngør 1990)" sounds quite strange to Norwegians.) The form is, as it is conventionally understood, subjectivising the ally, and the ally is given strength precisely because he or she is a member of the (respectable) Society of Subjects. Data are seldom enrolled using this form. Writing anthropologically about academics is one exception. For example, I give my data legitimacy by referring to the works of ALifers thus: "(Cliff 1990)" and "(Harvey 1993)". However, the data of ALife research, produced by well-tuned simulations, may be presented in other legitimate forms. These forms have other meanings. Notably, they objectify rather than subjectify. Some of the forms in which ALife simulations were presented had the effect of emphasising that they were actually worlds "out there".

I will call the first of these forms "everyday nature". This is a way of presenting simulations that makes them look, in a photo- or TV-realistic way (and like the linear perspective in the art of painting), like everyday nature, the three-dimensional space and the time that normally surround us.

Everyday nature

Everyday nature simulations have a foreground and a background, and they present us with seemingly solid objects - things or bodies that throw shadows and bump into each other. Plates 3 and 4 are examples of such presentations. These simulations were presented at the ALIFE IV conference in Boston. Both had been given a central position in the first, plenary session, after an introduction by Chris Langton. Sims' and Terzopoulos' papers are also reprinted in the ALife journal - because of their "overall quality and/or significance", as the chief editor writes (Langton 1994:iii). After people from COGS had seen Sims' work, they invited him to present his simulation at the Simulation of Adaptive Behaviour conference in Brighton later the same summer.

Sims' simulation was one of the most popular and well-received simulations during my fieldwork. People laughed during the presentation (recognising strikingly lifelike creatures that nevertheless did unusual things), clapped loudly afterwards, and talked favourably about it in the corridors and the dining room after the presentation. In the following I will therefore concentrate on this piece of work. Let me begin by having a look at what Langton calls the "truly astonishingly realistic physical [properties]" (Langton 1994:iii) - the "everyday nature" in my terms - of the simulation.

In Sims' simulation "arm-like" creatures evolve a morphology - a shape - and what Sims calls a control system or a brain. These creatures may have one or two arms with one or more joints. They live in three-dimensional space, and have to relate to (simulated) gravity, surface friction, and their own materiality; their own weight and inertia. Several techniques help to produce a realistic image: The ground is a bit lighter in the foreground than in the background and is equipped with rows of squares. This helps us see that the ground disappears into the horizon. The creatures have lighter and darker sides and throw shadows on the ground. The presentation of the simulation was in the form of a video, played on a large screen. In selected video clips the box-creatures moved in real time (the initial computer generation of these "films" - involving enormous amounts of computations - may have taken considerably longer), and although their shapes may seem strange from the pictures in this thesis, their movements were strikingly lifelike and familiar. During the presentation of the simulation the observer's point of view moved. The observer's "eyes" were like a camera, zooming in and following the animats.

ALifers are not the only scientists to present scientific facts or results in this realistic way. Plate 5 shows a computer-generated picture of a large molecule (the AIDS virus). The computer programs for making such three-dimensional pictures are well developed and are used quite commonly. I had a talk with a computer expert at a bio-technology lab at the University of Oslo. He showed me how molecules could be seen in different modes, showing their primary or secondary structures, and how they could be turned around in virtual space to enable the researcher to see them from different perspectives. He emphasised that these graphical presentations are absolutely necessary for bio-engineers if they are to do the desired kinds of molecular analysis. Here, however, I am not concerned with the practical necessity of such presentations but with their rhetorical power.

As with Sims' simulation, these pictures give molecules a quality of convincing realism by objectifying them. The computer expert recognised that this objectification was a matter of symbolic communication. He talked about the "conventional style" of these pictures, e.g. their black background and their atoms pictured as balls of different colours. But we should also note that the biological textbook from which plate 5 is taken does not say anything about the symbolic realism of the picture. My point here is not to say that the AIDS virus is a "social construction". There are surely aspects of this picture (such as the structure of the virus? [what do I know]) that are given by something external to society. The point is that the photographic realism of the picture is, first, a conventional construction (this is not what a molecule actually "looks like"), and, second, a construction that suggests that it is not a construction at all, but rather a photographic representation of nature.

To a young student of biology the photographic representation of the molecule may be taken to be a photograph of a molecule. Few ALifers would be "fooled" by Sims' realistic simulation (and Sims did not attempt to fool anybody). ALifers knew that the realism of the simulation (in addition to the rest of the simulation) was a construction, because they made, or knew how to make, such images themselves. However, in effect, the result of such realism is to give the impression that the simulation is something "out there".

Here I should add that when I asked people (and I mostly asked people from COGS) why they liked Sims' simulation, they did not stress the realistic form of the presentation. They stressed an aspect of its content, namely, the co-evolution of the species within it.

There are at least two reasons why the concept of co-evolution is popular among ALifers (especially at COGS, but also elsewhere). First, co-evolution fits into the relativistic and holistic philosophy of ALife at COGS (see chapter 2). Co-evolution is a process that involves interaction between a population and its environment. (This environment consists of other species; hence you get co-evolution.) Second, (artificial) co-evolution makes the ALife system more autonomous. The evolving population does not optimise its performance relative to some task specified by the programmer (a process Genetic Algorithm programmers call optimisation). Rather, the task is specified by another population of artificial creatures.

Sims' simulation had an element of optimisation; Sims had decided that the box-creatures were going to catch the green box. But it also had an important element of co-evolution, because the difficult task was not just to catch the green box, but to catch it in the presence of another box-catcher. Hence, the co-evolutionary process helped, as the previously quoted ALifer said, "to get the human out of the loop", to produce autonomy. Once Sims had set the context, the various species of box-catchers co-evolved in each other's presence. This co-evolution ascribed objectivity (through autonomy) to the content of the simulation, not only to its form.
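To make the contrast between the two processes concrete, the following is a minimal sketch in Python - an illustration of the general technique, not Sims' actual code. In plain optimisation the programmer supplies the fitness function; in co-evolution a genome's fitness is the outcome of contests against members of another evolving population, so the "task" shifts as the opponents evolve. The genome encoding, the contest function and all numbers here are illustrative assumptions.

    import random

    GENOME_LENGTH = 10

    def random_genome():
        return [random.random() for _ in range(GENOME_LENGTH)]

    def mutate(genome, rate=0.1):
        return [g + random.gauss(0, rate) for g in genome]

    def evolve(population, fitness_of):
        # One generation: score everyone, keep the better half, refill with mutated copies.
        ranked = sorted(population, key=fitness_of, reverse=True)
        survivors = ranked[:len(ranked) // 2]
        children = [mutate(random.choice(survivors)) for _ in survivors]
        return survivors + children

    # (1) Optimisation: the task is fixed in advance by the programmer, e.g.
    #     population = evolve(population, programmer_defined_fitness)
    def programmer_defined_fitness(genome):
        return -sum((g - 0.5) ** 2 for g in genome)   # an arbitrary, hand-chosen target

    # (2) Co-evolution: a genome's fitness is how well it does against the *other* population.
    def contest(genome, opponent):
        # A stand-in for "catching the green box before the other box-catcher does".
        return sum(genome) - sum(opponent)

    population_a = [random_genome() for _ in range(20)]
    population_b = [random_genome() for _ in range(20)]

    for generation in range(50):
        population_a = evolve(population_a, lambda g: sum(contest(g, o) for o in population_b))
        population_b = evolve(population_b, lambda g: sum(contest(g, o) for o in population_a))

The point of the sketch is only that in case (2) no criterion of success exists outside the two populations themselves; what counts as a good box-catcher is defined by the other, co-evolving box-catchers.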

I will get back to Sims' simulation later, but first I will look at another form in which simulations were presented. This form, or context, or frame, also had the effect of ascribing to the simulation a certain "out there-ness". I have called this form "scientific nature".

Scientific nature

"The outstanding fact that colors every other field of [the] age of the Newtonian world is the overwhelming success of the mathematical interpretation of nature. We have seen how Galileo found that he could explain and predict motion by applying the language of mathematics to the book of nature, and how Descartes generalized from his method and its success a universal principle of scientific investigation." (Randall 1976 [1926]:255)

Mathematics - statistics, numbers, and quantities - has not become less important since the days of Galileo, Descartes, or Randall. It is still central in all technoscience, even if statistics have often replaced the precise equations of Newtonian physics. Quantum physicists claim that it is futile to speculate about the spatial geometry of the thing "out there". A quantum particle is a quantity, not a shape. The quantum world is, in a sense, a world of pure mathematics (mathematics that works in technological contexts, I should add).(49)

Figure 10 A "world" of quantities in a graph

(Lindgren 1992:302), reprinted by permission of Addison-Wesley Publishers Company.

Often the "book of Artificial Nature" is also written in the language of mathematics. ALifers frequently present the facts of their artificial worlds as quantities, and they present aspects of their machines by using mathematical equations. In the following I will take a closer look at how quantities describing artificial worlds were presented. One important means in such presentations was the Cartesian co-ordinate scheme. In figure 10 we see an example of how quantities that describe a simulation are presented in a graph. Time is plotted along a horizontal axis, and the percentages of each of the evolving species are plotted vertically. (The upper horizontal line of the graph shows "100%". If one species hit it, all the others would have become extinct.) The different "species" are digital "gene strings" that code for different strategies for playing a version of the Prisoner's Dilemma(50). They play against each other, and, through natural selection, evolve their strategies in response to the other species' strategies. The details of the Prisoner's Dilemma and its use in Lindgren's simulation are not relevant here. The point is that graphs such as the one in figure 10 are one common frame in which ALife simulations are seen. In Lindgren's paper, as in the papers of many others, the Cartesian graph is the only way in which we see the simulation. The graph is not separated from the simulation as, say, the map from the terrain. The graph - as the audience at a conference or a reader of a journal see the ready-made product - is the simulation.

However, graphs - in ALife or elsewhere - normally refer to something. In ALife this reference is double; first, graphs refer to the running of a particular simulation, and, second, the simulation results are often used to say something about Real Life. In chapter 4, I discussed the latter type of reference as an instance of metaphor and identity association. Here I will discuss the first type of reference, namely how quantitative facts - which Latour and Woolgar have called inscriptions (Latour and Woolgar 1979) - are inscriptions of something: the simulation.

The "something" that graphs and quantities refer to is, in the natural sciences, generally understood as Nature. The magazine Nature is not about the quantitative facts it presents. It is about "nature" - "out there". "Nature", the referent of the natural sciences, is seen through Cartesian co-ordinate schemes. Hence, Cartesian graphs act as "windows" onto something beyond themselves. This "something" is described in a language which is, typically, freed of subjective interpretation, it is the "language of mathematics", the language of statistically significant differences and countable objects and events. Hence, the "something" which graphs are windows onto is not subjective, it is objective. Graphs objectify.

The two general forms of presentation that I have referred to above as everyday nature and scientific nature are related in important ways. Their relatedness can be seen in ALife presentations which blur the distinction between them. In all ALife simulations the "world" is, technically, either a two-dimensional plane with specific X and Y extensions or a three-dimensional box with X, Y, and Z dimensions.(51) In many simulations this quantitative, mathematical aspect of the simulation is more visible than in the photo-realistic simulation of Sims (where it becomes invisible because of the high resolution). In these cases the simulated worlds look like both a Cartesian graph and a simplistic everyday world. They may be presented so that it looks as if the animats are moving around in a Cartesian graph. (See plate 2.) In effect, a simulation such as the one in plate 2 is an example of what everyday nature and scientific nature have in common; they present worlds beyond windows. They both give the viewer the impression of looking into something: a world which the audience themselves are outside of.

Simulations were, as is common in computer science, often talked about as windows. These windows could be opened and closed, they were presented on computer screens, as photographs (in papers), or projected onto the large screens of the conference hall. Let us have a look at the window as a general context of ALife worlds.

Windows and television

A first parallel between the ALife practice of seeing worlds beyond computer windows and related, more general practices may be seen in ordinary windows: flat, two-dimensional planes, with an X and a Y dimension, that allow a view from within our homes out into the external world. Windows often act as border planes between our subjectively familiar, private sphere and the public sphere. We see the world outside our homes through square planes of glass that, like computer screens, give us a view of this world (and as little as possible of its smells, sounds, winds, or temperatures). Stefan Helmreich made me aware of how ALifers giving talks at conferences often pointed at a window in the lecture room to illustrate that they were speaking about Real Life as opposed to the worlds inside their computers. So it seems that ALifers have some embodied knowledge of windows as borders to the world "out there". Having seen this gesture several times at ALife conferences, I was amused, on coming home to Norway, to see, at one of the first lectures I attended, the speaking anthropologist illustrate "verdenen der ute" ("the world out there" - outside the anthropological community or discourse) by pointing at the window.

The second social practice and technical device of importance to this discussion is television. The television screen is - in a sense - a concrete, Cartesian co-ordinate scheme with a finite extension in the X and Y dimensions (a normal colour TV-screen is made up of something like 1200 x 1000 pixels). This screen brings into our homes the world "out there" - pictured realistically as everyday nature. Television, like the simulation in plate 2, mixes everyday nature and scientific nature by projecting one onto the other. This means that there already exists an institution, embodied both in the habits of watching television, and in the hardware of TV screens, where worlds appear to us visually (with no smell or taste) in two-dimensional quasi-Cartesian co-ordinate schemes.

The "window" into a computer is a television screen. Thus, ALifers reproduce in another context the notion that there are worlds - and a practice of seeing worlds - beyond screens. Stefan Helmreich also comments on this: "Worlds begin when machines are turned on", he writes, "when light flickers forth from the computer screen." (Helmreich 1995:163) Thus, seeing worlds beyond Cartesian television screens is not a radical move for ALifers.

Distance

When results from the running of a simulation are blown up on large screens and presented to the attendees of a conference, witnesses are multiplied. A larger part of the ALife community can observe, with their own eyes, yet at a safe distance from the conference-hall projector, the phenomena produced by the experiment. When these results are presented either as realistic, graphical pictures or as quantities plotted on graphs, the distance is further established. These interface forms illustrate that they refer to a world "in there", and they suggest that this world is as objective as our everyday nature and the quantitative nature of experimental science. We may take this argument a bit further and ask: does the fact that these interfaces are illustrations also mean that they create illusions? The answer is probably both yes and no. In chapter 5 we saw the quite intimate relation between ALifers and the simulation-in-the-making. This "intimacy" was given by the process of "fiddling around with the parameters", a process that required a set of intuitions and interfaces that were well tuned to each other (as skills and tools they defined each other mutually). Much of this intimacy is left behind in the laboratory when the simulation leaves for the "larger world". At an ALife conference or in the journals we are often presented with images like the ones we have seen so far in this chapter, but without the interfaces or intuitions used to tune the simulations. We see simulations in the form of everyday nature or scientific nature, but without signs of the intimacy required in the laboratory processes. The ready-made simulation now works independently of this intimacy.

However, I will argue that at least one kind of "intimacy" still remains when the simulations are presented at conferences, namely the tight relation between the interface - the program used for seeing the world - and the world, or program, to be seen. As I pointed out above in connection with the Cartesian graphs, the only way to see the simulation is through the interface. This interface, as we saw in chapter 5, is developed together with the simulation. The distinction between the interface and the "thing" it is an interface into is blurred. This means that if we think of Cartesian graphs and graphical realisms as signs that signify something, as illustrations that illustrate, then the sign and the signified are two aspects of the same thing, the same program. In a sense, then, the illustration of "out-there-ness" can be seen as an illusion, because the thing "out there" is an integral part of the illustration itself. On the other hand it may not be an illusion, because, to take Sims' simulation as an example, the co-evolution between box-catchers resulting in morphologies and behaviours that Sims had never thought of did in fact occur, and would have occurred even if Sims had chosen a more simplistic and less realistic interface. But then again, remembering the case of Jean-Pierre tuning his simulation (chapter 5), it is important to keep in mind that unless Sims had had a quite realistic interface, in which he could study the movements of the animats as movements in time and space, he would never have had the opportunity to see all the times the evolutionary process did not lead anywhere; he would not have been able to tune it properly, and the simulation would never have produced anything of interest.

The interface forms presented above - "everyday nature" and "scientific nature" - make invisible both the blurred boundary between sign and signified, and the intimate relation between the researcher and the simulation during the construction process. Thus they have the effect of purifying the relations between the observing ALifers and the worlds observed. They help to make the ALife data into proper technoscientific data, and hence into legitimate allies. This legitimacy is, unlike the reference "(Latour 1987)", not given by telling us that the ally is a respectable member of society, but by telling us that it is a member of objective reality.

I have here described the purification of ALife simulations into objective reality as an aspect of how simulations are presented at conferences. I have only limited knowledge of the degree to which this effect was intended. Some researchers showed that they knew that graphs and maths could make a message look "scientific" by using these means rhetorically. For example, graphs showing data could be shown briefly at conferences with the shared understanding between audience and speaker that no one in the audience would have time to understand their content. "And yes, here are more results", one researcher said with a smile, showing us 10 overhead slides with Cartesian graphs in 10 seconds. Recognising what we might call "the graphness" of the graph, its very quality of being a graph, we got a glimpse of some technoscientific "results".

Many researchers, however, presented their scientific results as, to put it a bit crudely, "dead serious science". This, it seemed to me, was particularly the case when the researchers were young and when English was not their native language. With little experience in giving talks to large audiences, and having to speak a language that they did not master (English), many were, understandably, quite nervous. But in their nervousness they revealed something important. They tried hard to give a good talk, and this "good talk" was almost always a typically "serious scientific talk". The serious science consisted in ignoring their own uncertainty and their own engineering of the results, by presenting their results as something they had found, often giving excessively detailed, technical descriptions of their worlds, robots or animats, and displaying their graphs and graphical presentations as something that pointed to these findings. The audience was invited to follow the researcher as distanced witnesses of these findings. In these talks what we might call "playfulness" and "subjectivity" were played down; "objectivity", "technicality" and "soberness" were emphasised. The purification of language, for example the correction of a "he-animat" into an "it-animat", was often made by these researchers.

Having given this picture of how ALife simulations were presented as objective realities, it is time to complicate the picture by telling an opposite story. I will now discuss how ALife was something more than just "serious science".


Artificial Life as Art: The Technology of Enchantment

Most conferences on artificial life have one demonstration or talk which is labelled art and not science. This label is specified by the person giving the talk or demonstration, who also stresses that he is an artist and not a scientist. During these presentations some computer simulations or robots are presented which do not look very different from other ALife machines. On a couple of occasions, the speaker has told us that these machines are not meant to be an experiment designed to explain anything, but rather an experience to be enjoyed.

By making the distinction between experience and experiment, these presentations show the significance of the difference between the English terms "art" and "science". One of the founders of "cultural studies", Raymond Williams, has traced the etymological origins of these words. He is worth quoting at length:

...science, in late 18th century, still meant primarily methodical and theoretical demonstration, and its specialization to particular studies had not yet occurred. The distinction between experience and experiment, however, was a sign of a larger change. Experience could be specialized in two directions: towards practical or customary knowledge, and towards inner (subjective) knowledge as distinct from external (objective). Each of these senses was already present in experience, but the distinction of experiment - an arranged methodological observation of an event - allowed new specializing emphasis in experience also. Changes in ideas of nature encouraged the further specialisation of ideas of method and demonstration towards the "external world", and the conditions for the emergence of science as the theoretical and methodological study of nature were then complete. Theory and method applied to other kinds of experience (one area was ... feeling and the inner life, now acquiring its new specialised association with art,) could then be marked off as not science but something else. (Williams, 1976:233-234)

The term "art", Williams writes, may mean "any kind of skill" (1976:32), but it has from the late 18th century come to be associated with creative and imaginative endeavours. And, as Williams writes, "there was an early regular contrast between art and nature: that is, between the product of human skill and the product of some inherent quality." (1976:34) This coincides well with how art was presented at ALife conferences. The artist did not claim to discover "some inherent quality (of nature)" (ibid.), but presented his machines, to a large degree, as "the product of human skill" (ibid.).

The distinction between art and science is a typical example of the modern purification. Art is concerned with experiences, with creativity and imagination, in short, with the subjective side of our existence. Science is concerned with the experimental quest for the inherent qualities of nature, in short, with the objective side of things.

However, the way art, as opposed to science, was presented at ALife conferences did not reproduce this distinction faithfully, but rather blurred it. To see how this blurring worked I will first present a case where the distinction was not blurred. As part of the program of an international conference on Artificial Intelligence (not Artificial Life) in Chambery, France, 1993,(52) an artistic performance was scheduled. It took place in the largest lecturing hall, but was clearly identified as an artistic event. It started at 8 o'clock in the evening, after the audience had gone home to their hotel rooms and dressed for the social events of the evening, and the artist, playing "four handed" on a shiny, black piano hooked up to a computer(53), wore a tuxedo. The event, even if it was related to AI (the computer improvised around themes that the artist first played), was nevertheless set apart from the scientific sessions that had ended a couple of hours earlier. It was clearly social and/or artistic, and not scientific.

Compare this event with the following one.

One of the regular talks at the ALIFE IV conference in Boston 1994 was called "Explorations in The Emergence of Morphology and Locomotion Behaviour in Animated Characters". Neither from this title nor from anything else could I tell that it was not a normal talk. But when the talk starts, the speaker tells us that we are about to see an "artist's use of artificial life techniques", and that the "stars of his show" are a group of animats that he calls biomorphs (Sims' animats (plate 4) are examples of biomorphs; they have an evolved, biology-like morphology or form). His simulated biomorphs have some constraints: They have to have a head, and some of them have evolved with a "fitness pressure for head height" (Ventrella, 1994), which means that those who walk with their head up high are rewarded. These biomorphs tend to have a human look; they are tall and walk upright. The artist calls his simulation "expressive motion art" (Ventrella, 1994).

This artistic presentation is also represented by a paper - in form a quite conventional scientific paper - published as part of the conference proceedings. In this paper we read:

... I have [...] shown that adding secondary fitness terms pertaining to motions and positioning of the head can contribute to the emergence of familiar animal forms and motions, as well as some unfamiliar (but funny) characters.

This paper offers an artist's use of artificial life techniques and concepts as applied to an expressive medium - character animation. In the Disney tradition, animation is the illusion of life. In adopting bottom-up, emergence methodologies, character animation research adds to this the simulation of life. The explorations described in this paper are an example of taking this approach towards enriching the art form. (Ventrella, 1994:441)
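The phrase "secondary fitness terms" can be unpacked with a minimal sketch in Python. The function below is an illustrative assumption, not Ventrella's actual code: the primary score (here, the distance a creature manages to walk) is supplemented by a term rewarding a high average head position, so that head-up, human-looking walkers tend to be selected.

    def fitness(distance_travelled, head_heights, head_weight=0.5):
        # Primary term: how far the creature walked; secondary term: how high,
        # on average, it kept its head while walking.
        average_head_height = sum(head_heights) / len(head_heights)
        return distance_travelled + head_weight * average_head_height

    # A creature that walks 10 units while keeping its head about 0.9 units up:
    print(fitness(10.0, [0.8, 0.9, 1.0]))   # 10.45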

Whether this is "art" or "science" is not entirely clear. It is presented as art and by an artist, but it is also presented as part of the scientific program of an ALife conference, and as a quite conventional scientific paper, with subheadings such as Abstract, Introduction, Conclusion, and References (Ventrella 1994). It blurs the distinction between "art" and "science", the domains of subjective expressions and experiences, and of objective experiments.

During the second European Conference on Artificial Life, ECAL '93, there was an art presentation that also worked with the difference between the subjective and the objective, but differently from Ventrella. This presentation changed the role of the conference attendees from that of distanced witnesses to that of participating observers. The Dutch artist or electrical engineer Felix Hess (he called himself both) showed us 25 small robots that he compared to frogs. These "frogs" could produce a sound - they "quacked" - and they reacted to this "quacking" with certain movements and with new "quacking". During the presentation the robots moved around on an enclosed section of the floor, "quacking" to each other and moving in relation to each other. Hess describes the motivation for his work thus:

Enchanted by the frogs of Australia, I acquired sound recognition equipment and managed to capture many frogs on stereo tape. However, listening to a recording to this is not the same as listening to live frogs, obviously. A live frog chorus is interactive, it is sensitive to the circumstances and to the behaviour of the listener. This live, interactive quality is lost in a recording. (Hess 1993:453)

Hess recreated this interactivity by letting his robot-frogs react to loud sounds, such as a human voice. When people spoke or made other noises, the robots, like frogs in the night, sat still and became quiet. In order to observe any interactions between the "frogs", the human audience had to tiptoe and whisper to each other. This made the atmosphere in the room very different from that of a conventional ALife presentation. Sound became important, our own sound. We, the audience, could not distantly watch worlds behind computer screens; we were - with our noisy bodies - involved in the production of the interaction between the robots. Hess concludes his paper:

Through actually building machines such as the "sound creatures" one can get a "feel" for the relationship between sensitivity and intelligence. This work has only increased my respect for the frogs, who taught me to sit still in silence and listen. (Hess 1993:457)

This kind of involvement on the part of the audience was also emphasised in the World Wide Web announcement of an ALife conference in Japan. In the "First Call for Papers. Artificial Life V"(54), we could read, under the heading "ALife-related Events": "In virtual reality or music or art events, you will be able to experience the ALife-related world, not just as a looker/listener, but as a more embedded ALifer."

In this announcement, to be an embedded human is related to art events (and, by implication, not to "scientific events"). Hence, it may be said to stress that subjective, human things are "art" and that objective, distanced, non-human things are "science". However, I think there are important aspects of the artistic presentations and events of ALife conferences that break with such a clear-cut definition. First, these art events are integrated parts of the scientific program at the conferences. They are not merely evening entertainment. Second, the content of the events is often similar to the content of regular, scientific presentations. This meant that I, as a conference attendee, was not only surprised, when attending a "scientific" talk, to find the "scientist" calling himself an artist, but also that, when attending a talk that was presented as science, I was entertained by a show that clearly was doing more than just presenting distanced, scientific results. The presentation of the strikingly realistic fish (plate 3) in Terzopoulos' talk, for example, was accompanied by the soundtrack of a man speaking English with a parodied French accent. I did not understand this reference, but many others did, and laughed. It was, I learned later, a parody of a famous French scientist and diver, Jacques Cousteau, who has made more than 120 natural history documentaries, particularly making use of underwater filming (Gibbs 1996). The simulation did not only simulate Real Life; it simulated Real Life as it is seen in TV-documentaries. But instead of having someone "really" commenting on the simulation, we got a parody of Cousteau's voice commenting on the simulation, telling us that the simulation was not really Real Life, it was an (artistic) parody of Real Life.

This means that there was an artistic aspect in many ALife presentations, not only in those labelled art. I will now turn to this aspect of ALife presentations. By talking about this artistic aspect, the term "art" has become "mine" and not "theirs". It is, from now on, my analytical term, not the ALifers' own labelling of their practice. The way I will use it is not in the sense of "fine arts", but more in the sense of, as Williams wrote, "any kind of skill", and, I may add, any kind of creative agency.

The Technology of Enchantment and The Enchantment of Technology

In thinking about the performative aspects of ALife presentations I am particularly inspired by Alfred Gell's discussion of the relation between the maker and the made - the artist and the art - in his article The Technology of Enchantment and The Enchantment of Technology (1992). Before looking closer at the artistic aspect of ALife I will present Gell's line of thought.

Gell first discusses how applying the term "art" anthropologically may be a bit tricky: Western philosophers and others have used the term normatively to distinguish a particular domain of "culture" from something of less value. The philosopher T. W. Adorno, to take a radical example, makes a clear distinction between, on the one hand, art as a Good (European) Thing and, on the other hand, entertainment - including jazz and Hollywood - as a Bad (American) Thing. Relative to such normative schemes, Alfred Gell argues in his paper for what he calls a "methodological philistinism" in the study of art. In ways similar to how religion can be studied anthropologically, by letting go of assumptions about any absolute truth or falsehood of religious dogmas, etc., art can be studied anthropologically by not passing judgement on its Truth or Beauty, that is, by being a bit aesthetically ignorant, a bit of a "philistine". In broad outlines, Gell's main argument is as follows.

In explaining the role of art objects (and Gell focuses especially on the kind of art that produces an artistic object), whether they are emically recognised as "art" or seen as art by an outsider, we have to understand why these objects are cherished, why they are valuable. Gell claims that art objects achieve their particular value not by what they are in themselves, but by how they have become what they are. The agency of the artist is crucial. One of Gell's examples is a painting known as Old Time Letter Rack. It is an enormously detailed and naturalistic oil painting of a letter rack, "complete", as Gell writes, "with artfully rendered drawing pins and faded criss-cross ribbons, letters with still-legible, addressed envelopes to which lifelike postage stamps adhere, newspaper cuttings, books, a quill, a piece of string, and so on." (Gell 1992:49) This picture, Gell continues, is

... as beloved now as it ever was, and has actually gained prestige, not lost it, with the advent of photography, for now it is possible to see just how photographically real it is, and the more remarkable for that. If it was, in fact, a colour photograph of a letter rack, nobody would give tuppence for it. But just because it is a painting, one which looks as real as a photograph, it is a famous work, which, if popular votes counted in assigning value to paintings, would be worth a warehouse full of Picassos and Matisses." (1992:49)

So why is this piece of art so popular? Because, Gell answers, it is made by a human being, an artist. The artist has transformed "oily pigments into cloth, metal, paper, and feather." (1992:49)

Gell calls the transformation of something into an art object a magical process. The artist is a bit of a magician, possessing unique skills of transformation, and the magic of the making is objectified into the art object itself. Gell writes about Old Time Letter Rack:

"The magic exerted over the beholder by this picture is a reflection of the magic which is exerted inside the picture, the technical miracle which achieves the transubstantiation of oily pigments into cloth, metal paper, and feather. This technical miracle must be distinguished from a merely mysterious process: it is miraculous because it is achieved both by human agency but at the same time by an agency which transcends the normal sense of self-possession of the spectator." (p. 49)

Gell's point, as I understand him, is that the value of an art object to a large degree stems from the - to the observer - magical process of making the object. This magic need not entail the belief in any super-natural beings or qualities. The point is that it is "super-rational" because the observer feels that "I can't really understand how it is done".

However, to be able to respect such an artistic agency one does not have to be altogether unfamiliar with the techniques of the artist. On the contrary, some familiarity with the artist's work is necessary. Gell tells us about his own childhood fascination with a matchstick model of Salisbury Cathedral. The model, he tells us, enchanted him because he had some experience with building matchstick models, but far from enough experience to be able to build a cathedral. By, on the one hand, having some knowledge of, and some familiarity with, the skills of the artist, the young Gell was able to appreciate the matchstick model as a piece of art. On the other hand, by not having sufficient skills in matchstick model-building to be able to build a cathedral, Gell was also enchanted by its magic. Being skilled in some kind of human practice leads to an increased appreciation of others' skills because the observer him- or herself "knows how difficult it is". We might say that the artist (through his or her art object) and the audience of a performance meet in a communitas of shared familiarity with the creation process.

The enchantment of "High Tech"

When the conference attendees, the audience, clapped vigorously after Karl Sims had presented his simulation, they credited him for his work and his skills. They were able to feel and to show this respect because they themselves made computer simulations, or at least were aware of the difficulties involved: the insight, skills, intuitions, and creativity needed to program the simulation and get it up and running. Sims did something no other ALifer had thought of or mastered before him.

I think, however, that the audience was impressed by more than just Sims' skills. Sims is employed at Thinking Machines Corporation, a firm that makes some very fast computers known as "Connection Machines®"(55). (These machines are made up of many - often several thousand - small computers operating in parallel; this is known as "massively parallel computing".) Sims' simulation had been running on such a computer. Hence, it demonstrated the capacities of the latest in high technology. If we think of Sims' presentation as a performance, then it might have been a performance in two senses of this word. "Performance" may, according to the Oxford Dictionary, mean "the action of performing a play, a part in a play, a piece of music, etc.", or it may mean "the capacities of a machine, especially a motor vehicle or aircraft". I will call these two performances "human-performance" and "machine-performance", respectively. An anthropologist who has written extensively on social life as performance is Victor Turner. In one place he writes:

When we scan the rich data put forth by the social sciences and the humanities on performances, we can class them into "social" performances (including social dramas) and "cultural" performances (including aesthetic or stage dramas). As I said earlier, the basic stuff of social life is performance, "the presentation of self in everyday life" (as Goffman entitled one of his books). (Turner 1986:81)

My use of "performance" is more restricted in scope than Turner's "social performances", which refers to the processual aspect of all social life. I use the term more in the sense of what Turner calls "cultural performances". It has to do with aesthetic and/or staged dramas at the conferences, and it has to do with how these performances are based on a communitas of shared expertise. However, as I indicated above, I will not only discuss human-performances, but also different kinds of machine-performances (we will se that these are related). Let us first look at one important way in which machines "perform" in computer science.

Computers change all the time. They rapidly and reliably become faster, cheaper and smaller. Their programs grow in size, complexity and graphical sophistication. People involved in computer science are, generally, quite "hooked" on this technical surge. In computer magazines future computers are often discussed, both new and radically better long-term computer architectures (such as massively parallel computing) and soon-to-be-launched, higher-performance PCs. The computer industry strives, very literally, to make the future arrive as fast as possible.

In the first of the ALife proceedings (Langton 1989) there was a paper that took the rapid change in computer science as its starting point. It was written by Hans Moravec from the Robotics Institute at Carnegie Mellon University. Moravec uses the last 50 years of development of computers to discuss when we may expect a Genetic Takeover, that is, a change from the evolution of carbon-based (normal) life to the (coming) evolution of technologically based life ("real artificial life"). He writes: "It once took 30 years to accumulate a thousandfold improvement [in computer capacity]; in recent decades it takes only 19. Human equivalence [of our "computer capacity"] should be affordable very early in the 21st century." (Moravec, 1989) Chris Langton has, at most talks I have heard him give, stressed that he is absolutely confident that we - not too far in the future - will get real artificial life. That is, we will soon see "technology", maybe as tiny "robots" (made by so-called nano-technology), that breed, reproduce and evolve independently of human control. Stefan Helmreich writes about one of his ALife-informants: "He wished he could live long enough to see [the informant said:] 'all the cool things that would happen in the future.' " (Helmreich, 1995:152) Stefan's informant did not suffer from "future shock" (Toffler 1970), he thrived on it.

At ALife conferences the enchantment of computer progress was often apparent. This enchantment of progress fits Lyotard's definition of performativity as a "game pertaining [...] to efficiency" (Lyotard 1979:44). There is, however, more than Lyotardian performativity that is valued in what I have described above. When I, in the first half of this chapter, wrote that the objectivisation of Sims' simulation was an effect of its photographic realism, I was "experience distant" (Geertz 1983). A more "experience near" effect of this realism was the fascination of the audience with how both a powerful "Connection Machine®" and a skilled Karl Sims had been able to actually produce this "truly astonishing" (Langton 1994:iii) realism. The enchantment of the audience was a product of both a human- and a machine-performance.

The following case further exemplifies ALifers' fascination with both human- and machine-performances. The case also exemplifies the other aspects, discussed so far in this chapter, of how Artificial Life was presented at conferences.

A synthesising example

In chapter 5 I discussed how Jean-Pierre, a Ph.D. student at COGS, made his simulation. We saw how a skill-and-tool dependent simulation, existing in tight relation to Jean-Pierre, became a ready-made product, a black box that produced results. I will now show how this simulation was presented at the Simulation of Adaptive Behaviour conference in Brighton. The results of running the simulation were written up in a paper, co-authored with William, another Ph.D. student at COGS. William presented the paper.

On one large screen William presents his overhead slides. Having studied the relevant biological literature thoroughly, he emphasises the biological - or scientific - problem to be studied. (A problem that has to do with the communication between animals who have their respective territories to defend and respect.) Behind the scenes Jean-Pierre prepares the computer; William tells us that "we are going to go live simulation", and a few minutes into the talk the simulation appears on another large screen behind and above William. (Normally, when we were presented with computer runs in plenary talks, we got to see an edited video recording of a simulation, and not the simulation itself. This allows for more control. Certain events, illustrating the speaker's points, may be shown, whereas uninteresting ones are skipped. And time may be condensed. That is, a computer run that may have taken all night may be shown in a few minutes.) William jokingly emphasises the "live simulation" with something like this: "I told someone yesterday that we would show the simulation live. He told me that that was quite stupid. But I said that we were quite confident it would run. He replied that it was probably no better to be both stupid and arrogant."

In this comment William tells us that we are to be entertained by a "machine-performance". We will observe computer technology at the frontier of technological progress. It's Artificial Life - "live". Computers have (almost) become small, fast and reliable enough to make this presentation possible. But we will also see a human-performance: the machine-performance is uncertain, but Jean-Pierre is skilled enough to bring it off. We are, so to speak, to be enchanted by computers and the accompanying skills at the edge of high technology.

During the rest of William's talk the simulation runs on the large screen behind his head. The simulation is presented to us in the forms that I discussed in the first part of this chapter, in the form of everyday nature and scientific nature. It is shown in four computer windows (see plate 2). Three of these windows show small Cartesian graphs that are drawn as the artificial life in the simulated world unfolds. The axes of these graphs are not named, and it is not possible to see exactly what the graphs are saying, but the "graphness" of them is unmistakable. They look like "scientific nature". The fourth, main window shows a two-dimensional overview of the artificial world. This world combines everyday nature and scientific nature in the way that I discussed in the first half of this chapter. Not unlike watching TV, we watch a simplified version of everyday nature through a window that looks like a Cartesian graph. (Though, compared to a TV-program, the "graphness" of William and Jean-Pierre's world is much more apparent than its realism.) In this window we see moving animats. From the interaction between them, territoriality - the observed fact that the animats stick to their respective territories - emerges. (William, like Jean-Pierre in chapter 5, stresses that this emergence does not take place independently of the fact that we see it. We ascribe "territoriality" to the animats, as we do to animals.) Sometimes William points at one of the windows to illustrate a point he is making, but most of the time the simulation is just running in the background, providing his talk with an aura of technological realism.

The fact that William's talk, with Jean-Pierre behind the scenes keeping track of the computer, was a performance, both of machines and humans, was quite evident - they joked about it and called it a show themselves. But the talk was also more than this. The paper was based on some of the criteria that Terrence, in chapter 3, listed as defining "real science". It was based on thorough biological scholarship - William and Jean-Pierre had read up on biology - and the results from running the simulation tended to support the position of some biologists and not that of others. The simulation had the potential to make an impact on a controversy within biology; it had reference to the Real World.

During his talk William stressed this biological relevance, and John Maynard Smith, the invited biologist, had earlier made a favourable reference to their work in his talk. The machine that had produced the biologically relevant results was now running on the large screen behind and above William. It was an effective demonstration of what I in chapter 2 called "the performativity of Artificial Life". The working machine, with animats that defended their territories, was the physical proof that could make an impact on the truths or falsities of scientific theories about a more general phenomenon (how animals produce and defend their territories). Jean-Pierre and William's performance, or, as they called it, their show, effectively showed this performativity.

The enchantment of machines with agency

Above I made a distinction between machine-performance (a fast computer) and human-performance (a skilled, creative engineer or artist). I would like now to take a closer look at a particular kind of machine-performance. When we say that a car performs well it mainly means that it "obeys" the orders of the driver. It does what he or she wants it to do; it speeds up, slows down and turns only when the driver wants it to. It is predictable. However, when an ALife program such as a Genetic Algorithm performs well it is not predictable. It evolves something which the ALifer had not thought of. In the first half of this chapter we saw that one of the things ALifers liked about Sims' simulation was the co-evolution of the different species in the simulation. Behaviours and body shapes that Sims did not design emerged as a result of the interactions of this co-evolution. That is, the machine-performance of the simulation had some of the properties that we normally associate with human-performance (e.g. a performing musician): a certain degree of creativity and autonomy. The creativity and autonomy of the evolving species were, I think, to most of the audience, among the most fascinating aspects of Sims' simulation.

The "creativity" and "autonomy" of ALife simulations are, as we saw in chapter 2, often called emergence. Chris Langton writes about non-linear systems (of which a well-tuned, evolving GA is an example), that in these systems "...the whole is more than the sum of the parts." (Langton 1989b:41) That which is "more" is often said to emerge.

In an indirect sense, Alfred Gell talks about "emergence" in his paper on the "enchantment of technology". I will now show how.

"Emergence" can be said to be an instance of what Bateson called explanatory principles (Bateson 1972:38). Explanatory principles name phenomena without explaining them. When Newton named the attraction between physical bodies "gravity", he did precisely that, he named it, and he classified it, but he did not explain it. When the relation between, on the one hand, the interacting parts of a system, and, on the other hand, the properties of the system as a whole, is said to be one of "emergency", then this relation is named, but it is not explained.

ALifers, to be sure, did sometimes explain the emergent properties of their systems. They analysed, in hindsight, the systems that they had evolved. In Chapter 3 I showed an example of this. The (emergent) behaviour of the robot with the evolved (or emerged) "brain" was explained by reference to the causal interactions between the "neurons" of the "brain".

At other times, however, ALifers were not able to explain the emergence they talked about. Some researchers, for example, speculated that consciousness was an emergent property that arose out of interaction between the parts of the brain. But they were no more able to explain this emergent process than anyone else. They named it, but they did not explain it.

On a general level, then, the role of "emergence" as an explanatory principle can be said to fill the gap between a more or less understood(56) - but in principle rational - mechanism and an experienced phenomenon that somehow transcends this rational mechanism.

This use of the term "emergence" coincides with how Gell talks about the magic of art objects. Gell, using the wood carvings in the stem of Trobriand Kula-canoes as an instance of art objects, writes:

"It seems to me that the efficacy of art objects as components of the technology of enchantment--a role which is particularly clearly displayed in the case of the Kula canoe -- is itself the result of the enchantment of technology, the fact that technical processes, such as carving canoe boards, are constructed magically so that, by enchanting us, they make the products of these technical processes seem enchanted vessels of magical power. That is to say, the canoe board is not dazzling as a physical object, but as a display of [human] artistry explicable only in magical terms, something which has been produced by magical means." (1992:46, my emphases)

Then consider this rewriting of Gell:

It seems to me that the efficacy of Genetic Algorithms as components of the technology of enchantment--a role which is particularly clearly displayed in the case of the Sims simulation--is itself the result of the enchantment of technology, the fact that technical processes, such as co-evolving species of "box-catchers", are constructed emergently so that, by enchanting us, they make the products of these technical processes seem enchanted vessels of emergent power. That is to say, the Sims simulation is not dazzling as a physical object, but as a display of [computerised] artistry explicable only in emergent terms, something which has been produced by emergent means.

Comparing these two paragraphs, Gell's and mine, we see that magic is an anthropological explanatory principle which (among other things) labels the same phenomena that ALifers label emergent. They both refer to an agency which, relative to a specific point of view, is slightly mysterious.

In the (first) quotation above, Gell is saying something about the relations among three parts: the person who has made the art object (the artist), the art object itself, and the observer of the art object. The observer is the one who is enchanted. Hence, "the enchantment of technology" denotes a condition or a state of the observer. "The technology of enchantment" is the external object that brings about this subjective condition. So, when Gell writes that "the technology of enchantment [...] is itself the result of the enchantment of technology", he is saying that people reify their own enchantment into the art object. The observers objectify the enchantment, whereas Gell, analytically, subjectifies it. The reason why the observers are enchanted is their relation to the artist. They know that there is a person who has made the art object, and they respect or are impressed by his or her skills. The artist possesses "magical means". The reason why observers objectify their enchantment into the art object, is that this object, as Gell writes, is the "something which has been produced by [these] magical means". In objectifying their own enchantment into the art object, people also objectify the magical powers of the artist; "by enchanting us," Gell writes, the artistry of the artist "make[s] the products of these technical processes seem enchanted vessels of magical power." Just as with the enchantment, Gell subjectivises the magic - it belongs to the skilled artist - whereas people generally, according to Gell, objectivise the magic into the art object. By subjectivising magic, Gell explains it, and in doing so the magical objects become "less magical".

It is, with respect to the relation between the art object and the observer, interesting to compare Gell's understanding of art to how emergence was talked about at COGS. Generally, the ALifers at COGS were a bit critical of how this term was used by other ALifers. In agreement with their general, relativistic philosophy, they did not want to say that the emergent properties of a system were necessarily objectively present in the system. "Emergence", Inman Harvey writes in his dissertation, "is in itself nothing magic as a phenomenon, if it is considered as emergence-in-the-eyes-of-the-beholder." (Harvey, 1993:22) Harvey refers to another ALifer at COGS, Michael Wheeler, who sees the technical use of "emergence" in relation to the everyday use of the term. An example of the everyday use of the term that Wheeler often quoted (and that Harvey refers to) was the sentence "the moon emerged from behind the cloud". This statement is observer dependent; to observers elsewhere, the moon would not yet have emerged or it would already have emerged. The technical, "ALify" use of emergence, according to Harvey and Wheeler, is also observer dependent.

If we compare Gell's understanding of the magic of art objects with Harvey and Wheeler's understanding of the emergent properties of ALife systems, we see that they agree quite well. There is nothing magical either in the art object or in the ALife system. Gell, Harvey, and Wheeler are all subjectivising the magic and the emergence.

From this, two problems with Gell's argument arise. First, people do not necessarily objectify the magic or the enchantment of an art object into the object. The ALifers at COGS were clearly "enchanted", or fascinated, by the emerging (or "magically arising") creatures of Sims' simulation (and of GAs in general), even if they were as relativistic as Gell. We should not take it for granted that people objectify their own enchantment or the magic of the artistry.

From this first point my second problem arises: Why should we, analytically, subjectify - or objectify - the enchantment and the magic? Maybe it is as true that (rewriting Gell) "the enchantment of technology [...] is itself the result of the technology of enchantment" as that (as Gell writes) "the technology of enchantment [...] is itself the result of the enchantment of technology"? That is, maybe the enchanting object and the enchanted subject define each other reciprocally? Here, there is neither time nor space to argue that this is the case. I only adopt the agnostic position (see figure 1) of not deciding whether the subject pole or the object pole should be credited.

Having discussed the relation between the art object (or the emergent object) and the observer, I now turn to the other important relation, that between the artist and the art object. Here there is a striking difference between the artistry that Gell describes and the emergent properties of ALife systems. In the creative, magical process that Gell describes, the skilled artist is a human being separated from the art object. In the creative, emergent process of a running simulation, the "artist", or the agent, is the simulation - the art object - itself.

Ascribing this agency to the simulation is a concrete form of "objectivisation". ALifers do not only "understand" this agency to be immanent in the machine; they work long hours in their laboratories, programming their computers, negotiating with the system managers, debugging their programs, "fiddling around with the parameters", etc. in order to tune their simulations so that they will start, creatively, to produce their own life forms.

We might refer to this process as "subjectivisation" as well. But, again, it is not only a process in which ALifers "understand" their machines to be some kind of subjects (which, as we saw in chapter 4, they did in many different ways). It is also a process whereby they actually try to make their machines into "subjects" (described for example as autonomous agents).

Concluding remarks on ALife as art

When ALifers are enchanted by a computer simulation there are several things going on. First, being computer scientists or at least quite familiar with computer science, ALifers know how to appreciate a product of skilled computer engineering. They are enchanted by the performance of their colleagues. Second, a good computer simulation also shows the performance of computer technology at the edge of "high tech". They are enchanted by a "Connection Machine®" from Thinking Machines Corporation, or by a "live" simulation on the large screen in the conference hall. Third, they are enchanted by an agency which is neither that of their skilled colleague nor that of an efficient machine, but that of an artificial evolution creatively producing emergent properties.


Chapter 7: Conclusion


Irony and Engagement

In large parts of this thesis I have presented the field of Artificial Life as a combination of real science and postmodern science. In chapter 3, I showed how some of the ALifers at COGS related to these concepts. In the postmodern conceptualisation of Artificial Life, the search for one truth, one image of Nature given by the unification of ALife and biology, was rejected in favour of attempts to understand ALife as a more playful and artistic endeavour. As the "real science" alternative matches the conventions of modern technoscience, and as "postmodern" science must follow "modern" science, these two terms seem to imply an evolutionary movement from one to the other. The difference between the modern and postmodern ways to understand and practise ALife may be seen as comparable to the difference between an old and a new way to understand and practise a technoscience. One might say that the new was about to replace the old, that "postmodern" would replace "modern". One might think of this change as a paradigmatic change.

Kuhn's notions of changing paradigms include the idea of incommensurability. Two different paradigms will differ not only with respect to the theories chosen, and answers given, but also with respect to what counts as a problem:

Must a theory of motion explain the cause of the attractive forces or may it simply note the existence of such forces? Newton's dynamics was widely rejected because, unlike both Aristotle's and Descartes' theories, it implied the latter answer to the question. When Newton's theory had been accepted, a question was therefore banished from science. (Kuhn 1962:147, my emphasis)

Going from one paradigm to another is a "revolution". New questions are raised; Newton's theory was sufficiently open-ended to allow for further inquiry along the lines that he sketched out, but with Newton some questions were also banished from science; gravity became an unquestionable explanatory principle (Bateson 1972:38). That is, a "revolutionary" change in interests had occurred.

It may be that the difference between the "real science" and the "postmodern" ways to represent and normatively judge Artificial Life is "paradigmatic"; that these two representations of ALife are in some sense incommensurable. They are based on different interests. When I asked Gregory (see chapter 3) what he thought about Terrence's wish to do real science, he shrugged his shoulders and answered "I avoid that question by claiming I'm an engineer". Here, conflict ceases because of a lack of common interests.

However, when it comes to the different ways to practise Artificial Life research, I think the difference between the modern and the postmodern elements is less clear-cut. The two ways to practise ALife that I discussed at length in chapter 6 are, I think, not so much a matter of "old" versus "new" but more a matter of continuity versus novelty. The old ("modern" technoscience) is not replaced by the new ("postmodern" technoscience). Rather, the old is reproduced in new ways.

When ALife researchers relate to their simulations they both reproduce and challenge the technoscientific distance. They reproduce the distance between the researcher and a scientific nature that (like all nature) appears to us as pregiven, despite the fact that they construct every single digital bit that they put into the experiment. In extending the technoscientific constructivism of Boyle (who, as Latour put it, "extended God's constructivism"; see chapter 1), ALifers extend the construction of objectivity. But they also challenge the distance of technoscience. ALifers at COGS and elsewhere adopt a relativistic philosophy rather than an objectivistic one. They know that their Nature is a construction and relax their technoscientific distance by revealing that it is such a construction, sometimes calling it "art", even if it is a construction of an "objective" world. It seems to me that what ALifers cherish - explicitly as well as implicitly, as an artistic, human expression - is their own construction of objectivity. Thus the tension between, on the one hand, reproducing technoscientific distance and objectivity, and, on the other hand, challenging it by a relativistic philosophy and a celebration of the engineer as a creative artist, is resolved.

In coming to this conclusion I have resolved a tension that I have created in this thesis by building my presentation up around the binary opposition between real science and postmodern science. I have produced a "synthesis", a "Conclusion". But I have not resolved this tension merely in order to conform to a style of writing (even if I have also done this). The important point is that ALifers, by making their endeavour into an "art of objectivity", also resolve a tension. They program their natures, and they are inspired by philosophies that reject objectivism. Still, they continue to work within technoscience. One of the ways in which they deal with this tension is by becoming ironic, by adopting a distance towards their own roles as "scientists". Perhaps the best example of this was the presentation (accompanied by the laughter of the audience) of the highly realistic underwater simulation with a parody of Jacques Cousteau commenting (Terzopoulos et al. 1994). This presentation emphasised with irony the artifactual character not only of the simulation as such, but of the realism of the simulation. Moreover, the presentation utilised the conventions of the TV genre known as the natural history documentary to create a "documentary" of a simulated world, thus suggesting that the realism of "real" documentaries is also a construction, a simulation.

There is a similar irony present in talking about animats in terms such as "these guys" (see chapter 4). At COGS, the AI researchers (including the ALifers) often spoke about the AI enterprise as a "history of broken promises". They referred to the many optimistic promises - made by AI researchers - of what AI systems would soon be capable of doing, the subsequent failures of these promises, and the "victories" thus cashed in by the critics of AI. Some researchers at COGS noted that the promise of the 60's that AI systems would soon become Grand Masters of chess (something that would prove their intelligence) was actually coming true in the 90's. However, today no one claims that these systems are "intelligent" any more. Among AI researchers this was described as a process of "chasing a moving target", and they speculated that it might continue to be the case that when AI-researchers make a machine that performs a "human intelligent action", the definition of "human intelligence" (or of intelligence in general) will change so as to exclude the new, "intelligent" machine.

In addition to the more careful attitude that followed the realisation of the AI-history of broken promises, many AI researchers - particularly those calling themselves "ALifers" - had adopted a phenomenological philosophy that went against the notion that "intelligence" could be disembodied, abstract patterns of information, possibly existing inside computers.

Despite these attitudes ALifers continue to construct anthropomorphic or lifelike creatures inside their computers. The language and representations used to refer to and make sense of these lifelike creatures - the animats and robots - are taken from the Real Life of the scientist. They include words and phrases such as these guys, he, and she, and they include icons - symbols that resemble their referents - such as the researcher's own body (used pedagogically, as we saw in chapter 4, to explain something concerning an animat or a robot). These expressions establish similarities between Artificial Life and Real Life, but in order to avoid making unjustified anthropomorphisms, researchers often transformed similarities into metaphors rather than identities. One way to give the similarities a metaphorical quality was by being ironical, for example by addressing animats as these guys. This irony could also be expressed by picturing, on the computer screens, the animats as "little guys". Plate 6 (page 129) is an example of how animats are made into "little guys" graphically rather than verbally. These cute animats were presented by Dave Cliff at the ECAL95 conference (Cliff 1995) and at his Internet site (Cliff 1995b). At the Internet site (and at the conference) Cliff makes it clear that the interactions between the animats took place in two-dimensional space (the animats and their world were flat), and that, as he writes, "the 3D structures, surface physics, and shading are all purely for illustrative effect" (Cliff 1995b). In a paragraph that Cliff has called Graphics techno notes we get a technical description of how the 3D movie was generated. The informal heading tells us that this is not very serious science; it is about having a bit of fun with computer graphics. The fancy 3D illustration of these animats and the Graphics techno notes also make it clear that the animats, no matter how realistically they are presented, perhaps even because they are so realistically presented, are not really alive.

Rather than seriously constructing life-like or intelligent machines, AI researchers - at least some of them, some of the time - construct such machines with ironical role distance, a bit for the fun of it. By adopting this role distance, the ALifer can continue to do AI-research after the demise of GOFAI. Likewise, ALifers can continue to practise objective technoscience after objectivism - according to themselves - is obsolete.

As we have seen, ALife research reproduces modern technoscientific practice in its own way. It produces new technology by means of the technoscientific professionalism of the engineer. That is, ALifers produce new technology by situating themselves as subjects and their machines as objects. Or, conversely, ALifers situate themselves as subjects and their machines as objects by producing life-like and partly autonomous technology (for example by evolving species of genetic algorithms).

But ALifers also reject the Cartesian and modern understanding of the relationship between the thinking subject and the objective world, and they have to a certain extent replaced Boyle's distanced witness with a more playful engineer. Through these changes ALife research has become a postmodern technoscience. GOFAI was a truly modern technoscience. It thrived on the unabashed optimism of progress (which in hindsight has become "the history of broken promises"), and it was based on the rationalist and objectivist conceptions of human beings and the world that grew out of the Enlightenment. Lyotard defines postmodernism as "incredulity toward metanarratives" (Lyotard 1979:xxiv). Beliefs in the progress of the West and the rationality of science are such metanarratives. Artificial Life, in rejecting GOFAI, also rejects the metanarratives on which it was based. Among the ALifers at COGS I found few who legitimated their science as part of the great Western Progress or Rationality. What I found was a practice geared towards many smaller performances. These performances could be commercial, like the Lyotardian "performativity" of a "Connection Machine®" or of an ALife-based computer game, or they could be artistic and ironic performances of humans and machines - and at times, some might say, have elements of the irresponsibility of "boys" who play with the latest high-tech devices. Or they could be the slightly magical performance of the autonomously evolving species of a well-tuned GA.

Finally, I should note that having these elements of "irresponsibility", of playfulness and irony, does not mean that practising ALife was a nihilistic endeavour. ALifers also had serious intentions with their science. At COGS this engagement was perhaps most clearly displayed in the rejection of GOFAI. This was a rejection of the "speciesism" and scientism of Man-the-Scientist. When I asked Ph.D. students why they studied ALife, many gave an explicit political or ethical answer. Some argued that to get away from the anthropocentrism of GOFAI and towards a more systemic (or holistic) understanding of life and cognition was a more ecologically sound way to understand thinking and acting creatures and their environments. Established researchers of ALife were careful not to make explicit political statements. They talked rather in terms of performativity; they talked about "fruitfulness" (in relation to results) and "how to better make robots". However, underneath these expressions there could be other motivations. Gregory, on one occasion, told me with emphasis that "[internal] representations [the hallmark of GOFAI] are evil because they make us do the wrong things!" The context in which this was said suggested that the explicit reference to "doing the wrong things" was "making robots the wrong way". The temper, however, and the use of the word "evil", suggested a wider reference. Rejecting GOFAI, to him as well as to others, had to do with making a saner world.

Monstrous technology or letting go of control?

The first anthropologist I met who knew about Artificial Life research - he had attended a lecture where biologist Richard Dawkins had presented his Genetic Algorithm (Dawkins 1989) - told me that he had left the lecture hall "shocked" by the fact that leading biologists did not see any reason, in principle, why computers could not contain life. One of the last social scientists to whom I presented my findings on ALife research (who had worked on AI-issues in the 80's) commented with emphasis that "life is life, machines are machines, these two can never be the same. I hope you take a critical stance."

In this last section I reflect upon how we as anthropologists and sociologists ought to go about studying intelligent machines. My reflection is based on a recurrent experience that I have had throughout this project, namely that most social scientists I have met who have studied ALife or AI have also been critical towards it. They have been quite convinced that computers cannot really be alive or intelligent, and/or they have seen intelligent machines as a threat. Most of these anthropologists or sociologists have written about AI/ALife. Here, however, I will comment on their oral statements, as it is in these that their moral concerns have been most explicitly expressed. By doing this I will, as with the oral statements of the ALifers, anonymise these opinions. Hence, I regret, I will remove the holders of these opinions from the Scientific Society of Subjects in which "(Latour 1987)" etc. belong. They will thus lose some of their subjectivity, and the following story will lose the strength given by the enrolment of Subjective allies.

During my fieldwork a visiting sociologist gave a talk at COGS. The main line of the talk established a difference between humans and machines. Humans are thinking and intentional entities, machines - and computers in particular - are not. His argument followed the criticism of AI raised by Dreyfus and Searle. In short, he argued that we have to distinguish between action and behaviour. A wink is an action - a flirt or a confirmation of friendship. A blink is an involuntary movement of the eyelid, a "mere behaviour". The former is intentional, the latter is not. In order to act (as opposed to merely behave) one has to be socialised into a (Wittgensteinian) "form-of-life" - a particular cultural background of meanings. One has to be a socialised part of a human community.

After the talk some of us - three AI researchers (more or less involved in the ALife group), the sociologist, and I - gathered at a campus bar for a drink. The discussion revolved around the Cog-project - the building of a humanoid - at the Massachusetts Institute of Technology.(57) The AI-researchers asked the sociologist if he thought that Cog, some time in the future, might become "intelligent". The sociologist denied this possibility as a matter of principle, and he kept on denying it after the AI-researchers explained the rationale behind the Cog project. An important point here is that the Cog project is trying to meet the criticism of AI raised by (among others) Dreyfus and Searle. The Cog project is an attempt (and the degree to which it may succeed is not a topic here) to make an "embodied" AI system that is to be "socialised". The difference between the principles behind a Good Old Fashioned AI system and the principles guiding the construction of Cog was identical to the difference in principle that the sociologist used to distinguish GOFAI systems from human beings. According to the sociologist's own principles, then, Cog became quite human. The sociologist did not see this, and in the course of the conversation it became clear to me that he did not want to see it, even as the AI researchers tried to make it clear to him. At one point in the conversation one of the AI researchers also became aware of this, and something interesting happened. The AI-researcher "sociologised" the sociologist: He asked the sociologist something like the following: "It seems that you have some moral reasons for not wanting Cog to be intelligent?" The sociologist openly admitted that this was the case. His first reason was what he himself called "professional imperialism"; he wanted to keep human beings as the sociologist's object of study.(58) His second reason was that he feared that if machines become human-like, humans will become machine-like. The AI researcher nodded quietly, and the discussion on Cog came to an end. None of the AI researchers commented on or opposed the sociologist's moralism. AI researchers, it seemed to me, are often confronted with such moral objections to AI. They have learned to handle them simply by avoiding them.

As anthropologist Tian Sørhaug has recently argued, ascribing personhood strictly to human beings is a particularly Western idea (Sørhaug 1996). People around the world identify canoes, houses, gifts, mountain tops, trees, or animals with personal or human qualities. Contemporary anthropologists write easily about this without moralising, without, for example, telling the Balinese that their holy, "personalised" mountain is "really" nothing but a result of volcanic activity. Most of the anthropologists and sociologists who have shown interest in AI/ALife, however, seem to have had precisely such an agenda: to establish as a fact that computers are "really" not living or intelligent beings.

This agenda is sometimes said to be "critical" or even "radical". I think it is neither. Rather, I see in it a reproduction of an old theme in Western modernity. The romanticism of the early 19th century, with its reaction to the materialism of Enlightenment science and with its individualism, is one example of - in Marshall Berman's vocabulary - the "modernist" reaction to "modernism" (Berman 1982). There is, as Berman pictures so well, an ambivalence inherent in modernism. Modernism bears its own critique - anti-modernism - as an established part of itself. German philosophy of the 1930's, with the "critical school" and Martin Heidegger, is yet another instance of this anti-modernism.

Let me take Heidegger as an example, since he has often been referred to in this thesis, and since there is an interesting movement in Heidegger's thoughts. Michael Zimmermann writes as follows about Heidegger's view on typewriters:

Surely, we may think, this device is simply a more efficient means for writing. According to Heidegger, however, efficiency was the wrong measure to use in evaluating writing. For him, writing was essentially handwriting. It was his experience that his own thinking occurred through his hand while he was writing. Typewriting undermined both thought and language, he argued, because in typewriting the word no longer "comes and goes through the writing and authentically acting hand, but rather through its mechanical pressure [on the typewriter keys]. The typewriter tears the writing from the essential realm of the hand and, that is, of the word." (Zimmermann 1990:205, reference omitted)

There are valuable insights in this passage. The idea that "thinking occurs through the hand while writing" has played an important role in this thesis (and, indeed, it is a Heideggerian realisation that has inspired both the ALifers and me). Furthermore, a more "efficient" technology of writing is not necessarily "progress". It may be a step "backward" - as it was to Heidegger. However, to contemporary writers who have grown up with keyboards, typing is not necessarily "mechanical"; it may be as "organic" as Heidegger's handwriting, because we have become experts in typewriting. Like Heidegger with his pencil and paper, we think through our hands, our typewriting capabilities, and our keyboards and computers. This probably makes us think and write differently from Heidegger, but not necessarily worse than he did. The fact that Heidegger conceived of technological change as a movement from something "authentic" to something "mechanical" is an example of his alienation from modern technology. The authentic was to Heidegger related to the "natural" and to the "homeland", and technology - be it typewriters or cities - imposed itself between human beings and the authentic:

In the technological era, space and time are no longer understood in terms of what Heidegger defined as an authentic homeland, a place in which the destiny of a people can work itself out within a familiar natural context. His personal attachment to his own Swabian soil was widely known, as was his antipathy toward big cities. His close friend Heinrich Petzet noted that Heidegger would become almost physically ill when approaching a big city, so upset was he by the combination of urban pollution and social dislocation. (Zimmermann, 1990:210)

I do not hold the sociologist who visited COGS responsible for Heidegger's alienation. However, the theme - fear of modern technology - is recognisable. Where did the fear that "as machines become humans, humans will become machines" originate? COGS was populated by seniors who had been involved in making intelligent machines since the 60's, and by students who got their first computers when they were 13. There was no observable lack of "humanness" at COGS compared to my own anthropology department. Nor have I seen any reports documenting that people who think that computers are humans start to behave like machines.

There is, however, one sense in which there may be some truth in the sociologist's statement. Let me give an example. A couple of years ago the local supermarket next door was bought by a large chain store. I knew the old store and the people working in it. The lady behind the cash register always smiled at me and decided that I could get yesterday's strawberries at half price. In the new store this is not possible. The price is determined by the new cash register that is connected to a central computer. All prices are set centrally and the interaction between me and the person behind the cash register is reduced to the mechanical operations of getting the bar codes of the items I buy read by the computerised cash register. Applied computer science, and the basic computer research once performed at places such as COGS, have enabled the construction of a centralised, "intelligent" cash-register system where human interaction is reduced to impersonal, "mechanical" movements.

I will, however, argue that the intelligent computer does not determine this change. This is also exemplified in the chain store. All the employees working at the cash registers in the store have attended a "smile-course" where they have learned to say "Hei" ("Hi") the first time they meet the eyes of the customer. The employees of the store apply this lesson quite consistently, and greetings have effectively become reduced to mere mechanical behaviour. This is not determined by computer technology, but by a policy that the leaders of the store think of as "rational". A similar policy determines how the producers of cash registers design their systems. A computer scientist pointed out to me that there is no problem - in principle - in designing computerised cash registers so that they suggest a price, but leave it to the person handling it to finally decide the actual price. Responsibility could, from a technical point of view, easily have been delegated locally, and a more vital and respectful interaction could have been retained between the person behind the cash register and the customer, even if the cash register was the latest and most "intelligent" one on the market.
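To make the computer scientist's point concrete, the following is a minimal sketch, in Python, of such a locally responsible cash register: the central system only suggests a price, while the person at the register makes the final decision. The sketch is purely illustrative; the item names, the prices, and the half-price override are my own assumptions, not a description of any actual system.

# Hypothetical sketch: a cash register that receives centrally set prices,
# but leaves the final decision with the person operating it.

CENTRAL_PRICES = {"strawberries": 30.0, "milk": 12.5}  # set by the chain's head office

def suggested_price(item):
    """Return the centrally determined price for an item."""
    return CENTRAL_PRICES[item]

def register_sale(item, operator_price=None):
    """Record a sale. The central price is only a suggestion; the operator
    may override it, for example to sell yesterday's strawberries at half price."""
    price = operator_price if operator_price is not None else suggested_price(item)
    print("Sold %s at %.2f" % (item, price))
    return price

# The operator accepts the suggestion for milk, but overrides it for strawberries:
register_sale("milk")
register_sale("strawberries", operator_price=suggested_price("strawberries") / 2)

The difference from the centralised system described above is small in terms of code, which is precisely the point: delegating the final say to the person behind the register is a policy choice, not a technical necessity.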

It may be, however - and here we see one type of technological determinism - that no such intelligent cash register is available on the market. It may be that all cash register systems are based on the philosophy of delegating very little control locally. That is to say, the centralised policy of chain stores may have been black-boxed into cash register systems. In order to reopen this black box one would need a laboratory, a research department, and a system of sales and distribution of cash registers. (See Latour 1987 for a discussion of the closing and reopening of black boxes.) A relatively small Norwegian chain store does not have the resources to do this. The policy of this chain store, then, has been partly determined in the research departments and committee rooms at National Cash Registers Inc., years before it buys its new cash registers.

What we need to ask, then, is whether the fact that AI/ALife researchers conceptualise and make "intelligent" or "living" machines makes later users of commercial ALife-inspired machines into machines (or in some other way impoverishes their lives). I have seen no argument that demonstrates such a causality. The ease with which one - in principle - can make intelligent cash registers that leave room for local responsibility (and thus for dialogue between customer and employee) suggests that there is no such determinism between intelligent machines and poor lives.

I should, however, note that particular ways of understanding intelligence can make a difference to applied machinery and to people's lives. To stay with the cash register example: The centralised cash register system has an architecture that conforms to a GOFAI system (with its centralised "cogniser"), whereas the hypothetical, decentralised system of cash registers has an architecture more in line with the way ALifers understand intelligence (namely as decentralised and parallel processes). Decentralised ALife systems may be black-boxed and thus spread to applied computer systems. Indeed, Philip Agre, one of the early proponents of ALife ideas at both COGS and MIT (Agre and Chapman 1989), gave a talk at the University of Oslo where he argued for the use of more decentralised computer systems, for example in the state administration.

One argument - when discussing cash registers - in favour of a certain technological determinism needs to be mentioned. New and more intelligent cash registers, no matter how they are designed, are nevertheless designed to enable faster processing, more customers through the cash register, and hence less time for "human" interaction. However, a Norwegian coming to the USA is surprised to find that even in large, modern supermarkets the lady behind the cash register talks cheerfully with her customers. The high-tech cash register determines neither the Norwegian silence nor the American sociality.

Behind the fear of the sociologist who visited COGS that "as machines become humans, humans will become machines" there seems to be a particular kind of pessimistic technological determinism. I have also heard anthropologists defend such a determinism explicitly: "ALife is a bad thing because it leads to bad technology, such as advanced weapons." These arguments imply, as I see it, that we are slaves of an autonomous technological system that deprives us of our human worth, that "makes us into machines". Heidegger, according to Zimmermann, also understood modern technology to be such an autonomous system: "...from Heidegger's point of view, no [rational] human discourse could have any effect on the outcome of the technological era. Modern technology was not the consequence of human action, and thus could not be essentially changed by such action." (Zimmermann 1990:218) However, "After World War II, ... when [Heidegger] realised that the German 'revolution' had completely failed, he had to come to terms with the fact that the technological epoch was not nearly at its end, but instead was only just beginning. Given that this epoch might last five hundred years, Heidegger's later remarks became more tempered and reserved." (1990:218) Heidegger's less pessimistic attitude is interesting. His later reasoning on the question of how we ought to relate to technology was formulated as a Zen koan. A koan is an apparently paradoxical saying that, according to Zimmermann, "cannot be solved by merely rational means, ... but requires an existential breakdown of the rational ego's rational way of framing things - and thus a breakthrough to a less constricted, more expansive way of being in the world." (1990:219-220) In Zimmermann's words the meaning of Heidegger's koan is as follows:

Instead of trying to "solve" the problem of modern technology by furious actions and schemes produced by the rational ego, ... Heidegger counseled that people learn that there is no exit from that "problem". We are cast into the technological world. Insight into the fact that there is no exit from it may, in and of itself, help to free us from the compulsion that characterizes all attempts to become "masters" of technology - for technology cannot be mastered. Instead, it is the destiny of the West. We can be "released" from its grip only to the extent that we recognize that we are in its grip: this is the paradox. (Zimmermann 1990:220)

When sociologists or anthropologists, like the sociologist who visited COGS, try to oppose what they fear as an autonomous technological system (that makes us into machines) by posing a social constructivistic theory (as the sociologist also did), we end up, as I described in chapter 1, violently alternating between natural determinism and social determinism: Either "it" (Nature or Technology) controls us, or we must take control over it. We must take a "critical stance" and not grant AI-systems any human-like agency, because then "it" - the technological system - will turn us into machines. It is as an alternative to such attempts to "master" a monstrous technology that post-war Heidegger (in Zimmermann's interpretation) proposes a more Buddhist attitude of letting go of control. This alternative resembles that of the ALifer I quoted in chapter 3, with the difference that the ALifer took the Buddhist idea of "letting go of control" even further (and to him the idea was emically Buddhist; he was a practising Buddhist). The ALifer advised us that if we wanted to continue to live in a technological society (he himself had retired into a Buddhist monastery) we should not only let go of control mentally or bodily, but technically, by making telephone networks etc. that evolved like the virtual species of Genetic Algorithms. This evolving technology will not "control" us any more than we will "control" it. It will co-evolve with us.

Several ALifers talked about the relationship between themselves and their ALife-creatures as a co-evolution. I think this makes sense. I think, however, that we should not only think of our relationship with ALife creatures, but with all technology, as co-evolution. I described a special case of such a mutual dependency when I, in chapter 5, described the relationship between tools and skills. It may thus be useful to think of our technology as something that actually can co-evolve. That is, we need to think of it as life, as creatures with their own agency and quasi-subjectivity. In practice we already do this. We constantly address our cars, computers, mix-masters, and pencils in "anthropomorphic" terms. That, as Latour points out, is why "We have never been modern" (1993). What we need is to openly acknowledge this, and not, as an anthropologist or as an ALifer, try to hide it, excuse it, or purify "things" into pure things and "humans" into pure humans. There has never been such a thing as a pure, modern "thing" or a pure, modern "human" (Latour 1993:138). And as the engineers of Artificial Life try with all their genius to actually endow their computers with life, the "critical" attempts to purify these quasi-subjects into pure things will become increasingly difficult.


References

Agre, Philip E. and David Chapman
1989 What are plans for? A.I. memo no. 1050a Artificial Intelligence Laboratory, MIT.

Barth, Fredrik
1989 "Sosialantropologen i arbeid med egen kultur. Fredrik Barth i samtale med Ottar Brox og Marianne Gullestad", in Ottar Brox og Marianne Gullestad (eds.): På norsk grunn. Sosialantropologiske studier av Norge, normenn og det norske, p. 199-224, Ad Notam, Oslo.

Bateson, Gregory
1956 "Toward a Theory of Schizophrenia" in Bateson 1972.
1972 Steps to an Ecology of Mind, Ballentine Books, New York.
1979 Mind and Nature, A Necessary Unity, Bantam Books, Toronto.

Berger, P. & Thomas Luckmann
1984 [1966] The Social Construction of Reality, Penguin Books, London.

Berman, Marshall
1982 All That Is Solid Melts Into Air, Simon and Schuster, New York

Boden, Margaret
1990 The Philosophy of Artificial Intelligence. Oxford University Press, Oxford.
1994 "Autonomy and Artificiality" in AISB Quarterly, Spring 1994, no 87.

Boerlijst, Maarten and Pauline Hogeweg
1992 "Self-Structuring and Selection: Spiral Waves as a Substrate for Prebiotic Evolution" in Langton et al. (eds.) 1992.

Bourdieu, Pierre
1990 [1980] The logic of practice, Polity Press, Cambridge.

de Bourcier, Peter, and Michael Wheeler
1992 "Signalling and Territorial Aggression: An Investigation by Means of Synthetic Behavioral Ecology", in Cliff et al. (eds.) 1994.

Brooks, R. A. & Stein, L. A.
1993 Building Brains for Bodies, A.I Memo No. 1439, MIT Artificial Intelligence Laboratory.
1994 "Coherent behavior from Many Adaptive Processes", in Cliff et al. (eds.) 1994.

Callon, Michel
1986 "Some elements of a sociology of translation: domestication of the scallops and the fishermen of St Brieuc Bay" in John Law (ed.) Power, Action and Belief, Routledge & Kegan Paul, London.

Callon, Michel and Bruno Latour
1992 "Don't Throw the Baby Out with the Bath School! A Reply to Collins and Yearley" in Andrew Pickering (ed.): Science as Practice and Culture, The University of Chicago Press, Chicago.

Cliff, Dave
1990 Computational Neuroethology: A Provisional Manifesto, CSRP 163, University of Sussex.

Cliff, Dave, Philip Husbands, Jean-Arcady Meyer, and Stewart W. Wilson (eds.)
1994 From animals to animats 3, MIT Press, Cambridge, Massachusetts.

Cliff, Dave and Geoffrey F. Miller
1995 "Co-Evolutionary Simulations" in (Moran et al. 1995).
1995b CoEvolution of Neural Networks for Control of Pursuit and Evasion, http://www.cogs.susx.ac.uk/users/davec/pe.html .

Cole, John W.
1985 "In a pig's eye: Daily life and political economy in Southeastern Europe" in John W Cole: Economy, Society, and Culture in Contemporary Romania, Research Report Number 24, Department of Anthropology, University of Massachusetts, Amherst.

Collins, Harry M.
1975 "The Seven Sexes: A Study in the Sociology of a Phenomenon, or the Replication of Experiments in Physics", in Sociology, vol. 9, p. 205-24.
1990 Artificial Experts, Social Knowledge and Intelligent Machines, The MIT Press, Cambridge.

Collins, H. M. and Steven Yearley
1992 "Epistemological Chicken" in Andrew Pickering (ed.): Science as Practice and Culture, The University of Chicago Press, Chicago.

Churchland, Paul M.
[1984] 1993 Matter and Consciousness, The MIT Press, Cambridge, Massachusetts.

Dawkins, Richard
1989 "The Evolution of Evolvability" in (Langton 1989).

Dreyfus, Hubert L.
1991 Being-in-the-World, A Commentary on Heidegger's Being and Time, Division I The MIT Press, Cambridge, Massachusetts.

Dreyfus, Hubert L. and Stuart Dreyfus
1990 "Making a Mind Versus Modelling the Brain: Artificial Intelligence Back at a Branch Point" in Margaret A. Boden: The Philosophy of Artificial Intelligence, Oxford University Press, Oxford.

Durkheim, Émile
1976 [1915] The Elementary Forms of the Religious Life, London.

Eigen, M. and P. Schuster,
1979 The Hypercycle: A Principle of Natural Self-Organisation, Springer, Berlin.

Emmeche, Claus
1991 Det Levende Spil. Biologisk form og kunstig liv, Nysyn, Munksgaard.

Fann, K.T.
1970 Peirce's Theory of Abduction, Martinus Nijhoff, The Hague.

Forsythe, Diana E.
1993 "The Construction of Work in Artificial Intelligence", in Science, Technology, & Human Values, Vol. 18, no. 4, Autumn 1993.

Gell, Alfred
1992 "The Technology of Enchantment and The Enchantment of Technology" in J. Coote & A. Shelton (eds.): Anthropology, Art and Aesthetics, Clarendon Press, Oxford.

Geertz, Clifford
1983 Local Knowledge, New York.

Gibbs, Phil
1996 Jacques-Yves Cousteau http://ntdwwaab.compuserve.com/homepages/phil_gibbs/cousteau.htm

Gibson, James J.
1979 The Ecological Approach to Visual Perception, Houghton Mifflin, Boston.

Goldberg, David E.
1989 "Zen and the Art of Genetic Algorithms", in Schaffer (ed.) 1989.

Haraway, Donna J.
1989 "A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century", in Haraway 1991.
1991 Simians, Cyborgs, and Women, Free Association Books, London.

Harvey, Inman
1993 The Artificial Evolution of Adaptive Behaviour, The University of Sussex.

Harvey, Inman, Phil Husbands, and Dave Cliff
1994 "Seeing The Light: Artificial Evolution, Real Vision", in Dave Cliff et al. 1994.

Helmreich, Stefan Gordon
1995 Anthropology inside and outside the looking-glass worlds of Artificial Life, Volume 1, Ph.D. thesis, Department of Anthropology, Stanford University.

Herrigel, Eugen
1971 Zen i bueskytningens kunst [In English translation: "Zen in the art of archery"] Gyldendal, Oslo.

Hoffmeyer, Jesper
1993 En snegl på vejen : betydningens naturhistorie, Omverden/Rosinante, København.

Holy, Ladislav, and Milan Stuchlik
1983 Actions, norms and representations, Cambridge University Press, Cambridge.

Husbands, Phil
1993 "An ecosystems model for integrated production planning" in Int. J. Computer Integrated Manufacturing, vol. 6, nos. 1 & 2, 74-86.

Hylland Eriksen, Thomas
1993 Små steder, store spørsmål, Universitetsforlaget, Oslo.

Janik, Allan and Stephen Toulmin
1973 Wittgenstein's Vienna, Touchstone Book, Simon and Schuster, New York.

Jarvie, I. C.
1968 "Limits to the Functionalism and Alternatives to it in Anthropology" in R. A. Manners and D Kaplan (eds.) Theory in anthropology : A sourcebook, Aldine, Chicago.

Kapferer, Bruce
1984 "The Ritual Process and the Problem of Reflexivity in Sinhalese Demon Exorcisms", in John MacAloon (ed.) Rite, Drama, Ritual, Spectacle, Ithaca.

Kaye, Howard L.
1986 The Social Meaning of Modern Biology, Yale University Press, New Haven.

Kirkebøen, Geir
1993a Psykologi, Informasjonsteknologi og Ekspertise, Institutt for Lingvistikk og Filosofi, Det historisk-filosofiske fakultet, Universitetet i Oslo.
1993b "Informasjonsteknologi, bevissthet og psykopatologi: Et historisk perspektiv på hvorfor og noen eksempler på hvordan informasjonsteknologien har formet psykologiens menneskebilde" in Impuls, nr. 4.

Kelly, Kevin
1994 Out of Control, Reading, Massachusetts.

Knorr-Cetina, Karin
1981 The Manufacture of Knowledge, An Essay on the Constructivist and Contextual Nature of Science, Pergamon Press, Oxford.
1995 "Laboratory Studies. The cultural approach to the study of science" in Sheila Jasanoff et al. (eds.): Handbook of Science and Technology Studies, Sage Publications, London.

Kuhn, Thomas
1962 The Structure of Scientific Revolutions, University of Chicago Press.

Lakoff, George, and Mark Johnson
1980 Metaphors We Live By, The University of Chicago Press, Chicago.

Langton, Christopher G.
1989 Artificial Life, Addison-Wesley Publishing Company, Redwood City, California.
1989b "Artificial Life" in (Langton 1989).
1994 "Editor's Introduction: Special Issue on highlights of the Alife IV Conference" in Artificial Life Volume 1, Number 4, Summer 1994, The MIT Press.

Langton, Christopher G. et al.(eds.)
1992 Artificial Life II, Addison-Wesley Publishing Company, Redwood City.

Latour, Bruno
1987 Science in Action, Harvard University Press, Cambridge.
1988 "Mixing Humans and Nonhumans Together: The Sociology of a Door Closer" in Social Problems, Vol. 35, No 3 June 1988.
1993 We Have Never Been Modern, Harvester Wheatsheaf, New York.

Latour, Bruno & Steve Woolgar
1986 [1979] Laboratory Life, The Construction of Scientific Facts, Princeton University Press, Princeton.

Lévi-Strauss, Claude
1963 Totemism, Boston.
1966 The Savage Mind, Chicago.

Lewis, Ricki
1992 Life, Wm. C. Brown Publishers, Dubuque.

Lien, Marianne
1996 "Kunsten å skape et unikt produkt" in Samtiden, nr. 3, 1996, Aschehoug, Oslo.

Lindgren, Kristian
1992 "Evolutionary Phenomena in Simple Dynamics", in Langton et al. 1992.

Lübcke, Poul
1982 Vor tids filosofi: Engagement og forståelse, Politikens Forlag, Copenhagen.

Lyotard, Jean-Francois
1979 The Postmodern Condition: A Report on Knowledge, Manchester University Press, Manchester.

Maturana, Humberto, and Francisco Varela
1987 Kundskapens Træ, Den menneskelige erkendelses biologiske rødder, Ask, Århus.

McClelland J.L., D. E. Rumelhart, and G. E. Hinton
1986 "The Appeal of Parallel Distributed Processing" in McClelland et al.: Parallel Distributed Processing, Explorations in the Microstructure of Cognition, Bradford Book, Cambridge, Massachusetts.

Martin, Robert M.
1994 The Meaning of Language, The MIT press, Cambridge, Massachusetts.

Miller, Geoffrey. F.
1993 Evolution of the human brain through runaway sexual selection: The mind as a protean courtship device, Ph. D. thesis, Department of Psychology, Stanford University.
(in press) "Artificial life as theoretical biology: How to do real science with computer simulation." in Margaret Boden: Philosophy of Artificial Life, Oxford Readings in Philosophy. Oxford University Press.

Mitcham, Carl
1994 Thinking Through Technology, University of Chicago Press, Chicago.

Moravec, Hans
1989 "Human Culture: A Genetic Takeover Underway", in Langton (ed.) 1989.

Newell, Allen and Herbert A. Simon
1990 [1976] "Computer Science as Empirical Enquiry: Symbols and Search", in Boden (ed.) 1990.

Nisbet, Robert A.
1969 Social Change and History: Aspects of the Western Theory of Development, Oxford University Press, London.

Merleau-Ponty, Maurice
1962 Phenomenology of Perception, Routledge and Kegan Paul, London.

Moran, F. et al. (ed.)
1995 Advances in Artificial Life, Third European Conference on Artificial Life, Granada, Spain, June 4-6, 1995 Proceedings, Springer, Berlin.

Oppenheimer, Peter
1989 "The Artificial Menagerie", in (Langton 1989).

Pirsig, Robert M.
1974 Zen and the Art of Motorcycle Maintenance.

Popper, Karl
1981 Fornuft og rimelighet som tenkemåte, Dreyer, Oslo.

Radcliffe-Brown, A.R.
1968 Structure and Function in Primitive Society, London.

Randall, John Herman, Jr.
1976 [1926] The Making of the Modern Mind, Columbia University Press, New York.

Reynolds, Craig
1992 Boids Demo. Artificial Life II Video Proceedings. Christopher Langton ed. Addison-Wesley, Redwood City, California.

Risan, Lars
1996 Artificial Life: A Technoscience Leaving Modernity? An anthropology of subjects and objects Oslo: Hovedfagsoppgave, Institutt og Museum for Antropologi, Universitetet i Oslo.

Russell, B. and A. N. Whitehead
1910-13 Principia Mathematica, 3 vols., 2nd edition, Cambridge University Press, Cambridge.

Schaffer, J. David
1989 Proceedings of the third international conference on Genetic Algorithms, Morgan Kaufmann Publishers, Inc. San Mateo, California.

Searle, John R.
1990 "Mind, Brain and Programs", in Margaret Boden (ed.) 1990.

Shapin, Steven and Simon Schaffer
1985 Leviathan and the Air-Pump, Princeton University Press.

Sims, Karl
1994 "Evolving 3D Morphology and Behavior by Competition" in Artificial Life, Volume 1, Number 4, 353-372.

Sinding-Larsen, Henrik
1993 "Informasjonsteknologi, ansvar og jegets grenser", in Rasmussen og Søby (eds.); Kulturens digitale felt, Aventura, Oslo.

Stewart, Ian
1989 Does God Play Dice?, Penguin Books, London.

Stewart, John
1992 "Life=Cognition The epistemological and ontological significance of Artificial Life", in (Varela and Bourgine 1992).

Sørhaug, Tian
1996 "Hvordan ting blir sagt med ting", in Tian Sørhaug Fornuftens fantasier, Universitetsforlaget, Oslo.

Terzopoulos, Demetri, et al.
1994 "Artificial Fishes: Autonomous Locomotion, Perception, Behavior, and Learning in a Simulated Physical World" in Artificial Life, Volume 1, Number 4, 327-352.

Toffler, Alvin
1970 Future Shock.

Traweek, Sharon
1988 Beamtimes and Lifetimes, The World of High Energy Physicists, Harvard University Press, Cambridge.

Thuen, Trond
1982 "Meaning and Transaction in Sámi Ethnopolitics", Stensilserie A, Samfunnsfag nr. 41, Tromsø.

Turkle, Sherry
1991 "Romantic Reactions: Paradoxical Responses to the Computer Presence", in J. Sheehan and M. Sosna (eds.): The Boundaries of Humanity, University of California Press, Berkeley.
1996 Life on the Screen: Identity in the Age of the Internet, Simon & Schuster, New York.

Turner, Victor
1974 Dramas, Fields, and Metaphors, Cornell University Press, Ithaca.

van Gelder, T. J.
1992 What might cognition be if not computation? Technical Report 75, Indiana University, Cognitive Sciences.

Varela F. J. , Evan Thompson & Eleanor Rosch
1993 The embodied Mind. Cognitive science and human experience, The MIT Press, Cambridge, Massachusetts.

Varela, F. J. and Paul Bourgine
1992 Toward a Practice of Autonomous Systems, Proceedings of the first European Conference on Artificial Life, The MIT Press, Cambridge, Massachusetts.

Ventrella, Jeffrey
1994 "Exploration in the Emergence of Morphology and Locomotion Behavior in Animated Characters", in Brooks et al, 1994, Artificial Life IV, Draft proceedings of ALife IV, MIT Press.

von Neumann, John and Oskar Morgenstern
1944 Theory of Games and Economic Behavior, Princeton University Press, Princeton.

Williams, Raymond
1976 Keywords. A vocabulary of culture and society, Fontana Press, London.

Wiener, Norbert
1948 Cybernetics, John Wiley & Sons, inc., New York.

Zimmermann, Michael E.
1990 Heidegger's Confrontation with Modernity, Indiana University Press, Bloomington.


Notes

1. ALife-conferences have also been visited by chemists who attempt to synthesise life-like structures (such as cell-membranes) chemically.

2. What interested me most was the re-discovery of cybernetic ideas and practices, the holism and the systemic thinking within established and/or mainstream scientific institutions (as Cybernetics, with the advent of AI, had been expelled to quite marginal pockets of science).

3. "The sciences" constitutes what in Norway is called the natural sciences and engineering. "The arts" roughly covers the social sciences and the humanities.

4. The combination of the universal with the particular in anthropology has led anthropologist Tim Ingold (in Hylland Eriksen 1993) to define anthropology as "philosophy with the human beings in it", whereas Fredrik Barth (1989) has characterised anthropologists as researchers who study the dynamics in a glass of water in order to say something about Niagara Falls.

5. Using this double reference means that I have had to separate the contexts that refer to their written works from those that refer to more informal events and utterances.

An exception to this can be found in chapter 6, "A synthesising example", where I have had to omit the reference to the publication in question in order not to mix the real name with the pseudonym of one of the central persons I am writing about.

6. The word "machine" is in need of a clarification. In computer science the machine is an abstract notion. A specific computer program is a specific machine – a so-called "Turing machine", named after Alan Turing, who first abstracted the idea of the machine into a logical algorithm. In addition to specific machines (or programs) there are the general purpose machines, the computers. A Macintosh is a general purpose machine, whereas a Macintosh running the text editor Word 5.0 is a particular machine. In this thesis, "machine" refers to a specific machine, for example a computer running an ALife simulation, or a robot.

7. A branch within biology that studies the interaction between hormones and the nervous system.

8. SSK: "Sociology of Scientific Knowledge" – the term that Collins and Yearley use to label their own science studies.

9. Anthropologists trying to enter the trilingual world of some New Guinean people may question both whether alternating in science studies is more difficult than elsewhere in sociology and anthropology, and whether the average "John Does" that Collins and Yearley distinguish themselves from are generally unable to do this. At this point, however, merely notice Collins and Yearley's position.

10. Collins, Yearley, and the mathematician probably have a point about this realism. One has to treat the things that one relates to as "out there" when one relates to them. If not, the mathematician, for example, would stop doing mathematics (when attempting to do it), turn reflexive, and start worrying about how he did mathematics. He would thus not study mathematics anymore, but "sociology of math" or maybe, like Russell and Whitehead, the logic of math. (Russell and Whitehead 1910-13)

11. The scientists also had to negotiate with the fishermen, to get them to stop fishing, because the sea farm was to be populated by the remains of the natural population. After several years of problems with getting the scallops interested in hooking on to the collectors, the fishermen eventually lost their interest, and, one Christmas Eve, fished the experimental farm empty. (Callon 1986:220) "They preferred, in the famous aphorism of Lord Keynes, to satisfy their immediate desires rather than a hypothetical future reward." (op.cit.)

12. I remind the reader that in this thesis a particular simulation running on a computer is understood as a particular machine. (See note [6] on page 15.)

13. Human-computer interaction is however a field within computer science, studied, among other places, at COGS, but not by the ALifers.

14. In NLP research computers are programmed to understand "natural languages", like French or English, but unlike classical AI research the point is not to make an "intelligence", but to manipulate strings of language in useful ways. One of the big challenges within this field is to make programs that translate from one natural language to another.

15. The representations that Thomas refers to here are equivalent to the symbols in Newell and Simon's terminology.

16. Rodney Brooks at MIT, the most famous "insect builder" in the AI community, has recently challenged the "insect approach" by starting to make a humanoid, a human robot, called Cog. This project, when it became known at COGS, received a lot of attention and was fiercely criticised by ALifers and non-ALifers alike. AI has throughout the years been much criticised for making "wild" claims about what its machines will soon be able to do. At COGS during my fieldwork AI researchers (including the ALifers) related to this criticism by stressing the "modesty" of their claims. They did not want to make a human being; they merely simulated aspects of, say, a fly's behaviour. Brooks's Cog project, especially the very optimistic time schedule – consciousness was to be achieved by the end of 1997 (Brooks and Stein 1993:15) – seemed to be another example of the extravagant claims of AI, possibly making the whole discipline, people at COGS feared, look ridiculous.

17. http://robocop.modmath.cs.cmu.edu:8001/htbin/mjwgenformI 

18. First: An Artificial Life system that behaves like this may not only be a world of interacting animats; it may also be a system of "neurons" that make up a robot "brain", or it may be some other, more abstract simulation of an ecosystem. Second: The butterfly effect is often understood such that "a butterfly flapping its wings in Hong Kong may cause a hurricane over New York". This is not quite correct. The butterfly effect is not about complex systems in themselves; it is about such systems relative to some observer. The point is that the movement of a butterfly in Hong Kong that an observer has not predicted (even if that observer had predicted everything else) may cause a hurricane over New York that the observer has not predicted either.

19. As a comparison we might say that Ceausescu tried to re-design Romania "top down", whereas liberalists state that the social institutions of society should emerge "bottom up" as a result of the free choice of individual agents.

20. With the term "emergence" I have introduced the last of what at COGS was known as "the four E-words of ALife", a phrase often used a bit jokingly to characterise the field. The four words were embodiment, embeddedness, evolution, and emergence.

21. I have put this phrase in quotes for the following reason: In one Email message an Alifer suggested that Artificial Life was life "made by man rather than by nature". A (female) researcher replied: "man made. Think twice."

22. To remind the reader of something we saw in chapter 2: These three processes – biological, social, and cognitive – did not need to be separate. One could, in order to study cognition, study the social behaviour of animals.

23. Simulation of Adaptive Behaviour 1994, or SAB94. This conference was one in a series of conferences, SAB90, SAB92, SAB94, and so forth.

24. The lack of biologists at SAB94 may be due to several things. On one occasion some of the ALifers had invited a couple of biologists to have lunch and to discuss the latest ALife conference (The Artificial Life IV in Boston, a month before SAB94.) Both of the biologists were very critical of what they heard and saw (the conference proceedings from ALIFE IV). The first had some criticism that to me seemed informed (even if I am not in a position to say that it was fair). He claimed that he, by looking at fossil records 20 years ago, had made the same discoveries as Stuart Kauffman recently had made by using computers (Kauffman is a biologist and a famous ALifer.) "These guys just don't bother to go out and look [at Real Life], and that worries me."

The other biologist, however, seemed to me to be critical without being very well informed. He glanced briefly at the proceedings, hardly looking at them at all. It seemed to me that he felt his professional interests threatened by non-biologists who were starting to have opinions on topics that belonged to the biologists' academic realm of interest.

One of the organisers of SAB94 also pointed to a more logistical reason why there were so few biologists at SAB94. The organisers of SAB94 contacted people mainly by using E-mail. Biologists, however, did not generally have E-mail addresses in 1994, and, if they had, they did not use them. "We did not know where to find them," the ALifer said (knowing that he was talking to a social scientist), "we're about to become a class of Internet users."

25. This statement is my own construction, but statements to the same effect could be heard daily at COGS.

26. The "ultimate humanoid", I guess, must be something like the humanoids in the film Blade Runner (resembling Daryl Hannah and Harrison Ford, who played the humanoids?), with the exception that, unlike in the film, you would not be able to recognise their humanoid identity from their eye movements.

27. A couple of times I asked ALife researchers if there might not be some essential, perhaps undiscovered, aspects of life that were dependent upon carbon based chemistry, or water, or something else, and that this might be something that could not be realised, or even simulated in computers made of silicon, metals and plastic. One researcher denied such a possibility. According to him, the essential mechanisms of life had been discovered by molecular biology. These mechanisms had no such quality, they could be realised in other materials, and it was just a matter of time before Artificial Life would become Real Life. Another researcher had a more pragmatic answer: "Yes, that may be the case, it may also be the case that we have a soul, but we are working under the assumption that this is not the case."

28. There are a couple of things to note here. First, the somewhat idealised movement between the two domains that I have pictured here was not the goal of all ALifers. As we saw in chapter 3, some researchers were less concerned with ensuring that their results had a bearing upon controversies in biology (or psychology).

Second, in one sense it is not wrong to picture the Cog project, and AI in general, as a movement towards algebra, as the knowledge produced is of a formal character. But it is not necessarily a movement away from metaphor, as the formal knowledge produced by running and observing ALife/AI systems is related to human beings by similarity. Sometimes, and by some people, this similarity will be considered as a metaphor (or as a kind of simulation). At other times it may be seen as an identity.

29. It should be noted that people involved in PDP research (and not in ALife research) had also become aware of the difference between biological neurons and artificial ones. (The "neurons" in figure 7, for example, are called "nodes".) This, however, did not prevent ALifers from stressing the difference even further.

30. This is not because men observably used gendered personal pronouns more frequently than women, but simply because most ALife researchers are men.

31. Durkheim (1976) and Radcliffe-Brown (1968) start their discussions of totemism with a presentation of the phenomenon that primitives all over the world classify some people together with some natural items and symbols, and other people together with other natural items and symbols. This fact - that is, the fact that most other people do not classify all humans into one category (persons) and all non-persons into another category (things) - rather than our own Culture:Nature distinction, is the "strange" phenomenon to be explained. Lévi-Strauss has commented that the totemism debate is the result of Western researchers' projection of their own Culture:Nature distinction out into the world (Lévi-Strauss 1963:3).

32. A bar on the campus. IDS = Institute of Development Studies.

33. The quite abstract labs of computer science are parallel to the equally abstract computer mechanisms known as programs (see note [6] on page 15). Neither is greasy or noisy, wet or dry, dirty or sterile. (If a computer lab is not "sterile", then it is infected by a "computer virus".)

34. The biology department at University of Sussex.

35. The machine language of a computer consists of numeric commands conventionally written as two-digit hexadecimal numbers, using the digits 0 to 9 and the letters A to F (e.g. '34', 'A1', or '2B'). My first computer (in 1979) had to be programmed using such a code. To get this computer to perform the very basic function of displaying on the screen the letters typed on the keyboard involved a considerable amount of programming. (Evidently, I belong to the generation Gananath is talking about.)
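As a minimal illustration of this two-digit notation (and not the actual code of that 1979 machine), the following present-day Python fragment simply prints a few byte values in their hexadecimal form:

    # A minimal sketch: each byte value is conventionally written as two
    # hexadecimal digits, using 0-9 and A-F.
    for byte in (0x34, 0xA1, 0x2B):
        print(format(byte, '02X'))   # prints 34, A1 and 2B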

36. This development started with the Macintosh computer and its "desktop" interface. Sherry Turkle (1996) also discusses this development away from computers as mechanisms. She has called the Macintosh the first postmodern computer. It is "postmodern" in the same sense as the hypothetical, "postmodern" robot – whose brain is too complex to be analysed – that Gregory wanted to evolve. (See chapter 3.) In both these machines the actual mechanism is unintelligible to the analytical, "modernist" mind of the user (of the Macintosh) or the designer or "breeder" (in the case of the hypothetical, evolved robot).

37. If you have problems imagining the set-up, think of the car as some sort of circus artist (you could add to the difficulty by placing a plate on top of the pole).

38. A "mainframe" is a large, central computer in a network, used by many people for many purposes.

39. The more urgent jobs notably include the programs that interact with a user, for example text editors. When someone writing a document types a command (or merely a letter), he or she should not have to wait but should get an immediate response from the computer. The GA, once it is started, runs independently of the person who started it.
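The following is a minimal sketch of this kind of arrangement (assuming a Unix-like system and present-day Python, not the actual set-up at COGS): a long-running GA process lowers its own scheduling priority, so that interactive jobs are served first.

    import os
    import time

    # Lower this process's scheduling priority ("niceness") so that
    # interactive programs such as text editors are served first.
    # os.nice() is available on Unix-like systems.
    os.nice(19)   # 19 is the lowest priority on most systems

    def long_running_ga():
        # Stand-in for a genetic algorithm run that may take hours and
        # needs no quick responses from the computer.
        for generation in range(1000):
            time.sleep(0.01)

    long_running_ga()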

40. We may say that a system manager is a representative of the community of those who use a computer network. His or her job is not to use the computers for any particular purpose, but to make sure that the resource of computer access and use remains fairly available to the various users.

41. According to the Oxford Dictionary one of the meanings of "intuition" is related to scholastic philosophy and means "spiritual insight or perception; instantaneous spiritual communication".

42. A real example would require a large technical context which I have neither the space nor the competence to cover here.

43. Non-linear interaction means (among other things) that variables are each other's context. See note [18] on page 54.
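The following is a minimal numerical sketch of my own (a hypothetical example, not taken from any ALife system) of what it means for variables to be each other's context:

    # In a linear function the effect of changing x1 is the same
    # whatever the value of x2:
    def linear(x1, x2):
        return 2 * x1 + 3 * x2

    # In a non-linear function the effect of changing x1 depends on x2;
    # x2 forms the "context" in which x1 has its effect:
    def nonlinear(x1, x2):
        return 2 * x1 * x2

    # Increasing x1 by 1 always adds 2 to the linear function ...
    print(linear(1, 0) - linear(0, 0))        # prints 2
    print(linear(1, 5) - linear(0, 5))        # still 2

    # ... but what it adds to the non-linear function depends on x2.
    print(nonlinear(1, 0) - nonlinear(0, 0))  # prints 0
    print(nonlinear(1, 5) - nonlinear(0, 5))  # prints 10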

44. A series of international conferences is specifically devoted to the study of the Genetic Algorithm. The following is a brief example of what one may study at these conferences.

At the third international conference on GAs, one computer scientist presented a work in which he was, as the title of his paper has it, "varying the probability of mutation in the genetic algorithm" (T. C. Fogarty 1989). This was done within "an industrial optimalisation domain" (1989:104). He did not study industrial optimisation (how to improve the efficacy of an industrial process) per se. He studied systematically how the GA could be used to do this. Hence, he produced systematic knowledge about the GA, but this knowledge was restricted to the domain of industrial optimisation.
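To give a flavour of what varying the probability of mutation can mean in practice, the following is a minimal sketch of my own (not Fogarty's actual method): a bit-string GA in which the mutation probability decreases from generation to generation.

    import random

    POP_SIZE, GENOME_LEN, GENERATIONS = 20, 16, 50

    def fitness(genome):
        # Toy fitness measure: the number of 1-bits in the genome.
        return sum(genome)

    def mutation_rate(generation):
        # The varied mutation probability: start high, decay over time.
        return 0.1 * (0.95 ** generation)

    def mutate(genome, rate):
        # Flip each bit with the current mutation probability.
        return [1 - bit if random.random() < rate else bit for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for generation in range(GENERATIONS):
        rate = mutation_rate(generation)
        # Select the better half as parents; fill the next generation
        # with mutated copies of randomly chosen parents.
        parents = sorted(population, key=fitness, reverse=True)[:POP_SIZE // 2]
        population = [mutate(random.choice(parents), rate)
                      for _ in range(POP_SIZE)]

    print(max(fitness(g) for g in population))   # best fitness found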

45. Most of the Ph.D. students who made and studied ALife simulations as part of their degree started by making a smaller simulation, in order to learn some basic ALife techniques.

46. Norbert Wiener (1948) was one of the first to write about "black boxing". It has become a common term in engineering, and has also been adopted by sociologists of science (see for example Latour 1987:2).

47. Eevi Beck at the Department of Informatics, University of Oslo (a former Ph.D. student at COGS) gave me valuable corrections on this theme. She pointed out (and exemplified in person) that computer scientists working with Human-Computer Interaction (a branch of computer science) are highly aware of the problem of defining the boundary between the interface and what they speak of as the "functionality" of a program. They need to define this boundary, but stress that their definitions are always a matter of heuristics, not of separating objective subroutines or parts of programs.

48. By including the simulations (or artifacts in general) in "communitas", this concept becomes almost synonymous with Heidegger's Dasein (as far as I am able to understand Turner and Heidegger). This inclusion of artifacts and "objects" in communitas is contrary to Turner's definitions of the concept. In comparing his dichotomy of communitas versus (social) structure to the Buddhist concepts of prajñā and vijñāna he writes:

I would probably differ from [the Buddhist philosopher] Suzuki in some ways and find common ground with Durkheim and Znaniecki in seeking the source of both these concepts in human social experience, whereas Suzuki would probably locate them in the nature of things. For him communitas and structure would be particular manifestations of principles that can be found everywhere, like Yin and Yang for the Chinese. (Turner 1974:48)

When Turner restricts "principles that can be found everywhere" to something strictly human and social he is performing that purification of the Social and the Natural that I keep coming back to – and try to avoid – throughout this thesis.

49. When Rutherford made his modern model of the atom – the big core with electrons orbiting it – he used the solar system as a metaphor. Quantum physicists deliberately avoid such metaphors. They give their particles names such as "up-quark", "down-quark", "Priscilla", etc. These particles are defined solely by quantities and equations, and are given their fancy names to avoid all iconic similarity with familiar physical systems. This does not mean that physicists cannot be objectivists or in some other way think of quantum particles as, one way or another, being "out there", only that the thing out there is, if I have understood this right, a mathematical quantity. This has made some philosophers of science suggest that there are strong elements of Platonism – here, the idea that mathematical forms or "objects" are more real than anything else – in quantum physics (Helge Kragh, personal communication).

50. A game-theoretical experiment originally designed by von Neumann and Morgenstern (1944) to study the effects of interaction between maximising agents in economic theory.

51. Some abstract models may have more than three dimensions.

52. IJCAI '93: International Joint Conference on Artificial Intelligence, the largest conference series on AI.

53. Inside the piano, the keys of the clavier were connected to computer-controlled electric motors.

54. Artificial Life V was the fifth conference in the series of ALife conferences that Chris Langton started in Santa Fe. This conference was also organised by Langton, but it was held in Nara, Japan (May 16-18, 1996). As the submission deadline has passed, the "first call for papers" page has been removed from the net, but information on the conference can be found at http://www.hip.atr.co.jp/departments/Dept6/ALife5.html.

55. Sims, in his paper from the conference, adds the ® when he writes about the Connection Machine (Sims 1994).

56. Human bodies and brains are probably "less" understood mechanisms (when they are seen as mechanisms); computer programs are "more" understood mechanisms.

57. This took place not long after Brooks and Stein's published research proposal, Building Brains for Bodies (Brooks and Stein 1993), had arrived at COGS.

58. And he did not seem ironic when saying this, though I may have missed out on something here.