CS378 Assignment #1
24 Jan 94
Christian Mogensen
mogensen@cs.stanford.edu

Q: HOW WOULD YOU DESIGN AND BUILD AN INTELLIGENT MACHINE?

   Pris (Replicant): Must get lonely here, J.F.
   J.F. Sebastian:   Not really. I MAKE friends. They're toys.
                     My friends are toys. I make them. It's a hobby.
                     I'm a genetic designer.

   Rachel:  It seems you feel our work is not a benefit to the public.
   Deckard: Replicants are like any other machine: they are either a
            benefit or a hazard. If they're a benefit it's not my problem.
   Rachel:  May I ask you a personal question?
   Deckard: Sure.
   Rachel:  Have you ever retired a human by mistake?
   Deckard: No.
   Rachel:  But in your position, that is a risk.
            -- Bladerunner, 1982
               URL:http://kzsu.stanford.edu/uwi/br/off-world.html

   "When in doubt, deny all terms and definitions."
            -- Calvin and Hobbes

The question implies three things:

1. That machines can be intelligent.
2. That such a machine can be built.
3. That we would know how to build such a machine.

These, then, are the three keywords: design, machine and intelligent. All three terms are vague and loaded with meaning, and we cannot agree on a particular meaning without stepping into some fairly treacherous areas of philosophy. In the next three sections I will look more closely at each term.

INTELLIGENT:

What is non-intelligent? I think we can all agree that doorknobs are non-intelligent. Telephones are non-intelligent. But is a phone switch non-intelligent? It does route all those calls correctly. A computer is a fairly complex thing. Is it intelligent? Can it be considered intelligent if it passes the Turing test?

What do we mean by 'intelligent'? Do we mean the ability to do complex calculations? In that case this workstation is intelligent, and our problem is solved. On the other hand, if we mean the ability to 'create' or 'think', then we have either disqualified unimaginative accountants as non-intelligent, or we have pushed the question over to become 'what does think mean?'.
The Turing test is an imitation game. If the computer (or human) at the other end of a teletype can fool an interrogator into thinking it (or s/he) is human, then Turing claims there is no rational reason for denying that it is intelligent. The computer can be said to be intelligent if it imitates well enough. Early attempts at passing the test were limited successes within specific domains, but that approach has not proved generalizable. While limited Turing tests can be won, the general test (where conversation ranges from last night's Letterman to an analysis of Wittgenstein) does not seem likely to be defeated in the near future.

The Turing test boils down to a test of communicative skills and general knowledge. Ignoring for the moment the fact that people occasionally fail the Turing test, there are still problems with the test as a basis for a definition of intelligence. The test is an arbitrary dividing line. Is the machine 'intelligent', or does it merely 'understand' what communicating intelligently involves? 'Understanding' also becomes slippery. Many of these limited successes contained large catalogs of canned responses. Isn't there more to intelligence than that? Maybe I never understood this at all - but merely memorized a new set of patterns to parrot when asked.

Another consequence of this ambiguity is that machines cannot prove themselves intelligent. Imagine if your toaster oven piped up and said "I'm intelligent! How do you want your bread this morning then?" The toaster could quote Yeats, read Shakespeare, beg forgiveness for burning your toast yesterday - but it could not _prove_ its 'intelligence' to you unless you admitted it. It would be within your rights to claim that it was merely a very clever parrot.

   ...clearly the technology of any age provides an attractive window
   through which thinkers can view both their physical and metaphysical
   worlds. [...] A defining technology defines or redefines man's role
   in relation to nature.
   By promising (or threatening) to replace man, the computer is giving
   us a new definition of man: as an "information processor", and of
   nature: as "information to be processed".
            -- David Bolter, Turing's Man: Western Culture in the
               Computer Age, Introduction

Hence 'intelligence' is an ambiguous term whose meaning cannot be precisely defined, because defining it in turn defines what we consider to be human. And that would mean we had circumscribed ourselves: venture beyond this point and you are no longer intelligent - not human. Thus 'intelligence' is an arbitrary categorization, which we might as well ignore. 'Intelligence' is what we choose it to be.

MACHINE:

Machines are easy to find. The steam engine is obviously a machine. It's mechanical - has moving parts - blows hot air... There are well-known laws of physics that machines must obey. Of course, these laws also apply to us: humans are Carnot heat engines just as much as steam engines are.

Is a normal, unmodified human a machine? If we say it is a naturally occurring von Neumann machine (a self-replicating robot), then yes, humans are machines. The complementary question "what does alive mean?" turns out to be equally complex. If 'alive' is undefined, then calling a human a machine is not so strange. 'Aliveness' could be something attributed to complex systems. Questions about 'awareness' run straight into the hermeneutic circle: we can't interpret the system from outside, as we are inside and part of the system.

Of course, we have only vague ideas on how to build a human from scratch, but we know quite a bit about how to build one in nine months using two existing compatible forms of humans. Given that we can now replace most of the body's components, except for the brain, the body has certainly become a machine - albeit a very sophisticated one, with very complex internal systems capable of self-repair. The brain: is it a machine? Is there something that separates it from a gooey chemical blob?
Machines do not possess souls (by definition). Do brains? Not in my opinion. The brain is a machine, but one which is very complex and difficult to study without disturbing its function. Discovering how the kilo and a half of brain matter works is an immense challenge. Science has struggled with it for hundreds of years now, and still we have large gaps in our understanding. There are billions of components in our skulls, interacting and affecting each other. The mess of combinations defies simple analysis.

Through the phonograph story, Hofstadter gives us the idea that any sufficiently strong system is by definition vulnerable and incomplete.

   "You see, Professor Turkle, what Achilles and the Tortoise and the
    Crab are saying is that if we are machines, we are human."
            -- S. Turkle, The Second Self: Computers and the Human
               Spirit (p304), 1984

So, given enough time, we may finally know enough to understand the brain. We could repair it; we could recreate it (perhaps). We might even be able to build one from base chemicals.

DESIGN:

Well, this is where it gets hairy: we don't know how a human is designed. Evolution has pushed us this way and that, adding a thumb here, discarding a tail there. So a human's design is fairly suboptimal; of course we could improve upon it. We know how it evolved (roughly) and we know how a baby gestates and matures physically (roughly). But we can't build one from scratch.

We understand quite a bit about how nature goes about building a human. Cells divide, redivide and shift, triggered by RNA messages coded from DNA helices unwound and untied by enzyme scissors. The instructions are there in the DNA; soon we will decode them, and much later we will understand what they mean.

Hofstadter describes Godel's incompleteness theorem and its implication for us: any sufficiently powerful system contains questions that cannot be answered within the system. The extension of this is that the brain cannot understand itself.
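Hofstadter's point can be stated a little more formally. The following is my own rough LaTeX gloss of Godel's first incompleteness theorem (the notation is mine, not the essay's): for any theory strong enough to do arithmetic, some true sentence is undecidable within it.

```latex
% Godel's first incompleteness theorem (informal sketch).
% T is any consistent, effectively axiomatized (r.e.) theory
% containing elementary arithmetic; G_T is its Godel sentence.
\text{consistent}(T)\ \land\ \text{r.e.}(T)\ \land\ T \supseteq \text{arithmetic}
\;\Longrightarrow\;
\exists\, G_T :\; T \nvdash G_T \;\land\; T \nvdash \lnot G_T
```

The essay's extension - that a brain, being such a system, contains questions about itself it cannot settle - is an analogy rather than a direct corollary of the theorem.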
My take on this is that Godel applies to us just as much as to everything else, but that this isn't a case where it applies. The fact that we can breed more humans does not mean we have a design for them. The design is there in the DNA; we do not understand how to read it yet, but we are beginning to make sense of bits of it. If nothing else, the problems involved lead to an appreciation of how complex the design of a human really is.

An easier approach might be to combine organic and inorganic machines: create cybernetic machines bred according to a specific design. Combining the advantages of computers (accuracy, logic) with the advantages of organic systems (associativity, fuzziness) might allow us to sidestep the problems of building an 'intelligent' system entirely within one domain.

CONCLUSION

If we constrain the terms used in the question and make it specifically answerable (since the general answer is currently unknown) - permitting 'machines' to include humans, or broadening 'human' to include sophisticated machines - then the answer becomes simpler. Hence I would build an intelligent machine by attempting to build a human. The easiest way to complete such a project involves a male and a female, and roughly nine months of waiting around.

Building a machine (in the traditional sense of 'constructed deterministically') in this way seems strange now, but bio-mechanics and nano-technology may change how a lot of things are constructed. However, the process of 'breeding' a new machine is probably not possible to speed up. Childbearing will still take nine months, regardless of how many pressure-cookers are assigned to the project.

Intelligence is what intelligence does. People are what people do. 'Intelligent machine' is an oxymoron. Or maybe 'intelligent human' is. When we choose to call something intelligent, we no longer consider it a machine. I wonder if the machine would think the same of us?

=======================================================