CS378 Assignment #2
26 Jan 94
Christian Mogensen
mogensen@cs.stanford.edu

ANALYSIS OF SUBMITTED PAPERS

Raymond Tan

The author jumps right in, defining intelligence in terms of the Turing test. Despite the misgivings he cites, he clings to Feigenbaum's notion that a specialized system, given enough additional knowledge, will exhibit general intelligence. The subsequent argument rests on this assertion being valid. I cannot agree with this: a chess-playing program does not contain the structures necessary to develop general intelligence - it has no way of representing "it rained today" in its theory of game playing.

Consider a toaster. You can add all sorts of geegaws to a toaster, but you cannot turn it into a house, or even a kitchen, or even a stove. Maybe a small, ineffectual stovetop or a toaster oven, but nothing good would come of it. The other core problem here is language: how do you express "it is raining" in terms of chess moves? The machine must become proficient at language if it is to qualify for the Turing test, but how this is to happen is left vague.

The author writes "I would first give the machine a knowledge of the rules of the game..." How should this be done? How does one encode knowledge, and how is this knowledge different from information? On another level, this sentence reveals the conduit metaphor hard at work, channeling packets of knowledge to the machine. Later we get "after the machine has been trained to a reasonable level of proficiency, a large body of knowledge is then supplied to the machine's program so as to ... increase its 'intelligence'." Now, does this strike anyone as a huge handwave? Stuffing the encyclopedia down the toaster does not make the toaster smart.

We then have Schank's ideas on language. The example used is "Burt gave Joe a black eye for calling him a name." This is supposed to be encoded as BURT PROPEL FIST -> EYE OF JOE. Ignoring the fact that fists were never mentioned, I would like to point out that this encoding is bogus. Burt actually had an eyeball he had painted black, which he gave to Joe. The point is that language is ambiguous, and explicit encodings in semantic nets quickly collapse as one hangs more and more information on them. (The short sketch at the end of this review spells out the two readings.) Similarly, the 16 classes of inference about sentences do not work in all cases. How the machine is supposed to repair misunderstandings is not discussed.

We end up constructing overburdened chess machines. A chess machine with some encoding of "rain" is still a chess machine. This paper confirms my prejudices about AI rather than calling them into question. To put it another way, the arguments are not strong enough to convince me I am wrong.
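To make the ambiguity point concrete, here is a minimal sketch in Python. The record layout is my own invention for illustration (the primitive names PROPEL, ATRANS, and MTRANS are borrowed from Schank's conceptual dependency work, but nothing here claims to be his actual notation); the only point is that the same surface string maps onto two incompatible structures.

    # Two conceptual-dependency-style readings of one sentence.
    # The dict layout is an ad-hoc stand-in, not Schank's notation.

    SENTENCE = "Burt gave Joe a black eye for calling him a name."

    # Reading 1: the intended one -- Burt punched Joe.
    reading_punch = {
        "act":    "PROPEL",
        "actor":  "Burt",
        "object": "fist",                      # never mentioned in the sentence
        "to":     ("eye", "of", "Joe"),
        "reason": ("Joe", "MTRANS", "insult", "to", "Burt"),
    }

    # Reading 2: the absurd but grammatical one -- Burt handed over
    # an eyeball he had painted black.
    reading_transfer = {
        "act":    "ATRANS",                    # transfer of possession
        "actor":  "Burt",
        "object": ("eyeball", "painted black"),
        "to":     "Joe",
        "reason": ("Joe", "MTRANS", "insult", "to", "Burt"),
    }

    # Nothing in the surface string selects one record over the other;
    # the disambiguation lives in world knowledge the net does not hold.
    for label, reading in (("punch", reading_punch), ("gift", reading_transfer)):
        print(label, "->", reading["act"], reading["object"])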
=======================

Designing the Future
Jennifer Hughes

The author starts well by nailing the Western focus on duality and binary belief systems. We must change our beliefs if we want to create an intelligent machine, because the current focus on duality is hindering our ability to analyze the problem. Best quote: "The computer cannot learn Hindu simply through contact with it." We are given the tools to think anew and build an extension of ourselves. The author talks of "something organic, an interceder between psyche and world". The paper made me go back and reanalyze my own beliefs in light of this focus on duality, much as 'Zen and the Art of Motorcycle Maintenance' did when I read it.

=======================

Nat Johnson

The paper makes the point that intelligence is a sensory experience as well as a mental one. An intelligent machine requires rich sensory data to avoid insanity. Intelligence requires good filtering of the stream of data: just as the human mind throws away most perceived data within a second, so the computer must find out what to focus on in a data-torrent. The dichotomy between the world and logic is pointed out. The author's suspicion that common sense is logical is (ironically) incorrect. According to some experiments, common sense misleads us in logical problems such as conditional probability or lotteries (a short worked example appears at the end of this write-up).

The paper gives us an idea of the problem areas involved in building an intelligent machine. We learn some of the requirements for intelligence and discuss a possible method for dealing with such a huge project. The basic impression is that this is a really hard problem. There is little in this paper that questions my assumptions or ideas. I agree with most of it.

=======================

Trace Wax
Merging Man and Machine

The discussion is nicely framed as a dialog. Unfortunately the paper does not take its ideas far enough (IMHO). As Jennifer Hughes points out, man and machine complement each other and form an intelligent system. However, where Hughes moves the boundary between us and the machine to encompass the machine, the author retains the self as a distinct entity. The machine is a separate thing.

The paper implies a Turing definition of intelligence: if it does what you expect, then it is intelligent. This is a slippery definition and one I disagree with. The fact that the author then goes on to say that we must adapt our language to the machine's expectations speaks against its intelligence. If the machine isn't smart enough to understand us, how can it be intelligent? The phrase 'humanlike character' implies a lot of work on the machine's part, a lot of understanding of the human's context. In addition, the machine is supposed to entertain us. Hopefully it will be smart enough to understand when we don't want to be entertained. This vagueness is not encouraging.

I wonder if computers that understand humans more and more will come to have communication problems amongst themselves. As there is more and more context to be shared, there is less and less time for actual communication. I am not sure what one could build with this set of instructions. I suspect something like a more complex version of Microsoft BOB would be the result. The one thing that stands out is the emphasis on a user model. The fact that the user model must adapt to changing circumstances is vital.

I think it is hard to question my essay because it is mostly questions with few answers. I liked the style of this paper, and wonder if I could redo the questions in my essay as a dialog. I do not place enough emphasis on shared understanding and mental models in my own paper.

=======================
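An end-note on the conditional-probability point in the Nat Johnson review: the numbers below are my own invented illustration, not taken from any of the papers or the experiments they cite, of how common sense goes wrong on a base-rate problem.

    # Base-rate sketch: a test that is 99% accurate both ways, for a
    # condition that 1 person in 1000 actually has.  Common sense says a
    # positive result is near-certain evidence; Bayes' rule disagrees.

    prevalence  = 0.001   # P(condition)
    sensitivity = 0.99    # P(positive | condition)
    false_alarm = 0.01    # P(positive | no condition)

    p_positive = sensitivity * prevalence + false_alarm * (1 - prevalence)
    p_condition_given_positive = sensitivity * prevalence / p_positive

    print(round(p_condition_given_positive, 3))   # ~0.09: only about 9%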