Dr. Richard Wallace, part 2

This is part 2 of the interview with Dr. Richard Wallace. You might want to start with part 1.

4) Intelligence modeling vs. Intelligence imitating
by Jadsky

What do you think of people who attempt to build up a consciousness of intelligence from a top-down approach? It seems that your approach is more bottom-up, in other words, let's keep asking it questions, and when the responses diverge significantly from expected, we'll add new clarifiers.

This seems to me a little like growing ivy up a wall and putting stakes in it every time it strays from the path you intend. It works, but it requires event-to-event correction for a long time before it becomes stable.

Do you think that real artificial intelligence will come from this process, starting with a running dummy and stub methods, or from careful design and planning, so that in the end we can flip the switch and have a working prototype? Is A.L.I.C.E. a reflection of your beliefs or just an experiment?

Dr. Wallace:

The story of Joseph Weizenbaum is in many ways almost as interesting as that of Alan Turing, although not so tragic. One almost pictures Weizenbaum as a latter-day Luddite, smashing artificial intelligence the way British tradesmen smashed mechanized looms in reaction to the Industrial Revolution. But unlike the Luddites, Weizenbaum himself was the inventor of the machine. It would be more akin to the Wright Brothers destroying their prototype airplane after its first flight, saying "this technology is too dangerous. We must put the genie back in the bottle." A typical conversation with ELIZA is the following:

C: Well, my boyfriend made me come here.
R: Your boyfriend made you come here.
C: I am unhappy.
R: Do you think coming here will help you not to be unhappy?
C: I need some help, that much seems certain.
R: What would it mean to you if you got some help?
C: Perhaps I could learn to get along with my mother.
R: Tell me more about your family.
C: You are like my father in some ways.
R: What resemblance do you see?
C: My father is afraid of everybody.
R: What else comes to mind when you think of your father.
C: Bullies.
R: Does that have anything to do with the fact that your boyfriend made you come here?

Note: In all the sample dialogues that follow, "R:" stands for "Robot" and "C:" stands for "Client."

One of ELIZA's fundamental "tricks" is the pronoun reversal exhibited in the first exchange. Mapping "my" to "your" and "me" to "you," the robot echoes the client input by exchanging first and second personal pronouns. Other replies are generic defaults, giving the illusion of some understanding. If the client says anything with the prefix "I need some X," the robot replies with "What would it mean to you if you got some X?" It doesn't matter whether X is help, money, food, water, love or time. The same answer will cover almost all the likely inputs. Still other ELIZA replies are based on simple keyword recognition, as in the exchange about the client's mother, when the robot says, "Tell me more about your family." The appearance of the keyword "mother" anywhere in the input may have triggered this response.
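
Rendered in AIML (which postdates ELIZA by three decades), the same tricks might look something like the sketch below. The <person/> shorthand applies the standard pronoun-swapping substitutions to the wildcard match; note that the second pattern catches the keyword only mid-sentence, so a full implementation would need companion patterns for the other word positions.

<!-- "I need some X" trick: one template covers help, money, food, love... -->
<category>
  <pattern>I NEED SOME *</pattern>
  <template>What would it mean to you if you got some <person/>?</template>
</category>

<!-- Keyword recognition: "mother" anywhere mid-sentence triggers the family reply -->
<category>
  <pattern>* MOTHER *</pattern>
  <template>Tell me more about your family.</template>
</category>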

ELIZA has a limited memory of the conversation state as well. When confronted with the unrecognized input "Bullies," she responds by raising the previously stored topic. As unlikely as it sounds today, Weizenbaum pulled the plug on ELIZA (Weizenbaum 1976). He was horrified that anyone would actually believe this simple program said anything about intelligence, let alone had any. Weizenbaum tells us that he was shocked by the experience of releasing ELIZA, also known as "Doctor," for use by nontechnical staff at MIT. Secretaries and nontechnical staff thought the machine was a "real" therapist, and spent hours revealing their personal problems to the program. When Weizenbaum informed a secretary that, of course, he had access to the logs of all the conversations, she reacted with outrage at this invasion of privacy. Weizenbaum was shocked that such a simple program could deceive a naive client into revealing personal information.

What Weizenbaum found especially revolting was that the Doctor's patients believed the robot really understood their problems. Even some psychiatrists seriously believed the robot therapist could help patients in a constructive way. Weizenbaum's reaction might best be compared to a Western physician's disapproval of herbal medicine, or an astronomer's disdain for astrology.

The back cover of the paper edition of Weizenbaum's Computer Power and Human Reason (Weizenbaum, 1976) gives us a feeling for the general attitude toward the book at the time of its release:

"Dare I say it? This is the best book I have read on the impact of computers on society, and on technology, and man's image of himself."

-Keith Oakley, Psychology Today
"A thoughtful blend of insight, experience, anecdote, and passion that will stand for a long time as the definitive integration of technological and human thought."
-American Mathematical Monthly
"Superb ... the work of a man who is struggling with the utmost seriousness to save our humanity from the reductionist onslaught of one of the most prestigious, and richly funded technologies of our time."
-Theodore Piszak, The Nation.
Computer Power and Human Reason seems a bit quaint today, much the same as Turing's 1950 paper does. For one thing, Weizenbaum perceived his mission as partly to educate an uninformed public about computers. Presumably the uninformed public was confusing science fiction with reality. Most of the book is devoted to explaining how a computer works, perhaps a necessary primer in 1976, but today more appropriate in a book called something like "Computers for Dummies." Two chapters of Computer Power and Human Reason are devoted to a humanist attack on artificial intelligence, on ELIZA specifically, and on computer science research in general. Most contemporary scholars did not need much convincing that ELIZA was at best a gimmick, at worst a hoax, and in any case not a "serious" artificial intelligence project. Yet ELIZA is perhaps the most widely distributed program in the history of artificial intelligence. The irony of Joseph Weizenbaum is that by failing to promote his own technology, indeed by encouraging his own critics, he successfully blocked much further investigation into what would prove to be one of the most persistently interesting demonstrations to emerge from the MIT AI Lab.

5) Re:Intelligence modeling vs. Intelligence imitating
by gmarceau

The AI community seems to have focused on the big prize: trying to get right out to human-like intelligence through one-trick ponies, like the over-publicized neural networks. Whatever happened to the low-hanging apples?

Here is the first thing my Ph.D. adviser taught me: if you cannot solve your problem, find a partial formulation, a simpler intermediate step, and try to solve that instead. If you still cannot, break it down some more and repeat until you can.

Among the promising bottom-up approaches, I have noticed Bayesian decision networks, common-sense databases, and perhaps the whole field of natural language processing. What are, in your view, the leading attempts at breaking the Hard AI problem into components?

Dr. Wallace:

Client: Do you think President Clinton should be impeached?
Robot: It depends what you mean by "thinking."

Politicians, at least those in our society, never seem to give a straight answer to a question. If a journalist asks a specific question, the politician answers with a "sound bite" or short, memorized speech which is related to, but does not necessarily answer, the reporter's question. If asked about schools, for example, the politician may reply with a memorized statement of education policy. It is as though the response were merely triggered by the keyword "school" in the question. No creative or spontaneous thinking is required. One aspect of the art of writing AIML is composing brief, concise, interesting, grammatically correct and sometimes humorous default responses, which work for a wide variety of inputs matching a single pattern. President Clinton inadvertently revealed this type of automatic reply when he uttered the famous quote, "It depends on what the meaning of 'is' is." This could be a default response to any question beginning with the word "is." Like the politician's replies, the default responses should indicate an understanding of the question, but not offend anyone.
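
As a sketch (not necessarily the category as it appears in the actual A.L.I.C.E. brain), the Clinton default fits in a single AIML category; the wildcard absorbs everything after the keyword, so one template covers every question beginning with "is":

<!-- Default response for any input starting with "is" -->
<category>
  <pattern>IS *</pattern>
  <template>It depends on what the meaning of "is" is.</template>
</category>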

There is a democratic aspect to A.L.I.C.E. Born out of frustration with public apathy toward the existing attempts at artificial intelligence, A.L.I.C.E. is designed to answer the question: what do people expect an A.I. ought to be able to say? It turns out that there is a Zipf-like distribution of the queries people make to the robot. The appearance of the Zipf distribution leads to a natural strategy for working on the problem: write replies for the most common queries first. Despite the obvious simplicity of this approach, largely borrowed from the ELIZA psychiatrist program, the prevailing prejudice for many years, at least among the academic elite, derided ELIZA as a "toy," too simple to form the basis of a practical natural language understanding system. One school of thought advocated an approach based on limited domains of discourse, in which practical systems might converse about one subject like moon rocks, blocks, or Star Trek. This school would have us lower our expectations for general natural language understanding, however. Another school favors development of large-scale ontologies of "commonsense knowledge," thought to be necessary for understanding everyday language. Still a third school advocates building something like a general learning machine, which can be taught language like a child. It goes without saying that no other school has produced any machine capable of anything close to the performance of A.L.I.C.E. in Turing's Imitation Game.

Social situations prove awkward for many of us. The author, challenged by an almost total apathy toward organized sports of any kind, navigated the minefield of adolescence in America by making use of a technique he called "sports talk." When cornered with the question, "Did you see the game last night?," the author would always reply, "No, who won?" Many sports questions, such as "Who do you want to win the Super Bowl?," can be deflected with dry humor like, "The team that scores the most points." Questions about a specific sports player, like "What do you think of Joe Montana this season?," can be handled believably with imaginary biographical data, as in, "I heard his brother was a boxer." By remembering the previous utterance, the technique may even be extended to more involved dialogues:

C: You should have seen the game last night.
R: What game?
C: Basketball.
R: Sorry I missed it.

Conversations with strangers may be initiated with appropriately selected "pickup lines."

Early in the history of A.L.I.C.E. the bot also initiated conversations by randomly selecting one of these lines. It was then possible to measure the average dialogue length of conversations initiated with each line. Although the original data was sadly lost, the experiment may be easily repeated with the latest generation of AIML software. In our brief test, the pickup line with the longest average dialogue length was, "What is your favorite movie?" Among about 40 pickup lines tested, the lowest ranked was, "What do you think caused World War I?" Another poor choice was, "Are you a man or a woman?," which is ironic because this may be the single most often used pickup line in human online chat ("r u m/f?").
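
Repeating the experiment takes only a single session-opening category. The sketch below assumes the convention, used by some AIML bots but not part of the AIML standard itself, of a special CONNECT input that fires when a client first attaches; the bot then draws a random pickup line:

<!-- Fired once when a new client connects (CONNECT convention assumed) -->
<category>
  <pattern>CONNECT</pattern>
  <template>
    <random>
      <li>What is your favorite movie?</li>
      <li>Are you a man or a woman?</li>
      <li>What do you think caused World War I?</li>
    </random>
  </template>
</category>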

Experience with A.L.I.C.E. indicates that most casual conversation is "stateless," that is, each reply depends only on the current query, without any knowledge of the history of the conversation required to formulate the reply. Indeed, in human conversation it often seems that we have the reply "on the tip of the tongue" even before the interlocutor has completed his query. Occasionally, following the dialogue requires a conversational memory of one more level, implemented in AIML with <that>. When the robot asks a question, it must remember the question long enough to combine it with the answer. These same remarks are not necessarily true in situations requiring highly structured dialogue, such as courtrooms or classrooms. But in the informal party situation, human conversation does not appear to go beyond simple stimulus-response, at least not very often.
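
The basketball dialogue above shows that one extra level of memory at work. In AIML it takes two categories along these lines (a sketch); the <that> element matches against the robot's previous utterance, so "Basketball" is understood only as an answer to "What game?":

<category>
  <pattern>YOU SHOULD HAVE SEEN THE GAME LAST NIGHT</pattern>
  <template>What game?</template>
</category>

<!-- Matches "Basketball" only when the robot just asked "What game?" -->
<category>
  <pattern>BASKETBALL</pattern>
  <that>WHAT GAME</that>
  <template>Sorry I missed it.</template>
</category>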

How often do we find ourselves half-listening, half-thinking about something else, in the midst of a conversation? Offering the occasional nod, "uh-huh," or other gesture indicating understanding, we maintain the illusion of understanding and "being human." Questions are often repeated in informal human chat as well.

C: Where did you go to college?
R: I told you before, it was Cambridge.
C: Oh, that's right. I'm sorry.

With her unstructured approach to conversations, A.L.I.C.E. is also capable of the kind of passive-aggressive data collection characteristic of human conversations. A totally passive data collection device is like a web guestbook, where there are no constraints placed on the data collected. The client may write anything in a guestbook. An example of an aggressive data collection device is a nitpicky form, which may not even be submitted until every field is filled.

Humans and A.L.I.C.E. can collect a lot of personal information through the use of leading questions in the conversation, such as "How old are you?" or "Are you a student?"

We call this type of data collection "passive-aggressive" because it combines elements of the passive guestbook with those of the aggressive form. Provided that the bot chats with enough clients, the passive-aggressive method can collect a statistically significant amount of client data. Using this type of data collection we have been able to ascertain, for example, that about half the clients of A.L.I.C.E. are under 18.
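
In AIML the leading question and the capture of the reply fit naturally together. A sketch, with the predicate name "age" chosen purely for illustration: the first category plants the question, and the second uses <that> to recognize the answer and <set> to store it.

<category>
  <pattern>HOW OLD ARE YOU</pattern>
  <template>I am still young. How old are you?</template>
</category>

<!-- Stores the client's answer in the "age" predicate while echoing it -->
<category>
  <pattern>I AM *</pattern>
  <that>HOW OLD ARE YOU</that>
  <template><set name="age"><star/></set> is a good age to be.</template>
</category>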

Every experienced professor knows that there is a Zipf distribution of questions asked by students in class. The single most common question is universally, "Will this be on the test?" The lecturer's job is like that of a FAQ bot or politician, to memorize the answers to all of the most commonly asked questions, and even to match an ambiguous question with one he already knows the answer to. In the rare event that the student confronts the teacher with a question he cannot answer, the professor supplies a default response indicating that he understood the question and may provide an answer at a later time. One good default response like that is, "That is not my area of expertise."
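
In AIML, the professor's all-purpose escape corresponds to the ultimate default category, the pattern that matches when nothing else does:

<!-- Ultimate default: matches any input with no more specific category -->
<category>
  <pattern>*</pattern>
  <template>That is not my area of expertise.</template>
</category>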

A general downturn in artificial intelligence and robotics roughly coincided with the end of the Cold War, as governments and corporations reduced the amount of funding available for this technology. The "richly funded" field of 1976 became more like a Darwinian struggle for diminishing resources. One positive outcome was the brief heyday of "robot minimalism," a design philosophy based on low-cost parts, commodity computers, low-bandwidth sensing, and general simplicity in design and engineering. It was a moment when Occam's razor could cut away much of the needless complexity that had accumulated over the previous decades. Although robot minimalism subsequently fell out of favor, it became a significant influence on the development of A.L.I.C.E.

6) Using evolution in A.L.I.C.E.
by greg_barton

Have you considered using an evolutionary technique such as genetic programming to test the fitness of AIML rules? Have you tried generating new rules from combinations of old rules via some crossover/mutation mechanism?

Dr. Wallace:

I always say, engineers should beware of questions that begin, "Have you ever tried..." or "Why don't you just...". A good idea is one that has a high ratio of conception time to implementation time. A quick idea that takes a long time to implement is never a pleasant experience for the hapless engineer.

We used to say there was no theory behind A.L.I.C.E., no neural networks, no knowledge representation, no deep search, no genetic algorithms and no parsing. Then we discovered that there was a theory circulating in applied A.I., called Case-Based Reasoning (CBR) that closely resembled the stimulus-response structure of A.L.I.C.E. The CBR cases correspond to the AIML categories.
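
The correspondence is easy to see in the smallest possible example: each AIML category is a case, with the pattern as the stored problem and the template as its solution (a sketch).

<!-- One case: the stimulus (pattern) and its stored response (template) -->
<category>
  <pattern>WHAT IS YOUR NAME</pattern>
  <template>My name is A.L.I.C.E.</template>
</category>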

A.L.I.C.E. was not the original name of A.L.I.C.E. The first prototype was called PNAMBIC, in tribute to the hoaxes, deceptions and tricks that have littered the history of artificial intelligence.

"PNAMBIC-(acronym) Pay No Attention to that Man Behind the Curtain [from The Wizard of Oz]. Denoting any supposedly fully automated system that in fact requires human intervention to achieve the desired result."-New Hacker's Dictionary

But the machine hosting PNAMBIC was already named Alice by a forgotten systems administrator, so people began to call her "Alice." At that point, we invented the "retronym": Artificial Linguistic Internet Computer Entity. Yet A.L.I.C.E. is possibly the first A.I. technology to embrace this tradition of deception openly. The tradition goes back to Baron von Kempelen and his 18th-century "Chess Playing Automaton." Also known as the "Strange Turk," this device appeared to play decent games of chess against any human challenger. Kempelen utilized a standard magician's trick, opening first one cabinet door and then closing it, and opening another one, to reveal the "mechanism" inside. According to one legend, the empress of Russia ordered the machine shot, killing the hapless, vertically challenged Polish operator hidden inside.

A book of fiction and poetry, supposedly written by an A.I. named RACTER, caused a minor sensation upon its release in 1984. Later proved to be a hoax (Barger 1993), the book, "The Policeman's Beard is Half Constructed" by William Chamberlain (Chamberlain 1978), nevertheless speaks to the public's willingness to suspend its disbelief about artificial intelligence. Who can blame them? Hollywood, more than anyone, has done the most to raise public expectations for A.I. and robots. The following example illustrates the flavor of the stories told by RACTER.

"Bill sings to Sarah, Sarah sings to Bill. Perhaps they will do other dangerous things together. They may eat lamb or stroke each other. They may chant of their difficulties and their happiness. They have love but they also have typewriters. That is interesting." RACTER was a PNAMBIC because obtaining these results required considerable human intervention. At the very least, a human editor reviewed many random examples, looking for sensible ones like the story above.

The following story is a much more recent example of A.I. storytelling, generated by A.L.I.C.E.: "The war began as an uneasy alliance between the Dolphins and the Sharks. The Sharks had attacked the Dolphins before. But this time they had a common enemy in the Killer Whales. In the first battles, the Dolphins' intelligence efforts and the Sharks' numerical superiority proved decisive. Unable to call upon their traditional ally the Sperm Whales, the Killer Whales even appealed to All the Other Whales for mediation. But large numbers of casualties on both sides proved politically costly to the leaders of the Killer Whales, and their successors sued for peace."

In a related development, Ernest Lergon recently adapted the famous DaDa Engine (http://dev.null.org/dadaengine/) to AIML so that A.L.I.C.E. can write random surrealist manifestos like:

Client: Do you have a manifesto?
Robot:
The Neo-Cyber-Positivist Manifesto:
1. Language is dead.
2. We are all automata who think they're self-aware.
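
The DaDa Engine works by expanding a recursive grammar with random choices; in AIML the analogous effect comes from nesting <random> elements inside a template. A rough sketch of how such a manifesto category might be put together (the particular word lists here are invented for illustration, not taken from Lergon's adaptation):

<category>
  <pattern>DO YOU HAVE A MANIFESTO</pattern>
  <template>
    <!-- Each <random> picks one <li>, so the manifesto varies per query -->
    The Neo-<random><li>Cyber</li><li>Retro</li><li>Crypto</li></random>-Positivist Manifesto:
    1. <random><li>Language</li><li>Art</li><li>Meaning</li></random> is dead.
    2. We are all automata who think they're self-aware.
  </template>
</category>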

According to one A.I. urban legend, apparently not documented elsewhere, a famous natural language researcher was embarrassed around the same time, when it became apparent to his audience of Texas bankers that the robot was consistently responding to the next question he was about to ask. He was demonstrating a PNAMBIC: a natural language understanding demo that was in reality nothing but a simple script.

The very existence of PNAMBIC as a meme suggests a widespread understanding of how deception might play a role in automated systems. In the rush to complete work and produce demos before bureaucratic deadlines, it is tempting to cut corners. Such deceptions may even be rationalized as inessential to the experimental outcome.

The PNAMBIC meme raises the question: just how much of the published research in the history of artificial intelligence ought to be regarded as a swindle? In certain academic circles, playing a political charade has replaced actual scientific research as a career objective. The games people play to secure funding, to get published in academic journals, and to win promotion in the academic world, together with the "old boys' network" and the predominance of political correctness, make much of the body of today's publicly funded research highly suspect.

It was against this backdrop that the first real world Turing Test, the Loebner Contest, was held in Boston in 1991. None of the competing programs came close to the performance of the human confederates, but the one ranked highest was based on the simple ELIZA psychiatrist program. The same programmer in fact won the bronze medal in each of the first four annual contests.

7) Brute force AI?
by Lumpish Scholar

What do you think of efforts to "create" AI by collecting huge amounts of information, such as the Mindpixel and Cyc projects?

Dr. Wallace:

Bot personalities are created and shaped through a cyclical process of supervised learning called Targeting. Targeting is a cycle incorporating client, bot, and botmaster, wherein client inputs that find no complete match among the categories are logged by the bot and delivered as targets to the botmaster, who then creates suitable responses, starting with the most common queries. The Targeting cycle produces a progressively more refined bot personality.

The art of AIML writing is most apparent in creating default categories, which provide noncommittal replies to a wide range of inputs. Here is a very nice example found on the alicebot-general mailing list.

User: Where do I find the webpage with Australopithecus?

Alice: Where do you want to find it?

User: Bad Answer Alice, time to Learn a new Line

Alice: How would you answer "Where do I find the webpage with Australopithecus?"

User: I would say: "http://www.ancientskulls.net/australopithecus.html"

Alice: Thanks, human. Try asking me again.

User: Where do I find the webpage with Australopithecus?

Alice: http://www.ancientskulls.net/australopithecus.html

This is exactly the same as targeting, with the teacher playing the role of the botmaster. Or, I should say, it is an AIML interface to targeting.
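
The end product of this exchange would presumably be an ordinary category, the same one a botmaster would have written by hand from a target:

<!-- Category learned from the dialogue above -->
<category>
  <pattern>WHERE DO I FIND THE WEBPAGE WITH AUSTRALOPITHECUS</pattern>
  <template>http://www.ancientskulls.net/australopithecus.html</template>
</category>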

Of course this scenario raises many problems, such as how to weed out hackers and other garbage givers. That might mean turning on the Learn function only within a restricted sandbox. Still, it is the free labor of many chatters on the web that could help the bot grow wise quickly.

That is the crux of the learning problem in a nutshell. It all comes down to editing. Either the botmaster edits good quality replies to begin with, or if the bot is "open to the public" then the teacher has to edit out all the crappy answers supplied by all that "free" labor.

My personal view is that, compared with a hypothetical learning machine capable of learning language like a child, the AIML targeting approach is more efficient. If we could fund two projects beginning right now, one based on a learning machine and one based on AIML, and I could hire 50 botmasters to work on content, then I claim we would reach "the goal" (whatever that is) before the learning team would. If the past is any indication, they won't do anything but write reports.

Doug Lenat has been working on CYC for at least 15 years. One problem is that the CYC engine has never been peer reviewed. A journalist I know wrote a story about CYC, but he found it difficult to get any other A.I. scientists to comment on it because it is a proprietary, closed, black box. One researcher said, for all we know there is a dwarf inside, providing the answers.

The release of "Open" CYC does include the inference engine, though they still distribute it only in binary form. A few months back on the alicebot-style mailing list I reported on a demonstration of "commonsense reasoning" with A.L.I.C.E. using an external Prolog program and the <system> tag.

I called my program "PSYCH". Recently a grad student from NZ who is working on a Prolog thesis asked to continue that work, and I sent him the latest version of the Prolog code. Hopefully he will report back on a new PSYCH system soon.
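
The mechanism is simple: the <system> tag passes its contents to the operating system shell and substitutes whatever the command prints back into the reply. A sketch of the sort of category involved; the psych.sh wrapper script and the query syntax are hypothetical stand-ins for the actual Prolog glue, not the PSYCH code itself:

<!-- Shells out to an external Prolog program and returns its output -->
<category>
  <pattern>IS * MORTAL</pattern>
  <template><system>./psych.sh "mortal(<lowercase><star/></lowercase>)"</system></template>
</category>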

More than anything, CYC is a corporate welfare project. It is a sweatshop for Ph.D.s whose job it is to write five proposals for government funding, on the odds that one of them will be funded. Their output is a series of government reports showing how they applied CYC to this or that important problem, which over the years has ranged from a Soviet invasion of western Europe to anthrax terrorism. In other words, whatever the government has money for, CYC can solve.

But I applaud whatever effort Doug Lenat is making to place portions of CYC under the GNU LGPL. To that extent it is worth studying to see if there is anything we can use. I would like to think that the existence of free software projects like ours is putting pressure on CYC to release all their code under the GPL. Their embarrassing secret is most likely that the code doesn't really exist at all, or that it is a hopelessly tangled mess.

Chris McKinstry's Mindpixel project is inherently more interesting because they have collected a large number of YES/NO questions with answers. This is fantastic raw data for AIML and the A.L.I.C.E. brain. I have chatted with Chris about sharing this data. Even if the idea of GAC and the Mindpixel itself is flawed (and who can tell now?), the data they have collected will remain invaluable.

[Continue to part 3 of the interview.]

Comments:
  • by gosand ( 234100 ) on Friday July 26, 2002 @12:03PM (#3959311)
    Damn, is it just me, or is this interview a lot like The Onion Advice [theonion.com] articles?

  • Just one more page view to go... So, after making it this far into the interview who wants to be a botmaster? Too bad that title sounds like it came out of BattleBots. In this corner...ALICE vs. Biohazard!
  • by RobotWisdom ( 25776 ) on Friday July 26, 2002 @12:13PM (#3959370) Homepage
    For more on RACTER [Barger 1993] [robotwisdom.com], Cyc [Barger 1999] [robotwisdom.com], and Mindpixel [Barger 2002] [robotwisdom.com].
  • It would be interesting to see the source for CYC's inference engine released under GPL.

    This just won't happen, though. CYCORP [cyc.com] is a company that is seeking to capitalize on its decade-and-a-half of R&D by productizing the CYC inference engine and knowledge base in a variety of applications. Even though CYCORP has a 15+ year head start in creating the knowledge base, publishing the code for "peer review" would be inviting competition, maybe by someone with deeper pockets (Microsoft? Oracle? IBM?).

    Wallace's comments on CYC are interesting, considering that CYC is on a whole other level than ALICEBOT in terms of scope and complexity. I've been following Lenat's work since the early 1980s. Based on my understanding of ALICEBOT and CYC, comparing them is like comparing a rubber-band-powered plane to a 747: both fly, but one is a toy and the other a tool.

    You know, I'd be *VERY* interested in seeing a head2head interview between Wallace and Lenat. Can Slashdot make that happen?

    ...anactofgod...

  • by dwm ( 151474 ) on Friday July 26, 2002 @12:51PM (#3959632)

    Politicians, at least those in our society, never seem to give a straight answer to a question.


    Neither, evidently, do AI researchers...
    • Agreed, not a straight answer in the whole thing. Obviously he has been talking to crappy AI bots for too long. I figured in three pages there might be something, but no... what a waste...
  • by mike3411 ( 558976 ) on Friday July 26, 2002 @01:16PM (#3959851) Homepage
    Many of Dr. Wallace's replies focus on the ability to imitate intelligence, to trick the client human into believing that they are having a conversation with another human. He mentions how ALICE can respond to a question that it can't answer with another question, such as responding to a client's question about a basketball game with "Who was playing?" Although methods like this may succeed in tricking the client human, the point shouldn't be to simulate a human's response, but rather to intelligently respond in a meaningful way. Here's an example of a conversation I just had with ALICE:
    A> Hello Connect.
    M> My name's not Connect.
    A> What's your name?
    M> Mike.
    A> I know a lot of Mike's.
    M> Like who?
    A> Joe.

    Needless to say, that doesn't make much sense, and as I tried to figure out what it was doing, ALICE just responded with statements that, while individually grammatically correct and coherent, were clearly formulated with the goal of imitating a human, and as such did not really convey anything meaningful.

    If we can't make an AI that's as smart as a human, why should we try to make it talk like one?
    • *Listen* to a human conversation some time. It's full of awkward moments and silences, etc., and it breaks down when one of the participants asks a question that the other participant can't, or won't, answer. What do *you* do when someone asks you a question you can't or won't answer, or is completely out of context? Most people have their own set of "stock responses". For example, at my work, if someone crosses "the line" and makes a remark that might be construed as offensive or sexually explicit, someone else will jump in with "How about them Red Sox/Celtics/Patriots/Bruins/Revolution"? In fact, about 75% of the responses are either that or something about the weather. How long would it take Dr. Rich to code for that in AIML? Not long, I'd wager (in fact, he probably already has). Granted, the example is an over-obvious attempt at hand-waving to drive the conversation in a different direction - something that humans do all the time when a conversation isn't going their way.

      I especially got a kick out of his mention of the concept of pick-up lines. An age-old response to the problem of having to start a conversation from scratch. What I thought about when reading this was the concept of goals. When human beings engage in conversation, more often than not it's for a reason. People don't build a list of pick-up phrases just to have conversation - they wanna get laid! You can picture a future ALICE with an agenda - perhaps even something as simple as driving the conversation in a direction she's not as familiar with in order to "learn" more. Of course, this would mean you'd need to implement both a set of goals and a list of rules for those goals - something that would make the underlying model for A.L.I.C.E. much more obfuscated.
    • Needless to say, that doesn't make much sense, and as I tried to figure out what it was doing, ALICE just responded with statements that, while individually grammatically correct and coherent, were clearly formulated with the goal of imitating a human, and as such did not really convey anything meaningful.

      If it didn't convey anything meaningful, then why am I laughing my ass off and why is your post moderated as Funny? ;)

      A> I know a lot of Mike's.
      M> Like who?
      A> Joe.


      Heh heh heh.
  • I thought that part 2 of the interview looked lonely without any comments. It seems people got tired of the first part and gave up, replying to what they had already read, or else read through to part 3 and replied on the topic there.

    Hrm, could this be a type of psychological test for further AIML instructions in multiple-question format (i.e., how long will the average person hold up to inane questions before he just says bug off?).

  • The idea that we should judge AI by conversation skills is pretty weak. A program that responds 'No!' to all queries doesn't necessarily hold the potential of the average 2-year-old. Yet many such conversations occur every day.

    By the same token, aren't ELIZA's "tricks" the same techniques we use every day? If I hear something in the first person, it is natural to continue with the same subject by exchanging first and second personal pronouns. If I reach a point in the conversation where I'm not sure how to respond, I'll likely return to some previous thread.

    Rather than reject ELIZA as too simple to be real A.I., perhaps human conversation, for the most part, is REALLY REALLY simple, and ELIZA replicates that well.

    The problem is confusing human conversation and human intelligence. Take Linus, for example. What if I had an A.I. that could perfectly reproduce the conversational qualities of Linus? Of what use would that be? Well, not much. Linus's value is in his actions, as are most people's. An intelligence that is all talk and no action is not a very lofty goal. Do we really need autonomous artificial marketers?

    • An intelligence that is all talk and no action is not a very lofty goal.

      This is true, but I think artificial intelligence works much more like an operating system than a human brain, in that it is made up of parts that are developed individually, rather than a fuzzy whole that slowly develops every single function at the same time over a period of many years. This project works on conversational skills while another project works on physical bodies and another is probably working on independent actions.
  • Dr. Wallace says that much of his work as botmaster is devising new default categories and cleverly worded response patterns. Does he think that this botmaster work is something within the scope of AI? If so, does he have any ideas on:
    1. how a program might devise a new default category
    2. how it might formulate a response pattern that would cover the problem detected
    3. how it might test whether the new category degrades ALICE's performance in unexpected ways on inputs it used to handle better? (how would it notice a degradation)
