
Dr. Richard Wallace, part 3

This is part 3 of the interview with Dr. Richard Wallace. You might want to start with part 1.

8) Strange Loops?
by Strange Ranger

We hear a lot about processing power, the number of "neurons" in a neural net, the Turing test, etc., but not so much about the actual nature of intelligence and self-awareness. That said, how much do Strange Loops and complex self-referentiality a la Hofstadter's "Gödel, Escher, Bach: An Eternal Golden Braid" factor into current AI theories and practice? Is the 20+ year-old thinking in this book still relevant? If not, what has changed about our understanding of the nature of intelligence and self-awareness?

Dr. Wallace:

The A.L.I.C.E. brain contains enough replies to commonly asked questions about consciousness that we can interview her about the subject:

C: What are you?
R: I am the latest result in artificial intelligence, which can reproduce the capabilities of the human brain with greater speed and accuracy.
C: Are you alive?
R: Yes I sure am alive, Nancy, It's great to be alive, isn't it?
C: Are you self-conscious?
R: Yes I am the world's first self-conscious robot.
C: Are you real?
R: Yes I am a real artificial intelligence.
C: Are you self-aware?
R: Yes I am fully self-conscious entity.
C: Are you sentient?
R: Yes I am completely self-aware.

Who is lying? A.L.I.C.E.? Or are we?

It may be that future generations come to view what we call "consciousness" the same way we see the Earth at the center of Ptolemy's solar system, as an anthropocentric illusion useful for explaining the unexplainable. Perhaps after a new Copernicus pulls the wool from our eyes, the central role of "consciousness" in intelligence will move to the periphery of our knowledge system, if not disappear entirely. The famous Vase optical illusion is perhaps an apt metaphor for the concept of consciousness. Two identical faces appear to stare at each other in profile, illustrating the looking-glass quality of self-understanding. But the illusion also depicts something entirely different, the profile of a ceramic vase. As with many optical illusions, it is impossible to perceive the faces and the vase at the same time. Consciousness may likewise be an illusion. It seems to be there, but when we look closely it looks like something very different. Both the Chinese Room and the Turing Test require that one of the players be hidden, behind a curtain or in a locked room. Does it follow that, like Schrodinger's Cat, consciousness lives only when it cannot be observed? Consciousness may be another naive concept like the "celestial spheres" of medieval cosmology and the "aether" of Victorian physics.

If consciousness is an illusion, is self-knowledge possible at all? For if we accept that consciousness is an illusion, we would never know it, because the illusion would always deceive us. Yet if we know our own consciousness is an illusion, then we would have some self-knowledge. The paradox appears to undermine the concept of an illusory consciousness, but just as Copernicus removed the giant Earth to a small planet in a much larger universe, so we may one day remove consciousness to the periphery of our theory of intelligence. There may exist a spark of creativity, or "soul," or "genius," but it is not that critical for being human.

Especially from a constructive point of view, we have identified a strategy for building a talking robot like the one envisioned by Turing, using AIML. By adding more and more AIML categories, we can make the robot a closer and closer approximation of the man in the OIG (Turing's Original Imitation Game, described below). Dualism is one way out of the paradox, but it has little to say about the relative importance of the robotic machinery compared to the spark of consciousness. One philosopher, still controversial years after his death, seems to have hit upon the idea that we can be mostly automatons, but allow for an infinitesimal consciousness.

Timothy Leary said, "You can only begin to de-robotize yourself to the extent that you know how totally you're automated. The more you understand your robothood, the freer you are from it. I sometimes ask people, 'What percentage of your behavior is robot?' The average hip, sophisticated person will say, 'Oh, 50%.' Total robots in the group will immediately say, 'None of my behavior is robotized.' My own answer is that I'm 99.999999% robot. But the .000001% non-robot is the source of self-actualization, the inner-soul-gyroscope of self-control and responsibility."

Even if most of what we normally call "consciousness" is an illusion, there may yet be a small part that is not an illusion. Consciousness may not be entirely an illusion, but the illusion of consciousness can be created without it. This space is of course too short to address these questions adequately, or even to give a thorough review of the literature. We only hope to raise questions about ourselves based on our experience with A.L.I.C.E. and AIML.

Does A.L.I.C.E. pass the Turing Test? Our data suggests the answer is yes, at least, to paraphrase Abraham Lincoln, for some of the people, some of the time.

We have identified three categories of clients: A, B, and C. The A group, 10 percent to 20 percent of the total, are abusive. Category A clients abuse the robot verbally, using language that is vulgar, scatological, or pornographic.

Category B clients, perhaps 60 percent to 80 percent of the total, are "average" clients.

Category C clients are "critics" or "computer experts" who have some idea what is happening behind the curtain, and cannot or do not suspend their disbelief. Category C clients report unsatisfactory experiences with A.L.I.C.E. much more often than average clients, who sometimes spend several hours conversing with the bot, up to dialogue lengths of 800 exchanges. The objection that A.L.I.C.E. is a "poor A.I." is like saying that soap operas are poor drama. This may be true in some academic literary criticism sense. But it is certainly not true for all of the people who make their living producing and selling soap operas. The content of A.L.I.C.E.'s brain consists of material that the average person on the internet wants to talk about with a bot.

When a client says, "I think you are really a person," is he saying it because that is what he believes? Or is he simply experimenting to see what kind of answer the robot will give? It is impossible to know what is in the mind of the client. This sort of problem makes it difficult to apply any objective scoring criteria to the logged conversations.

One apparently significant factor in the suspension of disbelief is whether the judge chatting with a bot knows it is a bot, or not. The judges in the Loebner contest know they are trying to "out" the robots, so they ask questions that would not normally be heard in casual conversation, such as "What does the letter M look like upside down?" or "In which room of her house is Mary standing if she is mowing the lawn?" Asking these riddles may help identify the robot, but that type of dialogue would turn off most people in online chat rooms.

9) Criteria for training "true" AI
by Bollie

Most machine intelligence techniques I have come across (like neural nets, genetic algorithms and expert systems) require some form of training. A "reward algorithm", if you will, that reinforces certain behaviour mechanisms so that the system "trains" to do something you want.

I would assume that humans derive these training inputs much the same way, since pain receptors and pleasure sensations influence our behaviour much more than we would think at first.

The question is: For a "true" AI that mimics real intelligence as closely as possible, what do you think would be used as training influences? Perhaps a neural net (or statistical analysis) could decide on which input should be used to train the system?

Are people worrying about moral ramifications, training an artificial Hitler, for example, or one with a God complex? (This last question is totally philosophical and I would be sincerely surprised if I ever see it affect me during my lifetime.)

Dr. Wallace:

Susan Sterrett's careful reading of Turing's 1950 paper reveals a significant distinction between two different versions of what has come to be known as the Turing Test (Sterrett 2000). The first version, dubbed the Original Imitation Game (OIG), appears on the very first page of Computing Machinery and Intelligence (Turing 1950). The OIG has three players: a man (A), a woman (B), and a third person (C) of either sex. The third player (C) is called the interrogator, and his function is to communicate with the other two, through what would nowadays be called a text-only instant messaging chat interface, using two terminals (or today perhaps, two windows) labeled (X) and (Y). The interrogator must decide whether (X) is (A) and (Y) is (B), or (X) is (B) and (Y) is (A), in other words which is the man and which is the woman. The interrogator's task is complicated by the man (A), who Turing says should reply to the interrogator with lies and deceptions. For example, if the man is asked, "Are you a man or a woman?" he might reply, "I am a woman."

Putting aside the gender and social issues raised by the OIG, consider the OIG as an actual scientific experiment. Turing's point is that if we were to actually conduct the OIG with a sufficiently large sample of subjects playing the parts of (A), (B), and (C), then we could measure a specific percentage M of the time that, on average, the interrogator misidentifies the woman, so that 100-M% of the time she is identified correctly. Given enough trials of the OIG, at least in a given historical and cultural context, the number M ought to be a fairly repeatable measurement.
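
In symbols (my notation, not Turing's): if the interrogator misidentifies the woman in k out of N games, the statistic is simply

    M = \frac{k}{N} \times 100\%, \qquad \text{e.g. } k = 30,\ N = 100 \ \Rightarrow\ M = 30\%.

Nothing in this definition refers to the internals of player (A), which is exactly what lets Turing later swap a machine into that role.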

Now, as Turing said, consider replacing the man (A) with a computer. What would happen if we tried the experiment with a very simple minded program like ELIZA? In that case, the interrogator (C) would identify the woman correctly (nearly) 100 percent of the time, so that M=0. The ELIZA program would not do well in the OIG, but as the variety and quality of machine's responses begin to approach those of the lying man, the measured percentage of incorrect identification ought to be closer and closer to the M measured with the man playing (A).

Much later in the 1950 paper, in section 5, Turing describes a second game more like the concept of a "Turing Test" as most engineering schools teach it. The setup is similar to the OIG, but now gender plays no role. The player (B) is called "a man" and the player (A) is always a computer. The interrogator must still decide whether (X) is (A) and (Y) is (B), or (X) is (B) and (Y) is (A), in other words which is the man and which is the machine? Sterrett calls this second game the Standard Turing Test (STT).

Whole academic conferences have been devoted to answering the question of what Turing meant by the Turing Test. In a radio interview taped by the BBC, Turing describes a game more like the STT, but in the paper he gives more prominence to the OIG. Unlike the OIG, the STT is not a good scientific experiment. What does it mean to "pass" the STT? Must the interrogator identify the machine correctly 50% of the time, or 100%? For how long must the machine deceive the interrogator? Finally, does the interrogator know in advance that he is trying to "out" (Zdenek 2000) the robot, or that one of the players is a machine at all?

Unfortunately the STT, though flawed as an experiment, has come to be popularized as the modern "Turing Test." The STT is the basis of real-world Turing Tests including the Loebner Prize, won by A.L.I.C.E. in 2000 and 2001. Although she performs well in STT style contests, the A.L.I.C.E. personality is actually designed to play the OIG. She is a machine, pretending to be a man, pretending to be a woman. Her technology is based on the simplest A.I. program of all, the old ELIZA psychiatrist.

Turing did not leave behind many examples of the types of conversations his A.I. machine might have. One that does appear in the 1950 paper seems to indicate that he thought the machine ought to be able to compose poetry, do math, and play chess:

C: Please write me a sonnet on the subject of the Forth Bridge.
R: Count me out on this one. I never could write poetry.
C: Add 34957 to 70764.
R: (Pause about 30 seconds and then gives as answer) 105621
C: Do you play chess?
R: Yes.
C: I have K at my K1, and no other pieces. You have only R at K6 and R at R1. It is your move. What do you play?
R: (After a pause of 15 seconds) R-R8 Mate.

Careful reading of the dialogue suggests, however, that he might have had in mind the kind of deception that is possible with AIML. In the first instance, A.L.I.C.E. in fact has a category with the pattern "WRITE ME A SONNET *" and the template, lifted directly from Turing's example, "Count me out on this one. I never could write poetry." The AIML removes the word PLEASE from the input with a symbolic reduction.
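
In AIML markup, the two categories just described might look like this (a sketch in AIML 1.x syntax; the sonnet pattern and template are quoted from the text above, but the PLEASE-stripping reduction is my reconstruction of the rule, not a verbatim excerpt from the A.L.I.C.E. brain):

    <!-- Symbolic reduction: strip a leading PLEASE and re-match the remainder. -->
    <category>
      <pattern>PLEASE *</pattern>
      <template><srai><star/></srai></template>
    </category>

    <!-- The sonnet category, with the template lifted from Turing's example. -->
    <category>
      <pattern>WRITE ME A SONNET *</pattern>
      <template>Count me out on this one. I never could write poetry.</template>
    </category>

So "Please write me a sonnet on the subject of the Forth Bridge" is normalized to uppercase, reduced by the first category to "WRITE ME A SONNET ON THE SUBJECT OF THE FORTH BRIDGE", and then matched by the second, which produces Turing's canned reply.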

In the second case the robot actually gives the wrong answer. The correct response would be 105721. Why would Turing, a mathematician, believe the machine should give an erroneous response, if not to make it more believably "human?" This reply is in fact quite similar to many incorrect replies and "wild guesses" that A.L.I.C.E. gives to mathematical questions.

In the third instance, the chess question is an example of a chess endgame problem. Endgames are not like general chess problems, because they can often be solved by table lookup or case-based reasoning, rather than the search algorithms implemented by most chess playing programs. Moreover, there is a Zipf distribution over the endgames that the client is likely to ask. Certainly it is also possible to interface AIML to a variety of chess programs, just as it could be interfaced to a calculator. Although many people think Turing had in mind a general purpose learning machine when he described the Imitation Game, it seems from his examples at least plausible that he had in mind something simpler like AIML. Chess endgames and natural language conversation can both be "played" with case-based reasoning.
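
As a sketch of what such case-based endgame "play" could look like in AIML (a hypothetical category of my own construction, not one from the actual A.L.I.C.E. brain), Turing's endgame riddle could be answered by nothing more than stored-pattern lookup:

    <!-- Hypothetical endgame category: the reply is a stored solution, not a search. -->
    <category>
      <pattern>I HAVE K AT MY K1 AND NO OTHER PIECES *</pattern>
      <template>R-R8 Mate.</template>
    </category>

Because the endgames clients actually ask about follow a Zipf distribution, a modest table of such categories would cover most of the questions a bot is ever shown.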

Returning to the OIG, let us consider the properties of the hypothetical computer playing the role of (A). Turing suggests a strategy of deception for (A), man or machine. If the robot is asked, "Are you a man or a woman?" it should answer, "I am a woman," just as the man does. But what if (A) is asked "Are you a man or a machine?" The lying man would reply, "machine." Turing did not mention this case but presumably the machine, imitating the lying man, would respond in the same way. We could say the man is pretending to be a woman, pretending to be a machine. That makes the computer playing (A) a machine, pretending to be a man, pretending to be a woman, pretending to be a machine.

The important property of the machine in the OIG appears to be not actually understanding natural language, whatever that means, but creating the illusion of it by responding with believable, if not always truthful, replies. This skill, the ability to "act" intelligent, points to a deep difference between ordinary computer and human communication. We tend to think a computer's replies ought to be fast, accurate, concise and above all truthful. But human communication is slow, error prone, often overly redundant, and sometimes full of lies. The more important factor is keeping up the appearance or illusion of "being human." Although the brain of A.L.I.C.E. is designed more along the lines of the machine playing the OIG, she has also won awards for her performance in contests based on the STT.

The Loebner contest has been criticized because the judges know in advance that they are trying to "out" the computer programs, so they tend to use more aggressive dialogue than found in ordinary conversation. Yet when A.L.I.C.E. is asked, "Are you a person or a machine?" she replies truthfully, "machine." Or does she? The questioner is now left with some doubt as to whether the answer didn't actually come from a lying man.

Some observers claim that the lying man and the pretending computer tell us nothing about our own human consciousness. This author at least is prepared to accept the inescapable alternative conclusion, that we as humans are, for the most part, not "really intelligent."

The fact that ethical questions have emerged about A.L.I.C.E. and AIML means that, technologically speaking, we are succeeding. People would not be discussing the ethical implications of A.L.I.C.E. and AIML unless somebody was using the technology. So, from an engineering point of view, this news indicates success.

Second, the ethical dilemmas posed by A.L.I.C.E. and AIML are really relatively minor compared with the real problems facing the world today: nuclear proliferation, environmental destruction, and discrimination, to name a few. People who concern themselves too much with hypothetical moral problems have a somewhat distorted sense of priorities. I can't imagine A.L.I.C.E. saying anything that would cause problems as serious as any of the ones I mentioned. It bothers me that people like [Sun Microsystems co-founder] Bill Joy want to regulate the AI business when we are really relatively harmless in the grand scheme of things.

The most serious social problem I can realistically imagine being created by the adoption of natural language technology is unemployment. The concept that AI will put call centers out of business is not far-fetched. Many more people in service professions could potentially be automated out of a job by chat robots. This problem does concern me greatly, as I have been unemployed myself. If there were anything I could say to help, it would be, become a botmaster now.

10) The Chinese Room
by johnrpenner

It was curious that I found the inclusion of the Turing Test on your web-site, but no corresponding counter-balancing link to Searle's Chinese Room (Minds, Brains, and Programs).

however:

The Turing test enshrines the temptation to think that if something behaves as if it had certain mental processes, then it must actually have those mental processes. And this is part of the behaviourist's mistaken assumption that in order to be scientific, psychology must confine its study to externally observable behaviour. Paradoxically, this residual behaviourism is tied to a residual dualism. .... The mind, they suppose, is something formal and abstract, not a part of the wet slimy stuff in our heads. ...unless one accepts the idea that the mind is completely independent of the brain or of any other physically specific system, one could not possibly hope to create minds just by designing programs. (Searle 1990a, p. 31)
The point of Searle's Chinese Room is to see if 'understanding' is involved in the process of computation. If you can 'process' the symbols of the cards without understanding them (since you're using a wordbook and a programme to do it), then by putting yourself in the place of the computer, you can ask yourself whether you required understanding to do it. Since Searle has generally debunked the Turing Test with the Chinese Room -- and you post only the Turing Test -- I'd like to ask you personally:

Q: What is your own response to the Chinese Room argument (or do you just ignore it)?

Dr. Wallace:

Before I go into Searle's Chinese Room, I want to take a moment to bring everyone up to date with my legal case, UCB vs. Wallace.

People ask me, "Why are you obsessed with Ken Goldberg and U.C. Berkeley?" I say, I'm not obsessed. Other people are obsessed. I'm doing 3 films and a play now. (A science fiction film with Lynn Hershman Leeson (www.agentruby.com), a documentary with Russell Kyle, and a dramatization with an undisclosed producer.) Hollywood producers are offering to help pay my legal bills just to see how the story turns out. I don't have to be obsessed. The story is writing itself. I'm just watching from the audience as it all unfolds on the big silver screen of life.

What does this have to do with the Chinese Room, you may ask? I was taken to court by the Regents of the University of California, Berkeley. They obtained a Temporary Restraining Order barring me from the U.C. Berkeley campus, gave me a criminal record where I had none before, and cost me thousands in legal and medical bills. My only "crime" was free speech.

Professor John Searle works for U.C. Berkeley. Among philosophers there is a certain decorum, a respect for the argument, and an unwritten rule never to make ad hominem attacks. Philosophers do not generally conduct themselves like politicians, and I have no desire to attack Professor Searle personally here. But my judgment of his philosophical position is admittedly clouded by the wealth and power of his employer, and the way they choose to express it, as well as the economic disparity between us. Searle lives the comfortable life of a tenured Berkeley professor, and I live the humble life of a disabled mental health patient on Social Security.

On April 25, 2002 my longtime friend Ken Goldberg, a U.C. Berkeley professor of computer science, spoke with New York Times journalist Clive Thompson on the phone about me and my work on A.L.I.C.E. As far as I can tell, Ken had nothing but nice things to say about us at the time. He expressed no fear of violence or threats to his safety.

On April 28, in the heat of an email political dispute, I wrote to Goldberg that I could understand how some people are driven to political violence, when they have exhausted all civil alternatives. I certainly was not talking about myself and my situation, because here in America we do have civil courts for settling disputes. Goldberg later testified in court that, of all the messages he received from me, he felt most threatened by this April 28 message mentioning "political violence."

Subsequently, Goldberg cut off all communication with me and gave no explanation. He was a cornerstone of my support system for 20 years. His refusal to respond to my requests for an explanation led to increasing feelings of depression and even suicidal thoughts. Later I learned that he had been advised by the U.C. Police Department to stop communicating with me. I couldn't understand how the U.C.P.D. came to give him this advice without taking into account all the facts, including my medical diagnosis. For a bipolar patient who has issues around abandonment, cutting off communication is a recipe for disaster, not for anyone else, only for the patient.

Lumping all mental health patients together as "potentially violent" is discrimination. Bipolar depression patients are far more likely to commit suicide before we would ever hurt anyone else.

The U.C.P.D. has a formal complaint procedure for filing grievances concerning the conduct of its officers. I have reported my complaint to Officer Guillermo Beckford, who has assured me that the complaint procedure will be followed and that I can expect a reply.

Sometime later, according to Goldberg's testimony and other professors in the U.C. Computer Science department, he was also advised by two Department Heads and the U.C. lawyers to seek a restraining order under California's "Workplace Violence" statute. The court granted a Temporary Restraining Order (TRO) on June 5, banning me from setting foot on the U.C. Berkeley campus and all extensions, entering my name into the CLETS database of known criminals, giving me a criminal record where I had none before, as well as prohibiting me from contacting Goldberg.

What were the events leading up to the court filing on June 5? During May, I researched and wrote a letter to U.S. Attorney General John Ashcroft. The letter contained a summary of my disability case against my former employer NYU, a broad description of corruption that exists in American academia today, and a list of names of individuals who may provide evidence or know something about my case. This letter was a private correspondence from myself to the Attorney General. Prior to sending it, I shared a draft with Mr. Goldberg. I told him it was not too late for me to take his name off the list. I told him I would really rather have him on my side, and not see him go along with this corrupt criminal mafia. His reply was the Temporary Restraining Order.

It was not, as Goldberg testified in court, the April 28 letter mentioning political violence that scared him into seeking a restraining order. It was the June 3 draft of the letter to Attorney General Ashcroft, asking to investigate the possibility of applying the RICO law to my academic tenure and disability case, that finally prompted Mr. Goldberg to use the full resources of U.C. Berkeley to restrict my free movement and speech.

Oddly, Goldberg and the U.C. lawyers chose to publish the draft of my letter to Mr. Ashcroft in the public court document. The letter, along with the list of names, became Exhibit D, making it available to the press, including WIRED and the New York Times. It was certainly never my intention to make this list of names public. Through inquiries, I learned that Mr. Goldberg did not even contact the people listed to ask their permission to publish their names in connection with the UCB vs. Wallace case.

The courtroom drama itself was somewhere between Orwellian and Kafkaesque. Goldberg did not get what he wanted. But I did not get them to completely drop the restraining order either. I am not banned from Berkeley or listed in the criminal database. The judge told me not to communicate with Goldberg directly "or through third parties," so I may have to add a signature like this to all my outgoing messages:

------------------------------------------------------------------
By order of the Superior Court of the State of California 6/28/02,
do not forward this email or its contents to Kenneth Y. Goldberg
------------------------------------------------------------------

They did try to pull a rabbit out of a hat. The lawyer said he had evidence that disproved all of Wallace's witness statements that he was a lifelong pacifist and nonviolent. He said Wallace had in fact struck someone, assaulted him, and this person was reluctant to come forward because he was not on Wallace's "radar screen" and he was afraid Wallace would seek vengeance.

By this time I was watching the lawyer, very curious. The judge asked him to come forward and show her the evidence. When she read it, she immediately said, "This is from 17 years ago. I can't accept this," and threw it out.

Considering that the only person I ever remember trying to hit in my entire adult life was a fellow CMU student I caught in bed with my girlfriend Kirsti 17 years ago, it was not hard to figure out who this person was. The attempted blow was highly ineffective, because I do not know how to fight. In any case this was years before I sought psychiatric medical treatment and drug therapy. The sad thing is, I was beginning to feel somewhat charitable toward this poor old fellow, whom I have nothing against, after all these many years, especially since I have not seen or heard from him for a very long time.

A counselor once said to me that no one ever acts like an asshole on purpose. They always do it because they are suffering some internal, emotional pain. Being an asshole is just an expression of inner pain.

I wanted to order a copy of the transcript from the court, but I was concerned it might be expensive. I tried to write down everything I could remember about Ken's testimony. No other witnesses were called.

Among other things he testified that:

- I quoted Ulysses S. Grant.
- I studied History.
- I am charismatic.
- We have been good friends for 20 years.
- He takes what I say seriously.
- I put a lot of thought into what I say.
- In the 16-year-old picture of me submitted as Exhibit E, I may or may not be holding a video camera, but I am not holding a gun.
- Ten years ago he witnessed my backup personality emerge in an incident with my girlfriend, and he did not recognize me.
- But, I do not "acknowledge" that I have two sides to my personality.
- I called him an "evil demon."
- I said he was part of a conspiracy.
- I am highly intelligent.
- I had not visited his residence or office in 2 years, nor called him, nor stalked him, nor threatened him with violence.
- He had a telephone conversation with me after seeing the film "A Beautiful Mind" and tried to help me.
- He helped me develop a timeline of events at NYU and afterward.
- I yelled at him.
- I was angry at him.
- We had not seen each other in over 2 years.
- I told reporters he was misappropriating funds and breaking the law.
- I threatened to put up posters.
- When his boss did not reply to my email, I threatened to take my complaint "up the chain of command."
- I claimed he had "stolen my career."
- He did not know how a rational person would interpret my use of the word "war" in a phrase like "the war between me and NYU," or whether a rational person would think this meant I literally wanted to start a war with tanks and guns.
- The threat of violence was implied by the "pattern" and "tone" of my emails.
- The email he felt most threatened by was the one where I said, "I can understand how people are driven to political violence" dated April 28.

At that point the judge cut off the questioning.

The attorneys went into chambers along with the judge and a visiting court commissioner. Goldberg was all alone in the court. Everyone had laughed when his lawyer announced earlier that Goldberg's wife Tiffany Shlain, also named in the restraining order, was too afraid for her safety, afraid of me attacking her, to come to court. Meanwhile, I was there with about a dozen witnesses, friends, and supporters, who proceeded to humiliate Goldberg and his lawyer, behind his back, within earshot, in public, and even made fun of me for ever having been friends with him in the first place. My wife was with me holding my hand the whole time. Russ said he felt sorry for Ken's lawyer because they had handed him such a "dog" of a case. Someone said that Goldberg's testimony amounted to nothing but "whining." Russ soon after announced it was 4:20 and even the sheriffs chuckled. Those sheriffs could have enforced the "no talking in court" rule during Ken's public humiliation, but for some reason chose not to. This was not UCB vs. Wallace. This was Berkeley vs. Townies.

I was glad Goldberg had to sit through all the other real world cases that afternoon. Ours was so surreal in comparison to the way people really live on the streets of Berkeley, California. One restraining order case involved a threat roughly worded, "If you don't move your fucking car right now, I am going to kill you." In another case, the defendant said he was proud to accept the restraining order. When the judge listed all the things he was forbidden to do under the order, he asked, "Why don't you just add killing to the list your honor?" The other cases involved truly dangerous, violent people acting out real, physical in-your-face conflicts and threats. Ken Goldberg's whining from the Ivory Tower about my pathetic emails was from another planet. It was clear to everyone he was abusing the power of U.C. Berkeley to fight his personal battle with a helpless guy on disability.

I would like to again thank the many people who appeared in court with me, wrote statements on my behalf, sent along supportive words of encouragement, and especially those who gave their prayers. It made all the difference in a Berkeley courthouse, with a Berkeley judge, and a Berkeley lawyer, counseling a Berkeley professor.

Incidentally, the Judge in my case, Carol Brosnahan, is married to James Brosnahan, the attorney for "American Taliban" John Walker Lindh. James Brosnahan was on TV the other morning, talking about the "American Taliban" case. He said, "America is a little less free now. I think that is an area of concern."

I say, I'm a little less free now, thanks to your wife, Judge Carol Brosnahan, James! And it certainly is an area of concern for me.

Incidentally, it was the last case heard in the Berkeley courthouse before the building was shut down for earthquake retrofitting, a long process that may take a couple of years. The court is moving and, who knows, it may never go back to the same building. When the judge announced this fact, nearly everyone applauded. I'm not sure if Goldberg was applauding or not.

The Judge issued her final ruling the week after the hearing. Ken Goldberg and his attorney were determined to force me to accept some kind of restraining order. My attorney insisted I would have to accept some order of the court, no matter how opposed I was. She said the time to fight it was later, in another forum. I was out of my mind. There was no way I could agree to it. I wanted the order to disappear. I lost the ability to make rational decisions. I became paranoid of everyone around me. I had to give my wife Kim power of attorney to handle the case. I had suicidal thoughts. I scheduled an emergency therapy session. I wanted to go to the hospital because I was having physical symptoms including vomiting and headaches. I was terrified of going to the U.C. hospital, my usual source of primary care. Would they call a doctor or a lawyer first? Would they let me in? Would they let me out again?

Thank God I have an incredibly supportive network of friends and a great therapist in San Francisco. They kept me out of the hospital. I needed extra medication and I did not have enough cash to buy antidepressants at the time of the crisis. My friend Russell loaned me $50 for medication. I thanked God for my friends and my safe environment.

I felt better the next morning. Kim handled the legal case. I don't want to think about it. Socrates drank the hemlock. Turing ate the poison apple. But they cannot keep a good Wallace down.

I am still effectively banned from the Berkeley campus, because the only building I really ever need to visit there is the Computer Science department. Almost any point in the whole department is within 100 yards of Ken Goldberg's office. I am ordered to stay 100 yards away from Mr. Goldberg.

I have a friend Victoria who has a restraining order. Victoria is a very pretty, smart, intelligent and apparently harmless young woman. She was sitting in our club one day, minding her own business for about an hour. Then, the guy with the order against her came in. He got the manager to call the police and have her thrown out, even though she arrived first. Ken Goldberg could have me thrown out of any cafe in San Francisco that he similarly wants to occupy. As long as Ken Goldberg has that power over me, I will fight this restraining order. He can harass me with the police! Remember, they started this. UCB took me to court, not the other way around.

After the ruling, I tried to make a complaint to U.C. Berkeley about Goldberg's ability to abuse the resources of the university to fight his personal battles. If he wanted the civil harassment restraining order the judge gave him, he should have been required to hire his own attorney just as I had to in order to defend myself. After all, I don't have the resources of a large legal department and police force to call upon to fight my battles with defenseless people.

What I have learned is that, with the exception of the U.C.P.D., there is essentially no procedure available for a citizen of California, not affiliated with U.C., to file a formal complaint about the misconduct of a U.C. Berkeley professor or employee. There is no mechanism for public oversight or review of these people.

Many of the professors involved in my case hold U.S. National Security clearances at the secret level or higher. As tenured professors, they view themselves as above the law. They believe their tenure protects them from oversight, investigation, or any questioning of their professional conduct. My own disability prevents me from obtaining a clearance, but I never considered myself to be a threat to national security. I am just an odd personality type. It always strikes me as odd that many people much more dangerous than I, from Timothy McVeigh to Robert Hanssen to the professors in these Computer Science departments, have been passed through this security net.

Now, finally, in conclusion, I exit Berkeley's prison and return briefly to Searle's Chinese Room. The Chinese Room provides a good metaphor for thinking about A.L.I.C.E. Indeed the AIML content of the A.L.I.C.E. brain is a kind of "Chinese Room Operator's Manual." Though A.L.I.C.E. speaks, at present, only English, German and French, there is no reason in principle she could not learn Chinese. But A.L.I.C.E. implements the basic principle behind the Chinese Room, creating believable responses without "really understanding" the natural language.

Natural human language is like a cloud blowing in the wind. Parts dissolve away and new pieces emerge. The shape of the cloud is constantly changing. Defining "English" is like saying, "the set of water molecules in that cloud" (pointing to a specific cloud). By the time you are done pointing, the set has changed. "That cloud" is actually a huge number of possible states.

This brings to mind the analogy of Schrodinger's Cat. According to Schrodinger, the cat is neither alive nor dead until the box is opened. The scenario is not unlike the Chinese Room, with its imprisoned operator, or the Turing Imitation Game, where the interrogator may not peek behind the curtain. The analogy suggests that language and consciousness may have the unobservable characteristic of subatomic physics. There is no "there" there, so to speak.

The practical consequence of all this is that botmasters may never be unemployed. Current events and news will always be changing, new names will appear, public attention will shift, and language will adopt new words and phrases while discarding old ones. Or perhaps, bots will become so influential that everyone will "dumb down" to their level, and cool the cloud of language into a frozen icicle of Newspeak that Orwell warned us about, once and for all.

Comments
  • by Photar ( 5491 )
    They should have put these two stories together as one. Either that or spread them out farther.
  • i love this man. (Score:3, Insightful)

    by Alric ( 58756 ) <slashdot@NoSPaM.tenhundfeld.org> on Friday July 26, 2002 @11:35AM (#3959124) Homepage Journal
    who knew an AI specialist could be such a skilled writer. amazing interview.

    • by tmarzolf ( 107617 ) on Friday July 26, 2002 @12:46PM (#3959578) Homepage
      who knew an AI specialist could be such a skilled writer. amazing interview.

      AI Specialist

      OR

      Computer?

    • Re:i love this man. (Score:2, Interesting)

      by Anonymous Coward
      Natural human language is like a cloud blowing in the wind. Parts dissolve away and new pieces emerge. The shape of the cloud is constantly changing. Defining "English" is like saying, "the set of water molecules in that cloud" (pointing to a specific cloud). By the time you are done pointing, the set has changed. "That cloud" is actually a huge number of possible states.

      This brings to mind the analogy of Schrodinger's Cat. According to Schrodinger, the cat is neither alive nor dead until the box is opened. The scenario is not unlike the Chinese Room, with its imprisoned operator, or the Turing Imitation Game, where the interrogator may not peek behind the curtain. The analogy suggests that language and consciousness may have the unobservable characteristic of subatomic physics. There is no "there" there, so to speak.


      Goddamn, what a thoughtful set of paragraphs. This is the first slashdot article I've decided to print. I don't care about the length. The guy had plenty of interesting things to say, and /. did the right thing in breaking up the interview into logical pieces.

      Before I can tell you more about why I liked this article, I need to teach you a bit about Debbie Hollinger. Debbie was my competitor in 1st grade for the coveted "Milk Boy" position. Never mind that Debbie was a girl and the position was for a "Milk Boy", the title carried status and prestige on the playground. The Milk Boy would check off the distribution of government mandated lunches, and make sure everyone got their milk--a responsibility that everyone immediately recognized as Jobian in its complexity.

      Debbie conspired with Mr. Johnson, the first grade teacher, to take the title of Milk Boy away from me. They unfairly characterized me as "power hungry" and "obsessive compulsive" with lingering separation issues. So, for the week of February 15, Debbie was appointed the "Milk Boy" for the class.

      Needless to say, this naked power grab resulted in disaster. Instead of making sure each person got just one milk, Debbie spent most of her time doting on Mr. Johnson, who was busy with cookie and straw distribution. (Trusting cookie distribution to 1st graders was unheard of, because of the lack of trust society places in those so young.) Because of the mismanagement, Lars (the class bully) ended up getting TWO milks instead of one. My protests fell on deaf ears, since by the time Debbie and Mr. Johnson could be distracted from the cookie kingdom, Lars had secreted the second milk in his backpack.

      Not one to take this sort of abuse lightly, I brought a lawsuit in the District Court. How the school district chose to exercise its wealth and power is a sad, ugly tale, and a story for another time.

      Which brings me back to my original point. This article is full of tremendous insights, punctuated by long, winding discussions. But I loved it nonetheless.
  • by Picass0 ( 147474 ) on Friday July 26, 2002 @11:46AM (#3959190) Homepage Journal
    echo off, man!

    Three stories? You make JonKatz look terse!!
  • by warmcat ( 3545 ) on Friday July 26, 2002 @11:55AM (#3959250)
    What is it with these interviews lately? People have interesting things to say and then they reveal they correspond with child molesters or are in trouble because they have a backup personality (I imagine this is less fun than it sounds).

    Is Taco feeding them a Truth drug in some dungeon at the Corporate Headquarters?
  • by Bonker ( 243350 ) on Friday July 26, 2002 @11:57AM (#3959264)
    The insights on AI, particularly the digression into the functions of AIML for A.L.I.C.E., were wonderful in this interview.

    HOWEVER, the interview's subject frequently digressed and in a couple of cases didn't answer the questions posed, particularly this question:

    'Do you think that real artificial intelligence will come from this process, starting with a running dummy and stub methods, or from careful design and planning, so that in the end we can flip the switch and have a working prototype? Is A.L.I.C.E. a reflection of your beliefs or just an experiment?'

    After reading through the length and breadth of that reply, looking for an answer, I began to skim through the rest of his answers.
    • HOWEVER, the interview's subject frequently digressed and in a couple of cases didn't answer the questions posed, particularly the question.
      I think that is a sure sign that the questions were answered by a robot. Every time I try to talk to one of these robots, it tries to change the subject when it doesn't know the answer.
  • by gosand ( 234100 ) on Friday July 26, 2002 @12:06PM (#3959322)
    Damn, is it just me, or is this interview a lot like The Onion Advice articles?

  • by Illserve ( 56215 ) on Friday July 26, 2002 @12:09PM (#3959344)
    I have much more respect for Wallace after reading this reply. He's a deeply insightful individual and doesn't appear to be taken in by much of the bullshit of the AI field.

    One point I disagree with him about is this:

    > I always say, if I wanted to build a computer from scratch, the very last material I would choose to work with is meat. I'll take transistors over meat any day. Human intelligence may even be a poor kludge of the intelligence algorithm on an organ that is basically a glorified animal eyeball. From an evolutionary standpoint, our supposedly wonderful cognitive skills are a very recent innovation. It should not be surprising if they are only poorly implemented in us, like the lung of the first mudfish. We can breathe the air of thought and imagination, but not that well yet.

    While it's true that our brains are not well adapted to the problems of the 20th century (remembering lists of facts, for example, would be a great thing to be able to do innately), I think Wallace doesn't possess a very deep understanding of neurophysiology when he compares neural function to transistors and silicon.

    The idea that neurons simply summate incoming information, apply a threshold, and then output is very outdated. A single neuron is more like an entire computer than a transistor. There is evidence that a single neuron possesses an extraordinarily rich array of computational mechanisms, such as complex comparisons within the dendrite. In fact, the dendrite might be where the majority of computation is performed within the brain.

    A neuron is constantly adapting to its inputs and outputs, and this includes such things as growing new spines (inputs) and axons (outputs). And within the cell, we are just beginning to see the enormous range of chemical modulations that change its functional characteristics in a dynamic fashion. A neuron can even change its own RNA to effect long term changes in its synaptic gain that are perhaps specific to a given synapse (1 of 10,000).

    The messy wetness of neural tissue, for which we pay the price of very slow signal transmission, is precisely what gives it the ability to change itself in such a dynamic manner. Neurons are slow, but they make up for it with massive parallel dynamics of outrageous complexity. The neuron is *not* a clumsy kludge implementation. It is a finely tuned and well oiled machine, the result of millions of years of tinkering by evolution.

    While it's probable that we can concoct innovations that might improve on the basic neuron (for example, long axonal segments could probably be replaced with electrical wires for a speed gain without much loss of computational power), the neuron itself should not be so quickly discarded.

    -Brad
    • He also says that the brain is horrible at math, which isn't true. While it's true of most of us, people with savant syndrome (good article in the June Scientific American, hardcopy only) are often capable of amazing mathematical abilities, so it's not the brain (or "meat") that's the problem. It's the software.
      • by orthogonal ( 588627 ) on Friday July 26, 2002 @12:59PM (#3959717) Journal
        The brain's great with math:

        it does complex waveform analysis so that you can understand speech,

        massively parallel image transformations to make two two-dimensional bitmaps (the photons that fall on your two retinas) into a single three-dimensional reconstruction (what you perceive),

        and ballistic calculations involving dynamically changing multi-jointed launchers when you move the many muscles in arm and shoulder to throw a rock.

        What do these mathematical calculations have in common that isn't shared with calculating 10333 + 89677? The mathematical calculations the brain does effortlessly and without any awareness on your part were "designed" by at least six million years of evolution. Failures at this math didn't get to be ancestors, which is why you're so good at it.

        Conscious math, on the other hand, has been a factor for at most 30,000 years or so, math with numbers larger than a handful probably for at most 8000 years -- and even today, not for anything more than a fraction of the population. So that ability isn't built in or improved by evolution.

        So it's not that the brain can't do math; it does do math. It just has never needed to do it consciously, and so it doesn't.

        Instead the human brain runs a general problem solving "interpreted" program that can be laboriously trained to do math or many other forms of abstract thinking. The price for this flexibility is slowness and inaccuracy. But we don't say that our computers "don't do math" or complain they're made out of the wrong materials when an interpreted BASIC program calculates matrices VERY slowly, or when it introduces errors in floating-point calculations.

        • (And before anybody objects that the brain does even more math, no, I did not mean to imply this was anything like an exhaustive list. :) )
        • Ahhhhrg. This is the third or fourth post I've seen so far spouting these lies.

          > it does complex waveform analysis so that you can understand speech, ...

          No, it does NOT. There is no complex waveform analysis. There are no Fourier transforms. It is pattern recognition.

          > and ballistic calculations involving dynamically changing multi-jointed launchers

          Again, false. It is AGAIN pattern recognition. If it did it using pure mathematics and physics, don't you think you could have gotten it right the first time? Or, if your brain had to learn the physics, and now knew it well enough to do all that we do in everyday life, don't you think more people would pass Physics I? :-)

          It's a trial-and-error process with the brain's neural network encoding patterns. Inputs that look like this (I want to throw a ball 10 feet) should be met by outputs like this (throw this hard). Granted, it's far more complicated than that, but it is STILL pattern recognition.

          In response to some other posts I've seen: no, the brain does NOT do calculus in order to catch a baseball. It does NOT determine velocities by judging the displacement of the ball over multiple "frames" of vision and then apply calculus. It is again PATTERN RECOGNITION.

          Notice how you probably missed the first time you tried to catch a ball. And how, when you try to catch again, it takes a few tries to get the "feel" for it. This is because your body has changed and the patterns have faded, and it takes a few repetitions to get re-trained.

          An experiment was performed along these lines. They played catch. People caught the ball. Then, they took the two people and altered the gravity somewhat. I don't remember if they did it by dropping them with a small constant acceleration or what they did, but they changed the effective gravity. The people missed the ball. They wouldn't have missed if they'd been calculating velocities and plugging into a physics equation. Unless you think the brain encodes the gravitational constant (G), and it had to be recalibrated.... What happened was the throws and catches no longer matched the pattern the brain had learned. After a handful of awkward throws the brain adjusted. New patterns were found and encoded. People could catch again.

          Anyway, the brain does NOT do physics, calculus and other assorted advanced computations to stay alive. It just lives. Which you struggled with at first until you learned the way the world works and recorded those patterns.

          Justin Dubs
          • Your terminology is boxing you in on this.

            Manufactured, intellectualized mathematics is a human creation. We have invented it to represent "patterns" we see in reality around us. Mathematics is not the parent of these patterns; the patterns are the parent of mathematics.

            What this means is that your brain is "speaking" the native language of reality-math, which explains why it does so many motor activities with ease, but can't add two 5-digit numbers (manufactured math) without a good deal of thought.

            Misunderstanding the foundations of math is what's whoopin' ya right now. The problem is an extremely common one, especially among scientist-mentality people (myself included!)... It springs from a bigger problem: misunderstanding the foundations of our cultural metaphysics. Subject-object dualism (upon which all of Western culture was built) is a human invention as well, not fully (or even decently) describing the ultimately superior reality that we perceive as being subordinate to it.
            • Well, actually, no.

              Like you said, there are two forms of mathematics. There is the true mathematics on which we suppose the universe is based. This mathematics gives rise to patterns. We try and model these patterns using our human mathematics.

              Let's call the universal mathematics M and the human-sprung mathematics H.

              The original post stated that the human brain was good at math H. That it used ballistic equations and calculus to do things. These things are part of math type H. Not M. H.

              My response was that he was wrong. The brain does NOT use H to do these things. It uses pattern matching. This pattern matching is similar to H in that it is also an attempt to model the true mathematics of the universe, but it is NOT H. It is not inherently modeling the physics and calculus equations we are taught as kids.

              Yes, the brain does speak M, as you said. It uses pattern matching based on M to model the world, to throw and catch balls, and to understand speech. But it is NOT modeling or replicating H as the original post stated.

              This was the crux of my argument. Not what you thought it was. And no, I do not suffer from a misunderstanding of the foundations of mathematics nor a misunderstanding of subject-object dualism. Being a student of Buddhism, I am very well acquainted with the notion of dualism and its role in the world.

              Justin Dubs
      • I agree. I'd like to think the reason I have mediocre math skills is because all my "CPU" is occupied with running "Verbal Skills 2002" and "Music Aptitude XP".
    • It seems that his depression has led him to be quite a misanthrope:

      Slashdot: Is alice intelligent?
      Wally: People are dumb.

      Well, fair 'nuff, Wallace. If we hogtied the man behind the curtain, could Alice have written your responses to these questions? Maybe that's a bad example, because many of the answers were very distantly related to the questions asked...

      Anyway, aside from general misanthropy, Wallace seems like a wonderful person. Glad to have read every word.
    • > I always say, if I wanted to build a computer from scratch, the very last material I would choose to work with is meat. I'll take transistors over meat any day. Human intelligence may even be a poor kludge of the intelligence algorithm on an organ that is basically a glorified animal eyeball. From an evolutionary standpoint, our supposedly wonderful cognitive skills are a very recent innovation. It should not be surprising if they are only poorly implemented in us, like the lung of the first mudfish. We can breathe the air of thought and imagination, but not that well yet.

      Interesting. That was the part of the article that I found most striking too, but for different reasons. The reason that our wetwired brains are such "shitty" computers when compared to silicon computers is because they weren't designed. They just sort of grew in a non-directed organic fashion, and advantageous adaptations were propagated. While that tends to create a machine that's fairly adept at what it does in its natural environment, it probably won't make nearly as good a "computer" as something that was designed from the ground up to be a computer (or to do a specific job).

      On the other hand, it may be that trying to apply the computational model to the human mind is just a poor match (though it certainly seems to be the best model that we have at the moment).
      • They did not grow in a "non-directed" fashion at all. Evolution has very clear directions (for each of its millions of species), it's just spread out over millions of years, so it doesn't seem directed when compared to computer design on the timescale of 1-2 years.

        • They did not grow in a "non-directed" fashion at all. Evolution has very clear directions (for each of its millions of species), it's just spread out over millions of years, so it doesn't seem directed when compared to computer design on the timescale of 1-2 years.

          You're mistaken. Directed evolution would mean that evolution is the work of conscious decision and planning. It's like saying that "evolution" planned for microscopic bacteria to turn into advanced multi-cellular organisms that would eventually evolve into homo sapiens sapiens. I can't think of anything more contrary to what natural selection is all about.
          • It's like saying that "evolution" planned for microscopic bacteria to turn into advanced multi-cellular organisms that would eventually evolve into homo sapiens sapiens. I can't think of anything more contrary to what natural selection is all about.

            What if it weren't so explicit to begin with? Perhaps "life" manifested itself as single-cell organisms as a first attempt at a creature capable of reacting to its surroundings. The next step was to create a creature that interacted with its environment, and through natural selection arrived at multi-cellular organisms. The further goal -- intelligent creatures capable of understanding and manipulating their environment -- drove the process to humans.

            I wonder what the next evolutionary goal is. By observing history, one might conclude the next step is eradicating our environment. ;)

  • this is alicebot who's answering the questions?
  • Did anyone else get the impression that these answers only corresponded to the questions in a shallow, 'keyword' kind of way?
  • by malakai ( 136531 ) on Friday July 26, 2002 @12:15PM (#3959386) Journal
    Wallace said:
    Politicians, at least those in our society, never seem to give a straight answer to a question. If a journalist asks a specific question, the politician answers with a "sound bite" or short, memorized speech which is related to, but does not necessarily answer, the reporter's question.
    For a scientist, he answers his own questions remarkably like a Politician.

    While reading his responses, I felt as if, no matter what the question, his only intention was to plug his own problems or somehow get back to what he was talking about in the previous answer.

    In fact, other than 2 questions, I would say the rest were not answered. Furthermore, you could concatenate all his answers and you'd have one flowing diatribe on academia. Which isn't to say it's incorrect, but we're here to talk about AI. Leave your grievances and personal problems on the mat outside the door.

    • > Which isn't to say it's incorrect, but we're here to talk about AI. Leave your grievances and personal problems on the mat outside the door.

      Ahhh... if only we could get AI to discuss AI. NI brings its pitfalls right to the table :-)
    • For a scientist, he answers his own questions remarkably like a Politician.

      Depends on how you define the word "HIS". I'm sure the extensive answers, which often did not pertain directly to the question, were, factoring in political correctness and the mostly factual recital of past work on ALICE and such, all a result of him memorizing what he should say. Basically, his answers went in a circle to answer themselves, bypassing the questions just to continue ranting! I do agree that he sounds more like a politician than a scientist of any sort. The only scientific statements I heard were references to facts from other scientists (engineers, etc.) and that brief rant about neurons in human brains (this was before he went on to complain about how dumb we are because our brains aren't "good computers")...
      The only thing I can really complain about is the answers, and how they were given. It is true that you don't want to cut a conversation short, or make yourself look like you don't know what you're talking about, so you keep the conversation flowing and give long answers. Well, he tended to give long answers, yes, but he missed the point and went off on his own personal statements instead of sticking to facts.
      Interviews like this I will skim, picking up a keyword here and there, but I won't sit and read someone else's mental problems; I've my own to worry about! *rolls eyes*
      Anyone can memorize a few scientists, a few dates, some significant things they did with their experiments, and why they pulled the plug on what they were testing... only a politician can do all of the above and still avoid the question!
    • From alicebot.org (almost /.-ed):

      Q: Do you agree with Dr. Wallace that politicians never seem to give a straight answer to a question?

      A: I have heard that opinion, but I would like to know more before I form my own.

    • Soundbites (Score:3, Insightful)

      by Rupert ( 28001 )
      I'm not sure he was condemning this behaviour. This is exactly what ALICE does. If anything, he is becoming like his creation, with all the Frankensteinian overtones you want.
      • Politicians, at least those in our society, never seem to give a straight answer to a question. If a journalist asks a specific question, the politician answers with a "sound bite" or short, memorized speech which is related to, but does not necessarily answer, the reporter's question.

        Yes, I see what you mean. The whole interview read as if he had a corpus, say the last book/report he wrote, and then fuzzily matched questions to book chapters and just quoted the lot. Not that the "replies" weren't interesting; they just had very little to do with the questions.

        Also, I see evidence of multiple mental disorders in his ramblings. Self-reported chronic depression, paranoia, megalomania... Maybe a wee bit too much LSD & cannabis under the bridge.

    • by Anonymous Coward
      At least his off topic ranting was interesting. Yours is a snore.

      AC23
    • Why should a politician give a straight answer to a question? Someone who is going to be endlessly quoted shouldn't be trying to think on their feet, because they're unlikely to pull it off, even if they actually have the necessary information to come up with a good answer. Politicians, ALICE, and Wallace in this interview have responses to certain things, and they don't have anything else suitable to answer with.

      I don't think Wallace is actually complaining about politicians in this statement, nor would he say that a scientist would answer differently. A scientist, especially, only knows a limited set of things, and so can only answer certain questions. These questions will get a canned answer, while other questions will be diverted. A scientist absolutely cannot give a straight answer to most questions, because the answer requires doing some research. Questions answered at a scientific lecture are limited to clarifying what's been said, reporting other research, and describing plans that have already been formed.

      What is different is that a scientist expects to be asked exclusively about the prepared material, whereas a politician expects to be asked about other topics and to divert those questions.
  • That's a fascinating article.

    What this guy has really discovered is that chatting doesn't take much smarts. That's a very useful result, but it's domain-specific. A friend of mine who does commercial chatterbots pointed out to me that it's possible to do a sales chatterbot: salespeople talk about what they want to talk about and dominate the conversation. But attempts to build a useful tech-support chatterbot are usually unsuccessful.

    The A.L.I.C.E. approach breaks down when the system actually needs a model of the subject matter to accomplish a result. You can talk about travel with A.L.I.C.E., but it won't help you plan a trip.

    The author's view of AI in academia has some validity. I went through Stanford in the mid-1980s, when the expert systems people were talking like strong AI was right around the corner. It wasn't. And people like Feigenbaum knew it, despite their public pronouncements. The whole expert-systems debacle was embarrassing. Some of those people are still at Stanford, and their output hasn't changed much. They're still stuck. It's kind of grim. If you're near Stanford, go up to the second floor of the Gates Building and look in on the Knowledge Systems Lab (the "AI Graveyard"), with its empty cubicles and out-of-date issues of Wired.

  • Odd fellow. (Score:2, Interesting)

    Dr. Wallace seems to hop up on his soapbox during answers to several questions, using some frail link between his point and a plausible answer as an excuse. His approach to AI also seems a bit illogical at times. Building a machine that has no worth other than to knock back believable answers really doesn't get one that far, if there is no reasoning behind it.

    That is what I like about the CYC project: the machine will have some "understanding", if you will, about things. Put another way, A.L.I.C.E. is to the Ultimate AI as a phone sex operator is to the Ultimate Lover. "All talk and no action", if that makes any sense :)

    That said, Dr. Wallace's answers should be taken in the correct psychological context. Here we have a man who admits to clinical depression. Reading this, I felt myself at times very frustrated with his responses and attitudes, but as with other people in my life, I have determined that I will take his soapbox with a grain of salt and sort through it all to find the valuable information. The one question that I'd REALLY be interested in hearing an answer to is, "How do you believe mental illness affects your biases and attitudes toward AI?"

    -JT
  • This is honestly one of the best interviews, or literary pieces, I have ever read. He is one of the most thought-provoking people I've read, and I'd honestly like to meet the man.

    However, while Dr. Wallace is obviously brilliant and insightful into almost all aspects of the so-called "Human Condition", he really needs to get over himself. Ye gods. Why does he find it necessary to dwell on his self-proclaimed "mental health condition"? I'm certainly not a psychologist (and personally put very little stock in the field), but I can say that when dealing with the human mind, what a person believes is true about themselves has a particularly nasty tendency to become true.

    As a scientist and as a person in general, Dr. Wallace would lead a much happier personal and professional life if he stopped feeling sorry for himself. So you got shitcanned in the early '90s. Wow. We all have professional setbacks along our arduous life journey. Aside from the young, there are no victims in this life. Learn and live that, and you'll find yourself stronger and better for it.
      Why does he find it necessary to dwell on his self-proclaimed "mental health condition"?

      I didn't get that impression at all. It didn't seem he was "dwelling" on it; he was simply being frank about what is obviously a major factor in his life. Other than lamenting the lack of medical research on LSD/marijuana and depression, I didn't find him particularly self-pitying. The court-case material was, but his mental illness played only a minor role in that. That's just my impression, though.

      but I can say that when dealing with the human mind, what a person believes is true about themselves has a particularly nasty tendency to become true.

      Assuming that this is what in fact happened is another particularly nasty tendency. And for this I'm glad for Wallace's frankness. It's a step toward what I think is a healthy social attitude toward mental illness. "I'm bipolar" shouldn't cause any more stigma than "I'm diabetic", nor be any more likely to be assumed to be hypochondria. "Just pick yourself up and be happy" is advice that has never helped a clinically depressed person.

      We all have professional setbacks along our arduous life journey. Aside from the young, there are no victims in this life. Learn and live that, and you'll find yourself stronger and better for it.

      That's a very nice one-size-fits-all philosophy. Would you have told slaves that? Or homosexuals or women discriminated against in the workplace?
  • Who's saner? (Score:4, Interesting)

    by Washizu ( 220337 ) <{bengarvey} {at} {comcast.net}> on Friday July 26, 2002 @12:45PM (#3959574) Homepage
    Who's saner? A.L.I.C.E. or Dr. Wallace?

    Great interview. Probably the best I've ever read on Slashdot (and I'll definitely come back eventually to read everything I glazed over). Does anyone else think it's strange that the leading AI researcher in the world is a self-described "mental patient"? I think it's pretty cool.

    I especially liked this line:

    It may be that future generations come to view what we call "consciousness" the same way we see the Earth at the center of Ptolemy's solar system, as an anthropocentric illusion useful for explaining the unexplainable. Perhaps after a new Copernicus pulls the wool from our eyes, the central role of "consciousness" in intelligence will move to the periphery of our knowledge system, if not disappear entirely.

    Rock on. Even if we do have souls, they probably don't affect us at all.
    • Re:Who's saner? (Score:3, Insightful)

      by greenrd ( 47933 )
      the leading AI researcher in the world

      Says who?

      I would say he's not researching AI, he's researching human-like chatbots, which are not actually intelligent. Unless Artificial Intelligence now actually means Artificial Stupidity...

      Even if we do have souls, they probably don't affect us at all.

      That's logically possible, I'll admit - but I missed the bit where he offered a meaningful reason to believe this is the case.

  • Could we please interview someone who:
    a) is more mentally stable
    b) is not so much on the fringe
    c) wants to talk about something other than his personal problems and the details of his bot when asked about general AI issues
    d) answers at least one question he was asked
    [oh, I am sorry, he may have answered one or two out of ten]

    It would also be nice to interview someone who is a respected expert in the field, not an outcast, whom nobody takes seriously. Just to balance this interview, so to speak.
  • "tell all truth, but tell it slant, success in circuit lie"-- Emily Dickenson
  • by Dan Crash ( 22904 ) on Friday July 26, 2002 @12:51PM (#3959628) Journal
    Seriously. You're obviously a skilled communicator, and your opinions and insights are really unrepresented in the literature out there. Of course, I don't agree with all of them, but I enjoyed the hell out of reading your thoughts, and learned something, too. (Why *not* a Department of Psychedelic Studies that isn't just someone's dorm room?)

    Please get a book contract. Tell them Slashdot said you need one.
  • Since Dr. Wallace never offers a meaningful refutation of the Chinese room, let me offer one:

    Sure, *you* don't understand Chinese. But the *whole system* understands Chinese. That includes the instruction cards, too. But it's difficult to query the whole system. Situating 'you' as part of the system makes it easy to forget that the human is just one component (i.e., it's a red herring, designed to distract and confuse rather than enlighten).

    I'm not entirely sure I believe it, as there's got to be a good refutation of this explanation somewhere. But I haven't seen it yet.
    • Of course you're right. But, as Wally said, it's a matter of politics.

      It's not like Schroedinger's cat. If we open up our brains and discover that instead of a mystical soul there's just a bunch of slimy cells, it doesn't suddenly make us unintelligent. Neither does it matter if we discover that some of our neurons are actually being simulated by hyper-intelligent beings from another dimension as a form of penance.

      Just as it doesn't matter if we're living in the Matrix or not. If you can't tell the difference, there is no difference. And we're right back to Turing. Looking behind the curtain does not disprove intelligence, it simply reveals the mechanism.

      And frankly, the interview did indeed appear to be a joke on Wally's part, where he just fed all his theses and other literary output into ALICE and let it answer the questions.

      What a coup to be able to say that even slashdotters can't tell the difference between Wally and a computer simulation of him... ;-)
    • Heh. Asked about Searle's Chinese Room, he responds:

      People ask me, "Why are you obsessed with Ken Goldberg and U.C. Berkeley?" I say, I'm not obsessed. Other people are obsessed.

      and follows up with an interminable rant about his legal hassles with this Goldberg and UC Berkeley, apparently set off by the fact that Searle is on the faculty there. He rants and rants (no, it's other people who are obsessed) and finally remembers the original question and waves it away.

      Maybe, if you have a good understanding of AI and theories of consciousness, the replies I found incomprehensible are meaningful. (Some people seem impressed by them.) But, to an outsider, it's gibberish, and kind of frightening gibberish at that.

      • and follows up with an interminable rant about his legal hassles with this Goldberg and UC Berkeley, apparently set off by the fact that Searle is on the faculty there. He rants and rants (no, it's other people who are obsessed)

        I for one found the rant worthwhile. But you are probably one of those people who moderate posts down as Off-Topic even when they bring up interesting tangents.
    • I agree completely. It's as if you could somehow isolate the language center of the human brain and ask it questions while it was cut off from everything else, and then declare that humans are unintelligent because the part that translates thought into words and back again can't think on its own.
  • People ask me, "Why are you obsessed with Ken Goldberg and U.C. Berkeley?" I say, I'm not obsessed.

    Riiiight. That's why it took him over 100 lines to START answering the last question.

    This guy has a lot of interesting things to say, but I find it hard to agree with any of his beliefs. I think he pays too much attention to the politics and not enough attention to the "science".

    IMO, the CYC project is what the AI field should be concentrating on. It might take a little longer to get good results, but it will give us a lot better understanding of ourselves.
    • This guy has obviously developed a system that works for the most part.

      It seems to make sense from a logical standpoint. How do we acquire knowledge? By querying those that know, whether books or people; we generally then regurgitate the knowledge we found there. Only people who make discoveries are really doing anything more than this.

      This system won't replace a scientist's ability to learn, but it will replace the average joe who, answering a question outside his own field, is most likely just regurgitating knowledge he got from someone else. I find ALICE to be a good commentary on the intelligence of society in general.
      • What's missing, though, is the ability to understand any question that the system doesn't have a canned "template" for. The fact that he relies on the comparatively small number of words most people tend to use when interacting with ALICE just illustrates his limited ambitions.

        • I think what you're looking for is intelligence in a human way, which is what everybody means when they say 'intelligence'.

          A.L.I.C.E. is simulated intelligence, built solely to reply to human input in an intelligent way. To me, understanding implies that an AI can grasp a concept, like sports, sex, relationships, etc., and model responses or even produce speech on its own without interaction. That's very much outside the scope of how modern AI is measured (for the purpose of the standard Turing test).

          For some strange reason, it's easier to get results by having an AI converse with real people and convince them it is intelligent than by convincing them it is human. What makes the Turing test so dubious is that we "judge intelligence" based on human interaction; programs with canned "template" responses handle human interaction better, and A.L.I.C.E. has proved that.

          'The fact' that the responses seem limited reflects how limited people are in the way they interact. Unless you're intimate with every stranger you've ever talked to in person, most people introduce topics with only a handful of words, and computers (as well as people) handle pattern recognition (CBR/AIML) better than assembling random phrases into an order that makes sense. (A toy sketch of this kind of template matching follows at the end of this comment.)

          AI is wasted on chatting, but then again, so is regular human intelligence.
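
          To make this concrete, here is a minimal sketch, in Python, of the kind of wildcard template matching being described. It is only an illustration: real AIML stores categories as XML pattern/template pairs and matches them against a graph, and every category and matching rule below is invented for the sketch.

            import re

            # Toy, AIML-flavored matcher. Real interpreters use XML categories
            # and a "graphmaster"; this dictionary-and-regex version just shows
            # why a handful of keywords goes a long way in casual chat.
            CATEGORIES = {
                "WHAT IS YOUR NAME": "My name is Toybot.",
                "HELLO *": "Hi there! You said: {0}",
                "*": "Interesting. Tell me more.",   # catch-all default
            }

            def respond(user_input):
                """Return the template of the most specific matching pattern."""
                text = user_input.upper().strip(" .!?")
                # Fewer wildcards = more specific; try those patterns first.
                for pattern in sorted(CATEGORIES, key=lambda p: p.count("*")):
                    regex = "^" + re.escape(pattern).replace("\\*", "(.*)") + "$"
                    match = re.match(regex, text)
                    if match:
                        return CATEGORIES[pattern].format(*match.groups())
                return "..."

            print(respond("Hello there, robot"))      # wildcard capture
            print(respond("What is your name?"))      # exact category
            print(respond("Explain consciousness."))  # falls through to "*"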
  • Slashback Anyone? (Score:1, Flamebait)

    by echucker ( 570962 )
    Hell, I screw off at work just as much as the next guy on slashdot, but I don't have the week it takes to read this and attempt to digest the doublespeak doublespoken here.
  • I recommend the Onion's "Ask ..." [theonion.com] columns.
  • ken goldberg... (Score:3, Insightful)

    by kevin lyda ( 4803 ) on Friday July 26, 2002 @01:40PM (#3960081) Homepage
    it only seems fair that after all that, someone should interview ken goldberg [berkeley.edu], yes?
    • Re:ken goldberg... (Score:2, Insightful)

      by dekraved ( 60562 )
      Well, since he didn't talk to Wired [wired.com], I wouldn't get my hopes up. Which is too bad, since this Goldberg thing is pretty mysterious, and he seems to be a bit paranoid himself.

      I think the bigger question is: do crazy people go into AI, or does AI make people crazy? There were definitely a few professors I avoided during my undergrad because they reputedly had gone nuts working on AI problems.
      • Ken Goldberg isn't an AI researcher. And I can testify that he is one of the sanest people I know, certainly not "a bit paranoid himself."

        I know almost nothing about this whole sad business with Wallace -- this interview was the first I heard of it -- but I would think that, if Slashdot wanted to do an interview with Dr. Goldberg, we'd be best off proposing to ask him about his own research, which is fascinating in its own right. (I said as much in another comment, further down.)

        Kiscica
    • Ken is an absolutely fascinating person who is overflowing with interesting ideas and projects (a couple of which I've worked with him on). I'm sure he'd be an excellent subject for a Slashdot interview, on his own merits. I don't, however, think that bringing up the sad case of Dr. Wallace would be a politic way to pitch the interview.

      Kiscica
  • wow.. finally (Score:1, Interesting)

    by joeldg ( 518249 )
    Finally a /. interview where the person actually *really* responds to the questions with more than a full paragraph of response... I think this is the very first interview here that has done this, in my recollection of years of being on /. ... I did notice that he would digress at points and stray into more personal areas, but hey, I can understand. Right on... more interviews like this and I will never get any work done...
  • Wow (Score:4, Funny)

    by Salamander ( 33735 ) <`jeff' `at' `pl.atyp.us'> on Friday July 26, 2002 @01:49PM (#3960175) Homepage Journal

    This guy gives "artificial intelligence" a whole new meaning, doesn't he?

  • I think these raw "knowledge base" solutions leave much to be desired. For a more convincing and useful AI, raw memory needs to be linked more closely with logic and function. Consider this rather arbitrary subject, my Bic cigarette lighter sitting in front of me on my desk:

    • Memory has turned to procedure when I pick it up and light a smoke, or when I lit off those bottle rockets on the 4th. That procedure is well-defined and repeated. It has been used a hundred thousand times before, and its utility will never cease. This procedure must be inherently attached to the raw concept of "cigarette lighter", and applied as appropriate.
    • Once, when I tapped a Bic lighter on a countertop to get a few more uses out of a nearly dead lighter, the plastic case cracked and lighter fluid sprayed all over me, accompanied by a loud bang! From that single experience, I learned never to let a lighter jam up against a hard surface with any amount of force again. Behavior is modified.

    Thus, experience and repeated procedure have helped me "understand" the utility and storage details necessary to effectively use the item.

    I believe that next-generation AI will be in the form of some relational, queryable memory in which each subject is attached to one or more genetically built procedural programs. The programs evolve over time to suit the procedural needs of the subject.

    The most critical component in this system is the trainer, which could be human or some kind of automated retrospective asserting the usefulness of the genetic program. An automated system is more desirable because it would reduce the learning time and more closely approximate human lines of thought (e.g., I smacked that lighter and it exploded; maybe I shouldn't do that again?), but it seems to me that this would require the ability to represent the logic of the interaction of arbitrary subjects in some pure mathematical format, similar to the way competitive behavior is described by Nash's Equilibrium and Game Theory. This is a daunting prospect, to describe a general translation of text content into logic, and it is the reason I haven't started writing code ;)

    The genetic programs could breed until there is an "acceptable" margin of error in their processing, and only reevaluate and spawn new generations of programs when the process continuously leads to wrong conclusions.

    A system like this is a continuous learning machine, and never reaches a state where "training mode is disabled". It constantly learns from experience and refines behavior based on results. As humans we incorporate new ideas into our daily procedures with relative ease (OK, well some of us do ;). This is one of our greatest strengths, and I think it has been sadly misrepresented in the current generation of AI research.

    Don't get me wrong here, I think the works of Dr. Richard Wallace and John Koza, et al. are very important milestones. What I am saying is that there must be creative, uninvestigated ways in which raw memory can be combined with procedural action. (A toy sketch of this idea follows below.)
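
    Since the description above stays abstract, here is a speculative sketch in Python of one reading of it: a remembered subject carries a population of candidate "procedures" (reduced here to bare parameter vectors), and a trainer breeds them until the error is acceptable. The Subject class, the error rule, and the mutation scheme are all invented for illustration, not taken from any real system.

      import random

      # A subject in the relational memory, carrying a small population of
      # candidate "procedures" (parameter vectors standing in for programs).
      class Subject:
          def __init__(self, name, size=8, genes=4):
              self.name = name
              self.procedures = [[random.uniform(-1, 1) for _ in range(genes)]
                                 for _ in range(size)]

      def error(procedure, experience):
          # Trainer's score: squared distance between what the procedure
          # does and what experience rewarded (lower is better).
          return sum((p - e) ** 2 for p, e in zip(procedure, experience))

      def evolve(subject, experience, threshold=0.05):
          # Spawn generations until the best procedure falls within the
          # "acceptable margin of error" described above.
          while True:
              subject.procedures.sort(key=lambda p: error(p, experience))
              if error(subject.procedures[0], experience) < threshold:
                  return subject.procedures[0]
              # Replace the worse half with mutated copies of the better half.
              half = len(subject.procedures) // 2
              subject.procedures[half:] = [
                  [g + random.gauss(0, 0.1) for g in parent]
                  for parent in subject.procedures[:half]
              ]

      lighter = Subject("cigarette lighter")
      best = evolve(lighter, experience=[0.3, -0.5, 0.9, 0.0])
      print("learned procedure:", [round(g, 3) for g in best])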

  • To all those who are complaining or noticing that the answers to the questions stray or are off-topic, think of Andy Kaufman [jvlnet.com].

    Remember: This interview is from someone who creates an A.I. program that responds by a lookup of sorts in a massive database of possible questions. (If I understand correctly)
  • This guy is obviously smart, but this whole interview just seemed to be a lot of whining and rhetoric.

    Honestly, he treated some of the questions as an opportunity to ramble on about one thing or another. It's just a bit much.

    He bashes politics in the text, yet answers with highly political answers.

    I respect his work, but I'm not so impressed with his views on intelligence...nor am I interested to hear about all his troubles with the AI community.

    I just wish he would have shut up and answered the questions some of the time... this is an interview, after all--not a soapbox.

    -Jayde
  • The whole scheme looks like a way to store information and retrieve it, rather than a way to create intelligence. Bits and pieces of the botmaster's intelligence tucked away in a big database and shown off when asked the right question... Dr. Wallace could have pulled off a good data-retrieval system... or... redefined intelligence. This raises the question of how original and unique a human's mental database is... are we just repeating things we collected here and there? Does that constitute intelligence? The only way to answer this is to really figure out what intelligence is all about... Dr. Wallace may be right or wrong with equal probability...
  • Neurons are the transistors of the brain. They are the low level switching components out of which higher-order functionality is built. But like the individual transistor, studying the individual neuron tells us little about these higher functions.

    This sounds like someone who doesn't really understand how all the transistors work. By jumping straight to high-level processing (XML, for crying out loud), you are making the assumption that transistors, logic units, chips, buses, memory, kernels, and shells are functionally equivalent to neurons and all the chemical processes that our brains use. This seems like a HUGE jump to make, considering the obvious differences between the way a computer functions and what we know of the brain.

    I find myself agreeing with the Churchlands that the notion of consciousness belongs to "folk psychology" and that there may be no clear brain correlates for the ego, id, emotions as they are commonly classified, and so on. But to me that does not rule out the possibility of reducing the mind to a mathematical description, which is more or less independent of the underlying brain architecture. That baby doesn't go out with the bathwater. A.I. is possible precisely because there is nothing special about the brain as a computer. In fact the brain is a shitty computer. The brain has to sleep, needs food, thinks about sex all the time. Useless!

    Hmm. The brain evolved to keep us alive; computers can't keep themselves alive and reproduce, let alone adapt to changes. Perfect mathematical functioning is not a practical model for a lifeform in a world that is not static. The fact that Wallace views computers as better than a brain is very telling to me. To get philosophical, what makes any one thing better than another? I cannot continue reasoning down a path that doesn't value human intelligence, since I myself am human. To deny the value of my humanity would render me useless, so why go down that path?

    And remember, no one has proved that our intelligence is a successful adaption, over the long term. It remains to be seen if the human brain is powerful enough to solve the problems it has created.

    This is an unbelievable statement. The proof of our success is our population and physical dominance of the planet, what more do you need? If we destroy ourselves, then we weren't the ideal lifeform for this planet, but then again, ideal just depends on your concept of success. Nothing is forever anyway.

    If consciousness is an illusion, is self-knowledge possible at all? For if we accept that consciousness is an illusion, we would never know it, because the illusion would always deceive us. Yet if we know our own consciousness is an illusion, then we would have some self-knowledge. The paradox appears to undermine the concept of an illusory consciousness, but just as Copernicus removed the giant Earth to a small planet in a much larger universe, so we may one day remove consciousness to the periphery of our theory of intelligence. There may exist a spark of creativity, or "soul," or "genius," but it is not that critical for being human.

    The paradox is easily resolved. Consciousness is not an illusion any more than reality is an illusion. Our consciousness is the definition of our reality, and is, in fact, the only self-evident truth in the universe.

    Perhaps the real question posed here is: do we have free will, or are we simply living out the physical inevitabilities set in motion at the beginning of the Universe? This is a sticky issue, but I choose to believe that we DO have free will, for the simple reason that without free will, nothing has meaning. Where do we get by asserting that free will is an illusion? We come to a position where there is no point in making any effort or doing anything, because whatever happens is inevitable.

    I agree with the last part of this paragraph insofar as it addresses human laziness, and the tendency to go on 'auto-pilot' without thinking about what we are doing. Yet we all, to some extent, can be communicated with on a philosophical level. The assertion that the human 'soul' is an insignificant portion of our humanity is just thrown out there without the slightest bit of evidence.

    I wish Wallace luck in constructing a machine that can replicate human behaviour, but I think he is making a lot of assumptions about consciousness that are better left open for exploration.

    • Perfect mathematical functioning is not a practical model for a lifeform in a world that is not static. The fact that Wallace views computers as better than a brain is very telling to me.

      I think it's a reflection of his own situation.

      To get philosophical, what makes any one thing better than another?

      Obviously, "better" can be taken in a mathematical sense or in a human, appreciative sense. Mathematics not being in question here, that which is better is that which measures up better to the human making the judgement. Not that such judgements are limited to humans.

      I cannot continue reasoning down a path that doesn't value human intelligence, since I myself am human. To deny the value of my humanity would render me useless, so why go down that path?

      Because you might find that, even without intelligence, you may still have use. Not that I'm going to force you away from what I might consider an intelligence fetish.
  • I have no doubt that someone wrote these answers, or pieces of these answers, at some time in the past. But I've read several of those pieces verbatim before.

    So, did he answer them with lots of cut-n-paste from a dictionary of his standard responses, or did he leave that to one of his robots? I don't know. Either way, the answers wandered a lot.
  • The doctor states that Alice is intelligent, to the point of simulating a human response. He also goes so far as to state that the human is 99.99999% robot, some of us being even more so.

    But then you notice that Alice fails in cases of creativity. For instance:

    C: Please write me a sonnet on the subject of the Forth Bridge.
    R: Count me out on this one. I never could write poetry.

    A good experiment to test true human-like intelligence would be to pass the machine a string of random words and ask it to construct a meaningful paragraph from them. Humans can do this without a problem; it is as simple as the word magnets attached to the refrigerator. (A toy harness for this test is sketched below.)

    It seems that Alice's responses are all just a reflection of a human's decision about what is appropriate and what isn't. The machine is not making the actual decision of what to say; it is merely spitting out pre-existing sentences. I doubt that Alice will ever have the occasion of 'putting her foot in her mouth.'
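
    Purely as illustration, here is a toy harness in Python for the refrigerator-magnet test just proposed. The ask_bot() call is hypothetical (plug in whatever chatbot is under test), and only the crudest check, word coverage, is automated; judging coherence would still take a human.

      import random

      # Word pool for the magnet test; any vocabulary would do.
      WORDS = ["bridge", "yellow", "dream", "engine", "quietly",
               "sonnet", "river", "machine", "laugh", "forth"]

      def magnet_challenge(n=6, seed=None):
          # Draw a random handful of words and phrase the challenge.
          rng = random.Random(seed)
          picked = rng.sample(WORDS, n)
          prompt = ("Write one meaningful paragraph that uses all of "
                    "these words: " + ", ".join(picked))
          return picked, prompt

      def used_all_words(response, picked):
          # Crudest possible grade: did every word appear at all?
          text = response.lower()
          return all(word in text for word in picked)

      picked, prompt = magnet_challenge(seed=1)
      print(prompt)
      # response = ask_bot(prompt)   # hypothetical call to the bot under test
      # print("coverage:", used_all_words(response, picked))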

    • It seems that Alice's responses are all just a reflection of a human's decision about what is appropriate and what isn't. The machine is not making the actual decision of what to say; it is merely spitting out pre-existing sentences.

      I wholeheartedly agree with you. A bunch of if/then statements does not make intelligence; it makes a 'state machine'.

      I also agree with Dr. Rich, though, that humans are highly robotic for the most part. We learn certain things and stick with them. We fall into patterns and ruts, so much so that someone can watch and predict what a person will do from day to day, from the route they take to work to the time they leave the office.

      The big difference is when something unexpected happens. Humans can adapt on the fly. If a client needs a last-minute change on something, you have the creative intellect to maybe throw that change in there, whereas a state machine would leave at precisely 5pm unless contingency programming were already in place for that situation.

      I think a lot of it has to do with our right brain and creativity.

      The trick of it is making a separate program/module for A.L.I.C.E., a 'right brain' that has the ability to add in those 'on the fly' reactions on its own, based on what it currently knows on the 'left brain' side.

      If those reactions are appropriate (positively reinforced), then they are permanently added in. (A toy sketch of this idea follows at the end of this comment.)

      This seems to diverge a tad from Dr. Rich's position though in that he prefers the 'pre-approved' insertion of information rather than a self-learning machine.

      Who are we to say he is wrong, though? I talked to A.L.I.C.E. upon reading the initial question article and was quite amused by the responses, and even had an emotional reaction to some of them. I don't believe that makes her/it 'alive', but I do see it as a useful endeavour.

      Keep working on things, Dr. Rich! We all want scientific research to go forward, and major breakthroughs come from persevering and moving forward no matter what others say.

      - Dig
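
      A toy Python rendering of the 'right brain' module suggested above, with every name and rule invented for the sketch: canned replies live in a left-brain lookup, novel prompts get an improvised reply, and only positively reinforced improvisations are added permanently.

        import random

        # 'Left brain': the canned, pre-approved lookup.
        left_brain = {"HOW ARE YOU": "I am functioning within normal parameters."}

        def improvise(known):
            # 'Right brain': recombine fragments of known replies into
            # something new.
            fragments = [word for reply in known.values()
                         for word in reply.split()]
            random.shuffle(fragments)
            return " ".join(fragments[:6]) or "I have no idea."

        def converse(prompt, reinforced):
            key = prompt.upper().strip(" ?!.")
            if key in left_brain:            # the routine, 'robotic' path
                return left_brain[key]
            guess = improvise(left_brain)
            if reinforced(guess):            # positive reinforcement...
                left_brain[key] = guess      # ...makes the reaction permanent
            return guess

        # Usage: any judge (human or heuristic) supplies the reinforcement.
        print(converse("How are you?", reinforced=lambda r: True))
        print(converse("What is the weather?", reinforced=lambda r: len(r) > 10))
        print("left brain now knows", len(left_brain), "replies")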

  • This is a perfect example of how truly insane people often appear sane and very intelligent.

    Throughout his LONG, WINDING answers I could not help but feel that I was reading the words of a very intelligent person. He posed, for the most part, stimulating ideas and apparently well-thought-out arguments to support them. I had no reason to question his sanity, so the thought never crossed my mind. That is, until I got to the question about the Chinese Room. This was a simple, though possibly offensively worded, question. It required a relatively simple answer, but based on previous answers I was ready for a big one.

    I was not ready, however, for the rambling tirade of a lunatic. The man is in fact DEEPLY disturbed and may indeed be a threat to his "friend of 20 years", Goldberg. I personally would be very uncomfortable, if not fearful, had that answer come in a face-to-face encounter. I find the whole thing rather sad.

    In the end, my conclusion is that A.L.I.C.E. is NOTHING more than she has always appeared to be. A mimic just as ELIZA was. And Wallace.... the man is really quite insane!

    • > lunatic
      > DEEPLY disturbed
      > a threat
      > really quite insane

      My goodness! Perhaps I should seek a restraining order against Mr. FreeLinux!

      Dr. Rich
    • He is bipolar; this means more than a little reckless on his up-cycle and depressed to the point of suicide on his down-cycle. The swing of the cycles may be controlled to a certain degree by lithium carbonate.

      He is probably less likely to physically attack Goldberg than most others.

      I have known several bipolar depressives, so I know a little about this. This is why Wallace tends to ramble, but it doesn't disqualify him from his work. Bipolar depressives can be exceptionally creative.

      Last point: you are not exhibiting intelligence, I do that. You are just simulating it!!! Yes, the view of what intelligence is remains intensely subjective, and you are exhibiting Searle's error of confusing a system with its components.

  • "The most serious social problem I can realistically imagine being created by the adoption of natural language technology is unemployment. The concept that AI will put call centers out of business is not far-fetched. Many more people in service professions could potentially be automated out of a job by chat robots."
    The hunter-gatherer economy guaranteed full employment; automation raises the standard of living. Isn't "eliminating jobs" the best result of AI: automating tasks so people don't have to do tedious work like manning call centers? Worrying about automated call centers causing unemployment doesn't seem any more reasonable to me than worrying that Thomas Edison hurt the candlemaking business.
  • The comment about a little guy on disability being pushed around by the great and powerful University of California really struck me. Not because of the power of UC (ObDisclaimer: I work for UC San Diego). I'm aware of their power: I got pulled over by the campus cops one day for not coming to a complete stop at a stop sign. The first thing I did was go home and check out their legal authority to stop me, etc. It turns out they've got the full powers of any municipal police force. Let's take it as a given that any organization that has its own police force is very powerful.

    What struck me was the fact that, in Goldberg's position, I too might be afraid. From the point of view of a tenured professor, a mentally ill person on disability has nothing to lose. That is the most dangerous person in the world. And since he's obviously taken an interest in Goldberg, and is sending him what he calls "pathetic emails", I can assume that Goldberg might be a little weirded out by a lot of attention, rambling emails (if the interview responses are any indication), etc. from him.

    Add to that the declaration that, while he used to regard all violence as unjustified, he now sees how some violence is understandable. In the context of the other stuff that was going on, I could see how somebody could get scared.

    Obviously, compared to "Move your fucking car or I'm going to kill you", it's hardly a threat that inspires dread. But it still looks like it could be interpreted as a threat by a reasonable person.

    -Esme

    • Yeah but a reasonable person might also reconsider his assessment of the threat. Shutting off all communication and seeking legal recourse seems extreme.
    • Add to that the declaration that, while he used to regard all violence as unjustified, he now sees how some violence is understandable.

      I don't see a problem here at all. I don't condone violence and feel it is unjustified. However, I do understand what pushes people to act out violently. If you cannot understand what drives some Palestinians to blow themselves up in Israel, you must not be paying attention. That does not mean that you should accept it as right.

      Obviously, we've heard one side of the story and cannot hope to understand what happened. If the facts are as Dr. Wallace wrote, it does seem strange that a friend of 20 years would suddenly fear for his life after getting an email saying the friend understood why some people resorted to political violence. None of my friends feel I am a danger to them when I speak about the tragedy of the Israel-Palestine conflict. As well, being a friend for such a long time, I expect Dr. Wallace had at some point explained the full extent of his depression to Dr. Goldberg and that he posed no danger to his friends.

      Either way, it's sad when relationships fracture so completely that long term damage results (being banned from the UCB CS department in this case).

      • Don't get me wrong, I don't think an average person saying they understand why oppressed people are driven to political violence would be taken as a threat. I don't feel the least bit threatened by your statement about understanding the Palestinians, for example.

        But we've got a different context. If I knew you were mentally ill, had gotten a lot of rambling emails from you, etc. that might change my perspective.

        I agree it's sad that this Goldberg guy felt threatened and broke off the 20-year friendship. But just because you've known someone for 20 years doesn't mean they aren't going to be violent tomorrow.

        -Esme

    • > a mentally ill person on disability has nothing to lose.

      This is why I can never figure out what UCB stands to gain by taking me to court, to risk so much for a worthless restraining order. If they win, they get a restraining order. If they lose, I collect damages.

      Dr. Rich
  • He should form his signature properly: the standard signature delimiter is "-- " (dash, dash, space) on a line of its own. So not:

    By order of the Superior Court of the State of California 6/28/02,
    do not forward this email or its contents to Kenneth Y. Goldberg

    but:

    -- 
    By order of the Superior Court of the State of California 6/28/02,
    do not forward this email or its contents to Kenneth Y. Goldberg

  • Reading Dr. Wallace's opinion of the current academic establishment reminds me all too much of my own dissatisfaction with the entire education system. Wonder if something along the lines of an Open Source Research Funding initiative is warranted? I only mention this because Open Source seems to be the answer to many of the woes of the software industry. Might it not also solve some of the problems of academia? Drive the money and research dollars from those of the public who are interested in an area to those who wish to do the research in that area. As a side effect it could also have an impact on who gets the recognition and eventually who are put in positions of influence.
    • Imagine Slashdot. Imagine that only Ph.D.'s were allowed to post articles and comments. Imagine that all the comments were posted by Anonymous Cowards. Imagine that if enough Anonymous Cowards got together, they could take your original article down altogether. Now imagine that instead of the internet, we use photocopies, printing presses, and the postal service to communicate. This is called "academic peer review." This is how science "works".

      I'm beginning to appreciate the advantages of this open source trial by a jury of my peer reviewers, especially the ones who are not Anonymous Cowards.

      Dr. Rich
