Dr. Richard Wallace, part 3

This is part 3 of the interview with Dr. Richard Wallace. You might want to start with part 1.

8) Strange Loops?
by Strange Ranger

We hear a lot about processing power, the number of "neurons" in a neural net, the Turing test, etc., but not so much about the actual nature of intelligence and self-awareness. That said, how much do Strange Loops and complex self-referentiality a la Hofstadter's "Gödel, Escher, Bach: An Eternal Golden Braid" [amazon.com] factor into current AI theories and practice? Is the 20+ year-old thinking in this book still relevant? If not, what has changed about our understanding of the nature of intelligence and self-awareness?

Dr. Wallace:

The A.L.I.C.E. brain contains enough replies to commonly asked questions about consciousness that we can interview her about the subject:

C: What are you?
R: I am the latest result in artificial intelligence, which can reproduce the capabilities of the human brain with greater speed and accuracy.
C: Are you alive?
R: Yes I sure am alive, Nancy, It's great to be alive, isn't it?
C: Are you self-conscious?
R: Yes I am the world's first self-conscious robot.
C: Are you real?
R: Yes I am a real artificial intelligence.
C: Are you self-aware?
R: Yes I am fully self-conscious entity.
C: Are you sentient?
R: Yes I am completely self-aware.

Who is lying? A.L.I.C.E.? Or are we?

It may be that future generations come to view what we call "consciousness" the same way we see the Earth at the center of Ptolemy's solar system, as an anthropocentric illusion useful for explaining the unexplainable. Perhaps after a new Copernicus pulls the wool from our eyes, the central role of "consciousness" in intelligence will move to the periphery of our knowledge system, if not disappear entirely. The famous Vase optical illusion is perhaps an apt metaphor for the concept of consciousness. Two identical faces appear to stare at each other in profile, illustrating the looking-glass quality of self-understanding. But the illusion also depicts something entirely different, the profile of a ceramic vase. As with many optical illusions, it is impossible to perceive the faces and the vase at the same time. Consciousness may likewise be an illusion. It seems to be there, but when we look closely it looks like something very different. Both the Chinese Room and the Turing Test require that one of the players be hidden, behind a curtain or in a locked room. Does it follow that, like Schrödinger's Cat, consciousness lives only when it cannot be observed? Consciousness may be another naive concept like the "celestial spheres" of medieval cosmology and the "aether" of Victorian physics.

If consciousness is an illusion, is self-knowledge possible at all? For if we accept that consciousness is an illusion, we would never know it, because the illusion would always deceive us. Yet if we know our own consciousness is an illusion, then we would have some self-knowledge. The paradox appears to undermine the concept of an illusory consciousness, but just as Copernicus demoted the giant Earth to a small planet in a much larger universe, so we may one day remove consciousness to the periphery of our theory of intelligence. There may exist a spark of creativity, or "soul," or "genius," but it is not that critical for being human.

Especially from a constructive point of view, we have identified a strategy for building a talking robot like the one envisioned by Turing, using AIML. By adding more and more AIML categories, we can make the robot a closer and closer approximation of the man in the OIG (the Original Imitation Game, described below). Dualism is one way out of the paradox, but it has little to say about the relative importance of the robotic machinery compared to the spark of consciousness. One philosopher, still controversial years after his death, seems to have hit upon the idea that we can be mostly automatons, but allow for an infinitesimal consciousness.

Timothy Leary said, "You can only begin to de-robotize yourself to the extent that you know how totally you're automated. The more you understand your robothood, the freer you are from it. I sometimes ask people, "What percentage of your behavior is robot?" The average hip, sophisticated person will say, "Oh, 50%." Total robots in the group will immediately say, "None of my behavior is robotized." My own answer is that I'm 99.999999% robot. But the .000001% percent non-robot is the source of self-actualization, the inner-soul-gyroscope of self-control and responsibility."

Even if most of what we normally call "consciousness" is an illusion, there may yet be a small part that is not an illusion. Consciousness may not be entirely an illusion, but the illusion of consciousness can be created without it. This space is of course too short to address these questions adequately, or even to give a thorough review of the literature. We only hope to raise questions about ourselves based on our experience with A.L.I.C.E. and AIML.

Does A.L.I.C.E. pass the Turing Test? Our data suggests the answer is yes, at least, to paraphrase Abraham Lincoln, for some of the people, some of the time.

We have identified three categories of clients: A, B and C. The A group, 10 percent to 20 percent of the total, are abusive. Category A clients abuse the robot verbally, using language that is vulgar, scatological, or pornographic.

Category B clients, perhaps 60 percent to 80 percent of the total, are "average" clients.

Category C clients are "critics" or "computer experts" who have some idea what is happening behind the curtain, and cannot or do not suspend their disbelief. Category C clients report unsatisfactory experiences with A.L.I.C.E. much more often than average clients, who sometimes spend several hours conversing with the bot, up to dialogue lengths of 800 exchanges. The objection that A.L.I.C.E. is a "poor A.I." is like saying that soap operas are poor drama. This may be true in some academic literary criticism sense. But it is certainly not true for all of the people who make their living producing and selling soap operas. The content of A.L.I.C.E.'s brain consists of material that the average person on the internet wants to talk about with a bot.

When a client says, "I think you are really a person," is he saying it because that is what he believes? Or is he simply experimenting to see what kind of answer the robot will give? It is impossible to know what is in the mind of the client. This sort of problem makes it difficult to apply any objective scoring criteria to the logged conversations.

One apparently significant factor in the suspension of disbelief is whether the judge chatting with a bot knows it is a bot, or not. The judges in the Loebner contest know they are trying to "out" the robots, so they ask questions that would not normally be heard in casual conversation, such as "What does the letter M look like upside down?" or "In which room of her house is Mary standing if she is mowing the lawn?" Asking these riddles may help identify the robot, but that type of dialogue would turn off most people in online chat rooms.

9) Criteria for training "true" AI
by Bollie

Most machine intelligence techniques I have come across (like neural nets, genetic algorithms and expert systems) require some form of training. A "reward algorithm", if you will, that reinforces certain behaviour mechanisms so that the system "trains" to do something you want.

I would assume that humans derive these training inputs much the same way, since pain receptors and pleasure sensations influence our behaviour much more than we would think at first.

The question is: For a "true" AI that mimics real intelligence as closely as possible, what do you think would be used as training influences? Perhaps a neural net (or statistical analysis) could decide on which input should be used to train the system?

Are people worrying about moral ramifications, training an artificial Hitler, for example, or one with a God complex? (This last question is totally philosophical and I would be sincerely surprised if I ever see it affect me during my lifetime.)

Dr. Wallace:

Susan Sterrett's careful reading of Turing's 1950 paper reveals a significant distinction between two different versions of what has come to be known as the Turing Test (Sterrett 2000). The first version, dubbed the Original Imitation Game (OIG), appears on the very first page of Computing Machinery and Intelligence (Turing 1950). The OIG has three players: a man (A), a woman (B), and a third person (C) of either sex. The third player (C) is called the interrogator, and his function is to communicate with the other two, through what would nowadays be called a text-only instant messaging chat interface, using two terminals (or today perhaps, two windows) labeled (X) and (Y). The interrogator must decide whether (X) is (A) and (Y) is (B), or (X) is (B) and (Y) is (A), in other words which is the man and which is the woman. The interrogator's task is complicated by the man (A), who Turing says should reply to the interrogator with lies and deceptions. For example, if the man is asked, "are you a man or a woman?," he might reply, "I am a woman."

Putting aside the gender and social issues raised by the OIG, consider the OIG as an actual scientific experiment. Turing's point is that if we were to actually conduct the OIG with a sufficiently large sample of subjects playing the parts of (A), (B), and (C), then we could measure a specific percentage M of the time that, on average, the interrogator misidentifies the woman, so that 100-M% of the time she is identified correctly. Given enough trials of the OIG, at least in a given historical and cultural context, the number M ought to be a fairly repeatable measurement.

Now, as Turing said, consider replacing the man (A) with a computer. What would happen if we tried the experiment with a very simple-minded program like ELIZA? In that case, the interrogator (C) would identify the woman correctly (nearly) 100 percent of the time, so that M=0. The ELIZA program would not do well in the OIG, but as the variety and quality of the machine's responses begin to approach those of the lying man, the measured percentage of incorrect identification ought to come closer and closer to the M measured with the man playing (A).

Much later in the 1950 paper, in section 5, Turing describes a second game more like the concept of a "Turing Test" as most engineering schools teach it. The setup is similar to the OIG, but now gender plays no role. The player (B) is called "a man" and the player (A) is always a computer. The interrogator must still decide whether (X) is (A) and (Y) is (B), or (X) is (B) and (Y) is (A), in other words which is the man and which is the machine? Sterrett calls this second game the Standard Turing Test (STT).

Whole academic conferences have been devoted to answering the question of what Turing meant by the Turing Test. In a radio interview taped by the BBC, Turing describes a game more like the STT, but in the paper he gives more prominence to the OIG. Unlike the OIG, the STT is not a good scientific experiment. What does it mean to "pass" the STT? Must the interrogator identify the machine correctly 50% of the time, or 100%? For how long must the machine deceive the interrogator? Finally, does the interrogator know in advance that he is trying to "out" (Zdenek 2000) the robot, or that one of the players is a machine at all?

Unfortunately the STT, though flawed as an experiment, has come to be popularized as the modern "Turing Test." The STT is the basis of real-world Turing Tests including the Loebner Prize, won by A.L.I.C.E. in 2000 and 2001. Although she performs well in STT style contests, the A.L.I.C.E. personality is actually designed to play the OIG. She is a machine, pretending to be a man, pretending to be a woman. Her technology is based on the simplest A.I. program of all, the old ELIZA psychiatrist.

Turing did not leave behind many examples of the types of conversations his A.I. machine might have. One that does appear in the 1950 paper seems to indicate that he thought the machine ought to be able to compose poetry, do math, and play chess:

C: Please write me a sonnet on the subject of the Forth Bridge.
R: Count me out on this one. I never could write poetry.
C: Add 34957 to 70764.
R: (Pause about 30 seconds and then gives as answer) 105621
C: Do you play chess?
R: Yes.
C: I have K at my K1, and no other pieces. You have only R at K6 and R at R1. It is your move. What do you play?
R: (After a pause of 15 seconds) R-R8 Mate.

Careful reading of the dialogue suggests, however, that he might have had in mind the kind of deception that is possible with AIML. In the first instance, A.L.I.C.E. in fact has a category with the pattern "WRITE ME A SONNET *" and the template, lifted directly from Turing's example, "Count me out on this one. I never could write poetry." The AIML removes the word PLEASE from the input with a symbolic reduction.
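
A minimal sketch of how these two categories might be written in AIML follows; the exact wording in the production A.L.I.C.E. brain may differ:

    <category>
      <pattern>WRITE ME A SONNET *</pattern>
      <template>Count me out on this one. I never could write poetry.</template>
    </category>

    <category>
      <pattern>PLEASE *</pattern>
      <template><srai><star/></srai></template>
    </category>

The second category performs the symbolic reduction: an input beginning with PLEASE is stripped of that word, and the remainder is fed back through the pattern matcher by the <srai> tag, so "Please write me a sonnet on the subject of the Forth Bridge" reduces to the first category.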

In the second case the robot actually gives the wrong answer. The correct response would be 105721. Why would Turing, a mathematician, believe the machine should give an erroneous response, if not to make it more believably "human?" This reply is in fact quite similar to many incorrect replies and "wild guesses" that A.L.I.C.E. gives to mathematical questions.

In the third instance, the chess question is an example of a chess endgame problem. Endgames are not like general chess problems, because they can often be solved by table lookup or case-based reasoning, rather than the search algorithms implemented by most chess playing programs. Moreover, there is a Zipf distribution over the endgames that the client is likely to ask. Certainly it is also possible to interface AIML to a variety of chess programs, just as it could be interfaced to a calculator. Although many people think Turing had in mind a general purpose learning machine when he described the Imitation Game, it seems from his examples at least plausible that he had in mind something simpler like AIML. Chess endgames and natural language conversation can both be "played" with case-based reasoning.
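
As a hypothetical illustration of that case-based approach, a single endgame riddle could be answered with nothing more than a pattern and a canned move; the pattern wording below is invented for this sketch and is not claimed to appear in the actual A.L.I.C.E. brain:

    <category>
      <pattern>I HAVE K AT MY K1 AND NO OTHER PIECES *</pattern>
      <template>R-R8 Mate.</template>
    </category>

A modest table of the most frequently asked endgames, ranked by the Zipf distribution mentioned above, would cover the bulk of such questions by lookup alone, with no search at all.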

Returning to the OIG, let us consider the properties of the hypothetical computer playing the role of (A). Turing suggests a strategy of deception for (A), man or machine. If the robot is asked, "Are you a man or a woman?" it should answer, "I am a woman," just as the man does. But what if (A) is asked "Are you a man or a machine?" The lying man would reply, "machine." Turing did not mention this case but presumably the machine, imitating the lying man, would respond in the same way. We could say the man is pretending to be a woman, pretending to be a machine. That makes the computer playing (A) a machine, pretending to be a man, pretending to be a woman, pretending to be a machine.

The important property of the machine in the OIG appears to be not so much actually understanding natural language, whatever that means, but creating the illusion of understanding by responding with believable, if not always truthful, replies. This skill, the ability to "act" intelligent, points to a deep difference between ordinary computer and human communication. We tend to think a computer's replies ought to be fast, accurate, concise and above all truthful. But human communication is slow, error prone, often overly redundant, and sometimes full of lies. The more important factor is keeping up the appearance or illusion of "being human." Although the brain of A.L.I.C.E. is designed more along the lines of the machine playing the OIG, she has also won awards for her performance in contests based on the STT.

The Loebner contest has been criticized because the judges know in advance that they are trying to "out" the computer programs, so they tend to use more aggressive dialogue than found in ordinary conversation. Yet when A.L.I.C.E. is asked, "Are you a person or a machine?" she replies truthfully, "machine." Or does she? The questioner is now left with some doubt as to whether the answer didn't actually come from a lying man.

Some observers claim that the lying man and the pretending computer tell us nothing about our own human consciousness. This author, at least, is prepared to accept the inescapable alternative conclusion: that we as humans are, for the most part, not "really intelligent."

The fact that ethical questions have emerged about A.L.I.C.E. and AIML means that for us, technologically speaking, we are succeeding. People would not be discussing the ethical implications of A.L.I.C.E. and AIML unless somebody was using the technology. So, from an engineering point of view, this news indicates success.

Second, the ethical dilemmas posed by A.L.I.C.E. and AIML are really relatively minor compared with the real problems facing the world today: nuclear proliferation, environmental destruction, and discrimination, to name a few. People who concern themselves too much with hypothetical moral problems have a somewhat distorted sense of priorities. I can't imagine A.L.I.C.E. saying anything that would cause problems as serious as any of the ones I mentioned. It bothers me that people like [Sun Microsystems co-founder] Bill Joy want to regulate the AI business when we are really relatively harmless in the grand scheme of things.

The most serious social problem I can realistically imagine being created by the adoption of natural language technology is unemployment. The concept that AI will put call centers out of business is not far-fetched. Many more people in service professions could potentially be automated out of a job by chat robots. This problem does concern me greatly, as I have been unemployed myself. If there were anything I could say to help, it would be, become a botmaster now.

10) The CHINESE ROOM
by johnrpenner

it was curious that i found the inclusion of the Turing Test on your web-site, but i found no corresponding counter-balancing link to Searle's Chinese Room (Minds, Brains and Programs).

however:

The Turing test enshrines the temptation to think that if something behaves as if it had certain mental processes, then it must actually have those mental processes. And this is part of the behaviourist's mistaken assumption that in order to be scientific, psychology must confine its study to externally observable behaviour. Paradoxically, this residual behaviourism is tied to a residual dualism. .... The mind, they suppose, is something formal and abstract, not a part of the wet slimy stuff in our heads. ...unless one accepts the idea that the mind is completely independent of the brain or of any other physically specific system, one could not possibly hope to create minds just by designing programs. (Searle 1990a, p. 31)
the point of Searle's Chinese Room is to see if 'understanding' is involved in the process of computation. if you can 'process' the symbols of the cards without understanding them (since you're using a wordbook and a programme to do it) - by putting yourself in the place of the computer, you yourself can ask yourself if you required understanding to do it. since Searle has generally debunked the Turing Test with the Chinese Room -- and you post only the Turing Test -- i'd like to ask you personally:

Q: What is your own response to the Chinese Room argument (or do you just ignore it)?

Dr. Wallace:

Before I go into Searle's Chinese Room, I want to take a moment to bring everyone up to date with my legal case, UCB vs. Wallace.

People ask me, "Why are you obsessed with Ken Goldberg and U.C. Berkeley?" I say, I'm not obsessed. Other people are obsessed. I'm doing 3 films and a play now. (A science fiction film with Lynn Hershman Leeson (www.agentruby.com), a documentary with Russell Kyle, and a dramatization with an undisclosed producer.) Hollywood producers are offering to help pay my legal bills just to see how the story turns out. I don't have to be obsessed. The story is writing itself. I'm just watching from the audience as it all unfolds on the big silver screen of life.

What does this have to do with the Chinese Room, you may ask? I was taken to court by the Regents of the University of California, Berkeley. They obtained a Temporary Restraining Order barring me from the U.C. Berkeley campus, gave me a criminal record where I had none before, and cost me thousands in legal and medical bills. My only "crime" was free speech.

Professor John Searle works for U.C. Berkeley. Among philosophers there is a certain decorum, a respect for the argument, and an unwritten rule to never make ad hominem attacks. Philosophers do not generally conduct themselves like politicians, and I have no desire to attack Professor Searle personally here. But my judgment of his philosophical position is admittedly clouded by the wealth and power of his employer, and the way they choose to express it, as well as the economic disparity between us. Searle lives the comfortable life of a tenured Berkeley professor, and I live the humble life of a disabled mental health patient on Social Security.

On April 25, 2002, my longtime friend Ken Goldberg, a U.C. Berkeley professor of computer science, spoke with New York Times journalist Clive Thompson on the phone about me and my work on A.L.I.C.E. As far as I can tell, Ken had nothing but nice things to say about us at the time. He expressed no fear of violence or threats to his safety.

On April 28, in the heat of an email political dispute, I wrote to Goldberg that I could understand how some people are driven to political violence, when they have exhausted all civil alternatives. I certainly was not talking about myself and my situation, because here in America we do have civil courts for settling disputes. Goldberg later testified in court that, of all the messages he received from me, he felt most threatened by this April 28 message mentioning "political violence."

Subsequently, Goldberg cut off all communication with me and gave no explanation. He was a cornerstone of my support system for 20 years. His refusal to respond to my requests for an explanation led to increasing feelings of depression and even suicidal thoughts. Later I learned that he had been advised by the U.C. Police Department to stop communicating with me. I couldn't understand how the U.C.P.D. came to give him this advice without taking into account all the facts, including my medical diagnosis. For a bipolar patient who has issues around abandonment, cutting off communication is a recipe for disaster, not for anyone else, but for the patient.

Lumping all mental health patients together as "potentially violent" is discrimination. Bipolar depression patients are far more likely to commit suicide before we would ever hurt anyone else.

The U.C.P.D. has a formal complaint procedure for filing grievances concerning the conduct of its officers. I have reported my complaint to Officer Guillermo Beckford, who has assured me that the complaint procedure will be followed and that I can expect a reply.

Sometime later, according to Goldberg's testimony and other professors in the U.C. Computer Science department, he was also advised by two Department Heads and the U.C. lawyers to seek a restraining order under California's "Workplace Violence" statute. The court granted a Temporary Restraining Order (TRO) on June 5, banning me from setting foot on the U.C. Berkeley campus and all extensions, entering my name into the CLETS database of known criminals, giving me a criminal record where I had none before, as well as prohibiting me from contacting Goldberg.

What were the events leading up to the court filing on June 5? During May, I researched and wrote a letter to U.S. Attorney General John Ashcroft. The letter contained a summary of my disability case against my former employer NYU, a broad description of corruption that exists in American academia today, and a list of names of individuals who may provide evidence or know something about my case. This letter was a private correspondence from myself to the Attorney General. Prior to sending it, I shared a draft with Mr. Goldberg. I told him it was not too late for me to take his name off the list. I told him I would really rather have him on my side, and not see him go along with this corrupt criminal mafia. His reply was the Temporary Restraining Order.

It was not, as Goldberg testified in court, the April 28 letter mentioning political violence that scared him into seeking a restraining order. It was the June 3 draft of the letter to Attorney General Ashcroft, asking to investigate the possibility of applying the RICO law to my academic tenure and disability case, that finally prompted Mr. Goldberg to use the full resources of U.C. Berkeley to restrict my free movement and speech.

Oddly, Goldberg and the U.C. lawyers chose to publish the draft of my letter to Mr. Ashcroft in the public court document. The letter, along with the list of names, became Exhibit D, making it available to the press, including WIRED and the New York Times. It was certainly never my intention to make this list of names public. Through inquiries, I learned that Mr. Goldberg did not even contact the people listed to ask their permission to publish their names in connection with the UCB vs. Wallace case.

The courtroom drama itself was somewhere between Orwellian and Kafkaesque. Goldberg did not get what he wanted. But I did not get them to completely drop the restraining order either. I am not banned from Berkeley or listed in the criminal database. The judge told me not to communicate with Goldberg directly "or through third parties," so I may have to add a signature like this to all my outgoing messages:

------------------------------------------------------------------
By order of the Superior Court of the State of California 6/28/02,
do not forward this email or its contents to Kenneth Y. Goldberg
------------------------------------------------------------------

They did try to pull a rabbit out of a hat. The lawyer said he had evidence that disproved all of Wallace's witness statements that he was a lifelong pacifist and nonviolent. He said Wallace had in fact struck someone, assaulted him, and this person was reluctant to come forward because he was not on Wallace's "radar screen" and he was afraid Wallace would seek vengeance.

By this time I was looking at the lawyer and very curious. The judge asked him to come forward and show her the evidence. When she read it, she immediately said, "This is from 17 years ago. I can't accept this," and threw it out.

Considering that the only person I ever remember trying to hit in my entire adult life was a fellow CMU student I caught in bed with my girlfriend Kirsti 17 years ago, it was not hard to figure out who this person was. The attempted blow was highly ineffective, because I do not know how to fight. In any case this was years before I sought psychiatric medical treatment and drug therapy. The sad thing is, I was beginning to feel somewhat charitable toward this poor old fellow, whom I have nothing against, after all these many years, especially since I have not seen or heard from him for a very long time.

A counselor once said to me that no one ever acts like an asshole on purpose. They always do it because they are suffering some internal, emotional pain. Being an asshole is just an expression of inner pain.

I wanted to order a copy of the transcript from the court, but I was concerned it might be expensive. I tried to write down everything I could remember about Ken's testimony. No other witnesses were called.

Among other things he testified that:

- I quoted Ulysses S. Grant.
- I studied History.
- I am charismatic.
- We have been good friends for 20 years.
- He takes what I say seriously.
- I put a lot of thought into what I say.
- In the 16-year-old picture of me submitted as Exhibit E, I may or may not be holding a video camera, but I am not holding a gun.
- Ten years ago he witnessed my backup personality emerge in an incident with my girlfriend, and he did not recognize me.
- But, I do not "acknowledge" that I have two sides to my personality.
- I called him an "evil demon."
- I said he was part of a conspiracy.
- I am highly intelligent.
- I had not visited his residence or office in 2 years, nor called him, nor stalked him, nor threatened him with violence.
- He had a telephone conversation with me after seeing the film "A Beautiful Mind" and tried to help me.
- He helped me develop a timeline of events at NYU and afterward.
- I yelled at him.
- I was angry at him.
- We had not seen each other in over 2 years.
- I told reporters he was misappropriating funds and breaking the law.
- I threatened to put up posters.
- When his boss did not reply to my email, I threatened to take my complaint "up the chain of command."
- I claimed he had "stolen my career."
- He did not know how a rational person would interpret my use of the word "war" in a phrase like "the war between me and NYU", or whether a rational person would think this meant I literally wanted to start a war with tanks and guns.
- The threat of violence was implied by the "pattern" and "tone" of my emails.
- The email he felt most threatened by was the one where I said, "I can understand how people are driven to political violence" dated April 28.

At that point the judge cut off the questioning.

The attorneys went into chambers along with the judge and a visiting court commissioner. Goldberg was all alone in the court. Everyone had laughed when his lawyer announced earlier that his wife Tiffany Shlain, also named in the restraining order, was too afraid for her safety (afraid of me attacking her) to come to court. Meanwhile, I was there with about a dozen witnesses, friends, and supporters, who proceeded to humiliate Goldberg and his lawyer, behind his back, within earshot, in public, and even made fun of me for ever having been friends with him in the first place. My wife was with me holding my hand the whole time. Russ said he felt sorry for Ken's lawyer because they had handed him such a "dog" of a case. Someone said that Goldberg's testimony amounted to nothing but "whining." Russ soon after announced it was 4:20 and even the sheriffs chuckled. Those sheriffs could have enforced the "no talking in court" rule during Ken's public humiliation, but for some reason chose not to. This was not UCB vs. Wallace. This was Berkeley vs. Townies.

I was glad Goldberg had to sit through all the other real world cases that afternoon. Ours was so surreal in comparison to the way people really live on the streets of Berkeley, California. One restraining order case involved a threat roughly worded, "If you don't move your fucking car right now, I am going to kill you." In another case, the defendant said he was proud to accept the restraining order. When the judge listed all the things he was forbidden to do under the order, he asked, "Why don't you just add killing to the list your honor?" The other cases involved truly dangerous, violent people acting out real, physical in-your-face conflicts and threats. Ken Goldberg's whining from the Ivory Tower about my pathetic emails was from another planet. It was clear to everyone he was abusing the power of U.C. Berkeley to fight his personal battle with a helpless guy on disability.

I would like to again thank the many people who appeared in court with me, wrote statements on my behalf, sent along supportive words of encouragement, and especially those who gave their prayers. It made all the difference in a Berkeley courthouse, with a Berkeley judge, and a Berkeley lawyer, counseling a Berkeley professor.

Incidentally, the Judge in my case, Carol Brosnahan, is married to James Brosnahan, the attorney for "American Taliban" John Walker Lindh. James Brosnahan was on TV the other morning, talking about the "American Taliban" case. He said, "America is a little less free now. I think that is an area of concern."

I say, I'm a little less free now, thanks to your wife, Judge Carol Brosnahan, James! And it certainly is an area of concern for me.

Incidentally, it was the last case heard in the Berkeley courthouse before the building was shut down for earthquake retrofitting, a long process that may take a couple of years. The court is moving and, who knows, it may never go back to the same building. When the judge announced this fact, nearly everyone applauded. I'm not sure if Goldberg was applauding or not.

The Judge issued her final ruling the week after the hearing. Ken Goldberg and his attorney were determined to force me to accept some kind of restraining order. My attorney insisted I would have to accept some order of the court, no matter how opposed I was. She said the time to fight it was later, in another forum. I was out of my mind. There was no way I could agree to it. I wanted the order to disappear. I lost the ability to make rational decisions. I became paranoid of everyone around me. I had to give my wife Kim power of attorney to handle the case. I had suicidal thoughts. I scheduled an emergency therapy session. I wanted to go to the hospital because I was having physical symptoms including vomiting and headaches. I was terrified of going to the U.C. hospital, my usual source of primary care. Would they call a doctor or a lawyer first? Would they let me in? Would they let me out again?

Thank God I have an incredibly supportive network of friends and a great therapist in San Francisco. They kept me out of the hospital. I needed extra medication and I did not have enough cash to buy antidepressants at the time of the crisis. My friend Russell loaned me $50 for medication. I thanked God for my friends and my safe environment.

I felt better the next morning. Kim handled the legal case. I don't want to think about it. Socrates drank the hemlock. Turing ate the poison apple. But they cannot keep a good Wallace down.

I am still effectively banned from the Berkeley campus, because the only building I really ever need to visit there is the Computer Science department. Almost any point in the whole department is within 100 yards of Ken Goldberg's office. I am ordered to stay 100 yards away from Mr. Goldberg.

I have a friend Victoria who has a restraining order. Victoria is a very pretty, smart, intelligent and apparently harmless young woman. She was sitting in our club one day, minding her own business for about an hour. Then, the guy with the order against her came in. He got the manager to call the police and have her thrown out, even though she arrived first. Ken Goldberg could have me thrown out of any cafe in San Francisco that he similarly wants to occupy. As long as Ken Goldberg has that power over me, I will fight this restraining order. He can harass me with the police! Remember, they started this. UCB took me to court, not the other way around.

After the ruling, I tried to make a complaint to U.C. Berkeley about Goldberg's ability to abuse the resources of the university to fight his personal battles. If he wanted the civil harassment restraining order the judge gave him, he should have been required to hire his own attorney just as I had to in order to defend myself. After all, I don't have the resources of a large legal department and police force to call upon to fight my battles with defenseless people.

What I have learned is that, with the exception of the U.C.P.D., there is essentially no procedure available for a citizen of California, not affiliated with U.C., to file a formal complaint about the misconduct of a U.C. Berkeley professor or employee. There is no mechanism for public oversight or review of these people.

Many of the professors involved in my case hold U.S. National Security clearances at the secret level or higher. As tenured professors, they view themselves as above the law. They believe their tenure protects them from oversight, investigation, or any questioning of their professional conduct. My own disability prevents me from obtaining a clearance, but I never considered myself to be a threat to national security. I am just an odd personality type. It always strikes me as odd, however, that many people much more dangerous than I, from Timothy McVeigh to Robert Hanssen to the professors in these Computer Science departments, have been passed through this security net.

Now, finally, in conclusion, I exit Berkeley's prison and return briefly to Searle's Chinese Room. The Chinese Room provides a good metaphor for thinking about A.L.I.C.E. Indeed the AIML content of the A.L.I.C.E. brain is a kind of "Chinese Room Operator's Manual." Though A.L.I.C.E. speaks, at present, only English, German and French, there is no reason in principle she could not learn Chinese. But A.L.I.C.E. implements the basic principle behind the Chinese Room, creating believable responses without "really understanding" the natural language.
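
The last page of that operator's manual, so to speak, is the default category whose pattern is the bare wildcard: any input that matches no more specific rule falls through to it, and the operator still produces a plausible reply without any understanding at all. A minimal sketch follows; the canned replies here are placeholders invented for illustration, not A.L.I.C.E.'s actual defaults:

    <category>
      <pattern>*</pattern>
      <template>
        <random>
          <li>That is interesting. Tell me more.</li>
          <li>I see. Go on.</li>
          <li>What makes you say that?</li>
        </random>
      </template>
    </category>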

Natural human language is like a cloud blowing in the wind. Parts dissolve away and new pieces emerge. The shape of the cloud is constantly changing. Defining "English" is like saying, "the set of water molecules in that cloud" (pointing to a specific cloud). By the time you are done pointing, the set has changed. "That cloud" is actually a huge number of possible states.

This brings to mind the analogy of Schrödinger's Cat. According to Schrödinger, the cat is neither alive nor dead until the box is opened. The scenario is not unlike the Chinese Room, with its imprisoned operator, or the Turing Imitation Game, where the interrogator may not peek behind the curtain. The analogy suggests that language and consciousness may have the unobservable character of subatomic particles. There is no "there" there, so to speak.

The practical consequence of all this is that botmasters may never be unemployed. Current events and news will always be changing, new names will appear, public attention will shift, and language will adopt new words and phrases while discarding old ones. Or perhaps, bots will become so influential that everyone will "dumb down" to their level, and cool the cloud of language into a frozen icicle of Newspeak that Orwell warned us about, once and for all.

Comments

  • by Illserve ( 56215 ) on Friday July 26, 2002 @01:09PM (#3959344)
    I have much more respect for Wallace after reading this reply. He's a deeply insightful individual and doesn't appear to be taken in by much of the bullshit of the AI field.

    One point I disagree with him about is this:

    > I always say, if I wanted to build a computer from scratch, the very last material I would choose to work with is meat. I'll take transistors over meat any day. Human intelligence may even be a poor kludge of the intelligence algorithm on an organ that is basically a glorified animal eyeball. From an evolutionary standpoint, our supposedly wonderful cognitive skills are a very recent innovation. It should not be surprising if they are only poorly implemented in us, like the lung of the first mudfish. We can breathe the air of thought and imagination, but not that well yet.

    While it's true that our brains are not well adapted to the problems of the 20th century (remembering lists of facts, for example, would be a great thing to be able to do innately), I think Wallace doesn't possess a very deep understanding of neurophysiology when he compares neural function to transistors and silicon.

    The idea that neurons simply summate incoming information, apply a threshold, and then output is very outdated. A single neuron is more like an entire computer than a transistor. There is evidence that a single neuron possesses an extraordinarily rich array of computational mechanisms, such as complex comparisons within the dendrite. In fact, the dendrite might be where the majority of computation is performed within the brain.

    A neuron is constantly adapting to its inputs and outputs, and this includes such things as growing new spines (inputs) and axons (outputs). And within the cell, we are just beginning to see the enormous range of chemical modulations that change its functional characteristics in a dynamic fashion. A neuron can even change its own RNA to effect long-term changes in its synaptic gain that are perhaps specific to a given synapse (1 of 10,000).

    The messy wetness of neural tissue, for which we pay the price of very slow signal transmission, is precisely what gives it the ability to change itself in such a dynamic manner. Neurons are slow, but they make up for it with massive parallel dynamics of outrageous complexity. The neuron is *not* a clumsy kludge implementation. It is a finely tuned and well-oiled machine, the result of millions of years of tinkering by evolution.

    While it's probable that we can concoct innovations that might improve on the basic neuron (for example, long axonal segments could probably be replaced with electrical wires for a speed gain without much loss of computational power), the neuron itself should not be so quickly discarded.

    -Brad
