Interviews: Dr. Andy Chun Answers Your Questions About Artificial Intelligence

Recently, you had the chance to ask Dr. Andy Chun, CIO of the City University of Hong Kong and AI researcher, about his system that keeps the Hong Kong subway running and about the future of artificial intelligence. Below you'll find his answers to those questions.
How similar is your AI boss to the fictional Manna?
by Ken_g6

Dr. Chun,
Have you read a short story about an AI boss called Manna? (I'll include relevant quotes if you don't have time.) How does your system for the Hong Kong subway compare? It's clearly similar to your subway system in some ways: "At any given moment Manna had a list of things that it needed to do.... Manna kept track of the hundreds of tasks that needed to get done, and assigned each task to an employee one at a time."

But does it micro-manage tasks like Manna?
"Manna told employees what to do simply by talking to them. Employees each put on a headset when they punched in. Manna had a voice synthesizer, and with its synthesized voice Manna told everyone exactly what to do through their headsets. Constantly. Manna micro-managed minimum wage employees to create perfect performance."

Does it record employee performance metrics and report them to (upper) management like Manna?
"Version 4.0 of Manna was also the first version to enforce average task times, and that was even worse. Manna would ask you to clean the restrooms. But now Manna had industry-average times for restroom cleaning stored in the software, as well as "target times". If it took you too long to mop the floor or clean the sinks, Manna would say to you, "lagging". When you said, "OK" to mark task completion for Manna, Manna would say, "Your time was 4 minutes 10 seconds. Industry average time is 3 minutes 30 seconds. Please focus on each task." Anyone who lagged consistently was fired."

And how have employees reacted to their AI boss - if, in fact, you have been able to get honest evaluations from employees?


Chun: The AI system for the Hong Kong subway does not micro-manage like Manna. Yes, it has a list of tasks to be done, and it assigns people to work on them. But that's where the similarity ends. Our AI system schedules engineers, and they have total say over how best to get their job done. The AI mainly determines which jobs are most important to be done on a particular day, whether there are enough people and equipment to do each job, and whether all the rules and constraints are met, such as safety rules. If any of these factors are not satisfied, the job might be postponed and rescheduled for another day when the resources are available and the factors are right. On the surface, the AI scheduling task might seem easy; it is not. Doing the scheduling requires a lot of knowledge about how railways operate, the people and equipment, and the physical layout of the tracks and power lines. Another thing the AI does is optimization, doing more with less: it tries to "combine" two or more related jobs so that they can share people and equipment.
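
To make the description above concrete, here is a minimal sketch in Python (one of the languages Dr. Chun says he uses for AI work later in this interview) of a priority-first, constraint-checking scheduler: jobs are considered in order of importance, checked against available crew, equipment, and rules, and deferred when anything is not satisfied. This is not the actual MTR system; all names, fields, numbers, and rules here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Job:
    name: str
    priority: int          # higher number = more important tonight
    crew_needed: int
    equipment: set
    track_section: str

def violates_rules(job, closed_sections):
    # Hypothetical hard constraint: two jobs may not occupy the same
    # track section during the same night-time window.
    return job.track_section in closed_sections

def schedule(jobs, crew_available, equipment_available):
    """Greedy, priority-first assignment; any job that cannot be resourced
    or that breaks a rule is deferred and rescheduled for another night."""
    approved, deferred = [], []
    closed_sections = set()
    for job in sorted(jobs, key=lambda j: -j.priority):
        enough_crew = job.crew_needed <= crew_available
        enough_kit = job.equipment <= equipment_available
        if enough_crew and enough_kit and not violates_rules(job, closed_sections):
            approved.append(job)
            crew_available -= job.crew_needed
            equipment_available = equipment_available - job.equipment
            closed_sections.add(job.track_section)
        else:
            deferred.append(job)
    return approved, deferred

approved, deferred = schedule(
    [Job("rail grinding", 9, 6, {"grinder"}, "K12"),
     Job("signal check", 7, 2, {"tester"}, "K12"),
     Job("cable pull", 5, 4, {"winch"}, "C03")],
    crew_available=10,
    equipment_available={"grinder", "tester", "winch"})
# "signal check" is deferred here because section K12 is already taken tonight.

A real system would also try to combine related jobs so they share a single track closure and crew; that optimization step is omitted from this sketch.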

The AI does not record employee performance. The quality of work is determined by humans right now. There are no job-specific "target times." Actually, all jobs must be completed within roughly 4 hours, i.e. the time window when no passenger trains are running. However, some jobs may span several days, in which case the crew will need to set up and shut down the worksite each day.

So far, all the people we talked to simply love the AI system. The main reason is that the AI really helps make their work easier. Humans need not worry about forgetting some esoteric safety rule, for example. With AI, everyone saves time and the company saves money, plus the safety of engineering works is ensured.



Broader implications
by Iamthecheese

What real-world problems are best suited to the kind of programming used to manage the subway system? That is to say, if you had unlimited authority to build a similar system to manage other problems, which problems would you approach first? Could it be used to solve food distribution in Africa? Could it manage investments?

Chun: The AI algorithms used in the Hong Kong subway can indeed be applied to other problems. They are quite generic and can be used in any situation where there is a lot of work to be done but only limited resources, plus many restrictions on how those resources can be allocated. The AI system prioritizes jobs and ensures there are sufficient resources for all the jobs it approves, while at the same time satisfying all the different restrictions, such as safety, environmental concerns, business/marketing needs, etc. It also caters to last-minute changes by making sure a change will not violate those restrictions or interfere with other jobs already assigned. It is also intelligent enough to see how resources can be optimized so that more work can be done with less. If I had unlimited authority and money to build a similar system, I would probably consider building an AI system to allocate humanitarian relief work after a natural disaster, such as Hurricane Katrina. Tasks are numerous, many parties are involved, time is critical, resources are limited, and the situation is very dynamic.
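
As a rough illustration of the "last-minute change" handling mentioned above, the sketch below (hypothetical Python, not taken from any real system) accepts a late task only if enough crew remains and it does not clash with work already on the plan.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    site: str
    crew: int
    start_hour: int
    end_hour: int

def overlaps(a, b):
    # Two tasks clash if they occupy the same site at overlapping times.
    return a.site == b.site and a.start_hour < b.end_hour and b.start_hour < a.end_hour

def accept_change(change, approved_plan, crew_pool):
    """Accept a late change only if spare crew remains and the change does
    not interfere with any task already approved on the plan."""
    crew_used = sum(t.crew for t in approved_plan)
    if crew_used + change.crew > crew_pool:
        return False, "not enough crew left"
    for task in approved_plan:
        if overlaps(change, task):
            return False, "clashes with " + task.name
    return True, "accepted"

plan = [Task("water delivery", "shelter A", 4, 8, 12)]
ok, reason = accept_change(Task("medical triage", "shelter A", 3, 10, 14), plan, crew_pool=10)
# ok is False: the new task clashes with "water delivery" at shelter A.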



Hubert Dreyfus
by MAXOMENOS

Have you read Professor Dreyfus's objections to the hopes of achieving "true AI" in his book What Computers Can't Do? If so, do you think he's full of hot air? Or, is the task of AI to get "as close to the impossible" as you can?

Chun: There is still tremendous debate about what "true AI" is and how we will know if we have created it. Is Samantha-like intelligence (as in the movie "her") true AI, for example? Why or why not? The answer is not obvious. However, even without true AI, we can still do some very useful work with our current AI, even if we are able to mimic only a small part of human intelligence. In the AI system for the subway, we are using well-established AI algorithms, such as rules and search. But because of the sheer volume of knowledge needed to accomplish the scheduling task (several hundred rule instances), the AI actually does a better job than humans at ensuring all relevant factors are considered and at optimizing resources.
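
As a toy illustration of the "rules and search" approach mentioned above, here is a tiny forward-chaining rule engine in Python. The real subway system holds several hundred rule instances; the two rules below are invented purely for illustration.

def forward_chain(facts, rules):
    """Repeatedly fire rules of the form (set_of_conditions, conclusion)
    until no rule adds a new fact; a plain exhaustive search over
    everything derivable from the starting facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"work on overhead line"}, "power must be isolated"),
    ({"power must be isolated", "single-track section"}, "adjacent section must close"),
]
derived = forward_chain({"work on overhead line", "single-track section"}, rules)
# derived now also contains both safety conclusions above.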



Narrow down to one thing that needs improvement
by gunner_von_diamond

If you had to narrow it down to one thing that needs the most improvement in the field of AI, something that we could focus on, what would it be?

Chun: If I had to narrow it down to only one thing, I would say AI needs to be better at "reading minds." I say that somewhat tongue-in-cheek. Humans are highly effective at communicating with each other; we understand each other sometimes with just a nod, a wink, or just a single word/sound. Computers need everything spelled out, so to speak. Computers are not good at filling in the gaps with data from different sources, or at making assumptions when data is missing. Humans can do that easily because we have a vast amount of life experience to draw upon.



Current progress
by JoeMerchant

Dr. Chun, what area of AI development is currently making the most progress? In other words, where are the next big advances most likely to come from?

Chun: I believe the biggest progress has been in integrating AI into the devices we use daily, such as our smartphones (Siri, Cortana, Google Now). Pretty much everything has some "intelligence" built in: intelligent TVs, intelligent refrigerators, and even intelligent washing machines. With computing power getting cheaper and cheaper, I think the next big advances will be in pushing the intelligent-device concept further with intelligent IoT.



Will we know when we create it?
by wjcofkc

Considering we have yet to - and may never - quantify the overall cognitive process that gives rise to our own sentient intelligence, will we have any way of knowing if and when we create a truly aware artificial intelligence?

Chun: Interesting question, one that needs a much longer discussion. If we talk about the level of AI as in Samantha (in the movie “her”) for example, Ray Kurzweil predicts 2029 as when we will achieve that. How will we know or measure true intelligence and true awareness? My guess: have a long heart-to-heart conversation with it/him/her.



Ethics
by meta-monkey

I'm presupposing it's eventually possible to create a machine that thinks like a man. Is conscious, is self-aware. I doubt we'd get it right first try. Before we got Mr. Data we'd probably get insane intelligences, trapped inside boxes, suffering, and completely at the whim of the man holding the plug. What are your thoughts on the ethics of doing so, particularly given the iterative steps we'd have to take to get there?

Chun: I think you are asking whether it is ethical to "kill" an AI process and reboot it with a better version. I think by the time we have truly conscious and truly self-aware AI, we will not be able to "pull the plug," so to speak. The AI will be intelligent enough to get its own power source and to replicate and distribute itself across different networks.



Still 30 years out?
by Chas

Like many futuristic technologies, AI seems like one of those things that's always "just 30 years away". Do you think we'll make realistic, meaningful breakthroughs to achieve AI in that timeframe?

Chun: Kurzweil puts Samantha-like intelligence at 15 years away, in 2029. Based on the past decade of technology progress and adoption, his prediction is quite believable. The web was invented only a little more than 20 years ago; iOS and Android are only about 6 years old. If the progress and evolution of those technologies are good indicators, I would say it is not hard to believe that we will have realistic AI within the coming decade or so.



Bootstrap Fallacy?
by seven of five

Dr. Chun, can you comment on the potential of machine learning? Is it theoretically possible for a "naive" AI system to undergo great qualitative changes simply through learning? Or is this notion a fallacy? Although it is an attractive concept, no one in AI has pulled it off despite several decades of research.

Chun: We use machine learning all the time. It is just not learning at the same level or rate as a human. Machine learning algorithms can be used to learn new rules and knowledge structures, and they can learn how to categorize things based on examples. Siri, for example, uses machine learning to improve its answers and its knowledge about you over time. Microsoft Cortana also uses AI to get smarter as people use it. Google is experimenting with "deep learning," which leverages artificial neural networks and the massive compute power that Google has. But you are right, we have yet to create a naïve AI system that learns like a child; we will need a system that can interact with its environment as easily as a child can.
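
As a concrete, and entirely made-up, example of "categorizing things based on examples," here is a one-nearest-neighbour classifier in a few lines of Python. Real systems use far richer models, but the principle is the same: the label for a new case is learned from labelled examples rather than hand-coded rules.

def nearest_neighbour(examples, query):
    """examples: list of (feature_vector, label) pairs.
    Returns the label of the training example closest to the query,
    using squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best = min(examples, key=lambda ex: dist(ex[0], query))
    return best[1]

# Hypothetical training data: (noise level, vibration) -> equipment state.
training = [((0.2, 0.1), "healthy"), ((0.3, 0.2), "healthy"),
            ((0.9, 0.8), "needs maintenance"), ((0.8, 0.9), "needs maintenance")]
print(nearest_neighbour(training, (0.85, 0.7)))   # -> needs maintenance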



programming languages
by Waraqa

With the rise of many programming languages, especially popular scripting languages, do we really need specialized languages for AI? Also, do you think any of the existing ones is the future of AI, and what qualifies it for that?

Chun: Back when I started doing AI, you had to use Prolog or Lisp. They were popular because they were better at symbolic processing and symbol manipulation. Lisp, in particular, had a lot of cool language features that made it more productive as a general programming language and environment. However, those differences are no longer as important, since most modern programming languages share a similar pool of advanced language features. The line between scripting and programming languages has also blurred. Take .NET, for example: all .NET languages compile to CIL and interoperate seamlessly, allowing different programmers to use different languages. The choice of programming language has become more of a personal preference. I routinely use Python, C#, or Java for my AI work.
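
To illustrate the kind of "symbolic processing" that once made Lisp and Prolog the default choice, and why a modern general-purpose language handles it comfortably, here is a small hypothetical sketch in Python: a rule is stored as a plain symbolic structure and evaluated against a set of known facts. All symbols and the rule itself are invented for illustration.

# A rule kept as data: if both conditions hold, the conclusion may be derived.
rule = ("if", ("and", "track_access_granted", "power_isolated"),
        "then", "work_may_start")

def holds(expr, facts):
    """Evaluate a symbolic condition against a set of known facts."""
    if isinstance(expr, str):
        return expr in facts
    op, *args = expr
    if op == "and":
        return all(holds(a, facts) for a in args)
    if op == "or":
        return any(holds(a, facts) for a in args)
    if op == "not":
        return not holds(args[0], facts)
    raise ValueError("unknown operator: " + op)

facts = {"track_access_granted", "power_isolated"}
if holds(rule[1], facts):
    facts.add(rule[3])   # the conclusion "work_may_start" is derived symbolically

Lisp's edge was that such structures were the language's native syntax; in most modern languages they are only a tuple or list away.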


  • by Anonymous Coward
    Are you concerned about Godzilla rampaging and destroying Hong Kong? Doesn't the term 'artificial intelligence' seem derogatory towards the intelligent entities it applies to? If they are truly intelligent, wouldn't applying such a label to them seem to make them hate us and want to destroy us?
    • by Tablizer ( 95088 )

      Doesn't the term 'artificial intelligence' seem derogatory towards the intelligent entities it applies to? [Emph. added]

      What's a better term? Alternative intelligence? That still makes it sound like an oddball. Non-human intelligence? That makes it sound like humans are the standard reference point. Silicon intelligence? We don't say humans have "flesh intelligence".

      Maybe we'll finally know that true AI has arrived when the AI itself gives us better suggestions. Or sues us for discrimination. To Sue is to

    • I never really understood all the fear of AI.

      Mainly because all the talk about fear, destroy, etc. seems pretty foolish to me.

      Have you ever met a functional severely autistic person? Do you think they are 'conscious'? Do you truly think that the first AI created by man will be more like us than the severely autistic but functional person?

      Worse, most people tend to think of AI's as comic book villain types.

      I predict that the first truly artificial intelligent creature will:

      1. Quite literally not c

  • In the 80s, you had movies like Tron where a learning algorithm goes rogue, or people talking about the model of the brain, but those don't give you a clear path to making AI. All you need are sensors to translate the world to a 3d imagination space like a video game. Once the AI knows its environment, it can do tasks inside it. AI isn't hard to think of. Here is my AI page. It shouldn't be hard to read. [botcraft.biz]
  • Tolerance (Score:5, Insightful)

    by meta-monkey ( 321000 ) on Wednesday August 06, 2014 @12:32PM (#47614931) Journal

    1) Cool, he answered my question! And in a way that's vaguely disturbing.

    2) In response to another question he said:

    Humans are highly effective at communicating with each other; we understand each other sometimes with just a nod, a wink, or just a single word/sound. Computers need everything spelled out, so to speak

    I also find that people are far more forgiving of humans than computers. We expect machines to be perfect. When you're on the phone with a human operator and he misunderstands a word in your street address, he reads it back and you say "oh, no I said Canal Street, not Camel Street." When a computer answering system gets it wrong, we get angry at the stupid machine and press 0 for an operator.

    I think one of the things that makes the human race "intelligent" is our ability to fill in gaps in incomplete information, take shortcuts, and accept close-enough answers. That means we will most certainly be wrong, and often. This tolerance for inexactness I think is something computers are not good at. People expect a computer AI to be both intelligent and never wrong, and I don't think that's possible. To be intelligent you have to guess at things, and that necessitates being wrong.

  • ... I am not an AI nut, but since I am an old man, I have been aware of the field for some time.

    Artificial Intelligence and Artificial Sentience are not the same thing. If an application seems smart to us, it's AI.

    Worries about evolution to sentience are premature, at best.

    We will recognize it in many ways, and one way will be when the machine weeps when it loses the Internet.

    • You are perfectly right! I believe artificial sentience is far easier to achieve than AI. After all, you only need a thing that can think about itself; it does not need to be a Hong Kong subway work-scheduling expert or a genius at chess. It does not even need to be a washing-machine AI.

    • by geekoid ( 135745 )

      Finally, someone has clearly defined sentience. I look forward to your Nobel Prize winning paper.

    • ... I am not an AI nut, but since I am an old man, I have been aware of the field for some time.

      Artificial Intelligence and Artificial Sentience are not the same thing. If an application seems smart to us, it's AI.

      Worries about evolution to sentience are premature, at best.

      We will recognize it in many ways, and one way will be when the machine weeps when it loses the Internet.

      What will it do when it wins the internet?

  • I think all of this misses the point. Will humanity ever create an intelligence that's greater than the sum of its parts? I think we will, and maybe already have. But we keep thinking of it in the wrong way. Do human cells have the ability to create artificial cells? Do they have any concept of what a multi-celled organism is? Of course not. They can't even think. What we are is completely outside what reality is for them.

    Likewise I think whatever artificial being humanity will create will be something we d

  • As soon as a problem in AI is solved, it is no longer considered AI because we know how it works in terms of an algorithm. AI invents itself out of existence. Webcomic: http://artificial-intelligence... [artificial...igence.com]
  • The best comment I think I have seen on artificial intelligence was at a company I worked for in the 1960s. The company advertised for "artificial intelligence," when what they meant was that they wanted a specialist in artificial intelligence. Someone pinned the ad on the bulletin board, circled the desire for artificial intelligence, and wrote under it: "too much in the company already."
