AI China Education Technology

Interviews: Ask Dr. Andy Chun About Artificial Intelligence 71

samzenpus (5) writes "Dr. Andy Chun is the CIO for the City University of Hong Kong, and is instrumental in transforming the school to be one of the most technology-progressive in the region. He serves as an adviser on many government boards including the Digital 21 Strategy Advisory Committee, which oversees Hong Kong's long-term information technology strategies. His research work on the use of Artificial Intelligence has been honored with numerous awards, and his AI system keeps the subway in Hong Kong running and repaired with an amazing 99.9% uptime. Dr. Chun has agreed to give us some of his time in order to answer your questions. As usual, ask as many as you'd like, but please, one question per post."
This discussion has been archived. No new comments can be posted.

  • .. when A.I. has _nothing_ to do with consciousness?

    Why isn't the more accurate term "artificial ignorance" used to distinguish itself on the day when "Actual Intelligence" is created / discovered?

    • The only relationship between consciousness and intelligence is that the latter is needed before the former can present itself. Actual artificial intelligence has been created already and is used to solve chess problems, schedule subway maintenance, and answer Jeopardy questions. If your definition of "intelligent" includes "we don't know how it works" then one day you'll wake up to find yourself unintelligent. We're answering the question of how brains work at a rapid pace.

So ignorance is not an antonym of intelligence.
      • by geekoid ( 135745 )

        "... is that the latter is needed before the former can present itself."
        we don't know that.

        • "... is that the latter is needed before the former can present itself."
          we don't know that.

There is also no reason to believe it is true. "Consciousness" is a mostly meaningless word. There is no consensus definition, and no testable or falsifiable phenomenon associated with it. Is a monkey conscious? What about an amoeba? What about the guy in the next cubicle? How is consciousness different from "free will" or having a soul? Intelligence is about observed behavior. Consciousness is about internal state. If an entity behaves intelligently, then it is intelligent, regardless of the internal state.

          • by narcc ( 412956 )

            There is also no reason to believe it is true. "Intelligence" is a mostly meaningless word. There is no consensus definition, and no testable or falsifiable phenomenon associated with it.

            • "Intelligence" is a mostly meaningless word. There is no consensus definition

              Nonsense. Intelligence is the ability to formulate an effective initial response to a novel situation. Not everyone would agree on the exact wording, but most people would generally agree on what intelligence means. Most people would also agree that a dog is more intelligent than a chicken, a monkey is more intelligent than a dog, and that (most) people are more intelligent than monkeys. "General intelligence" means an ability to solve general problems, but intelligence can also exist in domains. For i

            • I guess that makes it easier for you to pass as intelligent?
              Hehe, sorry, but that was a perfect pass!

          • by Teancum ( 67324 )

We know so little about what self-awareness, intelligence, or sentience actually are that nearly every attempt to simulate them meets dead ends in research. Some usefulness comes from legitimate AI research, but at this point it is mostly parlor tricks and a few novel programming concepts with some practical usefulness.

            The only thing that is fairly certain is that somehow a raw physical process is involved with establishing consciousness. Some real effort has b

      • " If your definition of "intelligent" includes "we don't know how it works" then one day you'll wake up to find yourself unintelligent."

        So he'll finally figure out what the rest of us immediately determined when we read his post, then?

  • Broader implications (Score:5, Interesting)

    by Iamthecheese ( 1264298 ) on Thursday July 17, 2014 @12:18PM (#47476277)
    What real-world problems are best suited to the kind of programming used to manage the subway system? That is to say, if you had unlimited authority to build a similar system to manage other problems which problems would you approach first? Could it be used to solve food distribution in Africa? Could it manage investments?
This is what I came to ask. I imagine any system with a lot of sensors and clear measurements of success (like the subway uptime) would be the low-hanging fruit.
      If self-driving cars become popular in the next 15 years, you could mitigate a lot of traffic issues with correct planning (assuming a majority of drivers use them).
  • Would you be so kind as to shut down my pain receptors?
  • Have you read Professor Dreyfus [wikipedia.org]'s objections to the hopes of achieving "true AI" [wikipedia.org] in his book What Computers Can't Do? If so, do you think he's full of hot air? Or, is the task of AI to get "as close to the impossible" as you can?
    • Well, I can't speak for Dr. Chun, but I have known for at least 25 years that computer systems are merely a modeling of humans, albeit subconsciously so. Networking is merely a model of the psychic connection that one can attune to with enough meditation. Claiming that it is a poor assumption that the mind works like a computer has it bass ackwards. Computers function like the human mind, for that is from whence they come. Also, I should point out it is called Artificial Intelligence, not Actual Intelligence.
  • by gunner_von_diamond ( 3461783 ) on Thursday July 17, 2014 @12:28PM (#47476381) Journal
    If you had to narrow it down to one thing that needs the most improvement in the field of AI, something that we could focus on, what would it be?
  • by wjcofkc ( 964165 ) on Thursday July 17, 2014 @12:33PM (#47476451)
    Considering we have yet to - and may never - quantify the overall cognitive process that gives rise to our own sentient intelligence, will we have any way of knowing if and when we create a truly aware artificial intelligence?
  • by JoeMerchant ( 803320 ) on Thursday July 17, 2014 @12:35PM (#47476487)

    Dr Chun,

    What area of AI development is currently making the most progress? In other words, where are the next big advances most likely to come from?

  • by ArcadeMan ( 2766669 ) on Thursday July 17, 2014 @12:38PM (#47476515)

    Dear Dr. Chun,

    why do I have this terrible pain in all the diodes down my left side?

  • Like many futuristic technologies, AI seems like one of those things that's always "just 30 years away".

    Do you think we'll make realistic, meaningful breakthroughs to achieve AI in that timeframe?

  • by wjcofkc ( 964165 ) on Thursday July 17, 2014 @12:46PM (#47476581)
    Slashdot editors,

    Please don't ruin this by turning it into a video interview where you don't actually ask anyone's questions like you did the last one.

    Sincerely,
    Speaking for a lot of us.
  • by Maxo-Texas ( 864189 ) on Thursday July 17, 2014 @12:50PM (#47476637)

    And what's the latest date by which you expect A.I. that is conscious and self-aware in the human and animal sense?

    • by TheLink ( 130905 )

      Uh, but how do you tell when you succeed? Are we even close to discovering what consciousness is?

      Isn't it possible to build a computer that behaves as if it is conscious but isn't? https://en.wikipedia.org/wiki/... [wikipedia.org]

      See also: https://en.wikipedia.org/wiki/... [wikipedia.org]

      This is one of the big mysteries of the universe. There's no need for us to be conscious but we are. Or at least I am, I can't really be 100% sure about the rest of you... ;)

      It's kind of funny that scientists have difficulty explaining one of the very fir

      • Actually, we are pretty close to discovering what consciousness is physically.

        They've found one spot in the brain that, when stimulated electrically, doesn't put you to sleep but turns your "consciousness" off. When the stimulation stops, you recover consciousness without an awareness of any time passing.

        The particular part appears to be acting like a conductor of multiple streams of information from the rest of the brain. For some reason, in 70 years of this type of research, they'd never explored that particular part.

  • by Okian Warrior ( 537106 ) on Thursday July 17, 2014 @12:56PM (#47476679) Homepage Journal

    Can you explain to us exactly what AI is?

    As a definition, the Turing test has problems - it assumes communication, it conflates intelligence with human intelligence, and humans aren't terribly good at distinguishing chatbots from other humans.

    Also, using a test for a definition works well in mathematics, but not so much in the real world. Imagine defining a car as "anything 5 humans say is a car" and then trying to develop one. Without feedback or guidance, the developers have to trot every object in the universe in front of a jury, only to receive a yes/no answer to the question: "is this a car?"

    Many AI texts have a kind-of-fuzzy, feel-good definition of AI that's useless for construction or for distinguishing an AI program from a clockwork one. Definitions like "the study of programs that can think" or "programs that simulate intelligent behaviour" shift the burden of definition (of intelligence) onto the reader, or become circular.

    One could define a car as "a body, frame, 4 wheels, seats, and an engine in this configuration", and note that each of these can be further defined: a wheel is a rim and a tire, a tire is a ring of steel-belted rubber with a stem valve, a stem valve is a rubber tube with a Schrader valve, a Schrader valve is a spring and some gaskets...

    With a constructive definition, one could distinguish between a car and, say: a tractor, a snowmobile, a child's wagon, a semi, and so on. Furthermore, it would be conceptually straightforward to build one: you know where to start, and how to get further information if you are unsure.
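A constructive definition like this can be sketched as nested types, where each part is defined in terms of simpler parts. This is only an illustrative sketch; all class and field names below are invented for the example, not drawn from any real ontology:

```python
# A constructive "definition" of a car as composed parts.
# Every name here is illustrative, not a real standard.
from dataclasses import dataclass, field
from typing import List


@dataclass
class SchraderValve:
    spring_count: int = 1
    gasket_count: int = 2


@dataclass
class StemValve:
    # A stem valve is a rubber tube with a Schrader valve.
    valve: SchraderValve = field(default_factory=SchraderValve)


@dataclass
class Tire:
    # A tire is a ring of steel-belted rubber with a stem valve.
    stem: StemValve = field(default_factory=StemValve)
    steel_belted: bool = True


@dataclass
class Wheel:
    # A wheel is a rim and a tire.
    rim: str = "steel"
    tire: Tire = field(default_factory=Tire)


@dataclass
class Car:
    wheels: List[Wheel]
    seats: int
    engine: str

    def is_well_formed(self) -> bool:
        # Unlike a jury-based test, well-formedness is checkable
        # directly from the structure.
        return len(self.wheels) == 4 and self.seats >= 1


car = Car(wheels=[Wheel() for _ in range(4)], seats=4, engine="inline-4")
print(car.is_well_formed())  # True
```

The point of the sketch is that "car-ness" becomes decidable by inspecting the structure, rather than by trotting candidate objects in front of a jury for yes/no answers.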

    Compare with a group [wikipedia.org] from mathematics: a closed set plus an operator with certain features (associativity, identity, inverses), and each feature can be further defined (an identity element is...). Much of mathematics is this way: concepts constructed from simpler concepts with a list of requirements.
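For comparison, the group example can be written out constructively in full; each requirement is an explicit, checkable axiom (standard textbook notation, not taken from the original post):

```latex
A group $(G, \cdot)$ is a set $G$ with a binary operation
$\cdot : G \times G \to G$ (closure) such that:
\begin{align*}
&\text{Associativity:} && (a \cdot b) \cdot c = a \cdot (b \cdot c)
  && \forall\, a, b, c \in G \\
&\text{Identity:} && \exists\, e \in G:\ e \cdot a = a \cdot e = a
  && \forall\, a \in G \\
&\text{Inverses:} && \forall\, a \in G\ \exists\, a^{-1} \in G:\
  a \cdot a^{-1} = a^{-1} \cdot a = e
\end{align*}
```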

    The study of AI seems to be founded in mathematics. At least, all the AI papers I've read are heavy with mathematical notation - usually obscure and very dense mathematical notation. It should be possible to determine with some rigor what the papers are talking about.

    Can you tell us what that is? What *exactly* is AI?

  • I'm presupposing it's eventually possible to create a machine that thinks like a man. Is conscious, is self-aware. I doubt we'd get it right first try. Before we got Mr. Data we'd probably get insane intelligences, trapped inside boxes, suffering, and completely at the whim of the man holding the plug.

    What are your thoughts on the ethics of doing so, particularly given the iterative steps we'd have to take to get there?

  • I would like to know from Dr. Chun in which areas of life can AIs be used right now (are most beneficial), which areas are too difficult (for now) and in which areas should AIs never be used.

  • Could you please speak to the bait-and-switch (i.e. changing definitions midstream) inherent in the Chinese Room argument? Can you elucidate how the program encodes/encapsulates/contains the intelligence, and how the symbols used/manipulated are immaterial? The idea that the program encodes "two-ness", irrespective of whether I use the symbol "two" or "dos" or "zwei". The word-games and verbal-sleight-of-hand inherent in the Chinese Room argument have irritated me for many years, but I lack the precision v
  • We had two articles touching on this in recent months, so: Singularity or no singularity? Will AI achieve consciousness/sentience in our lifetime/ever, why or why not, and what is your take on the implications of such a thing if you do think it is reasonably possible?
  • What do you think of Fergus et al.'s paper Intriguing properties of neural networks [nyu.edu]? Is this a phenomenon you have come across before?
  • by Ken_g6 ( 775014 ) on Thursday July 17, 2014 @03:32PM (#47477899)

    Dr. Chun,

    Have you read a short story about an AI boss called Manna? [marshallbrain.com] (I'll include relevant quotes if you don't have time.) How does your system for the Hong Kong subway compare? It's clearly similar to your subway system in some ways:

    At any given moment Manna had a list of things that it needed to do.... Manna kept track of the hundreds of tasks that needed to get done, and assigned each task to an employee one at a time.

    But does it micro-manage tasks like Manna?

    Manna told employees what to do simply by talking to them. Employees each put on a headset when they punched in. Manna had a voice synthesizer, and with its synthesized voice Manna told everyone exactly what to do through their headsets. Constantly. Manna micro-managed minimum wage employees to create perfect performance.

    Does it record employee performance metrics and report them to (upper) management like Manna?

    Version 4.0 of Manna was also the first version to enforce average task times, and that was even worse. Manna would ask you to clean the restrooms. But now Manna had industry-average times for restroom cleaning stored in the software, as well as "target times". If it took you too long to mop the floor or clean the sinks, Manna would say to you, "lagging". When you said, "OK" to mark task completion for Manna, Manna would say, "Your time was 4 minutes 10 seconds. Industry average time is 3 minutes 30 seconds. Please focus on each task." Anyone who lagged consistently was fired.

    And how have employees reacted to their AI boss - if, in fact, you have been able to get honest evaluations from employees?

  • What do you think is easier to solve?

    Artificial Intelligence or the idea of mimicking natural physical systems to process information?

    or

    Machine Intelligence or the idea of creating systems that do not use natural systems but investigate wholly new ideas of machine design to process information?

  • The three laws of robotics are not very practical (as evidenced by Asimov himself; his fiction is essentially a long list of all the ways the laws fail). In fact, ethics classes themselves are complex enough that it's difficult to imagine any simple, cogent way to summarize ethical decision-making into a sound bite. But do you believe it is possible at all to codify ethics into the behavior of future complex systems? Personally, if we ever do get strong AI in my lifetime, I'm betting it'll be as screwed up and err
  • Do you believe we will make an artificial intelligence that can do everything a human can do, including learning new tasks it wasn't specifically programmed to do? If so, how long do you think it will take, and what do you think is the mechanism that will be used (e.g. neural network programming, albeit on custom chips)?
  • Dr Chun,

      What do you see as the best AI-based approach (fuzzy logic, neural networks, etc.) to perform stock exchange predictions?

  • Dr Chun, can you comment on the potential of machine learning? Is it theoretically possible for a "naive" AI system to undergo great qualitative changes simply through learning? Or is this notion a fallacy? Although it is an attractive concept, no one in AI has pulled it off despite several decades of research.
  • With the rise of many programming languages, especially popular scripting languages, do we really need specialized languages for AI? Also, do you think any of the existing ones is the future of AI, and what qualifies it for that?
