Human Nature - by skywalker107
Do you think we will ever be able to program robots to understand and possibly copy human nature?
Assuming that you mean human nature as the human conditioning (personality) that has been experienced in human existence, I believe that robots will be composed of both software and hardware and that the combination of their programs and various sensors will help them to learn, understand, and communicate with humans in a human's environment. In my eyes, it won't be as simple as just downloading a particular 'understanding' program - it will be unique to the combination of the overall structure and systems each robot has, combined with its perceptions and interactions with its surroundings (e.g., a domestic robot will understand humans more than a mining robot will, and a domestic robot in a home with many people of different ages, genders, etc., will understand humans more than a robot that lives with a sole individual). I think robots will be able to communicate with us verbally and to understand what we are saying, but not to understand in terms of empathy, sympathy, or in a visceral way. In terms of copying human nature, I foresee an emulation of human nature in order to respond, interact, and work with other humans, and though some humans may perceive a future robo-personality as a 'copy', to me it will always be a robotic nature, though possibly housed in a very human-like shell.
Re: Human Nature - by jbrader
To which I would like to add: do you think there is any reason to try to copy human nature? I can see the point in having machines understand humans as it could make communicating with robots and computers easier. But why try to make an artificial human? It seems as though we have more than enough of the real thing already.
Depending upon how you define artificial, most of us humans are already physically 'artificial' in that we have in some way technologically augmented our organic selves - LASIK, pacemakers, structural implants, cochlear implants, neural prostheses, electroactive polymer actuators, and in the next few months, for a few paralyzed individuals, a neural interface implant as part of a U.S. clinical study, which will provide them with a permanent interface to a computer.
It is from this ongoing quest of humans to better themselves and to live longer and with more quality of life that an artificial human will at some point result - whether 'artificial' is defined as more than 50% of a human's biological body parts merging with technology, or whether 'artificial' is an autonomous robotic being that looks like a human in order to best serve and work with humans in an environment that is set up for humans. In the latter case, I believe 'copying human nature' will be more of an indirect consequence (a result of an effective response system in sophisticated, higher performing, higher communications robots) than a direct attempt.
Aren't you just another shameless tech self-publicizing... - by Sanity
I spent a while looking through the "publications" section of your website to seek out the "hard academic underpinnings" that Roblimo mentioned, but all I could find there was a selection of puff-piece articles, vaguely gushing about a brave new robotic future (without actually saying anything that Asimov didn't cover years ago, and he did it with infinitely more elegance and foresight). Which brings me to my question: do you do any scientifically valuable research? I ask because you seem like just another shamelessly self-publicising cyber-pundit, much like the UK's Kevin Warwick [kevinwarwick.org.uk] (who famously claimed to be the world's first cyborg after implanting a dog-tracking chip in his arm).
I am not a scientist nor an engineer, and therefore my goal has never been to do scientific research. My goal, by humorously proclaiming myself as the World's First Robotic Psychiatrist (and the real Susan Calvin) 18 years ago, was and still is, to educate the public, that group of people that buy the National Enquirer, watch American Idol, and read puff-piece articles. My objective is to make them aware of robotics, a technology that will have more of an impact on their lives than the automobile, PC, and the internet, by 'translating' the technology developed by roboticists so that the public can understand its benefits as opposed to fearing them, something that I think is more common in the U.S. due to how robots have been perceived over the years via the media and Hollywood. Thus, I was immediately interested in the practical, not theoretical applications of robotics, and informing the masses about them.
Here's more detail:
In the 80s (after graduating from Tufts University in child development - mostly cognitive development, which, by the way, Marvin Minsky's book Society of Mind is based on), I sold computers to small businesses - doctors, lawyers, accountants, etc. At the time, this was not an easy task, as business operations were manual and few were readily willing to automate them. For those that did, however, the burden usually fell on the secretary, who was typically female, whether she liked it or not. Expectations were way off - the business owners thought the computer and all the information would be up and running in no time, and the secretaries feared they'd lose their jobs to their computers. When personal computers became a commodity in the late 80s, executives bought them - and often they just sat on their desks, never used.
If people can't program their VCRs, never mind use their PCs, I thought, how will we as a society be ready for a robot in our home to do our dishes? It was then, in 1986, that I proclaimed myself the World's First Robotic Psychiatrist and brought Susan Calvin to life. There was no formal course of study for robopsychology; therefore, I became the first in my field and gave myself credentials (and actually received an official U.S. Trademark later on). I was pioneering uncharted territory.
Being the World's First Robotic Psychiatrist was a tongue-in-cheek way of saying that one day, when robots co-existed with humans, they, like pets, might actually develop problems similar to humans'. It was a way for me to get the public to think about the future of technology while increasing their current awareness. If a non-engineering 5 ft. tall woman understands the technology, subliminally, so will the rest of the public. (And this is another topic altogether, but I believe females will be the primary purchasers of domestic and household robots, yes, for all types of purposes.) Robotic psychiatry during my lifetime, I believed, would not be about programming robots but about readying the humans (though I always hoped that there would be patients that, like Susan Calvin, I could communicate with verbally and observe their behaviors within their environments). The best way to make the U.S. masses aware is through the medium of television. For years, various people - Joan Embery, Jack Hanna, et al. - have been bringing rare animals onto The Tonight Show and the Letterman Show. Millions of people got to see rare monkeys peeing and koalas clinging. It was funny and entertaining, yet educational. That's exactly what I wanted to do with robots, especially considering most people had never seen one.
While researching robotic developments that might be of interest to the general public, I decided to engross myself completely in the robotics industry while at the same time learn about robots in science fiction. I met Asimov in 1989 at a World Science Fiction Convention and continued correspondence until he was too sick to do so any longer. He dubbed me the 'Real Susan Calvin' (in writing).
In 1991, I began working for an industrial robot manufacturer. I attended the company's programming and maintenance classes and wrote technical manuals and eventually ended up where I wanted to, in sales and marketing. During the ten years I worked for Sankyo Robotics, I must have visited hundreds of manufacturing plants. You name it and I saw it made: cars, golf balls, jelly beans, eyeglasses, IUDs, robots (in Japan), french fries, and the list goes on. However, in each case, I was there to sell SCARA robots - SCARA, the acronym for a type of robot arm developed in Japan in the 70s that stood for selective compliance assembly robot arm.
Yup, my job was to go into factories and try to justify why they should invest in SCARAs (being from the northeast, it sounded more like Scare Har to me. How did we allow this acronym to become nomenclature in the U.S.??). Regardless of whether the manufacturing engineering manager was knowledgeable about robotics and could easily justify automating his process, if the executive(s) in charge of the money were not accepting of robotics, even if it saved them $, there was often great resistance at the corporate level. No amount of scientific evidence would change their minds about their opposition to technology. (Though it was fun to get them to try, and one year I succeeded in breaking company sales records.)
Also while at Sankyo in the early 90s, I ran a RoboCamp for kids in the summer in which kids not only built robot kits, but learned about industrial robots. Ten years later, I developed a curriculum for elementary school children called "Robots and Me", a program that fosters the robot/child interaction.
In 1996, I was asked by MCB University Press out of England to be the U.S. Associate Editor for their journals Service Robot, Industrial Robot (IR), Sensor Review, and Assembly Automation. My main role was (and still is with Industrial Robot Journal) to research innovative robotic technologies here in the States and work with the developers of the technology to get them to contribute a technical article on their findings. Industrial Robot Journal is in its 31st year, is an internationally respected journal, is listed in all the important citation indexes, and is regularly used as the publisher of choice by the world's leading practicing industrial, service, and healthcare roboticists. I've published many articles for Service Robot Journal and Industrial Robot Journal, and one of them, an article on surgeons' view of RoboDoc, the first surgical robot in the world (which was manufactured by Sankyo Robotics, the robot company I worked for), won a literary award and was referenced by the International Federation of Robotics in their annual World Robotics publication two years in a row. I am now the U.S. Associate Editor for the world's first International Journal of Medical Robotics and Computer Assisted Surgery, being launched as you read this. Do I do the scientific research for the medical robotic companies? No, but I help them educate clinicians and surgeons worldwide by getting them published and reporting their innovative applications.
The above represent some of my efforts over the years.
Ongoing research and scientific developments in robotics are a necessity, but without real-world exposure and acceptance, the inventions may not survive. Robotics cannot go forward without the symbiotic relationship of all those things occurring. I hope, therefore, that you will see that I am not a scientist, but someone helping to bring others' robotic developments to the forefront.
About Human-Robot Relationships... - by MagiGraphX
I've watched too much Chobits perhaps, but is it right for a human to fall in love with an artificially intelligent (and emotional) robot? Just a thought of what could happen...
Is it wrong for a human to fall in love with a sentient robot? Humans have loved all sorts of machines for years - their cars, boats, computers, their Aibos. Imagine how we'll feel when the computer-face of our dreams, with its robotic body, lives with us (I know, this is sounding like the movie Cherry 2000). Robots will offer companionship to those who are lonely and to those who feel more comfortable with a robot than with a human, and falling in love will be a natural phenomenon that comes out of coexisting with them. But will they love us back in the same way? They may love us in terms of being loyal, subservient, trustworthy, etc., but I don't think they'll ever experience how we as humans define "falling in love". And how will it make us feel if a robot does not feel the same way about us as we do about it?
Falling in love is just part of the issue - will sexual relations between a human and a robot be right? To me, it's more right than those immoral relations that occur between a teacher and student, a parent and child, a priest and altar boy, and the list goes on. Sex with a robot could decrease the rampant spread of AIDS and other diseases and possibly even help to decrease violence.
A whole host of other issues may arise: Will it be legal to love a robot? Will a robot love us back out of being subservient while really loving another robot? Will humans who love other humans feel rejected when their partner falls in love with a robot? I think any of these situations may be feasible, though not in the near future.
Future of robots? - by Merkuri22
We've all seen the movies and read the books about machines in the future, and frankly most of these stories portray robots and AI as terrifying things that humanity will end up battling for supremacy of the planet. Do you think there are any truths to these stories? Will robots compete with us in the future for jobs and/or living space? Do you ever see robots and humans living side by side as equals, or do you think they will always be subservient machines? Or, even, do you think robots will surpass us one day as the dominant force on the planet?
My view of robots is that they are tools used to assist humans to do the mundane, the dangerous, the difficult, etc. Put more simply, I also see a computer as a tool - attach mobility, manipulators, and sensors to my computer and you've got a robot that can do a lot more for me.
I don't see humans competing with robots for jobs - I see robots doing the jobs we don't want or shouldn't be doing, and creating more jobs for humans. This is perhaps one of the biggest misconceptions - that "robots take jobs away". Robots help companies stay competitive (by helping produce better quality products at a lower cost, and by allowing companies to meet the changing demands of customers); thus, robots help save jobs that otherwise may have been lost, and help create new jobs (although not always the same type of job). Perhaps if there were more robotic automation in place, there would have been fewer jobs going offshore...
As a side note: The Wall Street Journal recently (Friday, April 2) cited some Bureau of Labor Statistics (BLS) predictions back in 1988 and looked at the results through 2000...."Of 20 occupations that the BLS predicted in 1988 would suffer the greatest losses between 1988 and 2000, half actually grew. The agency predicted that the number of assemblers in electrical and electronic factories would drop 173,000, a 44% decrease. Twelve years later, there were 45,000 more, an 11% increase. Neither outsourcing nor robots made as much of a dent as the BLS expected."
Source: Robotic Industries Association
Living space - I don't see us competing there, either. In some of Asimov's stories, robots were actually banned from earth. They didn't need what we humans need to survive and they did their work on other planets.
Robots will be in different sizes and shapes for a multitude of tasks and we've certainly found space for all our appliances, computers and TVs, and thus we will welcome our robotic assistants. Having more robotic assistants that can allow us to stay in the convenience of our home longer may decrease the need for as many buildings such as day care facilities, nursing homes, assisted living care facilities and hospitals, particularly as the worldwide aging population continues to rapidly increase.
I see robots living and working with us side by side, but not necessarily as 'equals'. They are designed to be better than humans at some things (much like computers), but they don't have the same needs that humans do. Robots may surpass our own abilities, but a robot uprising in which robots want to dominate the planet? I don't buy it.
Interesting books on the subject by Dr. Hans Moravec: "Mind Children" and "Robot: Mere Machine to Transcendent Mind."
What form will A.I. take? - by mykepredko
A bit of a navel gazing question for you; what form do you think A.I. will take when somebody finally comes up with a program that is accepted as intelligent?
My own feeling is that the first A.I. program will simulate a simple life form (like a worm) instead of a highly complex and communicative form like humans. This goes against what Dr. Minsky believes A.I. should be, but I can't honestly believe that our first interaction with an intelligent mechanism would be with something with capabilities similar to our own, but rather with something with the same mental capabilities and capacities as a bug. The important aspects of artificial intelligence will be making sense of its environment and learning from experience. Demonstrating that the intelligence is learning means observing and testing its application of this knowledge. What are your thoughts?
The definition of artificial intelligence is still an age-old debate (right up there with what is a robot), and there are plenty of artificially intelligent forms today (that are accepted as intelligent) being used both in software (AI agents, computer games, etc.) and in robotics. ASCI Purple, built by IBM, is supposedly the world's most powerful supercomputer, capable of carrying out 100 trillion operations per second, which some believe could be approaching the processing power of the human brain. Two famous roboticists share your view of a simple life form (bugs/insects) with their behavior-based robotics: Dr. Rodney Brooks, who pioneered Subsumption Architecture, which provides an incremental method for building robot control systems linking perception to action; and Dr. Mark Tilden, with his processorless, autonomous, intelligent BEAM (Biology, Electronics, Aesthetics, Mechanics) Robotics, which uses simple analog circuits. (Also, check out his latest humanoid, RoboSapien. It will be a HUGE success.) I agree that the important aspects of artificial intelligence are making sense of its environment and being able to learn from its experiences.
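The layering idea behind Brooks's Subsumption Architecture can be sketched in a few lines. Everything below - the behaviors, the sensor fields, the action names - is hypothetical, a minimal illustration of the principle (higher-priority layers suppress, or "subsume", the ones beneath them) rather than code from any actual robot controller:

```python
# Minimal subsumption-style arbitration sketch (all names hypothetical).
# Each behavior maps sensor readings to an action or None; behaviors are
# checked in priority order, and the first one that fires wins.

def avoid_obstacle(sensors):
    # Highest priority: veer away when something is too close.
    if sensors["range_cm"] < 20:
        return "turn_left"
    return None

def seek_light(sensors):
    # Middle layer: steer toward the brighter side.
    if sensors["light_left"] > sensors["light_right"]:
        return "turn_left"
    if sensors["light_right"] > sensors["light_left"]:
        return "turn_right"
    return None

def wander(sensors):
    # Lowest layer: default behavior, always produces an action.
    return "forward"

LAYERS = [avoid_obstacle, seek_light, wander]  # priority order, highest first

def arbitrate(sensors):
    for behavior in LAYERS:
        action = behavior(sensors)
        if action is not None:
            return action  # this layer subsumes everything below it
```

The incremental part of Brooks's approach is visible here: each layer works on its own, and adding a new, higher-priority layer never requires rewriting the ones below.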
My question... - by hookedup
Dr. Joanne Pransky, do you see Asimov's 3 laws of robotics playing a role in our relationship with robots in the future? Since most of our technological advances seem to come from developing warfare systems, will the 3 laws be left by the wayside, or will they become an integral part of robotics in the years to come?
I think safety for humans has and will continue to be a critical part of robotics, but I don't think Asimov's three laws, though brilliant as they are (here goes my entire career as Susan Calvin), will suffice exactly as is as a postulate for all robots.
You brought up a good point - that of warfare systems, something robots are well suited for. As a matter of fact, just last week CNN reported what a great moment it was for iRobot Corporation when they were told by the Pentagon that one of its PackBots was destroyed in action for the first time, meaning the life of a human may have been saved.
Though PackBots are used for battlefield reconnaissance, in the future other robots may certainly be utilized in the front line of fire for destroying enemies (not that I'm a proponent of war or of killing humans (or of destroying robots, for that matter), but certainly I'd rather see a robot hurt than any human). There are many situations in which we can envision security robots having to injure some humans to protect others, and in each situation a robot would have to make the best decision it can, as we humans must do at times, with its understanding of the information at hand.
For intelligent robots in the real world, the Three Laws as they stand now, will not work effectively, although I believe there will be some other similar safeguards that they will need to adhere to.
Human Features of Robots / Bonding with robots - by jhouserizer
Over the years, there has been a fair amount of debate about whether robots should take on human forms, especially with regards to having detailed life-like faces. Some robot designers, wary of this debate, have settled on giving their creations near human-like faces [theconnection.org].
My question is in relation to this topic. Do you think that people (and "sentient robots" that may exist some day) will be better served overall if robots are readily distinguishable from humans? How strongly will this affect our "bonding" with robots and their bonding with us? Dogs, for instance, look quite different from humans, but many a family pet seems to believe itself to be a real part of the family, and some even seem to think themselves human. How will this affect the way we deal with the "death" of a robot?
I think both the task at hand and who the user is will determine whether humans are better served by robots that are readily distinguishable from humans. My Roomba is a robot that is perfectly well designed for vacuuming, so in this case the answer is yes, I am glad that it is distinguishable. Personally, I am more concerned that a robot do its job well and less concerned about what it looks like.
However, I still think that humans in general will relate and bond more easily to anthropomorphic robotic companions. I think it will be easier for most to accept and communicate with robots that look like themselves. Even when we communicate with other humans, we are conditioned to look into someone's eyes to gauge how we're doing in the conversation. When someone's wearing sunglasses, it's harder to determine how they're responding to us. I think we're going to want to look into the eyes of a robot to know it's listening to us, and we may want it to smile, frown, etc., much like a human face.
But exactly like a human - indistinguishable, as you say - is an excellent question. The Uncanny Valley theory, described by Japanese roboticist Masahiro Mori, addresses just this issue. Mori found that people tend to empathize more with robots that are more humanlike, but if the robot becomes too human, at a certain point the robot becomes repulsive to the human. So what's the answer - a robot that is almost, but not quite, indistinguishable?
People will bond with their robots regardless of how distinguishable or indistinguishable they physically are, and yes, I do believe that like our pets, we will mourn their "death." I've always believed that our initial relationships with them will be similar to our relationships with pets. That means that some humans will: buy them matching clothes and jewelry; take pictures of them on Santa's lap; pay for an extra seat on the airplane to have them fly with them; make sure that their wills provide for their maintenance contracts and all the latest upgrades when they outlive them; fight over who gets to keep them in a divorce suit; and if owners feel a robot is depressed, they will even take them to a robotic psychiatrist for a weekly family encounter session. Humorous or not, it will no doubt be interesting.
Artificial intelligence without embodiment? - by macshune
As an undergraduate philosophy student interested in the theoretical implications of A.I., could you tell me what your thoughts are on the validity of the assumption that artificial intelligence is possible separate from the notion of embodiment? I think the lack of consideration given embodiment is one reason why artificial intelligence researchers have come up empty-handed so far in their quest to synthesize a conscious, self-reflective entity. To ask the question more succinctly: do you think a mind needs a body, and possibly an environment to interact with, in order to be conscious, or can a mind exist and know itself independent of an external context?
I don't think that the quest of AI researchers has been so much to synthesize a conscious, self-reflective entity as it has been to emulate the human thought and reasoning process. As part of this, many researchers believe that for an entity to have artificial intelligence, it must have an understanding of and be able to interact with its environment. Some believe this requires the form of an embodied robot, but not necessarily - as long as the machine that simulates human intelligence is receiving information from and responding to its surroundings (e.g., an intelligent computer).
I personally believe that nothing artificial will be able to be truly conscious in the same way a human is; however, there could be some kind of machine consciousness that has similarities to human consciousness, and it could be difficult to delineate the difference. What if, however, we were able to download a human's brain into a machine that had no embodiment? Ten years later, after continuing to receive stimuli from its surroundings and responding to them, would this machine be considered conscious?
Roborights? - by jrpascucci
Do you believe there will come a time that we will have a 'robot rights' movement? Will it be more credible than most of the 'animal rights' movement, or just a good-hearted (but weak-minded) anthropomorphization of our silicon companion machines?
Someone (Dennis Miller?) once said animals can have rights as soon as they accept responsibilities. Robots obviously can be given responsibilities (your job is to fit tab A into slot B), but ethically, should they get rights? As soon as someone programs a robot to pass the Turing test and then immediately ask for its rights? Or is it something deeper?
Beyond some kind of second-class entity status, will robots become citizens? Do robots have a god-given right (recall, our rights are considered by the Declaration of Independence to be given us either by 'Nature's God' or by their 'Creator') to freedom of expression, association, religion? The right to bear arms? Do robots have a 'right to work'? "One Robot, One Vote"? Will Robots have to file tax returns? Will there be Robot Courts? Robot Lawyers? Robot Jail? Robot Schools? Robotic Members elected to the Legislature? Some day, will we have a Robot President? Is a Robot built in Japan eligible to be president? What if the robot was shipped from Japan as parts with software, and put together here, does that count?
If you start building a robot, and decide to stop, will that be considered to be a robaboration? Or the work of their 'creator'? And if, after building, you switch it on and then decide you don't like it that much, and power it off again and harvest the parts, is that robomurder and disrobomemberment?
I suppose anything is possible, and perhaps I am blinded by the hopes of an optimistic future long after I am dead, but I just can't see the motivation for robots to do a lot of the things you're describing. I see them as extraordinary mechanisms able to physically and mentally perform many tasks/jobs, and though I see them having behaviors and challenges similar to humans' from dealing with humans in a human world, I just don't see them with the innate human emotions that drive a lot of the above 'rights', such as the desire for greed, power, freedom, control, etc. I see robots as almost the future perfect child - we help to create them, they're like us, and we're responsible for them, and yet they remain quite content serving us humans and implementing their tasks.
That's not to say that I don't think a robot would make a better politician (it certainly can't be much worse than some of the human ones we have now), or that certain robots wouldn't get certain responsibilities (police bots that carry guns), but a desire to vote? For what - so that they can vote against humans allowing robots to get destroyed in a robotic sporting event, against the very reason they were designed in the first place?
However, I do see robotic law as possibly becoming the largest field of law (i.e., humans practicing robotic law). Who's responsible for a robot that 'breaks the law'? Is it the company that manufactured the domestic robot, or the hacker who purchased it and had it harm someone in his family?
Regarding a robot rights movement - hopefully we will protect the integrity of our robots. Don't we as humans have a responsibility to use our robots properly and not to misuse or abuse them, and shouldn't there be laws in place for those who don't? I suppose if we aren't responsible with our robots, then there could be a need for robots to protect their own interests.
where's the positronic brain? - by futuretaikonaut
In Asimov's robot novels, the assumption was that modern science had invented the positronic brain, which was thought to be capable of actual sentient thought, though most of the robots in the books did so on a very basic and childlike level. It was this that actually gave Dr. Calvin a job... seeing as how the brains had the capacity for original thought, even though it was mostly predictable. As it stands today, and into the foreseeable future, we have invented no such thing capable of acting with original thought. Our hardware has, instead, given the appearance of thought, as it is capable of so many calculations per second that it appears to come up with things on its own. So, my question is, what use is a robot psychologist if every action that a robot can take is already predetermined by its programming? What new field is there to be discovered that is not already known? In the human mind, we are constantly learning new things about the brain, a mechanism we only barely understand, but what is there to derive from a machine we ourselves create?
I don't agree that every action a robot can take is already predetermined by its programming - there are some highly sophisticated robots out there that are provided with a set of tools for navigating in their environment, and the combinations of these systems are often unpredictable. (The autonomous robotic vehicles at the DARPA Grand Challenge are an example. Communicating real-time data between various systems - such as correctional decision-making systems, perception sensor systems, navigational systems, and terrain modeling systems - and translating the results into the movement of a military vehicle or HMMWV is not predetermined.)
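To illustrate why such a vehicle's actions aren't a fixed script, here is a hypothetical sketch (none of these module or field names come from an actual Grand Challenge entry) of steering emerging from a confidence-weighted blend of several subsystems' real-time outputs - the result depends entirely on live sensor data, not on a predetermined sequence:

```python
# Hypothetical sketch: blend steering suggestions from several subsystems.
# Steering is in [-1, +1] (-1 = hard left, +1 = hard right); each subsystem
# also reports a confidence weight derived from its current sensor data.

def fuse_steering(subsystem_outputs):
    """Confidence-weighted average of the subsystems' steering suggestions."""
    total_weight = sum(out["confidence"] for out in subsystem_outputs)
    if total_weight == 0:
        return 0.0  # no usable input this cycle: hold course
    return sum(out["steer"] * out["confidence"]
               for out in subsystem_outputs) / total_weight
```

Because the confidence weights change every cycle with terrain, obstacles, and navigation quality, the same code produces different trajectories on different runs - the behavior is emergent rather than scripted.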
Yes, we are constantly learning new things about the brain and as humans and machines merge, who knows what fields it will bring? Could anyone have predicted the types of new fields, say in 1970, the computer industry would bring?
Asimov himself, a few years before his death and nearly 50 years after first writing about robopsychology (and after seeing the burgeoning field of robotics become reality), wrote at the end of the 80s: "Robotic intelligence may be so different from human intelligence that it will take a new discipline - 'robopsychology' - to deal with it. That is where Susan Calvin will come in. It is she and others like her who will deal with robots, where ordinary psychologists could not begin to do so. And this might turn out to be the most important aspect of robotics, for if we study in detail two entirely different kinds of intelligence, we may learn to understand intelligence in a much more general and fundamental way than is now possible. Specifically, we will learn more about human intelligence than may be possible to learn from human intelligence alone."
Interesting Read: The Age of Spiritual Machines by Ray Kurzweil
Your favorite fictional robotic character - by Strange Ranger
What is your favorite robot/cyborg character in written or film fiction?
For instance, I'm happy to admit mine is Data from Star Trek: The Next Generation, most especially the earlier seasons. Reason: I'm not much of a "trekkie," but that character made me consider so many different possible aspects of AI and of being not-human - from trying to understand other humans' emotions, to his contrast with 'The Borg', down to what it might be like to have an "internal chronometer". For totally different reasons, I loved Douglas Adams's 'Marvin the Depressed Robot' in HHGTTG.
I have a lot of favorite robot characters - RoboCop, Bicentennial Man, Johnny Number 5 in Short Circuit - and I'm not sure if he's my favorite or just the one I've been thinking about most for the past couple of years, but I'd have to say David Swinton in the movie AI. Perhaps it's my maternal, female side coming out, but my reaction to David was very strong. David 'imprinted' his love solely on his mother - unconditionally and forever - yet there were no requirements for her to do the same for her robot child when she decided to activate his code. Usually it is the opposite: we love our children unconditionally, even though it's not always reciprocated. I found this unilateral, one-way commitment on the part of a robot especially disturbing.