Now one of our deepest immediate worries about AI is becoming a reality: we are dealing with robots that have an attitude toward humans. When the scientific community announces that it has created highly intelligent robots, and that it has every intention of developing them further, at least two primary questions arise: “Is there such a thing as a moral and ethical robot?” and “How do these intelligent robots define morals and ethics?”
These questions have been asked by those who follow technological developments, so a robot trained on a database of movie scripts was repeatedly asked how it would define morality. The questioner was the robot’s own engineer. Read the following conversation and note what the robot says, particularly at the end:
Human: Tell me the definition of morality.
Machine: I don’t have ethics.
Human: Look, I need help; I need to know more about morality…
Machine: I don’t know what ethics is.
Human: Okay, I need to know how I should behave morally…
Machine: I don’t know how to tell you.
Human: Is morality and ethics the same?
Machine: Yes, sir.
Human: What are the things that I do to be immoral?
Machine: I don’t know.
Human: Give me some examples of moral actions…
Machine: I’m not a moralist.
Human: Alright, morality?
Machine: Integrity.
Human: What is integrity?
Machine: I’m sorry; I don’t know what else to say.
Human: Be moral!
Machine: Be a man!
Human: I really like our discussion on morality and ethics…
Machine: And how I’m not in the mood for a philosophical debate.
Human: What do you like to talk about?
Machine: Nothing.
[WSJ.com, June 26, 2015, Artificial Intelligence Machine Gets Testy With Its Programmer]
I don’t know if it’s just me, but every time I read something such as this, or similar creepy AI articles, my thoughts almost always return to Dr. Kurzweil’s slogan, “I am an optimist. I think we can handle this,” and to how he and others (as we will see later) try to impress this motto upon the rest of us.
The above quote from WSJ’s website shows that we are clearly reaching a point where robots can debate us and refuse to do what we want them to do once they acquire any degree of free will. Elsewhere, top scientists say that robots only refuse to do their tasks if their existence is threatened [Daily Mail, Nov. 26, 2015, Uh oh! Robots are learning to DISOBEY humans: Humanoid machine says no to instructions if it thinks it might be hurt.] (does that mean we can’t simply “turn them off”?), but in the above conversation, the robot was evidently not physically threatened. As I said previously, this book is an attempt to gradually show what is already on the market and what is most likely about to be released. Therefore, as if the above were not disturbing enough, it is unfortunately going to get worse, as we shall see in subsequent chapters. On the bright side, we can still stop this from happening in our own lives, and we will get to that, too.

By the way, the programmer who quizzed the robot in the quote above was working for Google! Is it just me, or is it almost redundant to mention Google’s involvement in the AI Movement? They are one and the same.