Monday, June 4, 2012

"Blame the Machines"




"You are not required to think at all."
-Duran Duran, "Blame the Machines"

Robot ethics.

That is not the name of a college course in a transhumanist novel or some other work of science fiction.  It’s a subject that will require real discussion.

As robots grow more autonomous in their functions, what methods of thinking will govern them?  The Economist recently published an article and brief video that raise this question.  The inevitable science fiction comparisons arise, most immediately the citing of HAL’s behavior in 2001: A Space Odyssey, that staple of those who warn of nightmare scenarios in technology.  Yet when considering a set of ethics to be programmed into a thinking machine, one need look no further than science fiction.  Author Isaac Asimov already cobbled a few rules together when writing I, Robot.

I am, of course, referring to Asimov’s Three Laws of Robotics.  Now, undoubtedly many if not most of you can recite these by rote, but I’m posting them here for everyone else:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
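
Just to make that ordering concrete, here is a toy sketch in Python of the Three Laws as a strict priority filter on candidate actions.  It is purely illustrative, not drawn from Asimov or any real robotics system, and every predicate in it is an invented stand-in; deciding what actually counts as “harm” is, of course, the hard part.

# Toy sketch only: the Three Laws flattened into a priority check.
# All of these flags are hypothetical stand-ins for judgments no sensor can make.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False     # would the action injure a human being?
    permits_harm: bool = False    # would inaction here let a human come to harm?
    disobeys_order: bool = False  # does it conflict with an order from a human?
    endangers_self: bool = False  # does it risk the robot's own existence?

def permitted(a: Action) -> bool:
    """Check an action against the Three Laws in strict priority order."""
    if a.harms_human or a.permits_harm:
        return False              # First Law outranks everything
    if a.disobeys_order:
        return False              # Second Law yields only to the First
    if a.endangers_self:
        return False              # Third Law yields to the First and Second
    return True

# An armed UAV strike fails at the very first gate:
print(permitted(Action(harms_human=True)))   # False

The ordering of the checks is the whole trick; the “except where” clauses of the real Laws are flattened away here, and restoring them is exactly where the conflicts this post worries about live.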

The problem with these laws is that they toss UAVs and other military robots out of the picture.  After all, a UAV such as a Reaper or a Predator equipped with AI (something the Pentagon has openly stated it wants) would automatically violate the First Law.  This is a big deal, as unmanned robotic weapons will make up more and more of our defense capabilities.

Over in the private sector, cars are becoming more and more automated.  Certain models can now sense people and objects around them, brake on their own, and take you to your destination via GPS with no more input from you than the address.
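
That, too, is a priority problem in disguise.  As a hypothetical sketch, assuming nothing about any real automaker’s software, one tick of a driver-assistance controller might look like this, with safety rules hard-coded to outrank the navigation system:

# Hypothetical sketch of one control tick; the class, names, and thresholds
# are invented for illustration, not taken from any real vehicle.
class Sensors:
    def __init__(self, gap_to_obstacle_m: float, in_lane: bool):
        self.gap_to_obstacle_m = gap_to_obstacle_m   # distance to nearest sensed person/object
        self.in_lane = in_lane                       # is the car still inside its lane?

def control_step(sensors: Sensors, next_maneuver: str) -> str:
    """Safety checks come first; only then does the car follow its GPS route."""
    if sensors.gap_to_obstacle_m < 2.0:
        return "EMERGENCY_BRAKE"     # a sensed person or object overrides everything
    if not sensors.in_lane:
        return "CORRECT_STEERING"
    return next_maneuver             # e.g. "TURN_LEFT", handed down by the navigation system

print(control_step(Sensors(gap_to_obstacle_m=1.2, in_lane=True), "TURN_LEFT"))  # EMERGENCY_BRAKE

Every line of that priority order is an ethical judgment somebody had to make in advance.
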
When AI machines operate in either of these areas, the potential consequences of a malfunction are serious.  And that is all without even touching cybernetics, in which humans replace body parts or otherwise augment themselves with technology.  “It made me do it,” an accused might say of a brain or neural implant.  Who is responsible?

One proposed solution is to have the AI make the same kind of decision that a human would in a given situation.  That is not far-fetched when you bring The Singularity into the picture, the point at which AI becomes indistinguishable from human intelligence.  “Teach them right from wrong,” as someone in the comments section of The Economist article put it.  That’s a problem when right and wrong are subjects we’re not too sure of ourselves.
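
If “teaching them right from wrong” means imitating human judgment, one naive reading is a machine that scores a new situation by how humans labeled similar past ones.  Here is a minimal, entirely hypothetical sketch, with invented features and labels, just to show where that breaks down: the verdicts can only be as settled as the labels.

# Naive "learn ethics by imitation": a nearest-neighbour vote over human-labeled
# scenarios.  The features, labels, and cases are all invented for illustration.

# Each scenario: (threat_to_others, target_is_combatant, order_was_given), each 0-1.
labeled_by_humans = [
    ((0.9, 1.0, 1.0), "permissible"),
    ((0.1, 0.0, 1.0), "wrong"),
    ((0.8, 1.0, 0.0), "permissible"),
    ((0.2, 1.0, 1.0), "wrong"),     # humans themselves disagree on cases like this one
]

def judge(situation, k=3):
    """Vote among the k most similar human-labeled scenarios."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(labeled_by_humans, key=lambda item: dist(item[0], situation))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

print(judge((0.5, 1.0, 1.0)))   # the verdict simply inherits whatever the labelers believed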

Look at this example.  Given: it is wrong to kill.  Is it then wrong to kill a terrorist?  Patriotism aside, you can find several different opinions on that matter, and I’m not saying that I personally have a problem with lethal force in certain situations.  That’s just it.  Ethics can be inherently murky and full of disagreement.

How do we teach “cyber-ethics” to a robot if we’re not solid on the subject ourselves?


Follow me on Twitter: @Jntweets
