Sunday, April 1, 2012

A is for Asimov and Automatons: Questioning Robot Ethics

It's April 1st and that means it's finally time for the Blogging from A to Z Challenge 2012. For those of you who missed my previous post, my theme for this challenge is Science Fiction! Here is my very first post of the contest, and today A stands for Asimov . . .

Science Fiction writer Isaac Asimov - considered one of the "Big Three" alongside Robert Heinlein and Arthur C. Clarke - is perhaps best known for his Three Laws of Robotics. Created as a safeguard against human injury at the hands of our creations, these laws state:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
4) or 0) A robot may not harm humanity, or, through inaction, allow humanity to come to harm. (This "Zeroth Law" was introduced at a later date.)
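For the programming-minded: the "except where such orders conflict with the First Law" clauses make the laws a strict priority ordering. Here's a minimal sketch of that idea in Python - all the flag names are invented for illustration, and this is obviously not how anything in Asimov's stories (or real robotics) is actually implemented:

```python
def choose_action(candidates):
    """Pick the candidate action whose worst violation sits lowest
    in the hierarchy of the laws.

    Each candidate is a dict of boolean flags (hypothetical names).
    Python compares tuples element by element, and False sorts before
    True, so an action that merely endangers the robot itself beats
    one that disobeys an order, which beats one that harms a human.
    """
    return min(
        candidates,
        key=lambda a: (
            a["harms_humanity"],   # Zeroth Law: most severe violation
            a["harms_human"],      # First Law
            a["disobeys_order"],   # Second Law
            a["endangers_self"],   # Third Law: least severe violation
        ),
    )
```

So a robot choosing between swerving into a pedestrian and braking hard (disobeying an order and risking itself) would pick braking, because disobedience ranks below harming a human.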

However, robot ethicists have argued that these laws are unethical because they take away autonomous beings' free will; they believe that robots need to develop their own ethical code, rather than have one written for them. This at first seems absurd, especially in light of movies like The Terminator, where robots are intent on wiping out humanity - films in which I'm sure the characters were thinking "now would be a really excellent time for a few laws of robotics". And when we think about it, robots aren't living creatures, and therefore don't deserve the same rights as human beings . . . right?

And yet, if we’re giving them the right to life, do they then not deserve, like all beings, free will? Could they not merely be considered the next wave of evolution? Just as human beings were once the newest evolutionary phase, don’t robots then deserve the same chance at a life of freedom and choice? When God created mankind, he instilled within us an ethical code (much like Asimov's Laws); but he also allowed us the free will to override it should we see fit. Do automatons not deserve the same?

Human beings have gone down a long road of evolution to reach our modern principles of morality – we once believed slavery was not only morally acceptable, but industrially necessary. It took many years to reach the conclusion that this was not in fact the case. Should robots, even knowing their potential threat to humanity, be denied this same evolutionary process? Does it make us immoral as creators, to disallow this for our created? Or is it our responsibility to ensure peace? Can we even do that, knowing how destructive and un-peaceful humanity is?

God knows I never thought I, who am terrified of terminators and wary of technology, would be advocating for robot rights.

Perhaps we can only imagine a destructive race of artificial beings because we, the human race, are ourselves so destructive. Therefore, we fear that what we create will be similarly instilled with the seeds of destruction, much as we pass on many of our own beliefs and prejudices to our children. Perhaps we should wait until we’re a more peaceful race before we introduce another . . . though in that case, we’ll be waiting an awfully long time. I fear that technological progress far exceeds the speed of moral progress, and no amount of Robotics Laws will ever change that.


25 comments:

  1. I read your post out to 8 people aged 12-75 at breakfast. It certainly caused a controversy! Some were in favour of robot rights on moral grounds but felt it would be unwise to give rights to Daleks! Others felt that 'human rights are for humans'.
    It also resulted in someone running into the kitchen shouting "LOOK OUT, the Dyson hoover is coming!!.....April Fool!"
    Then the computer did something funny so you may have got this post twice- spooky.

    Ebby

  2. I'm already enjoying this challenge because it's opening up new worlds to me--I've never even considered the rights of robots before though I've certainly heard of Isaac Asimov. Movies like The Terminator make me scared of autonomous robots, but I guess autonomous people can be scary too. ;)

  3. I love stories that take on this issue, because it's so varied, and ultimately important because it's where our future is heading. But it also ties in with the question of when something is deemed alive. Because Tickle Me Elmo is a robot, but he most certainly isn't aware enough to have Asimov's rules, or someone else's, programmed into him. And it can't just be intelligence, because there's the issue of the Chinese room, and there was a bot recently that passed the Turing test, but when real people spoke to it, it spasmed. So, robot ethics are important, but first we have to determine when they are at a level to be governed by them.

  4. Interesting topic. But people can create other living bodies and don't rely on anyone else to make them. How could robots continue without man? Hehe.

  5. Great post - and such a great theme. I love that evolution graphic too. I may have to use that myself!

  6. Great topic. I love movies that address this issue.

  7. You had me at Asimov. This is going to be a great theme, I just know it.

    Also, I don't have an email for you to respond to the comment you left me today. The short version: THE PRESSURE!!!

  8. I love Asimov. I've read many of his books and short stories, including all the books in the I, Robot series and the Foundation series. You bring up an interesting point. I want to say, "But with the three laws, aren't the robots better off? Aren't we all better off?" Of course, sometimes scary things happen when a small group of people make a decision for the greater good.

  9. ebby - that's awesome! I love controversial robot conversations.

    Gwen - Haha, I never thought of Tickle Me Elmo being governed by Asimov's laws, but you're totally right!

    Simon - I saw it on a t-shirt once. I really really want it.

    Joshua - email coming your way. Glad to see a fellow Asimov fan on the blog-o-sphere.

  10. I had no idea about the robot ethics. I'm going to learn a lot about science fiction from your posts.

  11. Very thought-provoking! Asimov is one of my favorites. I'd agree that robots should have the right to develop their own morality. If they have sentience then they have 'personhood' regardless of whether we created them, or whether they have biological parts or not. I never got the sense that Terminators had true sentience - more a case of bad programming: a weapons system let loose, just following its programming to an unexpected end. Cylons though, that's a case of free will making a choice we don't like. So is it possible? Probably so.

    We now have machines that can repair themselves. Add true AI sentience to that and machines could dictate their own future evolution without further humans needed. At that point it might be nice if they remembered us as a species that willingly gave them freedom, not as a species that tried to keep them enslaved ;)

  12. Your blog raises profound questions, including, 'what constitutes cognizance?' I don't have the answers, but thanks for reminding me that lots of great thinkers and writers have broached this topic. It's been a long time since I read Asimov; he is long overdue for a re-read.

  13. Robots scare me, especially when they have faces. I always thought Data was creepy! Sorry, that's not as deep as your other replies :-)

  14. Interesting stuff you have here!
    I never knew about this until reading your post. Learning new info is always a bonus! Thanks for sharing this.

  15. Interesting topic about robot ethics! You've already opened the table for lots of insightful discussions! Since my son studied engineering, he'll be a good place to start. Julie

  16. Excellent theme and an excellent post!

  17. Never really thought about robot ethics, but I suppose there could be a case for them. But then all we have now are just mindless machines, so I'm not sure they need it yet.

  18. I never question the need for the three laws, or something more nuanced that gives the same result. AI will, barring something entirely unforeseen, be given the power of life and death over us in the near future, and if I'm going to let an AI drive me to work in the mornings, then I don't want it to decide that the best way to avoid the traffic jam up ahead is to fling my car off a cliff.

  19. Hooray for Asimov!

    I'm here to say hi and welcome you to the Challenge!

    KarenG

  20. Oh this is a fun post. But I must say, the juxtaposition of robot and free will is a bit of an anachronism. The term 'robotic' literally means without thought. So if you want to go this route in a MS, consider renaming your artificial life form. ;)
    New follower!

  21. Laughing Ferret – you raise an EXCELLENT point. Because if robots do one day become self-aware, wouldn’t we all be better off if they thought of us as friends, rather than slave owners?

    Annalisa – don’t worry, they scare me too. It almost pained me to argue for robot rights.

    Pat – Like many sci-fi fans, I enjoy looking to the future :)

    Rusty – you raise a very valid point as well. I definitely don’t want to give AI the opening to kill me for the sake of convenience. I must admit, up until I wrote this blog post, it NEVER occurred to me that there was such a thing as “robot ethics”. I always thought of automatons as something to be feared and constantly monitored. My favorite movie IS The Terminator after all. I guess I consider this blog post a chance to, for lack of a better description, argue the other side.

    Melodie – I am not planning on writing about robots at the moment; it’s just something fun to play around with for the blogfest. I use the term “robot” very loosely here, but thanks for the suggestion.

  22. My hero is I. Asimov, so great choice for A in the Challenge. Have you met Daneel and Elijah Bailey, then?

    I write science fiction, too, so nice to meet you. I'm now following.

  23. Excellent thought provoking post. You make some really good arguments especially the one about humans not introducing another race until we can manage ourselves better.
    (PS: Forgot to say I love the blog name too.)

  24. This is definitely a great topic - and no easy answers. While we humans have a set of laws we all (mostly) agree to live our lives by, the same laws for robots would be programmed in - basically the same as brainwashing humans into obeying laws. While this worked for Doc Savage and his Crime College - not sure it would work in real life. Maybe instead of programming it could be presented as input to the robots - much like our laws. Terrific stuff!
