Saturday, July 30, 2011

Asimov's Laws of Robotics and Web 2.0 - Part 2

In my previous post I mentioned a possible use of Asimov's laws of robotics in modern web applications (and other software). In this post I will elaborate on how exactly that could be done, and on some challenges of doing so that some people might not be willing to accept.

First, let's analyze the laws one by one:

1. A software robot may not injure a human being or, through inaction, allow a human being to come to harm.

There are several ways a web application can 'injure' a human being. For example, exposing someone's private data to the wrong people can cause all sorts of trouble, ranging from being dumped because of some photos from a party to having your house robbed while you are on vacation by burglars who found your address and read your status message 'We are on vacation!'. Other ways of hurting people, or allowing them to come to harm, include child pornography, cyber stalking (possibly in combination with real stalking), and so on.

2. A software robot must obey any orders within its scope given to it by authorized human beings, except where such orders would conflict with the First Law.

This one is pretty straightforward, since it encompasses a requirement common to all software - authentication and authorization. I would also put user friendliness and overall user experience in this category, because they give humans an easier way to communicate their orders to software. Maybe it's a stretch, but even performance metrics like response time could fit nicely here.
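To make that concrete, here is a minimal sketch in Python of a Second Law gate; the user model and every identifier in it are hypothetical, not taken from any real framework:

```python
class OrderRefused(Exception):
    """Raised when a human is not allowed to give this order."""

def execute_order(user, action, perform):
    """Run `perform` only for an authenticated, authorized human."""
    # Authentication: the order must come from a known human being.
    if not user.get("authenticated", False):
        raise OrderRefused("Please log in first.")
    # Authorization: that human must be permitted to give this order.
    if action not in user.get("permissions", ()):
        raise OrderRefused("You are not allowed to '%s'." % action)
    return perform()

# Example: an editor may publish a post, an anonymous visitor may not.
editor = {"authenticated": True, "permissions": ("publish_post",)}
execute_order(editor, "publish_post", lambda: "post published")
```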

3. A software robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

As I said in my previous post, this one should be the easiest to implement, since it basically implies that software should be stable enough not to crash or delete its own data.
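As an illustration, one common way to honor the 'not delete its own data' part is to never overwrite state in place. A minimal sketch (the file-based storage is my assumption, not something from the original post):

```python
import os
import tempfile

def save_state(path, data):
    """Write `data` to `path` atomically, so that a crash mid-write
    never destroys the previous copy of the file."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as tmp:
            tmp.write(data)
            tmp.flush()
            os.fsync(tmp.fileno())  # force the bytes to disk
        os.replace(tmp_path, path)  # atomic rename over the old file
    except BaseException:
        os.unlink(tmp_path)  # discard the half-written temp file
        raise
```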

As you may have noticed, the software requirements imposed by this interpretation of Asimov's laws are not in any way new. Respect for privacy, user friendliness, fast response time and overall stability have always been important aspects of software design. What is important about the Laws of Robotics isn't any one particular law - it's their order of importance, and that is the main point of this article.

In order to be Asimov compliant, software not only has to meet all these requirements, but has to prioritize them in the correct order.

Example 1: Let's say a hacker tries to steal personal data from an online service. Our software detects the unauthorized intrusion, but has no way to stop it other than to shut down the server or otherwise cause downtime. Now, 'Asimovian' software has to protect its own existence in order to comply with the Third Law. However, human beings may come to harm if their personal information leaks to shady people, and the First Law takes precedence over the Third. So the software has to shut down and cause downtime, rather than expose its users to risk.
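A toy sketch of that decision in Python; every identifier below is hypothetical, and the point is only that actions are ranked by the law they serve, not by their cost to the service:

```python
# Lower number = higher priority, mirroring the order of the laws.
LAW_PRIORITY = {
    "protect_users": 1,   # First Law:  users' data must not leak
    "obey_orders": 2,     # Second Law: follow authorized commands
    "self_preserve": 3,   # Third Law:  avoid downtime and data loss
}

def choose_action(candidates):
    """Pick the action that serves the most important law.

    `candidates` maps action names to the law each one serves, e.g.
    {"shutdown_server": "protect_users", "keep_running": "self_preserve"}.
    """
    return min(candidates, key=lambda action: LAW_PRIORITY[candidates[action]])

# During the intrusion above, shutting down wins over staying up:
assert choose_action({"shutdown_server": "protect_users",
                      "keep_running": "self_preserve"}) == "shutdown_server"
```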

The problems with such an approach are quite evident:
1. Downtime = loss of money.
2. If a company shuts down its server to prevent information theft, it is practically admitting it has been hacked. Much better to just pretend it didn't happen and hope that no one finds out.

Guess what, people generally do find out when their credit card gets drained. And they hate it much, much more than when they see a 404 page!

Saturday, July 2, 2011

Asimov's Laws of Robotics and Web 2.0 - Part 1

As any science fiction geek knows, Isaac Asimov wrote a series of short stories and novels about robots. He was more or less the first author to approach the topic seriously, and he even coined the word 'robotics', nowadays used for both the scientific discipline and the industry. The most famous aspect of his robots is that they always have to respect the famous Three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

It is the year 2011 and a lot of our everyday reality depends on robots, even if we don't see or hear about them every day. Our cars are constructed on assembly lines consisting almost entirely of robots, we can buy a robot pet, or watch one of those Japanese robots dancing or playing a violin. Heck, even this very blog post will be read, indexed and catalogued by a search engine (ro)bot.

In a way Asimov was right: unlike other science fiction predictions (flying automobiles, anyone?), we really do have useful robots. All right, they don't have positronic brains, but they do have semiconductor microchips. And they don't have the Three Laws built in. So, why don't our robots respect them, like Asimov's robots did? Wouldn't they be more useful, or at least safer, if they did? Unfortunately, at this time it is an impossible task. For example, how would we formally define the concept of 'injure'? Also, not all injuries are inflicted willingly. A modern robot may crush a human simply by tripping and falling on him. Most modern robots can barely walk and hold objects. Making sure that they don't injure people by accidentally hitting them is hard enough; covering all possible meanings of 'injure' at this point in our technological development really is impossible.

OK, but I also mentioned a software robot browsing the web. With software robots there isn't even a remote possibility of physically harming people. A simple fact: if a robot is incorporeal, it can't crush anyone's bones. This lets us avoid the fundamental problem with implementing the First Law. The Second Law can be implemented easily if we let the robot decide whether or not to obey an order based on the human's authentication and permissions. Also, a robot should only be expected to do what it is programmed to do, so an order for something outside its scope should be politely refused. The Third Law is much easier to implement, since all it requires is that the software bot does its best not to crash and lose its data.
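For instance, here is how such a polite refusal might look in Python for a toy command bot; the command names and handlers are made up for illustration:

```python
# The bot's whole scope: everything it is programmed to do.
HANDLERS = {
    "fetch_page": lambda url: "fetched %s" % url,
    "index_page": lambda url: "indexed %s" % url,
}

def obey(order, argument):
    """Carry out an order, or politely refuse one outside the bot's scope."""
    handler = HANDLERS.get(order)
    if handler is None:
        return "Sorry, '%s' is outside my scope." % order
    return handler(argument)

print(obey("index_page", "http://example.com"))  # indexed http://example.com
print(obey("delete_web", "everything"))          # Sorry, 'delete_web' is outside my scope.
```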

So, we might try to adapt Asimov's laws into Three Laws of Software Robotics (changes emphasized):

1. A *software* robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A *software* robot must obey any orders *within its scope* given to it by *authorized* human beings, except where such orders would conflict with the First Law.
3. A *software* robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I'll just conclude this article with an important note. Some of you may ask: "But why bother? Why would anyone want to waste time trying to make online software behave like robots from science fiction stories?" It's not about trying to make the present (formerly known as 'the future') look like what we expected it to when we were kids reading futuristic magazines. OK, actually it is, but that is not the main point. :)

The thing is that the way the Web has been developing lately - with all the privacy concerns, corporate data-mining conspiracies, stalkers, pedophiles, etc. - it may be necessary to address all these issues and try to stop web services from being harmful. Maybe we need all that software to be more Asimov compliant.