In 1942, the science fiction author Isaac Asimov published "Runaround," a short story in which he introduced the Three Laws of Robotics:
First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Of course, in 1942, the most sophisticated robots we could imagine looked something like this picture. To this day, we do not have any robots to which such laws would apply, and, as far as I know, there is no legislation anywhere in the world that applies to robots.
When we think of robots coming after people, we picture large flying machines attacking humans in apocalyptic movies. The reality is that we don't have such robots yet. But let's make no mistake about it: robots kill humans right now, in 2019.
During a recent trip to Syracuse, I noticed an MQ-9 Reaper drone landing at Hancock Field, which is at the Syracuse airport, where the New York Air National Guard’s 174th Attack Wing is based.
The Reaper is built by General Atomics, right down the street here in Poway, California, and costs $15.9 million apiece. As of 2014, 163 of them had been built, mostly for the United States military, though they are also used by NASA, the CIA, and U.S. Customs. Some have been sold to allied countries.
Here is a short video of a flight.
These drones are semi-automated robots. They can stay aloft for 14 hours when fully loaded. They are flown by pilots on the ground in the U.S., while the aircraft themselves can be operating anywhere in the world. An MQ-9 can attack a village in Yemen or Afghanistan, kill everyone at a wedding or a birthday party (because, presumably, a terrorist was present), and then its "pilot" can go home in Syracuse at 5:00pm and have a BBQ with friends in his backyard. We citizens don't actually know how autonomous these robots are, but I am quite certain they could fly autonomous missions, without pilots, if we so chose.
Is the Reaper following the First Law of robotics? Certainly not. It’s designed to perform reconnaissance and to kill people. That’s its purpose.
Another robot that recently killed humans by the hundreds is the Boeing 737 Max airliner. Granted, the plane is flown by humans, but the investigations have shown that the pilots were not able to override the automated flight control system (a piece of robotics known as MCAS). Due to a single faulty sensor, in two separate airplanes, the flight control system pushed the nose down, and minutes later everyone on board perished. In October 2018, 189 people died in Indonesia, and in March 2019, another 157 people died in Ethiopia. See the New York Times article here.
Another type of robot is the self-driving car. People are already getting killed in self-driving vehicles. While most experts, as well as insurance companies, believe that self-driving vehicles will be safer than human-driven vehicles, we really do not have enough incidents and experience monitoring self-driving traffic to establish reliable statistics. The bottom line is: we do not know. I personally believe that self-driving vehicles are going to be MUCH safer than human-driven vehicles.
We classify these vehicles by level of automation. The commonly used SAE scale runs from Level 0 (no automation at all) to Level 5, which is fully automated: unconditional, no-limits automated driving, with no expectation that a human driver will ever have to intervene. A Level 5 automated vehicle is clearly a robot. It is only a matter of time before our roads are full of them. It could be a few years, it could be a few decades, but it is a certainty.
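To make the level terminology concrete, here is a minimal sketch of the automation scale as a Python enum. The class and function names are my own illustration (they are not part of the SAE standard or of any real vehicle's software), and the one-line summaries are only rough paraphrases of the level definitions.

```python
# A rough sketch of the driving-automation levels as a Python enum.
# Level numbers follow the commonly cited SAE-style 0-5 scale; the
# names and one-line summaries are informal paraphrases, not the standard's text.
from enum import IntEnum

class AutomationLevel(IntEnum):
    NO_AUTOMATION = 0           # a human does all of the driving
    DRIVER_ASSISTANCE = 1       # the system helps with steering OR speed
    PARTIAL_AUTOMATION = 2      # steering AND speed, but a human must supervise
    CONDITIONAL_AUTOMATION = 3  # the system drives; a human must take over when asked
    HIGH_AUTOMATION = 4         # no human fallback needed, but only in limited conditions
    FULL_AUTOMATION = 5         # drives anywhere, anytime, no human intervention expected

def is_unquestionably_a_robot(level: AutomationLevel) -> bool:
    """In the sense used here: a Level 5 vehicle never expects a human
    driver to intervene, so it is clearly a robot."""
    return level == AutomationLevel.FULL_AUTOMATION

print(is_unquestionably_a_robot(AutomationLevel.FULL_AUTOMATION))  # True
```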
Let's think, for a moment, about the ethics of robotics, going right back to Asimov's three laws. Let's say a self-driving vehicle is driving down a narrow street and has to stop because a car has stalled while crossing in front of it. Let's also say the vehicle's brakes are failing. I know this should not be happening, but brakes do fail occasionally. The vehicle detects that the brakes are failing, and it now knows that an impact is 0.8 seconds away. But it has a choice to make. I might add here that 0.8 seconds is a long time for an automated intelligent system to "think" about a problem.
Here is the problem: the vehicle detects a young woman in the stalled car, with two small children in car seats in the back. It realizes that if it hits the stalled car broadside at its current speed, it will likely kill all three occupants. But it can swerve to the right and jump up onto the sidewalk. Unfortunately, an elderly couple is walking their dog there. The man is using a walker; the woman alongside him is holding the dog's leash. The car knows that if it swerves to the right, it will kill the elderly couple and the dog. It has no other choices.
The robot now needs to decide, in less than 0.8 seconds, which three souls it will almost certainly kill. How should it decide? Clearly, Asimov's three laws are not sufficient. Does it go for the elderly couple and spare the young woman and her children?
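To see why the First Law gives the engineers no guidance here, consider a purely hypothetical sketch of such a decision routine. Everything in it, the class, the candidate maneuvers, the casualty estimates, and the tie-breaking rule, is invented for illustration; it is not meant to describe how any real self-driving system works.

```python
# Purely hypothetical sketch of the dilemma above. All names and numbers
# are invented for illustration; no real vehicle is claimed to decide this way.
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    people_likely_killed: int
    animals_likely_killed: int

def first_law_allows(m: Maneuver) -> bool:
    # "A robot may not injure a human being..." -- in this scenario every
    # available maneuver injures humans, so the First Law rules out everything.
    return m.people_likely_killed == 0

def choose(maneuvers: list[Maneuver]) -> Maneuver:
    allowed = [m for m in maneuvers if first_law_allows(m)]
    if allowed:
        return allowed[0]
    # No permitted option exists, so the system must rank harms somehow.
    # WHICH ranking to use is exactly the unanswered ethical and legal question.
    return min(maneuvers, key=lambda m: (m.people_likely_killed,
                                         m.animals_likely_killed))

options = [
    Maneuver("brake and hit the stalled car", people_likely_killed=3, animals_likely_killed=0),
    Maneuver("swerve onto the sidewalk",      people_likely_killed=2, animals_likely_killed=1),
]
print(choose(options).name)  # picks the sidewalk -- but should it?
```

The point is not the particular ranking this sketch happens to use. The point is that some ranking has to be written down by someone before the vehicle ever ships, and Asimov's laws, like our current laws, say nothing about what it should be.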
I know this is a gruesome example, but this type of thing will happen, and to some degree it is happening already. Our American lawmakers are certainly not thinking about this right now. Congress can't seem to make up its mind about relatively benign ethical problems like illegal immigration and asylum law. The laws that should apply to robotics are a different matter altogether.
Robots are killing people, right now, as we speak, and we as a society are not yet ready to deal with the consequences.