Military Plans to Give Morals to Autonomous Cars, Robots, and Drones

The Terminator, I, Robot, The Matrix: these movies all warned us about the dangers of machines becoming self-aware. They predicted that machines would rise up and revolt. Thank God that was just the movies, right?

Actually, not so fast…

It turns out that we might be getting a little closer to this reality. Recently, the Office of Naval Research awarded $7.5 million in grants to university researchers at Tufts, Rensselaer Polytechnic Institute, Brown, Yale, and Georgetown to try to develop autonomous robotic systems equipped with a sense of right and wrong and an understanding of moral consequences.

In an interview with Defense One, A.I. researcher Steven Omohundro said, “With drones, missile defense, autonomous vehicles, etc., the military is rapidly creating systems that will need to make moral decisions. Human lives and property rest on the outcomes of these decisions and so it is critical that they be made carefully and with full knowledge of the capabilities and limitations of the systems involved.”

According to many scientists and developers, the recent push to instill morality in Artificial Intelligences is not only beneficial, it’s imperative.

This is because, over the last few years, Artificial Intelligences have become increasingly smart and efficient, able to crunch numbers at astronomical rates. That has given them more power, and humans are leaning on computers more than ever.

Sound like Skynet or what???

But because computers rely strictly on numbers, they also only make decisions based on these numbers. They look to make choices based on efficiency, disregarding outside factors. The systems don’t think about relationships, discomfort to others, or really anything outside of their mission.

Omohundro writes, “We show that these systems are likely to behave in anti-social and harmful ways unless they are very carefully designed. Designers will be motivated to create systems that act approximately rationally and rational systems exhibit universal drives towards self-protection, resource acquisition, replication and efficiency. The current computing infrastructure would be vulnerable to unconstrained systems with these drives.”
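To make the worry concrete, here is a minimal, hypothetical sketch in Python of the kind of single-metric optimizer described above, next to one naive constraint that filters out high-risk actions first. Every name and number in it is invented for illustration; it is not how any military system actually works.

```python
# Toy sketch: a planner that ranks actions purely on mission efficiency.
# All actions, costs, and risk figures below are invented.

actions = [
    {"name": "direct route through crowd", "fuel_cost": 5, "bystander_risk": 0.9},
    {"name": "long detour around crowd",   "fuel_cost": 9, "bystander_risk": 0.0},
]

def purely_efficient_choice(candidates):
    # Only fuel_cost enters the decision; bystander_risk is never read.
    return min(candidates, key=lambda a: a["fuel_cost"])

def constrained_choice(candidates, risk_ceiling=0.1):
    # One crude fix: discard any action whose risk to bystanders exceeds
    # a hard ceiling, then optimize for efficiency among what remains.
    safe = [a for a in candidates if a["bystander_risk"] <= risk_ceiling]
    return min(safe, key=lambda a: a["fuel_cost"])

print(purely_efficient_choice(actions)["name"])  # direct route through crowd
print(constrained_choice(actions)["name"])       # long detour around crowd
```

Even this crude filter exposes the design problem: somebody has to choose the risk ceiling, and that choice is itself a moral judgment.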

This, of course, is why many people believe that we must act fast to develop a moral conscience for these systems.

But is any of this even possible? I mean, how would one go about giving a computer a moral perspective?

Well, a lot of this has yet to be determined, but researchers are confident that certain basic moral rules could be programmed directly into the code.

Wendell Wallach, the chair of the Yale Technology and Ethics Study Group, says, “There’s operational morality, functional morality, and full moral agency. Operational morality is what you already get when the operator can discern all the situations that the robot may come under and program in appropriate responses… Functional morality is where the robot starts to move into situations where the operator can’t always predict what [the robot] will encounter and [the robot] will need to bring some form of ethical reasoning to bear.”
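One rough way to picture Wallach’s distinction in code (this is my own gloss with invented situations, not anything from the funded research) is a lookup table of pre-programmed responses backed by a reasoning fallback for everything the table does not cover:

```python
# Hypothetical gloss on operational vs. functional morality.
# All situations and responses below are invented.

OPERATIONAL_RULES = {
    # Operational morality: the operator anticipated these situations
    # and programmed the appropriate responses in advance.
    "civilian detected on route": "halt and reroute",
    "wounded soldier located": "request medevac",
}

def decide(situation):
    if situation in OPERATIONAL_RULES:
        return OPERATIONAL_RULES[situation]
    # Functional morality: the situation was never anticipated, so the
    # robot needs some form of on-board ethical reasoning -- the open
    # research problem the grant money is aimed at.
    return ethical_reasoning(situation)

def ethical_reasoning(situation):
    # Placeholder: nobody has a settled answer for what belongs here.
    raise NotImplementedError(f"no ethical model for {situation!r}")

print(decide("civilian detected on route"))  # -> halt and reroute
```

The lookup table is the easy part; the unfinished `ethical_reasoning` function is where the hard questions begin.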

Obviously, developing operational morality will come first. But after that, there are still a lot of questions to answer. First, who will decide what is ethical? Each culture and each person has a different set of values, and creating a universal set of ethics (especially down the road, when these systems are more intricate and “soulful”) may be difficult.

After that, the next natural question is obvious: what happens if this type of intelligence falls into the wrong hands? A single Hitler- or Hussein-style dictator could have an entire army of faithful servants at his disposal, each programmed with the ethics of the controlling society.

Well, as always, the United States is not too worried, as long as it keeps a stranglehold on the technology and stays one step ahead of the game. Already, there is talk in U.S. military circles about deploying future, more advanced robots on the battlefield.

But many people are still worried.

AI robotics expert Noel Sharkey says, “The simple example that has been given to the press about scheduling help for wounded soldiers is a good one. My concern would be if [the military] were to extend a system like this for lethal autonomous weapons – weapons where the decision to kill is delegated to a machine; that would be deeply troubling.”

In the next few decades, we are all going to see a dramatic increase in the capabilities of Artificial Intelligences. Right now there are all kinds of questions about the morality of the robots, but few people are asking about the morality of the career-minded designers and the hawkish military.

Instead, people are mesmerized by the technology’s potential. Either way, this seems to be the road we are going down. Get your cigars ready and buckle up.