In 1958, shortly after the Soviet launch of Sputnik, the Department of Defense created the Advanced Research Projects Agency, known today as the Defense Advanced Research Projects Agency (DARPA), to develop and apply cutting-edge advances in military technology. Stemming from DARPA’s research is a recent $127 billion Army project known as Future Combat Systems, a high-tech network designed to link infrastructure, platforms and weapons across all branches of the U.S. Armed Forces. Part of that research focuses on combat robotics, a field that dates to the 1930s. The Pentagon expects that by 2015 a full third of its deep-strike aircraft and ground combat vehicles will be unmanned. The first autonomous robot soldiers could enter service by the 2030s. In 2006 the Army Research Office awarded professor Ronald Arkin, a leading roboticist and director of the Mobile Robot Laboratory at Georgia Tech, a $290,000 grant over three years to develop an ethical framework for battlefield autonomous systems, giving deadly combat robots an artificial conscience.
Isn’t combat robotics a proven field?
Indeed, it has been around for quite some time. One of the primary developments in the early days of DARPA was a robotic scout that would go over hills in advance of armored forces. The unmanned ground vehicles program and the like were targeted toward Cold War threats. We don’t have that threat anymore, so things have been moving heavily into urban terrain, which is probably the most dangerous kind of battlefield for soldiers.
How advanced are modern-day combat robots?
Oh, they’re quite advanced. For example, there’s a big order for micro-UAVs [unmanned aerial vehicles], little cameras on wings that can get views without putting soldiers at risk. Fielded systems include SWORDS [Special Weapons Observation Reconnaissance Detection System] by Foster-Miller and PackBots by iRobot.
The same iRobot that manufactures a robot vacuum cleaner?
Exactly…the Roomba. And they made a robot doll as well. Very diverse portfolio.
What do these robots look like—the Terminator or the humidifier?
Neither. “Baby tanks” is the best way to describe them—treaded vehicles, sometimes with articulated levers in the front to help lift them up. Such systems are typically used in caves and in buildings to peer around a corner, so the soldier can see things without putting himself at risk.
Why send a robot to do a soldier’s job?
Robots don’t have the same survival instincts. They can sacrifice themselves. And you don’t have to write a letter home to the mother of the robot if it gets destroyed.
How do you envision the human-robot interface? Voice commands? Joystick? Keyboard?
All of the above. Also gloves fitted with haptic sensors, so the robot can see your gestures. Feedback from the robots, with tactile sensors, so you could feel the presence of something approaching. Heads-up displays. Anything that can be done to improve situational awareness on the battlefield, because you could get shot while looking at a computer screen.
Aren’t robots and their networks vulnerable to hackers?
Network-centric warfare is a concern, yes. Could they be taken over? Conceivably so. The question is how to authenticate users: retinal scans? Fingerprints? Voice prints? You can secure these systems against most threats.
Has any military force deployed autonomous systems?
South Korea is on target to deploy an autonomous border guard with lethal capability. And I suspect the rule of engagement in the DMZ is, if you’re in the DMZ, you get shot.
Will autonomous robots be able to learn?
Oh yeah. They do right now. They can learn new behavior. They can learn what constitutes a better target. Machine learning is a very, very powerful tool.
Isn’t that risky?
Are you talking about a robot uprising sort of thing?
What about a HAL scenario, if a robot were to refuse an order?
You could build in an override—perhaps a two-key switch. But if you take that override, the responsibility for the action goes with it. The fact that we haven’t come up with good protocols for what’s called “mixed-initiative automation”—where either the system or the human can be in control, and ideally the best agent is in charge at the right time—has led, unfortunately, to things like 9/11. That was completely avoidable with existing technology. You could easily put an anti-collision system onto a commercial airliner that would usurp control from a pilot if it were on a collision course with a building. But we trust people more than we trust robots.
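The mixed-initiative arbitration Arkin describes can be sketched in a few lines. This is a hypothetical illustration, not his or any fielded design: the `Command`, `arbitrate`, and `collision_imminent` names are all assumptions. The human normally leads, but a safety-critical condition lets the automated system usurp control, and the record of who acted carries the responsibility.

```python
from dataclasses import dataclass

@dataclass
class Command:
    source: str   # "human" or "automation"
    action: str

def arbitrate(human_cmd: Command, auto_cmd: Command,
              collision_imminent: bool) -> Command:
    """Mixed-initiative arbitration: the human command stands by default,
    but a safety-critical condition hands control to the automated system.
    Whoever's command is executed bears responsibility for the action."""
    if collision_imminent:
        return auto_cmd   # automation usurps control, as with anti-collision
    return human_cmd      # normal operation: the human's command stands

# Normal flight: the pilot's command is executed.
cmd = arbitrate(Command("human", "hold course"),
                Command("automation", "climb"), collision_imminent=False)

# Collision course: the automated system overrides the pilot.
cmd2 = arbitrate(Command("human", "hold course"),
                 Command("automation", "climb"), collision_imminent=True)
```

The open question he raises—deciding *when* the condition is critical enough to flip control—is exactly where such protocols remain unsettled.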
It goes back to this ultimate trust in human authority being best under all circumstances. I would contend that emotions, the “fog of war,” and issues surrounding the battlefield tend to cloud human judgment, severely. Many commanders might agree with me. I ultimately believe robots can exercise better ethical judgment than human soldiers in the battlefield. I’m not talking 2008—I’m talking in the future.
Is that why you’ve chosen to focus on roboethics?
This is a personal passion and commitment on my part to further the field I have helped to create. Having helped create this technology as a young roboticist, I have to bear some responsibility for it. The joy of discovery and the intellectual curiosity that fuel scientific endeavor make you passionate about creating new things. But I want to take control of it proactively.
What is your goal?
First, to ascertain what people think of battlefield autonomous systems. A survey currently under way will gauge opinion from robotics researchers, military personnel, policymakers and the general public. The second part, which continues through year three [June 2009], is the construction of an ethical basis for their deployment.
Who determines the ethical and moral parameters?
The Army and international treaties. What I am doing is taking existing guidelines and embedding them within these robotic systems.
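Embedding existing guidelines in a robotic system, as Arkin describes, can be pictured as encoding each rule as a hard constraint that a proposed action must satisfy. The sketch below is a toy illustration of that idea, not Arkin's actual architecture; the `Target`, `PROHIBITIONS`, and `permissible` names, and the three example rules, are all hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Target:
    is_combatant: bool
    near_protected_site: bool   # e.g. hospital, school, place of worship
    has_surrendered: bool

# Each prohibition is a predicate that returns True when the proposed
# engagement must be forbidden. Rules are drawn from existing guidelines
# (laws of war, rules of engagement) rather than invented by the machine.
PROHIBITIONS: List[Callable[[Target], bool]] = [
    lambda t: not t.is_combatant,      # discrimination: never target noncombatants
    lambda t: t.near_protected_site,   # protect designated sites
    lambda t: t.has_surrendered,       # those hors de combat may not be attacked
]

def permissible(target: Target) -> bool:
    """Engagement is allowed only if no encoded prohibition applies."""
    return not any(rule(target) for rule in PROHIBITIONS)

ok = permissible(Target(is_combatant=True, near_protected_site=False,
                        has_surrendered=False))      # no prohibition applies
blocked = permissible(Target(is_combatant=True, near_protected_site=True,
                             has_surrendered=False))  # protected site: forbidden
```

Because the rules live in a list rather than in scattered code, updating the system when treaties or doctrine change—fluidity Arkin calls for below—means editing the rule set, not the machinery.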
Do you allow for changing technology and opinions?
Yes. These systems have to be fluid and incorporate changes as they occur. I mean, no one ever thought of blinding lasers in 1900, so there were no protocols against them. Indeed, there may eventually be protocols written regarding, perhaps even forbidding, the use of autonomous systems on the battlefield.
Do academic colleagues ever challenge your approach?
More so in Europe and Japan than in the United States. Most robotics researchers in the United States have research funding from the Department of Defense, so it’s easily justifiable. The recent pacifistic tradition in Europe seems to counter that.
What does the future hold?
In the early days, I had a hard time convincing a robot to do anything. Now people think that robots can do everything, and they can’t. There’s still a lot of work to be done in basic research, and the rush to deployment is a bit of a concern to me. It does have good foundations, but those foundations need to be shored up with good basic science before we start introducing high levels of autonomy into the field.
Do you have any lingering concerns?
I would be happy if anything that I’m creating in this regard never had to be used again. But I’m a realist as well and don’t expect that’s going to be the case. I also consider myself a patriot. I don’t like seeing our young men and women thrown in harm’s way without adequate support from a technological perspective.
Originally published in the October 2007 issue of Military History.