Dr. Ron Arkin, who runs the Mobile Robot Lab at the Georgia Institute of Technology, has released his report “Governing Lethal Behavior”.
This article provides the basis, motivation, theory, and design recommendations for the implementation of an ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system so that they fall within the bounds prescribed by the Laws of War and Rules of Engagement. It is based upon extensions to existing deliberative/reactive autonomous robotic architectures, and includes recommendations for (1) post facto suppression of unethical behavior, (2) behavioral design that incorporates ethical constraints from the onset, (3) the use of affective functions as an adaptive component in the event of unethical action, and (4) a mechanism in support of identifying and advising operators regarding the ultimate responsibility for the deployment of such a system.
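Just to make the “post facto suppression of unethical behavior” idea concrete, here is a minimal sketch of a constraint-based veto in Python. This is purely illustrative and not taken from Ron’s actual design; every name, field, and threshold below is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    """A proposed lethal action produced by the tactical layer (illustrative only)."""
    target_id: str
    target_is_combatant: bool
    collateral_estimate: float  # estimated non-combatant harm, 0.0-1.0 (hypothetical scale)
    roe_authorized: bool        # cleared under the current Rules of Engagement

def permitted(e: Engagement, collateral_threshold: float = 0.0) -> bool:
    """Veto any proposed engagement that violates the encoded constraints.

    A toy stand-in for the general idea: constraints derived from the Laws of War
    and Rules of Engagement suppress behaviors the underlying architecture
    would otherwise execute.
    """
    return (
        e.target_is_combatant
        and e.roe_authorized
        and e.collateral_estimate <= collateral_threshold
    )

# A proposal against a valid target but with unacceptable collateral risk is vetoed.
proposal = Engagement("T-17", target_is_combatant=True,
                      collateral_estimate=0.4, roe_authorized=True)
print(permitted(proposal))  # False -> behavior suppressed before execution
```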
Ron and I have discussed his report and have agreed to disagree. I encourage you to read the report if you’re at all interested in this stuff.
Our disagreement mostly hinges on his mid-19th to mid-20th Century view of war, i.e. Lawfare, or industrial-age warfare based on rules of war. To Ron, justifications matter and there is time to discuss them. To me, perceptions matter more than facts, and an engagement model based on the Laws of War might actually create too permissive an environment. The era of Lawfare has passed.
To me, the fungibility of force decreases as the asymmetry of perception management increases (some might call that “controlling the narrative”).
For more on our points of disagreement, see this post.
If you participated in my survey on the subject, which I will be expanding, you’ll see I take a very different approach, based on 21st Century Struggles for Minds and Wills, where facts matter little, if at all, and perceptions can turn the tide.
I don’t know a whole lot about robots or weapons systems, and I have to admit some of this stuff sounds freaky, like a hunter-killer UAV named the Reaper… that’s something you don’t want hacked. But just thinking aloud, is there any research into employing robots in a missile defense system? Would something like the Reaper be able to track and destroy an incoming missile?
Not the Reaper itself, unless it were armed with the right missile. But as far as armed robots in the battlespace go, they’re already a fact, my friend: Phalanx for one, Patriot for another. MLRS is robot artillery, and Aegis warships are effectively robots. In these systems and many others, we’re barely in the loop and often just glorified caretakers.

As far as missile defense per se, a system like Phalanx, which defends against incoming anti-ship missiles and aircraft, is absolutely robotic, because you’re talking superfast decision cycles. We want to think a robot is something standing on its own without people around, but we’ve already deployed robots that simply have us nearby to keep them clean, etc.