There is more on the robot killing in South Africa.
In light of this event, as well as Ron Arkin's "ethical controls" report on robots, and because I'm returning to the subject to finish a report of my own, I have re-opened my informal survey on unmanned warfare. Click here to take the survey; your participation is appreciated. If you have already taken it, thank you.
Here are the draft findings in my report so far, many of which you'll find are in direct opposition to Ron Arkin's "ethical controls" report above:
1. Robots reduce the perceived cost of war and may result in increased kinetic action to defend the national interest. Robots may be used the way President Clinton lobbed cruise missiles against Afghanistan and Sudan. They may also be used to facilitate a more expeditionary foreign policy with less public and Congressional oversight. To some, the value of private security contractors will pale in comparison to that of robots.
2. Robots may reduce local perceptions of US commitment and of how much the US values the mission. If the US isn't willing to risk its own soldiers, do we value our people more than local lives? Is the mission not important enough to sacrifice our lives for?
3. Robots reduce or remove the human from the last three feet of engagement, and with it the opportunities to build trust and understanding, gain local intel, and get a "feel" for the street (mapping the human terrain). There's a reason urban US police departments put cops on foot patrols and bikes. There is an analogy here in the difference between armored Humvees/MRAPs and Jeeps: the latter force a connection and dialogue with locals. FM 3-24 highlights the problem of too much defensive posturing. In American-run detention centers in Iraq, General Stone noted the importance of skilled human contact with prisoners, not a sterilized warehouse in which robots replace untrained personnel. This is a noteworthy anecdotal measure of engaging "hearts and minds."
4. Robots continue the trend of increasing the physical distance between killer and killed. Even if the robot is teleoperated, the operator will not perceive the nuances of the environment. The robot may not know when not to engage or when to disengage. The psychological cost of killing will decrease, and targets will continue to be dehumanized.
5. Technological failures, or induced failures (i.e., hacking), would result in more negative press as the US continues to "hide behind" technology. Errors or accidents would likely be described by USG communications in a way that satisfies the US domestic audience. Before the South African robot-cannon, there were other high-profile examples of accidental killings of civilians, ostensibly by technology, including KAL 007 and Iran Air 655 (downed by the USS Vincennes), both notable for the different public diplomacy/communications strategies employed to address each incident.
6. Robot rules of engagement are being designed around Western/Machiavellian Laws of War (see Arkin's report on ethical controls). This lawyer-on-lawyer model is based on facts and is ignorant of the perceptions generated by actions. This perfect-world model may become more of a liability than an asset in 21st-century warfare. This is not to say that a more permissive environment should be created, but that the Machiavellian model of the ends justifying the means creates too permissive an environment, to the detriment of the mission. The "new" U.S. Counterinsurgency manual notes the same when it says that too much force, as well as too much defensive posturing, may be counterproductive.