The Public Diplomacy of Drones

Today’s article in the Wall Street Journal entitled “More Drones, Fewer Troops” looks at the policy behind the increasing use of and reliance on drones, but it misses an essential point: unmanned warfare’s impact on public opinion and public diplomacy.  While the technical and budgetary advantages of unmanned systems are front and center, their impact on foreign policy is often an aside, usually in the context of meddlesome by-products of using “drones.” We have seen, even if we have not acknowledged, the powerful impact of human intervention (e.g. SEAL Team Six) compared with that of robots, whether remote-controlled or autonomous.  Leaving the public diplomacy of these activities on the margins of planning is short-sighted and unwise.

In my article “The Strategic Communication of Unmanned Warfare” (Serviam, June 2008), I explored the impact of ground robots, intentionally avoiding flying drones because, since World War II, flyers and targets have been largely anonymous to each other: death rained from above.  Today’s communication environment and technical advances are removing the “air gap” between the ground and the flyer, or drone in this case, allowing for direct links between policy and the people on the ground.

This topic requires a deeper discussion.  Public diplomacy and strategic communication must be in on the take-offs of drones, not just the landings, crash landings or otherwise.  In lieu of an organization that could look at this, I invite comments and articles on the subject to be posted at MountainRunner.us.

See also Unintended Consequences of Armed Robots in Modern Conflict from October 2007.

Robots as Strategic Corporals

This week, the Complex Terrain Laboratory, or CTLab, hosted another of its brilliant online symposiums. The topic of this one was Peter W. Singer’s book Wired for War and robots in warfare.

There are a lot of good posts over there to read. Go check them out.

My first of at least two posts just went up: Robots as Strategic Corporals. The second post will look at justifying robots’ actions based on what can be done according to Western notions of the laws of war, which creates, counterintuitively, an engagement model that is too permissive and detrimental to the mission as a whole. Certain acts, justifiable under international law, could backfire if the information effects are not anticipated, planned for, and managed effectively.

Read Robots as Strategic Corporals at CTLab.

Event: Online Symposium on P.W. Singer’s Wired For War

Over at CTLab next week, I’ll be in an online discussion built around Peter W. Singer’s outstanding book, Wired for War. Read the CTLab announcement:

CTlab’s second symposium in its 2009 series starts next week, on Monday, 30 March, and will run for four days, until 2 April (or until participants run out of steam, which might take longer). The subject: Peter Singer’s new book, Wired For War: The Robotics Revolution and Conflict in the 21st Century (Penguin Press: 2009).

This is going to be an exciting booklab, on a work that’s been getting broad exposure, in and out of the blogosphere. Peter Singer, a Brookings Institution Senior Fellow for Foreign Policy, and Director of its 21st Century Defense Initiative, will be participating on day 1. Proceedings will be compiled and indexed on a separate page for ease of reference, here.

Confirmed participants include:

  • Kenneth Anderson (Law; American University)
  • Matt Armstrong (Public Diplomacy; Armstrong Strategic Insights Group)
  • John Matthew Barlow (History; John Abbott College)
  • Rex Brynen (Political Science; McGill University)
  • Antoine Bousquet (International Relations; Birkbeck College, London)
  • Charli Carpenter (International Relations; UMass-Amherst)
  • Andrew Conway (Political Science; NYU)
  • Jan Federowicz (History; Carleton University)
  • John T. Fishel (National Security Policy; University of Oklahoma)
  • Michael A. Innes (Political Science; University College London)
  • Martin Senn (Political Science; University of Innsbruck)
  • Marc Tyrrell (Anthropology; Carleton University)

Quite a few of our guest participants are active on the web, as well. Many participate in the Small Wars Council, and write online about highly topical security issues. Blogs represented:

Robots!

Check out David Axe’s video series on military robots at GOOD magazine. It’s a good overview for anyone interested in unmanned systems, autonomous and tele-operated. Note: I would have liked it if he had mentioned other “robot” systems (by his implicit definition), such as Patriot and AEGIS, that had notable accidental kills: an allied pilot and a civilian airliner, respectively.

Think the U.S. is the only country with robots? Lots of other countries are deploying unmanned systems, like Pakistan. From Danger Room:

“Al-Qaida and Taliban fighters use not just mobile and satellite phones for communication, but also sophisticated military radios,” Defense News notes. So companies like East West Infiniti are building SIGINT [signals intelligence] for small drones and robotic blimps, to capture those conversations.

  • Site note: Whenever I post on robots, hits from China, Singapore, Korea, Pakistan, and Indonesia spike.

Doolittle’s spies: Pigeons, Squirrels… time again for Project ACORN

In July 2007 it was spying squirrels from Israel. Now, it’s pigeon spies:

Iranian security forces have apprehended a pair of "spy pigeons," not far from one of the country’s nuclear processing plants. If local media reports are to be believed, that is.

One of the pigeons was caught near a rose water production plant in the city of Kashan, down the road from the Natanz uranium enrichment facility.  It had "a wired rod" and "invisible threads… fixed to its body," an unnamed source tells the Etemad Melli newspaper. A second, black pigeon was nabbed earlier in the month. …

Time once again for Project ACORN, the Autonomous Coordinated Organic Reconnaissance Network (first fielded July 2007).

Robots on the Radio: interviews with Arkin, Asaro, and Armstrong on warbots

In the first of a two-part program broadcast in England, Dr. Noel Sharkey interviews Dr. Ron Arkin, Dr. Peter Asaro, and me on his Sound of Science program.  Stream or download the interview from England here.  (Note: two minutes of station promotion precedes the discussion.)  The interview series looks at the ethical issues of using military robots that are allowed to apply lethal force on their own, the Laws of War and international law on discrimination, as well as the robots’ role in war.

This first episode includes:

  • Dr. Ron Arkin, Regents’ Professor, College of Computing, Georgia Tech, talks about some of the dangers facing us in the near future with robots that decide who to kill. Professor Arkin tells us about his work on developing an Artificial Conscience for a robot and about some of the difficult ethical decisions that both soldiers and robots have to make in war.
  • Dr Peter Asaro, the exciting young philosopher from Rutgers University. Peter talks about a range of issues concerning the dangers of using autonomous robot weapons. He cautions us about the sci-fi future that the military seems to be heading towards and how a robot army could take over a city. Interestingly, he makes the provocative claim that one of the first uses of insurgency was the early Americans against the British redcoats.
  • Matt Armstrong, an independent analyst based in California specialising in public diplomacy and strategic communications. Matt writes a famous blog called MountainRunner. On the programme he discusses the “hearts and minds” issue (a term he dislikes) and the problems with having a robot as the “strategic corporal” of the future.

My segment begins around the 44-minute mark. Briefly, I don’t want to comment in depth on the interviews now, but my views on the subject are based on public diplomacy, counterinsurgency doctrine, and civil-military relations. To be more specific, I am looking at the informational effect of these systems, the need to build trust and show commitment among local populations, and the impact of the commodification of violence, and the reduced cost of violence, on Congressional oversight and Executive decision-making, among other considerations (see more here).

This was my very first radio interview, so unsurprisingly there were a couple of significant points I didn’t get to, but hopefully the essential points were captured. Listening to my interview again, there are a few words and phrases I will avoid next time (like referring to “passages” in FM3-24), as well as other changes. Live and learn.

The second episode will include interviews with Rear Admiral Chris Parry, Richard Moyes of Landmine Action, and military robotics people from NATO, the German and Swedish Armies, as well as the French Defense Ministry.

An important correction: I unintentionally demoted Lieutenant Colonel Chris Hughes when I referred to him as a Captain in the Najaf example.

Also, a clarification from Ron after listening to my interview:

For future reference though I’d like to point out, that I have never advocated that robots be used as prison guards. I only use Abu Ghraib as an illustration of the propensity of ethical violations by human beings. A system capable of independently monitoring human performance would be helpful I’d suspect – but I agree completely that humans should not be removed.
I further advocate, as you do, that robots should *never* fully replace the presence of soldiers, but rather serve as organic assets beside them for very specialized missions such as room clearing, countersniper, and others as pointed out in my scenarios. These are also not intended (at least in my work) where active civilian populations are present, but only for full-out war (declared). The systems I am working on are for the next conflict (not the current one) whatever that may be – and also for the so-called “Army after next”.

As Noel and Ron said, the more we talk about this in the open, the smarter we’ll be in the deployment of robots.

Your comments are appreciated.

See also:

The Strategic Communication of Unmanned Warfare

Modern conflict is increasingly a struggle for strategic influence rather than territory.  This struggle is, at its essence, a battle over perceptions and narratives within a psychological terrain under the influence of local and global pressures.  One of the unspoken lessons embedded in the Counterinsurgency Manual (FM3-24) is that we risk strategic success by relying on a lawyerly conduct of war that rests on finely tuned arguments of why and why not.  When both too much defense and too much offense can be detrimental, we must consider the impact of our actions, the information effects.  The propaganda of the deed must match the propaganda of the word.

As Giulio Douhet wrote in 1928,

“A man who wants to make a good instrument must first have a precise understanding of what the instrument is to be used for; and he who intends to build a good instrument of war must first ask himself what the next war will be like.”

Secretary of Defense Robert M. Gates has said that there is too much spending geared toward the wrong way of war.  I find this to be particularly true in the area of battlefield robots.  Much (if not all) of the unmanned systems planning and discussion, especially with regard to unmanned ground combat vehicles, does not take into account the nature of the next war, let alone the current conflict.

Last year I posted an unscientific survey that explored how a ground combat robot operating away from humans (remote controlled or autonomous) might shape the opinions of the local host family.  The survey also explored the propaganda value of these systems to the enemy, in the media markets of our allies, Muslim countries, and here in the United States.  The survey results weren’t surprising.

Serviam Magazine just published what could be construed as an executive summary of a larger paper of mine to be published by Proteus later this year.  That paper is about four times longer and adds a few points with more details.  In the meantime, my article that appeared in Serviam, “Combat Robots and Perception Management,” is below.

Continue reading “The Strategic Communication of Unmanned Warfare”

Article: Combat Robots and Perception Management

Robots will figure prominently in the future of warfare, whether we like it or not. They will provide perimeter security, logistics, surveillance, explosive ordnance disposal, and more because they fit strategic, operational, and tactical requirements for both the irregular and “traditional” warfare of the future. While American policymakers have finally realized that the so-called “war on terror” is a war of ideas and a war of information, virtually all reports on unmanned systems ignore the substantial impact that “warbots” will have on strategic communications, from public diplomacy to psychological operations. It is imperative that the U.S. military and civilian leadership discuss, anticipate, and plan for each robot to be a real strategic corporal (or “strategic captain,” if you consider their role as a coordinating hub).

Source: my article “Combat Robots and Perception Management”, published in the 1 June 2008 issue of Serviam Magazine. The magazine’s website is no longer available, so it is reposted here: The Strategic Communication of Unmanned Warfare.

Somebody, Prove my theory

Today is Veterans Day here in the United States and a good time to wonder something out loud. Actually, I’ve been saying this in meatspace for a while, but I don’t think I’ve put it on the blog yet.

As you think about our country’s veterans, ask yourself how many veterans you actually know. It’s very likely that you, as a reader of this blog, know (or are) a veteran: you are reading what some call a milblog after all.

Here’s my theory: more Americans know a mercenary, but don’t know it, than know a vet, adjusting for sheer numbers. In other words, contractors are “outside” in the public more than current or former serving members of America’s military.

I’d like to see a study that looks at how many people know a veteran and compare that to how many people know a contractor (i.e. a merc). Like it or not, private security companies have brought back the citizen-soldier. The All-Volunteer Force, on the other hand, has created an increasingly insular sub-group distanced from the larger population on several levels.

The voluntary association of contractors makes it easier for its members to slip in and out of military duty and into the role of your neighbor, your co-worker, that IT recruiting manager you worked with, the cop who’s a brother of a friend, or that dad you met at a BBQ.

No longer do you need to live near a military base or work in the defense industry to meet someone sanctioned by the state to carry a weapon into a conflict zone. In other words, while the public is increasingly separated from serving military personnel, it is increasingly in contact with contractors but does not know it.

What to think about this? First, Congress and the media don’t care about the people who don’t officially wear a flag on their shoulder. Second, this indicates a depersonalization of war, an argument Kohn and Gelpi make. Third, the already scarce personal links between the public and its soldiers will continue to diminish as conflict is outsourced to machines.

With fewer Americans who know somebody presently serving or even directly impacted by the conflicts after 9/11, there is a redevelopment of a distinct and professional warrior class in the United States, proficient in the conduct of war, that harkens back to the professional mercenary soldiers of earlier eras. The modern All-Volunteer Force (AVF) is far removed from the modern political and social spheres of power in the United States, leading to suggestions that non-veteran civilians may be more “interventionist” while simultaneously placing more constraints on the use of military force. At the same time, the American citizen-soldier is increasingly an endangered species as soldiers and their families turn inward and focus on their own support networks. National Guard recruiting trends reinforce this point, as recruits are increasingly drawn from the ranks of former military and not from the general public. It is likely robots will support and increase pressure on this trend, just as private security companies do.

Just something to think about on this Veterans Day.

(Major G, first round’s on me tonight, second round too if you’re reading this…)

Automation not a factor in ‘Robotic rampage’?

From New Scientist Tech:

A female soldier tried to free the shell, but another shell was accidentally fired, causing some rounds in the gun’s two near-full ammunition magazines to explode. The gun began firing again and swung in a circle, leaving nine soldiers dead and eleven wounded.

Blogs and other online news sources have suggested the incident may be due to software problems, highlighting the danger of automated weapon systems. But Jim O’Halloran of defence publication Jane’s Land-Based Air Defence says the incident is more likely the result of a simple mechanical failure.

Interestingly, this reinforces my point, explored in the survey, that the presence of automation has the power to change the debate over an incident.

Unintended Consequences of Armed Robots in Modern Conflict

There is more on the robot killing in South Africa.

A South African robotic cannon went out of control, killing nine, “immediately after technicians had finished repairing the weapon,” the Mail & Guardian reports.

In light of this event, Ron Arkin’s “ethical controls” on robots, and the fact that I’m returning to the subject to finish a report, I re-opened a survey on unmanned systems in conflict, primarily ground vehicles. The survey has been expanded with a few questions left out of the earlier iteration. If you filled out the survey before, you might be able to edit your previous answers.

Click here for an informal survey on unmanned warfare; your participation is appreciated. If you have already taken the survey, provided you haven’t cleared the cookie, you should be able to pick up the survey from where you left off. No, this isn’t the proper way to do a survey, but this is an informal query. The results will be included in a report I am completing on the subject (an early and rough draft was presented earlier).

The draft findings so far, many of which you’ll find are in direct opposition to Ron Arkin’s “ethical controls” report above, are:

1. Robots reduce the perceived cost of war and may result in increased kinetic action to defend the national interest. Robots may be used like the cruise missiles President Clinton lobbed at Afghanistan and Sudan. They may also be used to facilitate a more expeditionary foreign policy with less public and Congressional oversight. To some, the value of private security contractors will pale in comparison to that of robots.

2. Robots may reduce local perceptions of US commitment and valuation of the mission. If the US isn’t willing to risk our own soldiers, do we value our people more than local lives? Is the mission not important enough to sacrifice our lives?

3. Robots reduce or remove the human from the last three feet of engagement and, with it, opportunities to build trust and understanding, as well as to gain local intel and get a “feel” for the street (mapping the human terrain). There’s a reason why urban US police departments put cops on foot patrols and bikes. There is an analogy here with the difference between armored Humvees/MRAPs and Jeeps: the latter force a connection and dialogue with locals. FM3-24 highlights the problem of too much defensive posturing. In American-run detention centers in Iraq, General Stone noted the importance of skilled human contact with prisoners, not a sterilized warehouse run by robots replacing untrained personnel. Noteworthy is this anecdotal measurement of engaging “hearts and minds.”

4. Robots continue the trend of increasing the physical distance between killer and killed. Even if the robot is teleoperated, the operator will not perceive the nuances of the environment. The robot may not know when not to engage or when to disengage. The psychological cost of killing will decrease and targets will continue to be dehumanized.

5. Technological failures, or induced failures (i.e. hacking), would result in more negative press as the US continues to “hide behind” technology. Errors or accidents would likely be described by USG communications in a way that satisfies the US domestic audience. Before the South African robot-cannon, other high-profile examples of the accidental killing of civilians, ostensibly by technology, include KAL 007 and Iran Air 655 (USS Vincennes), both of which are notable for the different public diplomacy/communications strategies employed to address the particular incident.

6. Robot rules of engagement are being designed around Western / Machiavellian Laws of War (see Arkin’s report on ethical controls). This lawyer-on-lawyer model is based on facts and is ignorant of the perceptions generated by actions. This perfect-world model may become more a liability than an asset in 21st-century warfare. This is not to say that a more permissive environment should be created, but that the Machiavellian model of the end justifying the means creates too permissive an environment, to the detriment of the mission. The “new” U.S. Counterinsurgency manual notes the same when it says too much force, as well as too much defensive posturing, may be counterproductive.

Implementation of ethical controls on robots

Dr. Ron Arkin, who runs the Mobile Robot Lab at the Georgia Institute of Technology, released his report “Governing Lethal Behavior”.

This article provides the basis, motivation, theory, and design recommendations for the implementation of an ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system so that they fall within the bounds prescribed by the Laws of War and Rules of Engagement. It is based upon extensions to existing deliberative/reactive autonomous robotic architectures, and includes recommendations for (1) post facto suppression of unethical behavior, (2) behavioral design that incorporates ethical constraints from the onset, (3) the use of affective functions as an adaptive component in the event of unethical action, and (4) a mechanism in support of identifying and advising operators regarding the ultimate responsibility for the deployment of such a system.
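To make the abstract concrete, here is a minimal sketch of the kind of constraint-gating architecture it describes: a governor that suppresses a proposed lethal action unless every encoded Laws of War / ROE constraint holds, and logs violations for operator advisement. To be clear, this is my illustration of the concept only; the names, constraints, and thresholds are hypothetical and are not from Arkin’s report.

```python
# Illustrative sketch of an "ethical governor" gating proposed lethal actions.
# Hypothetical names and constraints; NOT Arkin's actual implementation.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ProposedAction:
    target_is_combatant: bool   # discrimination (Laws of War)
    expected_collateral: float  # estimated civilian harm, 0.0-1.0
    military_necessity: float   # estimated value of the target, 0.0-1.0
    roe_authorized: bool        # current Rules of Engagement allow engagement

@dataclass
class EthicalGovernor:
    """Suppresses any lethal action that violates an encoded constraint."""
    constraints: List[Callable[[ProposedAction], bool]] = field(default_factory=list)
    violations: List[str] = field(default_factory=list)  # log for post facto review

    def permit(self, action: ProposedAction) -> bool:
        for check in self.constraints:
            if not check(action):
                # Record the violation so operators can be advised (responsibility
                # advisement) and behavior adapted afterward.
                self.violations.append(check.__name__)
                return False  # default-deny: the unethical action is suppressed
        return True

# Encoded constraints: each must hold for the action to proceed.
def discrimination(a: ProposedAction) -> bool:
    return a.target_is_combatant

def proportionality(a: ProposedAction) -> bool:
    return a.expected_collateral <= a.military_necessity

def roe(a: ProposedAction) -> bool:
    return a.roe_authorized

governor = EthicalGovernor(constraints=[discrimination, proportionality, roe])
print(governor.permit(ProposedAction(True, 0.1, 0.8, True)))  # True: permitted
print(governor.permit(ProposedAction(True, 0.9, 0.2, True)))  # False: suppressed
print(governor.violations)                                    # ['proportionality']
```

The default-deny loop is the point of contention in the posts that follow: everything the constraints do not forbid is permitted, which is exactly the “too permissive” engagement model I argue against below.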

Ron and I have discussed his report and have agreed to disagree. I encourage you to read the report if you’re at all interested in this stuff.

Our disagreement mostly hinges on his mid-19th to mid-20th Century view of war, i.e. Lawfare, or industrial-age warfare based on rules of war. To Ron, justifications matter and there is time for them to be discussed. To me, perceptions matter more than facts, and an engagement model based on the Laws of War might actually create too permissive an environment. The era of Lawfare is past.

To me, the fungibility of force decreases as the asymmetry of perception management increases (some might call that “controlling the narrative”).

For more on our points of disagreement, see this post.

If you participated in my survey on the subject, which I will be expanding, you’ll see I have a very different approach based on 21st Century Struggles for Minds and Wills where the facts matter little, if at all, and perceptions can turn the tide.

Robot kills 9, injures 14

The South African National Defence Force is asking whether a software glitch in an automated anti-aircraft cannon killed 9 and injured 14.

Mangope told The Star that it “is assumed that there was a mechanical problem, which led to the accident. The gun, which was fully loaded, did not fire as it normally should have,” he said. “It appears as though the gun, which is computerised, jammed before there was some sort of explosion, and then it opened fire uncontrollably, killing and injuring the soldiers.”

…in the 1990s the defence force’s acquisitions agency, Armscor, allocated project money on a year-by-year basis, meaning programmes were often rushed. “It would not surprise me if major shortcuts were taken in the qualification of the upgrades. A system like that should never fail to the dangerous mode [rather to the safe mode], except if it was a shoddy design or a shoddy modification.”
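The analyst’s point, that a system like this should fail to the safe mode rather than the dangerous mode, is a basic engineering principle worth spelling out. Here is a minimal sketch of the fail-safe pattern, with hypothetical names and no relation to the actual gun’s control software: any fault forces the firing circuit back to the safe state rather than leaving it live.

```python
# Illustrative fail-safe pattern: on any fault, force the system to the safe
# state. Hypothetical names; not the actual gun's control software.
import enum

class Mode(enum.Enum):
    SAFE = "safe"    # firing circuit de-energized, gun mechanically locked
    ARMED = "armed"

class FiringController:
    def __init__(self) -> None:
        self.mode = Mode.SAFE  # safe, not armed, is the default state

    def fire(self, jam_detected: bool, operator_command: bool) -> bool:
        try:
            if self.mode is not Mode.ARMED:
                return False
            if jam_detected or not operator_command:
                raise RuntimeError("fault or missing operator command")
            return True  # fire one round
        except Exception:
            # Fail TO SAFE: any fault de-energizes the circuit. The failure
            # the Jane's analyst describes is the opposite: a fault that
            # leaves the gun firing uncontrollably.
            self.mode = Mode.SAFE
            return False

ctl = FiringController()
ctl.mode = Mode.ARMED
print(ctl.fire(jam_detected=True, operator_command=True))  # False; fault detected
print(ctl.mode)                                            # Mode.SAFE
```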

Privatization and Unmanned Systems

Kent’s Imperative discusses the value of privatization and unmanned systems for geographic intelligence, or GEOINT (see Tanji for def’s of like terms). I agree with Kent on his points that privatization adds surge capacity (just as the airline industry does for strategic transport under federal law) and innovation. There are limits, as we both agree, but that’s for another post.

Not so long (chronologically) but an eternity (in terms of privatization concepts) ago, the first generation of high resolution commercial space based imagery systems for intelligence came into existence…The very idea of private sector capabilities usurping the government monopoly on overhead systems was so unthinkable for many within the community that had the Long War not gone hot and every second of imaging capacity been desperately needed, we might never have seen the development of the industry – let alone the remarkable directions that it has trended towards in the hands of the Google / Keyhole team.

At the dawn of its earliest, hard fought, and tentative acceptance, another new technology was emerging. Unlike the expensive and arcane world of satellites, the UAV offered an immediate, accessible, and understandable tool to the community. More importantly, it was a technology they could directly control throughout its full life-cycle – and that is critical to a certain kind of procurement and operations mindset.

Needless to say, the UAV has been a Very Good Thing for the GEOINT community – and at the same time, opened new frontiers in the mix between collection, analysis and warfighting. But these systems largely remain dedicated to looking at the battlespace through a soda straw. It is for that reason that many of the proponents of imagery intelligence continue to dismiss the idea that UAV’s will ever compete with the better resourced national technical means – or even their commercial imaging counterparts – in providing theatre-level and strategic IMINT.

Read the rest of Kent’s post at his blog. By the way, Kent, check out the new Second Life.

Battle of the Minds: an interview with Major General Douglas Stone

Walter Pincus in the Washington Post wrote about yesterday’s Blogger Roundtable conference call with Major General Doug Stone (transcript here). This post is motivated slightly by Pincus’s backhanded, but honest, comment yesterday that none of the four bloggers on the call (out of 60 invited), including MountainRunner, had as of yet blogged on the interview.

I had the opportunity to ask the General two questions. The first was on his thoughts on using unmanned systems in detainee operations. In the battle of minds, it is not surprising that he looked at robots not as an opportunity to reduce human contact with detainees but as one to increase it. The second question was on how he’s communicating his plans to State and involving other non-military resources. Out of that came his thoughts, actually those of Iraqi VP Hashimi, on a Work Projects Administration for Iraq. Each of those, as well as other great questions by my three comrades in digital space, Jarred “Air Force Pundit” Fishman, CJ “Soldier’s Perspective” Grisham, and Charlie “Wizbang” Quidnunc, deserves more commentary, context, and analysis, but unfortunately time is short.

Continue reading “Battle of the Minds: an interview with Major General Douglas Stone”

Unmanned vehicle (UxV) news

In no particular order…

From a Spanish University press release: EU project builds artificial brain for robots (courtesy Kurzweil)

Scientists in Spain have achieved a giant leap for robotkind by building the first artificial cerebellum to help them interact with humans. The cerebellum is the portion of the brain that controls motor functions.

The project will now implant the man-made cerebellum into a robot so as to make its movements and interaction with humans more natural. The overall goal is to incorporate the cerebellum into a robot designed by the German Aerospace Centre in two years’ time. The researchers hope that their work will also result in clues on how to treat cognitive diseases such as Parkinson’s.

David Axe reported a few months ago that the Marines wanted a drone with lethal and non-lethal capabilities to “take the fight to anti-Iraqi forces in areas where they currently perceive sanctuary.”

The concept is to take an existing “Tier II” medium-size drone in the vein of the 10-foot-wingspan Boeing/InSitu Scan Eagle, and fit it with two 40-millimeter grenade launchers, two green-laser dazzlers and a focused sound device similar to the Long-Range Acoustic Device manufactured by American Technology Corporation. This suite would give Marine operators “escalation of force options,” according to the briefing.

In other words, the drone would be able to first warn off suspected insurgents by beaming a verbal message in Arabic. If the suspects don’t disperse, the drone can dial up the intensity of its sound broadcast, causing pain and disorientation. If that doesn’t work, there are the laser dazzlers, which can cause temporary blindness from up to a mile away. If, after all of this, the suspects are still behaving threateningly, the drone can fire its grenade launchers.
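In short, the briefing describes a graduated escalation-of-force ladder. To make the logic plain, here is a short sketch of that ladder as an ordered sequence of steps, each tried only if the previous one failed to disperse the suspects. The step list follows the briefing’s description, but the code, names, and compliance check are my hypothetical illustration, not the actual concept of operations.

```python
# Sketch of the escalation-of-force ladder described in the briefing.
# Step names and the compliance check are illustrative, not the actual CONOPS.
from typing import Callable, List, Tuple

# Ordered from least to most force; each step runs only if the prior one
# failed to make the suspects disperse.
ESCALATION_LADDER: List[str] = [
    "broadcast verbal warning in Arabic",
    "increase acoustic intensity (pain and disorientation)",
    "engage green-laser dazzler (temporary blindness)",
    "fire 40mm grenade launchers",
]

def escalate(suspects_comply: Callable[[str], bool]) -> Tuple[str, bool]:
    """Walk the ladder; return the last step used and whether the encounter
    ended without lethal force."""
    for step in ESCALATION_LADDER:
        print(f"step: {step}")
        if suspects_comply(step):
            return step, True  # dispersed before the next rung was needed
    return ESCALATION_LADDER[-1], False  # lethal force was the last resort

# Example: suspects disperse once the sound broadcast becomes painful.
last_step, nonlethal = escalate(lambda step: "acoustic" in step)
print(last_step, nonlethal)
```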

Continue reading “Unmanned vehicle (UxV) news”