It is important to watch the development of laws surrounding robots, including their connections to the laws governing private security contractors as non-state actors. After all, the question of citizenship in the modern world is changing, with vastly increased numbers of, and shifting identities held by, individuals at any one moment.
The BBC news article I sent to Jason @ Armchair Generalist hints at the martial implications of robots:
The paper says a “monumental shift” could occur if robots develop to the point where they can reproduce, improve themselves or develop artificial intelligence.
The research suggests that at some point in the next 20 to 50 years robots could be granted rights.
If this happened, the report says, the robots would have certain responsibilities such as voting, the obligation to pay taxes, and perhaps serving compulsory military service.
The last sentence, of course, assumes that compulsory service, a notion now being phased out across Western liberal democracies, holds tight. Some questions come up:
- Setting aside "compulsory," what of the mere option of military service and allegiance?
- What about political (decision-makers') and legal (criminal) accountability as robots enter the battlefield with increasing autonomy? Even semi-autonomous robots raise questions of accountability that lead back to whom? The operator? The owner? The government behind the use of the robot?
- What if the remote controller of the robot (bringing in all forms of unmanned vehicles here, from fully to not at all autonomous) is a civilian contractor?
Back in April I wrote an intro to a section of this blog I have not posted to in a while, but whose importance this article recalls: Unmanned Warfare. This category is intended to highlight and comment on unmanned and remote warfare, including aerial, ground, and naval vehicles (and vessels) that may be non-, semi-, or fully autonomous. Further, these vehicles may be controlled from a variety of locations, in, near, or far from the battlespace, including back on the homeland, by military personnel or a contractor.
The pace of development of unmanned vehicles, with their increasingly effective and broad sensors and increasing autonomy, will have a real impact on the conduct of war in the near future. Just as private military companies affect foreign policy, military effectiveness, and national image through intended and unintended means, unmanned or remotely manned vehicles will alter policy, effectiveness, and perception (both how we are perceived and the likely dehumanizing of warrior and target).
Perhaps PMC use foreshadows the mass deployment of unmanned vehicles. I suggest that decision-making models for the AI of semi- and fully autonomous unmanned vehicles be based on private security contractor decision-making.
In the deployment of unmanned vehicles, what are we to expect? There are some hints today, including failures to communicate (including between robot and personnel… hopefully not reminiscent of RoboCop). What about the Laws of War when an unmanned vehicle, "driven" by a civilian near the battlefield, in the US, or even sitting in an allied country, kills a US soldier, a civilian, IGO/NGO personnel, an allied soldier, or another non-combatant?
In an unsurprising coincidence, look for a forthcoming book on robotics by P.W. Singer.
One thought on “Linking Robots and Mercenaries”
Indeed – "You have four seconds to comply…" isn't something I'd ever want to hear. But, seriously, "rights for robots" is a ridiculous idea. Even fully autonomous actors would only ever have a syntax, never a semantics. High-level CI (employing fuzzy logic and/or genetic algorithms and the like) is an increasingly prevalent thing, while AI, as referenced above, is, and shall forever remain, science fiction. So, whether active remote control is used or it's a case of just having programmed/built a robot and 'turned it loose', responsibility points back to a person. Not to be too sarcastic about it, but, robots don't kill people; people with robots kill people. For a good brief, read (a synopsis of) What Computers Still Can't Do, by Hubert Dreyfus. I'll admit bias – he was a professor of mine. Robots, and animals for that matter, don't enjoy second-order thought, e.g. 'I'm sad about being sad'. They're, at best, just sad or 'x' or whatever… Don't get me wrong, I love both animals and robots, but remember that it was Murphy's HUMAN BRAIN that stepped in in the end…