Saturday, May 16, 2015

Autonomy of military robots: assessing the technical and legal ('Jus in Bello') thresholds... Hints of a recent article...

Two broad issues are becoming essential for our 'hyper-technological' societies.
The first relates to the cyber world, with its 'dark side' of threats such as hacking, cyber terrorism, or even cyber warfare. The second refers to robots.
In the case of robots, however, the 'dark side' still belongs to the nightmarish visions of blockbuster movies like Terminator 1, 2, 3... In fact, robots have yet to enter our daily lives.
While everyone has experienced cyber hazards like spam at least once in a while, very few people, except proud owners of a Roomba vacuum cleaner, have had any close encounters with robots. And a 'dark side' to the Roomba seems quite hard to find...

Robots are, admittedly, spreading on battlefields, but only as remote-controlled platforms with no real autonomy. Apparently, this 'missing' autonomy explains the absence of robots from our world.
Hence my article, Autonomy of military robots: assessing the technical and legal ('Jus in Bello') thresholds, available at http://ssrn.com/abstract=2602160.

It starts by examining the autonomy/automation divide and the degrees of autonomy in robots, following metrics developed by the US Department of Defense (a toy sketch of such a scale appears below).
These metrics are then used to assess the autonomy of 'state of the art' robots, such as Google’s self-driving car and various DARPA projects.
Based on public sources, one can form a picture of the functioning, the general architecture, and, above all, the limits of today's robots. These systems are almost 'blind' because they lack a deep 'perceptive intelligence.' This makes it possible to predict, with reasonable confidence, the future performance of autonomous military robots in navigation, reconnaissance, or kinetic attacks (lethal missions).
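To make the idea of such metrics concrete, here is a toy sketch in Python. It assumes a simplified four-level scale of my own making: the level names, the capability flags, and the scoring rule are illustrative assumptions, not the official DoD taxonomy.

```python
from dataclasses import dataclass
from enum import IntEnum

# A simplified four-level autonomy scale, loosely inspired by the kind of
# metrics discussed in the article. Names and rules here are illustrative
# assumptions, not the official DoD taxonomy.
class AutonomyLevel(IntEnum):
    HUMAN_OPERATED = 1    # pure remote control: every action commanded by a person
    HUMAN_DELEGATED = 2   # robot executes pre-approved tasks (e.g., waypoint following)
    HUMAN_SUPERVISED = 3  # robot proposes and performs actions; a human can veto
    FULLY_AUTONOMOUS = 4  # robot selects and executes actions with no human in the loop

@dataclass
class Platform:
    name: str
    plans_own_route: bool    # can it generate its own trajectory?
    handles_unknowns: bool   # can it cope with unmodeled obstacles or situations?
    selects_own_goals: bool  # can it choose mission objectives by itself?

def assess(p: Platform) -> AutonomyLevel:
    """Map observed capabilities to a coarse autonomy level (illustrative rule)."""
    if p.selects_own_goals:
        return AutonomyLevel.FULLY_AUTONOMOUS
    if p.handles_unknowns:
        return AutonomyLevel.HUMAN_SUPERVISED
    if p.plans_own_route:
        return AutonomyLevel.HUMAN_DELEGATED
    return AutonomyLevel.HUMAN_OPERATED

# Today's armed platforms are remote-controlled, so they score at the bottom:
remote_piloted_uav = Platform("remote-piloted UAV", False, False, False)
self_driving_car = Platform("self-driving car", True, True, False)
print(assess(remote_piloted_uav).name)  # HUMAN_OPERATED
print(assess(self_driving_car).name)    # HUMAN_SUPERVISED
```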

The analysis is then pushed further: if robots are to become truly autonomous in lethal missions on the battlefield, they must also 'obey' the rules of International Humanitarian Law (the rules of 'Jus in Bello' and the rules of engagement) and act as 'artificial moral agents.'
The moral and legal evaluations this requires belong to higher, specifically human, cognitive and emotional processes than those needed for 'perceptive intelligence.' Given the technical difficulty of implementing even the comparatively simple tasks of autonomous navigation, reconnaissance, and kinetic attack, one can reasonably gauge the far more severe difficulties of creating such 'artificial moral agents' for future battlefields; the toy sketch below illustrates the gap.
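To see why this is so demanding, consider a deliberately naive sketch of 'Jus in Bello' checks written as machine-evaluable predicates. Every field, threshold, and rule below is a hypothetical simplification of mine, not a proposal from the article:

```python
from dataclasses import dataclass

# A deliberately naive sketch of 'Jus in Bello' checks as machine-evaluable
# predicates. The inputs are hypothetical: in reality, each of them (is this
# a combatant? what collateral harm is 'excessive'?) demands exactly the
# human-level judgment the article argues machines lack.
@dataclass
class EngagementContext:
    target_is_combatant: bool           # distinction: needs perception plus judgment
    expected_civilian_harm: float       # proportionality input (0..1, notional)
    expected_military_advantage: float  # proportionality input (0..1, notional)
    surrender_signaled: bool            # 'hors de combat' check

def engagement_permitted(ctx: EngagementContext) -> bool:
    """Return True only if all (simplified) jus in bello predicates hold."""
    if not ctx.target_is_combatant:  # principle of distinction
        return False
    if ctx.surrender_signaled:       # no attacking those hors de combat
        return False
    # Principle of proportionality, caricatured as a numeric comparison;
    # real proportionality is a contextual legal judgment, not arithmetic.
    return ctx.expected_civilian_harm <= ctx.expected_military_advantage

ambiguous = EngagementContext(
    target_is_combatant=False,  # today's 'almost blind' perception cannot certify this
    expected_civilian_harm=0.2,
    expected_military_advantage=0.8,
    surrender_signaled=False,
)
print(engagement_permitted(ambiguous))  # False: distinction fails first
```

The four-line rule is trivial; reliably producing its inputs is not, and that asymmetry is precisely the point.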

For all further details, please refer to the article. Any observations or comments will be more than welcome.