Army of None: Autonomous Weapons and the Future of War, Paul Scharre

Discusses semi- and fully autonomous weapons, our experience with them, and the debates about their degree of autonomy and the design of policies governing them. The author is a Pentagon defense expert who began as an Army Ranger and later became a defense analyst.

The book is quite interesting. It was published in 2018, so given the advances in AI since then, it feels a little dated, but there is still much of interest. It could have been significantly shorter, but I'm glad to have read it.

Points of Interest

  • Stanislav Petrov: in 1983, judged a Soviet early-warning alert of incoming US missiles to be a false alarm, likely preventing a US/USSR nuclear exchange. Illustrates the value of having a human in the loop.
  • AI (e.g. face recognition) has improved drastically since this book was published in 2018.
  • AI is still not perfect.
  • Even without AI, any complex, tightly-coupled system will experience errors (AVs, aircraft, nuclear power plants)
  • A human in or on the loop offers a control point, but only if the activity is happening at human speed (see the first sketch after this list).
  • Patriot system: tries to defend against incoming missiles… implements the designers’ intent.
  • “Loitering munitions.”
  • Aegis system: tries to capture the intent of commanders, and uses commander-created ‘doctrines’ to mix and match capabilities and autonomy.
  • OODA Loop (John Boyd): Observe, Orient, Decide, Act.
  • A flaw in an autonomous or semi-autonomous weapon will rarely be a problem with a single weapon; it will be a problem with every instance of the weapon running that software.
  • Furthermore, if there is a bug in a weapon’s software, security protocols will likely make it non-trivial to push an update.
  • 2010: Stuxnet: a worm that infected systems via thumb drives and networks and took over control of centrifuges, destroying them even as it fed false everything-is-OK information to the controllers. Intended to damage Iran’s nuclear fuel enrichment program. The first autonomous cyberweapon.
  • Autonomous Commerce/Trading
    • The Flash Crash of 5/6/2010: a crash and rebound of the stock market in under an hour. An automatic switch finally cut off trading for 5 seconds, allowing the market to reset; afterwards, regulators canceled tens of thousands of trades. → Now there are circuit breakers on individual stocks… and they are tripped daily (simplified sketch after this list).
    • 8/1/2012: the “Knightmare on Wall Street.” Deployment of a faulty trading algorithm produced a $460 million loss over 45 minutes and near-bankruptcy for the firm (Knight Capital).
    • A book priced at $23 million on Amazon (a biology text, The Making of a Fly), the result of two sellers’ dueling pricing algorithms.
  • To the extent that military exchanges use physical weapons, they operate in something closer to human time. But in electronic warfare, the failures seen in autonomous trading are an apt analogy.
  • Suites of patches fix cybersecurity vulnerabilities; patches often trade off a system’s security against its operational speed.
  • Mayhem (a system that won DARPA’s 2016 Cyber Grand Challenge) autonomously discovers vulnerabilities and creates patches. Systems like this can put hacking out of the reach of ordinary individuals, though not of well-resourced organizations and states.
  • Next step is counter-autonomy, where patches include exploits that target common hacker tools. 
  • Hacking an autonomous weapons system could hand over control of entire fleets of AWs.
  • Why bans succeed or fail: perceived horribleness; perceived utility; number of actors who must collaborate for success.
  • Mad robot theory. “The threat that leaves something to chance” – Thomas Schelling
  • IMO: You cannot ban the development of AWs; perhaps, sometimes, you can ban their use, if that use is identifiable (e.g. poison gas). …You can only ban things whose development or use is detectable. Maybe a ban on weapons that automatically target individual humans?
    • Ban autonomous weapons → not likely
    • Ban anti-personnel AWs → might work…
    • Establish rules of the road (an AW should not fire first; return fire must be proportionate) – this could prevent escalation in tense situations, even though the rules would collapse in war → probably would work
    • Establish a general principle about the role of human judgment in war → not likely
    • On the other hand, codes of conduct do sometimes work in war, even if they are sometimes violated.
  • The lethal automation paradox: A random death caused by an automated system is more aversive than a death caused by human error or misdeed. 
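
Two sketches to make a couple of the points above concrete. First, the human-in/on-the-loop point: a minimal Python sketch of one OODA cycle under three autonomy modes. All function names and numbers are my own illustration, not from the book; the point is that the human gate only exists when the engagement window is longer than a human's reaction time.

    def observe(contact):
        return contact                        # stand-in for sensor returns

    def orient(obs):
        return {"track": obs, "hostile": obs["hostile"]}   # fuse and classify

    def decide(picture):
        # Propose an engagement only for tracks classified hostile.
        return picture["track"] if picture["hostile"] else None

    def human_responds(window_s, human_reaction_s=2.0):
        # A human can only weigh in if the engagement window is longer
        # than their reaction time; otherwise the "control point" never fires.
        return window_s >= human_reaction_s

    def operator_wants_veto(engagement):
        return False                          # placeholder for operator judgment

    def ooda_step(contact, mode, window_s):
        engagement = decide(orient(observe(contact)))
        if engagement is None:
            return "hold"
        if mode == "in_the_loop":
            # Human must positively authorize; no answer in time means no shot.
            return "fire" if human_responds(window_s) else "hold"
        if mode == "on_the_loop":
            # Machine fires unless the human vetoes in time. With a short
            # window the veto can never arrive: the mode degrades to autonomous.
            vetoed = human_responds(window_s) and operator_wants_veto(engagement)
            return "hold" if vetoed else "fire"
        return "fire"                         # fully autonomous: no human gate

    for window in (10.0, 0.5):                # seconds available to decide
        print(window, ooda_step({"id": 1, "hostile": True}, "in_the_loop", window))

Run as written, the in-the-loop mode fires with a 10-second window and holds with a 0.5-second one: at machine tempo, keeping a human in the loop means either losing the engagement or removing the human.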
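Second, the per-stock circuit breakers mentioned under the Flash Crash. This is a toy version of the mechanism only, not the actual limit-up/limit-down rules (the threshold and windows are made up): halt trading whenever the price moves more than some percentage within a look-back window, forcing the market back to human time.

    from collections import deque

    class CircuitBreaker:
        def __init__(self, pct_limit=0.05, window_s=300, halt_s=300):
            self.pct_limit = pct_limit        # max allowed move (here 5%)
            self.window_s = window_s          # look-back window, in seconds
            self.halt_s = halt_s              # how long trading pauses
            self.prices = deque()             # (timestamp, price) pairs
            self.halted_until = 0.0

        def on_trade(self, ts, price):
            if ts < self.halted_until:
                return "rejected: halted"
            # Drop prices that have aged out of the look-back window.
            while self.prices and ts - self.prices[0][0] > self.window_s:
                self.prices.popleft()
            if self.prices:
                ref = self.prices[0][1]       # oldest price still in window
                if abs(price - ref) / ref > self.pct_limit:
                    self.halted_until = ts + self.halt_s
                    return "halt triggered"
            self.prices.append((ts, price))
            return "ok"

    cb = CircuitBreaker()
    print(cb.on_trade(0, 100.0))              # ok
    print(cb.on_trade(60, 93.0))              # halt triggered: 7% drop in 60s
    print(cb.on_trade(90, 92.0))              # rejected: halted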
