
High-Frequency Combat

Science and technology luminaries Stephen Hawking, Elon Musk, and Steve Wozniak count among the hundreds of researchers pledging support for a proposed ban on the use of artificial intelligence technologies in warfare. In "Autonomous Weapons: an Open Letter from AI & Robotics Researchers", the researchers (along with thousands of citizens not directly involved with AI research) call on the global community to ban "offensive autonomous weapons beyond meaningful human control." They argue that the ability to deploy fully autonomous weapons is imminent, and that the potential dangers of a "military AI arms race" are enormous. Not just in the "blow everything up" sense -- we've been able to do that quite nicely for decades -- but in the "cause havoc" sense. They call out:

Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.

They don't specify in the open letter (which is surprisingly brief), but the likely rationale as to why autonomous weapons would be particularly useful for assassinations, population control, and genocide is that they wouldn't say "no." Despite the ease with which human beings can be goaded into perpetrating atrocities, there are lines that some of us will never cross, no matter the provocation. During World War II, only 15-20 percent of U.S. soldiers in combat actually fired upon enemy troops, at least according to Brigadier General S.L.A. Marshall; while some dispute his numbers, it's clear that a significant fraction of soldiers will say "no" even to lawful orders. Certainly a higher percentage of troops will refuse to carry out unlawful and inhumane orders.

Autonomous weapons wouldn't say no.

There's another problematic aspect, alluded to in the title of this piece: autonomous military systems will make decisions far faster than the human mind can follow, sometimes for reasons that will elude researchers studying the aftermath. The parallel is to "high-frequency trading" systems, which operate in the stock market at a speed and with a sophistication that human traders simply can't match. The problem is manifold:

  • High-speed decision-making will push against any attempt by human leaders to think through consequences -- not by making that consideration impossible, but by making it inefficient or even dangerous. If your opponent is using "high-frequency" military AI (HFMAI), a slow response may be detrimental to your future.
  • HFMAI can make opaque decisions, again with the result of potentially undermining longer-term strategic thinking. Note that "autonomous weapons" and "high-frequency military AI" do not mean fully self-aware, Singularity-style super-intelligent machines able to consider possible long-term consequences. HFMAI in the near term will be complex software designed to make specific kinds of on-the-spot decisions. If you've ever seen a game AI do something that gains a quick benefit but weakens its long-term position, or is simply utterly inscrutable, you'll understand what I mean.
  • Worst of all, just as with high-frequency trading systems, opponents will be able to figure out how to spoof, confuse, or otherwise game the HFMAI software. Think about zero-day exploits tricking your weapons into making bad decisions (see the sketch after this list).
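
To make the spoofing worry concrete, here's a deliberately toy sketch. Everything in it -- the function, the thresholds, the signals -- is hypothetical, invented for illustration; no real targeting system is this simple. What it shows is the failure mode: a fast, fixed decision rule can be probed, learned, and then gamed.

```python
# A toy, hypothetical "threat classifier" -- a stand-in for the kind of
# fast, opaque decision rule an HFMAI system might run. All names and
# thresholds here are invented for illustration.

def classify_threat(speed_mps, radar_cross_section, emits_iff_signal):
    """Return True if the contact should be treated as hostile."""
    if emits_iff_signal:      # trusts the friend-or-foe transponder
        return False
    # Fast, radar-visible contacts get flagged as hostile.
    return speed_mps > 300 and radar_cross_section > 0.5

# A genuine threat trips the rule:
print(classify_threat(400, 0.8, False))  # True

# An adversary who has probed the rule can ride just under a threshold...
print(classify_threat(299, 0.8, False))  # False -- gamed

# ...or spoof the trusted signal outright:
print(classify_threat(400, 0.8, True))   # False -- spoofed
```

Any decision rule fast enough to outpace human review is also fixed enough for an opponent to map its edges -- the military analogue of the spoofing and quote-stuffing tactics already documented in high-frequency trading.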

Although I signed the open letter, I do think that fully autonomous weapon systems aren't quite as likely as some fear. I'm frankly more concerned about semi-autonomous weapon systems: technologies that give human operators the illusion of control while restricting their options to pre-programmed limits. If your software is picking out bombing targets for you, tapping the "bomb now" button on the screen may technically give you the final say, but ultimately the computer code is deciding what to attack. Or, conversely, computer systems that decide when to fire after you pull the trigger -- giving even untrained shooters uncanny accuracy -- distance the human action from the violent result.

With semi-autonomous weapons, the human bears responsibility for the outcome but retains less and less agency to actually control it -- whether or not he or she recognizes this. That's a more subtle, but potentially more dangerous, problem. One that's already here.
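
A minimal sketch of that dynamic, using entirely hypothetical names and logic: the operator gets a confirm button, but the software has already decided which options ever reach the screen.

```python
# A toy, hypothetical illustration of the "illusion of control" pattern
# in a semi-autonomous system. Invented for illustration only.

def software_preselects(candidates, threshold=0.9):
    """Opaque pre-filter: only contacts the model scores above the
    threshold are ever shown to the human operator."""
    return [c for c in candidates if c["model_score"] > threshold]

def operator_confirms(presented):
    """The human's 'final say': a yes/no on whatever was shown."""
    for target in presented:
        if input(f"Engage {target['id']}? [y/n] ").strip().lower() == "y":
            return target
    return None

candidates = [
    {"id": "contact-07", "model_score": 0.95},
    {"id": "contact-12", "model_score": 0.40},  # never reaches the screen
    {"id": "contact-19", "model_score": 0.92},
]

# The operator owns the choice, but the decision that mattered -- what
# was even eligible to be chosen -- happened upstream, in the filter.
chosen = operator_confirms(software_preselects(candidates))
```

The human's choice is real, but it's a choice among options the code already narrowed; the agency that matters has migrated into the pre-filter.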
