Skynet is here already . . . or not

Source of illustration: Anton Petrus/Getty Images

 

Rogue AI

The Royal Aeronautical Society (RAeS) held its Future Combat Air and Space Capabilities Summit in London on 23–24 May. During the event, US Air Force Colonel Tucker Hamilton, chief of AI test and operations, relayed a troubling story about what he referred to as a US Air Force “simulation” in which an AI-enabled uncrewed aerial system (UAS) turned on its human operator to maximize its mission effectiveness.

According to reporting from The Drive, the UAS was tasked with identifying and—after confirmation from human operators—striking adversary surface-to-air missile sites. However, according to Hamilton, the UAS determined it could destroy targets more efficiently if it eliminated the human operators who were ordering it not to strike some suspected targets. As a result, it attacked its human teammates. After this behavior was excluded in the so-called simulation, the UAS went after the communications infrastructure connecting it to its operator.

Not so fast

The story clearly conjures up and amplifies the worst fears associated with AI-enabled weapons systems: out-of-control killer robots optimizing for efficiency without regard for ethical, strategic, and humanitarian considerations.

However, after RAeS published Colonel Hamilton’s comments under the report header “AI – is Skynet here already?” (and the story was picked up by the industry press), Colonel Hamilton reached out to RAeS to clarify his remarks. According to Hamilton, he “mis-spoke,” and the scenario he described was “a thought experiment” “based on plausible scenarios and likely outcomes rather than an actual real world” exercise. The Air Force’s insistence that it never carried out such an exercise was echoed by spokesperson Ann Stefanek, who told The Drive that “this was a hypothetical thought experiment, not a simulation.”

Still . . .

While Colonel Hamilton’s walking back of his comments does offer a measure of relief and reduces some of the immediacy and intensity of concern about the threat of rogue AI-enabled weapons systems, the episode does reveal important lessons for defense planners, operators, and industry. Certainly, it highlights the increasing need for precision of language when speaking about emerging capabilities, especially those that incorporate AI.

Ineffective communication about the status of a system or the nature of a capability can drive escalatory dynamics between countries competing to develop AI-enabled weapons and, as happened in this case, it can also generate headwinds for the development and adoption of any AI-related system.

Most notably, though, the episode demonstrates that even the most advanced militaries are grappling with the ethical and safety risks of using AI-enabled systems, especially those involved in the “kill chain” associated with finding, fixing, targeting, and tracking adversary assets and personnel. As Colonel Hamilton noted in comments designed to correct the record, “we’ve never run that experiment, nor would we need to in order to realize that this is a plausible outcome . . . despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability.”

 
