The Autonomy Story in Three Parts

Three separate February 2023 events offered a snapshot of the opportunities, implications, and concerns associated with the development and use of autonomous systems for military purposes.

Part One: Another Autonomy Milestone Achieved

Reporting from 13 February revealed that a joint US Department of Defense (DoD) team successfully executed 12 flight tests in which AI agents piloted a heavily modified F-16D Fighting Falcon and performed autonomous dogfighting maneuvers. The tests took place from 1–16 December 2022.

The aircraft, known as VISTA (Variable In-flight Simulator Test Aircraft), used two complex algorithms to carry out both beyond-visual-range and within-visual-range dogfighting maneuvers against digital adversaries, marking an impressive step forward for the use of AI-enabled fighter aircraft.

The tests are linked to a broader DoD effort to build an adaptable autonomy tool that can be applied to multiple aircraft across the US Air Force fleet and to “develop concepts for operating autonomous aircraft along or with other aircraft” in human-machine and machine-machine teams.

Part Two: Understanding Implications and Risks

Also in February, Paul Scharre, vice president and director of studies at the Center for a New American Security, released his book “Four Battlegrounds: Power in the Age of Artificial Intelligence.” The book explores competition in AI across data, computing power, talent, and institutions, and offers insight into how AI is shaping the future fight.

In an excerpt published on Big Think, Scharre highlights the importance of swarming in future military doctrine. He asserts that AI will allow swarming of autonomous systems, such as those demonstrated in the VISTA tests described above, to move from “merely a tactic used in certain situations” to a capability that could “completely restructure how militaries fight at the operational level of war.”

Scharre also notes that the use of fast-moving swarms, and of AI more generally, could over time increase the scale and speed of conflict so dramatically that AI-enabled capabilities “could begin to push warfare out of human control,” creating the possibility that humans may “lose the ability to control escalation or terminate a war at the time of their choosing.”

Part Three: Calls for Responsibility and Regulation

On 15-16 February, 2,000 delegates from 100 countries gathered in The Hague, Netherlands, for the first-ever Responsible AI in the Military Domain (REAIM) conference, co-hosted by the Netherlands and the Republic of Korea.

In a press event before the summit, Dutch Foreign Minister Wopke Hoekstra stressed that the conference “is an idea for which the time has come. We’re taking the first step in articulating and working toward what responsible use of AI in the military will be.” At the conference’s closing he added that “We are in time to keep AI from spiraling out of control,” echoing Scharre’s concern that humans could lose control of the actions of their AI creations.

At the conclusion of the conference, the United States released a Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. The document outlines 12 “best practices” for endorsing states to implement in their development and use of AI for military purposes, emphasizing concepts such as safe and secure development, extensive testing, and human control and oversight. Endorsing nations are also asked to engage the rest of the international community to promote these practices.

The Moral of the Story

Military development of autonomous systems is unlikely to halt anytime soon, especially in a geopolitical environment characterized by competition and ongoing conflict, and by the accelerating development of commercial AI with diverse, obvious, and impactful military applications.

However, this development must be accompanied by layered efforts to better understand the strategic, operational, and tactical implications, and crucially the risks, of military AI, including analyses such as Scharre’s book and earlier fictional intelligence (FICINT) works such as “Ghost Fleet” and “Burn-In.” These visions of the future of conflict, in turn, should inform the necessary, urgent, and growing effort to develop norms for responsible development and use of AI, norms that evolve as the technology, use cases, and risks do.
