Experimentation key to technology adoption and building human-machine trust

As artificial intelligence becomes increasingly integrated into more aspects of military and security operations, concerns over the ethics and safety of algorithms, especially those involved in targeting decisions, have rightfully increased as well.

Indeed, this blog has focused on the topic multiple times, most recently in reference to a US Air Force officer who wrongly indicated the service had conducted a human-machine teaming exercise in which an autonomous uncrewed system turned on its human teammate to optimize its efficiency in striking identified targets.

Most of these discussions of AI ethics and safety, though, center on how AI, in conjunction with other technologies, can act independently and create unintended, unwanted, and even unpredictable outcomes.

However, a recent experiment carried out by the Australian Defence Force Academy (ADFA) and the University of New South Wales (UNSW) in Australia examined how AI vision systems can help military personnel make better decisions in high-pressure and time-sensitive operational environments.

In the experiment, ADFA cadets were outfitted with a commercial vision-based AI system known as Athena AI, which provides identifying labels for items within its view. In a military context, the system would see a tank and label it as such on the soldier's display, allowing the soldier to distinguish it from other types of equipment or to recognize disguised threats. The exercise also included labels for “items of ethical significance, such as protected symbols like the red cross or red crescent.”
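To make the idea concrete, the sketch below shows one way a labelling overlay of this kind might tag detections and flag protected symbols for the operator's display. It is a minimal, hypothetical illustration: the class names, the `detect_objects` helper, its output format, and the confidence threshold are all assumptions, not Athena AI's actual interface.

```python
# Hypothetical sketch of a vision-AI labelling overlay (illustrative only).
# `detect_objects` stands in for a real object-detection model; the
# structure of its output is an assumption for this example.

PROTECTED_CLASSES = {"red_cross", "red_crescent"}  # items of ethical significance


def annotate_frame(frame, detect_objects, min_confidence=0.6):
    """Return display labels for one video frame.

    Each detection is assumed to be a dict like
    {"label": "tank", "confidence": 0.93, "box": (x1, y1, x2, y2)}.
    """
    annotations = []
    for det in detect_objects(frame):
        if det["confidence"] < min_confidence:
            continue  # too uncertain to show the operator
        annotations.append({
            "box": det["box"],
            "text": det["label"].upper(),
            # Flag protected symbols so the display can render them
            # distinctly (e.g. in a different colour) for the operator.
            "protected": det["label"] in PROTECTED_CLASSES,
        })
    return annotations
```

The key design point the experiment highlights is the last field: the overlay does not only say what might be a threat, it also marks what must be protected.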

A screenshot from one of the experiment’s scenarios. Source: UNSW / ADFA

According to Christine Boshuijzen-Van Buren, a senior researcher in the ethics of autonomous military systems at UNSW, “what we want is something that can tell us if it is a threat, but also something that can tell us something must be protected.”

The technology is still developing, of course, but even more mature and accurate iterations of vision-based AI will not necessarily be a game changer or silver bullet for improving decision-making in combat. As with nearly all new technologies, and certainly with most military systems, technological development must be accompanied by a rigorous and iterative effort to build the trust of human operators in the reliability of the system.

Experimentation is a crucial part of this trust-building effort, as individuals get comfortable with the technology and also come to better understand its limitations. As Ms. Boshuijzen-Van Buren astutely noted, “we need to find out how our soldiers are going to deal with an AI system that tells them, ‘that is a gun’, when it might actually be a camera with a big lens.”
