The Military’s Recruitment of AI Has Already Begun

There are a few clear paths for how we’re poised to see AI used on the battlefield very soon.

On Aug. 10, the Department of Defense announced it was launching a task force to look into generative AI: programs like ChatGPT, DALL-E, and others that produce finished work such as code, answers to questions, or specific images on request. The announcement is part of the U.S. military’s ongoing effort to keep pace with modern technologies, studying and incorporating them as they prove useful, while at least taking some time to determine what risks the use of AI for military purposes poses.

AI is an ungainly catch-all term for a family of distantly related technologies, but it’s nevertheless being heavily pushed onto consumers by Silicon Valley techlords who are convinced they’ve found the next big thing. As governments and especially militaries follow suit, it’s important to ask the question: What, if anything, can AI offer for understanding and planning war?

Algorithmic analysis, especially analysis based on large language models (the technology underlying ChatGPT), has been heralded as a way for computer programs to learn from training data and respond to new circumstances. When people express fears of “Killer Robots,” that fear is focused on the tangible: What if AI lets a robot with a gun select whom to kill in battle, and gives the robot the speed and authority to pull the trigger? Algorithms can fail in ways that are opaque and unpredictable, leading not just to error on the battlefield, but to novel error.

And military interest in AI won’t remain confined to the tactical or battlefield level. The Pentagon’s expressed interest in generative AI is expansive.

“With AI at the forefront of tech advancements and public discourse, the DoD will enhance its operations in areas such as warfighting, business affairs, health, readiness, and policy with the implementation of generative AI,” the Chief Digital and Artificial Intelligence Office said in a statement about the announced generative AI task force.

So how should we expect to see the military pursue AI as a new tool? Two recently published academic papers offer perspective on the shape and limits of AI, especially when it comes to policy.

Predicting the Next Battle

One of the most vexing challenges facing a state and its security forces is predicting when and where battles will occur. This is especially true when it comes to fighting against non-state actors—armed insurgencies may operate from within a geographic expanse, but strike at targets of opportunity throughout the area they can reach.

In a paper entitled “Discovering the mesoscale for chains of conflict,” published on Aug. 1 by PNAS Nexus, authors Niraj Kushwaha and Edward D. Lee, both of the Complexity Science Hub of Vienna, Austria, created a model that takes in existing conflict data, maps it across time and space, and can then be used to predict how previous incidents will cascade into larger waves of clashes and fighting.

Kushwaha and Lee started with public data on political violence incidents recorded by the Armed Conflict Location & Event Data Project, constrained to events in Africa from 1997 through 2019. The authors then matched that data across grids of space and slices of time. A battle in one place in the past was a good sign there would be new battles in adjacent or nearby locations in the future, depending on the time scales chosen for a given query.
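To make that space-and-time matching concrete, here is a minimal sketch, not the authors’ actual code, of how ACLED-style event records might be binned into grid cells and time slices and then checked for follow-on events in neighboring cells. The cell size, window length, and column names are assumptions chosen for illustration, not values from the paper.

```python
# Illustrative sketch: bin conflict events into spatial grid cells and time
# slices, then measure how often an event in one cell is followed by an event
# in the same or an adjacent cell during the next time slice.
import pandas as pd

CELL_DEG = 0.5      # grid cell size in degrees (assumed, not from the paper)
WINDOW_DAYS = 14    # length of a time slice (assumed, not from the paper)

def bin_events(events: pd.DataFrame) -> pd.DataFrame:
    """Assign each event (with 'lat', 'lon', 'date' columns) to a cell and slice."""
    out = events.copy()
    out["date"] = pd.to_datetime(out["date"])
    out["cell_x"] = (out["lon"] // CELL_DEG).astype(int)
    out["cell_y"] = (out["lat"] // CELL_DEG).astype(int)
    t0 = out["date"].min()
    out["slice"] = ((out["date"] - t0).dt.days // WINDOW_DAYS).astype(int)
    return out

def follow_on_rate(binned: pd.DataFrame) -> float:
    """Fraction of occupied (cell, slice) pairs followed by an event in the
    same or an adjacent cell during the next time slice."""
    occupied = set(zip(binned["cell_x"], binned["cell_y"], binned["slice"]))
    followed = 0
    for (x, y, t) in occupied:
        neighbors = [(x + dx, y + dy, t + 1)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
        if any(n in occupied for n in neighbors):
            followed += 1
    return followed / len(occupied) if occupied else 0.0

# Toy records standing in for ACLED-style data:
toy = pd.DataFrame({
    "lat": [9.1, 9.3, 9.6, 12.0],
    "lon": [7.4, 7.6, 7.9, 8.5],
    "date": ["2019-01-02", "2019-01-20", "2019-02-03", "2019-03-01"],
})
print(follow_on_rate(bin_events(toy)))  # prints 0.5 for this toy data
```

The real model works at many spatial and temporal scales at once and fits the cascade structure statistically; this sketch only shows the basic bookkeeping of gridding events and looking for nearby follow-on violence.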

“In a way evocative of snow or sandpile avalanches, a conflict originates in one place and cascades from there. There is a similar cascading effect in armed conflicts,” Kushwaha said in a news release. One example of the model’s insight: It identified how violence from Boko Haram in Nigeria displaced herders, leading to further conflict on the periphery of where Boko Haram operates. The model can also identify events linked to a different group, the Fulani militia. These forces, though distinct groups, can both take advantage of a strained government response to any of a number of insurgencies in the country, and both can lead to cascading violence in the future.

By repeating the process across other conflicts, the authors found that different events in the same place can be traced to different conflicts. Using just the model at hand, they were able to find and connect later violent incidents to earlier ones, inferences that exist in the data but are hard to parse without a model of conflict cascades teasing them out.

The promise of bringing big data and algorithmic analysis to data sets like this is that the models built can spot connections otherwise invisible to human perception. While much of Kushwaha and Lee’s work is built on more reproducible algorithmic tools, the authors are keenly aware that AI offers further depth for such research.