AI in Military Decision Support: Balancing Capabilities with Risk

May 14, 2025

Military decision-makers will increasingly rely on AI systems, even though those systems cannot eliminate the fundamental uncertainty of warfare. CDS Faculty Fellow Tim G. J. Rudner and his colleagues at Georgetown’s Center for Security and Emerging Technology (CSET) have published a policy brief that outlines how militaries can responsibly deploy AI in operational decision-making while navigating the technology’s limitations.

“AI for Military Decision-Making: Harnessing the Advantages and Avoiding the Risks” examines the growing global interest in applying artificial intelligence to battlefield decision support. The paper, co-authored with Senior Fellow Emelia S. Probasco, Director of Strategy and Foundational Research Grants Helen Toner, and Horizon Junior Fellow Matthew Burtell, addresses the critical balance between leveraging AI’s capabilities and acknowledging its inherent risks.

“Military commanders struggle to obtain accurate data about their own forces, let alone information about the enemy,” the authors note in the brief. This challenge presents a fundamental obstacle for any AI system attempting to provide reliable decision support in combat situations.

The paper identifies three key considerations for evaluating AI-enabled decision support systems (AI-DSS): scope, data quality, and human-machine interaction. Each dimension presents unique challenges that military leaders must address when deploying these technologies.

Rudner’s technical expertise in the robustness and transparency of machine learning systems provides a foundation for the policy recommendations in the brief. His research career has been motivated by AI’s growing impact on the world.

“I was drawn to machine learning research to a significant extent because I foresaw that machine learning could be a technology that would have significant broad impacts on the world, and there would be policy questions that would arise from the use of machine learning systems,” Rudner said.

The paper presents five practical recommendations for military organizations looking to implement AI decision support systems responsibly: establishing context-based deployment criteria, implementing rigorous training for system operators, creating continuous certification processes, designating responsible AI officers, and documenting incidents systematically.

“While AI-DSS might help humans make better decisions under stress or avoid harmful biases, harnessing that potential takes awareness of the strengths and weaknesses of both AI and human operators,” the authors emphasize.

The recommendations highlight that successful implementation of AI in military contexts requires not just technical innovation, but also thoughtful organizational and human governance structures. As militaries worldwide pursue these technologies, the framework provided by Rudner and his co-authors offers a blueprint for responsible deployment that balances technological advantage with essential human judgment.

By Stephen Thomas
