Strategic Risks of AI-Enabled Automation: Implications for Crisis Management and Systemic Resilience
DOI: https://doi.org/10.66050/xhmjy003
Keywords: artificial intelligence, military decision-making, risk governance, socio-technical systems, escalation risk
Abstract
Artificial intelligence is increasingly embedded in military decision-making systems, transforming not only operational performance but also the structural conditions under which risk is produced and governed. This article develops a conceptual socio-technical analysis of AI-enabled automation through the lenses of systemic risk, risk governance, and resilience. Rather than treating AI as a discrete tool, the paper conceptualizes automation as a structural intervention in high-risk decision infrastructures. Four analytical findings are advanced. First, AI-mediated workflows redistribute practical control by shaping what information is salient, how threats are classified, and which actions appear feasible. Second, automation compresses decision time, reducing space for deliberation and increasing reliance on algorithmic outputs under uncertainty. Third, tighter coupling and accelerated interaction across domains enable localized errors or manipulations to propagate into broader escalation dynamics. Fourth, distributed human–machine architectures contribute to responsibility diffusion, weakening the alignment between accountability and effective decision influence. The article argues that prevailing “human-in-the-loop” safeguards are insufficient when control is reduced to technical override rather than institutional capacity for judgment and accountability. Effective integration of AI in high-risk environments requires governance arrangements that preserve human judgment as a strategic resource for crisis management and systemic resilience.
License
Copyright (c) 2026 Dejan Vuletić (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.
This journal operates under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits unrestricted use, distribution, reproduction, adaptation, and transformation in any medium, provided the original author and source are properly credited.
Authors retain the copyright of their articles.
The International Journal of Disaster Risk Management (IJDRM) encourages and permits authors to:
- Post the pre-print (submitted version), post-print (accepted version), and publisher’s version/PDF of their articles on personal websites, institutional repositories, disciplinary repositories, and academic networks such as ResearchGate, Academia.edu, or departmental websites;
- Do so at any time, including before or after publication;
- Provided that appropriate credit is given to the original publication in this journal, including:
  - Full bibliographic details
  - A clear mention of the journal name
  - A direct link to the article’s DOI (as an HTML link)

No prior permission is required from the publisher or editors for such actions, as long as the terms of the CC BY 4.0 license are followed.