AI in Warfare — Is the New Revolution Here? (Part 1)

Aco Momcilovic
May 31, 2022

How long before it becomes a decisive tool to gain supremacy?

AI Military Wordcloud

Talk about various military-connected AI systems has been growing steadily over the last few years. From the first allegedly AI-coordinated drone strikes by the Israeli military in Gaza to announcements by high officials in the USA and EU that more money will be invested in this field of research, things have clearly been happening and changing.

What is the actual situation? We still can't be entirely sure. Logically, most of that information is considered of top national-security importance and is highly classified, so we will probably never see many transparent reports. But we might see some developments in practice.

The current war in Ukraine has allowed us to peek into the future of AI-fueled warfare. So, what can we conclude about it so far?

First, let's be aware of the context: the UN-established group that discussed limiting or banning Lethal Autonomous Weapon Systems did not get support for decisive action from the following states: the USA, Russia, the UK, India, and Israel. In other words, from those most advanced in AI development. What could be the reasons?

Part of the answer might lie in already well-known predictions: one from Vladimir Putin back in 2017, that "whoever becomes the leader in AI will become the ruler of the world," and another from Kai-Fu Lee, who predicted that AI will be the third revolution in warfare, after gunpowder and nuclear weapons. And to underline the point: autonomous weapons are only one aspect, but AI also has the potential to scale data analysis, misinformation, and content curation beyond what was possible in past major conflicts.

Although it might bring the next revolution, that does not mean AI is similar to previous innovations. A key difference is that AI itself is not a weapon but a range of functions and technologies. As Michael C. Horowitz has noted, AI can be defined as an enabling technology: "AI is not a single widget, unlike a semiconductor or even a nuclear weapon." In other words, AI is many technologies and techniques.


Economic and investment reactions followed shortly. One study found that between 2005 and 2015, the United States was granted 26 percent of all new AI patents in the military domain, and China 25 percent. In the years since, China has outperformed America. China is believed to have made particular advances in military-grade facial recognition, pouring billions of dollars into the effort. Last October, NATO launched an AI strategy and a $1 billion fund to develop new AI defense technologies.

It seems Russia's invasion of Ukraine is shaping up to be a key proving ground for artificial intelligence and its military applications. But what are the options and potential applications? They could be grouped in several ways; I propose this one:

1. Development of Lethal Autonomous Weapon Systems (LAWS): autonomous tanks, swarming munitions/drones, etc.

2. Military operations and their optimization (logistics, command and control, resources planning)

3. Platforms for intelligence collection and analysis (data ranging from TikTok and Telegram to news reports and publicly available satellite imagery).

4. Detection of disinformation, or its creation (posts and videos generated by troll farms on social media: Twitter, TikTok, YouTube, Telegram, etc.).

What is the usage?

Right now, these weapons and systems are still in their infancy, with huge potential for development. The reality is that AI-guided weapons that were once the stuff of science fiction, and were still largely in that realm when the UN committee first began discussing autonomous weapons in 2014, are now being deployed on battlefields around the globe.

It remains to be seen whether some of the mentioned applications of AI could allow civil-society groups to fact-check the claims made by every side in the conflict, as well as to document potential atrocities and human-rights violations. That could be vital for future war-crimes prosecutions and have many legal consequences.

The problem that many, myself included, have been warning about is that those developing the technology can't grasp the implications of what they are building and how it might be used in the future. To use a comparison: when anybody first fields fully autonomous weapons, the catchall description for algorithms that help decide where and when a weapon should fire, it will make the human-commanded drone strikes of recent years look as outdated as an attack with a bayonet.

As Daan Kayser from the Dutch group Pax for Peace said: "I believe it's just a matter of policy at this point, not technology. Any one of a number of countries could have computers killing without a single human anywhere near it. And that should frighten everyone."

There are some optimistic predictions about AI's influence on the military. For example, some militaries believe that using AI will help shorten the fighting and boost the effectiveness and speed of target acquisition through super-cognition, combined with greater precision of attacks, resulting in fewer civilian casualties.

It will be an interesting field for ethicists, philosophers, psychologists, and sociologists to explore in its different aspects and potential consequences. As Nancy Sherman, a Georgetown professor who has written numerous books on ethics and the military, said: "Just cause in going to war is important, and that happens because of consequences to individuals. When you reduce the consequences to individuals, you make the decision to enter a war too easy."

Who will fight?
