AI and the Dangers That Are (Not) Discussed

Aco Momcilovic
6 min read · Sep 12, 2022


Some are well documented… but some…?

In past articles, I have mentioned a number of ways in which AI can be used, or is already being used, in military activities. Although for now there are few reports of these systems playing a decisive role, more and more voices are warning of the dangers they carry and of the global hotspots where their use could occur. Also interesting are recent analyses by Indian experts who fear a conflict between India and China and estimate that India is at least a decade behind China in the development of these new technologies. US-China relations and their destabilization, especially after Nancy Pelosi’s recent visit to Taiwan, remain on the daily agenda.

However, the dangers are not limited to the military industry and application, and this should not be forgotten.

Dangerous AI?

The real danger of artificial intelligence is not the rise of cinematic examples like Skynet taking over the world (at least for now). While that scenario remains science fiction, there are real, present-day dangers. Various authors and researchers point to several of them, so let us list the ones on which the majority agree:

1. Deepfake

“In our modern society, information is available in unlimited quantities. By opening a browser and surfing the Internet for half an hour, you come across more information than the average person in the Middle Ages encountered in a lifetime. Information is power. You are immediately informed about events happening on the other side of the globe, but there is one catch: how do you know what to believe and which sources are credible? Videos and images created by artificial intelligence are becoming so realistic that we are in danger of uncritically accepting them as real.” The fact that deepfakes have been recognized as a threat to national security in the United States speaks volumes about their seriousness.

2. Algorithmic bias

“If you’ve applied for a job in a Western country in the past few years, you’ve probably been affected by algorithmic bias, either positively or negatively. It might seem like a good idea to use an AI-based algorithm to screen job candidates; however, there are significant problems with this. Machine learning needs historical data to learn which candidates are worth hiring. The problem is that past acceptance and rejection decisions are heavily influenced by human bias, mostly against women and underrepresented minorities. The algorithm learns only from what it is shown, and if previous hiring practices were discriminatory (as is often the case), the algorithm will behave similarly. Vendors of such systems continue to make ever bolder claims that they will soon be able to measure people’s intelligence, political orientation, criminal tendencies, or sexual orientation from, for example, their facial images. Microsoft once built a chatbot (Tay) designed to learn from users and mimic their language; very quickly it began to reflect their biases in its comments and tweets. You can imagine the potential consequences, biases, and mistakes that can happen, as well as the implications for privacy rights.”
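The mechanism described above can be sketched in a few lines. This is a deliberately naive toy model, not any real screening product, and all groups, records, and numbers below are invented: a “model” that simply memorizes historical acceptance rates per group will faithfully reproduce whatever discrimination those records contain.

```python
# Toy illustration (invented data): a screener trained on biased historical
# hiring decisions reproduces that bias for new, equally qualified candidates.
from collections import defaultdict

# Hypothetical past decisions: (applicant group, was hired?)
history = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False), ("group_b", True),
]

def fit_acceptance_rates(records):
    """'Train' by memorizing the historical acceptance rate of each group."""
    hired, seen = defaultdict(int), defaultdict(int)
    for group, was_hired in records:
        seen[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / seen[g] for g in seen}

def screen(candidate_group, rates, threshold=0.5):
    """Recommend an interview only if the group's past rate clears the bar."""
    return rates.get(candidate_group, 0.0) >= threshold

rates = fit_acceptance_rates(history)
# Two otherwise identical candidates get different outcomes purely because
# of the group label attached to the biased historical data.
print(screen("group_a", rates))  # True  (2/3 historical rate)
print(screen("group_b", rates))  # False (1/3 historical rate)
```

Real screening systems are of course more sophisticated, but the core failure mode is the same: the training signal is past human decisions, so past discrimination becomes future policy.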

3. Mass surveillance

“Face recognition has been a key problem in computer vision for decades. However, since the deep learning revolution, not only can we recognize faces more accurately, but we can do it instantly. If your smartphone is lying around, try opening the camera app and taking a selfie: it will immediately put a frame around your face, which means it has successfully detected it. If you own a newer iPhone, you can even use your face as a password for your phone.

This technology has its own dangers if used for the wrong purposes. In China, it is being used for mass surveillance on an unprecedented scale. Combined with the recently introduced social credit system, where your behavior is scored and you can lose points, for example, by jaywalking or simply by participating in certain events, nothing can remain hidden from the eye in the sky.”

4. Recommendation systems

Every time you open YouTube, Facebook, or any other social media platform, its goal is to keep you there as long as possible. To put it simply, you are the product: you can use the site for free because you occasionally view ads. So the goal is not to provide quality content for users but to keep them on the platform indefinitely.

Given how human brains have evolved, engagement is unfortunately driven by evoking strong emotional responses. It turns out that, for example, politically radical content is particularly suitable for this. The recommendation systems and the algorithms behind them have learned not only to present extreme content but also to slowly radicalize each user, making their job easier as the user’s preferences shift. This is known as the radicalization pipeline.

There is concrete, solid scientific evidence for this. For example, a recent study analyzed millions of comments from YouTube users to show how they slowly migrate toward extreme radical content. In essence, users deliberately keep themselves in an intellectual bubble, never questioning their own beliefs about the outside world. Thinking is hard; instinctive responses are easy.
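The drift described above can be simulated in miniature. This is a toy model with entirely invented items, intensities, and update rules, not how any real platform works: a recommender greedily maximizes a crude engagement score, each watched item nudges the user’s tolerance for intensity upward, and within a handful of steps a mild user is watching the most extreme item in the catalog.

```python
# Minimal toy simulation (all numbers invented): greedy engagement
# maximization plus preference drift pulls a mild user toward extremes.
catalog = {
    "cat_videos": 0.2,         # emotional intensity of each item
    "news_commentary": 0.5,
    "outrage_politics": 0.9,
}

def predicted_engagement(intensity, tolerance):
    """Toy model: more intense content engages more, but content far above
    the user's current tolerance (by more than 0.4) is penalized."""
    return intensity - 2.0 * max(0.0, intensity - (tolerance + 0.4))

def recommend(tolerance):
    """Greedily pick the item with the highest predicted engagement."""
    return max(catalog, key=lambda item: predicted_engagement(catalog[item], tolerance))

tolerance = 0.2                # the user starts with mild preferences
watched = []
for _ in range(5):
    pick = recommend(tolerance)
    watched.append(pick)
    # each watched item pulls the user's tolerance toward its own intensity
    tolerance += 0.5 * (catalog[pick] - tolerance)

print(watched)
# → ['news_commentary', 'outrage_politics', 'outrage_politics',
#    'outrage_politics', 'outrage_politics']
```

The point of the sketch is that no component here “wants” radicalization: it emerges from optimizing engagement step by step while the optimization target (the user) changes in response.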

World in danger?

But there are also some dangers that are less obvious and that I have mentioned in some of my lectures. They may be more important than those mentioned above and have even bigger consequences. For example:

5. Danger of creating a New Colonial world order

As stated in a series of MIT Technology Review articles, “the more users a company can acquire for its products, the more subjects it can have for its algorithms, and the more resources — data — it can harvest from their activities, their movements, and even their bodies.” While traditional colonialism was driven by political and governmental forces, algorithmic colonialism is driven by corporate agendas. Where the former used brute-force domination, colonialism in the age of AI takes the form of “state-of-the-art algorithms” and “AI-driven solutions to social problems.” Not only is Western-developed AI often unfit for African problems, but the West’s algorithmic invasion simultaneously impoverishes the development of local products while leaving the continent dependent on Western software and infrastructure. This new colonialism would have a significant economic impact and would, over time, separate countries into essentially two groups: AI-developing countries and AI-consuming countries.

6. Widening the gap in Human Potential connected with AI

And this is where the story ends. After the current gap between countries widens, we could see hubs that attract all the skilled people, who would be provided with enough material and economic resources to keep developing AI and to keep learning and gaining experience. For everyone else, the best option will be at least to maintain a level of AI-related education: to recognize and use its main features, and to be able to tell it apart from magic 😊

The number of dangers will of course grow with time, and it is certainly not limited to the ones listed here. The practical ones are easy to predict at the outset, in comparison with the last two, which are more complex and have unpredictable socio-economic consequences. How we recognize and deal with them will depend on our education and our knowledge of how AI systems work, but also on the further development of AI-related philosophy.

But we should not forget that, although the potential dangers and downsides are numerous, everything that the development of artificial intelligence can bring can, with timely planning and the right decisions, positively affect both individuals and society as a whole. Perhaps we will deal with one of these topics in future columns.



Aco Momcilovic

Ph.D. Student. National AI Capital Researcher. Human Resources, Psychology, Entrepreneurship, MBA…