Croatia and AI: what is the status, and is there any progress?
At the end of last year and the beginning of the new one, much commentary surrounded interesting AI applications such as Midjourney, DALL·E 2, and ChatGPT, along with predictions about what 2023 will bring in AI development. Global trends are as interesting as they are difficult to predict, but local trends should be somewhat easier to foresee. What can we expect from the development of artificial intelligence in the context of Croatia?
The first thing that stands out is that, according to data collected by the OECD AI Policy Observatory, there has still been no real progress in defining and implementing a quality strategy and policy for AI development in our country. Although this fundamental undertaking has been discussed for several years, official work on it apparently began only in 2020, and that remains the only document and the only public information about such a project in Croatia. For comparison, Austria adopted an AI development strategy extending to 2030 back in 2019, and Ireland published its strategy in 2021 along with a series of accompanying documents, campaigns, working groups, and best-practice guides, as did a number of other European countries.
Although overall investment in AI is growing, as is the volume of research in this area, particularly at the University of Zagreb, followed by the University of Rijeka and the University of Split, it is notable that contributions to public projects have been slowly declining for several years, and from an already modest level.
Apart from initiatives coming from the private sector, such as the Croatian AI Association (CroAI), very little is happening systematically and deliberately at the state level. According to CroAI's data and its Landscape report, between 2020 and 2022 the number of AI startups grew from 70 to 130, and the number of companies involved in some way with AI grew from 250 to 430.
While our ecosystem grows without any involvement from the state, new laws are being prepared at the European level to regulate certain aspects of applying AI in business. Questions of liability and oversight of the resulting products are under consideration: whether responsibility lies solely with manufacturers or also with those who deploy them. There are also moves toward banning some types of use, such as the detection of human emotions. The ethical questions that arise are numerous, and unfortunately we are once again failing to contribute actively to the processes that will define them. Can we hope that some body will be formed at the state level to deal proactively with questions of development, implementation, and coordination at the international level? I am not very optimistic, nor are there any indications of it at this time.
The future governance of artificial intelligence, at the national as well as the international or global level, is becoming an increasingly important question. Artificial intelligence has been at the center of many controversial debates, especially over the last 12 months, often among scientists and policymakers in Europe and beyond, without a clear understanding of what makes it unique. It has been variously described as a "technology of the future", a widely used "general-purpose technology", a "key technology", or a "collection" of very diverse technologies. Such differing concepts and definitions do not merely fuel academic discussion; they have important implications for stakeholders' rights. When laws and regulations assign certain rights and obligations to users of artificial intelligence (and to others affected by its use), those rights depend on what counts as an AI application or system, as we have seen in recent legislative proposals at the EU level.
Ambiguity about the definitional boundaries of artificial intelligence also produces very different reactions. Some categorically reject AI as highly disruptive, or see it as a threat to important traditions such as transparency and accountability in government. Others admire it uncritically, and indeed AI enables phenomenal advances, for example in predictions based on pattern recognition, exploiting the wealth of data that characterizes the digital age and achieving otherwise unattainable gains in economic efficiency, environmental sustainability, or the effectiveness of medical treatment. Still others consider adopting AI simply inevitable, much like earlier technologies that gave early adopters a competitive advantage in the global economy and world politics, while still warning of dangers such as an AI-fueled arms race, which I have written about in previous columns. In short, how we define artificial intelligence also matters for its normative evaluation. Therefore, before we attempt to participate in regulation, it would be good to settle on our definition and our value system, and then build the relevant institutions.