The missing link — a global Artificial Intelligence governance system?
I previously mentioned the importance of defining artificial intelligence as the first step in setting its development as a priority and then creating a regulatory and governance framework. With the recent growth of some areas of AI, mostly generative AI, the discussion about governance models for artificial intelligence has become even more topical.
In response to concerns about the consequences that may arise from the use of artificial intelligence, and to the AI ethics movement's calls for greater governance of AI systems, laws dealing with the use of these technologies have begun to emerge.
Regulation of artificial intelligence usually takes one of two approaches:
Horizontal regulation — covers all applications of artificial intelligence across all sectors. Regulatory authority for this often rests with the central government.
Vertical regulation — covers only a specific application of artificial intelligence or a specific sector. Regulatory authority for this may be delegated to an industry body.
These two approaches can be compared in their flexibility, standardization, and need for coordination, and each has its advantages and disadvantages.
What are examples of horizontal regulation?
EU AI Act — proposes a horizontal regulatory framework for the European Union, in which AI in all sectors is subject to the same risk assessment criteria and legal requirements. Under the Act, limited-risk systems are subject to transparency requirements, high-risk systems to stricter compliance measures, and a further category of systems posing unacceptable risk is prohibited outright.
US Algorithmic Accountability Act — adopts a horizontal approach by requiring the Federal Trade Commission (FTC) to mandate assessments of the impact of AI systems across all sectors (depending on the size and reach of businesses).
The UK’s National Artificial Intelligence Strategy — can be seen as a (potentially) horizontal approach to regulation. One of the strategy’s three pillars is ‘Effective management of artificial intelligence’, in which two imperatives stand out: encouraging innovation and entrepreneurship, and designing standards and a regulatory regime that reflect this innovation agenda.
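The tiered logic of the EU AI Act can be illustrated with a short sketch. The four risk categories are real; the example systems and the obligations strings below are simplified assumptions for illustration only, not a legal classification:

```python
from enum import Enum

# The four risk tiers of the EU AI Act (real categories; the
# descriptions are simplified summaries, not legal definitions).
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict compliance measures"
    LIMITED = "transparency requirements"
    MINIMAL = "largely unregulated"

# Hypothetical example systems, mapped to tiers for illustration only.
EXAMPLE_SYSTEMS = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(system: str) -> str:
    """Summarize the regulatory consequence for an example system."""
    tier = EXAMPLE_SYSTEMS[system]
    return f"{system}: {tier.name} risk -> {tier.value}"

for name in EXAMPLE_SYSTEMS:
    print(obligations(name))
```

The key design point of a horizontal framework is visible here: the tier, not the sector, determines the obligations, so the same lookup applies to hiring tools and chatbots alike.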
What are examples of vertical regulation?
New York Bias Review Mandate — relates to the use of automated employment decision tools and therefore regulates only the employment sector. Under this law, employers are required to commission a third-party audit of their systems to identify and mitigate bias.
Illinois Artificial Intelligence Video Interview Act — applies to the use of artificial intelligence to evaluate video interviews conducted during the hiring process; it therefore regulates only certain activities in the employment sector. The law requires employers to provide notice of the use of artificial intelligence, obtain consent, and report demographic information.
California Workplace Technology Accountability Act — Limits workplace surveillance and use of automated decision-making systems, and also requires algorithmic impact assessments (and data protection impact assessments).
Of course, on the technical side, we can also talk about governance of specific areas, such as data management, management of ML systems, or management of their architecture.
One of the ideas in the EU’s AI Act is the establishment of a European Artificial Intelligence Board.
The Board provides advice and assistance to the Commission in these three ways:
(a) it contributes to the effective cooperation of national supervisory authorities and the Commission on matters covered by this Regulation;
(b) it coordinates and contributes to guidance and analysis by the Commission, national supervisory authorities, and other competent authorities on emerging issues across the internal market relating to matters covered by this Regulation;
(c) it assists national supervisory authorities and the Commission in ensuring the consistent application of this Regulation.
If we look at the highest (global) level of artificial intelligence governance, which has broad economic and social consequences, a number of questions arise that would be worth answering. Here are the first three that I think would be interesting to tackle:
1. What are the implications of artificial intelligence for global security and stability, and how can international agreements and norms be developed to mitigate these risks?
2. How can AI governance frameworks effectively balance the need for innovation and technological progress with the potential risks and negative consequences of AI?
3. How can we ensure that AI governance frameworks are flexible enough to adapt to the rapidly evolving nature of AI and technological innovation?
From all of the above, it can be concluded that the topic of artificial intelligence governance is very broad and can be viewed at multiple levels. It ranges from the individual level, where it is a skill that will be increasingly in demand (according to IBM experts), through governance at the company level, up to governance at the state level, tied to national AI strategies (and the lack of one in Croatia). At the highest level, of course, there are also attempts to exert global, or at least international, influence on the governance of artificial intelligence, whether within existing organizations or through new ones that could be created. One very fresh piece of news: at the beginning of 2023, an initiative emerged from an organization spanning several countries that could soon establish a society for AI governance, possibly headquartered in Morocco. If this idea comes to fruition, readers of this column will be among the first to learn the details.