Navigating the Upcoming European Union AI Act

This is a personal copy of a column in IEEE Software (Jan/Feb 2024). Republished with permission.

Brace yourselves, AI providers, as significant changes are on the horizon with the European Union’s AI Act. Similar to the General Data Protection Regulation (GDPR) – the toughest privacy and security law in the world – the AI Act is set to impose stringent regulatory requirements on organizations offering AI systems to the European market. Non-compliance could lead to hefty fines. In June 2023, a staggering 771 amendments to the originally proposed AI Act were published. This column is your essential guide, distilling the most critical elements that will shape the regulatory AI landscape in 2024 – Markus Borg


Matthias Wagner, Markus Borg, and Per Runeson

This column provides the first map to navigate the upcoming AI Act, which will significantly shape the EU’s legal landscape. We shed light on what many practitioners still struggle to grasp. All the information provided is backed up by precise legal references (using the format <X>); see Table 1 at the end of this post for easy traceability. We have distilled hundreds of pages of legal text down to what really matters when complying with the new regulatory requirements.

First, we highlight where we are now in the legal process and what has recently changed. Next, we explain how the classification of systems affected by the act works. Then, grouped by affected system type, we dive into the core of the regulation by explaining its requirements in detail. Finally, we highlight what will matter for operationalizing the AI Act.

The figure below summarizes the main points of this column, helping to navigate the overall structure of the regulation.

An overview of the EU AI Act.

AI Act in late 2023 – Where are we now?

More than two years have passed since the European Commission (EC) released its proposal for harmonized rules on Artificial Intelligence (AI) in April 2021, commonly known as the AI Act. Since then, it has increasingly become the focus of public attention. The upcoming regulation aims to promote human-centric and trustworthy AI as well as to ensure protection from potentially harmful effects of AI-enabled systems, all while supporting innovation<1>.

On June 14, 2023, the European Parliament adopted 771 amendments to the Commission’s proposal, introducing various significant changes. Moreover, new rules were added for foundation models (such as those underlying ChatGPT) used in generative AI systems – an area that has had its “iPhone moment” since the original AI Act proposal. The amendments mark the European Parliament’s negotiating position, and it remains to be seen if all of them make it into the final act. Nevertheless, with these amendments in hand, we now have a clearer notion of how the final regulation might look after the ongoing trilogues – the last step before the AI Act enters into force.

Classification as High-risk AI System

The AI Act’s main requirements primarily apply to so-called high-risk AI systems. To start with, we need to know how the term “AI system” is defined by the European Union (EU). As of June, it is defined as “[…] a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments;”<48> While still very broad, this is already a narrowed-down version of the previous definition. Since so many systems fall under this definition, knowing what qualifies as high-risk becomes the decisive step in the classification.

There are two ways for an AI system to be classified as high-risk. First, AI-enabled products or safety components thereof that (1) have to undergo a third-party conformity assessment due to risks to health and safety and (2) are already covered by the EU harmonization legislation listed in Annex II<2><46><47>. Second, AI systems that qualify as critical use cases according to Annex III, but only if “[…] they pose a significant risk of harm to the health, safety or fundamental rights of natural persons […]”<3> or “[…] if it poses a significant risk of harm to the environment.”<3>

This conditionality, introduced by the amendments on top of the critical use cases listed in Annex III, is essential, as it makes it vaguer what might classify as high-risk. However, the EC will clarify this condition with guidelines, including concrete examples<4>. An important recent addition to Annex III is AI-based recommender systems of very large social media platforms with over 45 million average monthly active recipients<5>. Overall, the range of AI systems potentially covered as high-risk is considerable, emphasizing the impact of this upcoming legislation even more.

Requirements for High-risk AI Systems

There are nine key requirements that high-risk AI systems must comply with (Art. 8–17):

  1. Risk management system
  2. Data governance
  3. Technical documentation
  4. Record-keeping
  5. Transparency and provision of information
  6. Human oversight
  7. Accuracy, robustness, and cybersecurity
  8. Quality management system
  9. EU database registration

Many of these requirements will be very challenging to operationalize. This is because of the inherent uncertainty introduced by non-determinism, the reliance on data, and the continuous evolution and self-adaptation of AI-based systems [6]. In a previous requirements column, we also pointed out the challenge posed by the dynamic characteristics of AI systems [7].

The risk management system not only requires fostering adequate design and development of the system to minimize identified risks, but also encompasses, together with the quality management system, a substantial testing requirement. The testing procedures must ensure compliance with all requirements set out in the regulation and consistent performance in line with the system’s intended purpose<8>. Defining suitable testing metrics in this context is challenging and also needs to be addressed from a requirements engineering perspective. Moreover, the quality management system mandates systematic quality assurance and testing during the whole development lifecycle as well as after the AI system has been put into service, in the form of a post-market monitoring system<9>. To achieve this, appropriate state-of-the-art procedures and techniques must be identified.
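
To make the post-market monitoring idea more concrete, here is a minimal sketch of how a provider might compare a system’s accuracy in operation against the level validated before release. The act does not prescribe any particular metric, baseline, or threshold; the tolerance and report structure below are our own illustrative assumptions.

```python
# Minimal sketch of a post-market monitoring check (illustrative only).
# Assumes a binary classifier whose accuracy was validated before release.
from dataclasses import dataclass
from typing import List


@dataclass
class MonitoringReport:
    window_accuracy: float
    validated_accuracy: float
    degraded: bool


def check_post_market_performance(
    predictions: List[int],
    ground_truth: List[int],
    validated_accuracy: float,
    tolerance: float = 0.05,
) -> MonitoringReport:
    """Compare accuracy on a recent operational window against the validated baseline."""
    correct = sum(p == y for p, y in zip(predictions, ground_truth))
    window_accuracy = correct / len(predictions)
    degraded = window_accuracy < validated_accuracy - tolerance
    return MonitoringReport(window_accuracy, validated_accuracy, degraded)


if __name__ == "__main__":
    report = check_post_market_performance([1, 0, 1, 1], [1, 0, 0, 1], validated_accuracy=0.95)
    print(report)  # degraded=True would trigger a follow-up in the quality management system
```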

Appropriate accuracy, robustness, and cybersecurity are other key requirements of the AI Act<10>. The goal is to achieve consistent performance throughout the lifecycle while guaranteeing resilience against attacks. Types of attacks that need to be counteracted are data and model poisoning, adversarial examples or model evasion, confidentiality attacks, and model flaws<11>. High-risk AI systems must also be resilient against the exploitation of system vulnerabilities by unauthorized third parties that could influence their performance, outputs, behavior, or their use in general<12>. Furthermore, technical as well as organizational measures against possible errors, faults, and inconsistencies are necessary<13>. To achieve robustness, both user input and technical redundancy in the form of backups or fail-safe plans are suggested in the regulation<14>. Special consideration must be given to continuously learning systems to avoid negative feedback loops. This means that during the operation of such systems, mitigation measures have to be taken against possibly biased outputs being fed back as new input and against malicious input manipulation in general<15>.
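
As one small, hedged example of what operationalizing robustness could look like, the sketch below checks whether a model’s prediction stays stable when its input is slightly perturbed. The stand-in model, the perturbation budget, and the number of trials are our own assumptions; the act only requires an appropriate level of robustness, not this specific test.

```python
# Illustrative robustness smoke test: check that predictions stay stable under
# small input perturbations. The model and threshold are hypothetical examples.
import numpy as np


def predict(x: np.ndarray) -> int:
    """Stand-in for a deployed model: a simple linear decision rule."""
    weights = np.array([0.4, -0.2, 0.7])
    return int(x @ weights > 0.5)


def perturbation_robustness(x: np.ndarray, epsilon: float = 0.01, trials: int = 100) -> float:
    """Fraction of random perturbations within +/- epsilon that leave the prediction unchanged."""
    rng = np.random.default_rng(seed=0)
    baseline = predict(x)
    stable = sum(
        predict(x + rng.uniform(-epsilon, epsilon, size=x.shape)) == baseline
        for _ in range(trials)
    )
    return stable / trials


if __name__ == "__main__":
    sample = np.array([1.0, 0.3, 0.6])
    print(f"Prediction stable in {perturbation_robustness(sample):.0%} of perturbation trials")
```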

A major cornerstone of the AI Act is the requirement for transparency and provision of information. First, the system’s instructions for use must be transparent about the tested and validated level of accuracy, robustness, and cybersecurity, concerning all of the aspects above, including foreseeable circumstances that could negatively affect that expected level<16>. Generally, high-risk AI systems are required to operate transparently enough so that providers and users can reasonably understand the system’s functioning. Interpretability of the output is also demanded: users shall be able to understand, on a general level, how the system works and what data is processed<17>.

More important, however, is the right to an explanation of individual decision-making. Any person affected by a decision based on the output of a high-risk AI system can request an explanation that must inform about the role of the AI system in that decision and the main parameters of the decision, including the related input data. However, this right only applies to decisions producing legal effects for the affected person or similarly severe impacts on, for example, health, safety, fundamental rights, or socio-economic well-being<17><18>. To help affected persons exercise their right to an explanation, providers are required to inform these persons that they are subject to the output of such a system and about their right to an explanation of individual decision-making<19>. Last but not least, the degree to which explanations for the system’s decisions can be provided also needs to be stated in its instructions for use<20>.
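
What such an explanation of individual decision-making must contain in practice is still open. The sketch below merely illustrates the idea for a hypothetical credit-scoring system built on a transparent linear model, where each input’s contribution to the decision can be reported exactly. The feature names, weights, and wording are our own assumptions, not requirements of the act.

```python
# Sketch of a per-decision explanation for an affected person, assuming a
# transparent linear model so that each input's contribution is exact.
from typing import Dict


def explain_decision(inputs: Dict[str, float], weights: Dict[str, float], threshold: float) -> str:
    """Report the decision, the main parameters, and the related input data."""
    contributions = {name: weights[name] * value for name, value in inputs.items()}
    score = sum(contributions.values())
    decision = "approved" if score >= threshold else "rejected"
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"Decision: {decision} (score {score:.2f}, threshold {threshold})",
             "Main parameters and their contributions:"]
    lines += [f"  {name}: input {inputs[name]}, contribution {c:+.2f}" for name, c in ranked]
    return "\n".join(lines)


if __name__ == "__main__":
    print(explain_decision(
        inputs={"income_norm": 0.8, "debt_ratio": 0.5, "years_employed_norm": 0.4},
        weights={"income_norm": 1.2, "debt_ratio": -1.5, "years_employed_norm": 0.6},
        threshold=0.3,
    ))
```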

More important than the obligatory comprehensive technical documentation<21><22> is the requirement for record-keeping, because of the technical complexity involved. It requires automatic logging during system operation to ensure traceability throughout the entire system lifetime<23>. An interesting recent addition to the regulation is a dedicated logging capability for the system’s environmental impact in terms of energy consumption and resource use<24>.
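
The act does not prescribe a log format. A minimal sketch of such automatic record-keeping could look as follows; the specific fields, the model version string, and the energy figure are illustrative assumptions on our part.

```python
# Sketch of automatic record-keeping during operation: each prediction is logged
# with a timestamp, model version, a hash of the input, the output, and an energy
# estimate. The fields and the energy figure are assumptions for illustration.
import hashlib
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_system_records.log", level=logging.INFO, format="%(message)s")


def log_prediction(model_version: str, inputs: dict, output, energy_wh: float) -> None:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_sha256": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "output": output,
        "energy_wh": energy_wh,  # e.g., estimated from power draw during inference
    }
    logging.info(json.dumps(record))


if __name__ == "__main__":
    log_prediction("credit-scoring-1.4.2", {"income_norm": 0.8, "debt_ratio": 0.5},
                   output=1, energy_wh=0.002)
```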

With data being an integral part of modern AI systems, rules on data and data governance are also part of the act. Data used for training, validation, or testing of high-risk AI systems must meet certain quality criteria as far as technically feasible<25><26>. Among other things, these include bias detection and mitigation<27>, prevention of transfer context bias<28>, identification of data gaps and shortcomings<29>, vetting the data for errors, as well as fostering the relevance, representativeness, and completeness of the data<30>. Only under very strict conditions is it allowed to use personal data attributes for negative bias detection and correction<31>.
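
How bias detection is implemented is left to the provider. As a simple, hedged illustration, the sketch below computes the gap in positive label rates between groups in a training set; the grouping attribute, the metric, and the threshold are assumptions chosen purely for illustration.

```python
# Illustrative bias check as part of data governance: compute the gap in positive
# label rates between groups in a training set. Threshold and grouping are examples.
from typing import List, Tuple


def demographic_parity_gap(examples: List[Tuple[str, int]]) -> float:
    """examples: (group, label) pairs with binary labels; returns the max difference in positive rates."""
    rates = {}
    for group in {g for g, _ in examples}:
        labels = [y for g, y in examples if g == group]
        rates[group] = sum(labels) / len(labels)
    return max(rates.values()) - min(rates.values())


if __name__ == "__main__":
    data = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
    gap = demographic_parity_gap(data)
    print(f"Positive-rate gap between groups: {gap:.2f}")
    if gap > 0.2:  # illustrative threshold, to be set per use case
        print("Flag for bias mitigation before training")
```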

Furthermore, high-risk AI systems need to come with appropriate human-machine interfaces that allow them, proportionate to the risks involved, to be effectively overseen by humans<32>.

Finally, high-risk AI systems must be registered in a public EU database<45>.

Foundation Models

The addition of so-called foundation models is the EU’s attempt to cover the landscape of AI systems on a more general level, apart from just high-risk systems. A foundation model is defined as “[…] an AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks;”<33> These foundation models can in turn be used in various downstream AI systems. Therefore, the term general purpose AI system has been introduced in the AI Act, meaning “[…] an AI system that can be used in and adapted to a wide range of applications for which it was not intentionally and specifically designed;”<34> Only as a part of a general purpose AI system can a foundation model be considered high-risk according to the classification criteria outlined in this column<35>. However, the important takeaway is that the AI Act still sets out requirements that apply to all foundation models, no matter if they are part of a high-risk AI system or not<36>.

The core requirement for foundation models concerns the whole system’s lifecycle and asks for appropriate performance, predictability, interpretability, corrigibility, safety, and cybersecurity. Extensive testing and analysis – involving independent experts – shall be done to achieve this. Apart from that, risk mitigation must be achieved through adequate design, analysis and testing. Appropriate data governance shall ensure suitable data sources and bias mitigation. As for high-risk systems, the environmental impact must be logged, including resource use and energy consumption. However, besides just logging the environmental impact, foundation models must use applicable standards for energy efficiency and for reducing resource and energy use. Extensive technical documentation and instructions for use are also demanded. Again, a quality management system must be established to ensure compliance. Lastly, foundation models must be registered in a public EU database.<37>

While these requirements on foundation models are not as extensive as for high-risk AI systems, they are still considerable and pose new challenges to providers of such systems.

Additional Requirements for Generative AI

Additional obligations are set out for foundation models used in generative AI systems<37>. First, safeguards that prevent generating content that breaches EU law must be implemented<37>. Second, a transparency requirement applies if the system is intended to be used by natural persons: these persons need to be made aware that they are interacting with an AI system, if this is not already obvious from the context<38>.

Finally, foundation models used in generative AI systems shall be transparent about copyright-protected training data, meaning that a summary of the affected data has to be made publicly available<37>.

Operationalization of the AI Act

Penalties of up to 20 million EUR or, for companies, 4% of the worldwide annual turnover shall be an incentive to comply with the requirements of the AI Act<39>. When it comes to the operationalization of the requirements set out in the regulation, there are five aspects to consider that can be found throughout the act:

  1. generally acknowledged state-of-the-art<40>
  2. guidelines developed by the EC<41>
  3. relevant harmonized standards<42>
  4. common specifications where no harmonized standards are available<43>
  5. benchmarking guidance and capabilities<44>

This means that state-of-the-art research will be a cornerstone for coming up with appropriate solutions to comply with the requirements set out in the AI Act. Future research that deals explicitly with these requirements and evaluates possible solutions is needed. Cooperation between industry and academia is especially well suited to developing new methods and techniques in this challenging environment.

We invite you to reach out if you’re interested in collaborating on this topic. Whether the legislation directly impacts your organization or not, we hope you’ve found this summary to be informative and valuable. A basic understanding can help you avoid costly surprises later.

References

6. M. Felderer and R. Ramler, “Quality Assurance for AI-Based Systems: Overview and Challenges (Introduction to Interactive Session),” in Software Quality: Future Perspectives on Software Engineering Quality, D. Winkler, S. Biffl, D. Mendez, M. Wimmer, and J. Bergsmann, Eds., in Lecture Notes in Business Information Processing. Cham: Springer International Publishing, pp. 33–42, 2021.

7. M. Borg, “Pipeline Infrastructure Required to Meet the Requirements on AI,” IEEE Software, vol. 40, no. 1, pp. 18–22, 2023.

Legal References