Background
In my last post, I discussed the key components of the EU AI Act, focusing on its implications for businesses and outlining steps they can take to achieve compliance. Apart from the general compliance requirements, the Act introduces specific obligations for General Purpose AI Models (GPAI).
According to the Act, a GPAI model is defined as an AI model capable of performing a wide range of distinct tasks with significant generality, regardless of how it is deployed in the market. This includes models trained using large datasets through self-supervision at scale, which can be integrated into various downstream systems or applications. It's important to note that GPAI models used solely for research, development, or prototyping activities before market release are exempt from these regulations. Furthermore, the Act distinguishes between GPAI models and GPAI systems. A GPAI system is defined as an AI system that is built on top of a GPAI model, extending its functionalities for specific applications or industries.
Let us understand the different obligations for a GPAI Model and a GPAI system under the Act.
Obligations for GPAIs
The first step towards managing obligations for GPAI providers is to identify whether a particular AI model is a GPAI model or not. An GPAI model with systemic risk is identified based on specific criteria. It may be classified as such if it possesses high-impact capabilities evaluated through technical tools and benchmarks or based on the Commission's decision informed by scientific panel alerts.. A model is presumed to have high-impact capabilities if its training computation exceeds 1025 floating point operations. The Commission can amend these thresholds and benchmarks to reflect technological advances via delegated acts.
Providers must notify the Commission within two weeks if their model meets high-impact criteria, including evidence to support this. Providers may argue that their model, despite meeting these criteria, does not present systemic risks due to its specific characteristics. The Commission will review these arguments and can reject them if unsubstantiated, thereby confirming the model's systemic risk status. Additionally, the Commission can designate models as presenting systemic risks based on its own assessment or scientific panel alerts. Providers can request a reassessment of the systemic risk designation with new reasons at least six months after the initial decision. If the designation is maintained, further reassessment requests can be made every six months. The Commission will maintain and update a public list of AI models with systemic risk, while protecting intellectual property and confidential information.
In terms of obligations, providers of general-purpose AI models must:
Maintain up-to-date technical documentation, including training and testing processes and evaluation results.
Provide documentation to AI system providers who use their models, ensuring they understand the model's capabilities and limitations, while protecting intellectual property and trade secrets.
Providers must implement policies to comply with EU copyright laws and make publicly available a detailed summary of the training content according to AI Office templates. However, these obligations do not apply to AI models released under a free and open-source licence that allows access, usage, modification, and distribution of the model, provided the model's parameters and usage information are publicly available. This exception does not apply to general-purpose AI models with systemic risks.
Providers of general-purpose AI models established in third countries must appoint an authorized representative within the Union before placing their models on the Union market.
This representative, empowered by a written mandate, is responsible for verifying that the provider has prepared the necessary technical documentation and has fulfilled all related obligations.
The representative must keep this documentation and the provider’s contact details available for the AI Office and national authorities for ten years after the model is marketed.
Upon request, the representative must provide information and documentation to demonstrate compliance and cooperate with authorities on any actions related to the model, including its integration into AI systems within the Union.
The mandate allows the AI Office or national authorities to address the representative for compliance issues instead of or in addition to the provider.
If the representative believes the provider is not meeting its obligations, they must terminate the mandate and inform the AI Office immediately, explaining the reasons. These obligations do not apply to providers of AI models released under a free and open-source licence that allows access, usage, modification, and distribution of the model, provided the model’s parameters and usage information are publicly available. However, this exception does not apply to models presenting systemic risks.
While traditional GPAI models have a set of obligations, the ones with systemic risks has to follow additional obligations, such as:
Ensure that AI models are evaluated using the latest standard protocols and tools, including adversarial testing to identify and mitigate risks
Assess and mitigate potential risks at the Union level from developing or using AI models
Keep track of and report serious incidents and corrective measures to the AI Office and national authorities promptly
Maintain strong cybersecurity protection for the AI model and its physical infrastructure.
Regulatory Sandboxes
The EU AI Act discusses regulatory sandboxes under Article 57. The intention behind the regulatory sandboxes is to facilitate the development, training, testing, and validation of AI systems in a secure environment before deployment. They aim to ensure compliance with the EU AI Act, promote cross-border cooperation and sharing of best practices, foster EU AI innovation and competition, support regulatory adaptation through evidence-based learning, enhance market accessibility for SMEs and start-ups, and allow controlled real-world AI testing. Additionally, these sandboxes provide a setting for AI providers to use lawfully obtained personal data for model development, ensuring robust access control, data security, and privacy protocols.
Member States are required to establish at least one national AI regulatory sandbox within 24 months from the regulation's start date. This sandbox can be created either independently or in collaboration with other Member States. The European Commission may offer technical support, advice, and tools for its establishment and operation. Additionally, participating in an existing sandbox that offers equivalent national coverage is also acceptable to meet this requirement.
There are certain obligations that AI providers should fulfil in order to operate under a regulatory sandbox. AI providers must meet the EU Commission's criteria to join regulatory sandboxes and create a sandbox plan with their National Competent Authority (NCA) outlining objectives, methods, and conditions. They are liable for damages to third parties caused by failing to follow NCA guidelines during sandbox testing.
For real-world testing outside the sandbox, including high-risk systems, AI providers must:
1. Submit a testing plan for approval by their Member State Authority (MSA).
2. Obtain informed consent from participants and protect vulnerable groups.
3. Communicate testing procedures to cooperating AI deployers.
4. Oversee testing procedures continuously.
5. Ensure decisions made by the AI system can be reversed.
Providers must inform their MSA about the testing location, report progress, termination procedures, and final outcomes. They must also report any serious incidents immediately and may need to suspend or terminate testing if issues cannot be resolved. Providers are liable for any harms resulting from real-world testing.
Concluding Remarks
The EU AI Act represents a significant step forward in regulating the rapidly evolving field of artificial intelligence. By defining specific obligations for General Purpose AI Models (GPAI) and establishing regulatory sandboxes, the Act aims to ensure that AI development is both innovative and safe. The Act's requirements for detailed documentation, compliance with copyright laws, and the appointment of EU representatives for third-country providers reflect a comprehensive approach to managing AI risks. The introduction of regulatory sandboxes facilitates the controlled testing and validation of AI systems, promoting cross-border cooperation and fostering a competitive AI ecosystem. As AI continues to advance, the EU AI Act provides a robust framework to balance innovation with responsibility, ensuring that AI systems are developed and deployed in a manner that protects public safety and fosters trust in AI technologies.