Introduction
The European Union AI Act (EU AI Act) was initially proposed by the European Commission on April 21, 2021. Since then, it has evolved significantly through feedback from various stakeholders. The Act was approved by the European Parliament on March 13, 2024, and formally adopted by the Council on May 21, 2024.
As the world's first comprehensive legislation on AI, the EU AI Act aims to establish a robust regulatory framework for the safe and ethical development of AI technologies. Its primary objectives are to stimulate investment and innovation in AI, improve governance and enforcement mechanisms, and promote a single EU market for AI. In the Act's own terms, it lays down a uniform legal framework for the development, placing on the market, putting into service, and use of artificial intelligence systems within the EU. It promotes the uptake of human-centric and trustworthy AI while safeguarding health, safety, fundamental rights, and environmental protection, and in doing so supports innovation and the free movement of AI-based goods and services across borders within the EU.
The Act will become fully applicable after a 24-month transition period, giving businesses time to initiate compliance efforts, although certain provisions apply earlier. Its scope extends to entities operating within the EU, encompassing providers, users (deployers), importers, distributors, and manufacturers of AI systems, ensuring accountability across all stages of AI development, deployment, importation, distribution, and manufacturing within the EU. The Act also has extraterritorial reach: businesses worldwide, including those in India, that provide or deploy AI systems or general-purpose AI models in the EU market must adhere to its regulations. Activities conducted for defence, military, or national security purposes fall outside its scope.
Salient Features and Impact on Businesses
The Act defines an 'AI system'[1] to mean a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The Act defines a general-purpose AI model (GPAI model)[2] as an AI model trained on large datasets, typically using self-supervision at scale, that displays significant generality: it can competently perform a wide range of distinct tasks and can be integrated into a variety of downstream applications or systems. Large language models are a prominent example. When a GPAI model is integrated into an AI system capable of serving a variety of purposes, the combination is referred to as a GPAI system. The Act imposes a set of compliance obligations on providers of GPAI models, whether the model is integrated into a system by the same provider or by a third party.
The Act categorizes AI systems into four risk classes: unacceptable risk, high risk, limited risk, and low risk. Unacceptable risk[3] pertains to AI systems that pose significant threats to fundamental rights, safety, or security; such systems are prohibited outright. Examples include social scoring systems, systems designed to manipulate children, and real-time remote biometric identification systems in publicly accessible spaces (subject to narrow exceptions).
High-risk AI systems[4] have the potential to cause significant harm, affecting critical areas such as infrastructure, law enforcement, or healthcare. Examples include AI used in products regulated under EU product safety laws, such as toys, aviation, cars, medical devices, and lifts. AI systems used for biometric identification, management of critical infrastructure, education, employment management, access to essential services, law enforcement, migration control, and legal interpretation are also considered high risk. These applications require registration in an EU database and are subject to stringent regulatory oversight to ensure safety, privacy, and adherence to fundamental rights within the European Union.[5] In addition, technical documentation for a high-risk AI system must be drawn up before the system is placed on the market or put into service,[6] and the system must automatically record events (logs) over its entire lifetime.[7]
Limited-risk AI systems carry minimal potential for harm and are typically used in non-critical applications where errors have limited consequences. Examples include chatbots and generative AI used to create video or audio content. These systems must inform users that they are interacting with AI (or that content is AI-generated) so that users can make an informed decision about whether to continue using them. Finally, low-risk AI systems pose negligible potential for harm and are commonly applied in non-critical scenarios with minimal impact on individuals or society; examples include spam filters and AI-enabled video games.
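To make the tiered structure concrete, the sketch below models the four risk classes and a few headline obligations as a simple Python lookup. The tier names, example obligations, and the summarize helper are illustrative simplifications of the scheme described above, not an authoritative mapping from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative model of the EU AI Act's four risk classes."""
    UNACCEPTABLE = "unacceptable risk"
    HIGH = "high risk"
    LIMITED = "limited risk"
    LOW = "low risk"

# Hypothetical summary of headline obligations per tier, for illustration only.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited - may not be placed on the EU market"],
    RiskTier.HIGH: [
        "conformity assessment before market entry",
        "technical documentation and automatic event logging",
        "risk management, data governance and human oversight",
        "registration in the EU database",
    ],
    RiskTier.LIMITED: ["inform users that they are interacting with AI"],
    RiskTier.LOW: ["no specific mandatory obligations"],
}

def summarize(tier: RiskTier) -> None:
    """Print the illustrative obligations attached to a given risk tier."""
    print(f"Risk tier: {tier.value}")
    for obligation in OBLIGATIONS[tier]:
        print(f"  - {obligation}")

if __name__ == "__main__":
    summarize(RiskTier.HIGH)
```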
The impact on businesses will vary depending on the type of AI systems they operate under this risk-based model. AI systems posing unacceptable risks are prohibited outright. For high-risk AI systems, businesses must implement robust risk management and human oversight. Companies using high-risk AI must also ensure rigorous governance over the data used for training, testing, and validation, including cybersecurity measures and controls to maintain the system's integrity and fairness. These regulatory requirements aim to mitigate the risks associated with AI deployment while promoting responsible and safe innovation.
Next Steps for Businesses
For companies and startups engaged in the AI space, the following are important steps that can be taken to initiate compliance:
Role Assessment: The Act categorizes entities as either providers or deployers based on their role in the AI ecosystem. Businesses that develop AI systems and place them on the market are providers, while those that use AI systems under their authority are deployers. Understanding these roles is the first step for businesses navigating compliance with the Act. Irrespective of role, the Act applies where the output produced by the AI system is used within the EU.
Risk Assessment: Entities must assess whether their AI systems fall into one of the risk categories defined by the Act: unacceptable risk, high risk, limited risk, or low risk. Compliance requirements will vary depending on the assigned risk level.
Understand the Compliance Requirements: As discussed above, providers of high-risk AI systems must ensure their systems pass a conformity assessment before entering the EU market. They must maintain comprehensive technical documentation and implement robust risk management throughout the AI system's lifecycle,[8] including data governance to ensure quality and mitigate bias. Transparency is key, with clear instructions for deployers and disclosure of the system's capabilities and limitations. For certain AI systems, a fundamental rights impact assessment is required. Post-market monitoring and incident reporting are mandatory, alongside measures for human oversight, accountability, cybersecurity, and safety. Cooperation with national authorities to demonstrate compliance is also essential.
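To illustrate the automatic record-keeping expectation mentioned above, the following is a minimal sketch of how a provider might capture timestamped lifecycle events, assuming a simple append-only JSON Lines log. The file name, fields, and event types are hypothetical; the Act requires logging capability, not any particular format.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_system_events.jsonl")  # hypothetical append-only event log

def record_event(system_id: str, event_type: str, detail: dict) -> None:
    """Append a timestamped event record to support lifetime logging of the system."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event_type": event_type,   # e.g. "inference", "human_override", "incident"
        "detail": detail,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: log a human override of a model decision for later audit.
record_event(
    system_id="credit-scoring-v2",  # hypothetical system identifier
    event_type="human_override",
    detail={"reason": "reviewer rejected automated decision"},
)
```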
Similarly, deployers of high-risk AI systems should adhere to the provider's instructions for use. They must ensure that input data are appropriate for the system's intended purpose and disclose the system's use to affected parties. Human oversight is necessary for safe use, especially in sensitive contexts where decisions influenced by AI outputs must be clearly explained.
Businesses operating limited-risk AI systems must notify users that they are interacting with AI; transparency is essentially the only obligation at this level, and low-risk AI systems face no additional mandatory requirements.
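A minimal sketch of such a user notice, assuming a chatbot-style interface; the wording, consent prompt, and function name are illustrative choices rather than anything prescribed by the Act.

```python
AI_DISCLOSURE = (
    "You are interacting with an AI-powered assistant. "
    "Responses are generated automatically."
)

def start_chat_session() -> None:
    """Show the AI disclosure up front, then proceed only if the user opts to continue."""
    print(AI_DISCLOSURE)
    choice = input("Do you wish to continue? (yes/no): ").strip().lower()
    if choice != "yes":
        print("Session ended at the user's request.")
        return
    print("How can I help you today?")

if __name__ == "__main__":
    start_chat_session()
```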
The Act imposes penalties[9] for non-compliance. Businesses that provide or deploy AI systems posing unacceptable risks could face fines of up to EUR 35 million or up to 7% of their annual global turnover, whichever is higher. Non-compliance with the obligations for high-risk AI systems could result in fines of up to EUR 15 million or up to 3% of annual global turnover, whichever is higher. For startups and small businesses, the fine is capped at the lower of the specified amount and percentage.
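As a worked illustration of how these turnover-based caps scale, the sketch below applies the "whichever is higher" rule described above to a hypothetical company with EUR 2 billion in annual global turnover; the figures and the helper function are examples only.

```python
def max_fine_eur(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Return the applicable cap as the higher of the fixed amount and the
    turnover-based amount (for SMEs and startups the lower of the two applies)."""
    return max(fixed_cap_eur, turnover_eur * pct)

turnover = 2_000_000_000  # hypothetical EUR 2 billion annual global turnover

# Prohibited (unacceptable-risk) practices: EUR 35 million vs 7% of turnover.
print(max_fine_eur(turnover, 35_000_000, 0.07))  # 140,000,000.0

# High-risk obligations: EUR 15 million vs 3% of turnover.
print(max_fine_eur(turnover, 15_000_000, 0.03))  # 60,000,000.0
```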
[1] Article 3(1)
[2] Article 3(63)
[3] Article 5
[4] Article 6
[5] Articles 8, 9 and 10
[6] Article 11
[7] Article 12
[8] Articles 16 and 26
[9] Article 99