The EU AI Act: Here's what to expect

From AI-generated content to voice assistants, AI systems have seen massive adoption in our day-to-day lives. As AI applications continue to proliferate, concerns about their potential risks sit at the forefront of the European Union's (EU) regulatory agenda.

In response to these concerns, the European Commission drafted a proposal to regulate AI in April 2021. After years of negotiation, spurred on by the unprecedented capabilities of novel models like ChatGPT, the Council of the EU formally approved the AI Act in May 2024.

After more than three years of legislative debate, the final text was published in the Official Journal of the EU on July 12, 2024, marking a significant milestone. The EU AI Act establishes the world's first comprehensive regulatory framework for AI and is expected to shape AI regulation and governance both within and beyond the EU.

The EU AI Act focuses on issues such as algorithmic transparency and the ethical use of AI, aiming to foster the development and adoption of AI whilst keeping the technology human-centric. The final text runs to roughly 50,000 words across 180 recitals, 113 Articles and 13 annexes, but we will save you the hassle and break down exactly what you need to know.

Whether you are a developer, deployer or end-user, you will want to read this.

Understanding the EU AI Act's risk-based approach

Agreed upon by the European Parliament and the Council in December 2023, the EU AI Act aims to address the risks that AI poses to the health, safety and fundamental rights of citizens.

The AI Act establishes a risk-based approach, categorising AI systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal or no risk. The higher the risk an AI system poses, the stricter the rules that apply.

To comply with the AI Act, organisations first need to understand which category their AI systems fall into.

EU AI Act Risk System

Unacceptable risk

AI systems in the 'unacceptable risk' category involve practices that violate fundamental EU rights and values and are prohibited outright.

This includes AI systems that use manipulative or deceptive techniques to exploit vulnerabilities related to age, disability, or social and economic situation. It also covers systems that infer emotions in the workplace or in education, perform biometric categorisation based on sensitive characteristics, or predict the likelihood of an individual committing a crime based solely on profiling.

High risk

High-risk AI systems, which can affect people's health, safety and fundamental rights, must undergo a conformity assessment before being placed on the market, in some cases carried out by a third-party body, as well as post-market monitoring to ensure societal safety.

These include systems used for profiling, safety components in critical infrastructure (electricity, gas and water), biometric identification and categorisation, and systems that determine access to education, employment and essential public or private services.

Limited risk

The limited risk category includes AI systems that pose risks of impersonation, manipulation or deception, such as AI-generated content. These systems are subject to disclosure and transparency obligations.

This ranges from deep fakes of images, audio and video to AI-generated text published to inform the public on matters of public interest. Most generative AI outputs must also be marked in a machine-readable format so they can be detected as artificially generated.
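To make the disclosure obligation concrete, here is a minimal sketch of what a machine-readable marker could look like, assuming a simple JSON record attached alongside the output. The field names are hypothetical, chosen for illustration; in practice, providers would more likely follow an emerging provenance standard such as C2PA content credentials.

```python
import json
from datetime import datetime, timezone

def disclosure_record(content_id: str, generator: str) -> str:
    """Build a machine-readable marker flagging an output as AI-generated.

    Field names are hypothetical, for illustration only; a production
    system would follow a provenance standard such as C2PA.
    """
    record = {
        "content_id": content_id,  # identifier of the generated asset
        "ai_generated": True,      # the machine-readable disclosure itself
        "generator": generator,    # which model or system produced it
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

print(disclosure_record("img-0042", "example-diffusion-v1"))
```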

Minimal risk

All other AI systems, which pose minimal or no risk, face no obligations under the AI Act. This includes common applications such as recommender systems and spam filters. The Commission notes that the majority of AI systems fall into this category, although providers of these systems are encouraged to adopt voluntary codes of conduct.
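With all four tiers defined, a first-pass triage of an internal AI inventory can be sketched as a simple lookup. The use-case tags below are hypothetical examples, and real classification turns on the Act's own definitions and annexes, so treat this as a starting point for an audit rather than a legal determination.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment and post-market monitoring"
    LIMITED = "transparency and disclosure obligations"
    MINIMAL = "no obligations under the Act"

# Hypothetical use-case tags for a first-pass inventory; not legal advice.
PROHIBITED = {"social_scoring", "workplace_emotion_inference"}
HIGH_RISK = {"cv_screening", "credit_scoring", "grid_safety_control"}
TRANSPARENCY = {"chatbot", "deepfake_generation"}

def triage(use_case: str) -> RiskTier:
    """Map a tagged use case to the risk tier it most plausibly falls under."""
    if use_case in PROHIBITED:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("cv_screening").value)  # conformity assessment and post-market monitoring
```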

Penalties for non-compliance

Failing to comply with the EU AI Act can lead to severe fines. Depending on the severity of the breach, penalties fall into three tiers (illustrated in the sketch after the list):

  • Prohibited (unacceptable-risk) practices: fines of up to the greater of (or, in the case of SMEs, the lesser of) €35 million or 7% of global annual turnover.
  • Most other violations, including breaches of high-risk obligations: fines of up to the greater of (or, in the case of SMEs, the lesser of) €15 million or 3% of global annual turnover.
  • Supplying incorrect, incomplete or misleading information to authorities: fines of up to the greater of (or, in the case of SMEs, the lesser of) €7.5 million or 1% of global annual turnover.
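As a rough sketch of how these ceilings combine, the function below computes the maximum exposure for a given tier and turnover, assuming the three tier labels used above; the actual fine in any given case is set by the supervising authority.

```python
def max_fine(tier: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound of the administrative fine for a violation tier.

    Non-SMEs face the greater of the fixed cap and the turnover percentage;
    SMEs and start-ups face the lesser of the two.
    """
    tiers = {
        "prohibited_practice":    (35_000_000, 0.07),
        "other_obligations":      (15_000_000, 0.03),
        "misleading_information": (7_500_000, 0.01),
    }
    fixed_cap, pct = tiers[tier]
    turnover_cap = pct * global_turnover_eur
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A provider with €2bn in global annual turnover committing a prohibited practice:
print(f"€{max_fine('prohibited_practice', 2_000_000_000):,.0f}")  # €140,000,000
```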

The EU AI Act timeline

The EU AI Act officially enters into force today, August 1, 2024. Its rules then apply in stages, depending on the risk level: bans on unacceptable-risk practices after six months (February 2025), obligations for general-purpose AI models after 12 months (August 2025), the majority of provisions after 24 months (August 2026), and rules for high-risk systems embedded in regulated products after 36 months (August 2027).
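Since every milestone is a fixed offset from the entry-into-force date, the application dates can be derived programmatically. A minimal sketch at month-level granularity (the Act itself pins the exact days):

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (safe here: we start on the 1st)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Staggered application of the Act's rule sets, in months after entry into force.
MILESTONES = [
    ("Bans on unacceptable-risk practices", 6),
    ("General-purpose AI model obligations", 12),
    ("Majority of provisions, incl. most high-risk rules", 24),
    ("High-risk systems embedded in regulated products", 36),
]

for rule_set, offset in MILESTONES:
    print(f"{add_months(ENTRY_INTO_FORCE, offset):%B %Y}: {rule_set}")
```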

The overall timeline of the EU AI Act is as follows:

The EU AI Act Timeline

Who is affected by the Act?

Is it just the EU?

To ensure a uniform level of transparency and safety, the EU AI Act applies to all providers that place AI systems on the EU market or put them into service in the EU, irrespective of where those providers are established. This means that companies outside the EU must comply with the Act if they wish to provide AI services to EU residents.

Similar to the General Data Protection Regulation (GDPR), the EU AI Act is expected to produce a "Brussels Effect": countries outside the EU are likely to adopt similar regulations.

Developers, deployers and end-users

To ensure clarity and accountability in the use of AI systems, the EU AI Act distinguishes between three different categories of actors: 

  • Developers: Organisations or individuals that create AI systems and place them on the market (the Act calls them "providers"). They bear the strictest obligations to ensure the transparency, safety and ethical use of their systems.
  • Deployers: Professionals or organisations that use AI systems in their operations. Although their obligations are less stringent, they are still required to ensure that the AI systems they use comply with the regulations.
  • End-users: Individuals who use a service or product that incorporates AI. The Act imposes no obligations on end-users, as they are the recipients of AI services.

Defining these roles and responsibilities ensures that those who create and deploy AI systems are held accountable and that the rights of end-users are safeguarded.

So, what’s next?

As the EU AI Act comes into force, stakeholders across the AI ecosystem need to prepare for compliance. Classifying your AI systems against the Act's risk categories is the first step towards a compliance roadmap that meets the requirements within the applicable deadlines.

Integrating ethical guidelines and best practices into AI development and deployment processes will build trust and acceptance among users, whilst staying informed about amendments and additional guidance from regulatory bodies will keep you ahead of your obligations.

Organisations should embrace the opportunity the AI Act presents to innovate responsibly whilst differentiating their AI products. Given the "Brussels Effect", companies should anticipate similar regulations beyond the EU and position themselves ahead of global regulatory trends.

By taking these proactive steps, developers, deployers, and end-users can navigate the complexities of the EU AI Act and contribute to the responsible and ethical advancement of AI technologies. The countdown to compliance has begun, promising a safer, more transparent, and human-centric AI landscape.

Nisha Arya
Head of Content

Data scientist turned marketer with 6+ years of experience in the AI and machine learning industry.
