The European Union has recently enacted such a regulatory framework for AI: the so-called AI Act. This regulation, the world's first comprehensive law regulating AI, is intended to establish consistent rules for the development and use of AI systems in the European Union. It pursues a risk-based approach: the higher the potential risk posed by the use of an AI system to the rights and legal interests of EU citizens, the stricter the legal requirements.

In the following, we will take a look at the most important aspects of the AI Act.

Who is affected by the regulation?

The AI Act is aimed at three groups in particular:

  • Providers who place AI systems on the market or who put them into service in the EU,
  • EU-based AI users, and
  • Providers and users of AI systems established or located in a third country, where the system’s output is used within the EU.

What risk classes are differentiated?

As mentioned at the beginning, the AI Act follows a risk-based approach and therefore assesses AI systems based on their potential risk to the safety, health and fundamental rights of EU citizens. In general, the regulation differentiates between four levels of risk.

The requirements and obligations become more extensive as the potential risk of using an AI system increases. They include, for example, transparency obligations, documentation requirements, the submission of an EU declaration of conformity, information requirements and continuous monitoring of AI systems by the operator.

In order to determine which risk level an AI system belongs to in a specific case, it is essential to conduct an appropriate risk assessment.

The following risk levels are specified in the AI Act:

Level 1 – Unacceptable Risk

The AI systems covered by this risk level pose an exceptionally high risk to the interests protected by the regulation. Their provision and use will therefore be prohibited throughout the EU after a transitional period of six months from the regulation's entry into force. This includes, for example, the following AI systems:

  • AI systems for “social scoring”,
  • AI systems for real-time remote biometric identification in publicly accessible spaces for law enforcement purposes,
  • AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or video surveillance footage,
  • AI systems for emotion recognition in the workplace and educational settings.

Level 2 – High Risk

If an AI system poses a potentially high – but still acceptable – risk to protected rights and legal interests, it is considered a high-risk AI system and therefore falls under the second risk level. Providers of such systems must, for example, conduct a conformity assessment and then issue an EU declaration of conformity. Examples of systems in this category include:

  • AI systems for biometric identification and categorization of individuals,
  • AI systems for the management and operation of critical infrastructure,
  • AI systems used in employment contexts, for example for screening applicants and monitoring employee performance.

Level 3 – Limited Risk

This category covers AI systems that interact with humans, such as chatbots like “ChatGPT”, as well as emotion recognition applications.

Providers of such AI systems are subject to transparency requirements. Above all, individuals must be informed that they are interacting with an AI system rather than with another human being.

Level 4 – Low Risk

The fourth and lowest risk level covers AI systems whose potential risk is minimal. This includes applications such as AI-based spam filters or predictive maintenance systems for machines.

There are no specific requirements for such AI systems in the AI Act.

What are the sanctions for violations?

Under the regulation, fines may be imposed for breaches of the obligations outlined above.

Similar to the European General Data Protection Regulation (GDPR), the amount of potential fines is based on the offending company's global annual turnover; alternatively, the regulation provides for fixed amounts, with whichever figure is higher serving as the upper limit of the sanction.

The risk levels defined by the AI Act are also relevant in this respect. Fines for the use of prohibited AI systems (risk level 1) amount to up to 35 million euros or 7% of global annual turnover. Violations of the requirements of the second and third risk levels are punishable by fines of up to 15 million euros or 3% of global annual turnover.

Providing incomplete or inaccurate information to the supervisory authorities may result in a fine of up to 7.5 million euros or 1% of annual turnover.

For small and medium-sized enterprises (SMEs) and start-ups, reduced upper limits apply: in their case, the lower of the two amounts constitutes the maximum fine.
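
To make the cap mechanics concrete, the following minimal Python sketch implements the "whichever is higher" rule described above, together with the reduced rule for SMEs. It is purely illustrative and not legal advice; the helper fine_cap is hypothetical, and the figures are the tier values cited in this article.

# Illustrative sketch of the fine-cap logic described above; not legal advice.
# "fine_cap" is a hypothetical helper; tier figures are taken from this article.

def fine_cap(annual_turnover_eur: float,
             fixed_cap_eur: float,
             turnover_share: float,
             is_sme: bool = False) -> float:
    """Upper limit of a fine for one violation tier.

    Regular companies: the higher of the fixed amount and the
    turnover-based amount applies. SMEs and start-ups: the lower one.
    """
    turnover_based = annual_turnover_eur * turnover_share
    if is_sme:
        return min(fixed_cap_eur, turnover_based)
    return max(fixed_cap_eur, turnover_based)

# Risk level 1 (prohibited AI systems): up to 35 million euros or 7% of turnover.
print(fine_cap(1_000_000_000, 35_000_000, 0.07))               # 70000000.0
print(fine_cap(1_000_000_000, 35_000_000, 0.07, is_sme=True))  # 35000000.0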

When does the regulation come into force?

After the European Parliament approved the draft on March 13, 2024, the Council of the European Union also approved the AI Act on May 21, 2024.

The regulation will enter into force 20 days after its publication in the Official Journal of the European Union. Publication is expected in July 2024.

Once the AI Act comes into force, a staggered system of transition periods will apply:

  • Six months after entry into force, the provisions on banned AI systems will take effect, meaning that their use must be discontinued by that time (at the latest).
  • 24 months after entry into force, the other provisions of the regulation, such as the transparency requirements for generative AI systems, will also take effect and must be complied with from that point onwards.
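
For illustration only, the staggered schedule can be expressed as a small date calculation in Python. The publication date below is an assumed placeholder, since only a July 2024 publication is expected at the time of writing; month arithmetic uses the third-party python-dateutil package.

# Illustrative timeline; the publication date is an assumed placeholder.
from datetime import date, timedelta
from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

publication = date(2024, 7, 15)                      # assumed, not official
entry_into_force = publication + timedelta(days=20)  # 20 days after publication
bans_take_effect = entry_into_force + relativedelta(months=6)      # prohibited systems
general_application = entry_into_force + relativedelta(months=24)  # most other provisions

print(entry_into_force, bans_take_effect, general_application)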

Practical advice

Ideally, companies that already use AI systems or plan to adopt such technologies in the near future should begin addressing the specifications and requirements of the AI Act now. In particular, companies that use AI systems provided by third parties should, as a first step, analyze their current situation and determine to what extent they will be subject to corresponding obligations in the future.

Although the transitional periods provided for in the regulation may still seem rather lengthy at the moment, experience with the introduction of the GDPR has shown that companies which only start implementing a regulation shortly before it takes effect often struggle to meet the requirements fully and on time.
