Introduction
With one of the most ambitious legal texts to date, the European Union has reached a milestone: political agreement on the text of the Artificial Intelligence Act was reached at the end of 2023. AI in Europe must be safe, respect fundamental rights and preserve democracy, without stifling innovation.
Compliance with the rules is important with any new regulation, and the AI Act is no exception: it places crucial obligations on AI systems based on their potential risks and impact levels. The penalties are not minor. With fines of up to 7% of global annual turnover for non-compliance, it is more important than ever for companies to be proactive and comply with the AI Act.
However, complying with the AI Act need not be viewed solely as a burden. The introduction of the AI Act boosts confidence in AI and, therefore, consumers’ and stakeholders’ trust in companies that use or offer AI. Moreover, the AI Act also opens the door to new opportunities for innovation and growth. Indeed, in addition to obligations, the legislation also provides measures to support innovation, including so-called regulatory sandboxes and real-world testing, which allow companies to develop and train AI solutions before they are brought to market.
The AI Act thus enables companies to play a leading role in the evolution of artificial intelligence. Be a pioneer and use AI to boost your business.
Objectives of the legislation
The primary goal of EU AI legislation is to provide a technology-neutral definition of AI systems and establish harmonized horizontal rules applicable to them. In addition, the legislation mandates the establishment of various bodies to control and monitor AI systems within the EU, including the European Artificial Intelligence Board (AI Board) and the AI Office.
Classification of AI systems
EU AI legislation classifies AI systems into different risk categories. Each classification entails specific requirements and obligations:
- AI systems with unacceptable risk, namely systems that violate fundamental rights, e.g., those involved in social scoring and real-time biometric identification, will be banned.
- High-risk AI systems, being those that pose significant risks to health, safety and fundamental rights, are subject to strict obligations, including human oversight.
- AI systems with minimal risk, such as AI-enabled video games and spam filters, can be used without restriction within the EU.
- AI systems with transparency risk, such as chatbots, must meet transparency requirements to enable informed user decisions.
Implications for businesses
EU AI legislation applies to all parties involved in the development, implementation and use of AI, extending across different sectors and even beyond the EU if products are intended for use within the EU.
As mentioned, the obligations for companies vary depending on what type of AI system is being used or developed.
The main commitments are:
- conduct risk assessments;
- use high-quality data;
- document technical and ethical choices;
- keep records tracking the performance of the AI system;
- inform users about the nature and purpose of AI systems;
- enable human supervision and intervention;
- ensure accuracy, robustness and cybersecurity;
- test AI systems for compliance with the rules; and
- register AI systems in an EU database.
Risks and challenges
Enforcement of EU AI legislation will be carried out by the member states. The fines that can be imposed vary depending on whether a prohibited AI system or a high-risk, minimal-risk or transparency-risk AI system is involved. For non-compliance with the rules on prohibited AI systems, fines can reach €35,000,000 or 7% of a company’s total annual turnover, whichever is higher. For non-compliance with other obligations, fines can also be high, up to €15,000,000 or 3% of total annual turnover. The law does provide lower amounts for small providers, start-ups and SMEs.
Opportunities
EU AI legislation seeks to establish a framework that ensures trust in AI for customers and citizens, while promoting investment and innovation in AI technologies. Regulatory sandboxes provide controlled environments for testing innovative technologies, while initiatives such as networks of AI excellence centers and digital innovation hubs support the development of AI in the EU.
Next steps
Companies are advised to evaluate their AI systems and assess the associated risks. Based on that assessment, it will then be necessary to comply with the obligations imposed by the AI legislation, among other things, by drafting codes of conduct and codes of practice.
Using AI is good. Using AI smartly and compliantly is better.
IFORI can advise and assist you on the application and implementation of these new regulations. Contact us now.