EU’s AI Act takes effect, introduces risk-based approach for AI systems

LONDON, UNITED KINGDOM — The European Union’s landmark AI Act officially came into force on August 1, 2024, marking a significant shift in the regulation of artificial intelligence across Europe and beyond.
This comprehensive legal framework aims to ensure that AI systems placed on the market and used within the EU are safe and transparent.
Understanding the EU AI Act’s risk-based approach
The EU AI Act introduces a tiered, risk-based framework for AI systems:
- Minimal risk: Most AI systems fall into this category, with no mandatory requirements.
- Specific transparency risk: Also known as limited-risk AI systems, these must meet transparency requirements, including clear labeling of AI-generated content.
- High risk: These systems must comply with strict requirements, including risk mitigation, data quality assurance, and human oversight.
- Unacceptable risk: AI systems that clearly threaten fundamental rights, such as those manipulating human behavior or enabling social scoring, will be banned.
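The tiered framework above can be sketched in code. This is purely an illustrative model (the example use cases and their assignments are hypothetical and not legal advice), showing how an organization might triage systems against the four tiers:

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers mirroring the AI Act's risk-based framework."""
    MINIMAL = "minimal risk"
    TRANSPARENCY = "specific transparency risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"

# Hypothetical mapping of example use cases to tiers, following the summaries above.
EXAMPLE_TIERS = {
    "spam filter": RiskTier.MINIMAL,
    "customer-service chatbot": RiskTier.TRANSPARENCY,
    "cv-screening tool": RiskTier.HIGH,
    "social scoring system": RiskTier.UNACCEPTABLE,
}

def classify(use_case: str) -> RiskTier:
    """Look up an example use case; default to minimal risk,
    since most AI systems fall into that category."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
```

In practice, tier assignment turns on the Act's detailed annexes and definitions, not on a simple lookup, but the escalating obligations per tier follow this shape.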
Jonathan Armstrong, a partner at Punter Southall, noted that even before the AI Act, “AI was not completely unregulated in the EU thanks to the GDPR (General Data Protection Regulation).”
Regulators have already enforced the GDPR against AI systems, including temporary bans on chatbots and fines over the use of AI algorithms.
Enforcement mechanisms and penalties for non-compliance
Market surveillance authorities (MSAs) will oversee the implementation of the EU AI Act at the national level. Member states must appoint their MSA by August 2, 2025. A new European AI Office within the European Commission will coordinate matters at the EU level.
Penalties for non-compliance are substantial — in each case, whichever amount is higher:
- Up to €35 million ($38.4 million) or 7% of global annual turnover for violations of banned AI applications
- Up to €15 million ($16.5 million) or 3% for violations of other obligations
- Up to €7.5 million ($8.2 million) or 1.5% for providing incorrect or misleading information
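Because each cap is the higher of a fixed amount and a percentage of global annual turnover, the exposure scales with company size. A minimal arithmetic sketch (the function name and example turnover figure are hypothetical):

```python
def penalty_cap(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Upper bound of a fine under the Act's formula:
    the higher of the fixed cap and pct of global annual turnover."""
    return max(fixed_cap_eur, turnover_eur * pct)

# A firm with €2 billion global turnover facing the top tier (€35M or 7%):
# 7% of €2B is €140M, which exceeds the €35M fixed cap.
cap = penalty_cap(2_000_000_000, 35_000_000, 0.07)
```

For smaller firms the fixed amount dominates; for large multinationals the percentage does, which is why the turnover-based formula carries the real deterrent weight.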
Global impact and push for international AI governance
The AI Act’s reach extends beyond EU borders, affecting organizations worldwide. Armstrong explained, “If a U.S. company’s website has a chatbot function that is available for people in the EU to use, that U.S. business will likely be subject to the EU AI Act.”
In a related development, Japanese Prime Minister Fumio Kishida has unveiled an ambitious plan to establish a new international framework for developing rules on AI use. During his keynote address at a high-level meeting of the Organisation for Economic Cooperation and Development (OECD) in Paris, Kishida called upon nations to unite in addressing the “universal opportunities and risks” posed by generative AI, emphasizing the need for collaboration to achieve “safe, secure, and trustworthy AI.”
With the AI Act now in effect, businesses worldwide must carefully consider its implications and ensure compliance to avoid hefty penalties and maintain access to the EU market. As global efforts to regulate AI intensify, companies should stay informed about evolving international standards and prepare for a future of more stringent AI governance.