The EU AI Act: It's time to get prepared
The world's first legal framework for AI is here. Ensure your organisation is compliant, competitive, and ready for the future.
What is the EU AI Act?
The EU AI Act is a landmark regulatory framework designed to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly.
It adopts a risk-based approach: the higher the risk, the stricter the rules. Non-compliance can lead to significant financial penalties.

The AI Act classifies AI systems into four risk categories:
1. Unacceptable
AI applications that are incompatible with EU values and fundamental rights. They will be prohibited.
2. High Risk
Heavily regulated AI systems that could cause significant harm if they fail or are misused, or that serve as safety components of products.
3. Limited Risk
Applications that pose a risk of manipulation or deception. They face lighter regulation but carry transparency obligations.
4. Minimal Risk
All remaining AI systems. While they have no mandatory requirements, transparency and ethical use are encouraged.
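As an illustration, the four tiers above can be modeled as a simple lookup from risk category to headline treatment. This is a hypothetical sketch: the category names and obligation summaries are paraphrased from this page, not taken from the Act's legal text.

```python
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Paraphrased, non-authoritative summary of each tier's treatment.
OBLIGATIONS = {
    RiskCategory.UNACCEPTABLE: "Prohibited: incompatible with EU values and fundamental rights.",
    RiskCategory.HIGH: "Heavily regulated: strict requirements before and after deployment.",
    RiskCategory.LIMITED: "Lighter regulation, but transparency obligations apply.",
    RiskCategory.MINIMAL: "No mandatory requirements; ethical use encouraged.",
}

def obligation_for(category: RiskCategory) -> str:
    """Return the headline treatment for a given risk tier."""
    return OBLIGATIONS[category]

print(obligation_for(RiskCategory.LIMITED))
```

A compliance workflow would start from a mapping like this and attach concrete controls to each tier.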
The Cost of Non-Compliance
For prohibited AI practices: fines of up to €35 million or 7% of global annual turnover, whichever is higher.
For most other non-compliance: fines of up to €15 million or 3% of global annual turnover.
For supplying incorrect or misleading information: fines of up to €7.5 million or 1% of global annual turnover.
Implementation Timeline
August 2024
The AI Act entered into force.
February 2025 (6 Months)
Prohibitions on unacceptable risk AI systems apply.
August 2025 (12 Months)
Obligations for General Purpose AI (GPAI) models become applicable.
August 2026 (24 Months)
Full application of the AI Act, including rules for high-risk systems.
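The timeline above amounts to a set of dated milestones, so a deadline check is straightforward to sketch. The day-level dates below are assumptions (the page gives only months); treat them as illustrative.

```python
from datetime import date

# Key application dates from the timeline above (day-level dates assumed).
MILESTONES = [
    (date(2024, 8, 1), "AI Act entered into force"),
    (date(2025, 2, 2), "Prohibitions on unacceptable-risk AI systems apply"),
    (date(2025, 8, 2), "Obligations for GPAI models apply"),
    (date(2026, 8, 2), "Full application, including high-risk system rules"),
]

def applicable_milestones(today: date) -> list[str]:
    """Return the milestones already in effect on the given date."""
    return [label for when, label in MILESTONES if when <= today]

for label in applicable_milestones(date(2025, 9, 1)):
    print(label)
```

Running the example with a date in September 2025 lists every milestone except full application, which only triggers in August 2026.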
Ready to ensure compliance?
BoardX helps you navigate these regulations with automated compliance tracking and risk management.