Comprehensive regulation to safeguard rights and foster innovation across EU
The European Artificial Intelligence Act, the world’s first comprehensive regulation designed to protect citizens’ fundamental rights while encouraging investment and innovation in the AI industry, has entered into force.
Years in the making, the AI Act is a sweeping rulebook for governing AI in Europe and could serve as a model for other governments drafting rules for this rapidly advancing technology.
The AI Act applies to any AI product or service offered in the EU, from Silicon Valley tech giants to local startups. The restrictions are based on four levels of risk, with most AI systems expected to fall into the low-risk category, such as content recommendation systems or spam filters.
“The European approach to technology puts people first and ensures that everyone’s rights are preserved,” said European Commission Executive Vice President Margrethe Vestager. “With the AI Act, the EU has taken an important step to ensure that AI technology uptake respects EU rules in Europe.”
The provisions will take effect in stages, beginning Thursday, with full implementation phased in over the next few years.
AI systems posing “unacceptable risk,” such as social scoring systems that influence behaviour, some types of predictive policing, and emotion recognition systems in schools and workplaces, will face a blanket ban from February.
Rules covering general-purpose AI models like OpenAI’s GPT-4 system will take effect by August 2025. Brussels is establishing a new AI Office to enforce these general-purpose AI rules.
OpenAI stated in a blog post that it is “committed to complying with the EU AI Act and will work closely with the new EU AI Office as the law is implemented.”
By mid-2026, the full set of regulations, including restrictions on high-risk AI systems like those that determine loan eligibility or operate autonomous robots, will be in force.
A fourth category includes AI systems posing a limited risk, which will face transparency obligations. Chatbots must be identified as machines, and AI-generated content like deepfakes must be labelled.
Companies that fail to comply with the rules face fines of up to 7 per cent of their annual global revenue.