
Europe Introduces Stringent Tech Rules as AI Act Enforcement Commences
The EU has officially commenced enforcement of its AI law. Implementation of the new EU AI Act paves the way for stringent restrictions and potentially huge fines for violations, CNBC has reported. The AI Act, the first tech regulatory framework of its kind, formally entered into force in the EU in August 2024.
Official Lapse of Deadline
The deadline for the prohibition of specific AI systems and for requirements to ensure adequate AI literacy lapsed on February 2. Tech companies are now expected to comply with the new restrictions; those that fail to do so will attract the AI Act penalties stipulated in the regulatory framework.
Under EU AI Act enforcement, companies that violate the Act could receive fines as high as 35 million euros ($35.8 million) or 7% of their global yearly revenue, whichever is higher. These fines exceed those provided for under the EU's data protection law, the General Data Protection Regulation (GDPR), under which companies can be fined a maximum of 20 million euros or 4% of annual global revenue.
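In practical terms, the ceiling rule described above reduces to taking the larger of a fixed floor and a revenue percentage. The short Python sketch below illustrates that arithmetic with the reported figures; the function name and example revenue are hypothetical, and this is not part of any official compliance tooling.

def ai_act_max_fine(global_annual_revenue_eur: float) -> float:
    # Ceiling under the AI Act as reported: 35 million euros or 7%
    # of global yearly revenue, whichever is higher.
    return max(35_000_000.0, 0.07 * global_annual_revenue_eur)

# Example: for a company with 1 billion euros in yearly revenue,
# 7% of revenue (70 million euros) exceeds the 35 million floor,
# so the ceiling is 70 million euros.
print(ai_act_max_fine(1_000_000_000))  # 70000000.0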
The law pegs penalty amounts to the size of the company and the nature of the infringement. Under the AI Act, various manipulative AI applications are banned in the EU. These include real-time facial recognition applications, social scoring systems, and other kinds of biometric identification apps that categorize citizens by sex, race, sexual orientation, and other attributes. These applications are deemed to pose an unacceptable risk to EU residents.
Investor Concerns
Some investors and tech executives have expressed concerns over the stringent provisions of the AI Act, fearing they could constrain innovation. In the middle of last year, the Netherlands' Prince Constantijn raised concerns over the EU's focus on AI regulation.
“Our ambition seems to be limited to being good regulators. It’s good to have guardrails. We want to bring clarity to the market, predictability and all that. But it’s very hard to do that in such a fast-moving space,” Constantijn said.
Even with these concerns, some tech executives view clear AI regulation as critical to giving the EU a leadership edge in the industry.
“While the U.S. and China compete to build the biggest AI models, Europe is showing leadership in building the most trustworthy ones. The EU AI Act’s requirements around bias detection, regular risk assessments, and human oversight aren’t limiting innovation — they’re defining what good looks like,” Diyan Bogdanov, Director of Engineering Intelligence at Payhawk, said.
Much Needed Action
The EU says that the AI Act is not yet in full force and that these are just the initial steps. Mozilla's Head of EU Public Policy, Tasos Stampelos, says the AI Act is much needed.
“It’s quite important to recognize that the AI Act is predominantly a product safety legislation. With product safety rules, the moment you have it in place, it’s not a done deal. There are a lot of things coming and following after the adoption of an act. Right now, compliance will depend on standards, guidelines, secondary legislation or derivative instruments that follow the AI Act that will actually stipulate what compliance looks like,” Stampelos said.
Last year, the newly created EU AI Office published the second draft of its code of practice for general-purpose AI models. These are systems such as OpenAI's GPT large language models. The draft provided exemptions for providers of certain open-source AI models.
It also set out requirements for developers of general-purpose AI models that pose systemic risk to undergo rigorous risk assessments. The EU AI Office was set up to regulate the use of AI models in line with the AI Act.