EU AI Act News Today: Major Changes You Need to Know

Europe has been a global pioneer in AI law – its EU AI Act (adopted in 2024) is the first comprehensive AI regulation in the world. Under the original timeline, bans on “unacceptable risk” AI uses (like covert surveillance or social scoring) took effect on Feb. 2, 2025, and high-risk AI systems were to meet the strict rules by 2026–27. Now, the European Commission is revising the AI Act as part of a new Digital Omnibus package (Nov. 2025) aimed at cutting red tape and boosting competitiveness. The omnibus bundles changes not only to the AI Act but also to the Data Act, GDPR, and other digital laws. In short, Brussels wants to simplify AI governance – a response to industry pressure and internal debate, as analyzed by Tech Policy Press.

Why the EU AI Act Is Facing a Major Shake-Up

The drivers behind the shake-up are clear. US tech giants (via the CCIA coalition, whose members include Apple, Meta, and Amazon) have lobbied intensively for simplification, and even the Trump administration has pressed Brussels to ease the rules. Within Europe, member states are split: Denmark and the Netherlands are pushing hard to overhaul the law to help startups, while Germany insists the Act’s core structure must stay intact. Meanwhile, 56 EU-based AI firms (including Mistral AI and Aleph Alpha) signed an open letter urging a pause and simplification of the Act, warning that compliance costs could choke innovation. These public and private pressures, along with the EU’s “Competitiveness Compass” strategy, set the stage for reform.

EU AI Act News Today: Major Updates Explained

Against this backdrop, the Commission’s Omnibus proposals (published Nov. 19, 2025) outline several key amendments to the AI Act, many geared toward easing burdens on businesses and SMEs. Crucial changes include extended timelines for high-risk AI compliance, new data-processing flexibilities, and shifts in enforcement. Here are the highlights:

  • Delaying high-risk obligations – The most notable tweak pushes back the deadlines for high-risk AI systems. Under the original Act, high-risk obligations were to apply from Aug. 2026 (Annex III) and Aug. 2027 (Annex I); the Omnibus proposal instead ties those dates to the availability of technical standards. Systems listed as high-risk in Annex III (like certain biometric ID tools) would have to comply 6 months after the EU confirms the needed standards are available, while Annex I systems (AI embedded in regulated products such as medical devices) get 12 months – but no later than Dec. 2027 and Aug. 2028 respectively. In effect, this buys up to 16 extra months of compliance prep for Annex III systems and 12 for Annex I (some commentators describe the package as a postponement of up to two years); the sketch after this list shows the deadline logic.
  • Bias mitigation (Article 4a) – The Omnibus creates a new Article 4a that relaxes the rules on using sensitive personal data (race, health, etc.) for bias testing. Currently, firms can use such data for bias detection only when “strictly necessary”; the Omnibus lowers that bar to simply “necessary”. In plain English, AI developers can more easily analyze demographic data to find and correct model bias, provided they implement safeguards and respect individuals’ rights (such as an opt-out). This extended derogation, aligned with the GDPR, should make fairness checks on AI easier to run without as many legal hurdles.
  • Transparency obligations (Article 50) – Under the Act, providers of generative AI must label AI-generated content. The Omnibus delays this marking requirement: systems placed on the market before Aug. 2, 2026 would have until Feb. 2, 2027 to comply. The proposal also changes who writes the rules: instead of the Commission issuing binding marking rules, the AI Office would develop voluntary Codes of Practice, with the Commission stepping in only if problems arise. In short, EU compliance guidelines become more advisory and flexible, with the first codes expected in 2026.
  • Registration & self-assessment – Small mercy for non-high-risk systems: If an AI system is exempt from being classified as high-risk (say it’s only used for internal prep tasks), under the original Act the provider still had to register it in an EU database. The Omnibus scraps that mandatory registration. Instead, exempt providers need only do an internal self-assessment before deployment. This removes what lawyers call a “disproportionate burden” on many companies building low-risk AI.
  • Relief for SMEs and SMCs – The Omnibus broadens carve-outs originally meant only for micro/small businesses. Simplified requirements for quality management systems (Art. 17) and other SME-support measures will extend to all SMEs and small mid-caps (SMCs). In practice, faster-growing but still relatively small firms (SMCs) get the same lighter-touch compliance options that used to apply only to the littlest start-ups.
  • AI literacy and training – The AI Act initially required providers and deployers to ensure their staff have sufficient AI knowledge. The Omnibus removes this obligation. Instead of a binding rule, it simply tasks the Commission and member states with promoting AI literacy via training and best-practice sharing. This shift is meant to ease companies’ paperwork, though some worry it weakens proactive safeguards, according to The Verge.
  • Centralized enforcement (AI Office) – A big governance change: the Commission’s AI Office (which currently oversees only general-purpose AI models) would gain new powers. The proposal makes the AI Office the exclusive supervisor of AI systems built on a general-purpose model by the model’s own provider, as well as AI systems embedded in “very large” platforms designated under the Digital Services Act. In short, Brussels wants to avoid fragmented national enforcement by putting one EU body in charge of the biggest AI systems. This centralization means EU-wide pre-market checks and penalties administered by the Commission rather than a patchwork across 27 national authorities.
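To make the new timeline mechanics concrete, here is a minimal illustrative sketch of the deadline logic from the first bullet: a grace period starts when standards are confirmed available, but a hard backstop caps how late compliance can slide. The dates are the ones reported above; the function and variable names are hypothetical, not taken from the proposal text.

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day clamped to 28 for simplicity)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

def high_risk_deadline(standards_confirmed: date | None,
                       grace_months: int,
                       backstop: date) -> date:
    """Deadline = grace period after standards are confirmed available,
    capped by the hard backstop date described in the Omnibus proposal."""
    if standards_confirmed is None:
        return backstop  # standards never confirmed in time: backstop applies
    return min(add_months(standards_confirmed, grace_months), backstop)

# Annex III example (hypothetical confirmation date): 6-month grace, Dec 2027 backstop.
print(high_risk_deadline(date(2027, 3, 1), 6, date(2027, 12, 2)))   # -> 2027-09-01
# Annex I example: 12-month grace, but the Aug 2028 backstop caps a late confirmation.
print(high_risk_deadline(date(2028, 5, 1), 12, date(2028, 8, 2)))   # -> 2028-08-02
```

The `min()` call captures the design point: confirming standards early pulls the deadline forward, but Dec. 2027 and Aug. 2028 are the latest the obligations can land.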

Below is a summary table of these shifts, comparing key AI Act provisions before the Omnibus with the proposed changes.

| Feature | Original AI Act | Omnibus Proposal |
| --- | --- | --- |
| High-risk AI timelines | High-risk obligations were to apply from Aug 2026 (Annex III) and Aug 2027 (Annex I); GPAI transparency rules from Aug 2025. | High-risk requirements deferred until compliance tools (standards/guidelines) exist; full rules kick in by Dec 2027/Aug 2028 at the latest. |
| Sensitive data for bias | Use of special-category data for bias checks allowed only if “strictly necessary.” | New Art. 4a: providers/deployers may use sensitive personal data for bias detection under a “necessary” threshold, with GDPR-aligned safeguards. |
| Registration of exempt systems | Many exempt AI systems still required registration in the EU database. | Drops the registry requirement; exempt providers only document a self-assessment before placing the AI on the market. |
| SME/SMC relief | Quality-management simplifications and support measures reserved for micro/small firms. | Extends the relief (e.g. simpler QMS rules) to all SMEs and small mid-caps (SMCs). |
| AI literacy & training | Providers/deployers must ensure sufficient AI knowledge/training for their staff. | Removes the mandatory literacy requirement; Commission/member states encouraged to promote AI training voluntarily. |
| Oversight & enforcement | AI Office oversaw only GPAI models; high-risk systems were checked nationally. | AI Office gains exclusive authority over GPAI-based systems offered by the model’s own provider and AI embedded in major online platforms (centralizing EU oversight). |

These proposals come with trade-offs. Supporters say the delays and simplifications will spare businesses from “compliance overload” and let Europe’s AI sector grow. Critics, however, warn of a “deregulation” risk. Civil society groups fear that postponing key safeguards leaves Europeans unprotected by high-risk AI rules. An open letter by 120+ NGOs even calls the Omnibus a “rollback of digital rights”. As one expert put it, the term “simplification” can become a euphemism for watering down important protections.

Meta Probe Highlights EU’s Hard Line on Big Tech

Meanwhile, the EU’s tough stance on Big Tech continues. On Dec. 4, 2025, the European Commission opened a wide-ranging antitrust probe into Meta’s plan to block third-party AI bots on WhatsApp. This shows Brussels is not backing off on oversight – it just wants to balance competition and innovation. Indeed, EU antitrust chief Teresa Ribera said the probe aims to prevent “irreparable harm to competition in the AI space”. In context, Europe is simultaneously regulating AI and challenging dominant tech firms, underscoring its determination to shape AI’s future both legally and commercially.

What’s next?

The Digital Omnibus is currently a proposal. It must be approved by the European Parliament and Member States, and it could change in that process. Adoption is expected by late 2026, with actual implementation perhaps in 2027–28. In fact, Parliament’s committees have already begun debating the AI Act amendments. If all goes as planned, the updates will enter into force quickly after approval; but they may also be amended or delayed if political gridlock persists. The EU’s year-long “Digital Fitness Check” (consultation open until March 2026) might also influence the final text.

For businesses and developers, the message is clear: treat these proposals as an early warning. Companies should map their AI products against the upcoming rules, update compliance timelines, and watch the evolving guidance from the EU AI Office. 2026 will likely be a preparation year – the AI Office is slated to issue transparency and other guidelines during this period. In practice, organizations may want to hold off on heavy spending on compliance work that could soon be simplified or postponed.
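As a starting point for that mapping exercise, here is a minimal, hypothetical inventory sketch. The class, field names, and action strings are illustrative shorthand for this article’s summary, not legal categories lifted from the Act’s text:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    risk_class: str            # e.g. "high-risk-annex-iii", "limited", "minimal"
    uses_sensitive_data: bool  # relevant to the proposed Art. 4a bias-testing rules
    generates_content: bool    # relevant to the Art. 50 labeling duties

def review_actions(s: AISystem) -> list[str]:
    """Flag which Omnibus-era workstreams (as summarized in this article) may apply."""
    actions = []
    if s.risk_class.startswith("high-risk"):
        actions.append("Track standards availability; plan for Dec 2027/Aug 2028 backstops")
    if s.uses_sensitive_data:
        actions.append("Document the 'necessary' basis and safeguards for bias testing (Art. 4a)")
    if s.generates_content:
        actions.append("Prepare machine-readable content labeling (Art. 50)")
    return actions or ["Monitor AI Office guidance; obligations likely light-touch"]

inventory = [
    AISystem("cv-screening-tool", "high-risk-annex-iii", True, False),
    AISystem("marketing-copy-bot", "limited", False, True),
]
for system in inventory:
    print(system.name, "->", review_actions(system))
```

A real inventory would track far more (deployment context, data flows, responsible national authorities), but even a table this small makes compliance gaps visible before the deadlines firm up.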

Looking forward, the Omnibus forms just one piece of the EU’s digital strategy puzzle. It also repeals aging laws (like the Platform-to-Business Regulation) that overlap with newer acts, and it unifies data rules: several data-sharing laws (Data Governance Act, Open Data Directive, etc.) are folded into a single updated Data Act framework. The overall goal is a more coherent “digital rulebook” for Europe, with AI governance as a core part.

Key takeaways: The EU is fine-tuning its AI law to reduce friction while maintaining safety. High-risk AI deadlines are extended, bias-detection data use is eased, and small companies get more help – but enforcement shifts more power to Brussels and safeguards like AI literacy are loosened. These changes reflect industry demands and political debate. As one analyst noted, Europe still wants strong AI rules, but it also wants them to be workable. The coming months of legislative wrangling will determine exactly how far the AI Act is reined in or reimagined.

For more on how global players are navigating AI innovation, see our other updates. Tech companies and chipmakers are racing ahead: for example, read about AMD’s high-performance AI hardware and Microsoft’s evolving Copilot features in our recent posts. Also, check out our AI Regulation News Today – Key Trends and AI Regulation News Today – US Updates for a broader perspective on how different regions are tackling AI policy. Europe’s AI Act may be in flux, but one thing is sure: we’ll keep tracking every important development in AI law and technology as it unfolds.

FAQs

1. What is the EU AI Act?

The EU AI Act is Europe’s landmark regulation designed to ensure safe, transparent, and trustworthy AI systems across all sectors.

2. What is the latest news about the EU AI Act?

Current updates focus on Omnibus reforms, pressure from Big Tech, and new EU reviews aimed at simplifying requirements for startups.

3. Why is the EU AI Act being revised?

Revisions are driven by innovation concerns, lobbying from tech companies, and EU member states pushing for clearer, lighter compliance rules.

4. When will the updated EU AI Act take effect?

The Omnibus amendments are still a proposal, with adoption expected by late 2026. Under the proposed timeline, high-risk AI rules would apply by December 2027 (Annex III) or August 2028 (Annex I) at the latest.

5. How will the EU AI Act affect AI companies?

Companies must follow stricter transparency and risk-management standards, but simplified rules may ease compliance for smaller firms.

6. Does the EU AI Act apply to non-EU companies?

Yes. Any company offering AI services or products within the EU market must comply, regardless of where it is based.

7. What are high-risk AI systems under the Act?

High-risk systems include AI used in healthcare, employment, law enforcement, and critical infrastructure, requiring tight oversight.

8. Why is Meta under EU investigation right now?

The EU launched a probe into Meta’s plan to block third-party AI bots on WhatsApp, citing potential harm to competition.

9. How does the EU AI Act impact consumers?

Consumers gain stronger data protection, transparency rights, and safeguards against harmful or deceptive AI systems.

10. Will the EU AI Act slow down AI innovation?

The EU says the Act will protect innovation by ensuring fair competition, though some tech companies argue compliance may increase costs.

Conclusion: What the EU AI Act News Means Going Forward

The latest updates to the EU AI Act show that Europe is actively reshaping its AI rules to balance safety, competition, and innovation. With pressure from Big Tech, demands from EU startups, and new antitrust actions like the Meta probe, the EU is signaling a firm but flexible approach. As reforms continue, businesses operating in the AI space must stay alert to compliance changes, while users can expect stronger protections and more transparent AI systems. The coming months will define how Europe leads globally in safe, competitive, and responsible AI development.
