On this page you will find detailed information about the EU Artificial Intelligence Act (AI Act).
What is the EU AI Act?
The EU Artificial Intelligence Act is the world’s first comprehensive, horizontal law for AI systems placed on the EU market. It follows a risk-based approach: the higher the risk an AI system poses to health, safety, or fundamental rights, the tighter the obligations. The Act was published in the EU Official Journal on 12 July 2024 and entered into force on 1 August 2024.
From there, requirements phase in over several years. Under the EU’s implementation timeline, prohibited AI practices apply 6 months after entry into force; codes of practice are targeted at 9 months; General-Purpose AI (GPAI) rules apply at 12 months; and most high-risk system duties apply at 24–36 months.
The Risk Tiers (at a glance)
- Unacceptable risk (prohibited)
Certain use cases are banned outright once the prohibition start date kicks in (e.g., specific forms of manipulative or exploitative AI and some types of remote biometric identification, subject to narrow exemptions). Non-compliance with bans carries the heaviest fines.
- High risk
“High-risk” AI systems—think safety components in regulated products or systems used in sensitive domains (e.g., critical infrastructure, employment, access to essential services)—must meet stringent requirements on risk management, data governance, documentation, logging, human oversight, robustness, and post-market monitoring. Many of these duties start 24 months after entry into force for Annex III use cases and 36 months for those tied to product-safety legislation in Annex I.
- Limited risk
Transparency duties apply (e.g., users should know they’re interacting with AI or synthetic content). Providers still need to assess and mitigate foreseeable risks.
- Minimal risk
Most everyday AI tools fall here (e.g., spam filters). The Act doesn’t add mandatory obligations beyond baseline EU law, but the EU encourages codes of practice and voluntary good governance.
What’s new in 2025?
GPAI (foundation model) guidance and timelines. The Commission has issued Q&A guidance and clarifications for General-Purpose AI—covering core obligations (technical documentation, copyright compliance, transparency about training data summaries) and extra duties for GPAI models with “systemic risk.” Systemic-risk models face deeper evaluations, adversarial testing, incident reporting, and cybersecurity controls.
Regulatory and industry updates through mid-2025 emphasize that GPAI obligations begin one year after entry into force (August 2025), with further expectations for systemic-risk models thereafter. Press coverage notes debate over readiness—including calls for delays and an evolving Code of Practice to help companies comply—yet the Commission has signaled it is pushing ahead while publishing additional guidance.
By August 2025, a new wave of obligations and the penalty regime began applying, making enforcement real for providers and deployers that fall within the early tranches.
Who is in scope?
- Providers (developers) that place AI systems on the EU market;
- Deployers (users) operating AI systems in the EU;
- Importers, distributors, authorized representatives in the EU supply chain;
- GPAI model providers whose models can be integrated downstream in many applications.
The Act applies extraterritorially in practice: if your AI system or GPAI model ends up on the EU market or is used in the EU, the rules can reach you.
What counts as GPAI—and “systemic risk”?
GPAI models (often called “foundation models”) are capable of serving a wide range of purposes and can be fine-tuned or deployed by others in downstream systems. The Act draws a line for systemic-risk GPAI, covering models that meet a training-compute threshold on the order of 10²⁵ FLOPs (or that are otherwise designated based on significant impact). Those models must implement additional risk management, evaluation, red-teaming, and reporting.
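To make that threshold concrete, here is a minimal sketch of how a provider might sanity-check a model against the ~10²⁵ FLOP line, assuming the common 6 × parameters × tokens rule of thumb for dense transformer training compute. The heuristic and the example figures are illustrative assumptions, not something the Act prescribes, and the Commission can designate systemic-risk models on other grounds.

```python
# Rough sketch: estimating training compute against the ~1e25 FLOP
# systemic-risk threshold. The 6 * parameters * tokens approximation is a
# common rule of thumb for dense transformer training, not part of the Act;
# the figures below are purely illustrative.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(num_parameters: float, training_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * num_parameters * training_tokens

def may_be_systemic_risk(num_parameters: float, training_tokens: float) -> bool:
    """Indicative check only; designation can also be based on impact."""
    return estimate_training_flops(num_parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Example: a hypothetical 70B-parameter model trained on 15T tokens.
flops = estimate_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("Above 1e25 threshold:", may_be_systemic_risk(70e9, 15e12))
```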
The Commission’s FAQs and guidelines clarify how obligations shift if you fine-tune an existing GPAI model: your duties focus on what changed (e.g., supplementing technical documentation).
Penalties
The AI Act’s fines are among the stiffest in EU tech regulation; a worked example of the “whichever is higher” rule follows the list:
- Up to €35 million or 7% of global annual turnover (whichever is higher) for infringements related to prohibited practices;
- Up to €15 million or 3% for other violations (e.g., obligations for high-risk systems);
- Member States must ensure penalties are effective, proportionate, and dissuasive, with attention to SMEs and startups.
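As a rough illustration of the “whichever is higher” rule, the sketch below computes the maximum exposure for a hypothetical company with €2bn in global annual turnover. The turnover figure is invented, and actual fines are set case by case by national authorities and can fall well below these ceilings.

```python
# Minimal sketch of the "whichever is higher" fine ceilings described above.
# The turnover figure is hypothetical; real fines are decided case by case.

def max_fine(global_annual_turnover_eur: float, fixed_cap_eur: float, turnover_share: float) -> float:
    """Upper bound of the fine: the greater of the fixed cap or the turnover share."""
    return max(fixed_cap_eur, turnover_share * global_annual_turnover_eur)

turnover = 2_000_000_000  # hypothetical €2bn global annual turnover

prohibited_cap = max_fine(turnover, 35_000_000, 0.07)  # €35m or 7%, whichever is higher
other_cap = max_fine(turnover, 15_000_000, 0.03)       # €15m or 3%, whichever is higher

print(f"Prohibited practices, max exposure: €{prohibited_cap:,.0f}")  # €140,000,000
print(f"Other violations, max exposure:     €{other_cap:,.0f}")       # €60,000,000
```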
Key compliance dates you should track
- 1 Aug 2024 — Entry into force (clock starts; no obligations yet).
- ~6 months later — Prohibitions become applicable.
- ~9 months — Codes of practice targeted (non-binding but influential).
- ~12 months (Aug 2025) — GPAI obligations begin; guidance for systemic-risk models continues to roll out.
- ~24 months (Aug 2026) — High-risk (Annex III) obligations generally apply.
- ~36 months (Aug 2027) — High-risk tied to product-safety law (Annex I) apply.
(Exact “month + date” depends on the Official Journal publication and the “20 days after publication” entry-into-force rule; see the OJ and Commission timeline for precision.)
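For planning purposes, the approximate milestones above can be derived mechanically from the 1 August 2024 entry-into-force date. The sketch below does exactly that with a small month-offset helper; it is illustrative only, and exact dates should always be confirmed against the Official Journal.

```python
# Sketch: deriving the headline application dates from the 1 August 2024
# entry into force, using the month offsets listed above.

from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    """Add whole calendar months to a date (day kept as-is; fine for the 1st)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

MILESTONES = {
    "Prohibitions apply": 6,
    "Codes of practice targeted": 9,
    "GPAI obligations apply": 12,
    "High-risk (Annex III) obligations apply": 24,
    "High-risk tied to Annex I product law applies": 36,
}

for label, months in MILESTONES.items():
    print(f"{add_months(ENTRY_INTO_FORCE, months):%d %b %Y} - {label}")
```

Running this reproduces the February 2025, May 2025, August 2025, August 2026, and August 2027 milestones listed above.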
What providers should be doing now
1) Map your portfolio
Inventory models and systems: which are GPAI, which are high-risk, which are limited/minimal risk? Capture origin (own vs third-party), deployment contexts, and EU touchpoints.
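A lightweight way to start is a structured register. The sketch below shows one possible inventory record in Python; the field names, the RiskTier enum, and the example entry are illustrative assumptions, not terms defined by the Act.

```python
# A minimal sketch of an AI-system inventory entry, assuming an internal
# register kept in Python; all field names are illustrative.

from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    provider: str                 # own build or third-party vendor
    is_gpai: bool                 # general-purpose / foundation model?
    risk_tier: RiskTier
    deployment_contexts: list[str] = field(default_factory=list)
    eu_touchpoints: bool = False  # placed on the EU market or used in the EU?

inventory = [
    AISystemRecord(
        name="cv-screening-assistant",
        provider="third-party",
        is_gpai=False,
        risk_tier=RiskTier.HIGH,  # employment is an Annex III domain
        deployment_contexts=["recruitment"],
        eu_touchpoints=True,
    ),
]

# Quick view of what falls into the early compliance tranches.
high_risk = [r.name for r in inventory if r.risk_tier is RiskTier.HIGH and r.eu_touchpoints]
print("High-risk systems with EU exposure:", high_risk)
```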
2) Build the technical file
For GPAI and high-risk systems, prepare technical documentation that explains model purpose, training data summaries (for GPAI), risk management, testing, performance metrics, and lifecycle controls. The Commission’s GPAI Q&A explains documentation expectations and how they change when you fine-tune third-party models.
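One way to keep the technical file auditable is to track its sections as structured data. The sketch below is a minimal checklist kept as a Python dictionary; the section names paraphrase the documentation themes above and are assumptions, not the Act's exact annex wording.

```python
# A minimal sketch of a GPAI technical-file checklist kept as structured data;
# section names are paraphrased assumptions, not the Act's annex headings.

technical_file = {
    "model_purpose": "general-purpose text generation, integrated downstream",
    "training_data_summary": "public summary of data sources and curation",
    "copyright_compliance": "process for respecting rights reservations",
    "risk_management": "identified risks, mitigations, residual-risk rationale",
    "testing_and_metrics": "benchmark suites, adversarial tests, results",
    "lifecycle_controls": "versioning, update policy, post-market monitoring",
}

missing = [section for section, content in technical_file.items() if not content]
print("Sections still to draft:", missing or "none")
```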
3) Operationalize risk management & evaluations
Stand up processes for model evaluations, red-teaming/adversarial testing, robustness and cybersecurity, bias and safety testing, and serious-incident reporting (especially for systemic-risk GPAI).
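A skeleton of such a workflow might look like the sketch below, which runs a placeholder red-team suite and captures the basic facts of a serious incident. run_red_team_suite and record_serious_incident are hypothetical in-house functions, not a real library, and the reporting fields are illustrative.

```python
# Skeleton sketch of an evaluation-and-incident workflow, assuming in-house
# tooling; the functions below are placeholders, not an existing library.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class EvalResult:
    suite: str        # e.g. "adversarial-prompts", "bias-benchmark"
    passed: bool
    details: str

def run_red_team_suite(model_id: str) -> list[EvalResult]:
    """Placeholder: would call your internal evaluation harness."""
    return [EvalResult(suite="adversarial-prompts", passed=True, details="no jailbreaks found")]

def record_serious_incident(model_id: str, description: str) -> dict:
    """Capture the facts needed for a serious-incident report to authorities."""
    return {
        "model_id": model_id,
        "description": description,
        "detected_at": datetime.now(timezone.utc).isoformat(),
        "status": "open",
    }

results = run_red_team_suite("gpai-model-v2")
failures = [r for r in results if not r.passed]
if failures:
    incident = record_serious_incident("gpai-model-v2", failures[0].details)
    print("Incident logged for follow-up:", incident)
else:
    print("All evaluation suites passed; archive results in the technical file.")
```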
4) Human oversight & transparency
Design interfaces and workflows so humans can understand, intervene, and override where required; provide clear user information and AI-interaction disclosures in limited-risk contexts.
5) Contracts & supply chain
Update agreements with upstream model providers and downstream deployers: who does what (documentation, testing, incident reporting), and how updates propagate. Ensure sub-processors/vendors meet security and transparency expectations.
6) Governance & accountability
Appoint accountable owners, create cross-functional review (legal, security, data, product), and align with ISO/IEC 42001 (AI management) and existing ISO 27001 practices where useful.
What deployers (users) should be doing
- Classify use cases and check whether they land in Annex III high-risk domains;
- Perform fundamental-rights impact assessments where required;
- Keep logs, monitoring, and human-in-the-loop safeguards;
- Ensure data governance, including quality and representativeness for the intended context;
- Keep records of model provenance, especially when stacking GPAI + fine-tuning (see the record-keeping sketch after this list);
- Prepare to report serious incidents and cooperate with authorities.
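As one way to cover the logging and provenance points in the list above, here is a minimal deployer-side sketch: a provenance record plus an append-only log of human-oversight events. All names, fields, and values are illustrative assumptions rather than terms defined in the Act.

```python
# Minimal sketch of deployer-side record keeping: model provenance plus a log
# of human-oversight events. All field names and values are illustrative.

import json
from datetime import datetime, timezone

provenance = {
    "system": "cv-screening-assistant",
    "base_model": "gpai-model-v2",            # upstream GPAI provider's model
    "fine_tuned_by": "our-org",               # who changed what, and when
    "fine_tune_version": "2025-03-rc1",
    "upstream_documentation_ref": "provider-tech-file-v4",
}

def log_oversight_event(decision_id: str, reviewer: str, action: str) -> str:
    """Append a JSON line recording a human review of an AI-assisted decision."""
    event = {
        "decision_id": decision_id,
        "reviewer": reviewer,
        "action": action,  # e.g. "approved", "overridden", "escalated"
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    line = json.dumps(event)
    with open("oversight_log.jsonl", "a", encoding="utf-8") as f:
        f.write(line + "\n")
    return line

print(json.dumps(provenance, indent=2))
print(log_oversight_event("appl-10021", "hr-reviewer-7", "overridden"))
```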
FAQs we’re hearing in 2025
Does the Act apply to companies headquartered outside the EU?
If you place AI on the EU market or your system is used in the EU, the Act likely applies—even if you’re headquartered elsewhere.
Are open-source models exempt?
Open-source model availability can benefit from certain facilitation, but once models are placed on the market or integrated into products/services, relevant duties may apply—especially for systemic-risk GPAI or high-risk use-cases. Check the Act text and Commission guidance.
Are the penalties already enforceable?
Yes—the penalty regime is live with upper ranges of €35m/7% for prohibited practices, and €15m/3% for other serious violations. Authorities are building capacity via national competent bodies and labs; August 2025 was a key step-up moment.
What is the status of the GPAI Code of Practice?
A voluntary Code of Practice for GPAI is being developed to give companies practical guardrails. Public reporting in 2025 notes delays and industry calls for more time, but the Commission is publishing guidance to help firms meet near-term obligations.
Conclusion
The EU AI Act is here and live—with obligations staggered through 2025–2027. In 2025, the spotlight is on GPAI providers, kick-starting documentation, transparency, copyright compliance, and (for systemic-risk models) robust evaluation and incident reporting. High-risk systems have a longer runway but require deeper, product-grade governance.
If you build or use AI that touches the EU, start now: inventory systems, tighten documentation and testing, align contracts, and stand up governance. The companies that treat the Act as a product-quality discipline—not just a legal hurdle—will move faster and more safely in the long run.
Disclaimer
The information and services on this website are not intended to and shall not be used as legal advice. You should consult a Legal Professional for any legal or solicited advice. While we act in good faith, conduct our own independent research for the information listed on the website, and do our best to ensure that the data provided is accurate, we do not guarantee that it is accurate and make no representation or warranty of any kind, express or implied, regarding the accuracy, adequacy, validity, reliability, availability, or completeness of any information on the Site. UNDER NO CIRCUMSTANCES SHALL WE HAVE ANY LIABILITY TO YOU FOR ANY LOSS OR DAMAGE OF ANY KIND INCURRED AS A RESULT OF, OR RELIANCE ON, ANY INFORMATION PROVIDED ON THE SITE. YOUR USE OF THE SITE AND YOUR RELIANCE ON ANY INFORMATION ON THE SITE IS SOLELY AT YOUR OWN RISK. Comments on this website are the sole responsibility of their writers, so the accuracy, completeness, veracity, honesty, factuality and politeness of comments are not guaranteed.
So friends, today we talked about the EU Artificial Intelligence Act (AI Act); we hope you liked our post.
If you liked this information about the EU Artificial Intelligence Act (AI Act), then definitely share this article with your friends.