[Image: eu-ai-act-hero]

EU AI Act: What Companies Need to Know and Do Now

The grace period is over — what applies now and what must be in place by August 2026

The End of the Grace Period

For many companies, the EU AI Act seemed like a distant date on the calendar. But reality has caught up with the transition periods: since February 2025, the first prohibitions and AI literacy requirements have been legally binding; since August 2025, further rules for general-purpose AI models and governance structures have applied; and from August 2026, the remaining provisions for high-risk systems take effect [1, 2]. Anyone who hasn't acted yet has little time left.

This article provides a practical overview: What does the law regulate? Which deadlines apply when? And what does this mean for your company?

What Is the EU AI Act?

The EU AI Act (officially: Regulation (EU) 2024/1689) is the world's first comprehensive law regulating artificial intelligence [3]. It applies directly in all EU member states — no national transposition required. It covers all companies that develop, deploy, import, or distribute AI systems, regardless of industry or company size [1, 4].

Companies outside the EU are also affected if their AI systems are used in Europe; like the GDPR, the law has extraterritorial reach [5]. Its goal is twofold: protecting the fundamental rights, health, and safety of citizens on one hand, and fostering innovation and trust in AI technologies on the other [3]. It follows a risk-based approach: the higher the potential risk of an AI system, the stricter the requirements [4, 6].

The Phased Timeline: What Has Applied Since When?

[Image: eu-ai-act-timeline]

A common misconception is that the EU AI Act "only starts in 2026." In fact, it has been in force since August 1, 2024 — and has been taking effect in stages ever since [2, 3].

August 1, 2024 – Entry into force. The EU AI Act was published in the Official Journal of the European Union on July 12, 2024 and entered into force 20 days later [7].

February 2, 2025 – First binding obligations. Prohibitions on unacceptable AI practices (including social scoring, covert manipulation, and biometric mass surveillance in public spaces) are now fully in force. At the same time, the AI literacy obligation under Art. 4 applies: companies must ensure their staff have sufficient competence in handling AI systems [6, 8]. The law prescribes neither a specific number of hours nor a particular certification, but it requires companies to actively take and document appropriate measures [9].

August 2, 2025 – Second phase. Binding requirements for general-purpose AI models (GPAI) take effect: documentation obligations, transparency on training data, risk assessments. These obligations target the developers of such models, not companies that merely use them [2, 10]. National supervisory authorities also begin operations; sanctions for already applicable areas of the AI Act become enforceable for the first time [10]. In Germany, the Bundesnetzagentur is designated as the central market surveillance authority — the required national implementation law (KI-MIG) was approved by the Federal Cabinet on February 11, 2026 and still needs to pass through parliamentary proceedings [11, 12].

August 2, 2026 – Full application. The core obligations for high-risk AI systems take full effect: risk management system, technical documentation, CE marking, human oversight. Transparency obligations under Art. 50 (labeling of AI interactions, deepfakes, biometric categorization) also become binding [1, 3].

August 2, 2027 – Art. 6(1) takes effect. High-risk AI systems embedded as safety components in regulated products (e.g., medical devices, vehicles, aviation) become fully subject to the high-risk obligations from this date [2, 3].
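The phased dates above can be turned into a simple lookup. A minimal sketch in Python (the dates come from the timeline above; the labels are informal shorthand, not legal definitions):

    from datetime import date

    # Application phases, taken from the timeline above (labels are shorthand)
    PHASES = [
        (date(2024, 8, 1), "Regulation in force (no obligations applicable yet)"),
        (date(2025, 2, 2), "Prohibited practices banned; AI literacy duty (Art. 4)"),
        (date(2025, 8, 2), "GPAI obligations; national supervision; sanctions enforceable"),
        (date(2026, 8, 2), "High-risk obligations; transparency duties (Art. 50)"),
        (date(2027, 8, 2), "High-risk rules for AI embedded in regulated products (Art. 6(1))"),
    ]

    def applicable_phases(today: date) -> list[str]:
        """Return every phase whose start date has already passed."""
        return [label for start, label in PHASES if start <= today]

    for label in applicable_phases(date(2026, 3, 1)):
        print("-", label)

Run with a date of March 1, 2026, the lookup lists the first three phases, which is the state of play at the time of writing.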

The Risk Model: Four Tiers, Four Regulatory Levels

[Image: eu-ai-act-pyramide]

The core of the EU AI Act is its risk classification. Not every AI application is subject to the same requirements — what matters is the context and the risk potential of a system's deployment [4, 6].

1. Unacceptable Risk — Prohibited

These systems have been fully prohibited since February 2025. They include [6, 8]:

  • Social scoring systems (whether operated by public authorities or private actors)
  • Manipulative AI that uses subliminal techniques to influence people
  • Real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions)
  • Emotion recognition in the workplace or educational institutions

2. High-Risk AI — Strict Obligations

This is the most relevant category for most companies. High-risk systems fall into two groups: AI in safety-critical products (e.g., medical devices, vehicle safety) and AI in specific application areas under Annex III, including [1, 3]:

  • Recruiting and HR decisions
  • Creditworthiness and credit scoring
  • Education and examinations
  • Biometric identification
  • Critical infrastructure

From August 2026, comprehensive requirements apply to these systems: risk management system, data governance, technical documentation, human oversight, and a functioning quality management system [1, 13].

3. Limited Risk — Transparency Obligations

This tier covers chatbots, deepfakes, and other AI systems that carry an increased risk of deception. From August 2026, users must be able to clearly recognize that they are interacting with an AI (Art. 50) [6, 8].

4. Minimal Risk — No Special Requirements

The majority of AI systems currently in use fall into this category, for example AI-based spam filters in email clients or simple recommendation algorithms [4]. The law recommends voluntary codes of conduct but does not impose any requirements [1].

What Do Companies Need to Do?

Which requirements apply depends on two factors: the risk class of the AI system in use and the role of the company — whether it acts as a provider (develops and markets AI) or as a deployer (uses third-party AI systems in its own operations) [1, 13].

For All Companies — Immediately Applicable Law

Review prohibited practices (since Feb. 2025): Ensure that no deployed system falls under the prohibited practices per Art. 5 [6, 8].

Ensure AI literacy (Art. 4, since Feb. 2025): The law does not prescribe a number of hours or certification. However, companies must be able to document that they have taken measures — depending on the role (developer, decision-maker, user) and risk level of the deployed systems [9, 12].
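What such documentation can look like is not prescribed. A minimal sketch, assuming the company simply keeps a structured training log (all field names and the example entry here are hypothetical):

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class LiteracyMeasure:
        """One documented AI literacy measure under Art. 4 (illustrative fields)."""
        audience: str       # e.g. "recruiting team", "developers", "all staff"
        role_profile: str   # developer / decision-maker / user
        risk_context: str   # which AI systems the audience works with
        description: str    # what was done: training, guideline, e-learning, ...
        completed_on: date
        evidence: str       # where proof is stored (attendance list, certificate)

    # Example entry: a workshop for recruiters who use an AI screening tool
    log = [
        LiteracyMeasure(
            audience="Recruiting team",
            role_profile="user",
            risk_context="AI-assisted CV screening (potentially high-risk, Annex III)",
            description="Half-day workshop on capabilities, limits, and bias risks",
            completed_on=date(2025, 3, 14),
            evidence="compliance/ai-literacy/2025-03-14-attendance.pdf",
        )
    ]
    print(f"{len(log)} documented measure(s)")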

As a Necessary Foundation for All

Since the EU AI Act is based on the principle of self-classification, every company must assess for itself which obligations apply [13]. Two steps form the foundation (a first-pass triage sketch follows the list):

  • Create an AI inventory: Catalog all AI systems in use, including purchased tools such as Microsoft Copilot or ChatGPT.
  • Perform risk classification: Check whether systems fall under Annex III — especially in HR, credit decisions, and processes with a direct impact on individuals.
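A minimal sketch of what such a first pass could look like in Python; the keyword mapping and the example inventory are hypothetical, and the actual legal classification always requires case-by-case analysis:

    from enum import Enum

    class RiskTier(Enum):
        # UNACCEPTABLE is listed for completeness; banned practices must not be deployed at all
        UNACCEPTABLE = "prohibited (Art. 5)"
        HIGH = "high-risk (Art. 6 / Annex III)"
        LIMITED = "transparency obligations (Art. 50)"
        MINIMAL = "no special requirements"

    # Hypothetical keyword list; a real classification needs legal review
    ANNEX_III_AREAS = {
        "recruiting", "credit scoring", "education",
        "biometric identification", "critical infrastructure",
    }

    def triage(use_case: str, human_facing: bool) -> RiskTier:
        """Very rough first-pass classification of one inventory entry."""
        if use_case in ANNEX_III_AREAS:
            return RiskTier.HIGH
        if human_facing:  # e.g. a customer-facing chatbot
            return RiskTier.LIMITED
        return RiskTier.MINIMAL

    inventory = [
        ("CV pre-screening tool", "recruiting", False),
        ("Support chatbot", "customer service", True),
        ("Spam filter", "email filtering", False),
    ]

    for name, use_case, human_facing in inventory:
        print(f"{name}: {triage(use_case, human_facing).value}")

A triage like this can only flag candidates for review; it does not replace the legal assessment, especially for borderline systems that touch several tiers at once.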

For High-Risk AI Systems Only (from August 2026)

Providers bear the greatest responsibility; deployers have graduated but binding obligations [1, 13]. The core duties (a readiness sketch follows the list):

  • Technical documentation: Purpose, functionality, architecture, and training methods of the system
  • Risk management system: Continuous identification and mitigation of risks
  • Data quality: Training and test data must be representative and checked for bias
  • Human oversight: Qualified personnel must be able to monitor the system and intervene
  • Conformity assessment: High-risk AI must in certain cases be certified by independent notified bodies [12]
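One way a provider or deployer could track readiness against these duties is a simple artifact checklist. A sketch under the assumption that each duty maps to one documented artifact; the artifact names are illustrative, not terms prescribed by the regulation:

    # Hypothetical readiness check against the duties listed above
    REQUIRED_ARTIFACTS = {
        "technical_documentation": "purpose, design, training methods (Art. 11 / Annex IV)",
        "risk_management_file": "continuous risk identification and mitigation (Art. 9)",
        "data_governance_report": "data quality and bias checks (Art. 10)",
        "human_oversight_concept": "monitoring and intervention procedures (Art. 14)",
        "conformity_assessment": "assessment before placing on the market (Art. 43)",
    }

    def readiness_report(available: set[str]) -> None:
        """Print which required artifacts are present and which are missing."""
        for artifact, meaning in REQUIRED_ARTIFACTS.items():
            status = "OK     " if artifact in available else "MISSING"
            print(f"[{status}] {artifact}: {meaning}")

    readiness_report({"technical_documentation", "human_oversight_concept"})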

For Specific System Types Only (Transparency Obligations, Art. 50)

Companies operating chatbots must clearly inform users that they are interacting with an AI; deepfakes and other synthetic content must be labeled as artificially generated; and people must be informed when biometric categorization systems are applied to them [6, 8]. These obligations do not apply universally to every AI application.
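How the labeling is implemented is left to the deployer. A minimal sketch for the chatbot case, assuming a hypothetical generate_reply backend standing in for whatever model is actually used:

    AI_DISCLOSURE = "Note: you are chatting with an AI assistant, not a human."

    def generate_reply(user_message: str) -> str:
        """Hypothetical placeholder for the actual model backend."""
        return f"(model answer to: {user_message!r})"

    def respond(user_message: str, first_turn: bool) -> str:
        reply = generate_reply(user_message)
        # Disclose at the start of the conversation so users can clearly
        # recognize the AI interaction before they rely on its answers.
        return f"{AI_DISCLOSURE}\n\n{reply}" if first_turn else reply

    print(respond("What are your opening hours?", first_turn=True))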

Penalties for Non-Compliance

The EU AI Act's sanctions are tiered by the severity of the violation; for an undertaking, the cap is in each case the higher of a fixed amount and a share of global annual turnover, while for SMEs the lower of the two applies [7, 9]. The three tiers (a short arithmetic sketch follows the list):

  • Violations of prohibited AI practices: up to EUR 35 million or 7% of global annual turnover
  • Violations of obligations for high-risk systems: up to EUR 15 million or 3% of annual turnover
  • False information to authorities: up to EUR 7.5 million or 1% of annual turnover
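As a quick arithmetic sketch, the effective cap for an undertaking is simply the maximum of the two values:

    def max_fine(fixed_cap_eur: int, turnover_share: float,
                 global_turnover_eur: int) -> float:
        """Upper bound of a fine under Art. 99: the higher of the two caps."""
        return max(fixed_cap_eur, turnover_share * global_turnover_eur)

    # Example: a prohibited-practice violation at a company with EUR 2 bn turnover
    cap = max_fine(35_000_000, 0.07, 2_000_000_000)
    print(f"Maximum fine: EUR {cap:,.0f}")  # EUR 140,000,000: the 7% cap dominates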

An early enforcement case shows that authorities mean business: the Bundesnetzagentur has already imposed initial fines, including against a medical AI provider that could not present adequate risk assessments [9]. Beyond financial sanctions, companies face delays in market launches for new products and loss of trust among customers and partners.

Special Case: Does It Also Apply to Third-Party AI Tools?

Even as a deployer of purchased AI systems, a company bears responsibility. When a system like Microsoft Copilot or an external recruiting tool is used in a high-risk context, the corresponding deployer obligations apply — regardless of who developed the model [1, 5]. Those who merely use GPAI models do not bear the provider obligations for those models, but they do bear responsibility for the specific deployment context [10].

The EU AI Act as an Opportunity

Viewing the EU AI Act solely as a regulatory burden is short-sighted. The European Commission explicitly sees it as an innovation framework: AI regulatory sandboxes allow companies — especially start-ups and SMEs — to test new AI systems under real-world conditions before launching them on the market [11, 12]. In Germany, the Bundesnetzagentur will operate at least one such sandbox. Companies that build governance structures now, document AI transparently, and establish clear responsibilities gain a competitive advantage: greater trust among customers and partners, and a more solid foundation for scaling AI projects further.

How AKARA Solutions Can Help

As a digital transformation consultancy, we guide companies through structured preparation for the EU AI Act: from the initial inventory of deployed AI systems through risk classification to establishing appropriate governance and documentation processes. We rely on GDPR-compliant, locally operated solutions — an approach that aligns seamlessly with the AI Act's requirements.

Want to know where your company stands today? Get in touch — we'll help you develop a realistic roadmap.

References

  1. European Commission (2024). "AI Act." Shaping Europe's Digital Future. digital-strategy.ec.europa.eu
  2. Alexander Thamm (2025). "The Roadmap to the EU AI Act: A Detailed Guide." alexanderthamm.com
  3. IHK Schleswig-Holstein (2025). "AI Act: What Companies Need to Know Now." ihk.de
  4. IJONIS (2026). "EU AI Act 2025: What German Companies Need to Know Now." ijonis.com
  5. Kiwop (2026). "EU AI Act 2026: Complete Guide for Companies." kiwop.com
  6. Hamburg Chamber of Commerce (2025). "EU AI Act — What Matters from August 2025 in Theory and Practice." handelskammer-hamburg.de
  7. Alexander Thamm (2025). Note: the AI Act was published in the Official Journal of the EU on July 12, 2024 and entered into force on August 1, 2024. alexanderthamm.com
  8. SRD Rechtsanwälte (2025). "AI Act from August 2, 2025: New Obligations & Sanctions." srd-rechtsanwaelte.de
  9. Anwalt.de (2025). "EU AI Act Enforcement 2025: New Obligations for German AI Developers." anwalt.de
  10. SRD Rechtsanwälte (2025). Note: obligations for GPAI providers (Chapter V AI Act) take effect on August 2, 2025. srd-rechtsanwaelte.de
  11. Federal Ministry for Digital and State Modernisation (2025). "Act on the Implementation of the AI Regulation (KI-MIG)." bmds.bund.de
  12. AI Act Akademie (2026). "AI Act Implementation Law: National AI Supervision." aiact-akademie.de
  13. Bitkom (2024). "Implementation Guide for the AI Regulation (EU) 2024/1689." bitkom.org

Disclaimer: This article is for general information purposes and does not constitute legal advice. All information is based on Regulation (EU) 2024/1689 and available guidelines, as of March 2026.