The EU AI Act has been in force since August 2024, and its obligations take full effect in stages through 2027. For German enterprises this means AI governance is no longer an optional discipline but a legal obligation, and at the same time a strategic opportunity to build trust with customers, partners and regulators. This article explains the key requirements and presents a practical implementation framework.

What Is the EU AI Act? Key Terms

EU AI Act (EU Artificial Intelligence Act)
The world's first comprehensive legal framework for artificial intelligence. Regulation (EU) 2024/1689 applies to all AI systems placed on the market or put into service in the EU, regardless of where the provider is based. It follows a risk-based approach: the higher the risk of an AI system, the stricter the requirements.
Risk Classes
The EU AI Act distinguishes four risk classes: (1) Unacceptable risk — prohibited (e.g. social scoring, biometric mass surveillance). (2) High risk — strict requirements (documentation, testing, human oversight). (3) Limited risk — transparency obligations (e.g. chatbots must identify themselves as AI). (4) Minimal risk — no specific requirements (e.g. AI spam filters).
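The four classes above can be sketched as a small lookup table. This is purely illustrative: the use-case names and their mapping are assumptions for the example, not a legal classification, which must follow Annex III of the regulation and legal review.

```python
from enum import Enum

class RiskClass(Enum):
    """The four risk classes of the EU AI Act."""
    UNACCEPTABLE = "prohibited"
    HIGH = "strict requirements"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific requirements"

# Illustrative examples only; a real classification requires
# an Annex III assessment and legal review.
EXAMPLES = {
    "social_scoring": RiskClass.UNACCEPTABLE,
    "cv_screening": RiskClass.HIGH,
    "customer_chatbot": RiskClass.LIMITED,
    "spam_filter": RiskClass.MINIMAL,
}

def classify(use_case: str) -> RiskClass:
    """Look up a known example; anything unknown needs a formal assessment."""
    if use_case not in EXAMPLES:
        raise KeyError(f"{use_case!r} requires a formal risk assessment")
    return EXAMPLES[use_case]
```

Treating the taxonomy as data, with unknown cases raising an error rather than defaulting to a class, mirrors the governance principle that no AI use case goes unassessed.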
General-Purpose AI (GPAI)
Large foundation models such as Anthropic Claude or Meta Llama fall under the GPAI rules. Their providers must fulfil transparency and documentation obligations, including a summary of training content. Enterprises that use GPAI models (as deployers) have their own obligations at the application layer.

EU AI Act Timeline: What Applies When?

EU AI Act implementation deadlines
Date | Regulation | Affected parties
Aug. 2024 | Regulation enters into force | All
Feb. 2025 | Prohibitions on unacceptable-risk AI apply | All enterprises
Aug. 2025 | GPAI rules apply | Foundation model providers and their deployers
Aug. 2026 | High-risk requirements (Annex III) fully applicable | Providers and deployers of high-risk AI
Aug. 2027 | High-risk requirements for AI in regulated products (Annex I) | Specific sectors
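For internal compliance tooling, the timeline can be encoded as data. A minimal sketch; the dates below are pinned to the first of the month for simplicity, while the exact in-force days in the Official Journal differ by a day or two.

```python
from datetime import date

# Milestones from the table above (month-level approximation).
MILESTONES = [
    (date(2024, 8, 1), "Regulation enters into force"),
    (date(2025, 2, 1), "Prohibitions on unacceptable risk apply"),
    (date(2025, 8, 1), "GPAI rules apply"),
    (date(2026, 8, 1), "High-risk requirements (Annex III) fully applicable"),
    (date(2027, 8, 1), "High-risk requirements for Annex I products apply"),
]

def applicable(on: date) -> list[str]:
    """Return every milestone already in effect on the given date."""
    return [label for when, label in MILESTONES if when <= on]
```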

High-Risk AI: What Falls into This Category?

Many enterprise use cases fall into the high-risk category, and enterprises should assess their systems candidly against Annex III:

  • Human resources: AI for screening applications, performance evaluation, promotion decisions
  • Credit: AI-powered credit scoring and risk assessment systems
  • Education: AI for evaluating examination performance or course admissions
  • Critical infrastructure: AI in energy, water and transport systems
  • Law enforcement: AI for predicting crime or assessing risk of individuals
  • Border management: AI for assessing traveller risk

Internal chatbots for knowledge search or code generation with Amazon Q Developer generally do not fall into the high-risk category; only the transparency obligations of limited risk apply there.

Obligations for High-Risk AI Providers and Deployers

  1. Technical documentation (provider): Complete documentation of the AI system including training data, evaluation methods and known limitations
  2. Conformity assessment (provider): Conduct an assessment before placing the system on the market or putting it into service, internally or via a notified body
  3. Human oversight (deployer): Implement mechanisms for human supervision and intervention, following the provider's instructions for use
  4. Logging (deployer): Automatically generated event logs for AI decisions, retained for at least six months
  5. Transparency to users (provider & deployer): Clear information when an AI system makes or substantially contributes to a decision
  6. Post-market monitoring (provider): Monitoring of the system in the field and reporting of serious incidents to the authorities

Internal AI Policies: What Enterprises Should Build Today

Regardless of the EU AI Act, an internal AI policy is good governance practice. Core elements:

  • AI usage policy: Which AI tools are approved? What data may be entered into which systems?
  • Use case approval process: Which AI applications require formal approval from Legal, IT Security and Data Protection?
  • Accountability: Who is responsible for AI systems — technically (AI owner) and functionally (business owner)?
  • Quality assurance: Regular evaluation of AI outputs for drift, bias and quality
  • Incident management: What happens when an AI system produces incorrect or harmful outputs?
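An AI usage policy is most enforceable when it is expressed as data rather than prose. A minimal sketch; the tool names and data classes are illustrative placeholders, not a recommendation for any specific product.

```python
# Which data classes may be entered into which AI tools.
APPROVED_DATA = {
    "internal_chatbot": {"public", "internal"},
    "code_assistant": {"public", "internal"},
    "external_saas_llm": {"public"},
}

def is_allowed(tool: str, data_class: str) -> bool:
    """True only if the policy explicitly approves the combination;
    unknown tools are denied by default."""
    return data_class in APPROVED_DATA.get(tool, set())
```

Deny-by-default for unlisted tools is the governance point: a new AI service must pass the approval process before any enterprise data may flow into it.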

Frequently Asked Questions about the EU AI Act

Which enterprises are affected by the EU AI Act?
All enterprises that deploy or make AI systems available in the EU, regardless of where they are headquartered. DACH enterprises sourcing AI services from the US or Asia must also ensure compliance.
What are high-risk AI systems under the EU AI Act?
High-risk AI systems include AI in HR (recruiting, performance assessment), AI for credit decisions, AI in law enforcement, AI in education and AI for critical infrastructure. Strict requirements for documentation, testing and human oversight apply to these systems.
How does the EU AI Act relate to GDPR?
The EU AI Act complements GDPR. While GDPR governs the protection of personal data, the EU AI Act addresses the risks of AI systems independently of data protection. Enterprises must comply with both frameworks in parallel.
What are the penalties for violating the EU AI Act?
Fines of up to EUR 35 million or 7% of global annual turnover (whichever is higher) for violations of the most serious prohibitions. For high-risk violations: up to EUR 15 million or 3% of turnover.
Does Amazon Bedrock help with EU AI Act compliance?
Amazon Bedrock supports compliance with built-in Guardrails (content filtering, PII redaction, audit logging) and lays groundwork for technical compliance requirements. Full compliance responsibility, however, rests with the deployer — the enterprise using the AI system.

Request AI Governance Consulting

Storm Reply helps DACH enterprises achieve EU AI Act compliance and build internal AI governance structures — practical and legally sound.
