Research: Executive Guide to EU AI Act

As of February 2026, the global regulatory landscape for artificial intelligence has shifted from a period of anticipation to one of active enforcement. The European Union Artificial Intelligence Act (EU AI Act), the world’s first comprehensive legal framework for AI, is no longer a theoretical compliance target but an operational reality. For executives leading organizations in regulated sectors—finance, healthcare, education, nonprofit, and government—the implications are profound, immediate, and extraterritorial. This report provides an exhaustive analysis of the Act’s requirements, tailored specifically for leaders in Europe and North America who must navigate this complex new environment.

The transition into 2026 marks a critical inflection point. While the Act entered into force in August 2024, the staggered implementation timeline has now reached its most consequential phase. The prohibitions on “Unacceptable Risk” practices—such as social scoring and untargeted biometric scraping—are fully enforceable as of February 2025, carrying the threat of severe financial penalties and reputational damage.1 Simultaneously, the governance framework for General-Purpose AI (GPAI) models is active, and the deadline for “High-Risk” systems, originally slated for August 2026, faces potential adjustments under the proposed “Digital Omnibus” regulation, creating a dynamic and somewhat ambiguous planning horizon for enterprise leaders.2

For executives, the strategic imperative is to move beyond a checkbox compliance mentality. The EU AI Act is not merely a technical standard; it is a governance mandate that demands the integration of fundamental rights impact assessments, rigorous data governance, and human oversight into the core of business operations. The “Brussels Effect” ensures that these standards will likely become the de facto global baseline, influencing regulatory postures in the United States and beyond.3

1. Key Strategic Takeaways for 2026

The following eight points synthesize the critical actionable intelligence for executive leadership in the current fiscal year:

  1. Enforcement of Prohibitions is Live: The ban on specific AI practices deemed to pose an “unacceptable risk” is now active. This includes systems that manipulate human behavior, exploit vulnerabilities, or utilize emotion recognition in workplaces and educational institutions. Violations in these areas are subject to the maximum penalty tier, and regulators have signaled a zero-tolerance approach.4
  2. Extraterritorial Scope is Absolute: The Act applies to any organization, regardless of its geographic location, that places AI systems on the EU market or puts them into service in the EU. Furthermore, it applies if the output produced by an AI system is used in the EU, capturing North American companies that process EU data or serve EU clients remotely.6
  3. High-Risk Classification is Broad and Sector-Critical: The “High-Risk” designation encompasses core business functions in regulated industries, including creditworthiness assessments in finance, medical triage in healthcare, and student admission systems in education. These systems require a Conformity Assessment, CE marking, and registration in the EU database.4
  4. The “Digital Omnibus” and Strategic Ambiguity: A legislative proposal known as the “Digital Omnibus” introduced in late 2025 suggests delaying the enforcement of obligations for certain High-Risk systems (Annex III) from August 2026 to late 2027. This is due to delays in harmonized technical standards. However, relying on this delay is a high-risk strategy; organizations must plan for the 2026 deadline to ensure resilience.2
  5. Governance Overrides Technology: Compliance requires a holistic governance approach. It is not enough for an algorithm to be accurate; the organization must demonstrate robust data governance, record-keeping, and human oversight. The Fundamental Rights Impact Assessment (FRIA) is a new, mandatory governance tool for deployers in the public sector and essential services.9
  6. Transparency is the First Line of Defense: Immediate obligations for transparency are in effect. Systems interacting with humans (chatbots) and AI-generated content (deepfakes) must be clearly labeled. Failure to disclose the non-human nature of an interaction is a direct violation of Article 50.7
  7. Penalties Threaten Financial Stability: The penalty structure is designed to be dissuasive, with fines reaching up to €35 million or 7% of total worldwide annual turnover, whichever is higher (a worked example of this formula follows this list). This exceeds the 4% cap of the GDPR, positioning AI non-compliance as a top-tier enterprise risk.10
  8. The Rise of AI Literacy: There is a mandatory requirement for AI literacy among staff. Organizations must ensure that personnel operating AI systems are competent to interpret outputs and intervene when necessary. This requires a documented training curriculum for all relevant employees.11
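The headline fine in point 7 is a simple "greater of" calculation. The sketch below works through it in Python for a hypothetical firm; the turnover figure is invented purely for illustration.

```python
def max_ai_act_fine(worldwide_annual_turnover_eur: float) -> float:
    """Upper bound of the top penalty tier: EUR 35M or 7% of total worldwide
    annual turnover, whichever is higher (illustrative calculation only)."""
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# Hypothetical firm with EUR 2 billion in worldwide annual turnover:
# 7% of turnover (EUR 140M) exceeds the EUR 35M floor.
print(f"{max_ai_act_fine(2_000_000_000):,.0f}")  # 140,000,000
```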

2. The Regulatory Architecture: A Framework for Algorithmic Accountability

To effectively navigate the EU AI Act, executives must understand the architectural principles that underpin the regulation. Unlike sector-specific laws, the AI Act is a horizontal product safety regulation that applies across the entire economy, layered with fundamental rights protections.

2.1 The Risk-Based Approach

The central mechanism of the AI Act is its risk-based classification system. Obligations are not uniform; they escalate in proportion to the potential harm an AI system poses to health, safety, or fundamental rights. This approach allows for the unrestricted use of minimal-risk AI while imposing strict controls on high-stakes applications.

| Risk Category | Definition | Examples | Regulatory Burden |
| --- | --- | --- | --- |
| Unacceptable Risk | Practices deemed a clear threat to fundamental rights. | Social scoring, real-time remote biometric identification (RBI) in public by law enforcement, emotion recognition in schools/workplaces. | Prohibited. Banned from the EU market.4 |
| High Risk | Systems creating adverse impacts on safety or fundamental rights. | Medical devices, critical infrastructure, credit scoring, recruitment tools, border control. | Strictly Regulated. Conformity assessment, risk management, data governance, human oversight, cybersecurity.4 |
| Limited Risk | Systems with specific transparency risks. | Chatbots, emotion recognition (outside prohibited areas), deepfakes. | Transparency. Users must be informed they are interacting with AI; content must be labeled.4 |
| Minimal Risk | The vast majority of AI systems. | Spam filters, video games, inventory management. | None. No new obligations, though voluntary codes of conduct are encouraged.4 |
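To make the tiering concrete, the sketch below shows one way an organization might tag its internal AI inventory against these four categories. The enum labels, obligation summaries, and system names are illustrative assumptions drawn from the table above, not terminology from the Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"      # banned from the EU market
    HIGH = "strictly_regulated"      # conformity assessment, oversight, registration
    LIMITED = "transparency"         # disclosure / labeling duties
    MINIMAL = "none"                 # no new obligations

# Hypothetical inventory entries mapping internal systems to the tiers above
ai_inventory = {
    "credit-scoring-model-v3": RiskTier.HIGH,
    "customer-support-chatbot": RiskTier.LIMITED,
    "email-spam-filter": RiskTier.MINIMAL,
}

for system, tier in ai_inventory.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```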

2.2 The Implementation Timeline and the “Digital Omnibus” Disruption

The timeline for compliance has been a moving target, complicated by the operational realities of establishing a new regulatory infrastructure. The Act officially entered into force in August 2024, triggering a staggered rollout.

  • February 2, 2025: The first major milestone was reached with the enforcement of prohibitions (Article 5) and the AI literacy requirement (Article 4). Any organization still using prohibited systems after this date is non-compliant.1
  • August 2, 2025: Obligations for General-Purpose AI (GPAI) models and the governance structure (Notifying Authorities, AI Office) became applicable. This impacted providers of foundation models and the downstream entities integrating them.1
  • August 2, 2026 (The Critical Deadline): This date marks the intended application of rules for Annex III High-Risk AI Systems—standalone software used in sensitive areas like employment, education, and essential services.
  • August 2, 2027: The deadline for Annex I High-Risk AI Systems—those embedded as safety components in products regulated under other EU laws (e.g., medical devices, machinery, cars).1

The Strategic Complication: The “Digital Omnibus”

In late 2025, the European Commission proposed the “Digital Omnibus,” a legislative package aimed at simplifying and harmonizing digital regulations. A key component of this proposal is a potential delay to the August 2026 deadline for Annex III systems. The rationale is pragmatic: the harmonized technical standards required for conformity assessments (covering robustness, accuracy, and cybersecurity) have faced delays in the CEN/CENELEC standardization process.2

If passed, this Omnibus could push the enforcement of Annex III obligations to late 2027, aligning it with Annex I. However, for executives, this creates a dangerous ambiguity. Assuming a delay could lead to unpreparedness if the legislation stalls or is amended. The prudent strategic course is to maintain the August 2026 target for internal readiness, using any potential delay as a grace period for rigorous testing rather than a reason to pause implementation.12

2.3 Key Definitions and Roles

Understanding one’s legal role is paramount, as obligations differ significantly between “Providers” and “Deployers.”

  • Provider: The entity that develops an AI system or has it developed and places it on the market under its own name or trademark. This includes US tech companies selling software in the EU. Providers bear the bulk of the compliance burden, including conformity assessments and technical documentation.7
  • Deployer: The entity using an AI system under its authority, except for personal non-professional use. This includes a bank using a credit scoring tool or a hospital using a diagnostic aid. Deployers are responsible for human oversight, monitoring input data, and conducting Fundamental Rights Impact Assessments (FRIAs).7
  • General-Purpose AI (GPAI) Model Provider: A distinct category for developers of models like GPT-4 or Claude. They face specific transparency and systemic risk obligations, distinct from the High-Risk system rules.13

3. Sector-Specific Strategic Analysis

The impact of the EU AI Act is highly sector-dependent. While the horizontal rules apply broadly, the specific use cases defined in Annex III create targeted compliance regimes for finance, healthcare, education, government, and nonprofits.

3.1 Financial Services: Managing Credit and Risk

The financial sector is heavily impacted by the designation of creditworthiness assessment and life/health insurance pricing as High-Risk use cases.

High-Risk Use Cases:

  • Credit Scoring: AI systems used to evaluate the credit score or creditworthiness of natural persons are High-Risk. This applies to both traditional banking algorithms and fintech alternative data models. The Act mandates that these systems must not reproduce historical biases or discriminate against protected groups.7
  • Insurance: Systems used for risk assessment and pricing in relation to natural persons for life and health insurance are High-Risk. (Note: Property and casualty insurance are generally excluded from this high-risk category unless they involve sensitive profiling).7

Compliance Imperatives:

  • Bias Mitigation and Data Governance: Financial institutions must implement rigorous data governance frameworks. The Act explicitly permits the processing of special categories of personal data (e.g., ethnicity) for the sole purpose of bias detection and correction, subject to strict safeguards. This is a significant deviation from strict GDPR minimization principles, acknowledging that one cannot fix bias without measuring it.14
  • Explainability vs. Black Boxes: The requirement for transparency and human oversight challenges the use of “black box” deep learning models in credit decisions. Deployers must be able to explain why a decision was made to the affected individual. This may necessitate the use of interpretable machine learning techniques or robust post-hoc explainability tools (see the sketch after this list).
  • Integration with DORA: The Digital Operational Resilience Act (DORA) already imposes strict ICT risk management rules on financial entities. The AI Act compliance framework should be integrated with DORA implementation to avoid duplicative governance structures. AI risk is a subset of ICT risk.15
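On the explainability point above, the following sketch shows one common post-hoc technique, permutation feature importance via scikit-learn, applied to a synthetic stand-in for a credit model. The features, data, and model are entirely hypothetical, and this is one illustrative method among several, not a technique mandated by the Act.

```python
# A minimal post-hoc explainability sketch using permutation importance
# (one illustrative technique; the Act does not mandate a specific method).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "payment_history", "account_age"]  # hypothetical
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)    # synthetic labels

model = GradientBoostingClassifier().fit(X, y)

# Rank which inputs drive the model's decisions, as one input to the
# explanation given to an affected credit applicant.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```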

3.2 Healthcare and Life Sciences: The Dual Regulatory Pathway

Healthcare faces a complex intersection between the AI Act and the existing Medical Device Regulation (MDR).

High-Risk Use Cases:

  • Annex I Systems: AI systems that act as safety components of medical devices (e.g., an AI algorithm in a pacemaker or a robotic surgery arm) are High-Risk. These systems are regulated under the MDR, and the AI Act requirements are integrated into the existing conformity assessment (CE marking) process.16
  • Annex III Systems: AI systems used for medical triage (prioritizing patients for emergency services) are specifically listed as High-Risk in Annex III, even if they are not strictly medical devices under the MDR.16

Compliance Imperatives:

  • Data Quality and “Free of Errors”: Article 10(3) requires training, validation, and testing data to be “free of errors.” In the context of healthcare, where clinical data is inherently noisy, this requirement has caused significant concern. Guidance indicates this should be interpreted as a statistical quality standard—managing and mitigating errors to the best extent possible—rather than an absolute impossibility of zero errors.4
  • FDA Alignment: For US-based medical device manufacturers, alignment with FDA guidance is critical. The FDA’s “Good Machine Learning Practice” (GMLP) principles align well with the EU AI Act’s quality management requirements. However, the EU’s focus on fundamental rights (e.g., non-discrimination in patient outcomes) adds a layer not explicitly central to FDA safety reviews. Manufacturers should aim for a “superset” technical file that covers both FDA and EU requirements.18
  • Human Oversight: The concept of “human in the loop” is critical. Clinical decision support systems must be designed so that the medical professional can realistically verify or override the AI’s recommendation. “Automation bias”—the tendency of humans to over-rely on computer suggestions—must be actively mitigated through training and interface design.17

3.3 Education: Safeguarding the Student Journey

Education is a sensitive sector where AI can determine life trajectories. Consequently, it attracts significant scrutiny under the Act.

High-Risk Use Cases:

  • Access and Admission: AI systems used to determine access or admission to educational institutions are High-Risk.
  • Student Assessment: Systems used to evaluate learning outcomes (grading) or to steer the learning process are High-Risk.5
  • Proctoring: AI used for monitoring students during tests is High-Risk.

Prohibited Practices:

  • Emotion Recognition: The use of AI systems to infer emotions of a natural person in educational institutions is prohibited as of February 2025. This bans software that claims to monitor student “engagement,” “confusion,” or “boredom” via facial analysis. Exception: Systems used for medical or safety reasons.5

Compliance Imperatives:

  • Procurement Standards: Universities and school districts typically act as Deployers, buying software from vendors (Providers). They must update procurement policies to demand proof of AI Act compliance (Declaration of Conformity) from vendors. Deployers are liable if they use non-compliant high-risk systems.19
  • Transparency: Students must be informed when AI is used to grade their work or monitor their exams.
  • Human Oversight: Automated grading must be subject to human verification, especially for high-stakes exams. The “human in the loop” must have the competence and authority to overturn the AI’s grade.20

3.4 Public Sector and Law Enforcement: Balancing Security and Rights

Government use of AI involves the exercise of state power, necessitating the highest safeguards.

High-Risk Use Cases:

  • Law Enforcement: Systems used for individual risk assessments (profiling), polygraphs, evaluating the reliability of evidence, or profiling in the course of detection/investigation are High-Risk.5
  • Migration and Border Control: Systems used for verifying travel documents, examining visa applications, or assessing security risks of irregular migration are High-Risk.21
  • Administration of Justice: AI assisting judicial authorities in researching or interpreting facts and law is High-Risk.21

Prohibited Practices:

  • Real-Time Remote Biometric Identification (RBI): The use of “real-time” facial recognition in publicly accessible spaces by law enforcement is banned, with three narrow exceptions: (1) search for victims of abduction/trafficking; (2) prevention of a specific, substantial, and imminent threat to life or terrorism; (3) localization of a suspect in a serious crime (e.g., murder, trafficking). All exceptions require prior judicial or independent administrative authorization.4
  • Predictive Policing: AI systems used for making risk assessments of natural persons in order to assess the risk of them offending are prohibited if based solely on profiling or personality traits.5

Compliance Imperatives:

  • Fundamental Rights Impact Assessment (FRIA): Public bodies are the primary target for the FRIA obligation. Before deploying a High-Risk system, they must conduct an assessment detailing the intended use, the categories of people affected, the specific risks to fundamental rights, and the complaint mechanisms available. This assessment must be notified to the national market surveillance authority.9
  • Public Register: Public authorities must register the use of High-Risk AI systems in a public EU database, ensuring transparency for citizens.4

3.5 Nonprofit and Humanitarian Sector

Nonprofits often operate in high-stakes environments serving vulnerable populations.

High-Risk Use Cases:

  • Beneficiary Eligibility: AI systems used to evaluate eligibility for essential public assistance benefits and services (often administered by NGOs on behalf of the state) are High-Risk.21
  • Donor Profiling: While typically not High-Risk, sophisticated donor profiling using external data could trigger GDPR and AI Act transparency rules.

Compliance Imperatives:

  • Vulnerable Populations: The prohibition on AI practices that “exploit vulnerabilities” of specific groups (age, disability, social/economic situation) is particularly relevant for humanitarian NGOs. Systems used to allocate aid in refugee camps or disaster zones must be rigorously tested to ensure they do not inadvertently exploit the desperate situation of beneficiaries to manipulate behavior.22
  • Cost of Compliance: For smaller nonprofits, the cost of conformity assessments for High-Risk systems can be prohibitive. Engaging with Regulatory Sandboxes established by Member States can provide a cost-effective pathway to compliance, offering priority access and technical support.23

4. Operationalizing Compliance: A Pragmatic Framework

For day-to-day business, compliance must be translated from legal text into operational processes. The following framework outlines the essential steps for “Deployers” and “Providers.”

4.1 The Fundamental Rights Impact Assessment (FRIA)

Who: Deployers in the public sector and private entities providing essential services (banking, insurance, education, healthcare) using High-Risk AI.9

When: Prior to the first use of the system.

How:

  1. Scope and Context: Describe the intended purpose, the context of use, and the categories of affected persons.
  2. Risk Identification: Identify specific risks to fundamental rights (e.g., discrimination, privacy, right to good administration).
  3. Mitigation Measures: Detail the human oversight measures (who is watching the AI?) and complaint mechanisms (how can a person challenge the decision?).
  4. Integration: Combine with the GDPR Data Protection Impact Assessment (DPIA) to create a unified risk document.24
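As one way to operationalize these steps, the sketch below captures them as a structured record that can be versioned alongside the system. The field names and example values are assumptions for illustration, not the Act's prescribed template.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class FRIARecord:
    """Illustrative structure for a Fundamental Rights Impact Assessment record.
    Field names are assumptions; the Act prescribes the content, not a schema."""
    system_name: str
    intended_purpose: str
    context_of_use: str
    affected_groups: List[str]
    identified_risks: List[str]           # e.g. discrimination, privacy, good administration
    oversight_measures: List[str]         # who monitors the system and how
    complaint_mechanism: str              # how an affected person can challenge a decision
    linked_dpia_reference: Optional[str] = None   # pointer to the GDPR DPIA, if combined
    assessment_date: date = field(default_factory=date.today)

fria = FRIARecord(
    system_name="benefit-eligibility-screener",
    intended_purpose="Prioritize applications for caseworker review",
    context_of_use="Municipal social services intake",
    affected_groups=["benefit applicants"],
    identified_risks=["indirect discrimination", "right to good administration"],
    oversight_measures=["caseworker reviews every adverse recommendation"],
    complaint_mechanism="written appeal to the benefits office",
)
print(fria.system_name, fria.assessment_date)
```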

4.2 Data Governance and Quality Management

Who: Providers of High-Risk AI.

Requirement: High-quality data is the cornerstone of the Act.

How:

  • Data Lineage: Document the origin of all training, validation, and testing data.
  • Bias Examination: Actively test data for bias against protected groups (a minimal metric sketch follows this list). The Act allows the processing of special category data (e.g., race, ethnicity) strictly for bias monitoring and correction—a “safe harbor” from GDPR restrictions for this specific purpose.14
  • Governance Framework: Establish a Quality Management System (QMS) that includes policies for data collection, labeling, cleaning, and aggregation.4
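For the bias examination step, the sketch below computes one widely used screening metric, the ratio of favorable-outcome rates between groups (often called the disparate impact ratio), on a tiny hypothetical dataset. The column names and data are assumptions; the Act requires bias to be examined and mitigated but does not prescribe this or any particular metric.

```python
import pandas as pd

# Hypothetical dataset: one row per applicant, with a protected attribute
# collected strictly for bias monitoring and a binary model decision.
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,    1,   0,   1,   0,   0,   1,   1],
})

# Favorable-outcome rate per group
rates = df.groupby("group")["approved"].mean()

# Disparate impact ratio: lowest group rate divided by highest group rate.
# A value well below 1.0 flags a disparity worth investigating.
ratio = rates.min() / rates.max()
print(rates.to_dict(), f"ratio={ratio:.2f}")
```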

4.3 Technical Documentation and Record Keeping

Who: Providers.

Requirement: Demonstrate conformity through exhaustive documentation.

How:

  • System Architecture: Detailed diagrams of the model architecture, logic, and parameters.
  • Validation Reports: Results of testing against accuracy, robustness, and cybersecurity metrics.
  • Logging: The system must automatically generate logs of its operation (start times, input data, identification of the human operator). These logs must be kept for at least 6 months.4
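A minimal logging sketch along these lines follows; the event schema is an assumption, and retention and tamper-evidence are noted only in comments rather than implemented.

```python
import json
import logging
from datetime import datetime, timezone

# Structured event log for a high-risk system; in practice these records
# would go to tamper-evident storage with a retention policy of at least
# six months (retention enforcement itself is not shown here).
logger = logging.getLogger("ai_system.audit")
logging.basicConfig(level=logging.INFO)

def log_inference_event(system_id: str, input_reference: str, operator_id: str) -> None:
    event = {
        "system_id": system_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input_reference": input_reference,   # pointer to the input data, not the data itself
        "operator_id": operator_id,           # the human responsible for oversight
    }
    logger.info(json.dumps(event))

log_inference_event("triage-model-v2", "case-2026-00017", "operator-042")
```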

4.4 Human Oversight (“Human-in-the-Loop”)

Who: Providers (design) and Deployers (implementation).

Requirement: High-Risk systems must be overseen by natural persons.

How:

  • Design: The Provider must build the system with a “stop button” or override capability.
  • Implementation: The Deployer must assign specific staff to oversight roles. These staff must be “AI Literate”—trained to understand the system’s output and, crucially, to recognize “automation bias” (the tendency to blindly accept the machine’s advice).4
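The sketch below illustrates one possible override pattern: the model's recommendation is applied only above a confidence threshold, and everything else is routed to a named human reviewer. The threshold, function names, and confidence-based routing are illustrative design assumptions, not requirements spelled out in the Act.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Decision:
    outcome: str
    source: str                      # "model" or "human_override"
    reviewer_id: Optional[str] = None

CONFIDENCE_THRESHOLD = 0.85          # illustrative; derive from validation data in practice

def decide(model_outcome: str, model_confidence: float,
           human_review: Callable[[str], str], reviewer_id: str) -> Decision:
    """Apply the model's recommendation only when confidence is high;
    otherwise route the case to a named human reviewer who can override it."""
    if model_confidence >= CONFIDENCE_THRESHOLD:
        return Decision(outcome=model_outcome, source="model")
    return Decision(outcome=human_review(model_outcome),
                    source="human_override", reviewer_id=reviewer_id)

# Example: a reviewer overturns a low-confidence automated rejection
print(decide("reject", 0.62, human_review=lambda rec: "approve", reviewer_id="clinician-17"))
```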

4.5 The “Conformity Assessment” Procedure

Who: Providers.

Requirement: Before placing a High-Risk system on the market.

How:

  • Internal Control (Annex VI): For most Annex III systems, the Provider can perform a self-assessment of conformity.
  • Notified Body: For biometric systems and Annex I systems (safety components), a third-party audit by a “Notified Body” is required. This is a significant bottleneck; organizations should engage auditors early.25

5. Enforcement Landscape & Case Studies: Learning from Precedents

While the AI Act’s specific penalty regime is just coming online, European regulators have actively used the GDPR and other laws to enforce principles that mirror the AI Act: fairness, transparency, and data legality. The following five case studies serve as critical precedents for the types of violations that will attract the AI Act’s maximum penalties (€35M / 7% turnover).

Case Study 1: Clearview AI – The “Prohibited Practice” Proxy

  • Penalty: €30.5 Million (Dutch DPA, 2024/2025).26
  • The Violation: Clearview AI created a massive facial recognition database by scraping billions of images from the internet and social media without the subjects’ knowledge or consent. This database was then offered to law enforcement agencies.
  • Relevance to AI Act: This specific behavior—untargeted scraping of facial images from the internet or CCTV to create or expand facial recognition databases—is now explicitly codified as a Prohibited Practice under Article 5 of the AI Act.5
  • Key Lesson: The fine was issued under GDPR, but the behavior is the exact target of the AI Act’s most severe prohibition. Organizations must audit their supply chains to ensure they are not procuring data or services derived from such scraping activities. Using such a tool would now trigger the “Unacceptable Risk” liability tier.

Case Study 2: LinkedIn – The “Profiling” Precedent

  • Penalty: €310 Million (Irish DPC, October 2024).28
  • The Violation: LinkedIn was fined for its behavioral analysis of user data for targeted advertising. The platform tracked user interactions (e.g., how long a user hovered over an ad) to infer personal characteristics and target ads, without a valid legal basis or adequate transparency.
  • Relevance to AI Act: This case highlights the risks associated with profiling and inferring sensitive data, which are central concerns of the AI Act’s High-Risk classification. While this was a GDPR fine, it underscores the regulator’s intolerance for “invisible” AI processing. Under the AI Act, “subliminal techniques” or manipulative profiling are prohibited, and credit/insurance profiling is High-Risk.
  • Key Lesson: “Black box” profiling is legally toxic. Systems that infer user characteristics must be transparent, and consent must be informed and granular.

Case Study 3: Aena – The “Biometric” Warning

  • Penalty: €10 Million (Spanish AEPD, 2025).15
  • The Violation: Aena, the Spanish airport operator, was fined for the improper use of biometric identification systems. The agency cited a lack of security, proportionality, and a valid legal basis for processing biometric data of travelers.
  • Relevance to AI Act: Biometric identification is a High-Risk use case (Annex III). The AI Act imposes strict requirements for accuracy, robustness, and cybersecurity on such systems. This fine demonstrates that “security and proportionality” are not just IT goals but legal mandates.
  • Key Lesson: Organizations deploying facial recognition for access control (e.g., in stadiums, offices, or transport hubs) face dual scrutiny. They must meet GDPR consent rules and AI Act technical robustness standards.

Case Study 4: OpenAI / ChatGPT – The “Transparency” Enforcement

  • Penalty: €15 Million (Italian Garante, with subsequent actions).29
  • The Violation: The Italian DPA investigated ChatGPT for lacking a legal basis for training data, providing inaccurate information (hallucinations), and failing to verify the age of users (exposing minors to inappropriate content).
  • Relevance to AI Act: This directly maps to the General-Purpose AI (GPAI) obligations. Article 53 requires GPAI providers to respect EU copyright law and maintain detailed technical documentation. Article 50 requires that users know they are interacting with an AI.
  • Key Lesson: GPAI providers are not immune. The requirements for age verification and training data transparency are now codified. Downstream deployers must ensure their GPAI vendors are compliant to avoid service disruptions.

Case Study 5: Free Mobile – The “Security” Failure

  • Penalty: €27 Million (CNIL, January 2026).30
  • The Violation: The French regulator fined Free Mobile for failing to ensure the security of subscriber data, leading to a massive breach.
  • Relevance to AI Act: High-Risk AI systems have a specific statutory obligation for “Accuracy, Robustness, and Cybersecurity”.4 If an AI system is breached due to poor architecture or failure to test against adversarial attacks, it is now an AI Act violation in addition to a GDPR data breach.
  • Key Lesson: Cybersecurity is a condition of market access. AI systems must be stress-tested (“red teamed”) against adversarial attacks (e.g., model poisoning, evasion) as part of the conformity assessment.
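As a flavor of what even a first-pass robustness probe looks like, the sketch below measures how often small random input perturbations flip a synthetic model's predictions. Genuine red-teaming against evasion or poisoning goes far beyond this; the model, noise scale, and metric here are illustrative assumptions only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y = (X[:, 0] + X[:, 3] > 0).astype(int)          # synthetic labels
model = LogisticRegression().fit(X, y)

def flip_rate(model, X, epsilon: float = 0.1, trials: int = 20) -> float:
    """Fraction of predictions that change under small random input
    perturbations -- a crude stability probe, not a full adversarial attack."""
    base = model.predict(X)
    flips = 0.0
    for _ in range(trials):
        perturbed = X + rng.normal(scale=epsilon, size=X.shape)
        flips += np.mean(model.predict(perturbed) != base)
    return flips / trials

print(f"prediction flip rate under noise: {flip_rate(model, X):.3f}")
```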

6. Transatlantic Implications & Future Outlook

6.1 The “Brussels Effect” and North American Business

For North American executives, the EU AI Act is not a “foreign” law; it is a global constraint. Due to the “Brussels Effect,” multinational companies often adopt the strictest regulatory standard as their global baseline to simplify operations.

  • Extraterritorial Reach: Article 2 explicitly states the Act applies to providers outside the EU if they place systems on the EU market, and to providers/deployers anywhere if the output is used in the EU.6 A US bank analyzing the creditworthiness of an EU resident using a US-hosted AI model is subject to the Act.
  • US-EU Alignment: While the US approach (Executive Order on AI, NIST AI Risk Management Framework) is more policy-driven and less punitive, the underlying principles (validity, reliability, bias mitigation) are converging. Compliance with the EU AI Act generally positions a firm well for US compliance, but the reverse is not true due to the EU’s specific documentation and conformity mandates.31

6.2 The Future of AI Governance (2027-2030)

Looking ahead, the regulatory environment will continue to evolve.

  • The AI Office: The newly established European AI Office will grow into a powerful regulator, similar to the role the European Commission plays in competition law. It will oversee GPAI models and coordinate national supervisors.5
  • Standardization: The development of harmonized standards (CEN/CENELEC) will provide the “technical presumption of conformity.” Executives should monitor these standards closely, as adherence to them offers the strongest safe harbor against liability.32
  • Litigation: We anticipate a wave of civil litigation. The AI Act enables individuals to complain to national authorities, and the forthcoming AI Liability Directive (though currently paused) may further ease the path for consumers to sue for damages caused by AI systems.33

Conclusion

The EU AI Act represents a fundamental restructuring of the digital economy. It transforms AI from an unregulated frontier into a managed industrial domain. For executives, the path forward is clear: identify your role (Provider vs. Deployer), map your inventory against the risk categories, and institutionalize governance. The cost of compliance is significant, but the cost of non-compliance—measured in eight-figure fines and existential reputational damage—is far higher. By treating these obligations as a blueprint for quality and trust, organizations can turn regulatory necessity into a competitive advantage in the age of responsible AI.

Works cited

  1. Timeline for the Implementation of the EU AI Act, accessed February 16, 2026, https://ai-act-service-desk.ec.europa.eu/en/ai-act/timeline/timeline-implementation-eu-ai-act
  2. EU Digital Omnibus Proposes Delay of AI Compliance Deadlines | Blogs | OneTrust, accessed February 16, 2026, https://www.onetrust.com/blog/eu-digital-omnibus-proposes-delay-of-ai-compliance-deadlines/
  3. The Brussels Effect: What US Enterprises Need to Know About the EU AI Act., accessed February 16, 2026, https://www.raisesummit.com/post/brussels-effect-us-enterprises-eu-ai-act
  4. High-level summary of the AI Act | EU Artificial Intelligence Act, accessed February 16, 2026, https://artificialintelligenceact.eu/high-level-summary/
  5. AI Act | Shaping Europe’s digital future – European Union, accessed February 16, 2026, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
  6. How the EU AI Act affects US-based companies – KPMG International, accessed February 16, 2026, https://kpmg.com/us/en/articles/2024/how-eu-ai-act-affects-us-based-companies.html
  7. EU AI Act 2026 Updates: Compliance Requirements and Business Risks – Legal Nodes, accessed February 16, 2026, https://www.legalnodes.com/article/eu-ai-act-2026-updates-compliance-requirements-and-business-risks
  8. The Digital Omnibus changes to the AI Act – high-impact on high-risk AI?, accessed February 16, 2026, https://www.taylorwessing.com/en/global-data-hub/2026/the-digital-omnibus-proposal/gdh—the-digital-omnibus-changes-to-the-ai-act
  9. Zooming in on AI – #13: EU AI act – Focus on fundamental rights …, accessed February 16, 2026, https://www.aoshearman.com/en/insights/ao-shearman-on-tech/zooming-in-on-ai-13-eu-ai-act-focus-on-fundamental-rights-impact-assessment-for-high-risk-ai-systems
  10. Latest wave of obligations under the EU AI Act take effect: Key considerations | DLA Piper, accessed February 16, 2026, https://www.dlapiper.com/insights/publications/2025/08/latest-wave-of-obligations-under-the-eu-ai-act-take-effect
  11. Rules on AI Literacy and Prohibited Systems Under the EU AI Act Become Applicable, accessed February 16, 2026, https://www.hunton.com/privacy-and-cybersecurity-law-blog/rules-on-ai-literacy-and-prohibited-systems-under-the-eu-ai-act-become-applicable
  12. EU’s Digital Omnibus offers AI regulatory relief, but questions remain – PwC, accessed February 16, 2026, https://www.pwc.com/us/en/services/consulting/cybersecurity-risk-regulatory/library/tech-regulatory-policy-developments/eu-digital-omnibus.html
  13. EU Artificial Intelligence Act | Up-to-date developments and analyses of the EU AI Act, accessed February 16, 2026, https://artificialintelligenceact.eu/
  14. Where AI Regulation is Heading in 2026: A Global Outlook | Blog | OneTrust, accessed February 16, 2026, https://www.onetrust.com/blog/where-ai-regulation-is-heading-in-2026-a-global-outlook/
  15. More sanctions and higher fines: the AEPD raises the level of fines …, accessed February 16, 2026, https://www.ecija.com/en/news-and-insights/mas-sanciones-y-de-mayor-importe-la-aepd-sube-el-nivel-de-multas-en-2025/
  16. Artificial Intelligence in healthcare – Public Health – European Commission, accessed February 16, 2026, https://health.ec.europa.eu/ehealth-digital-health-and-care/artificial-intelligence-healthcare_en
  17. Medical devices and the EU AI Act AI Act – how will two sets of regulations work together?, accessed February 16, 2026, https://www.taylorwessing.com/en/interface/2024/ai-act-sector-focus/medical-devices-and-the-eu-ai-act-ai-act
  18. FDA Issues Guidance on AI for Medical Devices – CyberAdviser, accessed February 16, 2026, https://www.cyberadviserblog.com/2025/08/fda-issues-guidance-on-ai-for-medical-devices/
  19. EU AI Act: What it means for universities – Digital Education Council, accessed February 16, 2026, https://www.digitaleducationcouncil.com/post/eu-ai-act-what-it-means-for-universities
  20. AI Act Compliance Checklist: Your 2026 Survival Guide (With Free …, accessed February 16, 2026, https://medium.com/@vicki-larson/ai-act-compliance-checklist-your-2026-survival-guide-with-free-template-44cdcd8fbf8e
  21. Annex III: High-Risk AI Systems Referred to in Article 6(2) | EU Artificial Intelligence Act, accessed February 16, 2026, https://artificialintelligenceact.eu/annex/3/
  22. Upcoming EU AI Act Obligations Mandatory Training and Prohibited Practices, accessed February 16, 2026, https://www.lw.com/en/insights/upcoming-eu-ai-act-obligations-mandatory-training-and-prohibited-practices
  23. AI Regulatory Sandbox Approaches: EU Member State Overview, accessed February 16, 2026, https://artificialintelligenceact.eu/ai-regulatory-sandbox-approaches-eu-member-state-overview/
  24. EU AI Act 2026 Compliance Guide: Key Requirements Explained – Secure Privacy, accessed February 16, 2026, https://secureprivacy.ai/blog/eu-ai-act-2026-compliance
  25. Streamlining EU Compliance for AI-Enabled Medical Devices – Intertek, accessed February 16, 2026, https://www.intertek.com/blog/2025/12-30-streamlining-eu-compliance-for-ai-enabled-medical-devices/
  26. Dutch Supervisory Authority imposes a fine on Clearview because of illegal data collection for facial recognition, accessed February 16, 2026, https://www.edpb.europa.eu/news/national-news/2024/dutch-supervisory-authority-imposes-fine-clearview-because-illegal-data_en
  27. Netherlands: Face-Recognition Company Clearview AI Fined for Violating EU’s General Data Protection Regulation | Library of Congress, accessed February 16, 2026, https://www.loc.gov/item/global-legal-monitor/2024-10-16/netherlands-face-recognition-company-clearview-ai-fined-for-violating-eus-general-data-protection-regulation/
  28. Fines for GDPR violations in AI systems and how to avoid them …, accessed February 16, 2026, https://data-privacy-office.eu/fines-for-gdpr-violations-in-ai-systems-and-how-to-avoid-them/
  29. OpenAI faces €15 million fine as the Italian Garante strikes again – Lewis Silkin LLP, accessed February 16, 2026, https://www.lewissilkin.com/en/insights/2025/01/14/openai-faces-15-million-fine-as-the-italian-garante-strikes-again-102jtqc
  30. Data breach: FREE MOBILE and FREE fined €42 million – CNIL, accessed February 16, 2026, https://www.cnil.fr/en/sanction-free-2026
  31. 2026 AI Laws Update: Key Regulations and Practical Guidance – Gunderson Dettmer, accessed February 16, 2026, https://www.gunder.com/en/news-insights/insights/2026-ai-laws-update-key-regulations-and-practical-guidance
  32. An Introduction to the Code of Practice for General-Purpose AI | EU Artificial Intelligence Act, accessed February 16, 2026, https://artificialintelligenceact.eu/introduction-to-code-of-practice/
  33. AI Watch: Global regulatory tracker – European Union | White & Case LLP, accessed February 16, 2026, https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-european-union

This article was written with my brain and two hands (primarily) with the help of Google Gemini, Notebook LM, Claude, and other wondrous toys.
