Research: Why good orgs in regulated industries still neglect AI Ethics and Governance

The rapid integration of Artificial Intelligence (AI) into the operational fabric of regulated industries—spanning financial services, healthcare, government, education, and the nonprofit sector—has created a paradox of innovation. While organizations in North America and Europe aggressively scale AI to capture efficiency gains and competitive advantages, a significant “governance deficit” has emerged. This report provides a comprehensive analysis of the root causes behind this neglect and the resulting legal, financial, and reputational penalties.

Research indicates that the neglect of AI ethics is rarely a product of malice but rather a structural failure of alignment between technological capability and organizational maturity. The primary barriers to adopting robust AI governance are deeply rooted in financial justifications, technical literacy gaps, and legacy infrastructure rigidity. Approximately 42% of organizations report that an “inadequate financial justification” or “unclear business case” prevents them from prioritizing ethical oversight.1 This financial skepticism is compounded by a profound lack of technical expertise at the board level, where nearly two-thirds of directors admit to having limited or no knowledge of AI systems.2 Consequently, organizations frequently deploy “black box” algorithms without the necessary observability to detect bias, drift, or failure, leading to catastrophic outcomes.

The consequences of this neglect are no longer theoretical. We are witnessing a transition from “soft law” ethical guidelines to “hard law” regulatory enforcement. In the United States, the Federal Trade Commission (FTC) and the Consumer Financial Protection Bureau (CFPB) have launched aggressive enforcement sweeps, such as “Operation AI Comply,” targeting deceptive and discriminatory AI practices.3 In Europe, the implementation of the EU AI Act has introduced extraterritorial jurisdiction and penalties of up to 7% of global turnover for prohibited AI practices.4

This report details high-profile case studies of governance failure, including the $20 million settlement in Michigan’s unemployment fraud scandal 5, the racial bias inherent in UnitedHealth’s Optum algorithm 6, and the privacy violations by Clearview AI that resulted in maximum fines across multiple European jurisdictions.7 These cases demonstrate that algorithmic failure often stems from the use of flawed “proxy variables”—such as using healthcare spending as a proxy for health needs or school history as a proxy for student potential—which codifies systemic inequalities into automated decision-making.

To mitigate these risks, executives must transition from static, point-in-time compliance checks to continuous, automated governance frameworks. The data suggests that “Responsible AI” is not merely a compliance cost but a driver of value; 58% of executives who operationalize ethical principles report improved return on investment (ROI) and organizational efficiency.9 This report outlines actionable strategies for bridging the literacy gap, establishing “human-in-the-loop” decision rights, and ensuring data hygiene to prevent the “garbage in, bias out” cycle that plagues modern AI deployments.

Section 1: The Landscape of Neglect – Root Causes and Systemic Barriers

The failure to implement effective AI governance is a systemic issue driven by conflicting incentives, resource scarcity, and a fundamental misunderstanding of the technology’s risks. Our research identifies four primary clusters of root causes that explain why organizations, despite facing increasing regulatory pressure, continue to neglect AI ethics.

1.1 The ROI Paradox and Financial Justification

One of the most pervasive barriers to the adoption of AI ethics and governance is the perceived conflict between speed-to-market and the “friction” of oversight. Executives in regulated industries are under immense pressure to demonstrate the value of generative AI investments, yet they struggle to quantify the “return on ethics.”

The Measurement Problem: According to 2024-2025 industry reports, 42% of organizations cite “inadequate financial justification or business case” as a top barrier to AI adoption and governance.1 In the government sector, this challenge is even more acute; 78% of government leaders across 14 countries report struggling to measure the impacts of Generative AI (GenAI).10 Unlike operational efficiencies, which can be measured in hours saved or costs reduced, the value of governance is often “preventative”—it is measured in lawsuits avoided, reputations preserved, and trust maintained. These are difficult metrics to place on a balance sheet until a crisis occurs.

The “Cost Center” Fallacy: Governance is frequently viewed as a cost center rather than a value driver. However, this perspective is empirically flawed. Data from PwC’s Responsible AI survey indicates that 58% of executives found that Responsible AI initiatives actually improve ROI and organizational efficiency, while 55% reported improvements in customer experience and innovation.9 The neglect of ethics is often a failure of strategic vision, where leaders view compliance as a hurdle to be cleared rather than a mechanism to ensure the sustainability of the AI product.

1.2 The Technical Literacy Gap in Leadership

A critical “governance gap” exists between the technical teams deploying models and the executive boards responsible for oversight. This disconnect prevents effective risk assessment and resource allocation.

Boardroom Illiteracy: Despite the existential risks posed by AI, corporate boards remain dangerously under-informed. A 2025 Deloitte survey reveals that 66% of board members still have “limited to no knowledge or experience” with AI.2 Furthermore, nearly one-third (31%) of boards do not even have AI as a standing item on their agenda.2 This lack of literacy creates a vacuum where technical teams act without strategic guardrails, and executives approve systems they do not understand, relying entirely on vendor assurances.

The “Black Box” Trust Issue: This literacy gap is exacerbated by the “black box” nature of deep learning models. When executives cannot interpret how an algorithm reaches a decision—whether a credit denial or a hiring rejection—they are unable to challenge the output. Research shows that 42% of organizations struggle with “inadequate generative AI expertise,” which directly hampers their ability to implement governance requirements such as explainability and interpretability.1

1.3 Data Infrastructure and the “Garbage In” Problem

The integrity of an AI system is wholly dependent on the data upon which it is trained. However, organizations in regulated industries often lack the data infrastructure required to build fair and robust models.

Data Scarcity and Proxy Reliance: Approximately 45% of organizations cite “concerns about data accuracy or bias” as a primary adoption challenge, while 42% report “insufficient proprietary data” to customize models.1 In the absence of direct, high-quality data, developers often resort to “proxy variables”—substitutes that correlate with the target outcome but often carry hidden biases.

  • Example: In healthcare, “cost of care” is often used as a proxy for “health need.” This creates a feedback loop where populations with historically lower access to care (and thus lower costs) are deemed “healthier” by the algorithm, denying them necessary resources.11 (A short simulation of this feedback loop appears after this list.)
  • Legacy Systems: The challenge is magnified by legacy infrastructure. About 30% of organizations struggle to integrate modern, agentic AI with rigid legacy systems, making it difficult to implement real-time monitoring or governance “wrappers” around older tech stacks.12
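
To make the healthcare example above concrete, the following simulation illustrates the feedback loop with purely hypothetical numbers: two groups have identical underlying health need, but one group's observed spending is suppressed by access barriers, so an algorithm that ranks patients by cost both under-selects that group and underestimates how sick its selected members are. The variable names and magnitudes are illustrative assumptions, not estimates from the Optum study.

```python
# Illustrative simulation (hypothetical numbers): why "cost" is a biased
# proxy label for "health need". Both groups have identical underlying need,
# but Group B has historically had less access to care, so its observed
# spending is systematically lower.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# True health need is identically distributed in both groups.
need_a = rng.gamma(shape=2.0, scale=1.0, size=n)
need_b = rng.gamma(shape=2.0, scale=1.0, size=n)

# Observed cost = need scaled by access to care, plus noise.
cost_a = need_a * 1.00 + rng.normal(0, 0.2, n)   # full access
cost_b = need_b * 0.70 + rng.normal(0, 0.2, n)   # access barriers suppress spending

# An algorithm trained on cost selects the "highest-risk" patients for extra
# care management by thresholding the proxy.
threshold = np.quantile(np.concatenate([cost_a, cost_b]), 0.80)
print(f"Share of Group A selected for extra care: {(cost_a >= threshold).mean():.1%}")
print(f"Share of Group B selected for extra care: {(cost_b >= threshold).mean():.1%}")

# At the same proxy score, Group B patients are sicker: compare true need
# among patients sitting right at the selection threshold.
band_a = need_a[np.abs(cost_a - threshold) < 0.1]
band_b = need_b[np.abs(cost_b - threshold) < 0.1]
print(f"True need at the threshold, Group A: {band_a.mean():.2f}")
print(f"True need at the threshold, Group B: {band_b.mean():.2f}")
```

The point is structural: no amount of downstream model tuning repairs a target variable that already encodes unequal access.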

1.4 “Shadow AI” and Vendor Risk Management

As AI tools become more accessible, they are increasingly entering organizations through “side doors,” bypassing formal procurement and governance processes.

The Third-Party Blind Spot: Most organizations do not build their own foundation models; they procure them. Yet, vendor oversight is dangerously lax. While 84% of ethics and compliance teams claim to own third-party risk management, only 14% have actually audited more than half of their vendors.13 Furthermore, only 15% of companies include AI safeguards in their third-party codes of conduct.13 This reliance on unverified third-party tools creates “Shadow AI” risks, where employees use unauthorized tools for sensitive tasks, exposing the organization to data breaches and regulatory non-compliance.14

Section 2: The Regulatory Tsunami – Enforcement in North America and Europe

The era of voluntary self-regulation is ending. Across North America and Europe, legislators and regulators are constructing a dense web of “hard law” aimed at curbing the excesses of the algorithmic economy. Executives must understand that AI is no longer an exception to the rule of law; it is a primary target of enforcement.

2.1 The European Union: The AI Act and Extraterritorial Reach

The European Union has established itself as the global “first mover” in comprehensive AI regulation with the EU AI Act, which entered into force in 2024 and began enforcing prohibitions in early 2025.

Risk-Based Architecture:

The Act categorizes AI systems based on the risk they pose to fundamental rights and safety:

  • Unacceptable Risk: Strictly prohibited practices deemed a clear threat to fundamental rights, such as social scoring, subliminal manipulation, and real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions). Obligation: a total ban, effective February 2025.4
  • High Risk: Systems that can impact life chances or safety, including AI in critical infrastructure, education (grading/proctoring), employment (resume screening), essential public services (welfare/credit), and law enforcement. Obligations: mandatory conformity assessments, high-quality data governance, human oversight, record-keeping, and accuracy/cybersecurity standards.4
  • Limited Risk: Systems with specific transparency risks, such as chatbots, emotion recognition systems, and deep fakes. Obligations: transparency; users must be informed they are interacting with AI, and deep fakes must be labeled.4
  • Minimal Risk: The majority of AI systems currently in use, such as spam filters, video games, and inventory management. No new obligations, though voluntary codes of conduct are encouraged.4

Penalties and Jurisdiction: The penalties for non-compliance are draconian, reaching up to €35 million or 7% of global annual turnover, whichever is higher.4 Crucially, the Act applies extraterritorially to any provider placing AI systems on the EU market or whose system’s outputs affect people in the EU, regardless of where the provider is headquartered.4

GDPR Intersection: The AI Act works in tandem with the General Data Protection Regulation (GDPR). European Data Protection Authorities (DPAs) have been aggressive in fining companies for data scraping and automated decision-making violations. For instance, Clearview AI has been fined the maximum €20 million by DPAs in Italy, Greece, and France for unlawful processing of biometric data, illustrating the “pincer movement” of privacy and AI regulation in Europe.7

2.2 The United States: “Operation AI Comply” and Section 5 Authority

Unlike the EU, the United States lacks a single comprehensive federal AI law. Instead, agencies like the FTC, CFPB, and DOJ are using existing consumer protection and civil rights statutes to vigorously enforce AI standards.

The Federal Trade Commission (FTC): The FTC has asserted that “there is no AI exemption from the laws on the books”.3 Under Chair Lina Khan, the agency has utilized Section 5 of the FTC Act (prohibiting unfair or deceptive acts) to target AI.

  • Operation AI Comply: In September 2024, the FTC launched a coordinated law enforcement sweep against companies using AI hype to deceive consumers. This included actions against DoNotPay, which claimed to offer “the world’s first robot lawyer” but failed to deliver the promised legal expertise.3
  • Deception and Unfairness: The FTC has clarified that making false claims about an AI product’s capabilities (e.g., “AI-powered income generation”) is fraud. Furthermore, using AI to discriminate (e.g., in housing or hiring) can be classified as an “unfair practice” under Section 5.3

Consumer Financial Protection Bureau (CFPB):

The CFPB has focused on the “black box” problem in credit and lending.

  • Circular 2022-03: The Bureau confirmed that federal law requires creditors to provide specific, accurate reasons for adverse actions. “The algorithm did it” is not a valid legal defense. If a creditor cannot explain why an AI model denied credit (due to complexity or “black box” opacity), they cannot use that model.20

State and Local Legislation:

  • NYC Local Law 144: Since July 2023, New York City has required employers that use Automated Employment Decision Tools (AEDTs) to screen candidates to commission an annual independent “bias audit” and publish the results.21 While a 2025 audit by the State Comptroller found enforcement to be “ineffective” due to its reliance on complaints, the law has set a compliance baseline for major US employers.21 (A sketch of the audit’s core impact-ratio calculation appears after this list.)
  • California and Colorado: Emerging frameworks in these states are moving toward mandating “risk assessments” for automated decision-making systems in insurance and employment, mirroring the EU’s high-risk categories.23
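
To make the mechanics of such an audit tangible, the sketch below computes selection rates and impact ratios on a hypothetical applicant log. Publishing these ratios is the core of an LL144-style audit; the four-fifths screening threshold used here is a long-standing heuristic from US employment-selection guidance, not a requirement of the local law itself.

```python
# A minimal sketch of the core calculation in an LL144-style bias audit:
# selection rates per demographic category and the impact ratio relative to
# the highest-selected category. The applicant data is hypothetical.
import pandas as pd

applicants = pd.DataFrame({
    "category": ["A"] * 400 + ["B"] * 300 + ["C"] * 300,
    "selected": [1] * 120 + [0] * 280    # Group A: 30% selected
              + [1] * 60  + [0] * 240    # Group B: 20% selected
              + [1] * 45  + [0] * 255,   # Group C: 15% selected
})

rates = applicants.groupby("category")["selected"].mean()
report = pd.DataFrame({
    "selection_rate": rates,
    "impact_ratio": rates / rates.max(),
})
print(report)

# Screening heuristic (the "four-fifths rule"): ratios below 0.8 warrant
# closer review; it is a rule of thumb, not a legal threshold.
flagged = report[report["impact_ratio"] < 0.8]
print("Categories needing review:\n", flagged)
```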

2.3 Canada: Administrative Law and the “Chinook” Precedent

In Canada, the regulation of AI in the public sector is evolving through the Directive on Automated Decision-Making and judicial review in the Federal Court.

Immigration and “Chinook”:

Immigration, Refugees and Citizenship Canada (IRCC) uses a tool called Chinook to process visa applications. While the government claims it is a workflow tool and not “AI decision-making,” Federal Court challenges have scrutinized its role.

  • Haghshenas v. Canada: The court upheld the use of Chinook but emphasized that the decision must remain reasonable and intelligible. The concern is that such tools may produce “boilerplate” refusals that lack the nuance of human judgment.25
  • Legal Hallucinations: Canadian courts have also penalized lawyers for submitting legal briefs containing “hallucinated” cases generated by AI tools, treating this as a violation of professional ethics and competence.27

Section 3: Industry Case Studies – Penalties, Fines, and Governance Failures

The following case studies illustrate the tangible consequences of AI governance neglect. They reveal a pattern where efficiency is prioritized over accuracy, leading to systemic bias, consumer harm, and substantial financial penalties.

3.1 Financial Services & Fintech: The “Black Box” Liability

Hello Digit: The Deceptive Algorithm

  • The Incident: Fintech company Hello Digit marketed an automated savings tool with a “no overdraft guarantee,” claiming its proprietary algorithm would never transfer more money than a user could afford.
  • The Failure: The algorithm failed to account for real-time checking account variables, routinely causing overdrafts for customers. The company then ignored customer complaints.28
  • The Penalty: In 2022, the CFPB ordered Hello Digit to pay a $2.7 million penalty and reimburse all consumers for the overdraft fees.28
  • Key Insight: This case established that companies are strictly liable for the performance of their algorithms. Marketing an AI capability as “foolproof” when it is not is a deceptive practice under federal law.

Goldman Sachs & Apple Card: The Gender Bias Controversy

  • The Incident: In 2019, viral reports emerged that the Apple Card (underwritten by Goldman Sachs) offered significantly lower credit limits to women than to their husbands, even when the women had higher credit scores and shared assets.31
  • The Investigation: The New York State Department of Financial Services (NYDFS) launched a probe into the underwriting algorithm.
  • The Outcome: The investigation found no evidence of intentional discrimination (disparate treatment). However, it highlighted a massive failure in transparency and customer service. The bank could not immediately explain why the algorithm made its decisions, leading to a public relations crisis.33
  • Key Insight: “Fairness” is not enough; “Explainability” is critical. Even if an algorithm is statistically fair, the inability to explain a decision to a consumer destroys trust and invites regulatory scrutiny.

3.2 Healthcare: Systemic Bias and Clinical Failure

UnitedHealth / Optum: Racial Bias in Care Management

  • The Incident: An algorithm widely used by US hospitals to allocate care management resources was found to systematically discriminate against Black patients. At a given risk score, Black patients were considerably sicker than white patients.6
  • The Mechanism (Root Cause): The algorithm used healthcare costs as a proxy for health needs. Because the US healthcare system has historically spent less on Black patients (due to access barriers and systemic bias), the algorithm falsely predicted that they had lower health needs, thereby denying them extra care.11
  • The Consequence: NYDFS and the NY Department of Health investigated, demanding the removal of the bias. The case is a textbook example of “Label Bias”—choosing the wrong target variable for training.36

Epic Systems: The Sepsis Prediction Failure

  • The Incident: Epic Systems deployed a proprietary “Epic Sepsis Model” (ESM) to hundreds of hospitals.
  • The Failure: Independent validation published in JAMA Internal Medicine found the model missed 67% of sepsis cases and that 88% of its alerts were false positives, causing dangerous “alert fatigue” among doctors.38
  • The Consequence: The failure highlighted the risks of proprietary, “black box” clinical algorithms that have not undergone rigorous external peer review. Epic was forced to overhaul the model.39

3.3 Government & Welfare: “Automated Stategraft”

Michigan MiDAS: The Unemployment Fraud Disaster

  • The Incident: To save money, Michigan replaced human fraud auditors with an automated system (MiDAS). The system operated without human oversight, flagging discrepancies as fraud and automatically seizing tax refunds and garnishing wages.5
  • The Failure: The system had an error rate of 93%. Its logic treated a lack of response to a confusing digital questionnaire as an admission of guilt.40
  • The Penalty: After a decade of legal battles, the state agreed to a $20 million settlement for the falsely accused citizens.41
  • Key Insight: Automation without “Human-in-the-Loop” (HITL) processes in high-stakes welfare decisions violates due process and leads to massive liability.

Arkansas Medicaid: The “Black Box” Cuts

  • The Incident: Arkansas implemented an algorithm to allocate home care hours for the disabled. The new system drastically cut hours for thousands of people without explanation.42
  • The Consequence: A lawsuit led to a $500,000 settlement and a court order requiring the state to explain the algorithmic logic to beneficiaries. The court found the “black box” nature of the decision-making violated the Due Process Clause of the Constitution.44

3.4 Education: Algorithmic Grading and Surveillance

The UK A-Levels Fiasco

  • The Incident: When COVID-19 cancelled exams, the UK regulator (Ofqual) used an algorithm to standardize grades.
  • The Failure: The algorithm weighted the historical performance of the school more heavily than the individual student’s potential. This resulted in students from disadvantaged schools being downgraded, while students at private schools kept their inflated predicted grades.45
  • The Outcome: Under threat of judicial review for discrimination and GDPR violations, the government scrapped the algorithm days later in a humiliating “U-turn”.47

Remote Proctoring and Biometric Bias

  • The Incident: Universities adopted AI proctoring tools (e.g., Proctorio) that use facial detection and gaze tracking.
  • The Failure: Students of color reported that the software failed to recognize their faces due to poor lighting calibration for darker skin tones. Students with disabilities were flagged for “suspicious movements” caused by their conditions.49
  • The Consequence: Lawsuits and student protests have forced many institutions to abandon these tools, citing privacy and discrimination concerns.51

3.5 Nonprofit Sector: Fundraising Ethics and Data Privacy

Wealth Screening and Data Trading

  • The Incident: The UK Information Commissioner’s Office (ICO) investigated charities for “wealth screening”—using third-party companies to analyze the financial status of donors to target them for larger gifts, often without consent.53
  • The Penalty: Eleven charities, including the RSPCA and Oxfam, were fined. The ICO ruled that donors did not consent to having their data “enriched” or traded, viewing it as a betrayal of trust.53

St. Jude vs. Feeding America: The Ethics of AI Targeting

  • The Controversy: St. Jude Children’s Research Hospital has been scrutinized for its aggressive, data-driven fundraising, which targets potential bequest donors (those leaving money in wills) with high precision.55
  • The Risk: While highly effective, this use of predictive analytics raises ethical questions about the manipulation of vulnerable, elderly donors. AI tools that predict “propensity to give” can inadvertently target those with cognitive decline or exploit emotional vulnerabilities, risking the organization’s moral standing.55

Section 4: Key Recommended Actions for Executives

To navigate this minefield, executives in regulated industries must abandon the “move fast and break things” mentality in favor of a “governance by design” approach.

4.1 Operationalize “Human-in-the-Loop” (HITL) Frameworks

Automation should never be fully autonomous in high-stakes domains (healthcare, credit, welfare).

  • Action: Implement a mandatory “human circuit breaker.” For any AI decision that impacts a person’s legal rights or financial status (e.g., denying a claim, flagging fraud), a qualified human must review the decision before it is finalized. (A minimal sketch follows below.)
  • Context: This is now a legal requirement under the EU AI Act for “High Risk” systems and is effectively mandated by US due process case law (e.g., Arkansas Medicaid, Michigan MiDAS).5
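
A minimal sketch of such a circuit breaker, assuming a hypothetical claims workflow in which any adverse action is queued for a qualified reviewer instead of executing automatically; the action names and fields are illustrative, not a prescribed standard.

```python
# Minimal "human circuit breaker" sketch: adverse, rights-affecting decisions
# are routed to a human review queue rather than executed automatically.
from dataclasses import dataclass
import queue

@dataclass
class Decision:
    subject_id: str
    action: str          # e.g. "deny_claim", "flag_fraud", "approve_claim"
    model_score: float
    rationale: str       # model explanation shown to the human reviewer

ADVERSE_ACTIONS = {"deny_claim", "flag_fraud", "reduce_benefit"}
human_review_queue = queue.Queue()

def route(decision: Decision) -> str:
    """Auto-execute only non-adverse outcomes; adverse ones wait for a human."""
    if decision.action in ADVERSE_ACTIONS:
        human_review_queue.put(decision)   # held until a reviewer signs off
        return "pending_human_review"
    return "auto_approved"

# Example: the model wants to flag a claimant as fraudulent.
status = route(Decision("claimant-123", "flag_fraud", 0.91,
                        "Income fields inconsistent across two filings"))
print(status)                       # -> pending_human_review
print(human_review_queue.qsize())   # -> 1 decision awaiting review
```

The design choice that matters is asymmetry: the automated path can only ever approve, while every adverse outcome requires an affirmative human decision.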

4.2 Establish Continuous, Automated Governance

Annual audits are insufficient for dynamic AI models that learn and drift.

  • Action: Deploy MLOps (Machine Learning Operations) tools that provide real-time monitoring for data drift, bias, and performance degradation. (A minimal drift-check sketch follows below.)
  • Context: Just as cybersecurity is 24/7, AI governance must be continuous. The Hello Digit case showed that an algorithm that works on Day 1 can fail on Day 100 due to changing environmental data.28
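
One widely used drift heuristic is the Population Stability Index (PSI). The sketch below compares a live score distribution against the training-time baseline; the data and the alerting thresholds are illustrative rule-of-thumb values, not a regulatory standard.

```python
# Minimal drift-monitoring sketch using the Population Stability Index (PSI).
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live distribution."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    # Clip values into the baseline range so every observation lands in a bin.
    expected = np.clip(expected, edges[0], edges[-1])
    actual = np.clip(actual, edges[0], edges[-1])
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)   # avoid log(0)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(7)
baseline = rng.normal(0.0, 1.0, 100_000)   # score distribution at model sign-off
today = rng.normal(0.4, 1.2, 10_000)       # live scores have shifted

score = psi(baseline, today)
print(f"PSI = {score:.3f}")
# Common rule of thumb: < 0.1 stable, 0.1-0.25 investigate, > 0.25 escalate.
if score > 0.25:
    print("ALERT: material drift detected - pause or review automated decisions")
```

The same cadence should apply to fairness metrics: recompute them continuously on production data rather than once at launch.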

4.3 Eliminate “Proxy Bias” Through Data Hygiene

The Optum case and Amazon’s abandoned AI recruiting tool show that using proxies (e.g., cost instead of health, resume keywords instead of skill) is a primary source of bias.

  • Action: Conduct “Proxy Stress Tests” during the design phase. Explicitly ask: “Does this variable (e.g., zip code) serve as a proxy for a protected class (e.g., race)?” (A minimal screening sketch follows below.)
  • Context: This requires diverse teams. A homogenous team may not realize that “time gaps in a resume” acts as a proxy for gender (maternity leave), whereas a diverse team is more likely to spot this correlation.57
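
A minimal version of such a stress test simply measures how strongly each candidate feature predicts the protected attribute. The synthetic data, feature names, and 0.2 review threshold below are illustrative assumptions, not a validated fairness test.

```python
# Minimal "proxy stress test" sketch: before a feature ships, measure how well
# it predicts a protected attribute it is not supposed to encode.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 20_000

protected = rng.integers(0, 2, n)                         # protected class label (0/1)
features = pd.DataFrame({
    "zip_income_rank": 0.6 * protected + rng.normal(0, 1, n),   # tracks the class
    "resume_gap_months": 8 * protected + rng.poisson(3, n),     # tracks the class
    "typing_speed": rng.normal(60, 10, n),                      # unrelated control
})

# Screen: absolute correlation of each candidate feature with the protected attribute.
screen = features.apply(lambda col: abs(np.corrcoef(col, protected)[0, 1]))
print(screen.sort_values(ascending=False))

# Flag anything above an agreed review threshold for documented human sign-off.
print("Needs proxy review:", list(screen[screen > 0.2].index))
```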

4.4 Bridge the Boardroom Gap

Governance cannot succeed if the board treats AI as magic.

  • Action: Create a dedicated AI Ethics & Risk Committee at the board level, similar to the Audit Committee. Ensure this committee includes at least one member with technical AI literacy.
  • Context: With 66% of board members lacking knowledge, they cannot fulfill their fiduciary duty to oversee risk. This committee must have the power to veto high-risk deployments that lack sufficient guardrails.2

4.5 Mandate Third-Party Algorithmic Audits

You cannot outsource liability.

  • Action: Update procurement contracts to require Algorithmic Impact Assessments (AIAs) from all AI vendors. Demand “glass box” access for independent auditing of high-risk tools. (A simple procurement-gate sketch follows below.)
  • Context: The EU AI Act places obligations on the “deployer” of the system. If you buy a biased HR tool, you are liable for the discrimination it causes in your hiring process.13
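
A minimal sketch of how a procurement gate can enforce this, blocking approval until the required governance artifacts are on file. The artifact list and field names are illustrative assumptions, not any regulator’s official checklist.

```python
# Minimal procurement-gate sketch: a vendor AI tool is not approved until the
# required governance artifacts have been submitted and logged.
from dataclasses import dataclass, field

REQUIRED_ARTIFACTS = {
    "algorithmic_impact_assessment",
    "bias_audit_report",
    "model_documentation",        # intended use, training-data provenance
    "audit_access_clause",        # contractual "glass box" access for auditors
    "incident_response_contact",
}

@dataclass
class VendorSubmission:
    vendor: str
    tool: str
    artifacts: set = field(default_factory=set)

def procurement_gate(sub: VendorSubmission) -> tuple[bool, set]:
    """Return (approved, missing_artifacts)."""
    missing = REQUIRED_ARTIFACTS - sub.artifacts
    return (len(missing) == 0, missing)

sub = VendorSubmission("Acme HR AI", "resume-screener",
                       {"model_documentation", "bias_audit_report"})
approved, missing = procurement_gate(sub)
print(f"Approved: {approved}; missing: {sorted(missing)}")
```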

4.6 The “Explainability” Imperative

If you can’t explain it, don’t deploy it.

  • Action: Adopt a policy that prioritizes interpretable models (e.g., decision trees, linear regression) over black-box deep learning for regulated decisions, unless the performance gain of the black box is substantial and its opacity can be mitigated by explainability tools (SHAP/LIME). (A minimal reason-code sketch follows below.)
  • Context: The Goldman Sachs/Apple Card debacle proved that even a “fair” model will be deemed a failure by regulators and the public if its decisions are opaque.33
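
A minimal sketch of the policy in practice: an interpretable logistic-regression credit model on synthetic data, whose per-applicant feature contributions can be read off directly as candidate adverse-action reasons. SHAP or LIME would play the analogous role for a more complex model; the feature names, data, and thresholds here are illustrative assumptions, not a legally sufficient adverse-action notice.

```python
# Minimal "explainable by default" sketch: interpretable credit model whose
# per-applicant contributions yield candidate adverse-action reasons.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 5_000
X = np.column_stack([
    rng.normal(0.35, 0.15, n),   # utilization (share of credit used)
    rng.normal(2.0, 1.5, n),     # recent delinquencies
    rng.normal(8.0, 4.0, n),     # years of credit history
])
feature_names = ["utilization", "recent_delinquencies", "credit_history_years"]

# Synthetic ground truth: high utilization and delinquencies drive default risk.
logits = 3.0 * X[:, 0] + 0.8 * X[:, 1] - 0.15 * X[:, 2] - 1.5
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

def adverse_action_reasons(x: np.ndarray, top_k: int = 2) -> list[str]:
    """Rank the features pushing this applicant toward denial (positive contributions)."""
    contributions = model.coef_[0] * (x - X.mean(axis=0))
    order = np.argsort(contributions)[::-1]
    return [feature_names[i] for i in order[:top_k] if contributions[i] > 0]

applicant = np.array([0.9, 4.0, 2.0])    # high utilization, several delinquencies
print("Denial reasons:", adverse_action_reasons(applicant))
```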

Works cited

  1. The 5 biggest AI adoption challenges for 2025 – IBM, accessed February 3, 2026, https://www.ibm.com/think/insights/ai-adoption-challenges
  2. Governance of AI: A critical imperative for today’s boards, 2nd edition | Deloitte Global, accessed February 3, 2026, https://www.deloitte.com/global/en/issues/trust/progress-on-ai-in-the-boardroom-but-room-to-accelerate.html
  3. FTC Announces Crackdown on Deceptive AI Claims and Schemes, accessed February 3, 2026, https://www.ftc.gov/news-events/news/press-releases/2024/09/ftc-announces-crackdown-deceptive-ai-claims-schemes
  4. Initial Prohibitions Under EU AI Act Take Effect – Quinn Emanuel, accessed February 3, 2026, https://www.quinnemanuel.com/the-firm/publications/initial-prohibitions-under-eu-ai-act-take-effect/
  5. Case Over the Michigan Unemployment Insurance Agency’s Faulty Automated System Finally Settled | Science, Technology and Public Policy (STPP), accessed February 3, 2026, https://stpp.fordschool.umich.edu/research/policy-brief/case-over-michigan-unemployment-insurance-agencys-faulty-automated-system
  6. Rooting Out AI’s Biases | Hopkins Bloomberg Public Health Magazine, accessed February 3, 2026, https://magazine.publichealth.jhu.edu/2023/rooting-out-ais-biases
  7. Challenge against Clearview AI in Europe | Privacy International, accessed February 3, 2026, https://privacyinternational.org/legal-action/challenge-against-clearview-ai-europe
  8. Greek DPA imposes 20M euro fine on Clearview AI for unlawful processing of personal data, accessed February 3, 2026, https://iapp.org/news/a/greek-dpa-imposes-20m-euro-fine-on-clearview-ai-for-unlawful-processing-of-personal-data
  9. PwC’s 2025 Responsible AI survey: From policy to practice, accessed February 3, 2026, https://www.pwc.com/us/en/tech-effect/ai-analytics/responsible-ai-survey.html
  10. Implementation challenges that hinder the strategic use of AI in government – OECD, accessed February 3, 2026, https://www.oecd.org/en/publications/2025/06/governing-with-artificial-intelligence_398fa287/full-report/implementation-challenges-that-hinder-the-strategic-use-of-ai-in-government_05cfe2bb.html
  11. New York insurance regulator to probe Optum algorithm for racial bias – Fierce Healthcare, accessed February 3, 2026, https://www.fiercehealthcare.com/payer/new-york-to-probe-algorithm-used-by-optum-for-racial-bias
  12. AI trends 2025: Adoption barriers and updated predictions – Deloitte, accessed February 3, 2026, https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/blogs/pulse-check-series-latest-ai-developments/ai-adoption-challenges-ai-trends.html
  13. New Report Reveals 85% of Ethics & Compliance Teams Are Exposed on AI Third-Party Governance – Ethisphere, accessed February 3, 2026, https://ethisphere.com/ai-governance-risk-ethics-compliance-report-2025/
  14. Health system size impacts AI privacy and security concerns – Wolters Kluwer, accessed February 3, 2026, https://www.wolterskluwer.com/en/expert-insights/health-system-size-impacts-ai-privacy-and-security-concerns
  15. AI Act | Shaping Europe’s digital future – European Union, accessed February 3, 2026, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
  16. Navigating the European Union Artificial Intelligence Act for Healthcare – PMC, accessed February 3, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC11319791/
  17. 2026 Guide to AI Regulations and Policies in the US, UK, and EU, accessed February 3, 2026, https://www.metricstream.com/blog/ai-regulation-trends-ai-policies-us-uk-eu.html
  18. The French SA fines Clearview AI EUR 20 million | European Data Protection Board, accessed February 3, 2026, https://www.edpb.europa.eu/news/national-news/2022/french-sa-fines-clearview-ai-eur-20-million_en
  19. FTC Launches Operation AI Comply with Five Enforcement Actions Involving AI Misuse – AI: The Washington Report | Mintz, accessed February 3, 2026, https://www.mintz.com/insights-center/viewpoints/54731/2024-10-03-ftc-launches-operation-ai-comply-five-enforcement
  20. FTC Signals Tough Line in First AI Discrimination Case Under Section 5 | Perkins Coie, accessed February 3, 2026, https://perkinscoie.com/insights/update/ftc-signals-tough-line-first-ai-discrimination-case-under-section-5
  21. Enforcement of Local Law 144 – Automated Employment Decision Tools | Office of the New York State Comptroller, accessed February 3, 2026, https://www.osc.ny.gov/state-agencies/audits/2025/12/02/enforcement-local-law-144-automated-employment-decision-tools
  22. Critical audit of NYC’s AI hiring law signals increased risk for employers | DLA Piper, accessed February 3, 2026, https://www.dlapiper.com/en-us/insights/publications/2026/01/critical-audit-of-nyc-ai-hiring-law-signals-increased-risk-for-employers
  23. Artificial Intelligence Briefing: FTC to Address Commercial Surveillance and Data Security | Publications | Insights | Faegre Drinker Biddle & Reath LLP, accessed February 3, 2026, https://www.faegredrinker.com/en/insights/publications/2022/8/artificial-intelligence-briefing-ftc-to-address-commercial-surveillance-and-data-security
  24. AI Watch: Global regulatory tracker – United States | White & Case LLP, accessed February 3, 2026, https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-united-states
  25. Is a Decision-Maker’s Use of AI Unfair? – Torkin Manes, accessed February 3, 2026, https://www.torkin.com/insights/publication/is-a-decision-maker-s-use-of-ai-unfair
  26. Cautious Concern But Missing Crucial Context – Justice Brown’s Decision in Haghshenas – Welcome to Vancouver’s Immigration Blog, accessed February 3, 2026, https://vancouverimmigrationblog.com/cautious-concern-but-missing-crucial-context-justice-browns-decision-in-haghshenas/
  27. AI in Canada: The Latest from Regulators, Courts and Public Bodies | Bennett Jones, accessed February 3, 2026, https://www.bennettjones.com/Insights/Blogs/Update-Requirements-and-Guidelines-From-Canadian-Regulators
  28. Incident 1222: CFPB Reportedly Finds Hello Digit’s Automated Savings Algorithm Caused Overdrafts and Orders Redress with $2.7M Penalty, accessed February 3, 2026, https://incidentdatabase.ai/cite/1222/
  29. CFPB Takes Action Against Hello Digit for Lying to Consumers About Its Automated Savings Algorithm, accessed February 3, 2026, https://www.consumerfinance.gov/about-us/newsroom/cfpb-takes-action-against-hello-digit-for-lying-to-consumers-about-its-automated-savings-algorithm/
  30. US FinTech Hello Digit fined $2.7m for faulty algorithm – FStech Financial Sector Technology, accessed February 3, 2026, https://www.fstech.co.uk/fst/US_FinTech_Hello_Digit_Fined_2_7m_For_Faulty_Algorithm.php
  31. DFS superintendent tackles algorithm bias with Apple Card probe – City & State New York, accessed February 3, 2026, https://www.cityandstateny.com/policy/2019/11/dfs-superintendent-tackles-algorithm-bias-with-apple-card-probe/176717/
  32. Gender Bias Complaints against Apple Card Signal a Dark Side to Fintech – Baker Library, accessed February 3, 2026, https://www.library.hbs.edu/working-knowledge/gender-bias-complaints-against-apple-card-signal-a-dark-side-to-fintech
  33. Goldman cleared of bias claims in NYDFS’s Apple Card probe – Banking Dive, accessed February 3, 2026, https://www.bankingdive.com/news/goldman-sachs-gender-bias-claims-apple-card-women-new-york-dfs/597273/
  34. DFS Issues Findings on the Apple Card and Its Underwriter Goldman Sachs Bank, accessed February 3, 2026, https://www.dfs.ny.gov/reports_and_publications/press_releases/pr202103231
  35. Healthcare algorithm used across America has dramatic racial biases – The Guardian, accessed February 3, 2026, https://www.theguardian.com/society/2019/oct/25/healthcare-algorithm-racial-biases-optum
  36. Comment Letter – October 25, 2019: DFS and DOH joint letter to United Health Group Incorporated | Department of Financial Services – NY DFS, accessed February 3, 2026, https://www.dfs.ny.gov/reports-and-publications/comment-letters/dfs-doh-joint-letter-uhgi-20191025
  37. NY Regulators Probe for Racial Bias in Health-Care Algorithm – GovTech, accessed February 3, 2026, https://www.govtech.com/health/ny-regulators-probe-for-racial-bias-in-health-care-algorithm.html
  38. An Epic Failure: Overstated AI Claims in Medicine | Mind Matters, accessed February 3, 2026, https://mindmatters.ai/2021/08/an-epic-failure-overstated-ai-claims-in-medicine/
  39. Accuracy of Epic’s sepsis model faces scrutiny – Becker’s Hospital Review | Healthcare News & Analysis, accessed February 3, 2026, https://www.beckershospitalreview.com/healthcare-information-technology/ehrs/accuracy-of-epics-sepsis-model-faces-scrutiny/
  40. Michigan Unemployment Insurance False Fraud Determinations, accessed February 3, 2026, https://www.btah.org/case-study/michigan-unemployment-insurance-false-fraud-determinations.html
  41. Michiganders falsely accused of jobless fraud to share in $20M settlement – Bridge Michigan, accessed February 3, 2026, https://bridgemi.com/michigan-government/michiganders-falsely-accused-jobless-fraud-share-20m-settlement/
  42. Victories — TechTonic Justice, accessed February 3, 2026, https://www.techtonicjustice.org/victories
  43. Three disabled Arkansans obtain historic settlement and program improvements in lawsuit against DHS officials. – Legal Aid of Arkansas, accessed February 3, 2026, https://arlegalaid.org/news-events/newsroom.html/article/2023/08/07/three-disabled-arkansans-obtain-historic-settlement-and-program-improvements-in-lawsuit-against-dhs-officials-
  44. Arkansas Medicaid Home and Community Based Services Hours Cuts, accessed February 3, 2026, https://www.btah.org/case-study/arkansas-medicaid-home-and-community-based-services-hours-cuts.html
  45. U-turn on A-level algorithm in wake of JR threats – Cornerstone Barristers, accessed February 3, 2026, https://cornerstonebarristers.com/u-turn-level-algorithm-wake-jr-threats/
  46. “F**k the algorithm”?: What the world can learn from the UK’s A-level grading fiasco, accessed February 3, 2026, https://blogs.lse.ac.uk/impactofsocialsciences/2020/08/26/fk-the-algorithm-what-the-world-can-learn-from-the-uks-a-level-grading-fiasco/
  47. We put a stop to the A Level grading algorithm! – Foxglove, accessed February 3, 2026, https://www.foxglove.org.uk/2020/08/17/we-put-a-stop-to-the-a-level-grading-algorithm/
  48. A-Level results: Students challenging the algorithm via judicial review – Capital Law, accessed February 3, 2026, https://www.capitallaw.co.uk/news/a-level-results-students-challenging-the-algorithm-via-judicial-review/
  49. Are your kids being spied on? The rise of anti-cheating software in US schools, accessed February 3, 2026, https://www.theguardian.com/education/2024/apr/18/us-schools-anti-cheating-software-proctorio
  50. AI Proctoring: Academic Integrity vs. Student Rights – UC Law Journal, accessed February 3, 2026, https://www.hastingslawjournal.org/wp-content/uploads/10-Mita_final.pdf
  51. AI-Powered Remote Proctoring: Suffering the Same Old Issues That Trigger False Accusations of Cheating | LLF National Law Firm, accessed February 3, 2026, https://www.studentdisciplinedefense.com/ai-powered-remote-proctoring-suffering-the-same-old-issues-that-trigger-false-accusations-of-cheating
  52. The Surveillant University: Remote Proctoring, AI, and Human Rights – The Canadian Journal of Comparative and Contemporary Law, accessed February 3, 2026, https://www.cjccl.ca/wp-content/uploads/2022/10-Scassa.pdf
  53. Secret wealth screening by charities breaks data laws – RPC, accessed February 3, 2026, https://www.rpclegal.com/thinking/data-and-privacy/secret-wealth-screening-by-charities-breaks-data-laws/
  54. Data protection warning for charities – Stephens Scown, accessed February 3, 2026, https://www.stephens-scown.co.uk/intellectual-property-2/data-protection/data-protection-warning-charities/
  55. St. Jude fights donors’ families in court for share of estates – Fierce Healthcare, accessed February 3, 2026, https://www.fiercehealthcare.com/providers/st-jude-fights-donors-families-court-share-estates
  56. EQUITY | Ethical Issues Resulting from Using AI in Fundraising – Hilborn Charity eNEWS, accessed February 3, 2026, https://hilborn-charityenews.ca/articles/equity-ethical-issues-resulting-from-using-ai
  57. Research Summaries: Artificial Intelligence and Data for Nonprofits – BYU ScholarsArchive, accessed February 3, 2026, https://scholarsarchive.byu.edu/cgi/viewcontent.cgi?article=1122&context=joni
  58. The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed – RAND, accessed February 3, 2026, https://www.rand.org/pubs/research_reports/RRA2680-1.html
  59. Health data breach: Dedalus Biologie fined 1.5 million euros, accessed February 3, 2026, https://www.edpb.europa.eu/news/national-news/2022/health-data-breach-dedalus-biologie-fined-15-million-euros_en
  60. The New Currency of Fundraising: Trust in the Age of AI, accessed February 3, 2026, https://afpglobal.org/new-currency-fundraising-trust-age-ai

This article was written with my brain and two hands (primarily) with the help of Google Gemini, ChatGPT, Claude, and other wondrous toys.
