The widespread adoption of artificial intelligence (AI) across North American enterprises has precipitated a capital allocation cycle of historic magnitude. Driven by the generative AI boom, organizations in regulated sectors—finance, healthcare, government, education, and nonprofits—are aggressively increasing technology budgets, with 92% of large enterprises planning to ramp up AI spending in the coming fiscal years.1 Yet, a profound disconnect exists between capital committed and value realized. Industry analysis suggests that while experimentation is rampant, only a small fraction of these initiatives—approximately 5%—reach a level of maturity that delivers consistent, scalable financial returns.2 The remaining 95% of projects languish in a costly limbo known as “Pilot Purgatory,” consuming vast resources without impacting the bottom line.
This report serves as a strategic audit for senior leadership. It moves beyond the hype cycle to identify the structural, operational, and strategic vectors through which medium-to-large organizations hemorrhage capital during AI adoption. The analysis reveals that waste is rarely a product of technological failure; rather, it stems from a fundamental misalignment between the probabilistic nature of AI and the deterministic structures of legacy procurement, governance, and infrastructure.
In the following sections, we dissect the six primary mechanisms of financial waste, establish a diagnostic framework for early detection, and provide evidence-based course-correction strategies. Drawing on recent case studies—from the operational collapse of the Los Angeles Unified School District’s “Ed” chatbot to the strategic success of Cardinal Health’s AI Center of Excellence—this document outlines a path from unbridled experimentation to disciplined, value-accretive engineering.
Section I: The Landscape of Waste — Six Vectors of AI Capital Destruction
In the rush to modernize, organizations often mistake activity for progress. The following six vectors represent the most significant channels of financial loss in current AI adoption strategies across regulated North American industries.
1. The Pilot Purgatory Trap: Innovation Without Implementation
The most pervasive source of financial waste is “Pilot Purgatory,” a state where organizations fund endless proof-of-concept (PoC) initiatives that function technically within a controlled sandbox but fail to transition into production environments. Research indicates that nearly half of all AI projects never reach production, and a significant portion of generative AI (GenAI) projects are expected to be abandoned after the PoC phase due to unclear business value or escalating costs.3
The Mechanics of the Stall
Pilot Purgatory creates a sunk-cost trap where resources are consumed by perpetual experimentation. In regulated industries, this failure mode is often driven by a disconnect between the innovation labs building the models and the operational units required to deploy them. Innovation teams, often incentivized by novelty, prioritize technical metrics such as model perplexity or F1-scores over business utility. A model may achieve near-perfect accuracy in a lab setting but fail to integrate with legacy Electronic Health Record (EHR) or core banking systems due to API incompatibilities or latency issues.4
This “Science Fair” syndrome leads to a proliferation of disconnected experiments. Departments across an enterprise—marketing, HR, operations—often launch independent pilots with different vendors to solve the same problem, such as document summarization or customer segmentation. Without a centralized registry or governance framework, the organization pays for redundant research and development, effectively buying the same innovation multiple times.5
The Infrastructure Disconnect
A critical driver of this waste is “Infrastructure Blindness.” Pilots are frequently built on sanitized, static datasets that do not reflect the messy, dynamic reality of live enterprise data. When these models are introduced to production environments, they encounter data drift, latency, and security requirements that were not accounted for in the initial budget. The cost of retrofitting a pilot for production-grade security, logging, and governance often exceeds the cost of the initial build, leading executives to abandon the project entirely.4
Sector-Specific Manifestations
In the healthcare sector, hospitals invest heavily in diagnostic AI pilots that demonstrate high efficacy in radiology or pathology. However, these tools often stall because they do not integrate seamlessly into the physician’s workflow. If a radiologist must log into a separate portal to view AI insights, the tool disrupts their cognitive flow and increases the time per patient. Consequently, the tool is ignored, and the investment is rendered worthless.7 Similarly, in government, the “pilot trap” is exacerbated by rigid grant cycles. Agencies may fund a pilot with a specific grant but lack the appropriated funds for the long-term cloud compute costs required to sustain the model at scale, leading to a “zombie” project that exists on paper but delivers no value.8
2. The Infrastructure Illusion: Buying Models Before Fixing “Data Plumbing”
The second major vector of waste is the “Data-Last” fallacy. Organizations often underestimate the state of their data readiness, assuming that advanced AI models can compensate for poor data hygiene. This leads to the purchase of expensive enterprise AI licenses or GPU clusters that sit idle while data engineering teams scramble to clean datasets—a process that often takes months or years.
The High Cost of Unstructured Data
Data quality is cited by a significant plurality of organizations as the top obstacle to AI success.3 In regulated industries, data is not just messy; it is siloed and legally encumbered. A substantial portion of the valuable data in insurance and healthcare exists in “dark data” formats—PDF contracts, handwritten clinician notes, or scanned faxes. Organizations waste millions attempting to train models on this unstructured data without first investing in the optical character recognition (OCR) and data transformation pipelines necessary to make it machine-readable.4
The RAG Fallacy
Many firms are rushing to build Retrieval-Augmented Generation (RAG) systems to query internal knowledge bases. However, if the underlying documents are outdated, contradictory, or poorly tagged, the RAG system will confidently retrieve obsolete information. The financial waste here is twofold: the cost of the AI implementation and the operational cost of employees acting on bad information. The assumption that an AI can “figure it out” without a robust information architecture is a costly delusion.4
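One inexpensive safeguard is to filter retrieved passages on document metadata before they ever reach the model. The sketch below is illustrative only: the `status` and `last_reviewed` fields are hypothetical stand-ins for whatever lifecycle metadata your document store actually maintains.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Passage:
    text: str
    last_reviewed: date   # hypothetical freshness field in the document store
    status: str           # hypothetical lifecycle field: "current" or "superseded"

def filter_retrieved(passages: list[Passage], as_of: date,
                     max_age: timedelta = timedelta(days=365)) -> list[Passage]:
    """Drop stale or superseded passages before they are sent to the LLM."""
    return [p for p in passages
            if p.status == "current" and (as_of - p.last_reviewed) <= max_age]

# Example: only the current, recently reviewed policy survives the filter.
docs = [
    Passage("Travel policy v3 (2025)", date(2025, 6, 1), "current"),
    Passage("Travel policy v1 (2019)", date(2019, 2, 1), "superseded"),
]
print(filter_retrieved(docs, as_of=date(2025, 12, 1)))
```

A filter like this does not fix a broken information architecture, but it makes staleness an explicit, auditable policy rather than something the model silently absorbs.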
Cloud Waste and Idle Compute
Organizations frequently over-provision cloud resources, fearing performance bottlenecks. Without a mature FinOps practice to monitor usage, companies pay for reserved instances or dedicated GPU clusters that remain underutilized. This “cloud waste” can account for a significant percentage of the AI infrastructure budget, particularly when projects are delayed due to data unavailability.10
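Even a rudimentary FinOps check can surface this waste. The sketch below, using invented cluster names, rates, and utilization figures, illustrates the kind of weekly idle-capacity review a platform team might run against its own billing and monitoring exports.

```python
# Hypothetical weekly idle-GPU review; cluster names, the hourly rate,
# and utilization figures are all invented for illustration.
HOURLY_RATE = 32.77          # assumed cost of one reserved 8-GPU instance
clusters = {
    "train-prod": 0.78,      # average GPU utilization over the week
    "train-pilot": 0.06,     # pilot stalled waiting on data access
    "inference-a": 0.41,
}

for name, util in clusters.items():
    weekly_cost = HOURLY_RATE * 24 * 7
    wasted = weekly_cost * (1 - util)
    flag = "  <-- review or release" if util < 0.15 else ""
    print(f"{name}: ${wasted:,.0f} of ${weekly_cost:,.0f}/week idle{flag}")
```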
3. Shadow AI: The Hidden Tax of Unmanaged Risk
“Shadow AI” refers to the unsanctioned use of consumer-grade AI tools by employees. While often driven by a desire for efficiency, Shadow AI introduces catastrophic financial risks through data exfiltration, regulatory non-compliance, and cybersecurity vulnerabilities.
The Economics of a Breach
The cost of Shadow AI extends beyond the immediate risk of a data leak. Shadow AI incidents now account for a substantial portion of all data breaches and carry a significant cost premium compared to standard breaches.11 In sectors like finance and healthcare, the upload of Personally Identifiable Information (PII) or Protected Health Information (PHI) to a public Large Language Model (LLM) constitutes a direct violation of regulations such as GDPR, CCPA, or HIPAA. The resultant fines can amount to millions in penalties, far outstripping any productivity gains achieved by the unsanctioned use of the tool.12
The “Invisible” License Cost
When individual departments purchase SaaS subscriptions for AI tools on corporate credit cards (Shadow IT), the organization loses economies of scale. A centralized enterprise license is often significantly cheaper per seat than hundreds of individual “pro” subscriptions scattered across the balance sheet. This decentralized spending creates a “hidden tax” that inflates the overall cost of technology adoption without providing the security and governance benefits of an enterprise agreement.14
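The overspend is straightforward to estimate once the scattered subscriptions are inventoried. With invented but plausible seat counts and prices, the arithmetic looks like this:

```python
# Illustrative only: seat counts and per-seat prices are invented.
seats = 400
individual_pro = 30.0      # $/seat/month purchased on corporate cards
enterprise_rate = 19.0     # $/seat/month under a negotiated agreement

annual_gap = seats * (individual_pro - enterprise_rate) * 12
print(f"Annual overspend: ${annual_gap:,.0f}")   # -> $52,800
```

And that figure captures only the license delta, not the compliance exposure of ungoverned tools.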
4. The Procurement Paralysis: Rigid Buying in a Fluid Market
Government and nonprofit sectors are particularly prone to wasting money through antiquated procurement processes. The traditional “Waterfall” procurement model—where requirements are defined in detail years before deployment—is fundamentally incompatible with the rapid iteration cycle of AI.
The “Vendor Lock-In” of Obsolete Tech
Public sector agencies often sign multi-year contracts for “cutting-edge” AI solutions. Given that the state-of-the-art in AI evolves rapidly—often every six months—agencies frequently find themselves contractually obligated to pay for obsolete technology for years. By the time a solution is deployed, the underlying model may already be generations behind the market standard.15
The Custom Build Trap
There is a persistent tendency in government to build custom solutions for problems that have been solved by commercial off-the-shelf (COTS) products. Custom development is not only more expensive upfront but carries a massive long-term maintenance burden. The LAUSD “Ed” chatbot failure is a prime example of the risks associated with betting heavily on a custom development partner that lacks financial stability. The district spent millions on a solution that collapsed shortly after launch due to the vendor’s financial insolvency, leaving the district with no usable product and significant sunk costs.16
5. Misaligned Use Cases: The “Magic Bullet” Syndrome
A significant portion of AI spending is directed toward use cases where AI is either technologically immature or operationally inappropriate. This is often driven by “Executive FOMO” (Fear Of Missing Out) rather than a clear business case.17
The Automation Fallacy
In high-stakes fields like social work or mental health, attempting to replace human interaction with AI chatbots can lead to disaster. The loss of trust when an AI fails in these contexts can set an organization’s digital transformation back by years. Replacing a human touch with an algorithmic response in sensitive situations often results in lower engagement and poorer outcomes, necessitating costly interventions to repair the damage.16
Over-Engineering
Organizations frequently use Generative AI for problems that could be solved with simpler, cheaper, and more explainable technologies like Robotic Process Automation (RPA) or traditional regression models. Using a Large Language Model to extract structured data from a standardized form is an exorbitant waste of compute resources when a simple script could achieve the same result with higher accuracy and lower cost.18
6. The “Human-in-the-Loop” Blind Spot
The final vector of waste is the underestimation of the human cost of AI. AI is rarely “set it and forget it.” It requires continuous supervision, retraining, and validation.
The Hidden OpEx
If an AI system has an accuracy of 80%, a human must review 100% of its output to catch the 20% of errors. If the time taken to review the AI’s output approaches the time it would take to do the task manually, the ROI evaporates. This is often seen in legal and compliance use cases where “hallucinations” (fabrications) by the AI necessitate rigorous fact-checking.19
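The break-even condition is worth making explicit. In the illustrative calculation below (all times are assumed), an 80%-accurate system whose output takes 14 minutes to verify saves almost nothing over doing the 20-minute task manually:

```python
# Illustrative break-even check for human review of AI output.
manual_minutes = 20.0        # time to do the task by hand (assumed)
review_minutes = 14.0        # time to verify one AI output (assumed)
rework_rate = 0.20           # share of outputs wrong (80% accuracy)
rework_minutes = manual_minutes  # a wrong output is redone manually

effective = review_minutes + rework_rate * rework_minutes
print(f"Effective minutes per task with AI: {effective:.1f} vs {manual_minutes} manual")
# -> 18.0 vs 20.0: a 10% saving that licensing and compute fees can easily erase.
```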
Maintenance and Drift
AI models degrade over time as the world changes (concept drift). A fraud detection model trained on data from one year will fail to catch fraud patterns in the next. Organizations often budget for the build but fail to budget for the continuous retraining and MLOps (Machine Learning Operations) required to keep the model relevant. This oversight leads to the rapid obsolescence of expensive models.21
Section II: Early Indicators of Failure — A Diagnostic Framework
For senior leaders, the ability to detect a failing AI initiative before it consumes the annual budget is critical. The following diagnostic indicators serve as early warning systems for the six vectors of waste described above.
Financial and Budgetary Red Flags
The most immediate indicators of a failing AI strategy often appear in the financial ledgers. A sudden, unexplained deviation in cloud computing costs is a primary red flag. If cloud bills from providers like AWS or Azure spike without a corresponding increase in active users or revenue, it suggests unoptimized code, rogue training runs, or inefficient architecture.22 Similarly, if a project remains a line item on the budget for more than two consecutive fiscal quarters as a “pilot” or “PoC” without a defined path to production revenue or savings, it is likely trapped in Pilot Purgatory.5 Another subtle indicator is an uptick in expense reimbursements for “software,” “subscriptions,” or “productivity tools” in non-IT departments, which signals widespread Shadow AI adoption and a bypassing of centralized governance.14
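Monitoring for this divergence requires nothing more than two monthly series that finance and platform teams already possess. A minimal sketch, with made-up figures and an assumed 20-point divergence threshold:

```python
# Hypothetical monthly series: cloud spend ($K) and active users.
spend = [110, 118, 125, 190, 260]
users = [1200, 1250, 1240, 1235, 1230]

for month in range(1, len(spend)):
    spend_growth = spend[month] / spend[month - 1] - 1
    user_growth = users[month] / users[month - 1] - 1
    if spend_growth - user_growth > 0.20:   # divergence threshold (assumed)
        print(f"Month {month}: spend {spend_growth:+.0%}, "
              f"users {user_growth:+.0%} -- investigate")
```

In this toy data, months 3 and 4 are flagged: spend jumps while the user base is flat, the classic signature of rogue training runs or zombie instances.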
Operational and Technical Red Flags
Operationally, the “One-Off” architecture is a significant warning sign. If every AI project requires a unique technology stack, security review, and data pipeline, the organization lacks a scalable AI infrastructure, guaranteeing high maintenance costs and technical debt.23 In the testing phase, high rates of “false positives” in fraud or diagnostic models lead to “alert fatigue.” If users stop trusting the system during the pilot phase, they will actively bypass it in production, rendering the investment useless.24 Furthermore, frequent complaints from data science teams regarding “lack of access” or “poor data quality” indicate that the organization is attempting to build models before the data foundation is ready, a leading indicator of project failure.3
Organizational and Cultural Red Flags
Culturally, the “IT-Led” project is a reliable warning sign. If an AI initiative is championed solely by the CIO or CTO without a dedicated business sponsor—such as the CMO, CFO, or Chief Clinical Officer—it is far more likely to fail. Successful AI solves a business problem, not a technical one.18 Silence from the legal or compliance teams is another danger signal. If these stakeholders are not involved in the early design phases of a GenAI project, the project will likely hit a “compliance wall” immediately prior to deployment, necessitating a costly rebuild or cancellation.26 Finally, vendor secrecy regarding model training data or architecture creates an unacceptable risk for regulated industries. A vendor’s refusal to provide transparency often masks deeper issues, as seen in the collapse of AllHere, where a lack of financial and technical transparency preceded the failure.16
Diagnostic Checklist for Executives
The following table provides a quick diagnostic tool for executives to assess the health of their AI initiatives.
| Category | Warning Sign | Implication |
| --- | --- | --- |
| Strategy | “We need a GenAI strategy” (Technology-first) vs. “We need to reduce claim processing time” (Problem-first). | The project is a solution looking for a problem. High risk of abandonment. |
| Finance | Cloud costs are rising linearly (or exponentially) while active user count is flat. | Inefficient code, zombie instances, or Shadow IT proliferation. |
| Data | The team spends >50% of the timeline discussing “data access” or “legal approval” rather than modeling. | Infrastructure is not ready. The pilot will likely time out before deployment. |
| Talent | The AI team reports to IT/Innovation but has no weekly meeting with the Business Unit leader (the P&L owner). | Lack of “skin in the game.” The business unit will refuse to fund the production rollout. |
| Vendor | The vendor claims “proprietary magic” and refuses to share performance benchmarks or training data sources. | High risk of “snake oil,” bias, or regulatory non-compliance. |
Section III: Strategic Course Correction — From Waste to Value
To reverse the trend of wasted capital, organizations must transition from a posture of “unbridled experimentation” to one of “disciplined engineering.” The following strategies outline how to course-correct, supported by best practices and case studies.
1. The “Living Product” Mindset: Escaping Pilot Purgatory
Organizations must stop treating AI as a “project” with a start and end date and start treating it as a “product” with a lifecycle. This shift requires a fundamental change in how initiatives are funded and managed.
Establishing the Production Path
No pilot should be approved without a “Production Path” document. This document must define the Line of Business (LOB) budget source for post-deployment maintenance, the specific Service Level Agreements (SLAs) for uptime and accuracy, and the “Kill Criteria”—metrics that, if not met, trigger the immediate termination of the project. This discipline forces stakeholders to consider the long-term viability of the project before resources are committed.4
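Kill criteria work best when they are machine-checkable rather than buried in prose. A minimal sketch of such a gate, with invented metric names and thresholds:

```python
# Hypothetical kill-criteria gate for a quarterly pilot review;
# metric names and thresholds are invented for illustration.
KILL_CRITERIA = {
    "weekly_active_users": (">=", 50),
    "accuracy": (">=", 0.92),
    "cost_per_transaction": ("<=", 0.40),   # dollars
}

def gate(metrics: dict) -> list[str]:
    """Return the list of criteria the pilot currently fails."""
    failures = []
    for name, (op, threshold) in KILL_CRITERIA.items():
        value = metrics[name]
        ok = value >= threshold if op == ">=" else value <= threshold
        if not ok:
            failures.append(f"{name}={value} (required {op} {threshold})")
    return failures

# Two failed criteria -> the pre-agreed termination clause fires.
print(gate({"weekly_active_users": 34, "accuracy": 0.95,
            "cost_per_transaction": 0.55}))
```

The value is not the code; it is that termination becomes a pre-committed rule rather than a quarterly negotiation.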
The “15-Minute Rule” Best Practice
Lumen Technologies provides a compelling example of this focused approach. Instead of attempting to automate the entire sales process, they identified a specific bottleneck: sales teams were spending hours researching customers. By focusing on this single, measurable problem, they designed Copilot integrations that reduced research time to 15 minutes. This targeted application of AI yielded a clear ROI and avoided the pitfalls of vague, sprawling initiatives.4
2. The AI Center of Excellence (CoE): Centralized Governance, Decentralized Execution
To combat siloed purchasing and inconsistent governance, mature organizations establish an AI Center of Excellence (CoE). This body does not necessarily build every model, but it sets the standards for how models are built and bought.
Case Study: Cardinal Health
Cardinal Health successfully established an AI CoE to identify high-value use cases and enforce governance. By centralizing expertise, they avoided the chaos of disconnected pilots. Their CoE focuses on “incremental progress” rather than moonshots, using AI to optimize supply chain inventory visibility—a “boring” but high-value application. Crucially, the CoE ensures that every project has an “unmet need tied to customer success” before a single dollar is spent. This rigorous qualification process has been central to their success.28
Implementation Strategy
The CoE should act as a clearinghouse for AI procurement, ensuring that the organization leverages its collective bargaining power with vendors. It should include representatives from IT, Legal, Risk, and Business Units to ensure a holistic view of every initiative. This cross-functional approach prevents the “siloed” decision-making that often leads to failure.29
3. FinOps for AI: The New Discipline of Cost Control
Just as cloud computing spawned “Cloud FinOps,” AI requires “AI FinOps” to manage the unique costs of token consumption, vector database storage, and GPU inference.
Unit Economics and Model Routing
Organizations must implement “Unit Economics” for AI, measuring not just the total cost of the model but the cost per transaction. If an AI customer service agent costs more per resolution than a human agent, the AI is destroying value.22 To optimize costs, organizations should use “Model Routing,” where simple queries are routed to smaller, cheaper models (like Llama 3 or GPT-3.5), while complex reasoning tasks are sent to flagship models. This tiered approach can reduce inference costs by 60-80%.10
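In its simplest form, the router is nothing more than a classifier in front of two API calls. The sketch below is a toy: the model names are placeholders, and a production heuristic would use token counts, task labels, or a small trained classifier rather than keyword matching.

```python
# Illustrative tiered model router; model names and the complexity
# heuristic are placeholders, not a recommended implementation.
CHEAP_MODEL = "small-8b"        # hypothetical lightweight model
FLAGSHIP_MODEL = "flagship-xl"  # hypothetical frontier model

def route(query: str) -> str:
    """Send short, single-step queries to the cheap tier."""
    multi_step = any(k in query.lower()
                     for k in ("why", "compare", "plan", "derive"))
    return FLAGSHIP_MODEL if multi_step or len(query.split()) > 60 else CHEAP_MODEL

print(route("What is our PTO carryover limit?"))                          # -> small-8b
print(route("Compare vendor A and B contracts and plan a migration."))    # -> flagship-xl
```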
4. Risk-Tiered Governance: The “Agile Compliance” Framework
In regulated industries, a “one-size-fits-all” governance policy creates bottlenecks. A chatbot offering mental health advice requires a different risk profile than an AI summarizing meeting notes.
The AI TRiSM Framework
Implementing a tiered risk framework, such as the AI Trust, Risk, and Security Management (TRiSM) framework, allows organizations to tailor their governance to the specific risk level of each application.
- Tier 1 (High Risk): Clinical decision support, credit scoring, hiring algorithms. Requires full external audit, human-in-the-loop, adversarial testing, and explainability.
- Tier 2 (Moderate Risk): Internal knowledge retrieval, code generation. Requires internal review, automated testing, and periodic audit.
- Tier 3 (Low Risk): Marketing copy generation, meeting summarization. Requires basic usage guidelines and post-hoc review.

This approach allows low-risk innovation to proceed rapidly while concentrating compliance resources on high-stakes applications.31
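The tiering can be encoded directly into the intake workflow so that every new use case inherits its control set automatically. A minimal sketch, where the keyword-based classifier is a placeholder for a proper intake questionnaire:

```python
# Illustrative mapping from risk tier to required controls;
# the keyword classifier stands in for a structured intake form.
CONTROLS = {
    1: ["external_audit", "human_in_the_loop",
        "adversarial_testing", "explainability"],
    2: ["internal_review", "automated_testing", "periodic_audit"],
    3: ["usage_guidelines", "post_hoc_review"],
}

def classify(use_case: str) -> int:
    """Toy classifier: real intake would use a questionnaire, not keywords."""
    text = use_case.lower()
    if any(k in text for k in ("clinical", "credit", "hiring")):
        return 1
    if any(k in text for k in ("knowledge", "code")):
        return 2
    return 3

tier = classify("credit scoring model for small-business loans")
print(tier, CONTROLS[tier])   # -> 1 plus the full high-risk control set
```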
5. Data Fabric Architecture: Fixing the Plumbing
Stop building point-to-point integrations for every new AI tool. Invest in a “Data Fabric” or “Lakehouse” architecture that creates a unified access layer for data.
Metadata Management and Context
AI models are blind without context. Prioritizing metadata management—tagging data with its provenance, sensitivity level (PII/PHI), and freshness—is essential. This ensures that models have the context they need to operate accurately and securely.
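The minimum viable metadata record is small. A sketch with hypothetical field names and an assumed freshness policy:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical minimum metadata record attached to every dataset
# exposed through the unified access layer.
@dataclass
class DatasetRecord:
    name: str
    source_system: str        # provenance, e.g. "claims-core"
    sensitivity: str          # "public" | "internal" | "PII" | "PHI"
    last_refreshed: date      # freshness

    def usable_for_genai(self, as_of: date, max_age_days: int = 180) -> bool:
        """Policy sketch: no PHI, and data must be reasonably fresh."""
        fresh = as_of - self.last_refreshed <= timedelta(days=max_age_days)
        return self.sensitivity != "PHI" and fresh

rec = DatasetRecord("discharge_times", "adt-feed", "internal", date(2025, 11, 3))
print(rec.usable_for_genai(as_of=date(2025, 12, 27)))   # -> True
```

Once such a record exists, policy questions ("can this dataset feed a GenAI pipeline?") become queries rather than committee meetings.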
Case Study: Nebraska Medicine
Nebraska Medicine provides a powerful example of the value of data hygiene. By focusing on cleaning operational data related to bed capacity and discharge times, rather than complex clinical data, they utilized AI to optimize patient flow. This led to a 2500% increase in the use of the discharge lounge, significantly improving operational efficiency. This success demonstrates that fixing data plumbing for operational metrics often yields faster and more reliable ROI than complex clinical AI initiatives.33
Section IV: Sector-Specific Deep Dives
1. Healthcare: Beyond the Hype to Operational Efficiency
The Waste: Healthcare organizations often waste millions on “black box” diagnostic AI tools that clinicians refuse to trust or use. Additionally, the fragmented nature of medical data makes scaling these tools nearly impossible without massive integration costs.7
The Correction: The most effective strategy is to focus on Administrative and Operational AI.
- Cardinal Health (Supply Chain): Instead of attempting to replace pharmacists, Cardinal Health used AI to optimize the supply chain, ensuring the right drugs were in the right place. This “back office” application reduced waste and improved patient outcomes without the regulatory challenges associated with clinical AI.28
- Stroke Care ROI: A stroke center utilized AI not just for diagnosis but to coordinate the logistics of stroke response. By speeding up image analysis and alerting the right teams instantly, they reduced patient Length of Stay (LOS), saving $70,000 to $120,000 per patient. The ROI came from efficiency—getting the patient to the right care faster—not just from the diagnosis itself.34
Key Takeaway: The highest ROI in healthcare often lies in the “unsexy” areas: scheduling, coding, billing, and supply chain management. These applications face lower regulatory hurdles and deliver measurable efficiency gains.
2. Finance: The Governance Advantage
The Waste: Financial institutions waste money on “Shadow AI” where traders or analysts use unapproved tools, risking massive fines. They also suffer from “Model Risk Management” (MRM) bottlenecks, where validation takes so long that the model is obsolete by the time it is approved.
The Correction: The winning strategy involves Automated Governance and Fraud Detection.
- Citibank & nCino: Instead of building general-purpose AI, banks are deploying targeted AI for specific workflows like Continuous Credit Monitoring. nCino’s system provides transparency (explainability) which satisfies regulators. This allows them to automate the monitoring of credit risk, freeing up analysts to focus on complex cases. The ROI is found in risk avoidance and labor productivity.26
- HSBC Fraud Detection: HSBC uses AI to monitor 1.35 billion transactions monthly. By using AI to reduce “false positives” in fraud detection by 60%, they saved millions in operational costs (fewer manual reviews) while catching 2-4x more fraud. This is a classic “efficiency + revenue protection” play.36
Key Takeaway: In finance, AI should primarily be viewed as a Risk Management and Productivity tool. Use AI to handle the volume of data that humans cannot, but keep humans on the final decision layer for high-stakes credit/trading decisions.
3. Government & Education: The Procurement & Implementation Crisis
The Waste: The public sector is plagued by high-profile failures due to poor vendor selection and a lack of technical oversight. The collapse of the LAUSD “Ed” chatbot is the definitive cautionary tale.
Case Study: The LAUSD “Ed” Debacle
The Los Angeles Unified School District (LAUSD) spent approximately $3-6 million on a chatbot from a startup, AllHere, that collapsed financially months after launch. The root cause was a failure of procurement to assess the financial solvency of the vendor. The district treated the project as a “tech purchase” rather than a “strategic partnership.” When the vendor furloughed staff, the district was left with a “zombie” product and significant concerns regarding student data privacy. This failure underscores the critical need for financial due diligence in AI procurement.16
The Correction: Governments should focus on Revenue Recovery & Process Automation.
- Wilmington, Delaware: Instead of a flashy chatbot, the city used AI to analyze data on unpaid water bills and target digital ads to delinquent account holders. This project cost a fraction of a massive IT overhaul but recovered $1.1 million in revenue. It was a specific, targeted use case with a clear financial ROI.37
- FEMA: FEMA is developing an AI chatbot specifically to help staff navigate the complex “Hazard Mitigation Assistance” grants. This is an internal-facing tool to help employees do their jobs better, reducing the risk of public-facing hallucinations while improving government efficiency.38
Key Takeaway: Public sector entities should focus on Internal-Facing AI (to assist overworked staff) and Revenue/Fraud Recovery before attempting high-risk, public-facing automated agents.
4. Nonprofits: Overcoming Resource Strain
The Waste: Nonprofits often suffer from “Donor CRM Failure.” They buy sophisticated tools like Salesforce NPSP but lack the staff to maintain the data. The data becomes “dirty,” and the AI features (like predictive donor scoring) become useless.39
The Correction: The strategy should focus on Shared Services & Data Hygiene.
- Lone Star Legal Aid: This organization built “Juris,” an internal AI tool to help attorneys find case law and internal documents. By building it in-house with a phased approach ($2,000/year infra cost), they avoided massive vendor fees and created a tool that directly aided their mission. This “low-code/no-code” approach allows nonprofits to leverage AI without the massive overhead of enterprise solutions.40
Conclusion: The Path to “AI Maturity”
The era of “AI Tourism”—where organizations could afford to visit the future without living there—is over. As financial pressures mount and regulatory scrutiny tightens, the organizations that succeed will be those that treat AI not as a magic trick, but as a disciplined industrial process.
To stop wasting money, leaders must:
- Kill zombie pilots that lack a path to production.
- Redirect budget from buying new models to fixing old data.
- Tier their governance, allowing speed for low-risk tools and enforcing rigor for high-risk ones.
- Empower a Center of Excellence to centralize expertise and decentralize execution.
The paradox of AI is that to move fast, you must first slow down. You must fix the plumbing, train the people, and secure the perimeter. Only then can the capital invested in AI transform from a cost center into a competitive engine.
Works cited
- Scaling gen AI in the life sciences industry – McKinsey, accessed December 27, 2025, https://www.mckinsey.com/industries/life-sciences/our-insights/scaling-gen-ai-in-the-life-sciences-industry
- Stop Wasting Money on Failed AI Use Cases – Reworked, accessed December 27, 2025, https://www.reworked.co/digital-workplace/stop-wasting-money-on-failed-ai-use-cases/
- The Surprising Reason Most AI Projects Fail – And How to Avoid It at …, accessed December 27, 2025, https://www.informatica.com/blogs/the-surprising-reason-most-ai-projects-fail-and-how-to-avoid-it-at-your-enterprise.html
- Why most enterprise AI projects fail — and the patterns that actually …, accessed December 27, 2025, https://workos.com/blog/why-most-enterprise-ai-projects-fail-patterns-that-work
- Scaling AI in Healthcare: Escaping Pilot Purgatory – Nisum, accessed December 27, 2025, https://www.nisum.com/nisum-knows/scaling-ai-in-healthcare-escaping-pilot-purgatory
- AI ROI: The paradox of rising investment and elusive returns – Deloitte, accessed December 27, 2025, https://www.deloitte.com/nl/en/issues/generative-ai/ai-roi-the-paradox-of-rising-investment-and-elusive-returns.html
- Economics of Artificial Intelligence in Healthcare: Diagnosis vs. Treatment – PMC, accessed December 27, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC9777836/
- Pilot Purgatory: What It Is and How the Best Have Beaten It – IoT World Today, accessed December 27, 2025, https://www.iotworldtoday.com/iiot/what-pilot-purgatory-is-and-how-the-best-have-beaten-it
- Empowering Impact: The Perils and Promise of AI in the Nonprofit Sector – The IC² Institute, accessed December 27, 2025, https://ic2.utexas.edu/empowering-impact-the-perils-and-promise-of-ai-in-the-nonprofit-sector/
- From Chaos To Clarity: 5 FinOps Best Practices For 2025 – CloudZero, accessed December 27, 2025, https://www.cloudzero.com/blog/finops-best-practices/
- How Shadow AI Costs Companies $670K Extra: IBM’s 2025 Breach Report – Kiteworks, accessed December 27, 2025, https://www.kiteworks.com/cybersecurity-risk-management/ibm-2025-data-breach-report-ai-risks/
- Shadow AI Poses Greater Risks Than Most Health Care Organizations Realize, Report Says, accessed December 27, 2025, https://hooperlundy.com/shadow-ai-poses-greater-risks-than-most-health-care-organizations-realize-report-says/
- The Risk of Shadow AI in Healthcare and why it matters – Sagility, accessed December 27, 2025, https://sagilityhealth.com/the-risk-of-shadow-ai-in-healthcare-and-why-it-matters/
- Small Purchases, Big Risks: Shadow AI Use In Government – Forrester, accessed December 27, 2025, https://www.forrester.com/blogs/small-purchases-big-risks-shadow-ai-use-in-government/
- AI Won’t Outrun Bad Procurement – RAND, accessed December 27, 2025, https://www.rand.org/pubs/commentary/2025/09/ai-wont-outrun-bad-procurement.html
- LAUSD’s AI chatbot fail – LAist, accessed December 27, 2025, https://laist.com/brief/news/education/communities-demand-transparency-after-ed-lausds-ai-chatbot-fails
- White House Issues Executive Order to Establish Uniform National AI Standards, accessed December 27, 2025, https://www.morganlewis.com/pubs/2025/12/white-house-issues-executive-order-to-establish-uniform-national-ai-standards
- Why Your AI Pilots Are Stuck in Purgatory – RT Insights, accessed December 27, 2025, https://www.rtinsights.com/why-your-ai-pilot-is-stuck-in-purgatory-and-what-to-do-about-it/
- AI is Destroying the University and Learning Itself – Current Affairs, accessed December 27, 2025, https://www.currentaffairs.org/news/ai-is-destroying-the-university-and-learning-itself
- FINOS AI Governance Framework, accessed December 27, 2025, https://air-governance-framework.finos.org/
- What Is Model Drift? | IBM, accessed December 27, 2025, https://www.ibm.com/think/topics/model-drift
- FinOps for AI: A Guide To Managing AI Cloud Costs – ProsperOps, accessed December 27, 2025, https://www.prosperops.com/blog/finops-for-ai/
- AI ROI: The paradox of rising investment and elusive returns | Deloitte Global, accessed December 27, 2025, https://www.deloitte.com/global/en/issues/generative-ai/ai-roi-the-paradox-of-rising-investment-and-elusive-returns.html
- AI KPIs: How to Track and Measure AI Performance – Corporate Finance Institute, accessed December 27, 2025, https://corporatefinanceinstitute.com/resources/data-science/ai-kpis-tracking-performance/
- Avoid pilot purgatory in 7 steps | McKinsey & Company, accessed December 27, 2025, https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/the-organization-blog/avoid-pilot-purgatory-in-7-steps
- Aligning AI Governance With Bank Goals – Risk Management Association, accessed December 27, 2025, https://www.rmahq.org/blogs/2024/aligning-ai-governance-with-bank-goals/?gmssopc=1
- Escaping Pilot Purgatory. Why Technical Success is a Vanity… | by Ashish Jaiman, accessed December 27, 2025, https://ashishjaiman.medium.com/escaping-pilot-purgatory-dc4a932e40a1
- Inside Cardinal Health’s AI Center of Excellence, accessed December 27, 2025, https://newsroom.cardinalhealth.com/Inside-Cardinal-Healths-AI-Center-of-Excellence
- How to Build Your AI Center of Excellence in 2025: A Guide – Tredence, accessed December 27, 2025, https://www.tredence.com/blog/ai-center-of-excellence
- Oracle Launches an AI Center of Excellence for Healthcare to Help Customers Maximize the Value of AI Across Clinical, Operational, and Financial Workflows, accessed December 27, 2025, https://www.oracle.com/news/announcement/oracle-launches-an-ai-center-of-excellence-for-healthcare-2025-09-10/
- How Risk Tiering helps you focus AI Governance where it matters most – Yields.io, accessed December 27, 2025, https://www.yields.io/blog/risk-tiering-prioritising-oversight/
- The AI Risk Matrix: Evolving AI Safety & Security for Today | Scale, accessed December 27, 2025, https://scale.com/blog/risk-matrix
- From hype to value: aligning healthcare AI initiatives and ROI – Vizient Inc., accessed December 27, 2025, https://www.vizientinc.com/insights/blogs/2025/from-hype-to-value-aligning-healthcare-ai-initiatives-and-roi
- Redefining AI ROI in Healthcare: The New Framework that Puts Clinical Use Cases First, accessed December 27, 2025, https://premierinc.com/newsroom/blog/redefining-ai-roi-in-healthcare-the-new-framework-that-puts-clinical-use-cases-first
- AI Trends in Banking 2025: The Strategic Transformation of Financial Services – nCino, accessed December 27, 2025, https://www.ncino.com/blog/ai-accelerating-these-trends
- AI Governance in Finance: Key Strategies and Challenges – Ideas2IT, accessed December 27, 2025, https://www.ideas2it.com/blogs/ai-governance-in-finance
- Using AI in Local Government: 10 Use Cases – Oracle, accessed December 27, 2025, https://www.oracle.com/artificial-intelligence/ai-local-government/
- Federal Emergency Management Agency – AI Use Cases | Homeland Security, accessed December 27, 2025, https://www.dhs.gov/ai/use-case-inventory/fema
- How to overcome donor management hurdles with a modern CRM solution – PwC, accessed December 27, 2025, https://www.pwc.ch/en/insights/case-study-ngo-fundraising-campaign.html
- Legal aid leads on AI: How Lone Star Legal Aid built Juris to deliver faster, fairer results, accessed December 27, 2025, https://www.thomsonreuters.com/en-us/posts/ai-in-courts/legal-aid-ai-lone-star-juris/
This article was written with my brain and two hands (primarily) with the help of Google Gemini, ChatGPT, Claude, and other wondrous toys.