Research: Strategic Barriers to AI Adoption in Government

The current trajectory of artificial intelligence (AI) adoption across the global public sector is characterized by a profound and widening divergence between technological aspiration and organizational capability. As the international community enters the mid-point of the 2020s, the “Initial Euphoria” phase of generative AI has transitioned into what analysts describe as an “Operational Reckoning”.1 For federal, provincial, and municipal governments in North America, Europe, and Australia, the challenge is no longer merely the acquisition of algorithmic models, but the systemic redesign of institutional structures that were fundamentally built for a pre-digital era. This report provides an exhaustive investigation into the key barriers hindering adoption, focusing on the cultural, psychological, and regulatory frictions that define the modern public sector experience.

Executive Summary: The Emerging AI Divide

The global landscape of AI in government is increasingly defined by a “Digital Divide” that separates high-maturity entities from those struggling with foundational legacy issues. In 2025, the adoption of generative AI tools grew by 1.2 percentage points globally, with roughly one in six people now utilizing these technologies in a professional or personal capacity.3 However, this progress is highly uneven; adoption in the “Global North” is growing nearly twice as fast as in the “Global South,” creating a geopolitical disparity in administrative efficiency.3 Within developed economies, a similar divide is visible between government levels. Federal agencies typically lead with higher daily usage rates—approximately 64%—compared to just 48% at the state and local levels.4

This disparity is driven by a complex interplay of systemic barriers. At the federal level, progress is frequently stalled by a deeply ingrained culture of risk aversion and the suffocating weight of legacy IT infrastructure.5 In contrast, state and municipal governments face acute talent shortages and a critical lack of standardized governance, leading to a reliance on “Bring Your Own AI” (BYOAI) practices that expose institutions to significant security risks.4 This report tests three foundational hypotheses: that trust is the primary currency of adoption; that “pilot purgatory” is a consequence of data immaturity; and that success is contingent upon the alignment of funding with binding policy. The evidence suggests that while funding provides the necessary fuel, it is the maturity of the underlying data foundations and the courage of leadership to modernize “Red Tape” that determines whether a government realizes a return on its investment.2

Institutional Friction: Federal Barriers to Scalable AI Adoption

Federal governments in Canada, the United States, the United Kingdom, France, Germany, and Australia are currently attempting to manage the largest technological transition in their administrative histories. While the volume of reported use cases has surged—reaching over 3,600 in the U.S. federal government alone by 2025—the majority of these remain experimental or concentrated within a small subset of high-resource agencies.6 The following analysis details the specific mindset, cultural, and technical barriers at the federal level.

The Leadership Mindset: Risk Aversion and the “Chilling Effect”

The primary psychological barrier at the federal level is a pervasive culture of risk aversion. Public sector leaders operate under a unique set of pressures, where the reputational cost of a highly visible failure far outweighs the perceived benefits of an efficiency gain.6 In Australia, the Digital Transformation Agency (DTA) observed that agencies remain intensely cautious, often limiting testing to low-risk, less complex use cases to avoid unintended harm or reputational damage.10 This mindset creates a “chilling effect,” particularly in European jurisdictions where the complexity of the EU AI Act and GDPR can inadvertently inhibit innovation due to an overwhelming fear of non-compliance.11

In Germany, this risk aversion is codified in a cultural requirement for human oversight. Only 14% of German decision-makers currently feel comfortable delegating critical processes entirely to machines, and 43% of AI-generated results still require review by subject matter experts.12 This “human-in-the-loop” necessity, while ethically sound, often acts as a friction point that slows the scaling of AI tools across high-volume workflows.12 Similarly, in Canada, the Treasury Board Secretariat has identified “institutional resistance” as a key hurdle, noting that federal policies have historically been slow to revise for the AI era, leaving a patchwork of guidelines that are difficult for project teams to navigate.13

Organizational Culture: The Inertia of Hierarchy

Federal public services are historically characterized by hierarchical silos and a “slow to change” cultural ethos. The adoption of AI requires cross-functional collaboration between IT, legal, privacy, and service-delivery departments—a mode of operation that conflicts with the traditional siloed structure of government agencies.13 In the UK, research indicates that while 79% of public sector leaders believe AI will deliver a strategic advantage, only 23% say their operating models reliably enable alignment across departments.14

This cultural inertia is further compounded by the “Red Tape” associated with security and administrative approvals. In the United States, structural barriers such as the Paperwork Reduction Act (PRA) and the “Authority to Operate” (ATO) processes—designed for static legacy software—can take six to nine months for approval.6 These processes are fundamentally incompatible with the dynamic, iterative nature of AI models, which require continuous maintenance and updating.6

Technical and Regulatory Barriers: The Legacy Debt

The technical foundation of federal AI is currently burdened by decades of legacy IT debt. In the United Kingdom, 21 out of 72 “highest-risk” legacy systems identified in the government’s digital roadmap still lack remediation funding.7 These systems often “lock away” data in formats that are inaccessible to modern AI models, making data sharing and high-quality training nearly impossible.7

Regulation also acts as a double-edged sword. While frameworks like Canada’s Directive on Automated Decision-Making provide necessary guardrails, the lack of “actionable guidance” for specific policy areas often leaves project teams in a state of paralysis.5 Furthermore, the lack of common use case identifiers in federal inventories—particularly in the US—makes it difficult for researchers and oversight bodies to track the progress and risks of projects over time, leading to a lack of transparency that erodes public trust.6

Regional and Municipal Constraints: The Frontline Delivery Challenge

At the provincial/state and municipal levels, the barriers to AI adoption shift from “bureaucratic inertia” to “resource exhaustion.” These levels of government are responsible for the direct delivery of essential services—such as housing, social security, and infrastructure maintenance—yet they possess a fraction of the technical resources available to federal counterparts.

State and Provincial Level: The Regulatory Patchwork

State and provincial governments often find themselves navigating a “patchwork” of disparate national and local regulations. In the United States, 45 state legislatures introduced over 550 bills related to AI regulation in 2025, covering everything from procurement to liability standards.8 This influx of regulatory action creates a state of “uncertainty” for regional leaders, who are unsure of how to deploy AI safely without violating emerging local laws.16

Furthermore, states face an acute “IT staffing crisis,” with over 450,000 unfilled cybersecurity roles nationwide.8 This talent gap is particularly pronounced in the public sector, where salaries and budgets are significantly smaller than in the private sector. Consequently, 60% of public sector professionals cite the AI skills gap as their single biggest implementation obstacle.4

Municipal Level: The Outsourcing Trap

Municipalities face the most severe resource constraints. Research into Canadian municipalities reveals a heavy dependency on outsourcing, as local authorities lack the internal capacity to develop AI in-house.17 This reliance on external vendors creates a “black box” problem: procurement frameworks often lack clauses for algorithmic transparency, leaving municipalities with little oversight over the systems they deploy.17

Additionally, municipalities are frequently “entangled” with distant corporate technologies that prioritize data extraction over local needs.17 This infrastructural dependence is exacerbated by a lack of civic participation; despite being the level of government closest to the people, municipal AI governance rarely includes meaningful community engagement, leading to a significant “trust gap” among citizens who are naturally skeptical of automated decisions in sensitive areas like housing or policing.17

Comparative Summary Tables of Adoption Barriers

The following tables synthesize the qualitative data into a comparative framework, illustrating how barriers manifest differently across the three levels of government and across the targeted nations.

Table 1: Primary AI Adoption Barriers by Government Level (North America & Australia)

| Barrier Category | Federal Level (US, Canada, AU) | State/Provincial Level | Municipal Level |
| --- | --- | --- | --- |
| Leadership Mindset | Risk aversion; fear of reputational damage; focus on compliance.6 | Caution due to ambiguous ROI; focus on immediate service efficiency.16 | Mindset that AI is an “inevitable pothole”; lack of strategic priority (only 22%).17 |
| Organizational Culture | Hierarchical silos; slow procurement cycles; security-first posture.13 | Inconsistent rules across jurisdictions; fragmented departmental progress.8 | Heavy reliance on outsourcing; lack of in-house oversight and transparency.17 |
| Technical Infrastructure | Deep legacy debt; high-risk system maintenance.7 | Lack of enterprise data infrastructure; fragmented systems.4 | Inadequate data infrastructure; dependency on vendor-provided platforms.4 |
| Regulatory Landscape | Federal directives (e.g., US EO 14110); national security exemptions.6 | Regulatory patchwork (550+ bills in US states); contradictory guidance.8 | Lack of local policy (22% have no AI policy); gaps in public participation.4 |
| Workforce & Talent | High expertise but retention challenges; focus on AI literacy.4 | Acute staffing crisis; 450,000+ unfilled roles in cyber/IT.8 | Dependency on interns/researchers; lack of specialized privacy/security staff.17 |

Table 2: European Specific Barriers (UK, France, Germany)

| Barrier Category | United Kingdom (UK) | France (FR) | Germany (DE) |
| --- | --- | --- | --- |
| Primary Structural Barrier | Legacy IT systems (21/72 high-risk need funding); poor data quality.7 | Talent migration concerns; need for domestic “Tibi” late-stage funding.21 | Indispensable human review (43% of results); desire for predictable pricing (89%).12 |
| Cultural Mindset | Focus on “measurable benefits” (only 8% of projects currently show them).5 | Focus on digital sovereignty and ethical leadership in the G7.22 | Pragmatism over speed; security/sovereignty prioritized over innovation speed.12 |
| Regulatory Friction | Sector-specific approach; focus on transparency records (only 33 published).7 | Strong integration of AI in HR management; focus on “AI for All” training.5 | High compliance costs for SMEs; fear of “chilling effect” on innovation.11 |
| Success Bottlenecks | Lack of systematic mechanism to disseminate pilot learnings.7 | Computational power gaps for startups/research (GENCI program needs scaling).21 | Outdated infrastructure in manufacturing/industrial base slowing modernization.12 |

Testing Hypothesis 1: The Trust Deficit as an Adoption Anchor

Hypothesis: A fundamental lack of trust—both internal (among workers) and external (among citizens)—is the primary reason AI adoption has failed to scale beyond back-office functions.

The research indicates that trust is the “missing ingredient” in the public sector AI equation. At the workforce level, a significant “AI Trust Gap” has formed: according to global studies, 62% of leaders “welcome” AI, but only 52% of employees share that sentiment.26 This 10-point gap is driven by a perception that human welfare is not a leadership priority during implementation. While 70% of leaders believe AI should allow for human review, 42% of employees believe their organizations lack a clear understanding of which systems should be automated and which should remain human-led.26

The Authenticity Crisis in the Office

The trust gap also manifests in the daily interactions of the workforce. As AI-generated content becomes indistinguishable from human work, employees are beginning to question the authenticity of communication: 56% of workers report that their trust in a coworker would decrease if they discovered that a message presented as human-written was actually AI-generated.27 Furthermore, 66% of workers admit they have been “fooled” by AI-generated content at least once, leaving a persistent sense of workplace uncertainty.27

Public Skepticism and the “Demonstrably Trustworthy” Requirement

Public trust in government AI is even more precarious. In the United States, only 17% of citizens believe AI will have a positive impact over the next 20 years.6 This skepticism is fueled by transparency gaps: agencies often provide limited information about risk mitigation for “high-risk” systems in law enforcement or healthcare.6 In the UK, the “Algorithmic Transparency Recording Standard” has seen poor compliance, with only 33 records published by January 2025, leading to calls for the government to be more “demonstrably trustworthy”.7

Conclusion on Hypothesis 1: The hypothesis is confirmed. Trust is not a “soft” outcome but a hard structural requirement. Without it, employees engage in “passive resistance” and the public refuses to adopt AI-enabled services, resulting in a low ROI for government investments.7

Testing Hypothesis 2: The Pilot Purgatory Trap

Hypothesis: Public sector organizations are trapped in “pilot purgatory” because they prioritize the procurement of AI models over the modernization of data foundations and governance.

The transition from a 50-person pilot to an enterprise-wide deployment is where the vast majority of AI value “dies” in the public sector.1 This “Pilot Purgatory” is a symptom of failing to fix the underlying data “swamp” before introducing advanced models.

The Numbers of Failure

Research reveals a sobering reality for AI scaling:

  • Failure Rates: RAND Corporation has tracked AI project failure rates north of 80%, consistently citing poor data quality and integration as the root causes.2
  • The ROI Gap: 95% of generative AI pilot programs are not delivering measurable profit and loss (P&L) impact.2
  • Silent Failures: In production environments, roughly 1 in 20 AI requests are failing “silently”—returning a confident but incorrect answer—which erodes the reliability of the system.2
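To see why a roughly 1-in-20 silent-failure rate is so corrosive at scale, it helps to work through the compounding arithmetic. The sketch below is illustrative only: the 5% per-request failure rate comes from the source, while the step counts are assumptions chosen to show how reliability decays across chained tasks.

```python
# If roughly 1 in 20 requests returns a confidently wrong answer,
# each individual step is only about 95% reliable (illustrative).
per_step_reliability = 0.95  # 1 - 1/20 silent-failure rate

# Multi-step workflows compound the per-step reliability:
for steps in (1, 5, 10, 20):
    workflow_reliability = per_step_reliability ** steps
    print(f"{steps:2d} steps -> {workflow_reliability:.1%} fully correct")
```

At ten chained steps, a workflow with no visible errors is fully correct barely 60% of the time, which is why silent failures erode trust in a system faster than headline accuracy figures suggest.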

The Illusion of the Sterile Pilot

The fundamental deception of the AI pilot is its “sterile environment.” During testing, models are fed curated datasets and shielded from the “chaotic sprawl” of authentic corporate infrastructure.1 When these models are moved into production, they often encounter “Enterprise Data Swamps” where poor integration leads to irrelevant outputs, causing employees to abandon the tool.1 Furthermore, the “Hidden Tax” of compliance, security, and legal oversight—which is nearly zero during a pilot—can add as much as 17% to the total system cost during full rollout.1
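A back-of-the-envelope model makes the “Hidden Tax” concrete. Every figure here is a hypothetical assumption except the 17% compliance overhead, which comes from the source:

```python
def full_rollout_cost(pilot_cost: float, scale_factor: float,
                      compliance_overhead: float = 0.17) -> float:
    """Rough estimate of the cost of scaling a pilot to production.

    pilot_cost: spend on the sterile pilot phase (hypothetical figure).
    scale_factor: multiplier for integration and infrastructure at scale
    (hypothetical). compliance_overhead: the "Hidden Tax" of compliance,
    security, and legal oversight -- near zero in a pilot, but up to 17%
    of total system cost at full rollout.
    """
    base_cost = pilot_cost * scale_factor
    return base_cost * (1 + compliance_overhead)

# A hypothetical $500k pilot scaled tenfold, plus the hidden tax:
print(f"${full_rollout_cost(500_000, 10):,.0f}")
```

The point is not the specific numbers but the structure: the overhead term is absent during the pilot, so pilot economics systematically understate production cost.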

Conclusion on Hypothesis 2: The hypothesis is confirmed. “Pilot purgatory” is a governance and data failure, not a model failure. Success belongs to the 5% of organizations that “invested in the data foundation before the model” and treated AI-ready data as a deliverable rather than a side effect.2

Testing Hypothesis 3: The Causal Link Between Success and Funding/Policy

Hypothesis: AI adoption success is directly correlated with centralized funding and the existence of binding, whole-of-government policies rather than siloed agency initiatives.

The evidence suggests that while funding provides the capacity for AI, policy provides the guardrails for scale.

The Funding Paradox

Massive investments do not automatically lead to maturity. Canada’s $2.4 billion federal budget allocation in 2024 is a signal of leadership intent, but federal officials admit that without addressing “talent shortages” and “infrastructure gaps,” the funding may result in fragmented, non-scalable projects.19 In the US, civilian agencies have committed over $3 billion to AI, yet adoption remains concentrated in large agencies (76% of cases), suggesting that smaller agencies are “funding-starved” and unable to build necessary baseline capacity.6

Policy as an Accelerant

Binding policy acts as a “permission slip” for innovation. In the US, federal agencies with established AI policies have higher usage rates (64%) than state and local entities where policies are often missing (22% lack policies).4 Australia’s experience with its “GovAI” hosting service and centralized AI assurance framework suggests that a “whole-of-government” approach lifts the collective confidence of agencies, allowing them to move beyond low-risk pilots.29

Conclusion on Hypothesis 3: The hypothesis is confirmed. Success is most prominent where funding is paired with “cross-agency governance,” “sandbox environments,” and “structured systems to measure outcomes”.18 Countries like Canada and Australia are moving toward this model by establishing central AI Centers of Excellence (CoE) to overcome “agency-level” paralysis.13

Canadian Use Cases: From Supply Chains to Social Housing

Canada’s approach to AI is distinguished by its “Cluster” strategy and its commitment to “Responsible Use.”

Case Study 1: The Scale AI Global Innovation Cluster

The Scale AI cluster represents a successful model of public-private collaboration. Based in Montreal and partially funded by federal and Quebec governments, it has delivered significant value beyond conventional supply-chain use cases.

  • Method: Co-investment with industry ($264M public funding matched by $409M private) focusing on “IP thinking”.31
  • Impact: AI models developed within the cluster are now used in retail, manufacturing, healthcare, and energy to optimize complex global supply chains, enhancing Canada’s global competitiveness.31

Case Study 2: Chronic Homelessness AI (CHAI) – London, Ontario

The City of London, Ontario, developed an AI model to address a critical municipal social challenge: homelessness.

  • Method: The CHAI model predicts when individuals are at risk of chronic homelessness, allowing for targeted intervention by social services.17
  • Impact: The city released the model on a publicly accessible platform to improve replicability, encouraging other municipalities to adopt data-driven predictive tools for social welfare.17

Case Study 3: G7 GovAI Grand Challenge

Canada has also pioneered the “Rapid Solution Labs” (RSLs) to tackle the specific barriers identified in this report.

  • Method: Innovators are invited to solve problem statements such as “processing high volumes of information” and “categorizing unstructured documents”.32
  • Impact: Development of “Multi-Agent AI Bias Detection” systems and “Infrastructure Sovereign Intelligence” platforms, which are designed to work within existing public frameworks while being interoperable and scalable.32

European Use Cases: Digital Sovereignty and Citizen Service

European governments are utilizing AI to modernize their social security systems and infrastructure while maintaining a strong focus on the “Ethical AI” standards of the EU AI Act.

Case Study 4: Social Security Automation – Kela, Finland

Finland’s national social security institution (Kela) has become a global exemplar for AI-led administrative efficiency.

  • Method: An AI platform automates the classification and processing of documents attached to benefit applications.33
  • Impact: An estimated saving of 38 years of full-time equivalent (FTE) work for case workers per year, allowing staff to focus on complex, human-centric tasks rather than manual filing.33

Case Study 5: The “Foundry” – Birmingham, UK

Birmingham City Council established an internal unit known as the “Foundry” to accelerate digital transformation.

  • Method: A centralized team focuses on building digital skills and capability while testing AI tools to close municipal budget gaps.34
  • Impact: Streamlined administrative workflows that allow the council to maintain service levels during periods of extreme fiscal pressure.34

Case Study 6: AI-Powered Road Maintenance – Hertfordshire, UK

To combat the infrastructure debt common in regional governments, Hertfordshire County Council partnered with researchers to automate road repairs.

  • Method: Deployment of an AI-powered road maintenance robot developed in collaboration with technology companies and academics.34
  • Impact: The robot autonomously detects and repairs road defects, significantly reducing the long-term cost of manual road inspections and improving public safety.34

Case Study 7: Central Banking AI – Banque de France

The Banque de France has integrated AI into its daily work through a robust “data platform” and in-house expertise.

  • Method: A two-part roadmap focusing on “AI for all” to ensure every employee can use tools for data science and AI engineering.25
  • Impact: Improved explainability and fairness in financial algorithms, protecting the bank’s autonomy from non-European technology dependencies and extraterritorial laws.25

Strategic Action Playbook: For Progressive and Lagging Leaders

Based on the barriers and success stories analyzed, the following strategic actions are recommended for government leaders at all levels.

For Progressive Leaders (Scaling at Maturity)

Progressive leaders currently occupy the “Top 5%” of the adoption curve. Their challenge is to move from “individual success” to “systemic advantage.”

| Action | Strategic Objective | Implementation Mechanism |
| --- | --- | --- |
| Mandate Data Lineage | To eliminate the “Silent Failure” rate and improve reliability.2 | Invest in “AI-ready data” as a deliverable for every IT project; decommission legacy “dead weight” applications.2 |
| Deploy Agentic Systems | To automate multi-step complex tasks beyond simple text generation.9 | Use “Small, Open-Source Models” to reduce dependency on major cloud providers and ensure sovereignty.9 |
| Formalize Ethics Committees | To bridge the internal “Trust Gap” with workers.35 | Establish board-level Governance and Ethics committees that include employee representation and “human-in-the-loop” mandates.26 |
| International Standardization | To ensure interoperability and avoid “Lock-in”.15 | Participate in global platforms like the G7 GovAI or GPAI to share standards for safety and procurement.32 |

For Lagging Leaders (Building the Foundation)

Lagging leaders are often hindered by legacy debt and risk aversion. Their challenge is to “Break the Paralysis.”

| Action | Strategic Objective | Implementation Mechanism |
| --- | --- | --- |
| Prioritize AI Literacy | To reduce worker fear and “BYOAI” security risks.4 | Implement mandatory, baseline AI training for all staff, focusing on critical thinking and the “responsible use” of open tools.35 |
| Remediate Legacy Tech | To unlock data trapped in 20th-century systems.7 | Establish a “Common Service Standard” and request dedicated remediation funding for high-risk systems.7 |
| Standardize Procurement | To gain oversight over external “Black Box” vendors.17 | Adopt the DTA’s “Model AI Contract Clauses” or the UK’s “Algorithmic Transparency Record” standards.7 |
| Pilot Low-Risk Use Cases | To build institutional confidence without high stakes.10 | Utilize “Sandboxes” for internal tasks like meeting summaries or document classification (e.g., Kela model).18 |

The Future Outlook: Toward a “Common Operating Picture”

As governments move into 2026, the public sector’s focus will shift from “AI Exploration” toward a “Common Operating Picture” (COP): seamless, AI-supported information-sharing systems that improve coordination across levels of government during unfolding events such as natural disasters or national security crises.40

However, the realization of this vision is contingent on a fundamental shift in government mindset. AI is no longer a “technology project” but a “social and institutional transformation”.41 Governments must recognize that the most successful AI implementations are those that “experiment first, scale later,” allowing for the controlled failure of pilots while doubling down on the data foundations that make enterprise-scale success possible.36

The bill is coming due for public sector AI investment. Boards and citizens are no longer asking what AI can do; they are asking why it is not producing measurable returns.2 The answer lies in the “Execution Gap”: the space between high-level ambition and the messy, day-to-day reality of legacy data and institutional risk aversion. Leaders who can close this gap by prioritizing trust, data quality, and binding policy will define the next generation of efficient, resilient, and citizen-centered governance.

Works cited

  1. AI Pilot Purgatory: Why Enterprise AI Rollouts Fail to Scale – UC Today, accessed May 10, 2026, https://www.uctoday.com/productivity-automation/ai-pilot-purgatory-enterprise-scaling/
  2. The Bill Comes Due: Why “AI Pilot Purgatory” Is About to Define the 2026 Boardroom, accessed May 10, 2026, https://www.solix.com/blog/the-bill-comes-due-why-ai-pilot-purgatory-is-about-to-define-the-2026-boardroom/
  3. Global AI Adoption in 2025 – AI Economy Institute – Microsoft, accessed May 10, 2026, https://www.microsoft.com/en-us/corporate-responsibility/topics/ai-economy-institute/reports/global-ai-adoption-2025/
  4. AI Agents for Government: Complete Guide – MindStudio, accessed May 10, 2026, https://www.mindstudio.ai/blog/government
  5. Implementation challenges that hinder the strategic use of AI in government – OECD, accessed May 10, 2026, https://www.oecd.org/en/publications/governing-with-artificial-intelligence_795de142-en/full-report/implementation-challenges-that-hinder-the-strategic-use-of-ai-in-government_05cfe2bb.html
  6. Assessing the state of AI adoption across the federal government …, accessed May 10, 2026, https://www.brookings.edu/articles/assessing-the-state-of-ai-adoption-across-the-federal-government/
  7. Use of AI in Government – United Kingdom Parliament, accessed May 10, 2026, https://publications.parliament.uk/pa/cm5901/cmselect/cmpubacc/356/report.html
  8. Enhancing State and Local Government AI Capacity – Federation of American Scientists, accessed May 10, 2026, https://fas.org/publication/grants-enhancing-state-local-ai-capacity/
  9. The government’s AI efficiency numbers look good. That should worry you. – FedScoop, accessed May 10, 2026, https://fedscoop.com/federal-government-ai-efficiency-numbers/
  10. Australian Government artificial intelligence assurance framework: Findings and recommendations | digital.gov.au, accessed May 10, 2026, https://www.digital.gov.au/policy/ai/ai-assurance-framework-pilot-report/findings-recommendations
  11. Impact of EU Regulations on AI Adoption in Smart City Solutions: A Review of Regulatory Barriers, Technological Challenges, and Societal Benefits – MDPI, accessed May 10, 2026, https://www.mdpi.com/2078-2489/16/7/568
  12. Germany AI survey insights | Infor, accessed May 10, 2026, https://www.infor.com/blog/germany-ai-adoption-paradox-regulation
  13. AI Strategy for the Federal Public Service 2025-2027: Priority …, accessed May 10, 2026, https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/gc-ai-strategy-priority-areas.html
  14. Intelligent government – KPMG agentic corporate services, accessed May 10, 2026, https://assets.kpmg.com/content/dam/kpmgsites/xx/pdf/2025/09/intelligent-government.pdf
  15. Governing with Artificial Intelligence – OECD, accessed May 10, 2026, https://www.oecd.org/en/publications/governing-with-artificial-intelligence_795de142-en.html
  16. Technology Adoption in Australian Industry, accessed May 10, 2026, https://www.australianindustrygroup.com.au/resourcecentre/research-economics/technology-adoption-in-australian-industry/
  17. Building AI Governance in Municipalities from the … – School of Cities, accessed May 10, 2026, https://schoolofcities.utoronto.ca/wp-content/uploads/2026/02/Building-AI-Governance-in-Municipalities-from-the-Ground-Up.pdf
  18. Report: Nearly All States Have Piloted AI but Value Is Unclear, accessed May 10, 2026, https://www.govtech.com/artificial-intelligence/report-nearly-all-states-have-piloted-ai-but-value-is-unclear
  19. AI and Amplified Individuals: The Future of Economic Development in the Public Sector, accessed May 10, 2026, https://edco.on.ca/wp-content/uploads/AI-and-Amplified-Individuals-The-Future-of-Economic-Development-in-the-Public-Sector.pdf
  20. AI Strategy for the Federal Public Service 2025-2027, accessed May 10, 2026, https://publications.gc.ca/collections/collection_2025/sct-tbs/BT48-55-2025-eng.pdf
  21. The French AI Report 2024: Summary, accessed May 10, 2026, https://thefrenchreport.ai/summary.html
  22. Transparency and accountability: the challenges of artificial intelligence – France Diplomatie, accessed May 10, 2026, https://www.diplomatie.gouv.fr/en/the-ministry-in-action/action-for-peace-and-respect-for-human-rights/digital-diplomacy/transparency-and-accountability-the-challenges-of-artificial-intelligence
  23. Government AI Readiness Index 2025 – Oxford Insights, accessed May 10, 2026, https://oxfordinsights.com/ai-readiness/government-ai-readiness-index-2025/
  24. Artificial Intelligence sector study 2024 – GOV.UK, accessed May 10, 2026, https://www.gov.uk/government/publications/artificial-intelligence-sector-study-2024/artificial-intelligence-sector-study-2024
  25. The challenges posed by AI from the perspective of the central bank | Banque de France, accessed May 10, 2026, https://www.banque-france.fr/en/governors-interventions/challenges-posed-ai-perspective-central-bank
  26. 2024 Global Study: Closing the AI trust gap – Workday, accessed May 10, 2026, https://forms.workday.com/content/dam/web/sg/documents/reports/davos-wef-ai-trust-gap-report-en-SG.pdf
  27. US workers thought they could spot AI: A new survey reveals a growing workplace trust crisis, accessed May 10, 2026, https://timesofindia.indiatimes.com/education/news/us-workers-thought-they-could-spot-ai-a-new-survey-reveals-a-growing-workplace-trust-crisis/articleshow/130974870.cms
  28. PUBLIC TRUST IN AI: IMPLICATIONS FOR POLICY AND REGULATION – Ipsos, accessed May 10, 2026, https://www.ipsos.com/sites/default/files/ct/news/documents/2024-09/Ipsos%20Public%20Trust%20in%20AI.pdf
  29. Australian Government response: Senate Select Committee on Adopting Artificial Intelligence (AI) report | Department of Industry Science and Resources, accessed May 10, 2026, https://www.industry.gov.au/publications/australian-government-response-senate-select-committee-adopting-artificial-intelligence-ai-report
  30. AI Policy Update: Strengthening responsible use across government, accessed May 10, 2026, https://www.dta.gov.au/articles/ai-policy-update-strengthening-responsible-use-across-government
  31. Scale AI (Canada) – OECD, accessed May 10, 2026, https://www.oecd.org/en/publications/science-technology-and-innovation-policy-case-studies_089a31c7-en/scale-ai-canada_cd1e2c76-en.html
  32. G7 GovAI Grand Challenge – Impact Canada, accessed May 10, 2026, https://impact.canada.ca/en/challenges/g7-govAI
  33. Full Report: Building an AI-ready public workforce | OECD, accessed May 10, 2026, https://www.oecd.org/en/publications/building-an-ai-ready-public-workforce_b89244c7-en/full-report.html
  34. Artificial intelligence case study bank | Local Government Association, accessed May 10, 2026, https://www.local.gov.uk/our-support/cyber-digital-and-technology/artificial-intelligence-hub/artificial-intelligence-case
  35. AI Strategy – Local Government and Social Care Ombudsman, accessed May 10, 2026, https://www.lgo.org.uk/information-centre/about-us/our-aims/ai-strategy
  36. How AI Transforms Knowledge Work in Public Service: Blending Human and Artificial Intelligence, accessed May 10, 2026, https://www.eipa.eu/blog/how-ai-transforms-knowledge-work-in-public-service-blending-human-and-artificial-intelligence/
  37. Artificial intelligence ecosystem – Innovation, Science and Economic Development Canada, accessed May 10, 2026, https://ised-isde.canada.ca/site/ised/en/artificial-intelligence-ecosystem
  38. KI in der Verwaltung – Bundesministerium für Digitales und Staatsmodernisierung, accessed May 10, 2026, https://bmds.bund.de/themen/kuenstliche-intelligenz/ki-in-der-verwaltung
  39. Publications | Local Government Association, accessed May 10, 2026, https://www.local.gov.uk/publications?topic%5B5868%5D=5868
  40. AI in the Public Sector: 2024 Reflection and 2025 Outlook – Dataminr, accessed May 10, 2026, https://www.dataminr.com/resources/blog/ai-in-the-public-sector-2024-reflection-and-2025-outlook/
  41. Systemic challenges in AI adoption in public social and health organizations in Finland: a technology-organisation-environment perspective – PMC, accessed May 10, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC12520618/
  42. Major study reveals how ready UK local councils are for AI …, accessed May 10, 2026, https://www.hw.ac.uk/news/2026/major-study-reveals-how-ready-uk-local-councils-are-for-ai-technology

The idea, research hypotheses, and focus for this article/research are all original and mine. This article was written with my brain and two hands with the assistance of Google Gemini, Notebook LM, Claude, and other wondrous toys.
