Research: How management practices impact AI maturity

This report serves as a foundational strategic document for the Office of the Chief Operating Officer, examining the critical intersection between modern management methodologies and the maturity of Artificial Intelligence (AI) capabilities within North American enterprises. As organizations across the financial, technology, healthcare, education, and nonprofit sectors transition from AI experimentation to industrial-scale deployment, a distinct pattern has emerged: the primary determinant of AI success is not merely technological investment, but the sophistication of the underlying operating model.

The investigation synthesizes data from the 2024-2025 period, drawing on extensive research from major consultancies (McKinsey, BCG, Deloitte), industry indices (Evident AI Index, DORA), and academic studies. The central thesis of this report is that traditional, hierarchical, and project-based management structures are fundamentally incompatible with the probabilistic and iterative nature of AI. Conversely, organizations that have adopted and adapted Agile frameworks, Objectives and Key Results (OKRs), Product Operating Models (POM), Human-Centred Design (HCD), and rigorous Change Management (CM) systems demonstrate a statistically significant advantage in “AI Maturity”—defined as the ability to generate recurring, scalable economic value from AI assets.

Key insights derived from the analysis include:

  1. The Agile-AI Paradox and the “J-Curve” of Performance: While Agile methodologies are a prerequisite for managing the uncertainty of AI development—boasting a project success rate of 42% compared to Waterfall’s 13%—the initial integration of AI into Agile workflows often precipitates a temporary decline in delivery stability and throughput. This “J-curve” effect necessitates a steadfast commitment to “Platform Engineering” to stabilize the development environment before productivity gains are realized.1
  2. Strategic Alignment via OKR Execution 3.0: The static nature of annual planning is being replaced by dynamic, AI-enhanced OKR cycles. Organizations utilizing AI-driven OKRs report a 52% acceleration in the strategy-to-execution cycle. This alignment is critical for bridging the “translation gap” between technical data science teams and business stakeholders, ensuring that model accuracy correlates with business outcomes.4
  3. The Economic Superiority of the Product Operating Model: There is a strong correlation between the maturity of a Product Operating Model and financial performance, with top-quartile companies achieving 60% greater total shareholder returns. Treating data and AI models as long-lived products rather than transient projects ensures continuous optimization and mitigates the risks of model drift and technical debt.5
  4. Human-Centred Design as a Risk Mitigation Strategy: Research indicates that only 5% of organizations conduct user research before releasing AI products, a primary contributor to adoption failure. High-maturity organizations employ Human-Centred AI (HCAI) frameworks to ensure algorithms augment rather than replace human decision-making, significantly reducing operational risk and bias.
  5. Psychological Safety as a Data Quality Control: The research quantifies organizational culture as a hard operational metric. 83% of leaders report that psychological safety has a measurable impact on AI success. In the era of Generative AI, where “hallucinations” are a feature of the technology, a culture that punishes error leads to the suppression of critical feedback loops, directly degrading model performance and safety.7
  6. Sector-Specific Divergence: The North American landscape is characterized by a “Maturity Divide.” The Financial Services sector has successfully operationalized “AI Factories” to navigate regulatory complexity, while the Nonprofit sector faces a widening digital divide, constrained by legacy manual processes despite high potential for AI-driven efficiency gains.9

This report provides a granular analysis of these dynamics, offering the COO a roadmap to re-architect the organization’s management practices to support the demands of the “Agentic Enterprise.”

Table of Contents

  1. The Macro-Operational Context: Defining AI Maturity in 2025
    1.1 The AI Maturity Paradox
    1.2 Frameworks for Assessing Maturity
    1.3 The COO’s Mandate in the Agentic Era
  2. The Agile Paradigm: Operational Agility as a Prerequisite for AI
    2.1 The Structural Conflict: Waterfall vs. AI
    2.2 The DORA Metrics and the “AI Adoption Dip”
    2.3 Hybrid Models: Agile in Regulated Sectors
  3. Strategic Alignment: The Evolution of OKRs in the Age of AI
    3.1 OKR Execution 3.0: The AI-Driven Strategic Loop
    3.2 Bridging the “Output vs. Outcome” Gap
    3.3 OKRs in the Scaled Agile Framework (SAFe)
  4. The Product Operating Model: Structural Foundations for AI Scale
    4.1 The Correlation with Financial Performance
    4.2 Data as a Product
    4.3 Redesigning Workflows
  5. Human-Centred Design (HCD): The User Interface of AI Strategy
    5.1 The “Solutionism” Trap and the 5% Failure Rate
    5.2 Human-Centered Agile: Integrating Discovery and Delivery
    5.3 HCD as a Governance Mechanism
  6. The Human Operating System: Culture, Psychological Safety, and Change Management
    6.1 Psychological Safety as an Operational Metric
    6.2 Change Management Frameworks: Prosci vs. Kotter
  7. Sector Analysis: Financial Services
    7.1 Maturity Landscape
    7.2 Management Practices
    7.3 Challenges
  8. Sector Analysis: Technology and Software
    8.1 “Dogfooding” as a Management Practice
    8.2 Platform Engineering and DevOps
  9. Sector Analysis: Healthcare and Life Sciences
    9.1 HCD in Clinical Workflows
    9.2 Agile in a Regulated Environment
  10. Sector Analysis: Higher Education
    10.1 Human-Centred Pedagogical Design
    10.2 Agile in Academia
  11. Sector Analysis: Nonprofits and Philanthropy
    11.1 The Digital Divide and HCD for Beneficiaries
    11.2 “Lightweight” Agile
  12. Cross-Sector Synthesis: Emerging Management Architectures
    12.1 From “People Managing People” to “People Managing Agents”
    12.2 The “Value Gap” Analysis
  13. Strategic Recommendations for the COO
    Recommendation 1: Institutionalize the Product Operating Model
    Recommendation 2: Mandate “Human-First” Prototyping (HCD)
    Recommendation 3: Implement “Psychological Safety” as a KPI
    Recommendation 4: Stabilize the “Agile-AI” Intersection
    Recommendation 5: Align Strategy via AI-Driven OKRs
  14. Conclusion
    Detailed Pattern Analysis & Research Synthesis

1. The Macro-Operational Context: Defining AI Maturity in 2025

The North American business environment in 2025 is defined by a paradox of investment versus return. While capital expenditure on AI infrastructure has reached historic highs—with North America accounting for 41.5% of the global enterprise AI market—the realization of tangible business value remains unevenly distributed.11 The distinction between organizations that merely “use” AI and those that have achieved “AI Maturity” is becoming the primary driver of competitive differentiation.

1.1 The AI Maturity Paradox

Despite widespread adoption, where 72% of organizations report using AI in at least one business function, only a small fraction—approximately 6%—report that AI contributes more than 5% to their Earnings Before Interest and Taxes (EBIT).12 This “Maturity Paradox” suggests that the acquisition of technology is necessary but insufficient for value creation. The bottleneck has shifted from the availability of compute power and algorithms to the organizational capacity to integrate these tools into complex human workflows.

Figure 1: The AI Maturity Gap (2025)

The discrepancy between activity (usage) and value realization (EBIT impact).

| Metric | Percentage of Organizations | Visual Representation |
| --- | --- | --- |
| Using AI | 72% | ██████████████ |
| >5% EBIT from AI | 6% | █ |
| Self-Rated “Mature” | 1% | |

Data derived from 12, 13, and 14.

McKinsey’s analysis indicates that roughly 90% of the companies investing in AI remain stuck in “Pilot Purgatory,” characterized by successful proofs-of-concept that fail to scale into production environments.13 The root cause of this failure is often an operating model designed for deterministic software development (where inputs and outputs are fixed) being applied to probabilistic AI development (where outputs are variable and evolving).

1.2 Frameworks for Assessing Maturity

To understand the operational requirements of AI, one must first define the stages of maturity. Contemporary frameworks from MIT CISR and Telus Digital provide a nuanced taxonomy that moves beyond simple adoption metrics:

Table 1: Comparative AI Maturity Stages

| Stage | MIT CISR / Telus Digital Definition | Operational Characteristics | Management Focus |
| --- | --- | --- | --- |
| Stage 1: Awareness | Understanding potential applications. | Ad-hoc usage; “Shadow AI” (employees using public tools). | Risk mitigation; policy definition. |
| Stage 2: Exploration | Initial investments in infrastructure. | Isolated pilots; proofs-of-concept (POCs); siloed data. | Innovation labs; experimentation. |
| Stage 3: Adoption | Deploying limited projects based on strategy. | Formalized teams; emerging governance; “Pilot Purgatory.” | Project management; vendor selection. |
| Stage 4: Optimization | Integrating AI into core operations. | Automated feedback loops; MLOps; continuous improvement. | Process re-engineering; change management. |
| Stage 5: Transformation | Embedding AI to create new business models. | “AI-First” strategy; AI agents driving revenue; data products. | Organizational redesign; Product Operating Model. |

Data derived from 15 and 16.

The research highlights that the transition from Stage 2 (Exploration) through Stage 3 (Adoption) into Stage 4 (Optimization) is the most critical hurdle. This transition requires a fundamental shift in management practices, moving from “innovation theater”—where AI is treated as a novelty—to “industrialized AI,” where it is treated as a core operational capability.16

1.3 The COO’s Mandate in the Agentic Era

For the Chief Operating Officer, the rise of “Agentic AI”—systems capable of autonomous reasoning and action—presents a dual mandate. The first is Efficiency: automating routine tasks to reduce costs, a traditional COO responsibility. The second, and more complex, is Augmentation: redesigning workflows to empower human employees with “Superagency,” effectively integrating AI copilot capabilities to enhance decision-making and creativity.13

The success of this mandate depends on the COO’s ability to implement management architectures that support agility, alignment, and psychological safety. The following sections detail how specific modern management practices correlate with the ability to traverse the maturity curve.

2. The Agile Paradigm: Operational Agility as a Prerequisite for AI

The correlation between Agile methodology adoption and AI maturity is one of the strongest signals identified in the research. The fundamental incompatibility of traditional “Waterfall” project management with AI development is a primary cause of the “Pilot Purgatory” phenomenon.

2.1 The Structural Conflict: Waterfall vs. AI

Traditional Waterfall management relies on the premise that requirements can be fully defined upfront and that the path to delivery is linear. In AI development, the “requirements” are often hypotheses (e.g., “Can we predict customer churn with this data?”), and the “path” is exploratory. A Waterfall approach that demands a fixed specification before development begins inevitably fails because the behavior of the AI model emerges from the data itself, which is often messy, incomplete, or counter-intuitive.

The 18th State of Agile Report and comparative studies quantify this disparity. Agile projects boast a success rate of 42%, compared to just 13% for Waterfall initiatives. More critically, the failure rate for Waterfall projects is nearly 60%, whereas Agile projects fail outright only 11% of the time.1

Figure 2: Project Outcomes by Methodology (Agile vs. Waterfall)

Agile methodologies significantly reduce the risk of outright failure in uncertain domains like AI.

| Outcome | Agile Projects | Waterfall Projects | Delta |
| --- | --- | --- | --- |
| Successful | 42% (████████) | 13% (██) | +29% |
| Challenged | 47% (█████████) | 28% (█████) | N/A |
| Failed | 11% (██) | 59% (███████████) | -48% |

Data derived from 17.

In the context of AI, where the cost of experimentation must be kept low to make failure acceptable, Agile’s iterative structure allows teams to test hypotheses on data subsets, validate performance, and pivot quickly if the signal is weak.

2.2 The DORA Metrics and the “AI Adoption Dip”

While Agile is a prerequisite for success, the integration of AI into Agile workflows is not without friction. The 2024 DORA State of DevOps Report reveals a nuanced reality: the introduction of AI tools initially correlates with a degradation in software delivery performance.

The “J-Curve” of AI Integration

Data indicates that a 25% increase in AI adoption is associated with a 1.5% decrease in delivery throughput and a 7.2% decrease in delivery stability.3 This counter-intuitive finding—that AI tools designed to speed up coding actually slow down delivery—can be attributed to several factors:

  1. Increased Complexity: AI coding assistants generate code rapidly, but this code increases the volume of review work required. The cognitive load on senior developers shifts from writing code to reviewing complex, machine-generated logic, which can create bottlenecks.
  2. The Testing Gap: The speed of code generation often outpaces the speed of test generation, leading to accumulation of technical debt and instability in the build pipeline.
  3. Rework Rates: The DORA report notes that while AI increases individual productivity, it can increase the “change failure rate” if not paired with rigorous automated testing.2

Figure 3: The “DORA Dip” (Impact of 25% Increase in AI Adoption)

Immediate operational impact of scaling AI adoption without platform engineering support.

| Metric | Impact | Visual Scale |
| --- | --- | --- |
| Delivery Throughput | -1.5% | █ (Slight Decline) |
| Delivery Stability | -7.2% | ████ (Significant Decline) |
| Documentation Quality | +7.5% | ████ (Improvement) |

Data derived from 3.

However, the research also highlights a recovery mechanism. Agile teams that persist through this dip and invest in “Platform Engineering” eventually see improvements. Specifically, AI-enabled teams show distinct advantages in Time to Restore Service. The ability of AI to parse logs, identify anomalies, and suggest remediation allows for rapid recovery, even if the frequency of change failures initially rises.2
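The “dip” described above is only visible to organizations that instrument their delivery pipelines. As a minimal illustration (the log schema below is a hypothetical assumption, not part of any DORA specification), two of the four DORA metrics can be computed directly from a deployment log:

```python
# Minimal sketch: deriving two DORA metrics from a deployment log.
# The log schema is illustrative, not a DORA standard.

deploys = [
    {"day": 1, "failed": False},
    {"day": 2, "failed": True},   # change failure: required a hotfix or rollback
    {"day": 2, "failed": False},
    {"day": 5, "failed": False},
]

def deployment_frequency(log, window_days):
    # Deploys per day over the observation window.
    return len(log) / window_days

def change_failure_rate(log):
    # Fraction of deployments that caused a failure in production.
    return sum(d["failed"] for d in log) / len(log)

if __name__ == "__main__":
    print(f"frequency: {deployment_frequency(deploys, 7):.2f}/day")
    print(f"change failure rate: {change_failure_rate(deploys):.0%}")
```

Tracking these two numbers week over week is what allows a team to see the J-curve forming, and to confirm recovery after platform investments.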

2.3 Hybrid Models: Agile in Regulated Sectors

In North America, particularly within the Financial and Healthcare sectors, “Pure Agile” is rarely implemented due to regulatory constraints. The research identifies a prevalence of “Hybrid” models (42% of organizations), where Agile execution in development is wrapped in a more structured governance framework at the portfolio level.19

For AI maturity, this hybridity is often a feature, not a bug.

  • Agile Compliance: Mature organizations implement “Governance as Code” within their Agile pipelines. Instead of manual review boards (Waterfall), compliance checks—such as verifying data lineage or testing for bias—are automated steps in the Continuous Integration/Continuous Deployment (CI/CD) pipeline.
  • The FDA Example: In healthcare, where AI-based Software as a Medical Device (SaMD) requires FDA approval, organizations use Agile for internal model development (sprint cycles) but wrap releases in “Validation Epochs” that satisfy the regulator’s documentation requirements without stalling the development team.11

The COO must therefore champion a management style that protects the Agile team’s ability to iterate while automating the controls required by the enterprise risk function.
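The “Governance as Code” pattern from Section 2.3 can be made concrete with a small sketch. The check names, thresholds, and data shapes below are illustrative assumptions, not a reference implementation; a real pipeline would pull lineage metadata and model outputs from its own systems and fail the build when a check fails:

```python
"""Sketch of an automated governance gate, run as a CI/CD step.
All names and thresholds are hypothetical."""

def check_lineage(dataset_meta: dict) -> bool:
    # Require every dataset to declare provenance before it may be used.
    return all(k in dataset_meta for k in ("source", "owner", "last_refreshed"))

def demographic_parity_gap(outcomes: dict) -> float:
    # Largest difference in positive-outcome rate across groups.
    rates = [sum(v) / len(v) for v in outcomes.values()]
    return max(rates) - min(rates)

def governance_gate(dataset_meta: dict, outcomes: dict, max_gap: float = 0.10) -> bool:
    # The build passes only if lineage is declared and bias is within policy.
    return check_lineage(dataset_meta) and demographic_parity_gap(outcomes) <= max_gap

if __name__ == "__main__":
    meta = {"source": "crm_export", "owner": "data-eng", "last_refreshed": "2025-01-01"}
    outcomes = {"group_a": [1, 0, 1, 1], "group_b": [1, 0, 1, 0]}
    print(governance_gate(meta, outcomes))  # False: the 0.25 gap exceeds the 0.10 policy
```

The point is structural: the review board’s questions become executable checks that run on every change, so compliance no longer gates iteration speed.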

3. Strategic Alignment: The Evolution of OKRs in the Age of AI

As organizations adopt Agile to manage the execution of AI, they increasingly turn to Objectives and Key Results (OKRs) to manage the strategy. The research suggests that OKRs are the “connective tissue” that aligns the experimental nature of AI with the deterministic needs of the P&L.

3.1 OKR Execution 3.0: The AI-Driven Strategic Loop

The traditional quarterly OKR cycle is being accelerated and optimized by AI itself. The research identifies a new paradigm, termed “OKR Execution 3.0,” in which AI is used not just to achieve goals but to set, track, and predict them.

  • Predictive Strategy: Companies that integrate AI-driven OKR systems outperform competitors by 43% in strategic agility. AI tools analyze historical performance data to predict the likelihood of achieving a Key Result, alerting management to “At-Risk” objectives weeks before a human manager would notice. This allows for proactive resource reallocation rather than reactive post-mortems.4
  • Accelerated Cycle Time: Organizations utilizing AI-enhanced OKRs report a 52% faster cycle from strategy formulation to execution. In the rapidly evolving AI landscape—where the capabilities of models (e.g., the leap from GPT-3.5 to GPT-4) can render a 12-month strategy obsolete in weeks—this agility is a survival mechanism.4
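The “At-Risk” flagging logic described above can be sketched as a simple linear extrapolation of weekly Key Result check-ins. Commercial OKR platforms use richer models; the 13-week quarter and 70% attainment threshold here are illustrative assumptions:

```python
"""Sketch: flag 'At-Risk' Key Results from weekly progress check-ins.
Thresholds and the linear-trend assumption are illustrative."""

def projected_completion(progress: list, weeks_total: int) -> float:
    # progress[i] = fraction of the KR complete after week i+1.
    weeks_elapsed = len(progress)
    rate = progress[-1] / weeks_elapsed        # average progress per week
    return min(1.0, rate * weeks_total)        # projected fraction at quarter end

def at_risk(progress: list, weeks_total: int = 13, threshold: float = 0.7) -> bool:
    # Flag the KR if the current trend projects below 70% attainment.
    return projected_completion(progress, weeks_total) < threshold

if __name__ == "__main__":
    print(at_risk([0.05, 0.08, 0.12, 0.15]))  # True: trend projects ~49% attainment
```

Even this naive version illustrates the management value: the alert fires at week four, not at the quarterly review.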

3.2 Bridging the “Output vs. Outcome” Gap

A common failure mode in AI adoption is the measurement of outputs (e.g., “We deployed 5 models,” “We centralized 10 petabytes of data”) rather than outcomes (e.g., “Customer churn reduced by 2%,” “Fraud detection latency reduced by 50ms”). The OKR framework forces a shift to outcome-based management, which is critical for justifying the high costs of AI compute.

  • Case Study Insight: A global FinTech company utilized AI-driven OKR software to achieve a 56% improvement in goal visibility across departments. By linking the work of Data Scientists—whose work is often esoteric and opaque to the business—to shared Key Results (e.g., “Increase Fraud Detection Accuracy”), the organization bridged the cultural silo between IT and Business. This alignment ensures that the data science team is optimizing for business value, not just academic model accuracy.4

3.3 OKRs in the Scaled Agile Framework (SAFe)

The integration of OKRs into frameworks like SAFe 6.0 represents a maturing of this practice in large enterprises. For the Technology and Financial sectors, connecting the “Strategic Themes” of the portfolio to the “User Stories” of the AI squad is essential.

  • Contextual Autonomy: Digital.ai’s analysis suggests that OKRs provide the necessary context for autonomous AI teams. When an AI squad understands the “Key Result,” they can make independent decisions about which model architecture or data source to use, without needing constant upward referral. This decentralization of decision-making is essential for scaling AI beyond a handful of pilot teams.20

4. The Product Operating Model: Structural Foundations for AI Scale

Perhaps the most significant structural differentiator identified in the research is the shift from a “Project Operating Model” to a “Product Operating Model” (POM). The data suggests that the Project model is structurally incapable of supporting the lifecycle of an AI asset.

4.1 The Correlation with Financial Performance

McKinsey’s research establishes a direct, quantifiable link between POM maturity and financial health. Companies in the top quartile of Product Operating Model maturity exhibit 60% greater total shareholder returns and 16% higher operating margins than their peers.5

Figure 4: Financial Advantage of High-Maturity Product Operating Models

Organizations treating AI as a “Product” rather than a “Project” see massive financial divergence.

| Financial Metric | Low POM Maturity | High POM Maturity (Top Quartile) | Visual Delta |
| --- | --- | --- | --- |
| Total Shareholder Returns | Baseline | +60% | ████████████ |
| Operating Margins | Baseline | +16% | ███ |

Data derived from 6 and 21.

Mechanism of Action in AI

In a traditional “Project” model, a team is assembled to build a system, delivers it, and is then disbanded, handing the system over to “Maintenance.” This is fatal for AI.

  • Model Drift: Unlike traditional software, AI models degrade the moment they are deployed because the world they model changes (data drift/concept drift).
  • Continuous Ownership: A “Product” team is persistent. They own the AI capability throughout its lifecycle, monitoring its performance, retraining it on new data, and retiring it when necessary. This continuity is essential for realizing the long-term value of AI investments.14
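Drift monitoring, the core recurring duty of a persistent product team, can be sketched with the Population Stability Index (PSI), a common heuristic for comparing a model’s training distribution with live traffic. The 0.2 alert threshold is a widely used rule of thumb, not a universal standard:

```python
"""Sketch of a data-drift check using the Population Stability Index (PSI).
Binning scheme and the 0.2 threshold are conventional heuristics."""

import math

def psi(expected: list, actual: list, bins: int = 10) -> float:
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def histogram(xs):
        counts = [0] * bins
        for x in xs:
            i = min(bins - 1, max(0, int((x - lo) / width)))
            counts[i] += 1
        return [max(c / len(xs), 1e-4) for c in counts]  # floor avoids log(0)

    e, a = histogram(expected), histogram(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

if __name__ == "__main__":
    train = [i / 100 for i in range(100)]        # training distribution
    live = [0.5 + i / 200 for i in range(100)]   # shifted live traffic
    print(f"PSI = {psi(train, live):.2f}, drift alert = {psi(train, live) > 0.2}")
```

A product team would run a check like this on a schedule and treat a sustained breach as a retraining trigger, which is exactly the kind of work a disbanded project team cannot do.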

4.2 Data as a Product

A critical sub-component of POM in the context of AI is the concept of “Data Products.” The “Gen-AI Paradox” (high adoption, low impact) is frequently attributed to poor data foundations.

  • The Paradox: High-performing organizations do not treat data as a byproduct of operations but as a product in itself. They assign “Data Product Owners” who are responsible for the usability, quality, and documentation of data sets.
  • Federated Governance: The research points to “Federated AI” and governance models where data remains local (e.g., in a specific factory or hospital unit) while the model training is centralized or federated. This requires a Product mindset where the “Local Data Owner” is a stakeholder in the global AI product, incentivized to maintain data quality.22
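One way a Data Product Owner operationalizes quality is a published “data contract” that consumers validate against before using a batch for training. The field names and rules below are hypothetical, a minimal sketch of the idea:

```python
"""Sketch of a lightweight data contract. Fields and rules are illustrative."""

CONTRACT = {
    "customer_id": {"type": int, "required": True},
    "churn_score": {"type": float, "required": True, "min": 0.0, "max": 1.0},
    "region":      {"type": str,  "required": False},
}

def validate_row(row: dict) -> list:
    # Return a list of contract violations for one record (empty = valid).
    errors = []
    for field, rule in CONTRACT.items():
        if field not in row:
            if rule.get("required"):
                errors.append(f"missing required field: {field}")
            continue
        value = row[field]
        if not isinstance(value, rule["type"]):
            errors.append(f"{field}: expected {rule['type'].__name__}")
        elif "min" in rule and not (rule["min"] <= value <= rule["max"]):
            errors.append(f"{field}: out of range")
    return errors

if __name__ == "__main__":
    print(validate_row({"customer_id": 42, "churn_score": 1.7}))
```

Because the contract is code, a federated setup can ship it to every local data owner, making “quality” an enforceable interface rather than an aspiration.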

4.3 Redesigning Workflows

The most potent lever for AI value capture is not the model itself but the redesign of the workflow it inhabits. McKinsey notes that “Redesigning workflows” is the single management practice with the highest correlation to EBIT impact from AI.6 The Product Operating Model empowers Product Managers to look at the end-to-end process and insert AI where it adds value, rather than just “automating a task” in isolation. This holistic view prevents local optimization at the expense of systemic efficiency.

5. Human-Centred Design (HCD): The User Interface of AI Strategy

The inclusion of Human-Centred Design (HCD) has emerged as a critical differentiator in AI maturity. While often mistaken for mere user interface design, HCD in the context of AI (often termed “Human-Centric AI” or HCAI) is a strategic risk management and adoption framework. It ensures that AI systems are designed to augment human intelligence rather than obscure it.

5.1 The “Solutionism” Trap and the 5% Failure Rate

A significant barrier to AI maturity is “technological solutionism”—building AI because it is possible, not because it solves a user need. Research from the 2023 McKinsey State of AI Report reveals a startling statistic: only 5% of organizations report using user research before releasing AI-powered products.

This lack of human-centric inquiry leads to high rates of “rejection” where users ignore or work around AI tools because the tools fail to account for the nuance of the human workflow. High-maturity organizations reverse this flow; they define the human problem first via HCD methodologies (e.g., ethnographic observation, journey mapping) before writing a single line of code.

5.2 Human-Centered Agile: Integrating Discovery and Delivery

Mature organizations are merging HCD with Agile to create “Human-Centered Agile.” In this model, the Agile “Sprint” is not just for coding but for discovery.

  • Dual-Track Agile: Teams run two parallel tracks: a “Discovery Track” (led by designers/researchers) to validate user needs and an “Execution Track” (led by engineers) to build the validated solutions.
  • Case Evidence: At a large government agency, integrating HCD into Agile programs resulted in “stronger user stories, higher product quality, and shorter lead times,” proving that HCD accelerates, rather than slows, the Agile process by preventing the development of useless features.

5.3 HCD as a Governance Mechanism

HCD acts as a frontline defense against ethical risks and bias. By involving diverse stakeholder groups in the design phase (Co-Creation), organizations can identify potential biases in training data or model outputs that a homogeneous engineering team might miss.

  • The “Human-in-the-Loop” Mandate: HCAI frameworks explicitly design for human oversight. Instead of targeting full automation, they target “reliable augmentation,” where the AI provides recommendations but the human retains the “moral crumple zone” of decision-making. This is particularly critical in high-stakes sectors like Healthcare and Finance.

6. The Human Operating System: Culture, Psychological Safety, and Change Management

The “soft” side of management proves to be the “hard” barrier to AI success. The research overwhelmingly supports the conclusion that organizational culture—specifically psychological safety and structured change management—is a stronger predictor of AI maturity than technical talent alone.

6.1 Psychological Safety as an Operational Metric

A landmark report by Infosys and MIT Technology Review Insights reveals that 83% of business leaders believe psychological safety is critical to AI success. This is not merely an HR concern but a fundamental operational requirement for Generative AI.

Figure 5: The ROI of Psychological Safety in AI

Leaders recognize culture as a tangible driver of AI outcomes.

| Metric | Percentage |
| --- | --- |
| Leaders citing psychological safety as critical to success | 83% (████████████████) |
| Leaders linking safety to tangible business outcomes | 84% (████████████████) |
| Employees feeling safe to give feedback (high maturity) | 73% (██████████████) |
| Leaders hesitating on AI due to fear | 22% (████) |

Data derived from 8 and 25.

  • The Fear Factor: 22% of leaders hesitate to lead AI projects due to fear of failure. In an environment where experimentation is punished, AI (which requires failure to learn) cannot thrive.
  • The Feedback Loop Necessity: 73% of employees feel safe to provide honest feedback in high-maturity organizations. This is vital for “Human-in-the-loop” (HITL) systems. If an AI agent makes a mistake (hallucination), an employee must feel safe reporting it without fear of being blamed for the system’s error. Without this transparency, “Silent Failure” occurs, where bad AI decisions propagate unchecked, degrading the model’s performance and the organization’s trust in it.7
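The feedback-loop requirement can also be supported structurally. The sketch below shows a blame-free HITL logging hook in which flagged outputs are attributed to the model, not the reviewer, so corrections feed retraining rather than performance reviews; all names are illustrative assumptions:

```python
"""Sketch of a blame-free human-in-the-loop feedback log. Names are hypothetical."""

from dataclasses import dataclass, field

@dataclass
class FeedbackLog:
    events: list = field(default_factory=list)

    def flag(self, model_id: str, output: str, reason: str):
        # Record what the model did; no reviewer identity is stored.
        self.events.append({"model": model_id, "output": output, "reason": reason})

    def error_rate(self, model_id: str, total_outputs: int) -> float:
        # Flagged outputs as a fraction of everything the model produced.
        flagged = sum(e["model"] == model_id for e in self.events)
        return flagged / total_outputs

if __name__ == "__main__":
    log = FeedbackLog()
    log.flag("churn-v3", "customer will churn", "hallucinated account history")
    print(log.error_rate("churn-v3", total_outputs=50))
```

The design choice matters: because the log carries no reviewer identity, reporting a hallucination carries no personal cost, which is the precondition for the 73% feedback figure cited above.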

6.2 Change Management Frameworks: Prosci vs. Kotter

The “install vs. instill” gap—the difference between installing software and instilling a new way of working—is bridged by Change Management (CM). Organizations investing in CM are 1.6 times more likely to exceed expectations in AI initiatives.26

Table 2: Comparative Change Management Models for AI

| Framework | Core Mechanism | Relevance to AI Adoption | Case Evidence |
| --- | --- | --- | --- |
| Prosci ADKAR | Awareness, Desire, Knowledge, Ability, Reinforcement. | Individual-level adoption; excellent for training employees on specific AI tools (e.g., Copilot). | United Concordia Dental achieved 75% adoption of AI tools using ADKAR to coach employees individually.21 |
| Kotter’s 8 Steps | Urgency, Coalition, Vision, Communication, Empowerment, Wins, Consolidation, Anchoring. | Systemic organizational transformation; useful for large-scale shifts (e.g., moving to an AI-first strategy). | Widely used in Healthcare to align diverse stakeholders (clinicians, admins) for systemic AI implementation.27 |

7. Sector Analysis: Financial Services

The Financial Services sector in North America represents the vanguard of AI maturity, driven by a unique combination of high resources, massive data volumes, and existential risk management needs.

7.1 Maturity Landscape

The 2025 Evident AI Index ranks JPMorgan Chase (JPMC), Capital One, and the Royal Bank of Canada (RBC) as the global leaders in AI maturity. These institutions have moved far beyond pilot stages into industrial-scale deployment.9

  • Investment Scale: The top 10 banks increased their AI scores significantly in 2025, driven by aggressive talent acquisition—AI headcount grew by over 25% across the index—and transparency in their AI roadmaps.29
  • Use Cases: The sector utilizes AI for fraud detection (real-time anomaly detection), algorithmic trading, and hyper-personalized customer banking experiences.

7.2 Management Practices

  • The “AI Factory” Model: Banks like BBVA and Capital One operate “AI Factories”—centralized hubs that churn out data products for various business units. This is the Product Operating Model in its purest form, treating AI model creation as a manufacturing process with standardized inputs and outputs.
  • Agile Governance: Given the regulatory burden (e.g., Explainable AI or XAI), banks have pioneered “Agile Compliance.” JPMC’s “COIN” (Contract Intelligence) platform reduced 360,000 hours of legal review to seconds, a feat achievable only through rigorous, agile development of NLP models that could withstand legal scrutiny.11
  • Customer Experience (CX) Integration: Leading banks are increasingly merging their UX Research and Data Science teams. By using HCD principles, they ensure that algorithmic personalization (e.g., loan offers) aligns with the customer’s financial journey rather than feeling predatory.

7.3 Challenges

The primary challenge remains “Legacy Debt.” While new AI layers are agile, the underlying core banking systems are often decades old (COBOL-based mainframes). The operational friction between the “Agile AI layer” and the “Waterfall Core” is a major source of inefficiency, requiring sophisticated API layers and “strangler fig” migration patterns to resolve.

8. Sector Analysis: Technology and Software

The Technology sector naturally exhibits the highest AI maturity, serving as both the creator and the primary consumer (“dogfooding”) of AI technologies.

8.1 “Dogfooding” as a Management Practice

Tech companies like Microsoft, Google, and Salesforce aggressively use their own AI tools internally before release. Microsoft’s internal use of “Copilot” for its developers is a prime example of testing AI at scale to improve productivity.

  • Feedback Loops: This practice creates a rapid feedback loop. If the AI tool fails to improve efficiency for the internal team, it is retooled. This aligns with the “Customer Zero” concept in Product Management, ensuring that the product is battle-tested before it reaches the market.

8.2 Platform Engineering and DevOps

The Technology sector is the birthplace of the DORA metrics, and high-performing tech companies have evolved from DevOps to “Platform Engineering.”

  • Internal Developer Platforms (IDP): Instead of every Agile team building their own AI infrastructure, a centralized “Platform Team” builds an IDP that offers AI capabilities (e.g., a “LLM Gateway”) as a self-service product. This reduces cognitive load on application developers and standardizes governance (security, cost controls) without creating bottlenecks.2
  • Documentation as Code: DORA 2024 emphasizes that while AI aids coding, the real bottleneck is often documentation and knowledge sharing. Tech companies are using AI to auto-generate documentation, improving the “Time to Restore” metric by making system knowledge more accessible. This suggests that AI is being used to pay down the “documentation debt” that accumulates in Agile environments.2
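The “LLM Gateway” pattern above can be sketched as a single entry point that enforces per-team quotas and routes requests to model backends. The backends here are stubs standing in for real model APIs; class and method names are assumptions for illustration:

```python
"""Minimal sketch of an internal LLM gateway. Backends are stubs; a real
gateway would call actual model APIs and meter cost, not request counts."""

class LLMGateway:
    def __init__(self, quotas: dict):
        self.quotas = dict(quotas)  # remaining requests per team
        self.backends = {
            "fast": lambda p: f"[fast] {p}",
            "quality": lambda p: f"[quality] {p}",
        }

    def complete(self, team: str, prompt: str, tier: str = "fast") -> str:
        # Cost control and routing are enforced centrally, once,
        # instead of re-implemented by every application team.
        if self.quotas.get(team, 0) <= 0:
            raise PermissionError(f"quota exhausted for team {team!r}")
        self.quotas[team] -= 1
        return self.backends[tier](prompt)

if __name__ == "__main__":
    gw = LLMGateway({"payments": 2})
    print(gw.complete("payments", "summarize this incident"))
```

Governance (quotas, model allow-lists, logging) lives in one place, which is precisely how the platform team reduces cognitive load without becoming a bottleneck.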

9. Sector Analysis: Healthcare and Life Sciences

Healthcare presents a dichotomy: highly advanced in clinical research (e.g., drug discovery) but often archaic in operational administration.

9.1 HCD in Clinical Workflows

In healthcare, the “User Interface” of AI can be a matter of life and death. HCD is critical here to prevent “alert fatigue.”

  • Problem-Driven Design: Successful implementations, such as those cited in recent Masterclasses on Human-Centered Design in Digital Health, start with the clinician’s workflow, not the data. Instead of flooding a doctor with AI probabilities, HCD principles dictate that the AI should only surface “actionable intelligence” at the precise moment of decision-making.
  • Trust Building: Integrating HCD builds trust. When clinicians are involved in the co-creation of the AI tool (e.g., defining how a sepsis alert should look), they are less likely to view the AI as “unqualified” or an intrusion on their expertise.

9.2 Agile in a Regulated Environment

Adopting Agile in healthcare requires modification. The “Fail Fast” mantra of Agile is unacceptable in clinical settings where patient safety is paramount.

  • Validation-First Agile: Successful healthcare AI teams use Agile for the development of the model but switch to rigorous validation phases (resembling Waterfall) before deployment.
  • Administrative AI: The lowest-hanging fruit is in administration (scheduling, billing). Here, Agile is applied more traditionally. AI has been shown to reduce documentation time significantly, addressing the provider burnout crisis. Administrative costs typically represent 25-30% of healthcare spending, making this a prime target for AI-driven operational efficiency.11

10. Sector Analysis: Higher Education

Higher Education is grappling with AI both as an operational tool and an existential disruptor of its core product (learning/degrees).

10.1 Human-Centred Pedagogical Design

The integration of AI in education is shifting from “policing” (detecting plagiarism) to “designing” (AI as a tutor).

  • Student-Centricity: HCD approaches in education focus on “Personalized Learning Pathways.” Rather than a standardized curriculum, AI agents (designed with pedagogical experts) adapt content to the student’s pace. However, this requires a fundamental shift in university management from “Course Administration” to “Student Experience Design.”
  • Enrollment Management: This is the area of highest operational maturity. AI is used to analyze applicant data, predict enrollment yield, and automate communication. This follows a “Sales Funnel” management approach similar to the private sector, treating students as “leads” to be nurtured.30

10.2 Agile in Academia

Academia is traditionally slow-moving and consensus-driven, often relying on committee-based management. This conflicts with the speed required for AI adoption.

  • Agile Administration: Some progressive institutions (e.g., UC San Diego) are adopting Agile principles for their administrative data governance, allowing them to “vet” and approve AI tools faster.
  • The “Service” Model: Higher Ed is increasingly viewing the student experience through a “Product” lens—treating the degree pathway as a user journey that can be optimized via AI to improve retention and student success.25

11. Sector Analysis: Nonprofits and Philanthropy

The Nonprofit sector faces a “Resource vs. Demand” paradox. AI offers the efficiency needed to serve more beneficiaries with fewer resources, yet the sector often lacks the capital to invest in the necessary infrastructure.

11.1 The Digital Divide and HCD for Beneficiaries

HCD is uniquely powerful in the nonprofit sector because it focuses on “Beneficiary Experience,” which is often overlooked in favor of “Donor Experience.”

  • Accessibility: HCD ensures that AI tools (e.g., chatbots for service delivery) are accessible to vulnerable populations who may have low digital literacy or language barriers. By applying HCD, nonprofits ensure that efficiency gains do not come at the cost of exclusion.
  • Program Effectiveness: Nonprofits using HCD to co-create programs with beneficiaries report higher impact. For example, involving community health workers in the design of an AI triage tool ensures the tool reflects the reality of the field, not just the theory of the head office.

11.2 “Lightweight” Agile

Nonprofits are adopting a “scrappy” version of Agile. Lacking large engineering teams, they rely on “Low-Code/No-Code” platforms and “AI Agents” to automate workflows.

  • Donor Experience: The “Product” for a nonprofit is often the “Donor Experience.” Nonprofits that treat fundraising as a data product—using AI to personalize donor outreach—are seeing significant returns. This requires shifting from “Campaign-based” management (episodic) to “Relationship-based” management (continuous, data-driven).32

12. Cross-Sector Synthesis: Emerging Management Architectures

Synthesizing the data across all sectors reveals the emergence of a new “AI-Native” Management Architecture.

12.1 From “People Managing People” to “People Managing Agents”

The rise of "Agentic AI" (AI that can take action, not just generate text) is shifting the manager's role. The Salesforce State of Data and Analytics report describes this as the "Agentic Enterprise": management must now account for a "hybrid workforce" of humans and AI agents.

  • Orchestration: The manager becomes an “orchestrator” of workflows. The metric of success shifts from “hours worked” to “outcomes achieved” (reinforcing the need for OKRs). The manager defines the goal, and the AI agent (supervised by a human) executes the steps.33
  • Data Hygiene as a Management Duty: In an Agentic enterprise, data quality is not an IT problem; it is a business problem. If a manager feeds bad data to an agent, the agent takes bad actions. Thus, “Data Literacy” becomes a core management competency across all departments.34
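The orchestration pattern above, where a manager defines the goal and a supervised agent executes the steps, can be sketched as a simple human-in-the-loop control flow. The step planner, action names, and approval rule below are all hypothetical; the point is the structure, not the specific agent.

```python
# "People managing agents": the manager specifies an outcome, the agent
# proposes steps, and a human supervisor approves each action before it runs.

def run_agent(goal, plan_steps, execute, approve):
    """plan_steps(goal) -> list of proposed actions; each needs approval."""
    results = []
    for step in plan_steps(goal):
        if not approve(step):              # human-in-the-loop gate
            results.append((step, "skipped"))
            continue
        results.append((step, execute(step)))
    return results

# Toy example: the supervisor approves everything except destructive steps.
steps = lambda goal: ["draft_report", "email_customers", "delete_records"]
outcome = run_agent(
    goal="reduce churn",
    plan_steps=steps,
    execute=lambda s: "done",
    approve=lambda s: not s.startswith("delete"),
)
```

Note how success is measured on `outcome` (what was achieved), not on effort, which is exactly the shift toward outcome-based OKRs described above.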

12.2 The “Value Gap” Analysis

A recurring theme is the “Value Gap”—the difference between expected AI ROI and actual realized value.

  • Root Cause: The gap is rarely technical. It is almost always caused by a failure to redesign the process.
  • Solution: High-maturity organizations use “Value Stream Management” (a Lean/Agile concept) to map the process before applying AI. They identify the bottleneck and apply AI precisely there, rather than “sprinkling” AI everywhere. Organizations that focus on workflow redesign see the highest EBIT impact from their AI investments.6
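The "map the process first, then apply AI at the bottleneck" discipline of Value Stream Management can be illustrated with a trivial calculation. The process steps and their durations below are invented for illustration; real value-stream maps would also capture wait time, rework, and handoffs.

```python
# Value-stream sketch: find the bottleneck step (largest share of total
# lead time) before deciding where to apply AI.

def find_bottleneck(steps):
    """steps: list of (name, hours). Returns the costliest step and its share."""
    total = sum(hours for _, hours in steps)
    name, hours = max(steps, key=lambda s: s[1])
    return name, hours / total

claims_process = [
    ("intake", 2),
    ("document_review", 40),   # dominates lead time
    ("adjudication", 8),
    ("payment", 1),
]
bottleneck, share = find_bottleneck(claims_process)
```

Here `document_review` consumes roughly 78% of lead time, so that single step, not the whole process, is where an AI investment would move the EBIT needle.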

13. Strategic Recommendations for the COO

Based on the preceding analysis of how management practices shape AI maturity, the following strategic recommendations are proposed for the Office of the COO.

Recommendation 1: Institutionalize the Product Operating Model

Move away from funding “AI Projects” with start and end dates. Instead, fund “AI Products” or “Capabilities” with persistent teams and long-term OKRs.

  • Action: Restructure the AI organization into cross-functional “Pods” (Data Scientist + Engineer + Product Manager + Domain Expert).
  • Metric: Measure “Time to Value” and “Model Health/Drift” rather than “Project Completion.”

Recommendation 2: Mandate “Human-First” Prototyping (HCD)

Prohibit the development of any AI tool without a preceding HCD discovery phase.

  • Action: Implement “Dual-Track Agile” where user research and design run one sprint ahead of engineering.
  • Metric: Track “User Rejection Rate” (percentage of AI outputs ignored/overridden by human users) as a key quality indicator.
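The "User Rejection Rate" metric is straightforward to operationalize once AI suggestions are logged with their disposition. The event schema below (an `action` field of accepted/overridden/ignored) is an assumption about how such a log might look.

```python
# User Rejection Rate: share of AI suggestions that humans override or ignore.

def user_rejection_rate(events):
    """events: list of dicts with an 'action' of accepted/overridden/ignored."""
    total = len(events)
    if total == 0:
        return 0.0
    rejected = sum(1 for e in events if e["action"] in ("overridden", "ignored"))
    return rejected / total

log = [
    {"suggestion_id": 1, "action": "accepted"},
    {"suggestion_id": 2, "action": "overridden"},
    {"suggestion_id": 3, "action": "ignored"},
    {"suggestion_id": 4, "action": "accepted"},
]
rate = user_rejection_rate(log)   # 2 of 4 suggestions rejected -> 0.5
```

A rising rate is an early warning that the model's outputs are losing user trust, often before any accuracy metric degrades.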

Recommendation 3: Implement “Psychological Safety” as a KPI

Recognize that fear is the enemy of AI adoption. If employees fear that AI will replace them, they will sabotage or underutilize it.

  • Action: Launch a “Change Management” program (utilizing ADKAR for individuals and Kotter for the enterprise) that explicitly frames AI as “Augmentation” (Superagency).
  • Metric: Survey “Psychological Safety” quarterly. Correlate this score with AI adoption rates in different business units to identify toxic pockets where AI adoption will fail.

Recommendation 4: Stabilize the “Agile-AI” Intersection

Acknowledge the “DORA Dip.” Do not punish teams for a temporary drop in velocity when they first adopt AI.

  • Action: Invest in “Platform Engineering.” Build a centralized infrastructure that handles the heavy lifting of AI (compliance, security, compute) so that Agile teams can focus on the business logic.
  • Metric: Track “Developer Experience” and “Platform Adoption” alongside traditional DORA metrics.
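Two of the traditional DORA metrics referenced above can be computed directly from a deployment log; the sketch below does so for deployment frequency and change failure rate. The log schema is an assumption, and the other two DORA metrics (lead time for changes, time to restore) would additionally require timestamps.

```python
# Compute two DORA metrics from a hypothetical deployment log.

def dora_snapshot(deploys, weeks):
    """deploys: list of dicts with a boolean 'failed' flag."""
    total = len(deploys)
    failures = sum(1 for d in deploys if d["failed"])
    return {
        "deploys_per_week": total / weeks,       # deployment frequency
        "change_failure_rate": failures / total, # failed changes / all changes
    }

log = [{"failed": False}] * 8 + [{"failed": True}] * 2
snapshot = dora_snapshot(log, weeks=2)   # 5 deploys/week, 20% failure rate
```

Tracking these alongside platform-adoption and developer-experience surveys makes the "DORA Dip" visible as a measured, temporary effect rather than a cause for blame.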

Recommendation 5: Align Strategy via AI-Driven OKRs

Use AI to close the loop between strategy and execution.

  • Action: Implement an OKR platform that utilizes predictive analytics to flag “At-Risk” objectives early.
  • Metric: “Strategy-to-Execution Cycle Time.”
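One simple way an OKR platform might flag "At-Risk" objectives is to project a key result's progress linearly to the end of the quarter and compare it with the target. The thresholds, data shapes, and naive linear projection below are all illustrative assumptions; a real platform would use richer predictive models.

```python
# Flag a key result as at-risk if its projected end-of-quarter progress
# falls short of (threshold * target).

def at_risk(progress_history, target, weeks_total, threshold=0.9):
    """progress_history: cumulative progress (0..target) per elapsed week."""
    weeks_elapsed = len(progress_history)
    current = progress_history[-1]
    rate = current / weeks_elapsed       # average weekly progress so far
    projected = rate * weeks_total       # naive linear projection
    return projected < threshold * target

# KR at 30% of target after 6 of 13 weeks projects to ~65% -> flagged.
flagged = at_risk([0.05, 0.10, 0.15, 0.20, 0.25, 0.30],
                  target=1.0, weeks_total=13)
```

Flagging six weeks in, rather than at the quarterly review, is the mechanism behind the shorter strategy-to-execution cycle time this recommendation targets.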

14. Conclusion

The investigation into North American organizations reveals that AI Maturity is not a function of purchasing power, but of organizational will and architectural design. The correlation is clear: organizations that have embraced Modern Management Practices—specifically the quartet of Agile (Execution), OKRs (Alignment), Product Operating Models (Structure), and Human-Centred Design (Usability/Ethics)—are the ones converting AI hype into EBIT reality.

The divergence between the “AI Haves” (Financial Services, Tech) and the “AI Have-Nots” (Nonprofits, parts of Healthcare) is largely a divergence in management maturity. The “Have-Nots” are often trapped in rigid, waterfall, hierarchical structures that stifle the iterative learning required for AI. The “Haves” have built fluid, data-centric, psychologically safe environments where humans and machines collaborate in feedback loops.

For the COO, the mandate is clear: The path to AI maturity is not through the data center, but through the organizational chart. It requires dismantling the silos of the 20th-century corporation and rebuilding them into the responsive, product-centric, and human-empowered networks of the 21st.

Detailed Pattern Analysis & Research Synthesis

Table 3: The Impact of Management Practices on AI Maturity Metrics

| Management Practice | Associated AI Outcome | Key Statistic/Finding | Source |
| --- | --- | --- | --- |
| Agile Methodology | Higher Success Rate | AI/software projects have 42% success vs 13% for Waterfall. | 1 |
| Product Operating Model | Financial Performance | Top-quartile POM maturity correlates with 60% higher shareholder returns. | 5 |
| OKRs (AI-Enhanced) | Strategic Agility | 52% faster strategy-to-execution cycle; 43% better strategic agility. | 4 |
| Human-Centred Design | Adoption & Safety | Only 5% of firms use user research pre-release; HCD correlates with higher adoption. | |
| Psychological Safety | Adoption Success | 83% of leaders report it has a measurable impact on AI success. | 7 |
| Change Management (CM) | Outcome Realization | Organizations investing in CM are 1.6x more likely to exceed AI goals. | 26 |

Table 4: Sector Maturity & Key Management Characteristics

| Sector | AI Maturity Level | Dominant Management Practice | Key Challenge | Leading Organizations (Examples) |
| --- | --- | --- | --- | --- |
| Financial Services | High (Optimization) | "AI Factory" / Product Operating Model | Legacy Core Systems, Regulatory Compliance | JPMC, Capital One, RBC 9 |
| Technology | Very High (Transformation) | Platform Engineering, "Dogfooding" | Managing "Shadow AI", Talent Retention | Microsoft, Google, Salesforce 36 |
| Healthcare | Medium (Adoption) | HCD for Clinical Workflows, Governance-Heavy Agile | Data Privacy (HIPAA), Fragmented Data | Mayo Clinic, United Concordia 21 |
| Higher Education | Low-Medium (Awareness) | "Service" Model (Enrollment), Student-Centric Design | Digital Divide (Student vs Faculty), Budget | UC San Diego 25 |
| Nonprofit | Low (Exploration) | Lightweight Agile, Beneficiary-Centric HCD | Resource Constraints, Lack of Digital Strategy | Pacific Clinics 32 |

Works cited

  1. Agile vs. Waterfall: Comparing Success Rates in Project Management, accessed December 19, 2025, https://www.agilegenesis.com/post/agile-vs-waterfall-comparing-success-rates-in-project-management
  2. Highlights from the 2024 DORA State of DevOps Report – DX, accessed December 19, 2025, https://getdx.com/blog/2024-dora-report/
  3. Announcing the 2024 DORA report | Google Cloud Blog, accessed December 19, 2025, https://cloud.google.com/blog/products/devops-sre/announcing-the-2024-dora-report
  4. OKR Execution 3.0: When AI Meets Human Performance | Medium, accessed December 19, 2025, https://medium.com/@muthu16598/ai-driven-okr-execution-239ce74ab039
  5. Bottom-line benefit of the product operating model | McKinsey, accessed December 19, 2025, https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-bottom-line-benefit-of-the-product-operating-model
  6. The new economics of enterprise technology in an AI world | McKinsey, accessed December 19, 2025, https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/the-new-economics-of-enterprise-technology-in-an-ai-world
  7. Psychological safety’s role in AI initiatives under the microscope in Infosys and MIT Technology Review Insights report, accessed December 19, 2025, https://www.mi-3.com.au/18-12-2025/psychological-safetys-role-ai-initiatives-under-microscope-infosys-and-mit-technology
  8. Infosys and MIT Technology Review Insights Report Reveals the Critical Role of Psychological Safety in Driving AI Initiatives — with 83% of Business Leaders Reporting a Measurable Impact – Seeking Alpha, accessed December 19, 2025, https://seekingalpha.com/pr/20341510-infosys-and-mit-technology-review-insights-report-reveals-the-critical-role-of-psychological
  9. 2025 Evident AI Banking Index: Who’s Leading in AI? – Teradata, accessed December 19, 2025, https://www.teradata.com/insights/articles/which-banks-are-leading-in-ai
  10. Artificial Intelligence (AI) – Charity Digital Skills Report, accessed December 19, 2025, https://charitydigitalskills.co.uk/report/detailed-findings/artificial-intelligence/
  11. 200+ AI Statistics & Trends for 2025: The Ultimate Roundup – Fullview AI, accessed December 19, 2025, https://www.fullview.io/blog/ai-statistics
  12. Everyone Uses AI. Almost No One Wins — Yet | by Eray Alguzey | Dec, 2025 – Medium, accessed December 19, 2025, https://medium.com/@ealguzey/everyone-uses-ai-almost-no-one-wins-yet-5b855286511f
  13. Superagency in the workplace: Empowering people to unlock AI’s full potential – McKinsey, accessed December 19, 2025, https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work
  14. AI is transforming work, but people decide its success – Projective Group, accessed December 19, 2025, https://www.projectivegroup.com/ai-is-transforming-work-but-people-decide-its-success/
  15. AI in Healthcare: A Provider’s Guide to AI Maturity – TELUS Digital, accessed December 19, 2025, https://www.telusdigital.com/insights/data-and-ai/article/ai-in-healthcare-provider-guide
  16. Grow Enterprise AI Maturity for Bottom-Line Impact | MIT CISR, accessed December 19, 2025, https://cisr.mit.edu/publication/2025_0801_EnterpriseAIMaturityUpdate_WoernerSebastianWeillKaganer
  17. Agile vs Waterfall: Why Agile Wins for Unique Projects | Catapult CX, accessed December 19, 2025, https://catapult.cx/blog/agility-versus-waterfall-in-the-age-of-the-unique-product/
  18. DORA Report 2024 – A Look at Throughput and Stability – Alt + E S V – RedMonk, accessed December 19, 2025, https://redmonk.com/rstephens/2024/11/26/dora2024/
  19. Agile adoption: Evolving delivery with AI and platforms – Pyramid Consulting, accessed December 19, 2025, https://pyramidci.com/blog/is-agile-dying-or-just-growing-up/
  20. Unleash the Power of OKRs in Digital.ai Agility, accessed December 19, 2025, https://digital.ai/catalyst-blog/unleash-the-power-of-okrs-in-digital-ai-agility/
  21. United Concordia Dental Achieves 75% AI Adoption Rate Using Prosci ADKAR Model, accessed December 19, 2025, https://www.prosci.com/resources/success-stories/united-concordia-dental
  22. Federated Artificial Intelligence for Product-Led Growth | by Khmaïess Al Jannadi – Medium, accessed December 19, 2025, https://medium.com/@jannadikhemais/federated-artificial-intelligence-for-product-led-growth-a-freemium-model-for-scalable-and-fa298bcb129b
  23. Why Current Data Architectures Are Failing AI: A 2025 Modernization Guide – NStarX Inc., accessed December 19, 2025, https://nstarxinc.com/blog/why-current-data-architectures-are-failing-ai-a-2025-modernization-guide/
  24. The state of AI in 2025: Agents, innovation, and transformation – McKinsey, accessed December 19, 2025, https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
  25. AI Playbook: A Comprehensive Strategy for Higher Education | EdTech Magazine, accessed December 19, 2025, https://edtechmagazine.com/higher/article/2025/10/ai-playbook-comprehensive-strategy-higher-education-perfcon
  26. AI transformation and culture shifts | Deloitte US, accessed December 19, 2025, https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intelligence/articles/build-ai-ready-culture.html
  27. The Importance of Change Management in Introducing AI-Based Diagnostics in Hospitals, accessed December 19, 2025, https://interreg-baltic.eu/project-posts/caidx/now-available-caidx-implementation-and-change-management-guide-caidx-icm-guide/
  28. Mastering change management in the age of AI: A guide for professionals – Multiverse, accessed December 19, 2025, https://www.multiverse.io/en-GB/blog/ai-change-management
  29. Evident – Here’s the 2025 Evident AI Index – Banking Brief, accessed December 19, 2025, https://evidentinsights.com/bankingbrief/heres-the-2025-evident-ai-index/
  30. AI Adoption in Education: A New Era for Undergraduate Programs – Liaison International, accessed December 19, 2025, https://www.liaisonedu.com/resources/blog/ai-adoption-in-education-a-new-era-for-undergraduate-programs/
  31. How Artificial Intelligence is Transforming Higher Education Marketing and Enrollment Management, accessed December 19, 2025, https://www.career.org/common/Uploaded%20files/AI%20Guide/CECU%20AI%20Task%20Force%20Guide%20Final.pdf
  32. Salesforce Introduces the Future of Nonprofit Technology with Real-Time Data, AI, and Automation, accessed December 19, 2025, https://www.salesforce.com/news/stories/nonprofit-cloud-innovations-2023/
  33. Study: 84% of Technical Leaders Need Data Overhaul for AI Strategies to Succeed, accessed December 19, 2025, https://www.salesforce.com/news/stories/data-analytics-trends-2026/
  34. Salesforce State of Data and Analytics, 2nd Edition, accessed December 19, 2025, https://www.salesforce.com/en-us/wp-content/uploads/sites/4/documents/research/salesforce-state-of-data-and-analytics-2nd-edition.pdf
  35. 2025 Product-Led Growth Metrics: Case Studies And Predictions – Troy Lendman, accessed December 19, 2025, https://troylendman.com/2025-product-led-growth-metrics-case-studies-and-predictions/
  36. The art of AI maturity—North America | Accenture, accessed December 19, 2025, https://www.accenture.com/content/dam/system-files/acom/custom-code/ai-maturity/Accenture-Art-AI-Maturity-NA.pdf

This article was written with my brain and two hands (primarily) with the help of Google Gemini, ChatGPT, Claude, and other wondrous toys.
