2025 AI Governance Survey

Ben Lorica and David Talby on Generative AI Adoption Rates, Risk Management Gaps, and Implementation Barriers.

Subscribe: AppleSpotify OvercastPocket CastsAntennaPodPodcast AddictAmazon •  RSS.

Ben Lorica and David Talby present findings from the 2025 AI Governance Survey of 350+ respondents, revealing that while 30% have AI models in production, there are significant gaps in risk management practices. The survey highlights that speed to market is the primary barrier to implementing proper governance, with surprisingly low rates of monitoring and incident response planning, particularly among smaller companies despite growing regulatory awareness. [This episode is based on an online presentation, see the YouTube version.]

Subscribe to the Gradient Flow Newsletter

Interview highlights – key sections from the video version:

Jump to transcript


Related content:


Support our work by subscribing to our newsletter📩


Transcript

Below is a heavily edited excerpt, in Question & Answer format.

Survey Overview and AI Adoption Trends

What was the purpose of the AI governance survey and who participated?

The survey aimed to understand how companies already using generative AI are handling governance and risk management. Over 350 professionals participated, primarily from U.S.-based firms across technology, healthcare, finance, and other industries. Approximately 43% were director-level or above, with about one-quarter working at enterprises with over 5,000 employees and one-third at firms under 500 employees. The survey specifically screened out companies with no generative AI activity, so the results reflect teams already on the adoption path.

What is the current state of generative AI deployment in organizations?

About 30% of respondents have at least one model in production, with another 40% running pilots. The remaining third are still evaluating or experimenting. This indicates that while adoption is progressing, most organizations are still in early stages of their generative AI journey.

How many generative AI use cases are organizations planning to deploy?

Technical leaders are more aggressive in their deployment plans, with most targeting 3-5 use cases in the next 12 months. The broader respondent base typically plans 1-2 use cases. Company size significantly impacts these plans – approximately 75% of small companies (under 500 employees) are limiting themselves to 1-2 use cases, likely due to resource constraints. Large companies show more ambitious deployment schedules.

AI Development vs. Deployment

How are the roles of AI developer and AI deployer evolving?

The distinction between “AI developer” and “AI deployer” is increasingly blurred. Nearly half of technical leaders surveyed consider themselves both developers and deployers. This shift reflects the reality that most “AI development” today involves post-training activities rather than training models from scratch. Teams see themselves as developers once they invest significant effort in adding proprietary data, building guardrails, implementing RAG systems, and creating reasoning layers that embody their intellectual property – even if the base model weights came from elsewhere.

What does being an “AI developer” mean in today’s landscape?

Being an AI developer today centers on “post-training” activities that customize and enhance existing foundation models. Key activities include:

  • Prompt engineering and instruction tuning
  • Supervised fine-tuning using labeled prompt/response pairs
  • Reinforcement fine-tuning for reasoning-heavy tasks
  • Retrieval-Augmented Generation (RAG) and structured context injection
  • Model distillation and compression for efficient inference
  • Building custom guardrails and alignment mechanisms

Why should teams focus on post-training rather than training models from scratch?

Pre-training frontier-scale models requires budgets, talent, and infrastructure that only a handful of labs possess. Post-training allows teams to adapt open-weights or API models quickly and cost-effectively with domain specificity. Additionally, new foundation models arrive every few months, so investing months in extensive fine-tuning can become obsolete when the next release offers better baseline performance. Teams should optimize for quick model swapping and focus on downstream tasks like grounding, alignment, and guardrails.

AI Governance and Regulatory Frameworks

What is the current adoption rate of AI governance frameworks in organizations?

Over half of companies report having formal AI policy frameworks and incident response playbooks, though adoption varies significantly by company size. Large enterprises are far more likely to have dedicated AI governance offices and annual AI safety training. However, the substance of these policies varies dramatically – some are as basic as instructing employees not to upload confidential information to ChatGPT, while others are comprehensive frameworks addressing multiple risk dimensions.

Which regulatory frameworks are organizations most aware of and why?

The NIST AI Risk Management Framework (AI RMF) stands out as the most recognized framework, particularly among U.S. technical leaders. This prominence stems from NIST’s strong track record with cybersecurity standards that have become de facto legal standards. The expectation is that following the NIST AI RMF will be considered “commercially reasonable” risk mitigation, potentially offering legal safe harbor in future disputes. The framework is built around measuring, managing, and mapping risks systematically.

How can organizations keep up with the rapidly evolving regulatory landscape?

The regulatory environment is changing rapidly, with 135 different state AI laws passed in the U.S. in 2024 and over 800 submitted at the start of the year. Organizations should:

  • Subscribe to curated policy trackers for NIST drafts, state-level AI bills, and federal guidance
  • Analyze regulatory actions and complaint letters (like the FTC’s letter to OpenAI) to understand what documentation and processes regulators expect
  • Maintain a living compliance checklist mapping controls to owners, evidence sources, and review cadences
  • Leverage free resources from organizations like Pacific AI that publish quarterly-updated AI governance policy suites

Risk Management and Monitoring

What AI risk management measures are organizations currently implementing?

Monitoring for accuracy, misuse, and drift is the most widely adopted practice, with just under half of respondents implementing it. Other common measures include formal processes for evaluating AI risks. However, adoption rates for comprehensive risk management practices remain concerningly low – less than 20% have implemented model cards, dedicated incident reporting tools, or regular red teaming exercises. This is particularly alarming for regulated industries like healthcare and finance where such controls should be standard.

What are the essential components of an AI incident response plan?

A comprehensive AI incident response plan should include:

  1. Preparation: Clear definition of what constitutes an “AI incident” for your organization
  2. Identification: Mechanisms like monitoring to detect incidents when they occur
  3. Containment and Eradication: Processes to isolate issues and remove root causes
  4. Recovery: Procedures to restore systems to normal operation
  5. Post-Mortem: Analysis to prevent recurrence and capture lessons learned

The primary goal is minimizing time to recovery, recognizing that probabilistic models will eventually misbehave. While nearly half of companies claim to have incident response playbooks, adoption is much lower among small companies.

How should organizations implement guardrails for AI applications?

Guardrails are essential design patterns that work by intercepting prompts and responses:

  • Input guardrails screen user prompts for personally identifiable information (PII), proprietary data, prompt injection attacks, and inappropriate requests
  • Output guardrails check model responses for hallucinations, toxicity, sensitive topics, or off-brand content

While major cloud providers offer embedded guardrail services, organizations often need custom guardrails for their specific domains. Healthcare applications require guardrails around end-of-life questions and patient privacy, while financial services need controls for regulatory compliance and investment advice limitations. The key lesson is that generic solutions are often insufficient for specialized use cases.

Organizational Challenges and Solutions

What are the main barriers preventing organizations from implementing AI governance?

The top barriers vary by organization size:

  • Large companies: Speed-to-market pressure and lack of internal knowledge are primary concerns
  • Small companies: Budget constraints and lack of allocated resources are the main blockers
  • All organizations: The tension between rapid deployment and comprehensive safety implementation creates ongoing challenges

What operational challenges do teams face when managing AI risk?

The biggest operational challenge is fragmentation in two forms:

  1. Tool fragmentation: Different best-of-breed tools for bias detection, copyright scanning, privacy protection, and other risk areas don’t integrate well
  2. Persona fragmentation: Data scientists, legal teams, compliance officers, and product managers use different systems with incompatible workflows

This leads to inefficient processes where approval workflows bounce between GitHub tickets, shared drives, email threads, and spreadsheets – adding weeks or months to deployment timelines.

What would a unified AI governance platform provide?

An ideal unified platform should:

  • Provide a single environment where all stakeholders (data scientists, ML engineers, lawyers, compliance officers) can collaborate and track progress
  • Embed controls and workflows from frameworks like NIST AI RMF directly into the development lifecycle
  • Offer built-in, commercially reasonable tests for common risks while allowing teams to plug in specialized tools
  • Automate control mapping to various regulatory frameworks
  • Enable continuous testing pipelines so risk checks run with every model or data change

Practical Recommendations

What immediate steps should organizations take to improve AI risk management?

  1. Instrument and monitor first: Begin tracking prompts, outputs, latency, cost, accuracy, and user feedback – governance starts with observability
  2. Implement basic guardrails and red teaming: These are inexpensive compared to breach consequences
  3. Create minimal incident response capabilities: Accept that models will misbehave and define processes to handle issues quickly
  4. Leverage existing frameworks: Use resources like NIST AI RMF and OWASP LLM security handbooks rather than creating policies from scratch
  5. Integrate into CI/CD pipelines: Embed risk checks into automated workflows to avoid high-friction manual processes

How should organizations prepare for the future of AI development?

Teams should optimize their platforms for rapid model replacement and post-training activities. Design evaluation harnesses and post-training pipelines that can swap base models in days, not months. Focus on building excellence in areas that persist across model versions: data curation, context injection, guardrails, monitoring, and domain-specific safety measures. By embedding these fundamentals early, teams can maintain development velocity while meeting the rising bar for trustworthy AI in production systems.