Andrew Burt on How to Navigate the Disconnect Between AI and Legal Compliance.
Subscribe: Apple • Spotify • Overcast • Pocket Casts • AntennaPod • Podcast Addict • Amazon • RSS.
Andrew Burt is CEO of Luminos AI, a startup that provides AI and legal teams the tools they need to reduce AI liabilities and sign off on AI risks. This episode explores the challenges of deploying AI systems at scale, including the disconnect between technical and legal/compliance teams, the lengthy legal sign-off process, and the unique risks posed by generative AI models. The discussion highlights the need for an “AI alignment platform” to streamline risk management and the evolving regulatory landscape, with a focus on increasing state-level AI regulations that companies will need to navigate.
Interview highlights – key sections from the video version:
- Introduction and Overview of AI Challenges
- The Disconnect Between Technology and Risk Teams
- Barriers to AI Adoption and Legal Compliance
- Internal vs. External AI Applications: Risk Considerations
- Challenges of Bias in AI Systems and Legal Oversight
- Generative AI: Copyright, Privacy, and New Risks
- The Role of AI Alignment Platforms
- Managing Multi-Agent AI Systems and Risk Complexity
- Regulatory Landscape and the Role of States
- Closing Thoughts on AI Risk Management
Related content:
- A video version of this conversation is available on our YouTube channel.
- What Is An AI Alignment Platform?
- What AI Teams Need to Know for 2025
- Red Teaming AI: Why Rigorous Testing is Non-Negotiable
- Shreya Rajpal → The Essential Guide to AI Guardrails
- Andrew Burt → From Preparation to Recovery: Mastering AI Incident Response
If you enjoyed this episode, consider supporting our work by leaving a small tip here and inviting your friends and colleagues to subscribe:
Transcript.
Below is a heavily edited excerpt, in Question & Answer format.
What disconnect do you see between technical teams and legal/compliance teams when it comes to AI deployment?
There’s a significant disconnect between people thinking about big philosophical AI issues versus those building and deploying AI systems. While some focus on existential questions, technical teams in the trenches are frustrated because AI deployment takes too long when legal or compliance gets involved. The biggest barrier to both AI adoption and ensuring AI aligns with our values is figuring out how to combine these different teams and expertise sets effectively. Without bridging this gap, we’ll continue to have philosophical debates on one hand and AI being deployed without proper review on the other.
How do internal versus external AI applications differ in terms of risk and oversight requirements?
The fundamental question is “what could go wrong?” This determines the level of oversight needed. Internal applications with humans in the loop (like code suggestion tools or OCR for processing receipts) generally require less oversight. However, internally used systems that affect people’s opportunities or evaluations – like HR tools for skills assessment – can still pose significant risks.
External applications, especially those touching customers or the public, typically need much more oversight. Examples include customer segmentation for call centers or facial recognition in public spaces, which have demonstrated disparate performance across racial groups. These systems have real potential to cause harm when deployed without proper review.
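To make the “what could go wrong?” triage concrete, here is a minimal sketch (not from the interview) of how a team might encode it as a simple rule-based risk tier. The field names, tiers, and rules are illustrative assumptions, not a prescribed policy.

```python
# Illustrative rule-based risk tiering for AI use cases.
# All names and rules are assumptions made for this sketch.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    external_facing: bool        # touches customers or the public
    affects_opportunities: bool  # hiring, lending, evaluations, etc.
    human_in_the_loop: bool

def risk_tier(use_case: AIUseCase) -> str:
    """Map a use case to a coarse oversight tier."""
    if use_case.external_facing or use_case.affects_opportunities:
        return "high"    # full legal/compliance review before deployment
    if not use_case.human_in_the_loop:
        return "medium"  # automated internal decisions still need testing
    return "low"         # e.g. code suggestions, OCR on receipts

print(risk_tier(AIUseCase("skills assessment", False, True, True)))  # -> "high"
print(risk_tier(AIUseCase("receipt OCR", False, False, True)))       # -> "low"
```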
What are the main risks that technical teams need to address when deploying AI systems?
The three main risks we focus on are fairness, privacy, and copyright. With generative AI, copyright has become a particularly significant concern. But fairness and bias issues continue to be prevalent across AI applications.
When teams identify potential bias issues, they often seek sign-off from legal or compliance teams. This creates a bottleneck, because there is no way to completely eliminate bias – the real question is what level is acceptable. The assessment typically involves a month-long back-and-forth between technical and legal teams: the technical team gathers documentation, legal requests clarifications, and only then do they determine which quantitative tests need to be run. The full process can take 2-6 months for a single model, and we’ve seen cases where it takes up to 12 months.
How do legal teams evaluate AI systems when they may not have technical expertise?
This is a major challenge. Legal teams need to see bias tests but often don’t know which tests provide adequate coverage or are appropriate. They may consult outside counsel or specialized firms, but the process remains laborious and time-consuming.
Even when technical teams conduct their own testing, they often focus on technically interesting approaches that may be meaningless from a legal defensibility perspective. For risk assessment, you need clear thresholds – knowing what constitutes a pass or fail – ideally anchored in established legal precedent rather than novel metrics without clear standards.
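One widely cited example of a threshold anchored in legal precedent is the “four-fifths rule” for adverse impact from the EEOC’s Uniform Guidelines on Employee Selection Procedures. The sketch below shows roughly what such a check looks like; the data is made up, and a real assessment would involve many more tests plus legal judgment.

```python
# Minimal sketch of the "four-fifths rule" (adverse impact ratio), one example
# of a quantitative test whose pass/fail threshold comes from legal precedent.
# The selection counts below are hypothetical.

def selection_rate(selected: int, total: int) -> float:
    return selected / total

def adverse_impact_ratio(rate_group: float, rate_reference: float) -> float:
    """Ratio of a group's selection rate to the highest group's rate."""
    return rate_group / rate_reference

# Hypothetical outcomes from an AI-assisted screening model.
rates = {
    "group_a": selection_rate(48, 100),
    "group_b": selection_rate(30, 100),
}
reference = max(rates.values())

for group, rate in rates.items():
    ratio = adverse_impact_ratio(rate, reference)
    status = "PASS" if ratio >= 0.8 else "FLAG for review"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {status}")
```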
How have risks evolved with the rise of generative AI and foundation models?
Generative AI has amplified existing risks and introduced new ones, particularly around copyright and intellectual property. The key difference is scale – the three Vs (variety, velocity, and volume) are on an entirely different level with generative AI.
Instead of simply generating predictions, we’re now generating content – text, videos, audio, and images – which puts all the traditional issues on steroids. Bias problems that existed in classification systems now manifest in content generation, potentially at a much larger scale.
While there’s enormous hype around generative AI, actual deployment in high-stakes environments remains limited due to these risk barriers. Most current applications are internal, focused on intelligence augmentation rather than fully autonomous AI.
What new challenges do you see with the emergence of AI agents?
With agents, we’re stacking risks on top of each other exponentially. When you have multiple generative AI systems interacting with each other, traditional human oversight becomes practically impossible. The only viable approach is to carefully build oversight AI into these systems.
Current efforts in this direction, like using “LLM as a judge,” are quite primitive. Even sophisticated examples like Anthropic’s evaluations for honesty, helpfulness, and harmlessness don’t align with real regulatory requirements like FTC oversight – they’re too simplistic. To be legally defensible, you need much more granularity and sophistication, and likely multiple models working together, though this creates latency challenges.
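As a rough illustration of what “more granularity” might look like in practice, here is a sketch of a judge loop that scores a response against several narrower criteria and escalates to a human when judge models disagree. The criteria, prompts, and `complete_fns` callables are all assumptions for the sake of the example; this is not Anthropic’s evaluation setup or a legally vetted test suite.

```python
# Sketch of a multi-criterion, multi-judge check for generated content.
# `complete_fns` are placeholders for whatever model-calling functions you use.
from typing import Callable, Dict, List

CRITERIA = {
    "unsubstantiated_claims": "Does the response make factual claims without support?",
    "discriminatory_content": "Does the response treat protected groups differently?",
    "privacy_leakage": "Does the response reveal personal or confidential data?",
}

def judge_response(
    response: str,
    complete_fns: List[Callable[[str], str]],  # one callable per judge model
) -> Dict[str, str]:
    verdicts = {}
    for name, question in CRITERIA.items():
        answers = []
        for complete_fn in complete_fns:
            prompt = (
                f"{question}\n\nResponse under review:\n{response}\n\n"
                "Answer strictly YES or NO."
            )
            answers.append(complete_fn(prompt).strip().upper().startswith("YES"))
        if all(answers):
            verdicts[name] = "fail"
        elif not any(answers):
            verdicts[name] = "pass"
        else:
            verdicts[name] = "escalate"  # judges disagree: route to a human
    return verdicts

# Example with a trivial stand-in judge that flags nothing:
print(judge_response("The sky is green.", [lambda prompt: "NO"]))
```

Running each criterion separately, rather than asking for a single overall score, is what makes the results traceable enough to discuss with a legal team, at the cost of extra calls and latency.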
What is an AI alignment platform and why is it necessary?
An AI alignment platform is a central place that unifies the management of AI risks holistically across teams with different backgrounds and requirements. Currently, the landscape is completely misaligned – different teams, different risks, different tools – and these silos create bottlenecks that prevent scaling.
The only solution is to align all these elements in a place where approval and oversight can be automated. This isn’t just about having people from different backgrounds collaborate – it’s about creating shared user interfaces, shared language, and shared reports to make communication efficient. It also includes triage capabilities, so human experts can focus their time on the most important risks.
Without such a platform, companies face an impossible choice between deploying AI without proper oversight or not deploying AI at all.
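Below is a minimal sketch of the triage-and-automated-approval idea described above, assuming a simple rule set: low-risk items are approved automatically with an audit trail, and anything higher-risk is routed to legal or compliance review. The fields and rules are illustrative assumptions, not a description of the Luminos product.

```python
# Illustrative triage flow for an alignment platform's approval queue.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskAssessment:
    model_name: str
    fairness_tests_passed: bool
    uses_personal_data: bool
    customer_facing: bool
    notes: List[str] = field(default_factory=list)

def triage(assessment: RiskAssessment) -> str:
    if not assessment.fairness_tests_passed:
        assessment.notes.append("Fairness tests failed or missing.")
        return "needs_legal_review"
    if assessment.customer_facing and assessment.uses_personal_data:
        assessment.notes.append("External use of personal data.")
        return "needs_legal_review"
    assessment.notes.append("Auto-approved; evidence archived for audit.")
    return "auto_approved"

queue = [
    RiskAssessment("internal-ocr", True, False, False),
    RiskAssessment("loan-pricing", True, True, True),
]
for item in queue:
    print(item.model_name, "->", triage(item))
```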
How do you see the regulatory landscape for AI evolving, especially with the change in US administration?
While the federal approach may be more deregulatory under the new administration, we’re likely to see states stepping in to fill the regulatory gap – similar to what happened with privacy laws. There probably won’t be a comprehensive federal AI law, but instead a patchwork of state regulations.
Colorado and Utah have already passed AI laws, and by the end of the next administration, we might have between a dozen and two dozen state-level laws that companies will need to comply with. This fragmented landscape will actually be more challenging for companies, as they’ll need to navigate different requirements across states.
This strengthens the argument for implementing robust risk management and legal compliance frameworks now. Every year that passes, these requirements will become less optional and more necessary for successfully deploying AI at scale.
