Andrew Burt on Proactive Strategies for Effective AI Incident Management.
Subscribe: Apple • Spotify • Overcast • Pocket Casts • AntennaPod • Podcast Addict • Amazon • RSS.
Andrew Burt is co-founder of both Luminos.Law and Luminos.AI, which help companies mitigate and manage AI risks. We dive into the critical topic of AI incident response and the unique challenges it poses compared to traditional software incidents. We discuss the importance of clearly defining AI incidents, detecting them beyond simple accuracy issues, and the phases of a comprehensive incident response plan, including preparation, containment, and recovery. Emphasizing the need for proactive containment plans, we outline how companies can better prepare for and mitigate the impact of AI-related incidents.
Reading List from Andrew Burt:
- What is an AI Alignment Platform?
- AI Incident Database
- AI incident response plans: Not just for security anymore
- Flat Light: data protection for the disoriented, from policy to practice
- What is a Cybersecurity Incident?
- How to Red Team a Gen AI Model
- CyberInsecurity: The Cost of Monopoly
Interview highlights – key sections from the video version:
- The Lack of Alerts for AI Incidents and the Need for Preparation
- Defining AI Incidents in the Age of Large Language Models
- Best Practices for Defining AI Incidents and Varying Risk Appetites
- Two Types of Companies: Prepared vs. Unprepared for AI Incidents
- The Importance of Short-Term Containment Plans in AI Incident Response
- Connecting Incident Response to AI Model Development and Testing
- Mean Time to Respond vs. Mean Time to Repair in AI Systems
- Incident Response for External APIs vs. Internally Controlled Models
- Phases of AI Incident Response: Preparation, Identification, Containment, Eradication, Recovery, and Lessons Learned
- Leveraging Existing Incident Response Frameworks for AI Systems
- The Need for Updated Playbooks for Generative AI Incident Response
- Addressing Concerns About Increased Testing and Preparation for AI Systems
- Assessing the State of Incident Response: Defining Incidents and Detection
- Tools and Best Practices for Defining and Detecting AI Incidents
- Resources and Support for AI Incident Response
- Discussion on SB 1047 and Regulating Frontier Models
- Balancing Regulation and Innovation in the Rapidly Evolving AI Landscape
- Concluding Thoughts: The Importance of AI Incident Response in a World of AI-Infused Software
Related content:
- A video version of this conversation is available on our YouTube channel.
- What is an AI Alignment Platform?
- Judicial AI: A Legal Framework to Manage AI Risks
- A Critical Look at Red-Teaming Practices in Generative AI
- Reducing AI Hallucinations: Lessons from Legal AI
- Nestor Maslej → 2024 Artificial Intelligence Index
- Andrew Burt → Navigating the Risk Landscape – A Deep Dive into Generative AI
- Dan Geer and Andrew Burt → Security and privacy for the disoriented
If you enjoyed this episode, please support our work by encouraging your friends and colleagues to subscribe to our newsletter.
[Ben Lorica is an advisor to Luminos.Law, Luminos.AI, and other startups.]

