Navigating the Risk Landscape: A Deep Dive into Generative AI

Andrew Burt on Lessons from the FTC’s Probe into OpenAI.


Subscribe: AppleSpotify OvercastGoogleAntennaPodPodcast AddictAmazon •  RSS.

Andrew Burt is the Managing Partner at Luminos.Law1, the first law firm focused on helping teams manage the privacy, fairness, security, and transparency of their AI and data — including generative AI systems. This conversation delves into the topic of AI risk mitigation, with a particular focus on generative AI. We explore the state of risk and compliance in light of generative AI. This episode further explores the challenges and risks posed by AI, and the implications of the FTC probe into OpenAI, as well as the NIST AI Risk Management Framework.

Subscribe to the Gradient Flow Newsletter

 

A Mind Map of the questions posed by the FTC to OpenAI.

    ❛ There’s just no way to prevent bad stuff from happening with Generative AI. So what that means is, rather than prevention, incident response is really important. You want to be able to detect things when they go wrong. And that also means that testing is way, way, way more important with Generative AI systems than it is with traditional AI systems. ❜
    Andrew Burt, Managing Partner at Luminos.Law

 
Interview highlights – key sections from the video version:

 


Learn how to build practical, robust and safe AI applications by attending the AI Conference in San Francisco (Sep 26-27). Use the discount code FriendsofBen18 to save 18% on your registration.



Related content:


If you enjoyed this episode, please support our work by encouraging your friends and colleagues to subscribe to our newsletter:


[1] Ben Lorica is an advisor to Luminos.Law and other startups.