The Data Exchange

Navigating the Risk Landscape: A Deep Dive into Generative AI

Andrew Burt on Lessons from the FTC’s Probe into OpenAI.


Subscribe: Apple • Spotify • Overcast • Google • AntennaPod • Podcast Addict • Amazon • RSS.

Andrew Burt is the Managing Partner at Luminos.Law[1], the first law firm focused on helping teams manage the privacy, fairness, security, and transparency of their AI and data — including generative AI systems. This conversation covers AI risk mitigation, with a particular focus on generative AI: the current state of risk and compliance, the challenges and risks these systems pose, the implications of the FTC’s probe into OpenAI, and the NIST AI Risk Management Framework.

Subscribe to the Gradient Flow Newsletter

 

A Mind Map of the questions posed by the FTC to OpenAI.

 
Interview highlights – key sections from the video version:

  1. State of Risk and Compliance in Generative AI
  2. The Rise of Custom Foundation Models, and the FTC’s probe into OpenAI
  3. The challenge of managing risk when generative AI models have so many possible applications
  4. Hallucination
  5. AI Incident Response tabletop exercises
  6. Mind Map from the FTC complaint, and the NIST AI Risk Management Framework
  7. Rethinking how we structure and manage AI projects
  8. Concrete steps that AI teams should be taking today
  9. Luminos.Law

 


Learn how to build practical, robust and safe AI applications by attending the AI Conference in San Francisco (Sep 26-27). Use the discount code FriendsofBen18 to save 18% on your registration.



Related content:


If you enjoyed this episode, please support our work by encouraging your friends and colleagues to subscribe to our newsletter:


[1] Ben Lorica is an advisor to Luminos.Law and other startups.
