The Data Exchange Podcast: Andrew Burt on the state of AI risk mitigation and responsible AI.
Subscribe: Apple • Android • Spotify • Stitcher • Google • RSS.
In this episode of the Data Exchange I speak with Andrew Burt, co-founder and Managing Partner of BNH.ai, a new law firm focused on AI compliance, risk mitigation, and related topics. BNH is the first law firm run by lawyers and technologists dedicated to helping companies identify and mitigate risks associated with machine learning and AI. Andrew has been on this podcast before, and I asked him back to give an update on how companies are doing at mitigating those risks. Because he speaks with companies and legal counsel across many sectors, I wanted to get a better sense from him of the true state of AI risk mitigation and of the adoption of responsible AI tools and practices.
- Andrew Burt: ❛ I think one of the surprises and one of the interesting things is that there is no single customer or client profile for us. We have clients in all industries, all shapes and sizes. At one end, we have some clients that are just starting to seriously invest in AI, and they don’t have AI models in production. But they know that they can’t afford to have their AI get them into trouble. And so they’re doing the very strategic, intelligent thing we would recommend, which is investing in risk and liability minimization before something goes wrong.
Because when you move fast and break things, things break. And if you’re doing something that’s important, if you’re working in any sector, or making decisions that matter at a societal level, which I think many data scientists want to do, breaking things is not something to celebrate. It’s a liability: it can get you fined. … When things break, if what you’re doing is very important, your brand can suffer tremendously … you can hurt consumers.
So I think what you really want is to find some type of balance between moving fast and minimizing risk.
Download a complete transcript of this episode by filling out the form below:
- A video version of this conversation is available on our YouTube channel.
- “Navigate the road to Responsible AI”
- Andrew Burt: “Identifying and mitigating liabilities and risks associated with AI”
- Rumman Chowdhury: “Responsible AI meets Reality”
- Ram Shankar: “Securing machine learning applications”
- Dan Geer and Andrew Burt: “Security and privacy for the disoriented”
- Jack Morris: “Increasing the robustness of natural language applications”
Subscribe to our Newsletter:
We also publish a popular newsletter where we share highlights from recent episodes, trends in AI / machine learning / data, and a collection of recommendations.
Disclosure: I am an advisor to BNH.ai.
[Image by Briam Cute from Pixabay.]