The Data Exchange Podcast: Rayid Ghani and Andrew Burt on how organizations are building models that can be trusted to achieve fair and equitable outcomes.
Subscribe: Apple • Android • Spotify • Stitcher • Google • RSS.
This week’s guests are Rayid Ghani, Distinguished Career Professor in the Machine Learning Department and the Heinz College of Information Systems and Public Policy at Carnegie Mellon University, and Andrew Burt, co-founder and Managing Partner of BNH.ai[1], a new law firm focused on AI compliance, risk mitigation, and related topics. BNH is the first law firm run by lawyers and technologists dedicated to helping companies identify and mitigate risks associated with machine learning and AI.
We discussed a range of topics including:
- Addressing bias and fairness in machine learning.
- One of Rayid’s projects, Aequitas, an open source bias audit toolkit for machine learning developers, analysts, and policymakers.
- Aspects of responsible AI, including explainability and interpretability, privacy, safety, and reliability.
- The use of machine learning and data science in public policy.
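To make the bias-audit idea above concrete, here is a minimal sketch of the kind of group-level check a toolkit like Aequitas automates: computing false positive rates per demographic group and the disparity relative to a reference group. This is plain Python on made-up toy data, not Aequitas's actual API, which offers a much richer set of metrics and reports.

```python
# Toy group-level bias audit: compare false positive rates (FPR)
# across groups and report each group's disparity relative to a
# reference group. Data and group names are invented for the example.

from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, true_label, prediction), binary labels."""
    fp = defaultdict(int)   # predicted 1 when actual label was 0
    neg = defaultdict(int)  # actual negatives per group
    for group, label, pred in records:
        if label == 0:
            neg[group] += 1
            if pred == 1:
                fp[group] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

def disparity(rates, reference):
    """Ratio of each group's FPR to the reference group's FPR."""
    ref = rates[reference]
    return {g: r / ref for g, r in rates.items()}

records = [
    # (group, true label, model prediction)
    ("A", 0, 0), ("A", 0, 0), ("A", 0, 1), ("A", 1, 1),
    ("B", 0, 1), ("B", 0, 1), ("B", 0, 0), ("B", 1, 1),
]
rates = false_positive_rates(records)
print(rates)                  # FPR per group
print(disparity(rates, "A"))  # disparity relative to group A
```

Here group B's false positive rate is twice group A's, the sort of disparity an audit would flag for review against whatever fairness definition the organization has chosen.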
Rayid Ghani:
I think a lot of the research community in this space is kind of detached from real-world use cases, real problems, real data, and real people who are impacted by these problems. So a lot of people who are doing this research are coming at it from a theoretical side, while there are practical needs on the ground. And what’s starting to happen is that certain areas are coming together. … So I don’t think the tools are there. But I also don’t think the methodology is there. I don’t think companies know what they want. They don’t know what fairness means, or how to elicit that and how to define that, and then how to map that to operational requirements. So I don’t think we’re there yet. But I think we’re moving towards that.
Download a complete transcript of this episode by filling out the form below:
Related content:
- A video version of this conversation is available on our YouTube channel.
- Data Cascades: Why we need feedback channels throughout the machine learning lifecycle
- “Navigate the road to Responsible AI”
- Andrew Burt: “How Companies Are Investing in AI Risk and Liability Minimization”
- Rumman Chowdhury: “Responsible AI meets Reality”
- Ram Shankar: “Securing machine learning applications”
- Dan Geer and Andrew Burt: “Security and privacy for the disoriented”
- Steven Feng and Eduard Hovy: “Data Augmentation in Natural Language Processing”
Subscribe to our Newsletter:
We also publish a popular newsletter where we share highlights from recent episodes, trends in AI / machine learning / data, and a collection of recommendations.
[1] I am an advisor to BNH.ai.
[Image: Pair Programming by Ken Bauer on Flickr.]