The Data Exchange Podcast: Rumman Chowdhury on prevalent practices for building ethical and responsible AI.
Subscribe: Apple • Android • Spotify • Stitcher • Google • RSS.
In this episode of the Data Exchange I speak with Dr. Rumman Chowdhury, founder of Parity, a startup building products and services to help companies build and deploy ethical and responsible AI. Prior to starting Parity, Rumman was Global Lead for Responsible AI at Accenture Applied Intelligence.
Rumman is co-author of one of my favorite recent papers, “Where Responsible AI meets Reality: Practitioner Perspectives on Enablers for Shifting Organizational Practices”. She and her co-authors conducted an ethnographic study comprising 26 semi-structured interviews with people from 19 organizations on four continents. While their focus was on fair ML, many of the observations likely (at least partially) map to other areas of Responsible AI.

I included their paper in a recent post summarizing a series of surveys meant to provide a flavor for how companies are ensuring that their AI and machine learning models are ethical, safe, transparent, and secure. As Rumman and others have pointed out, the call for Responsible AI comes at a time when companies are beginning to incorporate lessons from earlier deployments of machine learning and AI:
- But one of the things that basically every single person we interviewed mentioned is that we have to be proactive, we cannot just be reactive. So there is massive value in the post mortem. But the purpose of a post mortem isn’t just to say, “Oh, I guess that’s how we screwed up, too bad.” It’s to start creating the right kinds of methodologies and infrastructures, or in this case, bias detection methods, etc., so that we do not repeat that mistake again. So it’s been incredibly valuable the last few years to retroactively look at the mistakes that have been made. And we’re now in a position where we can confidently say, here’s where we’re seeing problems over and over, how can we start building solutions?
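To make the “bias detection methods” Rumman mentions a bit more concrete, here is a minimal sketch (not from the episode, and not her or Parity’s method) of the kind of proactive check a team might add after a post mortem: compare a model’s positive-prediction rate across demographic groups and flag large gaps. The group labels, toy data, and the 0.2 tolerance are purely illustrative assumptions.

```python
# Minimal, illustrative bias check: selection rate per group and the
# demographic parity gap (max minus min selection rate). All values here
# are toy/hypothetical.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive predictions for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rates across groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values()), rates

# Toy example: binary predictions from a hypothetical model, one group label per row.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap, rates = demographic_parity_gap(preds, groups)
print(rates)              # {'a': 0.75, 'b': 0.25}
print(f"gap = {gap:.2f}") # 0.50
if gap > 0.2:             # illustrative tolerance, not a standard
    print("Selection-rate gap exceeds tolerance; investigate before shipping.")
```

The point of wiring a check like this into a pipeline is exactly the shift Rumman describes: moving from reactive post mortems to infrastructure that catches a known class of problem before it recurs.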
Subscribe to our Newsletter:
We also publish a popular newsletter where we share highlights from recent episodes, trends in AI / machine learning / data, and a collection of recommendations.
Related content and resources:
- A video version of this conversation is available on our YouTube channel.
- Navigate the road to Responsible AI
- Download the 2020 NLP Survey Report and learn how companies are using and implementing natural language technologies.
- Marco Ribeiro: “Testing Natural Language Models”
- Krishna Gade: “What businesses need to know about model explainability”
- Xiyin Zhou: “Detecting Fake News”
- Jack Morris: “Improving the robustness of natural language applications”
- Alan Nichol: “Best practices for building conversational AI applications”
[Image: trekking bridge in the Nepalese Himalaya, from pxfuel.]