The State of Responsible AI

The Data Exchange Podcast: Rumman Chowdhury on prevalent practices for building ethical and responsible AI.


Subscribe: Apple • Android • Spotify • Stitcher • Google • RSS.

In this episode of the Data Exchange I speak with Dr. Rumman Chowdhury, founder of Parity, a startup building products and services to help companies build and deploy ethical and responsible AI. Prior to starting Parity, Rumman was Global Lead for Responsible AI at Accenture Applied Intelligence.

Are you using AI responsibly? Rumman Chowdhury is part of a stellar lineup speaking on December 15, 2020: join us for a series of short talks on Responsible AI. It’s free, and you can join the livestream or access the sessions on-demand.

Rumman is co-author of one of my favorite recent papers, “Where Responsible AI meets Reality: Practitioner Perspectives on Enablers for Shifting Organizational Practices”. She and her co-authors conducted an ethnographic study comprising 26 semi-structured interviews with practitioners from 19 organizations on four continents. While their focus was on fair ML, many of their observations likely carry over, at least in part, to other areas of Responsible AI:

A 2020 ethnographic study investigated the practicality of integrating Responsible AI.

I included their paper in a recent post summarizing a series of surveys meant to provide a flavor of how companies are ensuring that their AI and machine learning models are ethical, safe, transparent, and secure. As Rumman and others have pointed out, the call for Responsible AI comes at a time when companies are beginning to incorporate lessons from earlier deployments of machine learning and AI:

    But one of the things that basically every single person we interviewed mentioned is that we have to be proactive, we cannot just be reactive. So there is massive value in the post mortem. But the purpose of a post mortem isn’t just to say, “Oh, I guess that’s how we screwed up, too bad.” It’s to start creating the right kinds of methodologies and infrastructures, or in this case bias detection methods, etc., so that we do not repeat that mistake again. So it’s been incredibly valuable over the last few years to retroactively look at the mistakes that have been made. And now we’re in a position where we can confidently say: here’s where we’re seeing problems over and over, how can we start building solutions?

Subscribe to our Newsletter:
We also publish a popular newsletter where we share highlights from recent episodes, trends in AI / machine learning / data, and a collection of recommendations.

Related content and resources:


Register to join live or watch on-demand.


[Image: suspension bridge on a Himalayan trek in Nepal; photo from pxfuel.]