What businesses need to know about model explainability

The Data Exchange Podcast: Krishna Gade on transparency and explainability in machine learning.


Subscribe: iTunes, Android, Spotify, Stitcher, Google, and RSS.

In this episode of the Data Exchange I speak with Krishna Gade, founder and CEO at Fiddler Labs, a startup focused on helping companies build trustworthy and understandable AI solutions. Prior to founding Fiddler, Krishna led engineering teams at Pinterest and Facebook.

Ray Summit has been postponed until the fall. In the meantime, enjoy an amazing series of virtual conferences beginning in mid-May on the theme “Scalable machine learning, scalable Python, for everyone”. Go to anyscale.com/events for details.

Our conversation covered a range of topics, including:

  • Krishna’s background as an engineering manager at Facebook and Pinterest.
  • Why Krishna decided to start a company focused on explainability.
  • Guidelines for companies who want to begin working on incorporating model explainability into their data products.
  • The relationship between model explainability (transparency) and security (ML that can resist adversarial attacks).
  • (A short excerpt from our conversation is below.)

Subscribe to our Newsletter:
We also publish a popular newsletter where we share highlights from recent episodes, trends in AI / machine learning / data, and a collection of recommendations.


Short excerpt:
Ben Lorica: It sounds like you’re a long-standing data engineer, and there are many aspects to an end-to-end machine learning (ML) platform: why on earth did you focus on explainability? There are so many other things you could be working on.

Krishna Gade: That’s a great question. Machine learning has been part of technology companies for the past two decades. Back in the day, when I was a search engineer at Bing, we were one of the first teams to productize search ranking. So machine learning has long been in play at companies that had large amounts of data, the compute power, and the algorithms to process that data. However, in the last few years, it has broken through into the general enterprise. You can see companies in finance, healthcare, oil and gas, and other sectors trying to deploy machine learning.

The biggest problem today with respect to machine learning is how to operationalize it: how to take this technology and embed it in business process workflows. There is a big gap between how machine learning technology works and how business owners and business leaders operate today. There’s a gap in terms of literacy. There’s a gap in terms of trust: “How do I trust this machine learning decision?” And then there’s regulation on the horizon because of increasing reports of bias in machine learning.

Companies need to be able to look at how machine learning models are built and how they actually behave once they’re deployed. Explainability becomes a lens for looking at what’s going on; it provides visibility and layers of insight so you can build trust in AI. We feel this is the last link, the missing link, for AI adoption. If we can crack it, then we can see AI adoption proliferate across the board in the enterprise. That’s why I’ve been working on explainability in AI.
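
To make the idea concrete, here is a minimal sketch of one widely used explainability technique: permutation feature importance, which measures how much a model’s accuracy drops when each input feature is randomly shuffled. This is an illustrative example only, not Fiddler’s method; the model and dataset are stand-ins, and the sketch uses scikit-learn’s permutation_importance.

```python
# Minimal sketch: permutation feature importance with scikit-learn.
# The model and dataset are illustrative stand-ins, not anything
# specific discussed in the episode.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature 10 times on held-out data and record the mean
# drop in accuracy; a large drop means the model relies on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Print the five features the model leans on most heavily.
ranked = sorted(
    zip(X.columns, result.importances_mean), key=lambda p: -p[1]
)
for name, importance in ranked[:5]:
    print(f"{name}: mean accuracy drop {importance:.4f}")
```

A global view like this answers “what does the model rely on overall?”; the “how do I trust this decision?” question above calls for local, per-prediction explanations (for example, feature attributions for a single input), which tools in this space also provide.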


[Image: “box cube empty clear glass” by Terri Oda.]