Towards Simple, Interpretable, and Trustworthy AI

The Data Exchange Podcast: Sheldon Fernandez and Alex Wong on building tools to help companies operationalize machine learning and AI.


Subscribe: Apple • Android • Spotify • Stitcher • Google • RSS.

In this episode of the Data Exchange I speak with Sheldon Fernandez, CEO of DarwinAI, and Alex Wong, Professor at the University of Waterloo and Co-Founder of DarwinAI (where he is Chief Scientist) and Euclid Labs.

Download the 2021 Business At The Speed Of AI Report and learn how leading companies are using and implementing data and machine learning technologies.

DarwinAI provides tools to help companies operationalize machine learning and AI. As Alex notes, a key requirement for many companies is transparency:

    ❛ Having looked at the entire MLOps pipeline, one of the things a lot of people have missed is the notion that trust, as well as transparency, is actually critical and pivotal at many different steps in the MLOps pipeline. Right now, it’s kind of treated as an afterthought. … Then you start discovering problems and biases, gaps, and so on and so forth. And with trust issues, guess what: you’re going back to the beginning of your MLOps pipeline; you’re essentially starting from scratch. So where we see particular value in trust, as well as explainability, is that it actually fits in many different locations in the MLOps pipeline, from design, to training, to deployment, all the way to monitoring. It allows us, essentially, to have entry points at each point in the pipeline.
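To make Alex’s point a bit more concrete, here is a minimal sketch, in plain Python, of what trust checks attached to every pipeline stage might look like. This is our illustration, not DarwinAI’s API: the Pipeline class, stage names, and the bias-gap check are all hypothetical stand-ins for the kinds of checks (bias audits, explanation coverage, drift monitors) a team might register at each stage instead of treating trust as an afterthought.

```python
# Illustrative sketch only: hypothetical trust checks attached to each
# stage of an MLOps pipeline, rather than bolted on at the end.
from typing import Callable, Dict, List

class Pipeline:
    """A toy MLOps pipeline whose stages each run their own trust checks."""

    STAGES = ["design", "training", "deployment", "monitoring"]

    def __init__(self) -> None:
        # Map each stage to its registered trust/transparency checks.
        self.checks: Dict[str, List[Callable[[dict], bool]]] = {
            stage: [] for stage in self.STAGES
        }

    def add_check(self, stage: str, check: Callable[[dict], bool]) -> None:
        """Register a trust check (e.g., a bias audit) for a given stage."""
        self.checks[stage].append(check)

    def run_stage(self, stage: str, artifacts: dict) -> bool:
        """Run all checks for a stage; catching a failure here avoids
        restarting the whole pipeline after a deployment-time surprise."""
        return all(check(artifacts) for check in self.checks[stage])

# Example: a training-time check that flags a large accuracy gap
# between two subgroups (a stand-in for a real bias audit).
pipeline = Pipeline()
pipeline.add_check(
    "training",
    lambda a: abs(a["acc_group_a"] - a["acc_group_b"]) < 0.05,
)
print(pipeline.run_stage("training", {"acc_group_a": 0.91, "acc_group_b": 0.89}))
```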

We discussed trust and transparency through the lens of research from Alex Wong’s lab at the University of Waterloo, as well as Sheldon’s interactions with users across many settings and industries. In conjunction with the tools they are building at DarwinAI, they have been publishing a series of papers and articles on trust and transparency; here are a few examples:

We also discussed the emergence of Responsible AI (RAI), and how some of the tools they are building can help companies address the many aspects of RAI. We recently organized a well-received webinar – Responsible AI in Practice – featuring experts in security, fair ML, and legal and compliance issues. It’s free and you can still watch it on-demand here.

Subscribe to our Newsletter:
We also publish a popular newsletter where we share highlights from recent episodes, trends in AI / machine learning / data, and a collection of recommendations.

Related content and resources:




[Photo by Santiago Gomez on Unsplash.]