The Data Exchange Podcast: Sheldon Fernandez and Alex Wong on building tools to help companies operationalize machine learning and AI.
In this episode of the Data Exchange I speak with Sheldon Fernandez, CEO of DarwinAI, and Alex Wong, Professor at the University of Waterloo and co-founder of both DarwinAI (where he serves as Chief Scientist) and Euclid Labs.
DarwinAI provides tools to help companies operationalize machine learning and AI. As Alex notes, a key requirement for many companies is transparency:
- “Having looked at the entire MLOps pipeline, one of the things a lot of people have missed is that trust and transparency are critical and pivotal at many different steps in the pipeline. Right now, it’s treated as an afterthought. … Then you start discovering problems, biases, gaps, and so forth, and with trust issues, guess what: you’re going back to the beginning of your MLOps pipeline; you’re essentially starting from scratch. What we see as particularly valuable about trust and explainability is that it fits in many different locations in the MLOps pipeline, from design, to training, to deployment, all the way to monitoring. It essentially gives us entry points at each stage of the pipeline.”
We discussed trust and transparency through the lens of research from Alex Wong’s lab at the University of Waterloo, as well as Sheldon’s interactions with users across many settings and industries. Alongside the tools they are building at DarwinAI, they have been publishing a series of papers and articles on trust and transparency; here are a few examples:
- Dark AI and the Promise of Explainability
- Towards Simple, Interpretable Trust Quantification Metrics for Deep Neural Networks
- Multi-scale Trust Quantification for Financial Deep Learning
We also discussed the emergence of Responsible AI (RAI), and how some of the tools they are building can help companies address its many aspects. We recently organized a well-received webinar, Responsible AI in Practice, featuring experts in security, fair machine learning, and legal and compliance issues. It’s free, and you can still watch it on demand here.
Subscribe to our Newsletter:
We also publish a popular newsletter where we share highlights from recent episodes, trends in AI / machine learning / data, and a collection of recommendations.
Related content and resources:
- A video version of this conversation is available on our YouTube channel.
- Navigate the road to Responsible AI
- Rumman Chowdhury: “The State of Responsible AI”
- Dan Geer and Andrew Burt: “Security and privacy for the disoriented”
- Krishna Gade: “What businesses need to know about model explainability”
- Andrew Burt: “Identifying and mitigating liabilities and risks associated with AI”

[Photo by Santiago Gomez on Unsplash.]