Yaron Singer on building tools to guard against and reduce model failures.
Subscribe: Apple • Android • Spotify • Stitcher • Google • AntennaPod • RSS.
Yaron Singer is the CEO of Robust Intelligence[1], a company building tools to help manage and mitigate risks associated with machine learning models and applications. They are specifically creating solutions that integrate seamlessly into the ML lifecycle to guard against and reduce model failures.
There’s growing interest in tools and processes that inject software engineering rigor into how AI applications get built and deployed. I’ve seen increased interest in tools for model validation and testing, more rigorous development processes, and better documentation. The emerging discipline of AI Engineering signals interest in formalizing how AI systems are designed and built.

Highlights in the video version:
- Introduction to Yaron Singer
- Model training, software engineering, and stress testing
- Tools for testing and a checklist for fairness/responsible AI
- What to feed into the system once a model has been trained?
- Monitoring tools, tracing, and NLP model testing
- Companies in regulated industries and compliance
- Model inversion/extraction
- The future of ML testing and industrial AI
- Model detoxification and controls for language models
Related content:
- A video version of this conversation is available on our YouTube channel.
- Model Monitoring Enables Robust Machine Learning Applications
- Large Image Datasets Today Are a Mess
- Oren Razon: Machine Learning Model Observability
- Christopher Nguyen: What is AI Engineering?
- Danny Bickson and Amir Alush: Data Infrastructure for Computer Vision
- A Guide to Data Annotation and Synthetic Data Generation Tools
- Machine Learning Trends You Need to Know
If you enjoyed this episode, please support our work by encouraging your friends and colleagues to subscribe to our newsletter.
[1] This post and episode are part of a collaboration between Gradient Flow and Robust Intelligence. See our statement of editorial independence.