The Data Exchange Podcast: Ofer Razon on building machine learning tools to scale AI operations.
Subscribe: iTunes, Android, Spotify, Stitcher, Google, and RSS.
In this episode[1] of the Data Exchange I speak with Ofer Razon, co-founder and CEO of Superwise, a startup focused on building tools that help companies gain more visibility into, and control over, machine learning models in production. Ofer and Superwise are part of a community in the early stages of building tools and best practices for scaling AI operations. The goal is to give multiple stakeholders the solutions they need to evaluate models, receive timely alerts, troubleshoot issues quickly, validate and observe models, and gather insights that make operations more efficient. AI assurance will ultimately bring together different parts of an organization, including business, data science, and operational teams, legal and compliance, and privacy and security.
A few years ago I gave a keynote on building machine learning tools to help monitor and manage machine learning models. The scenario I had in mind was one where a company found itself with lots of models that needed monitoring, testing, and tuning. We are beginning to see new tools (like Superwise) that do this and more: in the case of Superwise, their tools also monitor features and related data quality issues.
Another topic I raised a few years ago was risk – specifically, managing risk in machine learning. AI assurance concepts and tools fit nicely in this context. I have long believed that “risk” is a useful umbrella for grouping areas that include safety and reliability, privacy and security, fairness and bias, transparency, and reproducibility. As I noted in a previous post, companies in financial services and healthcare have long emphasized the need to manage risk and to ensure safety and reliability, and ML teams can learn from what analytics teams in these sectors have done.
The emerging area of AI assurance includes practical capabilities for extracting timely and understandable insights about machine learning models. It will also require tools that bridge data science and operational teams and give them a common language.
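To make that concrete, here is a minimal sketch of the kind of check such tools automate: a population stability index (PSI) that flags when a production feature’s distribution drifts away from its training baseline. This is not Superwise’s product or API; the feature, data, and alert threshold below are hypothetical.

```python
import numpy as np

def population_stability_index(baseline, production, bins=10):
    """Compare two samples of a numeric feature; a higher PSI means more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    # Avoid log(0) and division by zero for empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - base_pct) * np.log(prod_pct / base_pct)))

# Hypothetical data: training-time feature values vs. last week's production traffic.
rng = np.random.default_rng(0)
train_ages = rng.normal(35, 8, 10_000)
prod_ages = rng.normal(42, 10, 5_000)  # the production population has shifted older
psi = population_stability_index(train_ages, prod_ages)
if psi > 0.2:  # a common rule-of-thumb threshold, not a universal one
    print(f"ALERT: feature drift detected (PSI={psi:.2f})")
```

In practice, assurance platforms run checks like this continuously across every feature and prediction stream and route the resulting alerts to both data science and operational teams.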
Our conversation covered a range of topics including:
- A description of two nascent fields: MLOps and AI assurance.
- What insights and metrics AI assurance tools should provide.
- Best practices for adopting MLOps and AI assurance, including when it makes sense for a company to begin investigating tools in these areas.
- Why simple metrics and traditional monitoring tools are not sufficient for machine learning and AI models.
[A video version of this conversation is available on our YouTube channel.]
Download a complete transcript of this episode by filling out the form below:
Short excerpt:
Ben: What is AI Assurance, and why do you think it’s so important?
Ofer: Let’s start from the top. AI Assurance is all about ensuring that your AI operates in an optimal and risk-free manner over time. That is the objective. The statement, “optimal and risk-free operation of AI at scale over time,” comprises many things. One is to monitor your models to make sure they act as expected and don’t have any performance pitfalls, whether global or local; maybe there’s a specific audience, a specific subgroup of your data, or a specific segment of your customers where your model is not performing as well as you expect. That’s one category. The second category is the ability to have the observability and transparency you want: a set of model analytics capabilities that lets you really understand how your model operates, whether it’s the data it’s processing, the inference of the model, or the performance, and then the ability to slice and dice it across different subgroups to identify weak spots in your model. The third thing is something you mentioned: bias and fairness. A lot of the issues around bias have to do with the way you develop and build your datasets during the development of your models. But how do you make sure that your bias levels, or unbiased levels, remain in your safety zone when the model goes live? You no longer have control over the data and over the feedback that’s going to come back.
Ben: You come from a company that deployed a lot of ML models. Who are the stakeholders for an AI Assurance tool? Is it just the technical people? Is it just data scientists? The idea here is that as AI and machine learning cut across so many products and systems, you’re going to need tools that can be used by cross-functional teams, right?
Ofer: We see three stakeholders. First, the data science and data engineering teams, which pretty much fall under MLOps. As part of the overall process of developing, deploying, running, and monitoring models, you need assurance capabilities to make sure your model performs well and that you have visibility into how it operates in production.
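As a concrete illustration of the “slice and dice” analysis Ofer describes, here is a minimal sketch (again, not Superwise’s API) that computes a performance metric per customer segment, so a locally underperforming subgroup isn’t hidden behind a healthy global number. The segment names, column names, and data are hypothetical.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def segment_performance(df, segment_col, label_col="label", score_col="score"):
    """Return AUC per segment alongside the global AUC."""
    rows = [{"segment": "GLOBAL",
             "auc": roc_auc_score(df[label_col], df[score_col]),
             "n": len(df)}]
    for seg, grp in df.groupby(segment_col):
        if grp[label_col].nunique() < 2:  # AUC is undefined when only one class is present
            continue
        rows.append({"segment": seg,
                     "auc": roc_auc_score(grp[label_col], grp[score_col]),
                     "n": len(grp)})
    return pd.DataFrame(rows)

# Hypothetical production log: model scores, delayed ground-truth labels, and a segment.
log = pd.DataFrame({
    "score":   [0.9, 0.2, 0.8, 0.4, 0.7, 0.1, 0.6, 0.3],
    "label":   [1,   0,   1,   0,   0,   0,   1,   1],
    "segment": ["new_customer", "new_customer", "returning", "returning",
                "new_customer", "returning", "returning", "new_customer"],
})
print(segment_performance(log, "segment"))
# Alert when a segment's AUC falls well below the global AUC.
```

A report like this is one way to surface the local performance pitfalls and subgroup weak spots mentioned above once ground-truth labels arrive.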
Related content:
- Andrew Burt: “Identifying and mitigating liabilities and risks associated with AI”
- Ameet Talwalkar: “Democratizing Machine Learning”
- Amy Heineike: “Machines for unlocking the deluge of COVID-19 papers, articles, and conversations”
- Matthew Honnibal: “Building open source developer tools for language applications”
- Pete Warden: “Why TinyML will be huge”
[1] This post and episode are part of a collaboration between Gradient Flow and Superwise. See our statement of editorial independence.
[Image: from pikist]