The Data Exchange Podcast: Parisa Rashidi on deep learning, Responsible AI, and MLOps in healthcare.
Subscribe: Apple • Android • Spotify • Stitcher • Google • RSS.
In this episode of the Data Exchange, I speak with Parisa Rashidi, Associate Professor at the Department of Biomedical Engineering at University of Florida. Parisa is a computer scientist and machine learning researcher who specializes in applications of ML to healthcare and biomedical domains.
Parisa and her colleagues recently published papers that describe applications of deep learning to health records and severity assessments:
- Deep EHR: A Survey of Recent Advances in Deep Learning Techniques for Electronic Health Record (EHR) Analysis
- DeepSOFA: A Continuous Acuity Score for Critically Ill Patients using Clinically Interpretable Deep Learning
Healthcare is a challenging domain, and many of the tools for building Responsible AI applications are battle-tested in this sector. Healthcare applications have to be safe, reliable, and robust; fairness, transparency, and explainability are essential; and there are strict laws pertaining to data privacy and security. Parisa described the types of testing needed before machine learning models can be deemed ready for real-world deployments in healthcare:
- ❛ So it’s very important for models that are developed in this area to go through many, many different iterations of testing and many different types of testing. So for the DeepSOFA paper we did the testing in an internal manner, which was basically: we used a large dataset that we had collected at the University of Florida, and we trained on that.
Then we do external validation: if you have access to another data set, for example, from another institution, then you do external validation to see how well your results can generalize, say, in Boston versus Florida. Most of the time, you see a decrease in performance, which is what you actually expect.
After that is the most important thing, which these days you see in only a handful of examples in the literature: prospective validation. So once you have done careful internal validation, external validation, multicenter validation, then you have to move on to prospective validation. And usually the first phase of that would be in a silent mode. So really, you’re not providing a recommendation to the physicians. You’re just observing data as it is coming in in real time and you’re making predictions, which you later compare to the actual outcome. And after that would be the actual prospective validation, which may involve actually using your system in a limited manner to make a recommendation to the physicians.
Prospective validation is difficult in that it involves additional administrative work and regulatory issues. Also, many people don’t advance to the stage where prospective validation is needed. Most of the papers that you see in the literature just do internal validation. Sometimes they might do external validation. Rarely, they might do multicenter validation. And very, very rarely, they might do prospective validation. But that’s the process that you should go through, really.
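The validation stages Parisa describes can be sketched in code. This is a minimal, hypothetical illustration — the simulated cohorts, features, and logistic-regression model are illustrative assumptions, not the DeepSOFA setup — showing internal validation (train/test split within one institution), external validation (evaluating the frozen model on another institution's data), and silent-mode prospective validation (scoring incoming records and comparing predictions to outcomes later, without surfacing recommendations):

```python
# Hypothetical sketch of the validation stages described above.
# All data here is simulated; real studies use clinical cohorts.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_cohort(n, shift=0.0):
    """Simulate a patient cohort; `shift` mimics site-to-site drift."""
    X = rng.normal(shift, 1.0, size=(n, 5))
    y = (X.sum(axis=1) + rng.normal(0.0, 1.0, n) > 5 * shift).astype(int)
    return X, y

# 1) Internal validation: train/test split within one institution.
X_home, y_home = make_cohort(2000)
X_tr, X_te, y_tr, y_te = train_test_split(X_home, y_home, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
internal_auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

# 2) External validation: score another institution's cohort with the
#    unchanged model; distribution shift often lowers performance.
X_ext, y_ext = make_cohort(2000, shift=0.5)
external_auc = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])

# 3) Silent-mode prospective validation: predict on incoming records
#    without showing recommendations, then compare to actual outcomes.
X_new, y_actual = make_cohort(200)
silent_preds = model.predict_proba(X_new)[:, 1]
silent_auc = roc_auc_score(y_actual, silent_preds)

print(f"internal AUC {internal_auc:.2f}, "
      f"external AUC {external_auc:.2f}, "
      f"silent-mode AUC {silent_auc:.2f}")
```

Multicenter validation would repeat step 2 across several institutions; the regulatory and administrative work around the prospective phase is not something code can capture.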
Subscribe to our Newsletter:
We also publish a popular newsletter where we share highlights from recent episodes, trends in AI / machine learning / data, and a collection of recommendations.
Related content and resources:
- A video version of this conversation is available on our YouTube channel.
- Bharath Ramsundar: “Deep Learning in the Sciences”
- Omer Dror: “Data exchanges and their applications in healthcare and the life sciences”
- “Data collection and data markets in the age of privacy and machine learning”
- Jian Pei: “Pricing Data Products”
- Assaf Araki and Ben Lorica: The Growing Importance of Metadata Management Systems
- Alex Wong and Sheldon Fernandez: “Towards Simple, Interpretable, and Trustworthy AI”
[Image: Cardiogram from Piqsels.]