The Data Exchange Podcast: Pete Warden on the many applications of machine learning in embedded devices.
In this episode of the Data Exchange I speak with Pete Warden, Staff Research Engineer at Google. Pete is a prolific author and teacher, and he has made important contributions to many open source projects. To name just a couple: he put together the Data Science Toolkit (open data sets and open source tools for data science), and he assembled tools to help developers get started with deep learning long before TensorFlow and PyTorch were available. Most recently, Pete has been focused on implementing machine learning in ultra-low-power systems (TinyML).
Our conversation centered on TinyML but covered other topics as well, including:
- The early days of using deep learning for computer vision
- TensorFlow: Pete was part of the team at Google that created TF
- What TinyML is, and why it will be an important topic in the years ahead
- Privacy and security in the context of TinyML
- Pete’s new book and accompanying video series on YouTube, both designed to help developers get started building TinyML applications
(Full transcript of our conversation is below.)
Ben: I’ve known you for many years, so before we jump into the main topic of this conversation, which is your new book and your screencast series, I wanted to review some highlights from your background as a way to introduce you to the audience. First of all, you were into GPUs before GPUs were hip. So, what were you doing at Apple, and were GPUs a core part of your work back then?
Pete: I actually ended up at Apple because I’d been working on open source software for visual artists using laptops at concerts or art galleries to do live video effects. This was back in the early 2000s, when laptops were only just beginning to be powerful enough to process 640 x 480 video in real time. That software caught Apple’s eye, so I joined the Final Cut team at Apple that does their pro video work. I helped build a product that’s still going strong today called Motion, which was all about running video effects and motion graphics on the GPUs in all of these laptops: taking stuff we had been doing on the CPU, everything from visual effects to red-eye removal and all of these things you want to do on video, and actually getting it running on the GPUs of the era. This was really fun because it was before CUDA or GLSL or any of these high-level languages; we were effectively writing assembler.
Ben: Oh wow. This is really hardcore.
Pete: It was, and it was so much fun. It was like going back to the 80s. I remember one ATI card I was working with limited shaders to 64 assembler-like instructions. As we were trying to fit in effects and things, we had to use a lot of old-school demo scene techniques to get these graphics cards to do things they weren’t designed to do.
Related content:
- Rajat Monga: “The evolution of TensorFlow and of machine learning infrastructure”
- Evan Sparks: “An open source platform for training deep learning models”
- Reza Zadeh: “Building large-scale, real-time computer vision applications”
- Dean Wampler: “Scalable Machine Learning, Scalable Python, For Everyone”
- Edo Liberty: “How deep learning is being used for search and information retrieval”
- Morten Dahl: “The state of privacy-preserving machine learning”
- We are beginning to release high-quality transcripts; here is a list of episodes with transcripts.
[Image: Photo by Dani Mota from Pexels.]