Tim Davis on a new programming language for AI and an efficient, user-friendly inference engine for seamless model execution.
Subscribe: Apple • Spotify • Stitcher • Google • AntennaPod • Podcast Addict • Amazon • RSS.
Tim Davis is the Co-Founder & Chief Product Officer of Modular, a startup building tools to simplify AI infrastructure. We discuss two recent announcements:
- Mojo, a new programming language that combines the usability of Python with the performance of C, unlocking unparalleled programmability of AI hardware and extensibility of AI models. Mojo is currently accessible via a cloud-based playground where you can run existing Python code and incrementally convert parts of it to Mojo for significant performance gains (see the first sketch after this list). The language was originally created to support the development of a next-generation machine learning infrastructure stack capable of scaling machine learning workloads in novel ways.
- An inference engine that executes TensorFlow and PyTorch models with no model rewriting or conversions. It is effectively a compiler and runtime that processes the requests received from the model server and carries out the necessary computations: bring your model as-is and deploy it anywhere, across server and edge, with unparalleled usability and performance (see the second sketch after this list). Tim describes strong demand for the inference engine among enterprise customers, who commonly use Kubernetes for deployments, and explains its benefits: improved performance and efficiency, which in turn lets customers build larger models and achieve better accuracy.
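To make the "run your Python, then port the hot path" workflow concrete, here is a minimal illustrative sketch (not from the episode): the kind of naive numeric kernel you could paste into the Mojo playground unchanged and then incrementally rewrite in Mojo (typed function definitions, explicit loops over buffers) to chase C-like performance. The function name is hypothetical, and the Mojo port itself is not shown.

```python
# Plain Python kernel of the sort one might accelerate by porting to Mojo.
# Illustrative only; names are hypothetical and not from the episode.

def matmul_python(a, b):
    """Naive matrix multiply over nested Python lists."""
    rows, inner, cols = len(a), len(b), len(b[0])
    out = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for k in range(inner):
            aik = a[i][k]
            for j in range(cols):
                out[i][j] += aik * b[k][j]
    return out

if __name__ == "__main__":
    a = [[1.0, 2.0], [3.0, 4.0]]
    b = [[5.0, 6.0], [7.0, 8.0]]
    print(matmul_python(a, b))  # [[19.0, 22.0], [43.0, 50.0]]
```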
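And to illustrate what "bring your model as-is" means from the user's side, the sketch below uses only standard PyTorch export calls to produce an ordinary TorchScript artifact; per the episode, Modular's engine consumes such existing TensorFlow and PyTorch models without rewriting or conversion. The model and file name are hypothetical, and the engine-side loading API is not shown because it is not covered in these notes.

```python
# Illustrative only: a standard PyTorch export, unchanged by any Modular-specific step.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()
example = torch.randn(1, 16)
scripted = torch.jit.trace(model, example)  # trace to TorchScript
scripted.save("tiny_classifier.pt")         # artifact handed to the serving stack as-is
```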
Interview highlights – key sections from the video version:
- What is the Mojo programming language?
- Who are the target users of Mojo? What are the target use cases?
- Relation between Mojo and other software stacks like CUDA
- Why now? What is the rationale for introducing a new language like Mojo?
- Explaining Mojo to CxOs
- Fast forward a year or two: what will be the impact of Mojo on new developments in machine learning?
- Mojo and open source
- Cool apps built with Mojo
- What is Modular’s inference engine?
- Building a very fast and very efficient inference engine
- Demand for the inference engine among enterprise customers deploying on Kubernetes
- Lessons from computer vision
- Mojo: user demographics to date
Related content:
- A video version of this conversation is available on our YouTube channel.
- Andrew Feldman: The Rise of Custom Foundation Models
- Dylan Patel: The Open Source Stack Unleashing a Game-Changing AI Hardware Shift
- Building LLM-powered Apps: What You Need to Know
- Jonas Andrulis: Building and Deploying Foundation Models for Enterprises
- Percy Liang: Evaluating Language Models
- Jakub Zavrel: Uncovering and Highlighting AI Trends
- Raymond Perrault: 2023 AI Index
- Pablo Villalobos: Exhaustion of High-Quality Data Could Slow Down AI Progress in Coming Decades
- Roy Schwartz: Efficient Methods for Natural Language Processing