Ben Lorica and Paco Nathan on LLMs, Open Source, Custom Foundation Models, and Generative AI for audio.
Subscribe: Apple • Spotify • Overcast • Google • AntennaPod • Podcast Addict • Amazon • RSS.
In this episode, Paco Nathan and I dive into insights from the inaugural AI Conference in San Francisco (video of talks can be found here). We trace how early experiments with Large Language Models (LLMs) are maturing into “empirical best practices.” We also examine the rise of open-source foundation models, highlighting their importance and core objectives. A standout topic from the conference was how to effectively convey the strengths and limitations of LLMs to decision-makers.
As AI progresses, custom foundation models and custom LLMs are emerging. We explore their early usage patterns and the innovative ways organizations are adopting them. We then turn to the much-anticipated path to ROI in LLM applications, demystifying the economics of AI. Using retrieval augmented generation as an example, we discuss the growing importance of distributed computing and why experimentation in this arena is pivotal. Finally, we wrap up with the exciting advances in Generative AI for Audio and Speech.
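The retrieval augmented generation pattern mentioned above can be sketched in a few lines: fetch the document most relevant to a query, then feed it to the model as context. This is a minimal illustration with a toy corpus and bag-of-words cosine similarity — the corpus, query, and scoring are illustrative stand-ins, and a real system would use vector embeddings, a document store, and an actual LLM call:

```python
# Toy sketch of a retrieval augmented generation (RAG) pipeline:
# retrieve the document most similar to a query, then prepend it
# as context to the prompt sent to an LLM.
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    # Lowercase bag-of-words token counts.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two token-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str]) -> str:
    # Return the corpus document most similar to the query.
    q = tokenize(query)
    return max(corpus, key=lambda doc: cosine(q, tokenize(doc)))

# Hypothetical two-document corpus for illustration.
corpus = [
    "Ray is a framework for scaling Python workloads across a cluster.",
    "Generative audio models synthesize natural-sounding speech from text.",
]
query = "How do I scale Python jobs?"
context = retrieve(query, corpus)
# In a real pipeline this prompt would go to an LLM API.
prompt = f"Context: {context}\n\nQuestion: {query}"
print(prompt)
```

The retrieval and generation steps are independent, which is why distributed computing matters here: embedding and searching large corpora parallelizes naturally across a cluster.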
Interview highlights – key sections from the video version:
- Early experiments with LLMs are translating into “empirical best practices”
- Demographics of attendees at the AI Conference
- Open source foundation models: what and who
- LLMs: the challenge of explaining their strengths and limitations to decision-makers
- Custom foundation models and custom LLMs: early usage patterns
- The path to ROI in LLM Applications
- Distributed computing and the importance of experimentation
- Developments in Generative AI for Audio and Speech
Related content:
- A video version of this conversation is available on our YouTube channel.
- Philipp Moritz and Goku Mohandas: Navigating the Nuances of Retrieval Augmented Generation
- Open Source Principles in Foundation Models
- Building a Fleet of Custom LLMs
- Ivy: Streamlining AI Model Deployment and Development
- Daniel Lenton: Ivy – The One-Stop Interface for AI Model Deployment and Development
- Michele Catasta: Software Development with AI and LLMs
- Brian Raymond: ETL for LLMs
- Jerry Liu: An Open Source Data Framework for LLMs
If you enjoyed this episode, please support our work by encouraging your friends and colleagues to subscribe to our newsletter.
Images from Infogram.