Reflections from the First AI Conference in San Francisco

Ben Lorica and Paco Nathan on LLMs, Open Source, Custom Foundation Models, and Generative AI for Audio.


Subscribe: Apple • Spotify • Overcast • Google • AntennaPod • Podcast Addict • Amazon • RSS.

In this episode, Paco Nathan and I dive into insights from the inaugural AI Conference in San Francisco (video of talks can be found here). We trace the journey from early experiments with Large Language Models (LLMs) to their current status as “empirical best practices.” We then turn to the ascendancy of open-source foundation models, highlighting their importance and core objectives. A standout topic from the conference was how to effectively convey the strengths and limitations of LLMs to decision-makers.


As AI progresses, custom foundation models and LLMs are emerging. We explore their early usage patterns and the innovative ways organizations are adopting them. Next, we touch upon the highly anticipated path to ROI in LLM applications, demystifying the economics of AI. Using retrieval augmented generation as an example, we discuss the growing importance of distributed computing and why experimentation in this arena is pivotal. Finally, we wrap up with the exciting advances in Generative AI for audio and speech.
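
For readers new to the pattern, here is a minimal sketch of the retrieval augmented generation loop mentioned above. The keyword-overlap retrieve function and the call_llm stub are hypothetical placeholders standing in for a real vector store and model API; this is not code from the talks, just an illustration of the basic flow.

# Minimal retrieval augmented generation (RAG) sketch.
# Toy keyword-overlap scoring stands in for a real vector store;
# call_llm is a placeholder for an actual model API.

DOCUMENTS = [
    "LLMs can hallucinate facts when answering from memory alone.",
    "Retrieval augmented generation grounds answers in retrieved documents.",
    "Distributed computing helps index and embed large document corpora.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Stub for a real LLM call (hosted or open-source model)."""
    return f"[model response conditioned on {len(prompt)} chars of prompt]"

def answer(query: str) -> str:
    """Retrieve context, then generate an answer grounded in it."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

if __name__ == "__main__":
    print(answer("Why does retrieval help LLMs?"))

At production scale, the retrieval step is where distributed computing enters the picture: embedding, indexing, and querying large corpora is exactly the kind of workload the episode discusses experimenting with.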


Interview highlights – key sections from the video version:


Related content:


If you enjoyed this episode, please support our work by encouraging your friends and colleagues to subscribe to our newsletter:


Images from Infogram.