Barret Zoph and Liam Fedus on recent trends and challenges in NLP and Large Language Models.
Subscribe: Apple • Android • Spotify • Stitcher • Google • AntennaPod • RSS.
This week’s guests are Barret Zoph and Liam Fedus, research scientists at Google Brain. Our conversation centered on Large Language Models (LLMs), specifically recent work by Barret, Liam, and their collaborators on efficient scaling of large language models. The recent announcement of a 540-billion parameter model trained with Google’s Pathways system suggests that researchers are starting to focus on tools and techniques for building LLMs more efficiently. In this episode, Barret and Liam explain the current state of LLMs, key challenges, and emerging trends.
Here are some recent papers (co-authored by Barret, Liam, and others) that we allude to in this episode:
- Designing Effective Sparse Expert Models
- GLaM: Efficient Scaling of Language Models with Mixture-of-Experts
Highlights in the video version:
- Introduction to Liam Fedus and Barret Zoph and to Google Brain
- Scaling language models
- Efficiency metrics and magnitudes of improvement
- Are these language models multilingual?
- Models and specialized training for setup
- Language Models for data scientists and other practitioners
- API and multistage pipelines
- What is doable and practical in multimodal models?
- What are one or two things you are most excited about?
- A video version of this conversation is available on our YouTube channel.
- Jack Clark: The 2022 AI Index
- Yoav Shoham: Making Large Language Models Smarter
- What Is Graph Intelligence?
- Resurgence of Conversational AI
- Navigate the road to Responsible AI
- Connor Leahy and Yoav Shoham: Large Language Models
[Image: Text Mining by Ben Lorica, using images from Infogram.]