Efficient Scaling of Language Models

Barret Zoph and Liam Fedus on recent trends and challenges in NLP and Large Language Models.

Subscribe: Apple • Android • Spotify • Stitcher • Google • AntennaPod • RSS.

This week’s guests are Barret Zoph and Liam Fedus, research scientists at Google Brain. Our conversation centered on Large Language Models (LLMs), specifically recent work by Barret, Liam, and their collaborators on scaling large language models efficiently. The recent announcement of a 540-billion-parameter model trained with Google’s Pathways system suggests that researchers are starting to focus on tools and techniques for building LLMs more efficiently. In this episode, Barret and Liam explain the current state of LLMs, key challenges, and emerging trends.

Download the 2021 NLP Survey Report and learn how companies are using and implementing natural language technologies.

Here are some recent papers (co-authored by Barret, Liam, and others) that we allude to in this episode:


Highlights in the video version:

Related content:

FREE report:

[Image: Text Mining by Ben Lorica, using images from Infogram.]