Connor Leahy and Yoav Shoham on recent developments in NLP and potential applications of large language models.
Subscribe: Apple • Android • Spotify • Stitcher • Google • AntennaPod • RSS.
This episode features conversations with two experts who have helped train and release models that can recognize, predict, and generate human language from very large text-based data sets. These large language models have shown impressive new abilities to generate natural language and are starting to be used in applications ranging from computer programming to chatbots and writing. First is an excerpt of my conversation with Connor Leahy, AI Researcher at Aleph Alpha GmbH and a founding member of EleutherAI (pronounced “ee-luther”), a collective of researchers and engineers building resources and models for people who work on natural language models. Next is an excerpt from a recent conversation with Yoav Shoham, co-founder of AI21 Labs, creators of the largest language model available to developers. Yoav is also a Professor Emeritus of Computer Science at Stanford University and a serial entrepreneur who has co-founded numerous data and AI startups.
Highlights from our recent episode featuring Yoav Shoham:
AI21 Labs, Training a Billion-Parameter Model, and Large Language Models Available to Developers
State of AI in the 1980s, 1990s, and Today
Impact on Language, Object Recognition, Transformers, and Academic Benchmarks
Natural Language Understanding and Prompt Engineering
Injecting Structured Knowledge, Semantics, and Priors into Networks
Will language models become more manageable over time?
WordTune.com
LLMs: Size Will Matter, AI21 Labs’ Jurassic Language Model, and Using Large and Small Models
Tuning Language Models: Training Data and Tuning Setup
NLP: Benchmarks and Performance
Academia vs. Industry, Resources, and Engineering Talent
NeurIPS and Deep Learning
Neural Networks, Reinforcement Learning, and Neuro-Symbolic Programming
Theoretical Computer Scientists, Understanding How Things Work, and What Else Do We Know?
Is creativity hard to measure? Can computers think, have feelings, have free will?
Summarization, Benchmarks and Evaluation Methods
NLP, Tuning, and Thoughts on Transfer Learning
Multimodal Systems, Copilot, and Other Coding Assistants
What will AI21 Labs focus on moving forward?
What’s your advice to someone who wants to get a PhD in NLP?
Related content:
- “Resurgence of Conversational AI”
- Yoav Shoham: “Making Large Language Models Smarter”
- Connor Leahy: “Training and Sharing Large Language Models”
- Matthew Honnibal: “Building open source developer tools for language applications”
- Alan Nichol: “Best practices for building conversational AI applications”
- Lauren Kunze: “How to build state-of-the-art chatbots”
Subscribe to our Newsletter:
We also publish a popular newsletter where we share highlights from recent episodes, trends in AI / machine learning / data, and a collection of recommendations.