Jack Clark on recent progress in deep learning, the rise of AI ethics, and what lies ahead for language models.
Subscribe: Apple • Android • Spotify • Stitcher • Google • AntennaPod • RSS.
Jack Clark is co-director of the AI Index Steering Committee. In this episode we discuss key findings of the fifth edition of the AI Index. The report uses multiple metrics (benchmarks, publications, patents, legislation, etc.) to track progress in AI (mainly deep learning) in key areas that include computer vision, speech recognition, and language models. This year the report contains a chapter dedicated to AI Ethics where they focus on new benchmarks and metrics that have been developed to measure bias in AI systems.
Jack Clark:
The approach we’re currently taking as a research community is to try to get models not to generate toxic content (“detoxification”). I think that this approach does not work in the long term. It looks a bit like training humans that there are certain things they can say and certain things they can’t say. Obviously, if you’re deploying a commercial service, you can’t have a language model generate racial slurs. The move towards detoxification has actually created a counter-reaction, where some AI researchers are trying to train so-called uncensored models in response. … I think of it as like liberals versus libertarians.
Highlights in the video version:
- Introduction to Jack Clark
- Key metrics used to support the findings
- How important is it for researchers to follow benchmarks in ML?
- ML benchmarks
- Qualitative evaluation metrics
- RoboCup and multiplayer games
- Large language model coding assistants
- Which models can be productionized? Which models are practical?
- Speech technologies
- Industry and academia
- What are your favorite sections of the report?
- What is the general approach to addressing toxicity in large language models?
- Models that use different types of inputs
- Green AI and Responsible AI
- Robotics
Related content:
- A video version of this conversation is available on our YouTube channel.
- My short post on the 2022 Index: “Most State-Of-The-Art AI Systems Are Trained With Extra Data”
- Yoav Shoham: Making Large Language Models Smarter
- Elham Tabassi and Andrew Burt: The New AI Risk Management Framework from NIST
- The AI $100M Revenue Club
- What Is Graph Intelligence?
- Resurgence of Conversational AI
- Navigate the road to Responsible AI
FREE report:
[Image: “The 2022 AI Index” by Ben Lorica, using images from Infogram and the AI Index Report.]