Efficient Methods for Natural Language Processing

Roy Schwartz on maximizing limited resources when building large language models.

Subscribe: Apple • Spotify • Stitcher • Google • AntennaPod • Podcast Addict • RSS.

Roy Schwartz is a Professor of Natural Language Processing at The Hebrew University of Jerusalem. We discussed a recent survey paper (co-written by Roy) that presents a broad overview of existing methods for improving NLP efficiency, organized through the lens of the traditional NLP pipeline. Our conversation covered the following key areas:

  • Data: using fewer training instances, or making better use of those available, can increase efficiency.
  • Model design
  • Training: pre-training and fine-tuning.
  • Inference and compression
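As a toy illustration of the inference and compression theme, the sketch below shows post-training quantization, one common technique: float32 weights are mapped to 8-bit integers plus a single scale factor, cutting storage roughly 4x at a small accuracy cost. This is a minimal sketch for intuition; the helper names are our own, not from the episode or the survey.

```python
# Toy symmetric post-training quantization: map float weights to signed
# 8-bit integers and back. Illustrative only, not the survey's code.

def quantize(weights, num_bits=8):
    """Quantize a list of floats to signed integers with one scale factor."""
    qmax = 2 ** (num_bits - 1) - 1                 # 127 for int8
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized values."""
    return [qi * scale for qi in q]

weights = [0.31, -1.20, 0.07, 0.95, -0.44]
q, scale = quantize(weights)
restored = dequantize(q, scale)

# Each restored weight is within half a quantization step of the original.
assert all(abs(w - r) <= scale / 2 + 1e-9 for w, r in zip(weights, restored))
```

In practice libraries apply this per layer (or per channel) and store the integers in int8 tensors, which is where the memory and latency savings come from.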


Figure: Taxonomy of efficient NLP methods.



If you enjoyed this episode, please support our work by encouraging your friends and colleagues to subscribe to our newsletter:

[Image: Cuypers Library – Inside the Rijksmuseum by Ben Lorica.]