Moshe Wasserblat on transfer learning, active learning, and other tools to help non-experts customize and fine-tune NLP models.
Subscribe: Apple • Android • Spotify • Stitcher • Google • AntennaPod • RSS.
Moshe Wasserblat is a Senior Principal Engineer at Intel, where he serves as a Research Manager focused on NLP and Deep Learning. Moshe and his team publish papers at leading academic conferences, but they also build tools for users within and beyond Intel. They’ve recently been building tools to help analysts and other non-expert NLP users fine-tune, test, and customize NLP models for their specific domains and use cases.
Download the 2021 NLP Survey Report and learn how companies are using and implementing natural language technologies.
Moshe Wasserblat:
In a customer environment, you have a data science team that is expert in optimizing models. But when you go to production, you have business data analysts and other non-experts, and they are the ones truly working with the system. You need to provide some simple tools for non-experts so that they can actually fine-tune models. … Basically, it’s Active Learning for non-experts. This is quite powerful, because you empower the business owner to improve the system’s performance and build the ontology for the domain.
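The workflow Moshe describes, where a non-expert labels only the examples the model is least sure about and the model retrains on them, is essentially an uncertainty-sampling active learning loop. Below is a minimal sketch of that idea using scikit-learn; the TF-IDF features, logistic regression model, and batch sizes are illustrative assumptions, not a description of Intel’s actual tooling.

```python
# Minimal active-learning loop with uncertainty sampling (illustrative sketch,
# not Intel's actual tooling). A text classifier starts from a small labeled
# seed; each round it surfaces the pool examples it is least confident about
# for the analyst to label, then retrains on the enlarged labeled set.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def active_learning_loop(texts, labels, seed_size=20, batch_size=10, rounds=5):
    vec = TfidfVectorizer()
    X = vec.fit_transform(texts)
    labeled = list(range(seed_size))                # indices already labeled
    unlabeled = list(range(seed_size, len(texts)))  # pool the analyst hasn't seen

    for _ in range(rounds):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[labeled], [labels[i] for i in labeled])

        # Uncertainty sampling: pick the pool examples whose top-class
        # probability is lowest.
        probs = clf.predict_proba(X[unlabeled])
        uncertainty = 1.0 - probs.max(axis=1)
        query = np.argsort(-uncertainty)[:batch_size]
        picked = [unlabeled[i] for i in query]

        # In a real tool the analyst would label `picked` in a UI here;
        # this sketch simulates that by reusing the gold labels.
        labeled.extend(picked)
        unlabeled = [i for i in unlabeled if i not in set(picked)]

    return clf, vec
```

The design choice that makes this approachable for non-experts is that the loop, not the analyst, decides which examples are worth labeling; the analyst only answers simple labeling questions, and each round of a few labels tends to buy more accuracy than labeling randomly chosen examples.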
Highlights in the video version:
- Introduction to Moshe Wasserblat, NLP and Deep Learning Research Manager at Intel
- NLP projects you and your team are working on
- How you break down your time between research and tools
- Key challenges facing researchers and practitioners
- How does your fine-tuning tool work for business analysts?
- How many examples are needed to reach reasonable performance
- Open-source libraries and NLP Architect
- What challenges are you seeing?
- Data augmentation
- What’s the state of tools for developers?
- Pre-trained models, quantization, fine-tuning, and optimization
- Deployment, from massive clusters to mobile phones
- Have you explored AutoML for NLP?
- Declarative interfaces and deployment
- NLP applications: speech recognition and conversational AI
- What is the gold-standard dataset for summarization?
- Large language models and zero-shot learning
- Models for specific tasks: task descriptions, document classification, and entity recognition
- How do you democratize research if it’s all about size?
- Large language models in multiple languages and their ability to write short stories
- The NLP community’s views on responsible AI and testing
- What tools will be available to developers by the end of 2022?
- What are you hearing a lot about on the research community side?
Related content:
- A video version of this conversation is available on our YouTube channel.
- FREE Report: “2022 Trends Report: Data, Machine Learning, and AI”
- “What is Graph Intelligence?”
- Yoav Shoham: “Making Large Language Models Smarter”
- Mike Tung: “Applications of Knowledge Graphs”
- Azeem Ahmed: “Data and Machine Learning Platforms at Shopify”
- Che Sharma: “Modern Experimentation Platforms”
- “Taking Low-Code and No-Code Development to the Next Level”