The Data Exchange Podcast: Jack Morris on adversarial attacks, data augmentation, and adversarial training in NLP.
Subscribe: Apple • Android • Spotify • Stitcher • Google • RSS.
In this episode of the Data Exchange I speak with Jack Morris, a member of Google’s AI Residency program. He is co-creator of
TextAttack, an open source framework for adversarial attacks, data augmentation, and adversarial training in NLP (paper, code).
Adversarial examples are inputs crafted to fool a machine learning model. In recent years, adversarial attacks against computer vision models have been covered in numerous media articles. Similar attacks have surfaced for NLP models, and a series of research projects has been dedicated to generating adversarial examples and defending against them. In fact, adversarial attacks against language applications are an active research area; here are some recent examples of attacks against language models:
So how exactly does one mount an attack against a language model? In computer vision, one can attempt to fool a model by manipulating a few pixels or frames. While such attacks are harder to mount against text, Jack described some of the general ways one might attack a language model:
- You brought up words and letters. … There are two branches for those types of attacks: the first tries to find word and phrase replacements that make sense in context (so-called preservation of semantics); the second uses character-level changes. It turns out that for shorter inputs, changing a few characters in a few words is enough to confuse many state-of-the-art NLP models. This is an issue for chatbots.
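The character-level branch Jack describes can be illustrated with a toy perturbation: introduce small adjacent-character swaps (typos a human reads past) into a few words. This is a minimal sketch, not TextAttack's actual implementation.

```python
import random

def perturb_chars(text, n_swaps=2, seed=0):
    """Swap adjacent characters inside a few randomly chosen words.

    A toy illustration of character-level attacks: tiny typos that
    preserve readability but can shift a model's prediction.
    """
    rng = random.Random(seed)
    words = text.split()
    # Only perturb words long enough that a swap is plausible.
    candidates = [i for i, w in enumerate(words) if len(w) > 3]
    for i in rng.sample(candidates, min(n_swaps, len(candidates))):
        chars = list(words[i])
        j = rng.randrange(len(chars) - 1)
        chars[j], chars[j + 1] = chars[j + 1], chars[j]  # adjacent swap
        words[i] = "".join(chars)
    return " ".join(words)

print(perturb_chars("the service at this restaurant was absolutely wonderful"))
```

A real attack would query the target model after each perturbation and keep only the typos that change its output.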
TextAttack unifies adversarial attack methods into a single framework. Its creators decompose NLP attacks into a goal function, a set of constraints, a transformation, and a search method.
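The four-part decomposition can be sketched in plain Python. All names below are illustrative stand-ins, not TextAttack's actual API: the "model" is a keyword spotter, and the attack flips its label by swapping in a synonym it doesn't know.

```python
# Toy instance of the goal-function / constraints / transformation /
# search decomposition. Every name here is illustrative.

SYNONYMS = {"wonderful": ["pleasant", "lovely"], "love": ["adore"]}
KEYWORDS = {"wonderful", "love"}

def model(text):
    """Stand-in classifier: 'positive' iff a known keyword appears."""
    return "positive" if KEYWORDS & set(text.split()) else "negative"

def goal_function(text):            # 1. goal: flip the label to negative
    return model(text) == "negative"

def constraint(orig, perturbed):    # 2. constraint: change at most one word
    diffs = sum(a != b for a, b in zip(orig.split(), perturbed.split()))
    return diffs <= 1

def transformation(text):           # 3. transformation: synonym swaps
    words = text.split()
    for i, w in enumerate(words):
        for syn in SYNONYMS.get(w, []):
            yield " ".join(words[:i] + [syn] + words[i + 1:])

def search(text):                   # 4. search: greedy over candidates
    for candidate in transformation(text):
        if constraint(text, candidate) and goal_function(candidate):
            return candidate
    return None

print(search("the food was wonderful"))  # a synonym swap fools the model
```

Swapping each component independently (e.g. a different search method with the same constraints) is exactly the modularity the TextAttack paper argues for.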
Developers can use TextAttack to test attacks on models and datasets. It ships with dozens of pretrained models, integrates with Hugging Face Transformers, and supports many tasks, including summarization, machine translation, and all nine tasks in the GLUE benchmark.
Subscribe to our Newsletter:
We also publish a popular newsletter where we share highlights from recent episodes, trends in AI / machine learning / data, and a collection of recommendations.
Related content and resources:
- A video version of this conversation is available on our YouTube channel.
- Download the 2020 NLP Survey Report and learn how companies are using and implementing natural language technologies.
- Marco Ribeiro: “Testing Natural Language Models”
- Ram Shankar: “Securing machine learning applications”
- Krishna Gade: “What businesses need to know about model explainability”
- Xiyin Zhou: “Detecting Fake News”
- Weifeng Zhong: “Using machine learning to detect shifts in government policy”
- Alan Nichol: “Best practices for building conversational AI applications”
[Image by Ben Lorica, from original artwork by John Patrick McKenzie.]