My Work - Lifelong Language Knowledge Distillation

EMNLP 2020 long paper

Posted by Jexus on November 1, 2020


Lifelong Language Knowledge Distillation

This is my work accepted to EMNLP 2020 as a long paper.

TL;DR

The catastrophic forgetting problem in lifelong language learning (LLL) can be mitigated if the lifelong learner continuously learns each new task from a task-specific teacher via knowledge distillation (mainly for language generation, using seq-KD and word-KD; a sketch of each follows the links below).

  • arXiv: https://arxiv.org/abs/2010.02123
  • github: https://github.com/voidism/L2KD
  • video: https://youtu.be/t3Ee5fA8mCo
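
Below is a minimal sketch of the sequence-level KD (seq-KD) step mentioned above: the teacher trained on the new task decodes pseudo-targets, which the lifelong learner can then be trained on. It assumes a HuggingFace-style `generate`/tokenizer API; the function name, beam size, and length limit are illustrative assumptions, not the exact L2KD configuration (see the GitHub repo for the real implementation).

```python
# Sketch of seq-KD data construction: decode teacher outputs on the new task's
# inputs and use them as distillation targets for the LLL student.
# Assumes a HuggingFace-style teacher model and tokenizer; details are illustrative.
import torch

@torch.no_grad()
def make_seq_kd_targets(teacher, tokenizer, inputs, max_new_tokens=64, num_beams=4):
    """Return teacher-decoded pseudo-targets for a batch of input texts."""
    enc = tokenizer(inputs, return_tensors="pt", padding=True, truncation=True)
    out = teacher.generate(
        **enc,
        max_new_tokens=max_new_tokens,
        num_beams=num_beams,  # beam search, as in standard sequence-level KD
    )
    return tokenizer.batch_decode(out, skip_special_tokens=True)
```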

Abstract

It is challenging to perform lifelong language learning (LLL) on a stream of different tasks without any performance degradation compared to the multi-task counterparts. To address this issue, we present Lifelong Language Knowledge Distillation (L2KD), a simple but efficient method that can be easily applied to existing LLL architectures in order to mitigate the degradation. Specifically, when the LLL model is trained on a new task, we assign a teacher model to first learn the new task, and then pass its knowledge to the LLL model via knowledge distillation. Therefore, the LLL model can better adapt to the new task while keeping the previously learned knowledge. Experiments show that the proposed L2KD consistently improves previous state-of-the-art models, and the degradation compared to multi-task models in LLL tasks is well mitigated for both sequence generation and text classification tasks.
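
To make the distillation step concrete, here is a minimal sketch of a word-level KD (word-KD) training loss: the student matches the teacher's softened token distributions while still fitting the hard labels. The temperature, mixing weight `alpha`, and the exact way the two terms are combined are assumptions for illustration, not necessarily the settings used in the paper.

```python
# Sketch of a word-level KD loss for training the LLL student on a new task.
# Hyperparameters (temperature, alpha) are illustrative assumptions.
import torch
import torch.nn.functional as F

def word_kd_loss(student_logits, teacher_logits, target_ids,
                 temperature=2.0, alpha=0.5):
    """Mix soft-target KL distillation with the usual hard-label token loss."""
    # Soft targets from the frozen task-specific teacher.
    soft_teacher = F.softmax(teacher_logits.detach() / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # KL term, scaled by T^2 as in standard knowledge distillation.
    kd = F.kl_div(log_soft_student, soft_teacher,
                  reduction="batchmean") * temperature ** 2
    # Hard-label cross-entropy on the ground-truth tokens.
    nll = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        target_ids.view(-1),
        ignore_index=-100,
    )
    return alpha * kd + (1 - alpha) * nll
```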