Improving Language Understanding by Generative Pre-Training – NLPIR Natural Language Processing and Information Retrieval Shared Platform


Improving Language Understanding by Generative Pre-Training

NLPIR SEMINAR Y2019#19

INTRO

In the new semester, our lab, the Web Search Mining and Security Lab, plans to hold an academic seminar every Monday. Each time, a keynote speaker will share their understanding of papers related to their research.

Arrangement

This week’s seminar is organized as follows:

  1. The seminar will be held at 1 p.m. on Monday at Zhongguancun Technology Park, Building 5, Room 1306.
  2. The lecturer is Qinghong Jiang, and the paper's title is Improving Language Understanding by Generative Pre-Training.
  3. Zhaoyou Liu will give a presentation on his work.
  4. The seminar will be hosted by Ziyu Liu.
  5. The paper for this seminar is attached; please download it in advance.

Everyone interested in this topic is welcome to join us. The following is the abstract of this week's paper.

Improving Language Understanding by Generative Pre-Training

Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever

Abstract

Natural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).
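The "task-aware input transformations" mentioned in the abstract convert structured inputs (premise–hypothesis pairs, question–answer candidates) into ordered token sequences the pre-trained language model can process, so fine-tuning needs almost no architectural changes. A minimal sketch of these transformations follows; the special tokens mirror the paper's start, delimiter, and extract tokens, while the function names and list-of-strings token representation are our own illustrative choices, not the paper's code.

```python
# Sketch of GPT-style task-aware input transformations
# (after Radford et al., "Improving Language Understanding by
# Generative Pre-Training"). Tokens are represented as plain strings
# for illustration; a real implementation would use BPE token IDs.

START, DELIM, EXTRACT = "<s>", "<$>", "<e>"  # start / delimiter / extract tokens

def entailment_input(premise, hypothesis):
    """Textual entailment: premise and hypothesis joined by a delimiter."""
    return [START] + premise + [DELIM] + hypothesis + [EXTRACT]

def similarity_inputs(text_a, text_b):
    """Similarity has no inherent ordering of the two texts, so both
    orderings are encoded; their final representations are added
    element-wise before the linear output layer."""
    return [entailment_input(text_a, text_b),
            entailment_input(text_b, text_a)]

def multiple_choice_inputs(context, answers):
    """QA / commonsense reasoning: the context is paired with each
    candidate answer; a softmax over the per-candidate scores from the
    linear layer selects the answer."""
    return [entailment_input(context, answer) for answer in answers]
```

In every case, the representation at the extract token's position is fed to a single added linear layer, which is the only new parameters introduced at fine-tuning time.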

NLPIR SEMINAR 32nd ISSUE COMPLETED

Last Monday, Qinghong Jiang gave a presentation on the paper Improving Language Understanding by Generative Pre-Training and shared some opinions on it.

[Figure: zero-shot transfer results]

This was a state-of-the-art language representation method before BERT. Visit https://openai.com/blog/language-unsupervised/ for more information.


About the Author: nlpvv
