{"id":6970,"date":"2019-06-16T20:57:05","date_gmt":"2019-06-16T12:57:05","guid":{"rendered":"http:\/\/www.nlpir.org\/wordpress\/?p=6970"},"modified":"2019-06-30T21:24:11","modified_gmt":"2019-06-30T13:24:11","slug":"improving-language-understanding-by-generative-pre-training","status":"publish","type":"post","link":"http:\/\/www.nlpir.org\/wordpress\/2019\/06\/16\/improving-language-understanding-by-generative-pre-training\/","title":{"rendered":"Improving Language Understanding by Generative Pre-Training"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\" style=\"text-align:center\"><strong>NLPIR SEMINAR Y2019#19<\/strong><\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">INTRO<\/h3>\n\n\n\n<p>In the new semester, our lab, the Web Search Mining and Security Lab, plans to hold an academic seminar every Monday. Each time, a keynote speaker will share his or her understanding of papers related to his or her research.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Arrangement<\/h3>\n\n\n\n<p>This week&#8217;s seminar is organized as follows:<\/p>\n\n\n\n<ol><li>The seminar will be held at 1 p.m. on Monday at Zhongguancun Technology Park, Building 5, Room 1306.<\/li><li>The lecturer is <strong>Qinghong Jiang<\/strong>, and the paper is <strong>Improving Language Understanding by Generative Pre-Training<\/strong>.<\/li><li>Zhaoyou Liu will give a presentation on his own work.<\/li><li>The seminar will be hosted by Ziyu Liu.<\/li><li>The paper for this seminar is attached; please download it in advance.<\/li><\/ol>\n\n\n\n<p>Everyone interested in this topic is welcome to join us. 
The following is the abstract of this week\u2019s paper.<\/p>\n\n\n\n<div style=\"border:dashed windowtext 1.0pt;padding:1.0pt 4.0pt 1.0pt 4.0pt;\">\n\t<p align=\"center\" style=\"text-align:center;font-weight: bold\">\n\t\tImproving Language Understanding by Generative Pre-Training\n\t<\/p>\n\t<p align=\"center\" style=\"text-align:center;font-size: 0.5em\">\n\t\tAlec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever\n\t<\/p>\n\t<p align=\"center\" style=\"text-align:center;\">\n\t\tAbstract\n\t<\/p>\n\t<p style=\"text-indent:2em;\">\n\t\tNatural language understanding comprises a wide range of diverse tasks such as textual entailment, question answering, semantic similarity assessment, and document classification. Although large unlabeled text corpora are abundant, labeled data for learning these specific tasks is scarce, making it challenging for discriminatively trained models to perform adequately. We demonstrate that large gains on these tasks can be realized by generative pre-training of a language model on a diverse corpus of unlabeled text, followed by discriminative fine-tuning on each specific task. In contrast to previous approaches, we make use of task-aware input transformations during fine-tuning to achieve effective transfer while requiring minimal changes to the model architecture. We demonstrate the effectiveness of our approach on a wide range of benchmarks for natural language understanding. Our general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, significantly improving upon the state of the art in 9 out of the 12 tasks studied. 
For instance, we achieve absolute improvements of 8.9% on commonsense reasoning (Stories Cloze Test), 5.7% on question answering (RACE), and 1.5% on textual entailment (MultiNLI).\n        <\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-file aligncenter\"><a href=\"http:\/\/www.nlpir.org\/wordpress\/wp-content\/uploads\/2019\/06\/Improving-language-understanding-by-generative-pre-training.pdf\">Improving language understanding by generative pre-training<\/a><a href=\"http:\/\/www.nlpir.org\/wordpress\/wp-content\/uploads\/2019\/06\/Improving-language-understanding-by-generative-pre-training.pdf\" class=\"wp-block-file__button\" download>Download<\/a><\/div>\n\n\n\n<h2 class=\"wp-block-heading\" style=\"text-align:center\"><strong>NLPIR SEMINAR 32nd ISSUE COMPLETED<\/strong><\/h2>\n\n\n\n<p>Last Monday, Qinghong Jiang gave a presentation about the paper, Improving Language Understanding by Generative Pre-Training, and shared some opinions on it.<\/p>\n\n\n\n<figure class=\"wp-block-image\"><img decoding=\"async\" src=\"https:\/\/openai.com\/content\/images\/2018\/06\/zero-shot-transfer@2x.png\" alt=\"zero-shot-transfer@2x\"\/><\/figure>\n\n\n\n<p>This was the state-of-the-art language representation method before BERT. 
Visit <a href=\"https:\/\/openai.com\/blog\/language-unsupervised\/\">https:\/\/openai.com\/blog\/language-unsupervised\/<\/a> for more information.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>NLPIR SEMINAR Y2019#19 INTRO In the new  &hellip; <a href=\"http:\/\/www.nlpir.org\/wordpress\/2019\/06\/16\/improving-language-understanding-by-generative-pre-training\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":862,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[37,38],"tags":[],"_links":{"self":[{"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/posts\/6970"}],"collection":[{"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/users\/862"}],"replies":[{"embeddable":true,"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/comments?post=6970"}],"version-history":[{"count":5,"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/posts\/6970\/revisions"}],"predecessor-version":[{"id":6991,"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/posts\/6970\/revisions\/6991"}],"wp:attachment":[{"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/media?parent=6970"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/categories?post=6970"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/tags?post=6970"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}