{"id":7348,"date":"2019-12-08T21:53:21","date_gmt":"2019-12-08T13:53:21","guid":{"rendered":"http:\/\/www.nlpir.org\/wordpress\/?p=7348"},"modified":"2019-12-08T21:53:21","modified_gmt":"2019-12-08T13:53:21","slug":"learning-sentiment-specific-word-embedding-for-twitter-sentiment-classification","status":"publish","type":"post","link":"http:\/\/www.nlpir.org\/wordpress\/2019\/12\/08\/learning-sentiment-specific-word-embedding-for-twitter-sentiment-classification\/","title":{"rendered":"Learning Sentiment-Specific Word Embedding for Twitter Sentiment Classification"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\" style=\"text-align:center\">NLPIR SEMINAR Y2019#40<\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">INTRO<\/h3>\n\n\n\n<p>In the new semester, our lab, the Web Search Mining and Security Lab, plans to hold an academic seminar every Monday, at which a keynote speaker will share his\/her understanding of papers related to his\/her research.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Arrangement<\/h3>\n\n\n\n<p>Tomorrow&#8217;s seminar is organized as follows:<\/p>\n\n\n\n<ol><li>The seminar will be held at 1:20 p.m. on Monday, December 9, 2019, at Zhongguancun Technology Park, Building 5, Room 1306.<\/li><li>Baohua Zhang will give a presentation on the paper &#8220;Learning Sentiment-Specific Word Embedding for Twitter Sentiment Classification&#8221; (Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1555-1565, Baltimore, Maryland, USA, June 23-25, 2014).<\/li><li>The seminar will be hosted by Ziyu Liu.<\/li><\/ol>\n\n\n\n<p>Everyone interested in this topic is welcome to join us. 
<\/p>\n\n\n\n<div style=\"border:dashed windowtext 1.0pt;padding:1.0pt 4.0pt 1.0pt 4.0pt;\">\n\t<p align=\"center\" style=\"text-align:center;font-weight: bold\">\n\t\tLearning Sentiment-Specific Word Embedding for Twitter Sentiment Classification\n\t<\/p>\n\t<p align=\"center\" style=\"text-align:center;font-size: 0.5em\">\n\t\tDuyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, Bing Qin\n\t<\/p>\n\t<p align=\"center\" style=\"text-align:center;\">\n\t\tAbstract\n\t<\/p>\n\t<p style=\"text-indent:2em;\">\n\t\tWe present a method that learns word embedding for Twitter sentiment classification in this paper. Most existing algorithms for learning continuous word representations typically only model the syntactic context of words but ignore the sentiment of text. This is problematic for sentiment analysis as they usually map words with similar syntactic context but opposite sentiment polarity, such as good and bad, to neighboring word vectors. We address this issue by learning sentiment-specific word embedding (SSWE), which encodes sentiment information in the continuous representation of words. Specifically, we develop three neural networks to effectively incorporate the supervision from sentiment polarity of text (e.g. sentences or tweets) in their loss functions. To obtain large scale training corpora, we learn the sentiment-specific word embedding from massive distant-supervised tweets collected by positive and negative emoticons. 
Experiments on applying SSWE to a benchmark Twitter sentiment classification dataset in SemEval 2013 show that (1) the SSWE feature performs comparably with hand-crafted features in the top-performed system; (2) the performance is further improved by concatenating SSWE with existing feature set.\n\t<\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-file\"><a href=\"http:\/\/www.nlpir.org\/wordpress\/wp-content\/uploads\/2019\/12\/Learning-Sentiment-Specific-Word-Embedding-for-Twitter-Sentiment-Classification.pdf\">Learning-Sentiment-Specific-Word-Embedding-for-Twitter-Sentiment-Classification<\/a><a href=\"http:\/\/www.nlpir.org\/wordpress\/wp-content\/uploads\/2019\/12\/Learning-Sentiment-Specific-Word-Embedding-for-Twitter-Sentiment-Classification.pdf\" class=\"wp-block-file__button\" download>Download<\/a><\/div>\n","protected":false},"excerpt":{"rendered":"<p>NLPIR SEMINAR Y2019#40 INTRO In the new  &hellip; <a href=\"http:\/\/www.nlpir.org\/wordpress\/2019\/12\/08\/learning-sentiment-specific-word-embedding-for-twitter-sentiment-classification\/\">Continue reading <span 
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":862,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[37,38],"tags":[],"_links":{"self":[{"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/posts\/7348"}],"collection":[{"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/users\/862"}],"replies":[{"embeddable":true,"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/comments?post=7348"}],"version-history":[{"count":1,"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/posts\/7348\/revisions"}],"predecessor-version":[{"id":7350,"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/posts\/7348\/revisions\/7350"}],"wp:attachment":[{"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/media?parent=7348"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/categories?post=7348"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/tags?post=7348"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}