{"id":6742,"date":"2019-03-10T21:41:00","date_gmt":"2019-03-10T13:41:00","guid":{"rendered":"http:\/\/www.nlpir.org\/wordpress\/?p=6742"},"modified":"2019-03-17T21:12:13","modified_gmt":"2019-03-17T13:12:13","slug":"end-to-end-text-recognition-with-convolutional-neural-networks","status":"publish","type":"post","link":"http:\/\/www.nlpir.org\/wordpress\/2019\/03\/10\/end-to-end-text-recognition-with-convolutional-neural-networks\/","title":{"rendered":"End-to-End Text Recognition with Convolutional Neural Networks"},"content":{"rendered":"\n<h2 class=\"wp-block-heading\" style=\"text-align:center\"> NLPIR SEMINAR Y2019#5 <\/h2>\n\n\n\n<h3 class=\"wp-block-heading\"> INTRO <\/h3>\n\n\n\n<p>In the new semester, our lab, the Web Search Mining and Security Lab, plans to hold an academic seminar every Monday; each time, a keynote speaker will share his or her understanding of papers related to his or her research.<br><\/p>\n\n\n\n<h3 class=\"wp-block-heading\">Arrangement<br><\/h3>\n\n\n\n<p>This week&#8217;s seminar is organized as follows:<\/p>\n\n\n\n<ol><li>The seminar is at 1 p.m. on Monday at Zhongguancun Technology Park, Building 5, Room 1306.<\/li><li>The lecturer is Wang Gang, and the paper&#8217;s title is End-to-End Text Recognition with Convolutional Neural Networks.<\/li><li>The seminar will be hosted by Qinghong Jiang.<\/li><li>The paper is attached; please download it in advance.<\/li><\/ol>\n\n\n\n<p>Everyone interested in this topic is welcome to join us. The following is the abstract of this week\u2019s paper.<\/p>\n\n\n\n<p>\n\t<div style=\"border:dotted windowtext 1.0pt;padding:1.0pt 4.0pt 1.0pt 4.0pt;\">\n\t\t<p class=\"MsoNormal\" align=\"center\" style=\"text-align:center;\">\n\t\t\tEnd-to-End Text Recognition with Convolutional Neural Networks\n\t\t<\/p>\n\t\t<p class=\"MsoNormal\" align=\"center\" style=\"text-align:center;\">\n\t\t\tTao Wang&nbsp;&nbsp; David J. 
Wu&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Adam\nCoates&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Andrew Y. Ng\n\t\t<\/p>\n\t\t<p class=\"MsoNormal\" align=\"center\" style=\"text-align:center;\">\n\t\t\tAbstract\n\t\t<\/p>\n\t\t<p class=\"MsoNormal\">\n\t\t\t<span>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Full end-to-end text recognition in\nnatural images is a challenging problem that has received much attention\nrecently. Traditional systems in this area have relied on elaborate models\nincorporating carefully hand-engineered features or large amounts of prior\nknowledge. In this paper, we take a different route and combine the\nrepresentational power of large, multilayer neural networks together with\nrecent developments in unsupervised feature learning, which allows us to use a\ncommon framework to train highly-accurate text detector and character\nrecognizer modules. Then, using only simple off-the-shelf methods, we integrate\nthese two modules into a full end-to-end, lexicon-driven, scene text\nrecognition system that achieves state-of-the-art performance on standard\nbenchmarks, namely Street View Text and ICDAR 2003.<\/span>\n\t\t<\/p>\n\t<\/div>\n<\/p>\n\n\n\n<div class=\"wp-block-file aligncenter\"><a href=\"http:\/\/www.nlpir.org\/wordpress\/wp-content\/uploads\/2019\/03\/End-to-End-Text-Recognition-with-Convolutional-Neural-Networks.pdf\">End-to-End Text Recognition with Convolutional Neural Networks<\/a><a href=\"http:\/\/www.nlpir.org\/wordpress\/wp-content\/uploads\/2019\/03\/End-to-End-Text-Recognition-with-Convolutional-Neural-Networks.pdf\" class=\"wp-block-file__button\" download>Download<\/a><\/div>\n\n\n\n<!--nextpage-->\n\n\n\n<h2 class=\"wp-block-heading\" style=\"text-align:center\"><strong>NLPIR\nSEMINAR 18th ISSUE COMPLETED<\/strong><\/h2>\n\n\n\n<p>Last Monday, <strong>Gang Wang<\/strong> gave a presentation about the paper, <strong>End-to-End Text Recognition with Convolutional 
Neural Networks<\/strong>, and shared his opinions on it.<\/p>\n\n\n\n<p>The paper was published at ICPR 2012. The experiments were conducted on two datasets: 1) the ICDAR (International Conference on Document Analysis and Recognition) 2003 dataset and 2) the SVT (Street View Text) dataset, so the results mainly concern English text.<\/p>\n\n\n\n<p>One question was asked: \u201cIn Table 2 of the results, why is I-5 more accurate than I-50?\u201d A possible answer: 5 and 50 are the numbers of distractor words in the lexicon supplied for each image; the larger the lexicon, the greater the noise.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>NLPIR SEMINAR Y2019#5 INTRO In the new s &hellip; <a href=\"http:\/\/www.nlpir.org\/wordpress\/2019\/03\/10\/end-to-end-text-recognition-with-convolutional-neural-networks\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":862,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[37,38],"tags":[],"_links":{"self":[{"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/posts\/6742"}],"collection":[{"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/users\/862"}],"replies":[{"embeddable":true,"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/comments?post=6742"}],"version-history":[{"count":2,"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/posts\/6742\/revisions"}],"predecessor-version":[{"id":6757,"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/posts\/6742\/revisions\/6757"}],"wp:attachment":[{"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/media?parent=6742"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/categories?post=6742"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/www.nlpir.org\/wordpress\/wp-json\/wp\/v2\/tags?post=6742"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}