NLPIR SEMINAR Y2019#10
In the new semester, our lab, the Web Search Mining and Security Lab, plans to hold an academic seminar every Monday. Each time, a keynote speaker will share his or her understanding of papers related to his or her research.
This week’s seminar is organized as follows:
- The seminar will be held at 1 p.m. on Monday at Zhongguancun Technology Park, Building 5, Room 1306.
- The lecturer is Zhaoyou Liu, and the paper's title is "Pay Less Attention with Lightweight and Dynamic Convolutions".
- The seminar will be hosted by Gang Wang.
- The paper for this seminar is attached; please download it in advance.
Everyone interested in this topic is welcome to join us. The following is the abstract of this week's paper.
Pay Less Attention with Lightweight and Dynamic Convolutions
Felix Wu, Angela Fan, Alexei Baevski, Yann N. Dauphin, Michael Auli
Self-attention is a useful mechanism to build generative models for language and images. It determines the importance of context elements by comparing each element to the current time step. In this paper, we show that a very lightweight convolution can perform competitively to the best reported self-attention results. Next, we introduce dynamic convolutions which are simpler and more efficient than self-attention. We predict separate convolution kernels based solely on the current time-step in order to determine the importance of context elements. The number of operations required by this approach scales linearly in the input length, whereas self-attention is quadratic. Experiments on large-scale machine translation, language modeling and abstractive summarization show that dynamic convolutions improve over strong self-attention models. On the WMT'14 English-German test set dynamic convolutions achieve a new state of the art of 29.7 BLEU.
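To make the core idea of the abstract concrete, here is a minimal single-head sketch of a dynamic convolution in NumPy. It is an illustration under simplifying assumptions, not the paper's implementation: the names `dynamic_conv` and `W_k` are hypothetical, and the published model additionally uses multi-head depthwise kernels, gated linear units, and learned projections. The sketch shows the two points the abstract makes: the kernel at each position is predicted from the current time step alone, and the total work is linear in the sequence length.

```python
import numpy as np

def softmax(v):
    # Numerically stable softmax over the last axis.
    e = np.exp(v - v.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dynamic_conv(x, W_k, k=3):
    """Simplified dynamic convolution over a sequence x of shape (T, d).

    For each time step t, a kernel of width k is predicted from x[t]
    alone (via the illustrative projection W_k of shape (d, k)),
    softmax-normalized so it forms a weighting over context elements,
    and applied to the k most recent positions. Total work is
    O(T * k * d): linear in T, unlike the O(T^2 * d) of self-attention.
    """
    T, d = x.shape
    # Left-pad so every output position sees exactly k context elements.
    ctx = np.vstack([np.zeros((k - 1, d)), x])   # (T + k - 1, d)
    out = np.zeros_like(x)
    for t in range(T):
        kernel = softmax(x[t] @ W_k)             # (k,), from x[t] only
        window = ctx[t : t + k]                  # (k, d) context window
        out[t] = kernel @ window                 # weighted sum over window
    return out

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 4))      # T=5 time steps, d=4 features
W_k = rng.normal(size=(4, 3))    # predicts a width-3 kernel per step
y = dynamic_conv(x, W_k)
print(y.shape)                   # (5, 4): one output vector per time step
```

Note that, unlike self-attention, no pairwise comparison between positions is ever computed: the weighting over context comes entirely from the current time step's own representation.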