Stacked Cross Attention for Image-Text Matching – NLPIR Natural Language Processing and Information Retrieval Sharing Platform


Stacked Cross Attention for Image-Text Matching

NLPIR SEMINAR Y2019#34

INTRO

In the new semester, our lab, the Web Search Mining and Security Lab, plans to hold an academic seminar every Monday. Each time, a keynote speaker will share his or her understanding of papers related to his or her research.

Arrangement

Tomorrow’s seminar is organized as follows:

  1. The seminar will be held at 1:20 p.m. on Monday, October 28, 2019, at Zhongguancun Technology Park, Building 5, Room 1306.
  2. Ziyu Liu will give a presentation on the paper Stacked Cross Attention for Image-Text Matching (ECCV 2018, September 8–14, 2018, Munich, Germany).
  3. Yaofei Yang will give a lecture about CCL.
  4. The seminar will be hosted by Qinghong Jiang.

Everyone interested in this topic is welcome to join us.

Stacked Cross Attention for Image-Text Matching

Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, and Xiaodong He

Abstract

In this paper, we study the problem of image-text matching. Inferring the latent semantic alignment between objects or other salient stuff (e.g. snow, sky, lawn) and the corresponding words in sentences allows capturing the fine-grained interplay between vision and language, and makes image-text matching more interpretable. Prior work either simply aggregates the similarity of all possible pairs of regions and words without attending differentially to more and less important words or regions, or uses a multi-step attentional process to capture a limited number of semantic alignments, which is less interpretable. In this paper, we present Stacked Cross Attention to discover the full latent alignments using both image regions and words in a sentence as context and infer image-text similarity. Our approach achieves state-of-the-art results on the MS-COCO and Flickr30K datasets. On Flickr30K, our approach outperforms the current best methods by 22.1% relatively in text retrieval from an image query, and 18.2% relatively in image retrieval with a text query (based on Recall@1). On MS-COCO, our approach improves sentence retrieval by 17.8% relatively and image retrieval by 16.6% relatively (based on Recall@1 using the 5K test set). Code has been made available at: https://github.com/kuanghuei/SCAN.
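To make the abstract's idea concrete, here is a minimal NumPy sketch of the text-to-image variant of Stacked Cross Attention: each word attends over all image regions, the word is compared against its attended image vector, and the per-word relevances are pooled into one image-sentence score. The similarity normalization and the inverse-temperature `lam` follow the paper's general formulation; function and variable names are illustrative, and mean pooling is just one of the aggregation choices the paper considers — this is a sketch, not the authors' implementation.

```python
import numpy as np

def stacked_cross_attention(regions, words, lam=9.0):
    """Text-to-image Stacked Cross Attention (illustrative sketch).

    regions: (k, d) array of image region features.
    words:   (n, d) array of word features.
    Returns a scalar image-sentence similarity score.
    """
    # L2-normalize features so dot products become cosine similarities.
    r = regions / (np.linalg.norm(regions, axis=1, keepdims=True) + 1e-8)
    w = words / (np.linalg.norm(words, axis=1, keepdims=True) + 1e-8)

    # Region-word cosine similarity matrix, shape (k, n).
    s = r @ w.T
    # Threshold at zero and normalize over regions for each word.
    s = np.maximum(s, 0)
    s = s / (np.linalg.norm(s, axis=0, keepdims=True) + 1e-8)

    # For each word, softmax-attend over regions (lam sharpens the weights).
    e = np.exp(lam * s)
    alpha = e / e.sum(axis=0, keepdims=True)          # (k, n)

    # Attended image vector for each word, shape (n, d).
    attended = alpha.T @ r

    # Relevance of each word to its attended vector (cosine similarity;
    # w rows are already unit-norm).
    rel = np.sum(w * attended, axis=1) / (
        np.linalg.norm(attended, axis=1) + 1e-8)

    # Pool word relevances into one score (mean pooling here; the paper
    # also uses log-sum-exp pooling).
    return float(rel.mean())
```

Since each per-word relevance is a cosine similarity, the pooled score lies in [-1, 1]; a matching image-sentence pair should score higher than a mismatched one after the features are learned.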
