Matching Images and Text with Multi-modal Tensor Fusion and Re-ranking – NLPIR Natural Language Processing and Information Retrieval Sharing Platform


Matching Images and Text with Multi-modal Tensor Fusion and Re-ranking

NLPIR SEMINAR Y2019#42

INTRO

In the new semester, our lab, the Web Search Mining and Security Lab, plans to hold an academic seminar every Monday. Each time, a keynote speaker will share his or her understanding of papers related to his or her research.

Arrangement

Tomorrow’s seminar is organized as follows:

  1. The seminar is at 1:20 pm on Monday, December 23, 2019, at Zhongguancun Technology Park, Building 5, Room 1306.
  2. Dr. Kuang will give us a lecture about psychology.
  3. Ziyu Liu is going to give a presentation on the paper "Matching Images and Text with Multi-modal Tensor Fusion and Re-ranking" (Proceedings of the 27th ACM International Conference on Multimedia, ACM, 2019: 12-20).
  4. The seminar will be hosted by Baohua Zhang.

Everyone interested in this topic is welcome to join us.

Matching Images and Text with Multi-modal Tensor Fusion and Re-ranking

Tan Wang, Xing Xu, et al.

Abstract

A major challenge in matching images and text is that they have intrinsically different data distributions and feature representations. Most existing approaches are based either on embedding or classification: the first maps image and text instances into a common embedding space for distance measuring, and the second regards image-text matching as a binary classification problem. Neither of these approaches, however, can balance matching accuracy and model complexity well. We propose a novel framework that achieves remarkable matching performance with acceptable model complexity. Specifically, in the training stage, we propose a novel Multi-modal Tensor Fusion Network (MTFN) to explicitly learn an accurate image-text similarity function with rank-based tensor fusion, rather than seeking a common embedding space for each image-text instance. Then, during testing, we deploy a generic Cross-modal Re-ranking (RR) scheme for refinement without requiring an additional training procedure. Extensive experiments on two datasets demonstrate that our MTFN-RR consistently achieves state-of-the-art matching performance with much less time complexity.
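To make the two ideas in the abstract concrete, here is a minimal sketch in plain numpy: a rank-constrained bilinear similarity (the general pattern behind tensor-fusion scoring) plus a simple rank-averaging refinement standing in for cross-modal re-ranking. All dimensions, parameter names, and the exact re-ranking rule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper's actual settings are not given here.
D_IMG, D_TXT, R = 8, 6, 4   # image dim, text dim, fusion rank
N = 5                        # number of image-text pairs

# Rank-R factors of the bilinear fusion tensor (illustrative parameters).
W_img = rng.standard_normal((D_IMG, R))
W_txt = rng.standard_normal((D_TXT, R))
w_out = rng.standard_normal(R)

def fused_similarity(v, t):
    # s(v, t) = w^T ((W_img^T v) * (W_txt^T t)): a rank-R bilinear form,
    # learned directly as a similarity score instead of embedding both
    # modalities into one space and measuring distance there.
    return float(w_out @ ((W_img.T @ v) * (W_txt.T @ t)))

imgs = rng.standard_normal((N, D_IMG))
txts = rng.standard_normal((N, D_TXT))
S = np.array([[fused_similarity(v, t) for t in txts] for v in imgs])

def rerank(S):
    # Simplified cross-modal re-ranking: combine the image-to-text and
    # text-to-image rank positions of each pair, with no extra training.
    i2t = np.argsort(np.argsort(-S, axis=1), axis=1)  # rank of each text per image
    t2i = np.argsort(np.argsort(-S, axis=0), axis=0)  # rank of each image per text
    return -(i2t + t2i) / 2.0  # higher = better combined rank

S_rr = rerank(S)
best_text_for_img0 = int(np.argmax(S_rr[0]))
```

The rank-R factorization is what keeps model complexity acceptable: the full bilinear tensor would cost O(D_IMG x D_TXT) parameters per output, while the factored form costs only O((D_IMG + D_TXT) x R).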
