Cross-Media Similarity Evaluation for Web Image Retrieval in the Wild

Our work on cross-media similarity computation for web image retrieval has been accepted for publication as a REGULAR paper in the IEEE Transactions on Multimedia.

To retrieve unlabeled images by textual queries, cross-media similarity computation is a key ingredient. Although novel methods are continuously introduced, little has been done to evaluate these methods together with large-scale query log analysis. Consequently, how far these methods have brought us in answering real-user queries remains unclear. Given baseline methods that use relatively simple text/image matching, it is also unclear how much progress advanced models have made. This paper takes a pragmatic approach to answering these two questions. Queries are automatically categorized according to the proposed query visualness measure, and then connected to the evaluation of multiple cross-media similarity models on three test sets. This connection reveals that the success of state-of-the-art methods is mainly attributed to their good performance on visual-oriented queries, which account for only a small portion of real-user queries. To quantify the current progress, we propose a simple text2image method that represents a novel query by a set of images selected from a large-scale query log. Computing the cross-media similarity between the query and a given image then boils down to comparing the visual similarity between the given image and the selected images. Image retrieval experiments on the challenging Clickture dataset show that the proposed text2image is a strong baseline, comparing favorably to recent deep learning alternatives.
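The text2image idea can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's implementation: the query log is modeled as a list of (logged query, image feature) pairs, log images are selected by naive word overlap with the novel query, and cross-media similarity is the mean cosine similarity between the candidate image's feature and the selected images' features. All function names and the feature representation are assumptions for illustration.

```python
import numpy as np

def select_query_images(query, query_log, k=2):
    """Pick up to k log images whose logged queries share words with the
    novel query (a crude stand-in for query-to-query matching).
    query_log: list of (logged_query, image_feature) pairs."""
    q_words = set(query.lower().split())
    scored = []
    for logged_query, feat in query_log:
        overlap = len(q_words & set(logged_query.lower().split()))
        if overlap > 0:
            scored.append((overlap, feat))
    scored.sort(key=lambda x: -x[0])
    return [feat for _, feat in scored[:k]]

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def text2image_similarity(query, image_feat, query_log, k=2):
    """Cross-media similarity of (query, image) reduces to the mean visual
    similarity between the image and the images selected from the log."""
    selected = select_query_images(query, query_log, k)
    if not selected:
        return 0.0
    return sum(cosine(image_feat, f) for f in selected) / len(selected)

# Toy query log: two logged queries with (hypothetical) 2-D image features.
log = [("red car", np.array([1.0, 0.0])),
       ("blue sky", np.array([0.0, 1.0]))]
print(text2image_similarity("red car photo", np.array([1.0, 0.0]), log))
```

In this toy setup, a candidate image whose feature matches the images clicked for "red car" scores highest for the query "red car photo", while a query with no overlap in the log falls back to zero similarity.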


Jianfeng Dong, Xirong Li, Duanqing Xu (2018): Cross-Media Similarity Evaluation for Web Image Retrieval in the Wild. In: IEEE Transactions on Multimedia (TMM), 20 (9), pp. 2371-2384.