Predicting Visual Features from Text for Image and Video Caption Retrieval

Our Word2VisualVec work has been accepted for publication as a REGULAR paper in the IEEE Transactions on Multimedia. Source code is available at https://github.com/danieljf24/w2vv.

This paper strives to find amidst a set of sentences the one best describing the content of a given image or video. Different from existing works, which rely on a joint subspace for their image and video caption retrieval, we propose to do so in a visual space exclusively. Apart from this conceptual novelty, we contribute Word2VisualVec, a deep neural network architecture that learns to predict a visual feature representation from textual input. Example captions are encoded into a textual embedding based on multi-scale sentence vectorization and further transferred into a deep visual feature of choice via a simple multi-layer perceptron. We further generalize Word2VisualVec for video caption retrieval, by predicting from text both 3-D convolutional neural network features as well as a visual-audio representation. Experiments on Flickr8k, Flickr30k, the Microsoft Video Description dataset and the very recent NIST TrecVid challenge for video caption retrieval detail Word2VisualVec’s properties, its benefit over textual embeddings, the potential for multimodal query composition and its state-of-the-art results.
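To make the idea concrete, below is a minimal sketch (in PyTorch) of predicting a visual feature from a sentence vector with a multi-layer perceptron and then ranking captions in the visual space. The layer sizes, the mean-squared-error objective and the cosine ranking are illustrative assumptions; the actual configuration is in the released code.

```python
# A minimal sketch of the Word2VisualVec idea: map a pre-computed sentence
# vector into a deep visual feature space with an MLP, then rank candidate
# captions for an image by cosine similarity in that space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Word2VisualVec(nn.Module):
    def __init__(self, text_dim=1024, hidden_dim=2048, visual_dim=4096):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(text_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, visual_dim),
        )

    def forward(self, sentence_vec):
        # sentence_vec: (batch, text_dim) multi-scale sentence vectorization
        return self.mlp(sentence_vec)

model = Word2VisualVec()
sentences = torch.randn(8, 1024)          # stand-in sentence vectorizations
image_feat = torch.randn(8, 4096)         # stand-in CNN features of the images
pred_feat = model(sentences)
loss = F.mse_loss(pred_feat, image_feat)  # regress text into the visual space
loss.backward()

# Caption retrieval: score every candidate caption against one image feature.
scores = F.cosine_similarity(pred_feat, image_feat[0:1].expand_as(pred_feat), dim=1)
best_caption = scores.argmax().item()
```

Because captions are mapped into the visual feature space, retrieval reduces to a nearest-neighbor comparison against the image's own CNN feature, with no joint subspace involved.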

Jianfeng Dong, Xirong Li, Cees G. M. Snoek: Predicting Visual Features from Text for Image and Video Caption Retrieval. In: IEEE Transactions on Multimedia (TMM), vol. 20, no. 12, pp. 3377-3388, 2018.

 

Cross-Media Similarity Evaluation for Web Image Retrieval in the Wild

Our work on cross-media similarity computation for web image retrieval has been accepted for publication as a REGULAR paper in the IEEE Transactions on Multimedia.

In order to retrieve unlabeled images by textual queries, cross-media similarity computation is a key ingredient. Although novel methods are continuously introduced, little has been done to evaluate these methods together with large-scale query log analysis. Consequently, it is unclear how far these methods have brought us in answering real-user queries. It is also unclear how much progress advanced models have made over baseline methods that use relatively simple text/image matching. This paper takes a pragmatic approach to answering the two questions. Queries are automatically categorized according to the proposed query visualness measure, and later connected to the evaluation of multiple cross-media similarity models on three test sets. Such a connection reveals that the success of state-of-the-art models is mainly attributed to their good performance on visual-oriented queries, which account for only a small part of real-user queries. To quantify the current progress, we propose a simple text2image method, representing a novel query by a set of images selected from a large-scale query log. Consequently, computing the cross-media similarity between the query and a given image boils down to computing the visual similarity between the given image and the selected images. Image retrieval experiments on the challenging Clickture dataset show that the proposed text2image is a strong baseline, comparing favorably to recent deep learning alternatives.
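As an illustration, here is a minimal sketch of the text2image scoring described above: a query is represented by visual features of images selected from a query log, and a candidate image is scored by its average visual similarity to that set. The feature dimensionality, the random stand-in features and the cosine measure are assumptions made for the example.

```python
# A minimal sketch of text2image: cross-media similarity between a textual
# query and a candidate image becomes a purely visual comparison against
# images selected from a query log for that query.
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def text2image_score(candidate_feat, query_image_feats):
    """Cross-media similarity = mean visual similarity to the query's images."""
    return float(np.mean([cosine(candidate_feat, f) for f in query_image_feats]))

# Toy example with random stand-in CNN features.
rng = np.random.default_rng(0)
query_images = rng.standard_normal((10, 2048))   # images selected for the query
candidate = rng.standard_normal(2048)            # unlabeled image to be ranked
print(text2image_score(candidate, query_images))
```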

Jianfeng Dong, Xirong Li, Duanqing Xu: Cross-Media Similarity Evaluation for Web Image Retrieval in the Wild. In: IEEE Transactions on Multimedia (TMM), vol. 20, no. 9, pp. 2371-2384, 2018.

Fluency-Guided Cross-Lingual Image Captioning

Our MM2017 paper on cross-lingual image captioning is online. We have also released code, data and pre-trained models at https://github.com/li-xirong/cross-lingual-cap.

Image captioning has so far been explored mostly in English, as most available datasets are in this language. However, the application of image captioning should not be restricted by language. Only a few studies have been conducted on image captioning in a cross-lingual setting. Different from these works, which manually build a dataset for a target language, we aim to learn a cross-lingual captioning model fully from machine-translated sentences. To overcome the lack of fluency in the translated sentences, we propose a fluency-guided learning framework. The framework comprises a module to automatically estimate the fluency of the sentences and another module to utilize the estimated fluency scores to effectively train an image captioning model for the target language. As experiments on two bilingual (English-Chinese) datasets show, our approach improves both the fluency and the relevance of the generated captions in Chinese, without using any manually written sentences in the target language.
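The weighting idea can be sketched as follows: a fluency-weighted loss down-weights machine-translated captions that the fluency module scores as disfluent. The caption model, the per-sentence fluency scores and the exact weighting scheme below are stand-ins, not the released implementation.

```python
# A minimal sketch of fluency-guided training: scale the per-sentence caption
# loss by an estimated fluency score, so disfluent translations contribute
# less to the gradient.
import torch
import torch.nn.functional as F

def fluency_weighted_loss(logits, targets, fluency, pad_id=0):
    """logits: (batch, seq, vocab); targets: (batch, seq); fluency: (batch,)."""
    token_loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), targets.reshape(-1),
        ignore_index=pad_id, reduction='none'
    ).reshape(targets.shape)                              # per-token loss, pads zeroed
    sent_loss = token_loss.sum(dim=1) / (targets != pad_id).sum(dim=1).clamp(min=1)
    return (fluency * sent_loss).mean()                   # down-weight disfluent sentences

logits = torch.randn(4, 12, 5000, requires_grad=True)     # stand-in captioner outputs
targets = torch.randint(1, 5000, (4, 12))                 # machine-translated captions
fluency = torch.tensor([0.9, 0.2, 0.7, 1.0])              # estimated fluency per sentence
fluency_weighted_loss(logits, targets, fluency).backward()
```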

Weiyu Lan, Xirong Li, Jianfeng Dong: Fluency-Guided Cross-Lingual Image Captioning. In: ACM Multimedia, 2017.

Harvesting Deep Models for Cross-Lingual Image Annotation

Our CBMI2017 paper on cross-lingual image annotation is online.

This paper considers cross-lingual image annotation, harvesting deep visual models from one language to annotate images with labels from another language. This task cannot be accomplished by machine translation, as labels can be ambiguous and a translated vocabulary leaves us limited freedom to annotate images with appropriate labels. Given non-overlapping vocabularies between two languages, we formulate cross-lingual image annotation as a zero-shot learning problem. For cross-lingual label matching, we adapt zero-shot learning by replacing the standard monolingual semantic embedding space with a bilingual alternative. In order to reduce both label ambiguity and redundancy, we propose a simple yet effective approach called label-enhanced zero-shot learning. Using three state-of-the-art deep visual models, i.e., ResNet-152, GoogleNet-Shuffle and OpenImages, experiments on the test set of Flickr8k-CN demonstrate the viability of the proposed approach for cross-lingual image annotation.
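A minimal sketch of the cross-lingual label matching step, assuming pre-trained bilingual word embeddings: an English label predicted by a deep visual model is mapped to its nearest Chinese label by cosine similarity. The toy vocabularies and random vectors are placeholders, not the actual embedding space used in the paper.

```python
# A minimal sketch of cross-lingual label matching in a shared bilingual
# word embedding space.
import numpy as np

rng = np.random.default_rng(0)
en_vocab = ['dog', 'bicycle', 'beach']
zh_vocab = ['狗', '自行车', '海滩', '山']
en_emb = rng.standard_normal((len(en_vocab), 300))   # stand-in bilingual vectors
zh_emb = rng.standard_normal((len(zh_vocab), 300))

def match_label(en_label):
    """Return the Chinese label closest to the English label in the shared space."""
    v = en_emb[en_vocab.index(en_label)]
    sims = zh_emb @ v / (np.linalg.norm(zh_emb, axis=1) * np.linalg.norm(v))
    return zh_vocab[int(np.argmax(sims))]

print(match_label('dog'))   # nearest Chinese label in the bilingual space
```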

Qijie Wei, Xiaoxu Wang, Xirong Li: Harvesting Deep Models for Cross-Lingual Image Annotation. In: The 15th International Workshop on Content-Based Multimedia Indexing (CBMI), 2017.

Tag Relevance Fusion for Social Image Retrieval

My work on image tag relevance learning, Tag Relevance Fusion for Social Image Retrieval, has been published as a special issue paper in the Multimedia Systems journal.

Due to the subjective nature of social tagging, measuring the relevance of social tags with respect to the visual content is crucial for retrieving the increasing amounts of social-networked images. Observing the limitations of any single measurement of tag relevance, we introduce in this paper tag relevance fusion as an extension to methods for tag relevance estimation. We present a systematic study, covering tag relevance fusion in early and late stages, and in supervised and unsupervised settings. Experiments on a large present-day benchmark set show that tag relevance fusion leads to better image retrieval. Moreover, unsupervised tag relevance fusion is found to be practically as effective as supervised tag relevance fusion, but without the need for any training effort. This finding suggests the potential of tag relevance fusion for real-world deployment.
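For illustration, here is a minimal sketch of unsupervised late fusion: rank-normalize the scores of several tag relevance estimators and average them with equal weights. The estimators and their scores are stand-ins; the paper studies several fusion variants beyond this one.

```python
# A minimal sketch of unsupervised late tag relevance fusion: make the
# estimators comparable by rank normalization, then average per image-tag pair.
import numpy as np

def rank_normalize(scores):
    """Map raw scores to [0, 1] by rank, so different estimators are comparable."""
    ranks = np.argsort(np.argsort(scores))
    return ranks / max(len(scores) - 1, 1)

def late_fusion(score_lists):
    """Average rank-normalized scores across estimators (equal weights)."""
    return np.mean([rank_normalize(np.asarray(s)) for s in score_lists], axis=0)

# Three stand-in estimators scoring the tag 'beach' for five images.
estimator_scores = [
    [0.9, 0.1, 0.5, 0.7, 0.2],
    [12.0, 3.0, 9.0, 5.0, 1.0],
    [0.3, 0.2, 0.8, 0.6, 0.1],
]
print(late_fusion(estimator_scores))   # fused relevance per image
```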

Xirong Li: Tag Relevance Fusion for Social Image Retrieval. In: Multimedia Systems, vol. 23, no. 1, pp. 29–40, 2017, ISSN: 1432-1882.

Detecting Violence in Video using Subclasses

Our work on video violence detection is to appear as a short paper at the forthcoming ACM Multimedia 2016 conference.

This paper attacks the challenging problem of violence detection in videos. Different from existing works focusing on combining multi-modal features, we go one step further by adding and exploiting subclasses visually related to violence. We enrich the MediaEval 2015 violence dataset by manually labeling violent videos with respect to the subclasses. Such fine-grained annotations not only help understand what has impeded previous efforts on learning to fuse the multi-modal features, but also enhance the generalization ability of the learned fusion to novel test data. The new subclass-based solution, with an AP of 0.303 and a P100 of 0.55 on the MediaEval 2015 test set, outperforms the state-of-the-art. Notice that our solution does not require fine-grained annotations on the test set, so it can be directly applied to novel and fully unlabeled videos. Interestingly, our study shows that motion-related features (MBH, HOG and HOF), though an essential part of previous systems, appear to be dispensable. Data is available at http://lixirong.net/datasets/mm2016vsd.
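A rough sketch of the subclass idea follows, under the assumption of one classifier per subclass and a max over their outputs; the subclass names, the random features and the choice of probabilistic SVMs are illustrative, not the exact system.

```python
# A minimal sketch: train one classifier per violence-related subclass and
# score a test video by the maximum subclass probability.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 128))               # stand-in multi-modal video features
subclass_labels = {                               # hypothetical subclass annotations
    'fight': rng.integers(0, 2, 200),
    'blood': rng.integers(0, 2, 200),
    'explosion': rng.integers(0, 2, 200),
}
classifiers = {name: SVC(probability=True).fit(X, y)
               for name, y in subclass_labels.items()}

def violence_score(video_feat):
    """Violence score = max positive probability over the subclasses."""
    return max(clf.predict_proba(video_feat[None])[0, 1]
               for clf in classifiers.values())

print(violence_score(rng.standard_normal(128)))
```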

Xirong Li, Yujia Huo, Qin Jin, Jieping Xu: Detecting Violence in Video using Subclasses. In: ACM Multimedia, 2016.

Improving Image Captioning by Concept-based Sentence Reranking

We presented our image2text work (Best Paper Runner-Up) at the Pacific-Rim Conference on Multimedia (PCM) 2016 today.

This paper describes our winning entry in the ImageCLEF 2015 image sentence generation task. We improve Google’s CNN-LSTM model by introducing concept-based sentence reranking, a data-driven approach which exploits the large amounts of concept-level annotations on Flickr. Different from previous usage of concept detection that is tailored to specific image captioning models, the proposed approach reranks predicted sentences in terms of their matches with detected concepts, essentially treating the underlying model as a black box. This property makes the approach applicable to a number of existing solutions. We also experiment with fine-tuning the deep language model, which further improves performance. Scoring a METEOR of 0.1875 on the ImageCLEF 2015 test set, our system outperforms the runner-up (METEOR of 0.1687) by a clear margin.
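The reranking step can be sketched as below, with the word-overlap measure and the combination weight alpha as illustrative assumptions; the captioning model is treated purely as a black box that supplies candidate sentences and their scores.

```python
# A minimal sketch of concept-based sentence reranking: combine the caption
# model's own score with how well a sentence matches the detected concepts.
def concept_match(sentence, detected_concepts):
    """Sum of detection scores of concepts mentioned in the sentence."""
    words = set(sentence.lower().split())
    return sum(score for concept, score in detected_concepts.items()
               if concept in words)

def rerank(candidates, detected_concepts, alpha=0.5):
    """candidates: list of (sentence, model_score) from the black-box captioner."""
    return sorted(candidates,
                  key=lambda c: alpha * c[1]
                  + (1 - alpha) * concept_match(c[0], detected_concepts),
                  reverse=True)

detected = {'dog': 0.9, 'frisbee': 0.7, 'beach': 0.4}      # toy concept detections
candidates = [('a man riding a horse', -2.1),
              ('a dog catching a frisbee', -2.4)]
print(rerank(candidates, detected)[0][0])                   # reranked best caption
```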

Figure: Examples illustrating concept-based sentence re-ranking for improving image captioning.

Xirong Li, Qin Jin: Improving Image Captioning by Concept-based Sentence Reranking. In: The 17th Pacific-Rim Conference on Multimedia (PCM), pp. 231-240, 2016, (Best Paper Runner-up).

Early Embedding and Late Reranking for Video Captioning

We are going to present our video2text work in the Multimedia Grand Challenge session of the forthcoming ACM Multimedia 2016 conference in Amsterdam.

This paper describes our solution for the MSR Video to Language Challenge. We start from the popular ConvNet + LSTM model, which we extend with two novel modules. One is early embedding, which enriches the current low-level input to the LSTM with tag embeddings. The other is late reranking, for re-scoring generated sentences in terms of their relevance to a specific video. The modules are inspired by recent works on image captioning, repurposed and redesigned for video. As experiments on the MSR-VTT validation set show, the joint use of these two modules adds a clear improvement over a non-trivial ConvNet + LSTM baseline under four performance metrics. The viability of the proposed solution is further confirmed by the organizers’ blind test. Our system is ranked 4th in terms of overall performance, while scoring the best CIDEr-D, which measures the human-likeness of generated captions.
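A minimal sketch of the two modules, with the dimensions, the concatenation scheme and the relevance measure as assumptions rather than the exact architecture:

```python
# 'Early embedding': feed a video tag embedding to the LSTM alongside each
# word embedding. 'Late reranking': re-score generated sentences by their
# relevance to the video.
import torch
import torch.nn as nn
import torch.nn.functional as F

word_emb = nn.Embedding(5000, 300)
tag_emb = nn.Linear(100, 300)                     # embeds a bag-of-tags vector
lstm = nn.LSTM(input_size=600, hidden_size=512, batch_first=True)

words = torch.randint(0, 5000, (2, 10))           # partial captions (batch, seq)
tags = torch.rand(2, 100)                         # stand-in video tag vectors
tag_feat = tag_emb(tags).unsqueeze(1).expand(-1, 10, -1)
lstm_in = torch.cat([word_emb(words), tag_feat], dim=-1)   # early embedding
out, _ = lstm(lstm_in)

def late_rerank(sentence_vecs, video_vec, model_scores, alpha=0.5):
    """Combine the captioner's score with sentence-video relevance."""
    rel = F.cosine_similarity(sentence_vecs, video_vec.unsqueeze(0), dim=1)
    return alpha * model_scores + (1 - alpha) * rel

scores = late_rerank(torch.randn(5, 512), torch.randn(512), torch.randn(5))
best = scores.argmax().item()
```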

Jianfeng Dong, Xirong Li, Weiyu Lan, Yujia Huo, Cees G. M. Snoek: Early Embedding and Late Reranking for Video Captioning. In: ACM Multimedia, 2016, (Grand Challenge Award).

TagBook for video event detection with few or zero example

Our work on video event detection, TagBook: A Semantic Video Representation without Supervision for Event Detection, has been published as a regular paper in the July issue of IEEE Transactions on Multimedia.

We consider the problem of event detection in video for scenarios where only few, or even zero examples are available for training. For this challenging setting, the prevailing solutions in the literature rely on a semantic video representation obtained from thousands of pre-trained concept detectors. Different from existing work, we propose a new semantic video representation that is based on freely available social tagged videos only, without the need for training any intermediate concept detectors. We introduce a simple algorithm that propagates tags from a video’s nearest neighbors, similar in spirit to the ones used for image retrieval, but redesign it for video event detection by including video source set refinement and varying the video tag assignment. We call our approach TagBook and study its construction, descriptiveness and detection performance on the TRECVID 2013 and 2014 multimedia event detection datasets and the Columbia Consumer Video dataset. Despite its simple nature, the proposed TagBook video representation is remarkably effective for few-example and zero-example event detection, even outperforming very recent state-of-the-art alternatives building on supervised representations.
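A minimal sketch of the TagBook construction and its use for zero-example detection, assuming cosine similarity over stand-in video features, a fixed neighbor count and a toy tag vocabulary; the source set refinement and tag assignment variants from the paper are omitted.

```python
# A minimal sketch of TagBook: represent a test video by tags propagated from
# its visually nearest socially tagged videos, then score an event query by
# matching its textual tags against that representation.
import numpy as np

rng = np.random.default_rng(0)
vocab = ['dog', 'birthday', 'parade', 'skateboard', 'cake']
source_feats = rng.standard_normal((100, 512))          # socially tagged source videos
source_tags = rng.integers(0, 2, (100, len(vocab)))     # their binary tag assignments

def tagbook(video_feat, k=10):
    """Propagate tags from the k nearest source videos (cosine similarity)."""
    sims = source_feats @ video_feat / (
        np.linalg.norm(source_feats, axis=1) * np.linalg.norm(video_feat))
    nn_idx = np.argsort(-sims)[:k]
    return source_tags[nn_idx].mean(axis=0)              # soft tag vector

def event_score(video_feat, event_tags):
    """Zero-example detection: match the event's textual tags to the TagBook."""
    tb = tagbook(video_feat)
    return sum(tb[vocab.index(t)] for t in event_tags if t in vocab)

print(event_score(rng.standard_normal(512), ['birthday', 'cake']))
```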

Masoud Mazloom, Xirong Li, Cees G. M. Snoek: TagBook: A Semantic Video Representation Without Supervision for Event Detection. In: IEEE Transactions on Multimedia (TMM), vol. 18, no. 7, pp. 1378-1388, 2016.