Fluency-Guided Cross-Lingual Image Captioning

Our MM2017 paper on cross-lingual image captioning is online.  We have also released code, data and pre-trained models at https://github.com/li-xirong/cross-lingual-cap.


Image captioning has so far been explored mostly in English, as most available datasets are in this language. However, the application of image captioning should not be restricted by language. Only a few studies have been conducted on image captioning in a cross-lingual setting. Different from these works, which manually build a dataset for a target language, we aim to learn a cross-lingual captioning model fully from machine-translated sentences. To overcome the lack of fluency in the translated sentences, we propose in this paper a fluency-guided learning framework. The framework comprises a module that automatically estimates the fluency of the sentences and another module that utilizes the estimated fluency scores to effectively train an image captioning model for the target language. As experiments on two bilingual (English-Chinese) datasets show, our approach improves both the fluency and the relevance of the generated Chinese captions, without using any manually written sentences from the target language.
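To give a concrete feel for the second module, here is a minimal sketch (not the released code; all names are illustrative) of one straightforward way estimated fluency scores could guide training: using them as per-sentence weights on the captioning loss, so that disfluent machine-translated sentences contribute less to the gradient.

```python
# Minimal sketch (PyTorch, illustrative only): fluency scores as per-sentence
# weights on a standard captioning cross-entropy loss.
import torch
import torch.nn as nn


def fluency_weighted_loss(logits, targets, fluency, pad_id=0):
    """logits: (batch, seq_len, vocab) from a captioning decoder,
    targets: (batch, seq_len) token ids of machine-translated captions,
    fluency: (batch,) scores in [0, 1] from the fluency estimation module."""
    # Per-token cross-entropy, keeping the per-position values
    ce = nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        ignore_index=pad_id,
        reduction="none",
    ).reshape(targets.shape)

    # Average over non-padding tokens to get a per-sentence loss
    mask = (targets != pad_id).float()
    per_sentence = (ce * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)

    # Down-weight sentences the fluency module judges to be disfluent
    return (fluency * per_sentence).mean()
```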

Weiyu Lan, Xirong Li, Jianfeng Dong (2017): Fluency-Guided Cross-Lingual Image Captioning. In: ACM Multimedia, 2017.

Harvesting Deep Models for Cross-Lingual Image Annotation

Our CBMI2017 paper on cross-lingual image annotation is online.

This paper considers cross-lingual image annotation, harvesting deep visual models trained in one language to annotate images with labels from another language. This task cannot be accomplished by machine translation alone, as labels can be ambiguous and a translated vocabulary leaves limited freedom to annotate images with appropriate labels. Given non-overlapping vocabularies between two languages, we formulate cross-lingual image annotation as a zero-shot learning problem. For cross-lingual label matching, we adapt zero-shot learning by replacing the monolingual semantic embedding space with a bilingual alternative. In order to reduce both label ambiguity and redundancy, we propose a simple yet effective approach called label-enhanced zero-shot learning. Using three state-of-the-art deep visual models, i.e., ResNet-152, GoogleNet-Shuffle and OpenImages, experiments on the test set of Flickr8k-CN demonstrate the viability of the proposed approach for cross-lingual image annotation.
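As an illustration of the label-matching step, the following sketch (not the paper's implementation; embeddings and vocabularies are assumed given) projects the English labels predicted by a visual model into a shared bilingual embedding space and retrieves the nearest target-language labels by cosine similarity.

```python
# Minimal sketch (illustrative only): cross-lingual label matching in a
# shared bilingual embedding space.
import numpy as np


def cross_lingual_annotate(pred_en_labels, pred_scores, embed_en, embed_zh,
                           zh_vocab, top_k=5):
    """pred_en_labels / pred_scores: labels and confidences from a deep
    visual model trained on English labels.
    embed_en, embed_zh: dicts mapping labels to vectors in one bilingual space.
    zh_vocab: list of candidate target-language labels."""
    # Score-weighted average of the predicted English label embeddings
    vecs = np.stack([embed_en[label] for label in pred_en_labels])
    query = (np.asarray(pred_scores)[:, None] * vecs).sum(axis=0)
    query /= np.linalg.norm(query) + 1e-12

    # Cosine similarity against every candidate target-language label
    zh_matrix = np.stack([embed_zh[word] for word in zh_vocab])
    zh_matrix /= np.linalg.norm(zh_matrix, axis=1, keepdims=True) + 1e-12
    sims = zh_matrix @ query

    # Return the top-k target-language labels with their similarities
    best = np.argsort(-sims)[:top_k]
    return [(zh_vocab[i], float(sims[i])) for i in best]
```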


Qijie Wei, Xiaoxu Wang, Xirong Li (2017): Harvesting Deep Models for Cross-Lingual Image Annotation. In: The 15th International Workshop on Content-Based Multimedia Indexing (CBMI), 2017.