T-MM 2019: COCO-CN for Cross-Lingual Image Tagging, Captioning, and Retrieval

Our work on cross-lingual image tagging, captioning and retrieval has been published as a regular paper in the September issue of the IEEE Transactions on Multimedia (Impact factor: 5.452). Data and code are available at https://github.com/li-xirong/coco-cn.

This paper contributes to cross-lingual image annotation and retrieval in terms of both data and baseline methods. We propose COCO-CN, a novel dataset enriching MS-COCO with manually written Chinese sentences and tags. For effective annotation acquisition, we develop a recommendation-assisted collective annotation system that automatically provides an annotator with several tags and sentences deemed relevant to the visual content. With 20,342 images annotated with 27,218 Chinese sentences and 70,993 tags, COCO-CN is currently the largest Chinese–English dataset offering a unified and challenging platform for cross-lingual image tagging, captioning, and retrieval. For each task we develop conceptually simple yet effective methods for learning from cross-lingual resources. Extensive experiments on the three tasks justify the viability of the proposed dataset and methods.
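For readers who want to experiment with the data, the following is a minimal sketch of how one might load image-to-Chinese-caption annotations of this kind. The file name and the line format (image id plus sentence index, a tab, then the Chinese sentence) are assumptions made for illustration only; please consult the COCO-CN repository for the actual release files and formats.

```python
from collections import defaultdict

def load_cocn_sentences(path):
    """Load Chinese sentences keyed by MS-COCO image id.

    Assumes one annotation per line in the form
    "<image_id>#<sentence_index><TAB><Chinese sentence>".
    This layout is an illustrative assumption, not the official release format.
    """
    captions = defaultdict(list)
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            key, sentence = line.split("\t", 1)
            image_id = key.split("#")[0]  # drop the per-image sentence index
            captions[image_id].append(sentence)
    return captions

if __name__ == "__main__":
    # Hypothetical file name; see https://github.com/li-xirong/coco-cn for the real files.
    captions = load_cocn_sentences("coco-cn_sentences.txt")
    print(len(captions), "images,",
          sum(len(v) for v in captions.values()), "Chinese sentences")
```

Such an image-id-to-captions mapping is the usual starting point for building tagging, captioning, or retrieval training sets on top of the MS-COCO images.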

Xirong Li, Chaoxi Xu, Xiaoxu Wang, Weiyu Lan, Zhengxiong Jia, Gang Yang, Jieping Xu: COCO-CN for Cross-Lingual Image Tagging, Captioning and Retrieval. In: IEEE Transactions on Multimedia, vol. 21, no. 9, pp. 2347-2360, 2019.

Harvesting Deep Models for Cross-Lingual Image Annotation

Our CBMI 2017 paper on cross-lingual image annotation is now online.

This paper considers cross-lingual image annotation, i.e., harvesting deep visual models trained on one language to annotate images with labels from another language. This task cannot be accomplished by machine translation alone, as labels can be ambiguous and a translated vocabulary leaves limited freedom to annotate images with appropriate labels. Given non-overlapping vocabularies between the two languages, we formulate cross-lingual image annotation as a zero-shot learning problem. For cross-lingual label matching, we adapt zero-shot learning by replacing the conventional monolingual semantic embedding space with a bilingual alternative. To reduce both label ambiguity and redundancy, we propose a simple yet effective approach called label-enhanced zero-shot learning. Using three state-of-the-art deep visual models, i.e., ResNet-152, GoogleNet-Shuffle and OpenImages, experiments on the test set of Flickr8k-CN demonstrate the viability of the proposed approach for cross-lingual image annotation.
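The core idea of matching labels across languages in a shared bilingual embedding space can be illustrated with a small sketch. The code below is a simplified, illustrative take rather than the paper's exact label-enhanced method: each target-language (e.g. Chinese) label inherits the score of its most similar source-language (e.g. English) label, weighted by cosine similarity. The embeddings and classifier scores here are random placeholders; in practice they would come from bilingual word embeddings and a pretrained visual model such as ResNet-152.

```python
import numpy as np

def l2norm(m):
    """Row-wise L2 normalization."""
    return m / (np.linalg.norm(m, axis=1, keepdims=True) + 1e-12)

def cross_lingual_annotate(visual_scores, src_label_vecs, tgt_label_vecs, top_k=5):
    """Transfer classifier scores over source-language labels to
    target-language labels via a shared bilingual embedding space.

    visual_scores  : (n_src,)    classifier scores for one image
    src_label_vecs : (n_src, d)  bilingual embeddings of source labels
    tgt_label_vecs : (n_tgt, d)  bilingual embeddings of target labels
    Returns the indices and scores of the top_k target labels.
    """
    # Cosine similarity between every target label and every source label.
    sim = l2norm(tgt_label_vecs) @ l2norm(src_label_vecs).T  # (n_tgt, n_src)
    # Each target label takes the best similarity-weighted source score.
    tgt_scores = (sim * visual_scores[np.newaxis, :]).max(axis=1)
    order = np.argsort(-tgt_scores)[:top_k]
    return order, tgt_scores[order]

# Toy example with random placeholders (purely illustrative).
rng = np.random.default_rng(0)
scores = rng.random(1000)                # e.g. ResNet-152 scores over 1000 English labels
en_vecs = rng.normal(size=(1000, 300))   # bilingual embeddings of English labels
zh_vecs = rng.normal(size=(500, 300))    # bilingual embeddings of Chinese labels
idx, s = cross_lingual_annotate(scores, en_vecs, zh_vecs)
print(idx, s)
```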

Qijie Wei, Xiaoxu Wang, Xirong Li: Harvesting Deep Models for Cross-Lingual Image Annotation. In: The 15th International Workshop on Content-Based Multimedia Indexing (CBMI), 2017.