Improving Image Captioning by Concept-based Sentence Reranking

We presented our image2text work (Best Paper Runner-Up) at the Pacific-Rim Conference on Multimedia (PCM) 2016 today.

This paper describes our winning entry in the ImageCLEF 2015 image sentence generation task. We improve Google's CNN-LSTM model by introducing concept-based sentence reranking, a data-driven approach that exploits the large amount of concept-level annotations on Flickr. Unlike previous uses of concept detection, which are tailored to specific image captioning models, the proposed approach reranks predicted sentences in terms of their matches with detected concepts, essentially treating the underlying model as a black box. This property makes the approach applicable to a number of existing solutions. We also experiment with fine-tuning the deep language model, which improves the performance further. Scoring a METEOR of 0.1875 on the ImageCLEF 2015 test set, our system outperforms the runner-up (METEOR of 0.1687) by a clear margin.
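Because the reranking step treats the captioning model as a black box, the core idea can be sketched in a few lines: take the model's candidate sentences with their scores, and combine each score with how well the sentence covers the concepts detected in the image. The function names, the simple word-overlap matcher, and the linear interpolation weight below are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of concept-based sentence reranking.
# The captioning model is a black box that yields (sentence, score)
# candidates; we rerank them by their match with detected concepts.

def rerank(candidates, detected_concepts, alpha=0.5):
    """candidates: list of (sentence, model_score) pairs.
    detected_concepts: set of concept words detected in the image.
    alpha: interpolation weight between model score and concept match
    (an illustrative choice, not the paper's setting)."""
    def concept_match(sentence):
        # Fraction of detected concepts mentioned in the sentence.
        words = set(sentence.lower().split())
        if not detected_concepts:
            return 0.0
        return len(words & detected_concepts) / len(detected_concepts)

    scored = [
        (sentence, (1 - alpha) * model_score + alpha * concept_match(sentence))
        for sentence, model_score in candidates
    ]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Usage: a candidate that mentions the detected concepts moves to the top,
# even if the model ranked it slightly lower.
candidates = [
    ("a dog running on grass", -0.7),
    ("a man riding a horse", -0.9),
]
detected = {"man", "horse"}
best_sentence = rerank(candidates, detected)[0][0]
```

Since the reranker only touches the candidate list, any captioning system that can emit multiple scored hypotheses can be plugged in unchanged, which is the property the abstract highlights.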

Figure: Examples illustrating concept-based sentence reranking for improving image captioning.

Xirong Li, Qin Jin (2016): Improving Image Captioning by Concept-based Sentence Reranking. In: The 17th Pacific-Rim Conference on Multimedia (PCM), pp. 231–240. (Best Paper Runner-up).