J-BHI 2022: Learning Two-Stream CNN for Multi-Modal Age-related Macular Degeneration Categorization

Our work on multi-modal AMD categorization has been published online as a regular paper in IEEE Journal of Biomedical and Health Informatics (Impact factor: 5.772). Source code is available at https://github.com/li-xirong/mmc-amd.
Figure: Proposed end-to-end solution for multi-modal AMD categorization.
Weisen Wang, Xirong Li, Zhiyan Xu, Weihong Yu, Jianchun Zhao, Dayong Ding, Youxin Chen: Learning Two-Stream CNN for Multi-Modal Age-related Macular Degeneration Categorization. In: IEEE Journal of Biomedical and Health Informatics (J-BHI), 2022.

MM2021: Multi-Modal Multi-Instance Learning for Retinal Disease Recognition

Our ACMMM'2021 paper on multi-modal retinal disease recognition is online, with a pre-recorded video presentation available on YouTube.
Figure: Proposed multi-modal retinal disease classification network in its inference mode.
This paper tackles the emerging challenge of multi-modal retinal disease recognition. Given a multi-modal case consisting of a color fundus photo (CFP) and an array of OCT B-scan images acquired during an eye examination, we aim to build a deep neural network that recognizes multiple vision-threatening diseases for the given case. As the diagnostic efficacy of CFP and OCT is disease-dependent, the network's ability to be both selective and interpretable is important. Moreover, as both data acquisition and manual labeling are extremely expensive in the medical domain, the network has to be relatively lightweight so that it can learn from a limited set of labeled multi-modal samples. Prior art on retinal disease recognition focuses either on a single disease or on a single modality, leaving multi-modal fusion largely underexplored. In this paper we propose Multi-Modal Multi-Instance Learning (MM-MIL) for selectively fusing the CFP and OCT modalities. Its lightweight architecture (compared to current multi-head attention modules) makes it well suited to learning from relatively small datasets. For an effective use of MM-MIL, we propose to generate a pseudo sequence of CFPs by oversampling a given CFP. This tactic balances the number of instances across modalities, increases the resolution of the CFP input, and localizes the CFP regions most relevant to the final diagnosis. Extensive experiments on a real-world dataset of 1,206 multi-modal cases from 1,193 eyes of 836 subjects demonstrate the viability of the proposed model.
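To make the fusion idea concrete, here is a minimal NumPy sketch of attention-based multi-instance pooling over instances pooled from two modalities. The dimensions, the random embeddings, and the single scoring vector `w` are illustrative assumptions, not the paper's actual MM-MIL architecture; in the real model the instance features come from CNN backbones and the attention parameters are learned.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def mil_pool(instances, w_score):
    """Attention-based MIL pooling: score each instance embedding,
    normalize the scores into attention weights, and return the
    weighted sum as a single bag-level representation."""
    scores = instances @ w_score      # (n_instances,)
    alpha = softmax(scores)           # attention weights, sum to 1
    return alpha @ instances, alpha   # bag vector (d,), weights (n,)

# Toy dimensions: d-dim embeddings, 4 CFP crops (standing in for the
# oversampled pseudo sequence) and 6 OCT B-scans.
d = 8
cfp = rng.normal(size=(4, d))   # CFP instance embeddings (illustrative)
oct_ = rng.normal(size=(6, d))  # OCT instance embeddings (illustrative)

# Pool instances from both modalities into one bag so the attention
# can up-weight whichever modality is more informative for the case.
bag = np.concatenate([cfp, oct_], axis=0)
w = rng.normal(size=d)          # toy scoring vector (learned in practice)

z, alpha = mil_pool(bag, w)
print(z.shape, alpha.shape)     # (8,) (10,)
```

The per-instance weights `alpha` are what make such a model interpretable: inspecting them shows which CFP regions or OCT slices drove the prediction.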

Xirong Li, Yang Zhou, Jie Wang, Hailan Lin, Jianchun Zhao, Dayong Ding, Weihong Yu, Youxin Chen: Multi-Modal Multi-Instance Learning for Retinal Disease Recognition. In: ACM Multimedia, 2021.