Affiliations: [1] Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing Key Laboratory of Ophthalmology and Visual Sciences, Beijing, China; [2] Beijing Zhizhen Internet Technology Co. Ltd., Beijing, China; [3] Department of Ophthalmology, Peking Union Medical College Hospital, Beijing, China; [4] Department of Health Management and Physical Examination, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, China; [5] School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
Fundus image registration aims to find matching keypoints between an image pair. Traditional methods detect keypoints with hand-designed features, which fail to cope with complex application scenarios. Owing to the strong feature-learning ability of deep neural networks, current deep-learning-based registration methods directly learn to align the geometric transformation between the reference and test images in an end-to-end manner. Another mainstream approach learns the displacement vector field between the image pair. In this way, image registration has achieved significant advances. However, because of the complicated vascular morphology of retinal images, such as their texture and shape, widely used deep-learning-based registration methods fail to deliver reliable and stable keypoint detection and registration results. In this paper, we aim to bridge this gap. Concretely, since vessel crossing and branching points reliably and stably characterize the key components of a fundus image, we propose to detect and match all crossing and branching points of the input images with a single deep neural network. Moreover, to locate the keypoints accurately and learn discriminative feature embeddings, a brain-inspired spatially-varying adaptive pyramid context aggregation network is proposed to incorporate contextual cues under the supervision of a structured triplet ranking loss. Experimental results show that the proposed method achieves more accurate registration with a significant speed advantage.
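The abstract mentions training the keypoint descriptors under a structured triplet ranking loss. The paper's exact formulation is not given here; as a rough illustration only, a generic margin-based triplet ranking loss over keypoint descriptors can be sketched as follows (function name, dimensions, and margin are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def triplet_ranking_loss(anchor, positive, negative, margin=1.0):
    """Generic margin-based triplet ranking loss (illustrative sketch).

    Pulls each anchor keypoint descriptor toward its matching (positive)
    descriptor from the paired image and pushes it away from a
    non-matching (negative) descriptor, up to a fixed margin.
    """
    d_pos = np.linalg.norm(anchor - positive, axis=-1)  # distance to true match
    d_neg = np.linalg.norm(anchor - negative, axis=-1)  # distance to non-match
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

# Toy 128-D descriptors for three keypoints (e.g., vessel crossing points)
rng = np.random.default_rng(0)
anchor = rng.normal(size=(3, 128))
positive = anchor + 0.01 * rng.normal(size=(3, 128))  # near-duplicate match
negative = rng.normal(size=(3, 128))                  # unrelated keypoint

loss = triplet_ranking_loss(anchor, positive, negative)
```

The hinge form means well-separated triplets (match much closer than non-match, beyond the margin) contribute zero loss, so training focuses on hard or ambiguous keypoint pairs.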
Funding:
Sichuan Provincial People's Hospital Fund Project [2021LY15]; Chengdu Science and Technology Bureau Project [2021-YF05-00498-SN]
First author's affiliation: [1] Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing Key Laboratory of Ophthalmology and Visual Sciences, Beijing, China
Corresponding author:
Recommended citation (GB/T 7714):
Xu Jie, Yang Kang, Chen Youxin, et al. Reliable and stable fundus image registration based on brain-inspired spatially-varying adaptive pyramid context aggregation network[J]. FRONTIERS IN NEUROSCIENCE, 2023, 16. DOI: 10.3389/fnins.2022.1117134.
APA:
Xu, Jie, Yang, Kang, Chen, Youxin, Dai, Liming, Zhang, Dongdong, ... & Yang, Zhanbo. (2023). Reliable and stable fundus image registration based on brain-inspired spatially-varying adaptive pyramid context aggregation network. FRONTIERS IN NEUROSCIENCE, 16.
MLA:
Xu, Jie, et al. "Reliable and stable fundus image registration based on brain-inspired spatially-varying adaptive pyramid context aggregation network." FRONTIERS IN NEUROSCIENCE 16 (2023).