
Reliable and stable fundus image registration based on brain-inspired spatially-varying adaptive pyramid context aggregation network

Document Details

Resource type:
WOS classification:
PubMed classification:

Indexed in: ◇ SCIE

Affiliations: [1]Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing Key Laboratory of Ophthalmology and Visual Sciences, Beijing, China, [2]Beijing Zhizhen Internet Technology Co. Ltd., Beijing, China, [3]Department of Ophthalmology, Peking Union Medical College Hospital, Beijing, China, [4]Department of Health Management and Physical Examination, Sichuan Provincial People’s Hospital, University of Electronic Science and Technology of China, Chengdu, China, [5]School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
Published in:
ISSN:

Keywords: retinal image analysis; fundus image registration; deep learning; context aggregation; structured triplet ranking loss

Abstract:
Fundus image registration aims to find matching keypoints between an image pair. Traditional methods detect keypoints with hand-designed features, which fail to cope with complex application scenarios. Owing to the strong feature-learning ability of deep neural networks, current deep-learning-based registration methods directly learn, in an end-to-end manner, the geometric transformation that aligns the reference image with the test image. Another mainstream line of work instead learns the displacement vector field between the image pair. In this way, image registration has achieved significant advances. However, because of the complicated vascular morphology of retinal images, such as texture and shape, widely used deep-learning-based registration methods still fail to deliver reliable and stable keypoint detection and registration results. This paper aims to bridge that gap. Concretely, since vessel crossing and branching points reliably and stably characterize the key components of a fundus image, we propose to detect and match all crossing and branching points of the input images with a single deep neural network. Moreover, to locate the keypoints accurately and learn discriminative feature embeddings, a brain-inspired spatially-varying adaptive pyramid context aggregation network is proposed to incorporate contextual cues under the supervision of a structured triplet ranking loss. Experimental results show that the proposed method achieves more accurate registration with a significant speed advantage.
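The abstract states that the descriptor embedding is learned under a structured triplet ranking loss. As a rough illustration only (this is not the authors' implementation; their "structured" variant is defined in the paper), the generic margin-based triplet ranking loss that such an objective builds on can be sketched in PyTorch as follows. The function name, margin value, batch size, and descriptor dimensionality are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's code): a margin-based triplet ranking
# loss over keypoint descriptor embeddings.
import torch
import torch.nn.functional as F

def triplet_ranking_loss(anchor, positive, negative, margin=1.0):
    """anchor/positive/negative: (N, D) L2-normalized descriptors.

    Pushes each matching (anchor, positive) pair to be closer than the
    non-matching (anchor, negative) pair by at least `margin`.
    """
    d_pos = F.pairwise_distance(anchor, positive)   # (N,) distances to matches
    d_neg = F.pairwise_distance(anchor, negative)   # (N,) distances to non-matches
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()

# Toy usage: 8 keypoints with 128-dim descriptors.
a = F.normalize(torch.randn(8, 128), dim=1)
p = F.normalize(a + 0.05 * torch.randn(8, 128), dim=1)  # perturbed copies of the anchors
n = F.normalize(torch.randn(8, 128), dim=1)             # unrelated descriptors
print(triplet_ranking_loss(a, p, n).item())
```

In the setting described by the abstract, the anchor and positive would be descriptors of the same vessel crossing or branching point seen in the two images of a pair, and the negative a descriptor of a different keypoint.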

Funding:

Grant numbers: 2021LY15, 2021-YF05-00498-SN

Language:
Times cited:
WOS:
PubmedID:
CAS (Chinese Academy of Sciences) journal ranking:
Publication year [2022] edition:
Major category | Zone 2, Medicine
Subcategory | Zone 3, Neuroscience
Latest [2023] edition:
Major category | Zone 3, Medicine
Subcategory | Zone 3, Neuroscience
JCR quartile:
Publication year [2021] edition:
Q2 NEUROSCIENCES
Latest [2023] edition:
Q2 NEUROSCIENCES

Impact factor: Latest [2023 edition] | Latest 5-year average | Publication year [2021 edition] | Publication-year 5-year average | Year before publication [2020 edition] | Year after publication [2022 edition]

First author:
First author affiliation: [1]Beijing Institute of Ophthalmology, Beijing Tongren Eye Center, Beijing Tongren Hospital, Capital Medical University, Beijing Key Laboratory of Ophthalmology and Visual Sciences, Beijing, China
Corresponding author:
Recommended citation format (GB/T 7714):
APA:
MLA:

