
RET-CLIP: A Retinal Image Foundation Model Pre-trained with Clinical Diagnostic Reports

Document Details

Indexed in: CPCI (ISTP)

Affiliations: [1]Beijing Institute of Technology, Beijing, China [2]Beijing Tongren Hospital, Beijing, China [3]Capital Medical University, Beijing, China [4]Beijing Institute of Ophthalmology, Beijing, China

Keywords: Vision-Language Pre-training; Foundation Model; Retinal Fundus Image

Abstract:
Vision-language foundation models are increasingly investigated in computer vision and natural language processing, yet their exploration in ophthalmology and broader medical applications remains limited. A key challenge is the lack of labeled data for training foundation models. To address this issue, this paper develops a CLIP-style retinal image foundation model. The model, RET-CLIP, is trained on a dataset of 193,865 patients to extract general features of color fundus photographs (CFPs), employing a tripartite optimization strategy that operates at the left-eye, right-eye, and patient levels to reflect real-world clinical scenarios. Extensive experiments show that RET-CLIP outperforms existing benchmarks across eight diverse datasets spanning four critical diagnostic categories: diabetic retinopathy, glaucoma, multiple disease diagnosis, and multi-label classification of multiple diseases, demonstrating the performance and generality of the foundation model. The source code and pre-trained model are available at https://github.com/sStonemason/RETCLIP.
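The tripartite strategy described in the abstract can be illustrated with a minimal sketch of a CLIP-style symmetric contrastive (InfoNCE) objective applied at three levels. This is a hypothetical reconstruction for illustration only: the function names (`info_nce`, `tripartite_loss`), the mean-pooled patient-level fusion, and the temperature value are assumptions, not the paper's actual implementation (see the repository linked above for that).

```python
import numpy as np

def info_nce(img, txt, temperature=0.07):
    """Symmetric CLIP-style contrastive loss between image and text
    embedding batches of shape (N, D); matching pairs share a row index."""
    img = img / np.linalg.norm(img, axis=1, keepdims=True)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    logits = img @ txt.T / temperature  # (N, N) cosine similarities

    def xent(l):
        # cross-entropy with the matching pair on the diagonal
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(np.diag(p)).mean()

    # average image-to-text and text-to-image directions
    return (xent(logits) + xent(logits.T)) / 2

def tripartite_loss(left_img, right_img, left_txt, right_txt):
    """Hypothetical tripartite objective: align left-eye, right-eye, and a
    patient-level fusion (here, a simple mean of both eyes) separately."""
    patient_img = (left_img + right_img) / 2
    patient_txt = (left_txt + right_txt) / 2
    return (info_nce(left_img, left_txt)
            + info_nce(right_img, right_txt)
            + info_nce(patient_img, patient_txt)) / 3
```

In a real training loop the embeddings would come from an image encoder over each eye's CFP and a text encoder over the clinical report, with the three loss terms possibly weighted rather than averaged uniformly.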

First author's affiliation: [1]Beijing Institute of Technology, Beijing, China


Copyright © 2020 Beijing Tongren Hospital, Capital Medical University. Technical support: Chongqing Juhe Technology Co., Ltd. Address: No. 1 Dongjiaominxiang, Dongcheng District, Beijing (100730)