
An attention-supervised full-resolution residual network for the segmentation of breast ultrasound images

Document Details


Indexed in: SCIE, EI

Affiliations: [1]Beihang Univ, Sch Instrumentat & Optoelect Engn, Beijing 100191, Peoples R China [2]Beihang Univ, Beijing Adv Innovat Ctr Big Data Based Precis Med, Beijing 100191, Peoples R China [3]Capital Med Univ, Beijing Tongren Hosp, Dept Diagnost Ultrasound, Beijing 100730, Peoples R China [4]Mem Sloan Kettering Canc Ctr, Dept Med Phys, New York, NY 10065 USA

Keywords: breast cancer; breast ultrasound image; deep learning; segmentation

Abstract:
Purpose: Breast cancer is the most common cancer among women worldwide. Medical ultrasound is one of the most widely applied imaging methods for breast tumors. Automatic breast ultrasound (BUS) image segmentation can measure tumor size objectively. However, various ultrasound artifacts hinder segmentation. We proposed an attention-supervised full-resolution residual network (ASFRRN) to segment tumors from BUS images.
Methods: In the proposed method, Global Attention Upsample (GAU) and deep supervision were introduced into a full-resolution residual network (FRRN), where GAU learns to merge features from different levels with attention for deep supervision. Two datasets were employed for evaluation. One (Dataset A) consisted of 163 BUS images with tumors (53 malignant and 110 benign) from the UDIAT Centre Diagnostic, and the other (Dataset B) included 980 BUS images with tumors (595 malignant and 385 benign) from the Sun Yat-sen University Cancer Center. The tumors in both datasets were manually segmented by medical doctors. For evaluation, the Dice coefficient (Dice), Jaccard similarity coefficient (JSC), and F1 score were calculated.
Results: For Dataset A, the proposed method achieved a higher Dice (84.3 ± 10.0%), JSC (75.2 ± 10.7%), and F1 score (84.3 ± 10.0%) than the previous best method, FRRN. For Dataset B, it also achieved a higher Dice (90.7 ± 13.0%), JSC (83.7 ± 14.8%), and F1 score (90.7 ± 13.0%) than the previous best methods, DeepLabv3 and the dual attention network (DANet). For Dataset A + B, it achieved a higher Dice (90.5 ± 13.1%), JSC (83.3 ± 14.8%), and F1 score (90.5 ± 13.1%) than the previous best method, DeepLabv3. Additionally, ASFRRN has only 10.6 M parameters, fewer than DANet (71.4 M) and DeepLabv3 (41.3 M).
Conclusions: We proposed ASFRRN, which combines FRRN, an attention mechanism, and deep supervision to segment tumors from BUS images. It achieved high segmentation accuracy with a reduced number of parameters.
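The record lists Dice, the Jaccard similarity coefficient (JSC), and the F1 score as evaluation metrics but does not define them. Below is a minimal sketch (not the authors' code) of how these metrics are typically computed on binary segmentation masks; note that for binary masks the Dice coefficient and the F1 score are algebraically identical, which is consistent with the identical Dice and F1 values reported in the Results above.

```python
# Minimal sketch: Dice, Jaccard (JSC), and F1 on binary masks with NumPy.
# Not from the paper; these are the standard definitions of the named metrics.
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Return (dice, jaccard, f1) for two binary masks of equal shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true positive pixels (overlap)
    fp = np.logical_and(pred, ~gt).sum()   # false positive pixels
    fn = np.logical_and(~pred, gt).sum()   # false negative pixels

    dice = 2 * tp / (2 * tp + fp + fn + eps)       # Dice coefficient
    jaccard = tp / (tp + fp + fn + eps)            # Jaccard similarity coefficient
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)  # equals Dice for binary masks
    return dice, jaccard, f1

# Toy example: 4x4 prediction vs. ground truth
pred = np.array([[0, 1, 1, 0],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt = np.array([[0, 1, 1, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
print(segmentation_metrics(pred, gt))  # dice ≈ 0.857, jaccard ≈ 0.75, f1 ≈ 0.857
```

The abstract also names Global Attention Upsample (GAU) as the attention module added to FRRN, but gives no layer details. The sketch below follows the commonly published GAU formulation (channel attention from globally pooled high-level features reweighting low-level features before fusion); the channel sizes and layer choices are assumptions, not the paper's exact configuration.

```python
# Hypothetical GAU sketch in PyTorch, assuming the widely used formulation:
# global-average-pooled high-level features produce channel attention that
# reweights low-level features, then the upsampled high-level features are added.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalAttentionUpsample(nn.Module):
    def __init__(self, low_channels: int, high_channels: int, out_channels: int):
        super().__init__()
        # Refine the low-level (high-resolution) features.
        self.conv_low = nn.Sequential(
            nn.Conv2d(low_channels, out_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
        )
        # Channel attention from globally pooled high-level features.
        self.conv_att = nn.Sequential(
            nn.Conv2d(high_channels, out_channels, kernel_size=1, bias=False),
            nn.Sigmoid(),
        )
        # Project high-level features to the fusion width.
        self.conv_high = nn.Conv2d(high_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        att = self.conv_att(F.adaptive_avg_pool2d(high, 1))   # (N, C, 1, 1) channel weights
        low = self.conv_low(low) * att                         # attention-weighted low-level features
        high = F.interpolate(self.conv_high(high), size=low.shape[2:],
                             mode="bilinear", align_corners=False)
        return low + high                                      # fused full-resolution features

# Usage with dummy feature maps (hypothetical channel sizes):
gau = GlobalAttentionUpsample(low_channels=64, high_channels=128, out_channels=64).eval()
low = torch.randn(2, 64, 56, 56)
high = torch.randn(2, 128, 28, 28)
print(gau(low, high).shape)  # torch.Size([2, 64, 56, 56])
```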

CAS (Chinese Academy of Sciences) journal ranking:
Publication year [2019] edition:
Major category | Zone 3, Medicine
Subcategory | Zone 3, Nuclear Medicine
Latest [2023] edition:
Major category | Zone 2, Medicine
Subcategory | Zone 3, Nuclear Medicine
JCR quartile:
Publication year [2018] edition:
Q2 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Latest [2023] edition:
Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING


First author's affiliations: [1]Beihang Univ, Sch Instrumentat & Optoelect Engn, Beijing 100191, Peoples R China [2]Beihang Univ, Beijing Adv Innovat Ctr Big Data Based Precis Med, Beijing 100191, Peoples R China
