
A VGG attention vision transformer network for benign and malignant classification of breast ultrasound images

Document Details

Resource type:
WOS category:
PubMed category:

Indexed in: ◇ SCIE

Affiliations: [1]School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China [2]School of Computer Science and Engineering, Beihang University, Beijing, China [3]Research Institute for Frontier Science, Beihang University, Beijing, China [4]Department of Diagnostic Ultrasound, Beijing Tongren Hospital, Capital Medical University, Beijing, China [5]Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA
Source:
ISSN:

Keywords: breast tumor; breast ultrasound image; classification; deep learning

Abstract:
Breast cancer is the most commonly occurring cancer worldwide. The ultrasound reflectivity imaging technique can be used to obtain breast ultrasound (BUS) images, which can be used to classify benign and malignant tumors. However, the classification is subjective and depends on the experience and skill of operators and doctors. Automatic classification methods can assist doctors and improve objectivity, but current convolutional neural networks (CNNs) are not good at learning global features, and vision transformers (ViTs) are not good at extracting local features. In this study, we proposed a visual geometry group attention ViT (VGGA-ViT) network to overcome their disadvantages. In the proposed method, we used a CNN module to extract the local features and employed a ViT module to learn the global relationships among different regions and enhance the relevant local features. The CNN module, named the VGGA module, was composed of a VGG backbone, a feature-extraction fully connected layer, and a squeeze-and-excitation block. Both the VGG backbone and the ViT module were pretrained on the ImageNet dataset and retrained using BUS samples in this study. Two BUS datasets were employed for validation. Cross-validation was conducted on the two BUS datasets. For Dataset A, the proposed VGGA-ViT network achieved high accuracy (88.71 ± 1.55%), recall (90.73 ± 1.57%), specificity (85.58 ± 3.35%), precision (90.77 ± 1.98%), F1 score (90.73 ± 1.24%), and Matthews correlation coefficient (MCC) (76.34 ± 3.29%), which were better than those of all previous networks compared in this study.
Dataset B was used as a separate test set; the test results showed that the VGGA-ViT had the highest accuracy (81.72 ± 2.99%), recall (64.45 ± 2.96%), specificity (90.28 ± 3.51%), precision (77.08 ± 7.21%), F1 score (70.11 ± 4.25%), and MCC (57.64 ± 6.88%). In this study, we proposed the VGGA-ViT for BUS classification, which is good at learning both local and global features. The proposed network achieved higher accuracy than the compared previous methods. © 2022 American Association of Physicists in Medicine.
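The squeeze-and-excitation (SE) block named in the abstract reweights CNN feature channels with learned gates: global average pooling squeezes each channel to a scalar, a small bottleneck of two fully connected layers produces a per-channel gate in (0, 1), and the feature map is scaled channel-wise. A minimal NumPy sketch of this general mechanism (the weights `w1`/`w2` and their shapes are illustrative assumptions, not the paper's actual parameters):

```python
import numpy as np

def squeeze_and_excitation(feature_map, w1, w2):
    """Channel reweighting via squeeze-and-excitation.

    feature_map: (C, H, W) activations from a CNN backbone.
    w1: (C // r, C) bottleneck weights (r = reduction ratio, assumed here).
    w2: (C, C // r) expansion weights.
    """
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    z = feature_map.mean(axis=(1, 2))
    # Excitation: FC -> ReLU -> FC -> sigmoid, giving one gate per channel
    s = np.maximum(w1 @ z, 0.0)
    gates = 1.0 / (1.0 + np.exp(-(w2 @ s)))  # each gate lies in (0, 1)
    # Scale: multiply every channel of the feature map by its gate
    return feature_map * gates[:, None, None]
```

Because each gate is in (0, 1), the block can only attenuate channels relative to the input; during training, the bottleneck learns which channels to preserve and which to suppress.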

Funding:
Language:
Times cited:
WOS:
PubMed ID:
CAS journal ranking:
Publication-year [2021] edition:
Major category | Zone 3, Medicine
Subcategory | Zone 3, Nuclear Medicine
Latest [2023] edition:
Major category | Zone 2, Medicine
Subcategory | Zone 3, Nuclear Medicine
JCR quartile:
Publication-year [2020] edition:
Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Latest [2023] edition:
Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING

Impact factor: Latest [2023 ed.] | Latest five-year average | Publication year [2020 ed.] | Publication-year five-year average | Year before publication [2019 ed.] | Year after publication [2021 ed.]

First author:
First author's affiliation: [1]School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, China
Corresponding author:
Corresponding author's affiliation: [5]Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York, USA [*1]Department of Medical Physics, Memorial Sloan Kettering Cancer Center, 1275 York Avenue, New York, NY 10065, USA.
Recommended citation format (GB/T 7714):
APA:
MLA:

