The operating skills of interventionists strongly influence surgical outcomes, and combining these skills with surgical robots is a promising way to improve the intelligence of such robots. However, recognizing unstructured operating skills is a complex and lengthy process, and related research is limited. Moreover, compared with recognizing skills from operation data of a specific vascular segment, skill recognition across the entire surgical procedure is both more meaningful and more challenging. In this study, a data acquisition solution integrating four modal sensors was used to collect operation data from expert and novice interventionists on model and in vivo animal experimental platforms. A hierarchical fusion strategy based on the Shannon entropy weight method is proposed for feature fusion and behavior recognition, achieving average recognition accuracies of 96.84% (model) and 93.26% (in vivo). Further qualitative and quantitative analyses effectively evaluated skill-level differences between expert and novice interventionists. These promising results indicate the feasibility of applying this fusion strategy to clinical operating skill recognition. Moreover, the digitization of operating skills could serve as a control indicator to improve the intelligence of surgical robots.
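The Shannon entropy weight method mentioned in the abstract assigns larger weights to features (or modalities) whose values vary more across samples, since low-entropy (more dispersed) columns carry more discriminative information. A minimal sketch of this weighting step is shown below; the function names and the toy two-modality data are illustrative assumptions, not the authors' implementation.

```python
import math

def entropy_weights(matrix):
    """Entropy weight method: columns whose values are more dispersed
    across samples receive larger weights.
    matrix: m samples x n non-negative feature scores."""
    m, n = len(matrix), len(matrix[0])
    divergences = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        # normalize the column to a probability distribution
        p = [v / total for v in col]
        # Shannon entropy, scaled into [0, 1] by ln(m)
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        divergences.append(1.0 - e)  # degree of divergence of column j
    s = sum(divergences)
    # normalize divergences so the weights sum to 1
    return [d / s for d in divergences]

def fuse(feature_vec, weights):
    # weighted fusion of per-modality feature scores into a single score
    return sum(f * w for f, w in zip(feature_vec, weights))
```

A constant column has maximal (scaled) entropy 1 and therefore weight 0, so a modality that never discriminates between operators contributes nothing to the fused score.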
Funding:
National Natural Science Foundation of China [62133009, 62473258]; Fundamental Research Funds for the Central Universities [YG2023ZD05, YG2023ZD14]; National Key Research and Development Program [2022YFC2405500, 2022YFC2405503]; Project of Laboratory Open Fund of Key Technology and Materials in Minimally Invasive Spine Surgery [2024JZWC-ZDA03]; Project of Institute of Medical Robotics of Shanghai Jiao Tong University
First author's affiliation: [1]Shanghai Jiao Tong Univ, Tongren Hosp, Inst Med Robot, Shanghai Key Lab Flexible Med Robot, Shanghai 200336, Peoples R China
Corresponding author:
Corresponding author's affiliations: [2]Shanghai Jiao Tong Univ, Inst Forming Technol & Equipment, Shanghai 200240, Peoples R China; [3]Shanghai Jiao Tong Univ, Inst Med Robot, Shanghai 200240, Peoples R China
Recommended citation (GB/T 7714):
Wang Shuang, Zhang Wenlong, Zhao Liang, et al. Task-Oriented Interventionists' Operating Skills Recognition Based on Multimodal Data Hierarchical Fusion[J]. IEEE SENSORS JOURNAL, 2025, 25(11): 20732-20742. doi:10.1109/JSEN.2025.3558981.
APA:
Wang, Shuang, Zhang, Wenlong, Zhao, Liang, Yang, Guang-Zhong, & Xie, Le. (2025). Task-Oriented Interventionists' Operating Skills Recognition Based on Multimodal Data Hierarchical Fusion. IEEE SENSORS JOURNAL, 25(11), 20732-20742.
MLA:
Wang, Shuang, et al. "Task-Oriented Interventionists' Operating Skills Recognition Based on Multimodal Data Hierarchical Fusion." IEEE SENSORS JOURNAL 25.11 (2025): 20732-20742.