A VS ultrasound diagnostic system with kidney image evaluation functions

Jiayi Zhou, Norihiro Koizumi, Yu Nishiyama, Kiminao Kogiso, Tomohiro Ishikawa, Kento Kobayashi, Yusuke Watanabe, Takumi Fujibayashi, Miyu Yamada, Momoko Matsuyama, Hiroyuki Tsukihara, Ryosuke Tsumura, Kiyoshi Yoshinaka, Naoki Matsumoto, Masahiro Ogawa, Hideyo Miyazaki, Kazushi Numata, Hidetoshi Nagaoka, Toshiyuki Iwai, Hideyuki Iijima

Research output: Contribution to journal › Article › peer-review

4 Citations (Scopus)

Abstract

Purpose: An inherent limitation of ultrasound-based diagnosis is that the quality of the resulting images depends directly on the skill of the physician operating the probe. Physicians must constantly adjust the probe position to maintain a cross section of the target organ, which shifts continuously with the patient's respiratory motion. We therefore developed a robotic ultrasound diagnostic system with a deep-learning-based visual servo system to help alleviate this burden on physicians. Methods: The system consists of three robots: an organ tracking robot (OTR), a robotic bed, and a robotic supporting arm. Two image processing methods (YOLOv5s and BiSeNet V2) detect the location of the target kidney, and a third (ResNet-50) evaluates the appropriateness of the obtained ultrasound images. The image processing results are then transmitted to the OTR as motion commands. Results: In our experiments, YOLOv5s combined with Kalman filtering achieved the highest effective tracking rate (0.749), an improvement of about 37% over the case without filtering, and the ultrasound images obtained during its tracking had the highest and most stable appropriateness probability. BiSeNet V2 with Kalman filtering achieved the second-highest tracking rate (0.694), a 75% improvement over the unfiltered case. Conclusion: Although YOLOv5s with Kalman filtering tracked most efficiently, BiSeNet V2 with Kalman filtering detected a kidney center of gravity closer to the kidney's actual motion. Furthermore, the segmentation-based model can also measure the cross-sectional area, maximum diameter, and other detailed features of the target kidney, making it more practical for actual diagnoses.
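The tracking pipeline described above pairs a per-frame detector with a Kalman filter that smooths the detected kidney centroid across frames. As a rough illustration only (not the authors' implementation), a constant-velocity Kalman filter over a 2D centroid might look like the following; the state model, the one-frame time step, and all noise parameters here are assumptions:

```python
import numpy as np

class CentroidKalmanFilter:
    """Constant-velocity Kalman filter for a 2D detection centroid.

    State: [x, y, vx, vy]; measurement: [x, y].
    Illustrative sketch; noise covariances are assumed values.
    """

    def __init__(self, q=1e-2, r=4.0):
        self.F = np.eye(4)
        self.F[0, 2] = self.F[1, 3] = 1.0   # x += vx * dt, with dt = 1 frame
        self.H = np.zeros((2, 4))
        self.H[0, 0] = self.H[1, 1] = 1.0   # we observe position only
        self.Q = q * np.eye(4)              # process noise covariance
        self.R = r * np.eye(2)              # measurement noise covariance
        self.x = None                       # state estimate
        self.P = np.eye(4) * 10.0           # state covariance

    def update(self, z):
        """Feed one detected centroid (x, y); return the smoothed (x, y)."""
        z = np.asarray(z, dtype=float)
        if self.x is None:                  # initialize from the first detection
            self.x = np.array([z[0], z[1], 0.0, 0.0])
            return tuple(self.x[:2])
        # Predict: propagate state and covariance one frame forward
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct: blend the prediction with the new detection
        y = z - self.H @ self.x                       # innovation
        S = self.H @ self.P @ self.H.T + self.R       # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return tuple(self.x[:2])
```

One plausible reason such filtering raises the effective tracking rate is that when the detector momentarily loses the kidney, the predict step alone can supply an extrapolated centroid to the servo loop instead of a dropped frame.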

Original language: English
Pages (from-to): 227-246
Number of pages: 20
Journal: International Journal of Computer Assisted Radiology and Surgery
Volume: 18
Issue number: 2
DOIs
Publication status: Published - Feb 2023

Keywords

  • Deep learning
  • Robotic ultrasound
  • Visual servoing
