Journal: IEEE Robotics and Automation Letters

USMicroMagSet: Using Deep Learning Analysis to Benchmark the Performance of Microrobots in Ultrasound Images

  • US imaging
  • Tracking
  • Magnetic microrobots

This letter proposes the USMicroMagSet dataset and a deep learning method to detect and track microrobots in ultrasound images.

Q1 IF: 4.321

Abstract

Microscale robots open up great prospects for many medical applications such as drug delivery, minimally invasive surgery, and localized biometric diagnostics. Fully automatic, real-time detection and tracking of microrobots using medical imagers is currently being investigated for future clinical translation. Ultrasound (US) B-mode imaging has been employed to monitor single agents and collective swarms of microrobots in vitro and ex vivo under controlled experimental conditions. However, low contrast and spatial resolution still limit the effective use of this modality in medical microrobotic scenarios because of the uncertainty in the microrobots' position. The positioning error arises from the inaccuracy of the US-based visual feedback provided by the detection and tracking algorithms. Deep learning networks are a promising solution for detecting and tracking microrobots in real time in noisy ultrasonic images. However, a striking performance gap remains across state-of-the-art deep learning research on microrobot detection and tracking, a key factor being the unavailability of large-scale datasets and benchmarks. In this letter, we present the first publicly available B-mode ultrasound dataset for microrobots (USMicroMagSet), with accurate annotations, containing more than 40,000 samples of magnetic microrobots. In addition, four deep learning detectors and four deep learning trackers are evaluated on the proposed benchmark dataset.

Support information

The USMicroMagSet dataset is available on GitLab at https://gitlab.com/insa-cvl2/USMicroMagSet/

Funding

This work was supported by the Région Centre-Val de Loire fund through the BUBBLEBOT project.

Citation

BibTeX citation:
@article{botross2023,
  author = {Botross, Karim and Alkhatib, Mohammad and Folio, David and
    Ferreira, Antoine},
  publisher = {IEEE},
  title = {USMicroMagSet: {Using} {Deep} {Learning} {Analysis} to
    {Benchmark} the {Performance} of {Microrobots} in {Ultrasound}
    {Images}},
  journal = {IEEE Robotics and Automation Letters},
  volume = {8},
  number = {6},
  pages = {3254-3261},
  date = {2023-06},
  url = {https://dfolio.fr/publications/articles/2023botrosRAL.html},
  doi = {10.1109/LRA.2023.3264746},
  langid = {en-US},
  abstract = {Microscale robots introduce great perspectives into many
    medical applications such as drug delivery, minimally invasive
    surgery, and localized biometric diagnostics. Fully automatic
    microrobots’ real-time detection and tracking using medical imagers
    are actually investigated for future clinical translation.
    Ultrasound (US) B-mode imaging has been employed to monitor single
    agents and collective swarms of microrobots in vitro and ex vivo in
    controlled experimental conditions. However, low contrast and
    spatial resolution still limit the effective employment of such a
    method in a medical microrobotic scenario due to uncertainties
    associated with the position of microrobots. The positioning error
    arises due to the inaccuracy of the US-based visual feedback, which
    is provided by the detection and tracking algorithms. The
    application of deep learning networks is a promising solution to
    detect and track real-time microrobots in noisy ultrasonic images.
    However, what is most striking is the performance gap among
    state-of-the-art microrobots deep learning detection and tracking
    research. A key factor of that is the unavailability of large-scale
    datasets and benchmarks. In this paper, we present the first
    publicly available B-mode ultrasound dataset for microrobots
    (USMicroMagSet) with accurate annotations which contains more
    than 40000 samples of magnetic microrobots. In addition, for
    analyzing the performance of microrobots included in the proposed
    benchmark dataset, 4 deep learning detectors and 4 deep learning
    trackers are used.}
}
For attribution, please cite this work as:
Botross K., Alkhatib M., Folio D., and Ferreira A., “USMicroMagSet: Using Deep Learning Analysis to Benchmark the Performance of Microrobots in Ultrasound Images,” IEEE Robot. Autom. Lett., vol. 8, no. 6, pp. 3254–3261, Jun. 2023. [Online]. Available: https://dfolio.fr/publications/articles/2023botrosRAL.html