VAC Colloquium: Daniel Wulff "Advancing ultrasound guidance: Real-time target tracking in 4D ultrasound using deep representation learning"

Daniel Wulff, a new staff member at Jun.-Prof. Lüdtke's chair of Marine Data Science, will give a presentation on

"Advancing ultrasound guidance: Real-time target tracking in 4D ultrasound using deep representation learning"

as part of the VAC Colloquium.

Afterwards, we look forward to discussions over coffee and cookies.

The event is open to anyone interested.

Abstract:

The goal of radiation therapy is to destroy tumor tissue with ionizing radiation, but patient movements, such as breathing, complicate precise targeting and risk damage to healthy tissue. Safety margins are used to account for this, but they can be minimized by continuously monitoring the tumor's position. While X-ray imaging and implanted markers are commonly used, 3D ultrasound offers real-time, non-ionizing visualization of soft tissue, making it a promising alternative for therapy guidance. However, challenges such as a limited field of view and limited image quality need to be addressed. Robotic ultrasound systems, which automatically align the transducer, can help by continuously visualizing the target; this requires robust, real-time tracking for optimal treatment delivery.

Due to low image quality, volumetric image characteristics, and high-dimensional soft-tissue motion, 4D ultrasound tracking is a challenging task. Only a few methods have been investigated so far, revealing a gap in 4D ultrasound tracking research. This work contributes to closing this gap by investigating the suitability of representation learning with deep neural networks for 4D ultrasound tracking. Training deep neural networks requires a substantial amount of data, but to date only limited 4D ultrasound data are publicly accessible. This data base was extended in this work through a 4D ultrasound labeling study, and a novel 4D ultrasound data set containing image and landmark data has been made available. It has been shown that local image features in 3D ultrasound images can be detected and described in a unique and meaningful way using binary feature descriptors. In addition, it has been shown that autoencoders can map 3D ultrasound patches into latent representations that allow similar soft-tissue structures to be identified and dissimilar ones to be distinguished. To this end, different types of autoencoders were developed and investigated.
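To illustrate the general idea of comparing tissue patches in a learned latent space, the following is a minimal sketch of a 3D convolutional autoencoder in PyTorch. It is not the speaker's implementation; the patch size (32^3), channel counts, and latent dimension are illustrative assumptions.

# Minimal sketch (assumed architecture, not the method from the talk):
# a 3D convolutional autoencoder mapping ultrasound patches to latent vectors.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        # Encoder: single-channel 32^3 patch -> compact latent vector
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),  # 8 -> 4
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 4 * 4 * 4, latent_dim),
        )
        # Decoder: latent vector -> reconstructed patch (used only for training)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 4 * 4 * 4),
            nn.Unflatten(1, (64, 4, 4, 4)),
            nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1),  # 4 -> 8
            nn.ReLU(),
            nn.ConvTranspose3d(32, 16, kernel_size=4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose3d(16, 1, kernel_size=4, stride=2, padding=1),   # 16 -> 32
            nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Patches containing similar soft-tissue structure should end up close
# together in latent space, e.g. under cosine similarity.
model = PatchAutoencoder()
patch_a = torch.rand(1, 1, 32, 32, 32)  # dummy 32^3 ultrasound patch
patch_b = torch.rand(1, 1, 32, 32, 32)
_, z_a = model(patch_a)
_, z_b = model(patch_b)
print(F.cosine_similarity(z_a, z_b).item())

In such a setup the network would be trained in an unsupervised manner, for example with a reconstruction loss, so that no manual annotations are needed to obtain the latent representations.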

Different target tracking algorithms, operating both in ultrasound image space and in the representation space created by the autoencoders, were implemented and evaluated. It has been shown that 4D ultrasound tracking in representation space can outperform image space-based tracking in terms of runtime while maintaining comparable accuracy. The target tracking methodology proposed in this work is based on unsupervised learning, is real-time capable and robust, and generalizes across patients and organs, making it promising for ultrasound-guided therapy. The applicability of representation space-based tracking was demonstrated in an online robotic ultrasound tracking experiment. Hence, this work proposes a novel method for 4D ultrasound tracking that could be integrated into any therapy domain.
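As a rough illustration of what tracking in representation space can look like, the sketch below encodes a template patch of the target once and then, for each incoming 3D volume, compares latent codes of candidate patches around the previous position. The grid search, search radius, and the stand-in encoder are assumptions for illustration only, not the algorithm presented in the talk.

# Minimal sketch (assumed search strategy): template matching in latent space.
import torch
import torch.nn.functional as F

def extract_patch(volume, center, size=32):
    # Crop a size^3 patch around a (z, y, x) center and add batch/channel dims.
    h = size // 2
    z, y, x = center
    return volume[z - h:z + h, y - h:y + h, x - h:x + h][None, None]

@torch.no_grad()
def track_step(encoder, volume, prev_center, template_z, radius=8, step=4):
    # Return the candidate center whose latent code best matches the template.
    best_center, best_sim = prev_center, -1.0
    for dz in range(-radius, radius + 1, step):
        for dy in range(-radius, radius + 1, step):
            for dx in range(-radius, radius + 1, step):
                c = (prev_center[0] + dz, prev_center[1] + dy, prev_center[2] + dx)
                z_cand = encoder(extract_patch(volume, c))
                sim = F.cosine_similarity(z_cand, template_z).item()
                if sim > best_sim:
                    best_sim, best_center = sim, c
    return best_center

# Stand-in encoder; in practice this would be the trained autoencoder's encoder.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(32 ** 3, 128)).eval()

# Dummy 4D sequence: encode the target once, then follow it frame by frame.
volumes = [torch.rand(96, 96, 96) for _ in range(5)]
center = (48, 48, 48)
template_z = encoder(extract_patch(volumes[0], center))
for vol in volumes[1:]:
    center = track_step(encoder, vol, center, template_z)
    print(center)

The practical appeal of this kind of scheme is that each candidate comparison reduces to a small vector operation in latent space rather than a full image-space similarity computation, which is one plausible source of the runtime advantage reported above.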
