Thesis defences

PhD Oral Exam - Mostafa Sharifzadeh, Electrical and Computer Engineering

Simplifying Interpretation of Ultrasound Imaging: Deep Learning Approaches for Phase Aberration Correction and Automatic Segmentation


Date & time
Friday, November 29, 2024
10:15 a.m. – 1:15 p.m.
Cost

This event is free

Organization

School of Graduate Studies

Contact

Dolly Grewal

Wheelchair accessible

Yes

When studying for a doctoral degree (PhD), candidates submit a thesis that provides a critical review of the current state of knowledge of the thesis subject as well as the student’s own contributions to the subject. The distinguishing criterion of doctoral graduate research is a significant and original contribution to knowledge.

Once accepted, the candidate presents the thesis orally. This oral exam is open to the public.

Abstract

Medical ultrasound imaging is a widely used diagnostic tool in clinical practice, offering several advantages, including high temporal resolution, non-invasiveness, cost-effectiveness, and portability. Despite these benefits, ultrasound often suffers from lower image quality than other modalities, such as magnetic resonance imaging, which complicates image interpretation and poses diagnostic challenges, even for experienced clinicians. Given its unique advantages, simplifying the interpretation of ultrasound images can profoundly impact the accessibility and affordability of healthcare. This thesis aims to enhance the interpretability of ultrasound images using deep learning (DL)-based approaches on two parallel fronts.

The first front focuses on improving image quality by addressing the phase aberration effect, a primary contributor to the degradation of medical ultrasound images. Phase aberration arises from spatial variations in sound speed within heterogeneous media, introducing artifacts such as blurring and geometric distortions. This effect hinders the accurate representation of tissue structures and complicates clinical interpretation. To tackle this, we propose two novel methods. The first involves training a convolutional neural network (CNN) to estimate the aberration profile from the B-mode image and employing it to compensate for the aberration effects. The second introduces an aberration-to-aberration approach combined with an innovative loss function to train a CNN that directly predicts corrected radio frequency data without requiring ground truth.
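To give a sense of the correction step described above, the sketch below uses a simple near-field phase-screen model, in which aberration is approximated as a per-channel time delay on the radio frequency (RF) data; correcting then amounts to applying the negated estimated delays. This is an illustrative assumption for intuition only, not the thesis's actual CNN-based method, and `apply_phase_screen` and its integer-sample delays are hypothetical simplifications.

```python
import numpy as np

def apply_phase_screen(rf, delays_samples):
    """Shift each channel's RF signal by a per-channel delay (in samples).

    Models aberration under a simple near-field phase-screen assumption:
    `rf` has shape (channels, samples). Passing the negated delays
    undoes the shift, i.e., performs the correction.
    """
    out = np.empty_like(rf)
    for c, d in enumerate(delays_samples):
        # circular shift stands in for a fractional time delay here
        out[c] = np.roll(rf[c], d)
    return out

# Example: aberrate with an assumed delay profile, then correct it
rng = np.random.default_rng(0)
rf = rng.standard_normal((4, 64))            # 4 channels, 64 samples
delays = np.array([1, -2, 3, 0])             # hypothetical aberration profile
aberrated = apply_phase_screen(rf, delays)
corrected = apply_phase_screen(aberrated, -delays)
```

In the thesis's first method, the delay profile is not known in advance; the CNN's role is to estimate it from the B-mode image so a correction of this kind can be applied.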

The second front focuses on the automatic segmentation of ultrasound images and explores the challenges associated with employing DL-based approaches. Manual segmentation, typically performed by expert clinicians, is time-consuming and prone to human error, and automating this process can simplify the interpretation of ultrasound images. While DL methods have demonstrated considerable potential, ultrasound image segmentation poses unique challenges due to artifacts such as shadowing, reverberation, refraction, phase aberration, and speckle noise. The scarcity of medical data further complicates these challenges, limiting the generalizability and robustness of models in clinical settings. To address these limitations, we investigate the shift-variance problem in CNNs and propose pyramidal blur-pooling layers to mitigate this issue. Furthermore, we tackle domain shift and data scarcity by employing a domain adaptation method and introducing an ultra-fast ultrasound image simulation technique based on frequency domain analysis.
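The shift-variance problem mentioned above arises because strided downsampling in CNNs aliases high frequencies, so a one-pixel input shift can change the output noticeably. Blur-pooling mitigates this by low-pass filtering before subsampling. The sketch below shows a single anti-aliased pooling step on a 2-D feature map; the 3x3 binomial kernel and stride are illustrative choices, not the pyramidal configuration proposed in the thesis.

```python
import numpy as np

def blur_pool(x, stride=2):
    """Anti-aliased downsampling: low-pass blur, then subsample.

    Replacing plain strided pooling with blur-then-subsample reduces
    aliasing, so small spatial shifts of the input perturb the output
    less. `x` is a 2-D feature map.
    """
    # 3x3 binomial kernel (separable [1, 2, 1] / 4), sums to 1
    k1 = np.array([1.0, 2.0, 1.0])
    kernel = np.outer(k1, k1) / 16.0
    # reflect-pad by 1 so every output pixel sees a full 3x3 window
    xp = np.pad(x, 1, mode="reflect")
    h, w = x.shape
    blurred = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            blurred[i, j] = np.sum(xp[i:i + 3, j:j + 3] * kernel)
    # subsample after blurring
    return blurred[::stride, ::stride]

# Example: an 8x8 feature map downsamples to 4x4
feat = np.ones((8, 8))
pooled = blur_pool(feat)
```

A real layer would apply this per channel inside the network; the point of the sketch is only the order of operations (blur first, stride second).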

