Thesis defences

Enhancing Visual Interpretability in Computer-Assisted Radiological Diagnosis: Deep Learning Approaches for Chest X-Ray Analysis


Date & time
Tuesday, July 30, 2024 –
Wednesday, July 31, 2024
10 a.m. – 12 p.m.
Speaker(s)

Zirui Qiu

Cost

This event is free

Organization

Department of Computer Science and Software Engineering

Where

ER Building
2155 Guy St.
Room Zoom

Wheelchair accessible

Yes

Abstract

This thesis examines interpretability in medical image processing, focusing on deep learning's role in making automated chest X-ray diagnostics more transparent and understandable. As deep learning models become increasingly integral to medical diagnostics, the need for these models to be interpretable has never been more pressing. The work is anchored in two main studies that address the challenge of interpretability from distinct yet complementary perspectives. The first study evaluates the effectiveness of Gradient-weighted Class Activation Mapping (Grad-CAM) across various deep learning architectures, specifically assessing its reliability for pneumothorax diagnosis in chest X-ray images. Through a systematic analysis, this research shows how different neural network architectures and depths influence the robustness and clarity of Grad-CAM visual explanations, providing guidance for selecting and designing interpretable deep learning models in medical imaging. Building on this foundation, the second study introduces a novel deep learning framework that couples disease diagnosis with the prediction of visual saliency maps in chest X-rays. This dual-encoder, multi-task UNet architecture, trained with a multi-stage cooperative learning strategy, aligns the model's attention with that of clinicians, improving diagnostic accuracy while providing intuitive visual explanations that resonate with clinical expertise. Together, these studies contribute innovative approaches to improving the interpretability of deep learning models in medical image processing. The findings underscore the potential of interpretability-enhanced models to foster trust among medical practitioners, support better clinical decision-making, and pave the way for the broader acceptance and integration of AI in healthcare diagnostics. The thesis concludes by synthesizing the insights gained from both projects and outlining directions for future research to further advance the interpretability and utility of AI in medical imaging.
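For readers unfamiliar with the Grad-CAM explanations evaluated in the first study, the sketch below shows how a standard Grad-CAM heatmap is typically computed for a binary chest X-ray classifier. It is a minimal illustration only, not the thesis code: the ResNet-18 backbone, the single-logit pneumothorax head, and all variable names are assumptions made for this example.

```python
# Minimal Grad-CAM sketch (illustrative, not the thesis implementation).
# Assumes PyTorch + torchvision, a ResNet-18 backbone, and a hypothetical
# single-logit "pneumothorax" output head.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 1)  # hypothetical 1-class head
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["feat"] = output.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["feat"] = grad_output[0].detach()

# Hook the last convolutional block, the usual target layer for Grad-CAM.
model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)   # stand-in for a preprocessed chest X-ray
score = model(x)[0, 0]            # logit for the positive (pneumothorax) class
model.zero_grad()
score.backward()

# Grad-CAM: weight each feature map by its average gradient, sum, then ReLU.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)        # (1, C, 1, 1)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)          # scale to [0, 1]
print(cam.shape)  # (1, 1, 224, 224) heatmap aligned with the input image
```

The resulting heatmap highlights the image regions that most influenced the positive-class logit; the thesis studies how the quality of such maps varies with the choice and depth of the backbone network.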
