Abstract
Artificial Intelligence (AI) is at the forefront of the Fourth Industrial Revolution, fundamentally transforming industries and societies through unprecedented automation and data-driven applications. The Fourth Industrial Revolution is characterized by a fusion of software and hardware improvements that blur the lines between the physical, digital, and biological spheres. These improvements make it possible for AI to leverage and process vast amounts of information to generate actionable insights and to perform complex tasks more quickly and accurately than humans, leading to better-informed decisions and more efficient processes.
Despite its success and promising results in other domains, the adoption and integration of AI innovations in healthcare have been complex and slow. Few AI innovations have succeeded in being incorporated into daily practice. This thesis addresses technological, legal, and ethical issues that must be mitigated before AI-based systems can be fully adopted and trusted in clinical trials and workflows. We identify an opportunity to advance the state of the art of AI solutions and their adoption in healthcare through privacy-preserving aggregation algorithms and human-centered evaluations of transparency in clinical decision support systems.
In particular, this dissertation explores advanced methodologies in Federated Learning (FL) for improving collaborative learning, data privacy, and decision-making across various domains. We improve the core FL aggregation algorithm to better handle learning from distributed, heterogeneous data sources, with a method named Precision-weighted Federated Learning. We perform extensive evaluations with benchmark datasets in resource-constrained environments to measure its limits, and further evaluate it on clinical data, validating its utility for enhancing the quality of assessments.
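To illustrate the idea behind precision-weighted aggregation, the following is a minimal sketch in which each client's update is weighted by its precision (inverse variance), so that noisier clients contribute less to the global model. The function name, interface, and NumPy-based implementation are illustrative assumptions, not the exact algorithm developed in the thesis.

```python
import numpy as np

def precision_weighted_aggregate(client_updates, client_variances, eps=1e-8):
    """Aggregate client model updates by precision (inverse variance).

    client_updates:   list of 1-D arrays, one parameter vector per client
    client_variances: list of 1-D arrays, per-parameter variance estimates
    eps:              small constant to avoid division by zero

    Illustrative sketch only; the thesis method may differ in detail.
    """
    updates = np.stack(client_updates)                 # (n_clients, n_params)
    precisions = 1.0 / (np.stack(client_variances) + eps)
    weights = precisions / precisions.sum(axis=0)      # normalize per parameter
    return (weights * updates).sum(axis=0)             # weighted average
```

With equal variances this reduces to the plain federated average; when one client's estimate is much less noisy, the aggregate is pulled toward that client's update.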
Our research also aims to understand how to visualize AI model outputs to enhance transparency in clinical decision support systems. We conduct extensive evaluations to assess the impact of visualizing AI uncertainty and personal traits on decision-making, promoting the design of AI outputs that are interpretable for clinicians. We first explore these effects in low-risk scenarios, and then examine the representation of AI uncertainty in high-stakes decision-making, particularly in Alzheimer's disease prognosis.
In summary, this dissertation presents significant advancements in FL and clinical decision support systems. We address some of the current limitations and challenges of adopting AI systems, and demonstrate improvements in collaborative learning, data privacy, and human-AI decision-making. These findings offer valuable insights for designing robust, efficient, and trustworthy AI and FL systems. We believe that design will eventually play a more prominent role in the development of AI tools and technologies, becoming the driving force behind moving innovations from the laboratory to the clinic.