Doctoral Thesis Defense: Qicheng Lao
Speaker: Qicheng Lao
Supervisor: Dr. T. Fevens
Supervisory Committee:
Drs. M. Amer, Ben Ayed, T. D. Bui, A. Krzyzak, W. Lynch (Chair)
Title: Learning Through Text-Image Pairs and Image Sequences
Date: Monday, October 7, 2019
Time: 9:30 a.m.
Place: EV 3.309
ABSTRACT
Many machine learning systems for artificial intelligence are biologically inspired. For example, artificial neural networks (ANNs) have an architecture similar to that of the human brain, and convolutional neural networks (CNNs) were inspired by observations from early studies of the animal visual cortex. These two examples (ANNs and CNNs) draw inspiration at the level of creating the fundamental tools (e.g., neural networks) of a machine learning system. Inspiration can also come at a second level: from the way humans learn or respond, building on top of an already powerful learning tool, i.e., the brain. In this thesis, we focus on an inspiration of this second level. It is based on the common practice that, for efficient learning or optimal decision-making, humans integrate all available sources of information across multiple views and reason about the underlying connections among them, i.e., multi-view learning. We address several problems in both medical and non-medical domains, including text-to-image synthesis, cell phenotype classification, histopathological malignancy diagnosis, and disease progression learning, from the perspective of multi-view learning, with an emphasis on learning the underlying connections among the multiple distinct feature sets representing the given multi-view data (i.e., image sequences in a unimodal setting and text-image pairs in a multi-modal setting).