Many modern machine learning problems can be cast as bilevel optimization problems: few-shot learning, causality, model selection, dataset distillation, adversarial attacks, and more. More generally, differentiating the solutions of optimization problems has become a central question in making optimization layers standard building blocks of deep learning architectures. In this talk, I will address how to differentiate the solutions of sparse optimization problems, with applications to model selection in neuroimaging and few-shot learning in vision. Using implicit differentiation, I will show that it is possible to leverage the non-smoothness of the inner problem to speed up automatic differentiation. Finally, I will present my research agenda, which revolves around efficient automatic differentiation, links between game theory and bilevel optimization, and, more broadly, the connections between games and machine learning.
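To give a flavor of the idea, here is a minimal Python sketch (not the speaker's exact method), assuming a Lasso inner problem and a held-out validation loss as the outer objective; the function lasso_hypergradient and all variable names are hypothetical. Because the Lasso solution is sparse, implicit differentiation of the optimality conditions only involves the active features, so the hypergradient reduces to one small linear system instead of differentiating through the whole solver.

    import numpy as np
    from sklearn.linear_model import Lasso

    def lasso_hypergradient(X_train, y_train, X_val, y_val, lam):
        # Hypothetical sketch: gradient of a validation loss w.r.t. the
        # Lasso regularization parameter lam, via implicit differentiation.
        n = X_train.shape[0]
        # Inner problem: w(lam) = argmin_w 1/(2n) ||X w - y||^2 + lam ||w||_1
        w = Lasso(alpha=lam, fit_intercept=False).fit(X_train, y_train).coef_
        support = w != 0  # non-smoothness => only the support matters
        X_s = X_train[:, support]
        # Differentiating the optimality conditions on the (locally fixed)
        # support gives: (X_s^T X_s / n) dw_s/dlam = -sign(w_s).
        # Assumes a nonempty support and invertible X_s^T X_s.
        dw_s = np.linalg.solve(X_s.T @ X_s / n, -np.sign(w[support]))
        # Chain rule through the outer loss 1/2 ||X_val w - y_val||^2.
        residual = X_val[:, support] @ w[support] - y_val
        return (X_val[:, support].T @ residual) @ dw_s

The key point the sketch illustrates: the linear system has the size of the support, not of the full feature space, which is where the non-smoothness of the inner problem yields the speed-up.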
Bio
Dr. Quentin Bertrand is a postdoctoral researcher at Mila working with Gauthier Gidel and Simon Lacoste-Julien. He works on automatic differentiation, representation learning, and the intersection between machine learning and game theory. Before this position, he completed his Ph.D. at Inria under the supervision of Joseph Salmon and Alexandre Gramfort, where he worked on the optimization and statistical aspects of high-dimensional sparse models applied to brain signal reconstruction. In particular, he developed Python packages for fast computation (https://github.com/scikit-learn-contrib/skglm) and automatic differentiation (https://github.com/QB3/sparse-ho) of sparse models.