Probability and Statistics Seminar:

May 3, 2021 at 13:45 - UM - Building 9 - Conference room (1st floor)


Presented by Quentin Bertrand - Inria Paris-Saclay

Implicit differentiation of Lasso-type models for hyperparameter optimization



Setting regularization parameters for Lasso-type estimators is notoriously difficult, though crucial in practice. The most popular hyperparameter optimization approach is grid-search. Grid-search, however, requires choosing a predefined grid for each parameter, and the number of grid points grows exponentially with the number of hyperparameters. Another approach is to cast hyperparameter optimization as a bi-level optimization problem, which can be solved by gradient descent. The key challenge for these methods is estimating the gradient with respect to the hyperparameters for nonsmooth optimization problems. This work introduces an efficient implicit differentiation algorithm tailored to sparse problems. Our approach scales to high-dimensional data by leveraging the sparsity of the solutions. Experiments demonstrate that the proposed method outperforms a large number of standard methods at optimizing the error on held-out data, or the Stein Unbiased Risk Estimator (SURE).
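
To make the bi-level idea concrete, here is a minimal sketch of implicit differentiation for a single Lasso penalty with a held-out validation loss. It is an illustration under stated assumptions, not the authors' implementation (their code is released in the sparse-ho package): the function name lasso_hypergradient and all variable names are hypothetical, and scikit-learn's Lasso solver is used for the inner problem.

import numpy as np
from sklearn.linear_model import Lasso

def lasso_hypergradient(X_tr, y_tr, X_val, y_val, alpha):
    """Gradient of the validation MSE w.r.t. the Lasso penalty alpha (sketch)."""
    n = X_tr.shape[0]
    # Inner problem (scikit-learn's Lasso objective):
    #   beta(alpha) = argmin_b 1/(2n) ||y_tr - X_tr b||^2 + alpha ||b||_1
    beta = Lasso(alpha=alpha, fit_intercept=False).fit(X_tr, y_tr).coef_
    S = beta != 0  # active set: the Jacobian is zero off the support
    Xs, Xvs = X_tr[:, S], X_val[:, S]
    # Implicit differentiation of the optimality condition on the support,
    #   (1/n) Xs^T (Xs beta_S - y_tr) + alpha * sign(beta_S) = 0,
    # w.r.t. alpha (the sign pattern is locally constant) gives
    #   d beta_S / d alpha = -n (Xs^T Xs)^{-1} sign(beta_S),
    # assuming Xs^T Xs is invertible; the system is small thanks to sparsity.
    jac = -n * np.linalg.solve(Xs.T @ Xs, np.sign(beta[S]))
    # Chain rule through the validation loss
    #   C(beta) = 1/(2 n_val) ||y_val - X_val beta||^2.
    resid = Xvs @ beta[S] - y_val
    return (Xvs @ jac) @ resid / len(y_val)

With this hypergradient, the penalty can be tuned by gradient descent on alpha (or on log alpha, for scale invariance) instead of an exponentially large grid; the linear system above involves only the active features, which is what makes the approach scale to high-dimensional data.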

WEBINAR open to all: https://umontpellier-fr.zoom.us/j/85813807839


