Probability and Statistics Seminar
Monday 24 November 2025 at 13:45 - UM, Triolet campus, building 9, room 109 (1st floor)
Nelly Pustelnik (CNRS - ENS Lyon)
ReTune: Restarted Truncated unrolled networks to bridge the gap between Plug-and-Play and Unfolded Neural Networks for image restoration
In recent years, deep learning methods have transformed the field of image restoration, achieving significantly higher reconstruction quality than traditional variational approaches. Many current strategies combine principles from both variational models and neural networks, forming what are known as model-based neural networks. Among these, two main frameworks stand out: unfolded neural networks and Plug-and-Play (PnP) methods.
Unfolded networks emulate the iterations of proximal algorithms (also referred to as unrolled algorithms, e.g., unrolled forward–backward) to create task-specific end-to-end architectures, while PnP methods rely on pretrained denoisers to solve reconstruction problems without additional training. However, training unfolded networks by automatic differentiation restricts them to relatively shallow architectures (a small number of unrolled iterations), since backpropagating through every iteration is costly in memory and compute, whereas PnP methods, though theoretically convergent, must be iterated until convergence and often underperform unfolded approaches.
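To make the unfolded side concrete, here is a minimal sketch of an unrolled forward–backward network for a least-squares data-fidelity term. The class name UnrolledFB, the learned step size tau, and the small CNN standing in for the proximal step are illustrative assumptions, not the speaker's architecture.

```python
import torch
import torch.nn as nn

class UnrolledFB(nn.Module):
    """Sketch of an unrolled forward-backward network:
    x <- prox_theta(x - tau * A^T (A x - y)), repeated n_iters times."""

    def __init__(self, n_iters=5, channels=1):
        super().__init__()
        self.n_iters = n_iters
        self.tau = nn.Parameter(torch.tensor(0.5))  # learned step size
        # Small CNN standing in for the learned proximal / denoising step.
        self.prox = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def iterate(self, x, y, A, At):
        # One unrolled pass of depth n_iters, starting from x.
        # A and At are user-supplied forward operator and adjoint callables.
        for _ in range(self.n_iters):
            x = self.prox(x - self.tau * At(A(x) - y))  # gradient step, then learned prox
        return x

    def forward(self, y, A, At):
        return self.iterate(At(y), y, A, At)  # initialize from the adjoint
```

Each loop iteration plays the role of one network layer, which is why automatic differentiation must store activations for every unrolled iteration and hence favors shallow networks.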
This work focuses on a specific training procedure for unfolded neural networks that still uses automatic differentiation while maintaining theoretical convergence guarantees. More precisely, our analysis shows that an unfolded neural network can be restarted to infer solutions of a variational problem with fixed parameters. Furthermore, we introduce the ReTune (Restarted Truncated unrolled networks) procedure to estimate the underlying parameters in a simple fashion, building on theoretical developments inspired by the bilevel optimization literature (Deep Equilibrium models, Jacobian-free backpropagation).
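A hedged sketch of what such a restarted, Jacobian-free training step could look like, reusing the UnrolledFB class above; this is one reading of the abstract, not the authors' implementation. The restarts run without gradient tracking, and only the final pass is differentiated, in the spirit of Jacobian-free backpropagation.

```python
import torch
import torch.nn.functional as F

def retune_step(net, y, A, At, target, n_restarts=10):
    """Illustrative ReTune-style step (hypothetical): restart the truncated
    unrolled network gradient-free, then differentiate one final pass."""
    x = At(y)
    with torch.no_grad():
        for _ in range(n_restarts):        # restarts drive x toward the fixed point
            x = net.iterate(x, y, A, At)
    x = net.iterate(x.detach(), y, A, At)  # single differentiable pass
    loss = F.mse_loss(x, target)
    loss.backward()                        # gradients flow through one pass only
    return loss.item()
```

In a full training loop one would zero the gradients before this step and apply an optimizer update afterwards; note that the memory footprint is that of a single unrolled pass, regardless of the number of restarts.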
A deeper analysis is provided in the context of unrolled forward–backward iterations to highlight the interplay between the network depth and the number of restarts in ReTune, based on the Lipschitz properties of the algorithmic scheme. In particular, the depth controls the approximation error of a Jacobian-free step, whereas restarting the unrolled neural network allows one to reach the equilibrium point.
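To fix ideas, the following is the standard contraction bound that such a Lipschitz analysis typically rests on, written in our own notation (the talk's precise assumptions and constants may differ):

```latex
% Assume each unrolled iteration is an L-Lipschitz map with L < 1 and
% fixed point x*. One pass of depth d is then L^d-Lipschitz, so after
% k restarts the Banach fixed-point theorem yields
\[
  \| x_k - x^\star \| \;\le\; L^{dk}\, \| x_0 - x^\star \|,
\]
% i.e., depth d and the number of restarts k jointly control how close
% the iterates get to the equilibrium point x^\star.
```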
This theoretical analysis is supported by numerical experiments showing that the proposed ReTune procedure goes beyond traditional learning schemes, improves performance compared to PnP strategies, and provides stronger guarantees than standard unfolded approaches.
The seminar takes place in room 109 and is also streamed on Zoom: https://umontpellier-fr.zoom.us/j/7156708132
