

Spotlight Poster

Implicit bias of SGD in L2-regularized linear DNNs: One-way jumps from high to low rank

Zihan Wang · Arthur Jacot

Halle B

Abstract: The L2-regularized loss of Deep Linear Networks (DLNs) with more than one hidden layer has multiple local minima, corresponding to matrices with different ranks. In tasks such as matrix completion, the goal is to converge to the local minimum with the smallest rank that still fits the training data. While rank-underestimating minima can be avoided since they do not fit the data, GD might get stuck at rank-overestimating minima. We show that with SGD, there is always a probability to jump from a higher-rank minimum to a lower-rank one, but the probability of jumping back is zero. More precisely, we define a sequence of sets B_1 ⊂ B_2 ⊂ ... ⊂ B_R such that B_r contains all minima of rank r or less (and no more) that are absorbing for small enough ridge parameters λ and learning rates η: SGD has probability 0 of leaving B_r, and from any starting point there is a non-zero probability for SGD to enter B_r.
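The setting of the abstract can be illustrated with a minimal numpy sketch: SGD on an L2-regularized deep linear network (two hidden layers), tracking the effective rank of the end-to-end matrix. This is purely illustrative and not the paper's method; the rank-1 target, dimensions, initialization scale, and hyperparameters (λ, η, batch size) are all assumptions chosen for the toy example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy problem: the target linear map has rank 1, so the
# desired local minimum corresponds to a rank-1 end-to-end matrix.
d, n = 5, 200
W_star = np.outer(rng.normal(size=d), rng.normal(size=d))  # rank-1 target
X = rng.normal(size=(d, n))
Y = W_star @ X

# Deep linear network with two hidden layers: f(x) = W3 @ W2 @ W1 @ x.
Ws = [rng.normal(size=(d, d)) * 0.3 for _ in range(3)]

lam, eta, batch = 1e-3, 1e-3, 16  # ridge parameter λ, learning rate η, minibatch size


def end_to_end(Ws):
    """Product matrix W3 @ W2 @ W1 implemented by the network."""
    M = np.eye(d)
    for W in Ws:
        M = W @ M
    return M


def data_loss(Ws):
    """Full-data mean squared error (without the ridge term)."""
    return float(np.mean((end_to_end(Ws) @ X - Y) ** 2))


def effective_rank(M, tol=1e-2):
    """Number of singular values above a small threshold."""
    return int((np.linalg.svd(M, compute_uv=False) > tol).sum())


init_loss = data_loss(Ws)

for step in range(30000):
    idx = rng.choice(n, size=batch, replace=False)
    Xb, Yb = X[:, idx], Y[:, idx]
    # Forward pass, caching intermediate activations.
    acts = [Xb]
    for W in Ws:
        acts.append(W @ acts[-1])
    # Backward pass for (1/batch)||out - Yb||_F^2 + lam * sum_i ||W_i||_F^2.
    G = 2.0 * (acts[-1] - Yb) / batch
    for i in reversed(range(3)):
        grad_W = G @ acts[i].T + 2.0 * lam * Ws[i]
        G = Ws[i].T @ G  # propagate before updating this layer
        Ws[i] -= eta * grad_W

final_loss = data_loss(Ws)
M = end_to_end(Ws)
print("loss:", init_loss, "->", final_loss)
print("effective rank of end-to-end matrix:", effective_rank(M))
```

In this toy run the minibatch noise of SGD is what can knock the iterate off a rank-overestimating minimum, while the ridge term keeps the small singular values of the product matrix suppressed; whether the final effective rank reaches exactly 1 depends on the assumed λ, η, and threshold.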
