In-Person Poster presentation / top 25% paper
Progress measures for grokking via mechanistic interpretability
Neel Nanda · Lawrence Chan · Tom Lieberum · Jess Smith · Jacob Steinhardt
MH1-2-3-4 #136
Keywords: [ Mechanistic Interpretability ] [ progress measures ] [ grokking ] [ circuits ] [ interpretability ] [ Social Aspects of Machine Learning ]
Neural networks often exhibit emergent behavior, in which qualitatively new capabilities arise from scaling up the number of parameters, training data, or even the number of training steps. One approach to understanding emergence is to find the continuous progress measures that underlie the seemingly discontinuous qualitative changes. In this work, we argue that progress measures can be found via mechanistic interpretability: that is, by reverse engineering learned models into components and measuring the progress of each component over the course of training. As a case study, we study small transformers trained on a modular arithmetic task with emergent grokking behavior. We fully reverse engineer the algorithm learned by these networks, which uses discrete Fourier transforms and trigonometric identities to convert addition to rotation about a circle. After confirming the algorithm via ablation, we then use our understanding of the algorithm to define progress measures that precede the grokking phase transition on this task. We see our result as demonstrating both that it is possible to fully reverse engineer trained networks, and that doing so can be invaluable to understanding their training dynamics.
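To make the "addition as rotation about a circle" idea concrete, here is a minimal numerical sketch of the Fourier-and-trig-identity algorithm the abstract describes. It is not the authors' code: the modulus p = 113 matches the paper's setup, but the frequency set `freqs` is an arbitrary illustrative choice rather than the sparse frequencies a trained network actually learns.

```python
import numpy as np

p = 113          # modulus from the paper's modular addition setup
freqs = [1, 2, 3]  # illustrative frequencies, not the learned ones

def mod_add_via_fourier(a: int, b: int) -> int:
    """Recover (a + b) mod p by composing rotations on a circle.

    For each frequency k, map inputs to angles w*a and w*b with
    w = 2*pi*k/p, combine them via the trig identities
        cos(w(a+b)) = cos(wa)cos(wb) - sin(wa)sin(wb)
        sin(w(a+b)) = sin(wa)cos(wb) + cos(wa)sin(wb),
    then score each candidate output c by cos(w(a+b-c)), which
    peaks exactly when c = (a + b) mod p.
    """
    c = np.arange(p)
    logits = np.zeros(p)
    for k in freqs:
        w = 2 * np.pi * k / p
        cos_ab = np.cos(w * a) * np.cos(w * b) - np.sin(w * a) * np.sin(w * b)
        sin_ab = np.sin(w * a) * np.cos(w * b) + np.cos(w * a) * np.sin(w * b)
        # cos(w(a+b-c)), expanded with the angle-difference identity
        logits += cos_ab * np.cos(w * c) + sin_ab * np.sin(w * c)
    return int(np.argmax(logits))

assert mod_add_via_fourier(57, 97) == (57 + 97) % p  # 41
```

Because every frequency's score term is maximized at the same residue, summing over even a few frequencies produces a sharp peak at the correct answer, which is why ablating the other Fourier components leaves the network's behavior largely intact.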