

Spotlight Poster

What does automatic differentiation compute for neural networks?

Sejun Park · Sanghyuk Chun · Wonyeol Lee

Halle B
Wed 8 May 1:45 a.m. PDT — 3:45 a.m. PDT
 

Abstract:

Forward- or reverse-mode automatic differentiation (AD) is a popular algorithm for computing the derivative of a function expressed by a program. AD always outputs the correct derivative if a program does not use any non-differentiable functions or control flow; however, it may return an arbitrary value otherwise. In this work, we investigate what AD computes for neural networks that may contain non-differentiable functions such as ReLU and maxpools. We first prove that AD always returns a generalized derivative called a Clarke subderivative for networks with pointwise activation functions, if the minibatch size is one and all non-differentiable neurons have distinct bias parameters. We show that the same conclusion does not hold otherwise, but does hold under some mild sufficient conditions. We also prove similar results for more general networks that can use maxpools and bias parameters shared across different neurons. We empirically check our sufficient conditions over popular network architectures and observe that AD almost always computes a Clarke subderivative in practical learning setups.
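To make the question concrete, the following is a minimal JAX sketch (illustrative only, not code from the paper; the relu and neuron functions here are hypothetical definitions) of the phenomenon the abstract studies: at a kink such as ReLU at 0, AD still returns a single number, and the question is whether that number is a valid Clarke subderivative.

import jax
import jax.numpy as jnp

def relu(x):
    # Pointwise activation with a kink at x = 0 (hypothetical re-definition for illustration).
    return jnp.maximum(x, 0.0)

def neuron(params, x):
    # A single neuron with its own bias parameter, as in the paper's setting.
    w, b = params
    return relu(w * x + b)

# At a differentiable point, AD returns the unique derivative.
print(jax.grad(relu)(1.0))   # 1.0

# At the kink, AD still returns some value; it is a Clarke subderivative
# of ReLU at 0 exactly when it lies in the interval [0, 1].
print(jax.grad(relu)(0.0))

# Gradient w.r.t. (w, b) when the pre-activation w*x + b lands exactly on the kink.
print(jax.grad(neuron)((1.0, -1.0), 1.0))

Since the Clarke subdifferential of ReLU at 0 is the interval [0, 1], any value AD reports in that range is a Clarke subderivative at that point; the paper's results give conditions under which this holds for entire networks rather than a single activation.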
