

Poster

A Sublinear Adversarial Training Algorithm

Yeqi Gao · Lianke Qin · Zhao Song · Yitan Wang

Halle B

Abstract: Adversarial training is a widely used strategy for making neural networks resistant to adversarial perturbations. For a neural network of width m with n training inputs in d dimensions, the forward and backward computation costs Ω(mnd) time per training iteration. In this paper we analyze the convergence guarantee of the adversarial training procedure on a two-layer neural network with shifted ReLU activation, and show that only o(m) neurons are activated for each input data point per iteration. Furthermore, we develop an algorithm for adversarial training with time cost o(mnd) per iteration by applying a half-space reporting data structure.
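The sparsity claim can be illustrated with a small numerical sketch (not the authors' implementation): with Gaussian first-layer weights and a shifted ReLU σ(x) = max(x − b, 0), each pre-activation is approximately standard normal for a unit-norm input, so only the small tail above the shift b fires. The width m, dimension d, and shift b below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, b = 4096, 64, 2.0            # width, input dimension, ReLU shift (illustrative)

W = rng.standard_normal((m, d))    # first-layer weights, entries ~ N(0, 1)
x = rng.standard_normal(d)
x /= np.linalg.norm(x)             # unit-norm input => pre-activations ~ N(0, 1)

pre = W @ x
active = pre > b                   # shifted ReLU fires only above the shift b
h = np.maximum(pre - b, 0.0)       # shifted ReLU activations

frac = active.mean()               # fraction of activated neurons, roughly P(N(0,1) > b)
print(f"{active.sum()} of {m} neurons active ({frac:.1%})")
```

For b = 2 the expected active fraction is about Φ(−2) ≈ 2.3%, so per-example forward/backward work touches only a small subset of the m neurons; the paper's half-space reporting data structure is what locates that subset without scanning all m rows.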
