In-Person Poster presentation / top 25% paper
NeRN: Learning Neural Representations for Neural Networks
Maor Ashkenazi · Zohar Rimon · Ron Vainshtein · Shir Levi · Elad Richardson · Pinchas Mintz · Eran Treister
MH1-2-3-4 #80
Keywords: [ Convolutional Neural Networks ] [ Implicit Representations ] [ Neural Representations ] [ Deep Learning and Representational Learning ]
Neural representations have recently been shown to effectively reconstruct a wide range of signals, from 3D meshes and shapes to images and videos. We show that, when adapted correctly, neural representations can directly represent the weights of a pre-trained convolutional neural network, resulting in a Neural Representation for Neural Networks (NeRN). Inspired by the coordinate inputs of previous neural representation methods, we assign a coordinate to each convolutional kernel in the network based on its position in the architecture, and optimize a predictor network to map coordinates to their corresponding weights. Analogous to the spatial smoothness of visual scenes, we show that incorporating a smoothness constraint over the original network's weights helps NeRN achieve a better reconstruction. In addition, since slight perturbations in pre-trained model weights can cause a considerable loss of accuracy, we employ techniques from knowledge distillation to stabilize the learning process. We demonstrate the effectiveness of NeRN in reconstructing widely used architectures on CIFAR-10, CIFAR-100, and ImageNet. Finally, we present two applications of NeRN, demonstrating the capabilities of the learned representations.
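To make the coordinate-to-weight idea concrete, here is a minimal PyTorch sketch. It is an illustration of the technique described in the abstract, not the paper's implementation: the `WeightPredictor` class, the raw-index coordinate scheme, and the single-layer toy target are assumptions, and the paper's smoothness constraint and knowledge-distillation losses are omitted, leaving only a plain weight-reconstruction objective.

```python
import torch
import torch.nn as nn

class WeightPredictor(nn.Module):
    """Illustrative NeRN-style predictor: maps a (layer, filter, channel)
    coordinate to the entries of a k x k convolutional kernel."""
    def __init__(self, hidden=256, kernel_size=3):
        super().__init__()
        self.kernel_size = kernel_size
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, kernel_size * kernel_size),
        )

    def forward(self, coords):
        # coords: (N, 3) float tensor of (layer, filter, channel) indices
        return self.mlp(coords).view(-1, self.kernel_size, self.kernel_size)

# Toy demo: fit the predictor to the 3x3 kernels of a single conv layer,
# standing in for the pre-trained network being represented.
conv = nn.Conv2d(4, 8, kernel_size=3, bias=False)
coords = torch.tensor([[0.0, f, c] for f in range(8) for c in range(4)])
targets = conv.weight.detach().reshape(-1, 3, 3)  # one row per kernel

predictor = WeightPredictor()
opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
for step in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(predictor(coords), targets)
    loss.backward()
    opt.step()
```

After optimization, the predictor itself serves as the representation: querying it at every kernel coordinate reconstructs the original network's weights, which is the property the paper's applications build on.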