December, 2017

In-Place Activated BatchNorm for Memory-Optimized Training of DNNs

Technical report
By: Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder

Abstract

In this work we present In-Place Activated Batch Normalization (InPlace-ABN), a novel approach to drastically reduce the training memory footprint of modern deep neural networks in a computationally efficient way. Our solution substitutes the conventionally used succession of BatchNorm + Activation layers with a single plugin layer, hence avoiding invasive framework surgery while providing straightforward applicability for existing deep learning frameworks. We obtain memory savings of up to 50% by dropping intermediate results and recovering the required information during the backward pass through the inversion of stored forward results, with only a minor increase (0.8-2%) in computation time. We also demonstrate how frequently used checkpointing approaches can be made computationally as efficient as InPlace-ABN. In our experiments on image classification, we obtain results on ImageNet-1k that are on par with state-of-the-art approaches. On the memory-demanding task of semantic segmentation, we report results for COCO-Stuff, Cityscapes and Mapillary Vistas, obtaining new state-of-the-art results on the latter without additional training data, in a single-scale, single-model scenario. Code is provided here.
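To make the mechanism described in the abstract concrete, the following PyTorch sketch is a simplified illustration of the idea, not the authors' released implementation: the class name InPlaceABNSketch and all parameter choices are ours. It fuses batch normalization and a leaky ReLU into a single autograd function that saves only the activation output z plus the small per-channel statistics; during the backward pass, the BN output y and the normalized input x_hat are recovered by inverting the activation and the affine transform, so the inputs to BN and to the activation never have to be stored. The sketch assumes a positive leaky-ReLU slope and non-zero gamma so that both inversions are well-defined, and it omits running statistics, the actual in-place buffer reuse, and the synchronized multi-GPU details of the released code.

import torch


class InPlaceABNSketch(torch.autograd.Function):
    """Fused BatchNorm + leaky ReLU that saves only the activation output z
    (plus small per-channel statistics) for the backward pass. Hypothetical
    simplification for illustration; not the authors' released implementation."""

    @staticmethod
    def forward(ctx, x, gamma, beta, eps=1e-5, slope=0.01):
        # Per-channel statistics over batch and spatial dimensions (NCHW layout).
        dims = [0, 2, 3]
        mean = x.mean(dim=dims, keepdim=True)
        var = x.var(dim=dims, unbiased=False, keepdim=True)
        inv_std = (var + eps).rsqrt()

        x_hat = (x - mean) * inv_std
        y = gamma.view(1, -1, 1, 1) * x_hat + beta.view(1, -1, 1, 1)
        z = torch.where(y >= 0, y, y * slope)  # leaky ReLU

        # Only z and the tiny per-channel tensors are kept; x, x_hat and y are dropped.
        ctx.save_for_backward(z, gamma, beta, inv_std)
        ctx.slope = slope
        return z

    @staticmethod
    def backward(ctx, grad_z):
        z, gamma, beta, inv_std = ctx.saved_tensors
        slope = ctx.slope
        g = gamma.view(1, -1, 1, 1)
        b = beta.view(1, -1, 1, 1)

        # Recover y by inverting the leaky ReLU, then x_hat by inverting the
        # affine transform (assumes slope > 0 and gamma != 0, so both are invertible).
        y = torch.where(z >= 0, z, z / slope)
        x_hat = (y - b) / g

        # Backward through the activation: sign(z) == sign(y) when slope > 0.
        grad_y = torch.where(z >= 0, grad_z, grad_z * slope)

        dims = [0, 2, 3]
        grad_gamma = (grad_y * x_hat).sum(dim=dims)
        grad_beta = grad_y.sum(dim=dims)

        # Standard BatchNorm backward, written purely in terms of x_hat.
        n = x_hat.numel() / x_hat.size(1)
        grad_x_hat = grad_y * g
        grad_x = (inv_std / n) * (
            n * grad_x_hat
            - grad_x_hat.sum(dim=dims, keepdim=True)
            - x_hat * (grad_x_hat * x_hat).sum(dim=dims, keepdim=True)
        )
        return grad_x, grad_gamma, grad_beta, None, None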

Figure: Example of a residual block with identity mapping. Left: implementation with standard BN and in-place activation layers, which requires storing 6 buffers for the backward pass. Right: implementation with our proposed InPlace-ABN layer, which requires storing only 3 buffers. Our solution avoids storing the buffers that are typically kept for the backward pass through BN and exhibits a lower computational overhead compared to state-of-the-art memory-reduction methods.
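To connect the figure to code, the sketch below (again with hypothetical names, building on the InPlaceABNSketch function shown after the abstract) wraps the fused function in an nn.Module and assembles an identity-mapping residual block in which every BN + activation pair becomes a single fused layer.

import torch
import torch.nn as nn

# Assumes InPlaceABNSketch from the earlier sketch is in scope.


class InPlaceABN(nn.Module):
    """Module wrapper around the fused BN + activation function (illustrative only)."""

    def __init__(self, channels, eps=1e-5, slope=0.01):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(channels))
        self.beta = nn.Parameter(torch.zeros(channels))
        self.eps, self.slope = eps, slope

    def forward(self, x):
        return InPlaceABNSketch.apply(x, self.gamma, self.beta, self.eps, self.slope)


class IdentityResidualBlock(nn.Module):
    """Pre-activation residual block with identity mapping, where each
    BN + activation pair is replaced by one fused layer."""

    def __init__(self, channels):
        super().__init__()
        self.abn1 = InPlaceABN(channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.abn2 = InPlaceABN(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, x):
        out = self.conv1(self.abn1(x))
        out = self.conv2(self.abn2(out))
        return x + out  # identity shortcut


# Example usage:
# block = IdentityResidualBlock(64)
# out = block(torch.randn(2, 64, 32, 32))

In this arrangement the output of each fused layer doubles as the saved input of the following convolution, so the two no longer need separate buffers; this is, roughly, the intuition behind the reduced buffer count in the caption above.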