In-Place Activated BatchNorm for Memory-Optimized Training of DNNs

Conf. on Computer Vision and Pattern Recognition (CVPR) 2018
By Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder

Abstract

In this work we present In-Place Activated Batch Normalization (InPlace-ABN), a novel approach to drastically reduce the training memory footprint of modern deep neural networks in a computationally efficient way. Our solution substitutes the conventionally used succession of BatchNorm + Activation layers with a single plugin layer, hence avoiding invasive framework surgery while providing straightforward applicability for existing deep learning frameworks. We obtain memory savings of up to 50% by dropping intermediate results and by recovering required information during the backward pass through the inversion of stored forward results, with only a minor increase (0.8-2%) in computation time. We also demonstrate how frequently used checkpointing approaches can be made computationally as efficient as InPlace-ABN. In our experiments on image classification, we show on-par results on ImageNet-1k with state-of-the-art approaches. On the memory-demanding task of semantic segmentation, we report competitive results for COCO-Stuff and set new state-of-the-art results for Cityscapes and Mapillary Vistas.

Figure: Example of a residual block with identity mapping. Left: Implementation with standard BN and in-place activation layers, which requires storing 6 buffers for the backward pass. Right: Implementation with our proposed InPlace-ABN layer, which requires storing only 3 buffers. Our solution avoids storing the buffers that are typically kept for the backward pass through BN and exhibits a lower computational overhead compared to state-of-the-art memory-reduction methods.
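The core inversion idea from the abstract, recovering the information needed for the backward pass from the stored forward output rather than keeping intermediate buffers, can be sketched as follows. This is a minimal NumPy illustration with hypothetical function names, not the paper's CUDA implementation: it exploits the fact that a leaky ReLU is strictly monotonic (hence invertible) and that the BN affine transform can be undone, so storing only the final output y suffices to recover the normalized input x_hat.

```python
# Minimal sketch of the InPlace-ABN inversion idea (illustrative names).
import numpy as np

def leaky_relu(x, slope=0.01):
    return np.where(x >= 0, x, slope * x)

def inv_leaky_relu(y, slope=0.01):
    # Leaky ReLU is strictly monotonic, so it can be inverted exactly.
    return np.where(y >= 0, y, y / slope)

def forward(x, gamma, beta, eps=1e-5):
    # BatchNorm (per-batch statistics) followed by leaky ReLU.
    mean, var = x.mean(), x.var()
    x_hat = (x - mean) / np.sqrt(var + eps)
    y = leaky_relu(gamma * x_hat + beta)
    return y  # only this buffer needs to be stored for the backward pass

def recover_x_hat(y, gamma, beta):
    # Backward pass: invert the activation, then the BN affine transform,
    # instead of keeping x_hat in memory.
    return (inv_leaky_relu(y) - beta) / gamma

x = np.array([-1.0, 0.5, 2.0, -0.3])
gamma, beta = 1.5, 0.2
y = forward(x, gamma, beta)
x_hat_recovered = recover_x_hat(y, gamma, beta)
```

The recovered x_hat matches the normalized input computed in the forward pass, which is why the intermediate BN buffers can be dropped; the trade-off is the small extra cost of the inversion, consistent with the reported 0.8-2% overhead.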

More publications

Seamless Scene Segmentation

By Lorenzo Porzi, Samuel Rota Bulò, Aleksander Colovic, Peter Kontschieder
Conf. on Computer Vision and Pattern Recognition (CVPR) 2019

AdaGraph: Unifying Predictive and Continuous Domain Adaptation through Graphs

By Massimiliano Mancini, Samuel Rota Bulò, Barbara Caputo, Elisa Ricci
Conf. on Computer Vision and Pattern Recognition (CVPR) 2019

Unsupervised Domain Adaptation using Feature-Whitening and Consensus Loss

By Subhankar Roy, Aliaksandr Siarohin, Enver Sangineto, Samuel Rota Bulò, Nicu Sebe, Elisa Ricci
Conf. on Computer Vision and Pattern Recognition (CVPR) 2019

Deep Single Image Camera Calibration with Radial Distortion

By Manuel López-Antequera, Roger Marí, Pau Gargallo, Yubin Kuang, Javier Gonzalez-Jimenez, Gloria Haro
Conf. on Computer Vision and Pattern Recognition (CVPR) 2019

Boosting Domain Adaptation by Discovering Latent Domains

By Massimiliano Mancini, Lorenzo Porzi, Samuel Rota Bulò, Barbara Caputo, Elisa Ricci
Conf. on Computer Vision and Pattern Recognition (CVPR) 2018

Geometry-Aware Network for Non-Rigid Shape Prediction from a Single View

By Albert Pumarola, Antonio Agudo, Lorenzo Porzi, Alberto Sanfeliu, Vincent Lepetit, Francesc Moreno-Noguer
Conf. on Computer Vision and Pattern Recognition (CVPR) 2018

AutoDIAL: Automatic DomaIn Alignment Layers

By Fabio Maria Carlucci, Lorenzo Porzi, Barbara Caputo, Elisa Ricci, Samuel Rota Bulò
International Conf. on Computer Vision (ICCV) 2017

The Mapillary Vistas Dataset for Semantic Understanding of Street Scenes

By Gerhard Neuhold, Tobias Ollmann, Samuel Rota Bulò, Peter Kontschieder
International Conf. on Computer Vision (ICCV) 2017

Loss Max-Pooling for Semantic Image Segmentation

By Samuel Rota Bulò, Gerhard Neuhold, Peter Kontschieder
Conf. on Computer Vision and Pattern Recognition (CVPR) 2017

Online Learning with Bayesian Classification Trees

By Samuel Rota Bulò, Peter Kontschieder
Conf. on Computer Vision and Pattern Recognition (CVPR) 2016

Dropout Distillation

By Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder
International Conf. on Machine Learning (ICML) 2016