Dropout Distillation

International Conf. on Machine Learning (ICML) 2016
By Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder

Abstract

Dropout is a popular stochastic regularization technique for deep neural networks that works by randomly dropping (i.e. zeroing) units from the network during training. This randomization process implicitly trains an ensemble of exponentially many networks sharing the same parametrization, which should be averaged at test time to deliver the final prediction. A typical workaround for this intractable averaging operation consists in scaling the layers undergoing dropout randomization. This simple rule, called standard dropout, is efficient, but might degrade the accuracy of the prediction. In this work we introduce a novel approach, coined dropout distillation, that allows us to train a predictor that better approximates the intractable, but preferable, averaging process, while keeping its computational cost under control. We are thus able to construct models that are as efficient as standard dropout, or even more efficient, while being more accurate. Experiments on standard benchmark datasets demonstrate the validity of our method, yielding consistent improvements over conventional dropout.

Illustration of how Dropout Distillation absorbs information from a deep network previously trained with conventional dropout.
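The gap the abstract refers to can be seen in a small numerical experiment. The following is a minimal sketch on a hypothetical two-layer toy network (not the paper's code): it contrasts the intractable Monte Carlo average over dropout masks with the standard dropout rule of scaling the dropped layer by its keep probability. All names and sizes here are illustrative assumptions.

```python
# Toy illustration (hypothetical network, not the paper's implementation):
# compare Monte Carlo averaging over dropout masks with standard dropout.
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 3))   # hidden -> output weights
p_drop = 0.5                   # probability of dropping each hidden unit

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def hidden(x):
    return np.maximum(0.0, x @ W1)  # ReLU hidden activations

def mc_dropout_average(x, n_samples=20000):
    """Ensemble prediction: average softmax outputs over random dropout masks."""
    h = hidden(x)
    preds = np.zeros(3)
    for _ in range(n_samples):
        mask = rng.random(h.shape) >= p_drop  # drop each unit with prob p_drop
        preds += softmax((h * mask) @ W2)
    return preds / n_samples

def standard_dropout(x):
    """Standard dropout rule: scale the dropped layer by the keep probability."""
    return softmax((hidden(x) * (1.0 - p_drop)) @ W2)

x = rng.normal(size=(4,))
mc = mc_dropout_average(x)
std = standard_dropout(x)
# Because softmax is nonlinear, the scaled forward pass only approximates the
# ensemble average; this residual gap is what dropout distillation targets.
gap = np.abs(mc - std).max()
```

Since the averaging and the softmax do not commute, `mc` and `std` generally differ; dropout distillation trains a predictor to mimic the Monte Carlo average while retaining the single-pass cost of standard dropout.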

More publications

Disentangling Monocular 3D Object Detection

By Andrea Simonelli, Samuel Rota Bulò, Lorenzo Porzi, Manuel López-Antequera, Peter Kontschieder
International Conf. on Computer Vision (ICCV) 2019

Seamless Scene Segmentation

By Lorenzo Porzi, Samuel Rota Bulò, Aleksander Colovic, Peter Kontschieder
Conf. on Computer Vision and Pattern Recognition (CVPR) 2019

AdaGraph: Unifying Predictive and Continuous Domain Adaptation through Graphs

By Massimiliano Mancini, Samuel Rota Bulò, Barbara Caputo, Elisa Ricci
Conf. on Computer Vision and Pattern Recognition (CVPR) 2019

Unsupervised Domain Adaptation using Feature-Whitening and Consensus Loss

By Subhankar Roy, Aliaksandr Siarohin, Enver Sangineto, Samuel Rota Bulò, Nicu Sebe, Elisa Ricci
Conf. on Computer Vision and Pattern Recognition (CVPR) 2019

Deep Single Image Camera Calibration with Radial Distortion

By Manuel López-Antequera, Roger Marı́, Pau Gargallo, Yubin Kuang, Javier Gonzalez-Jimenez, Gloria Haro
Conf. on Computer Vision and Pattern Recognition (CVPR) 2019

In-Place Activated BatchNorm for Memory-Optimized Training of DNNs

By Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder
Conf. on Computer Vision and Pattern Recognition (CVPR) 2018

Boosting Domain Adaptation by Discovering Latent Domains

By Massimiliano Mancini, Lorenzo Porzi, Samuel Rota Bulò, Barbara Caputo, Elisa Ricci
Conf. on Computer Vision and Pattern Recognition (CVPR) 2018

Geometry-Aware Network for Non-Rigid Shape Prediction from a Single View

By Albert Pumarola, Antonio Agudo, Lorenzo Porzi, Alberto Sanfeliu, Vincent Lepetit, Francesc Moreno-Noguer
Conf. on Computer Vision and Pattern Recognition (CVPR) 2018

The Mapillary Vistas Dataset for Semantic Understanding of Street Scenes

By Gerhard Neuhold, Tobias Ollmann, Samuel Rota Bulò, Peter Kontschieder
International Conf. on Computer Vision (ICCV) 2017

AutoDIAL: Automatic DomaIn Alignment Layers

By Fabio Maria Carlucci, Lorenzo Porzi, Barbara Caputo, Elisa Ricci, Samuel Rota Bulò
International Conf. on Computer Vision (ICCV) 2017

Loss Max-Pooling for Semantic Image Segmentation

By Samuel Rota Bulò, Gerhard Neuhold, Peter Kontschieder
Conf. on Computer Vision and Pattern Recognition (CVPR) 2017

Online Learning with Bayesian Classification Trees

By Samuel Rota Bulò, Peter Kontschieder
Conf. on Computer Vision and Pattern Recognition (CVPR) 2016