Loss Max-Pooling for Semantic Image Segmentation
AutoDIAL: Automatic DomaIn Alignment Layers
The Mapillary Vistas Dataset for Semantic Understanding of Street Scenes
We introduce a novel loss max-pooling concept for handling imbalanced training data distributions, applicable as an alternative loss layer in deep neural networks for semantic image segmentation. Most real-world semantic segmentation datasets exhibit long-tail distributions in which a few object categories comprise the majority of the data, consequently biasing classifiers towards them. Our method adaptively re-weights the contribution of each pixel based on its observed loss, targeting under-performing classification results as often encountered for under-represented object classes. Our approach goes beyond conventional cost-sensitive learning through adaptive considerations that allow us to indirectly address both inter- and intra-class imbalances. We provide a theoretical justification of our approach, complementary to experimental analyses on benchmark datasets. In our experiments on the Cityscapes and Pascal VOC 2012 segmentation datasets we find consistently improved results, demonstrating the efficacy of our approach.
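The adaptive re-weighting described above can be illustrated with a simplified sketch. Note this is not the paper's exact formulation (which derives the weighting from a norm-constrained maximization over pixel weights); the code below shows the limiting case where the weight budget concentrates uniformly on the highest-loss pixels, a hard-example-mining style approximation. The function name `loss_max_pool` and the `keep_fraction` parameter are illustrative, not from the paper.

```python
import numpy as np

def loss_max_pool(per_pixel_loss, keep_fraction=0.25):
    """Simplified loss max-pooling sketch: re-weight per-pixel losses so
    that only the highest-loss pixels contribute to the aggregate loss.

    per_pixel_loss: array of non-negative per-pixel loss values
                    (e.g. from a cross-entropy loss with no reduction).
    keep_fraction:  fraction of pixels that receive non-zero weight.
    """
    flat = np.asarray(per_pixel_loss, dtype=float).ravel()
    k = max(1, int(keep_fraction * flat.size))
    # Indices of the k largest per-pixel losses.
    top_idx = np.argpartition(flat, -k)[-k:]
    # Uniform weight over the selected pixels; zero elsewhere.
    weights = np.zeros_like(flat)
    weights[top_idx] = 1.0 / k
    return float(np.dot(weights, flat))
```

With `keep_fraction=1.0` this reduces to the ordinary mean loss, so the parameter interpolates between standard training and training focused on the worst-classified pixels, which tend to belong to under-represented classes.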
Gain per object category on Cityscapes when using our proposed loss function vs. the conventional log-loss (in %), as a function of the absolute number of training pixels per object category (x-axis, log scale).