Seeing Through the Human Reporting Bias: Visual Classifiers From Noisy Human-Centric Labels
Ishan Misra, C. Lawrence Zitnick, Margaret Mitchell, Ross Girshick; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2930-2939
Abstract
When human annotators are given a choice about what to label in an image, they apply their own subjective judgments on what to ignore and what to mention. We refer to these noisy "human-centric" annotations as exhibiting human reporting bias. Examples of such annotations include image tags and keywords found on photo sharing sites, or in datasets containing image captions. In this paper, we use these noisy annotations for learning visually correct image classifiers. Such annotations do not use consistent vocabulary, and miss a significant amount of the information present in an image; however, we demonstrate that the noise in these annotations exhibits structure and can be modeled. We propose an algorithm to decouple the human reporting bias from the correct visually grounded labels. Our results are highly interpretable for reporting "what's in the image" versus "what's worth saying." We demonstrate the algorithm's efficacy along a variety of metrics and datasets, including MS COCO and Yahoo Flickr 100M. We show significant improvements over traditional algorithms for both image classification and image captioning, doubling the performance of existing methods in some cases.
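To make the decoupling idea concrete, below is a minimal illustrative sketch (not the authors' released code) of how a classifier could factor the observed human-centric label into a latent visual-presence prediction and a reporting (relevance) prediction, then marginalize over visual presence. The class name DecoupledLabelHead, the linear heads, and the feature dimensions are assumptions made for illustration.

import torch
import torch.nn as nn

class DecoupledLabelHead(nn.Module):
    """Illustrative two-head model (hypothetical, not the paper's exact architecture).
    One head predicts visual presence v; the others predict whether a concept is
    mentioned given its presence or absence. The observed human-centric label h
    is modeled as the mixture:
        P(h=1 | I) = P(h=1 | v=1, I) * P(v=1 | I) + P(h=1 | v=0, I) * P(v=0 | I)
    """
    def __init__(self, feat_dim, num_concepts):
        super().__init__()
        self.visual = nn.Linear(feat_dim, num_concepts)              # P(v=1 | I)
        self.mention_if_present = nn.Linear(feat_dim, num_concepts)  # P(h=1 | v=1, I)
        self.mention_if_absent = nn.Linear(feat_dim, num_concepts)   # P(h=1 | v=0, I)

    def forward(self, feats):
        p_v = torch.sigmoid(self.visual(feats))
        p_h_given_v1 = torch.sigmoid(self.mention_if_present(feats))
        p_h_given_v0 = torch.sigmoid(self.mention_if_absent(feats))
        # Marginalize over the latent visual-presence variable.
        p_h = p_h_given_v1 * p_v + p_h_given_v0 * (1.0 - p_v)
        return p_h, p_v

# Hypothetical usage: feats come from any image encoder; human_labels are the
# noisy human-centric annotations (e.g., tags or caption-derived keywords).
model = DecoupledLabelHead(feat_dim=2048, num_concepts=1000)
feats = torch.randn(8, 2048)
human_labels = torch.randint(0, 2, (8, 1000)).float()
p_h, p_v = model(feats)
loss = nn.functional.binary_cross_entropy(p_h, human_labels)  # supervise only the noisy labels
# At test time, p_v serves as the visually grounded ("what's in the image") classifier,
# while p_h reflects reporting behavior ("what's worth saying").

The key design point is that supervision touches only the mixture p_h, so the visual head is never directly penalized for concepts that are present but unmentioned; whether this sketch matches the paper's exact training objective is an assumption.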
Related Material
[pdf] [bibtex]
@InProceedings{Misra_2016_CVPR,
author = {Misra, Ishan and Lawrence Zitnick, C. and Mitchell, Margaret and Girshick, Ross},
title = {Seeing Through the Human Reporting Bias: Visual Classifiers From Noisy Human-Centric Labels},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2016}
}