Unbiased Teacher for Semi-Supervised Object Detection



ICLR 2021


Figure: (a) Illustration of semi-supervised object detection, where the model observes a set of labeled data and a set of unlabeled data during training. (b) Our proposed model efficiently leverages the unlabeled data (i.e., with label supervision ranging from 0.5% to 10%) and performs favorably against existing semi-supervised object detection methods.




Motivation

The availability of large-scale datasets and computational resources has allowed deep neural networks to achieve strong performance on a wide variety of tasks. However, training these networks requires a large number of labeled examples, which are expensive to acquire and annotate. As an alternative, Semi-Supervised Learning methods have received growing attention. Yet, these advances have primarily focused on image classification rather than object detection, where bounding-box annotations require considerably more effort.



Challenges

A straightforward way to address Semi-Supervised Object Detection (SS-OD) is to adapt existing semi-supervised image classification methods (e.g., FixMatch). Unfortunately, the class imbalance inherent in object detection tasks impedes the use of pseudo-labeling. In addition, object detectors are far more complicated than image classifiers in terms of model architecture.



Our Method

Figure: Illustration of our proposed Unbiased Teacher.

To overcome the above issues, we propose a general framework – Unbiased Teacher: an approach that jointly trains a Student and a slowly progressing Teacher in a mutually beneficial manner. The Teacher generates pseudo-labels to train the Student, and the Student gradually updates the Teacher via Exponential Moving Average (EMA), while the Teacher and the Student receive differently augmented versions of the input images (see the figure above). For more details about our method, we encourage readers to take a closer look at our paper.
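To make the Teacher–Student interaction concrete, below is a minimal PyTorch-style sketch of the two core operations described above: the EMA update of the Teacher from the Student, and confidence-based filtering of the Teacher's predictions into pseudo-labels. The function names, the keep rate, and the confidence threshold shown here are illustrative placeholders rather than the exact values or APIs from our released code.

    import torch

    @torch.no_grad()
    def ema_update(teacher, student, keep_rate=0.9996):
        # Exponential Moving Average update of the Teacher from the Student:
        #   teacher_param <- keep_rate * teacher_param + (1 - keep_rate) * student_param
        # Assumes the two models share the same architecture and parameter ordering.
        for t_param, s_param in zip(teacher.parameters(), student.parameters()):
            t_param.data.mul_(keep_rate).add_(s_param.data, alpha=1.0 - keep_rate)

    def filter_pseudo_labels(boxes, scores, labels, threshold=0.7):
        # Keep only the Teacher's high-confidence detections as pseudo-labels
        # for training the Student on strongly augmented unlabeled images.
        keep = scores > threshold
        return boxes[keep], scores[keep], labels[keep]

In each training iteration, the Teacher runs on weakly augmented unlabeled images, its filtered predictions supervise the Student on strongly augmented versions of the same images, and the Student's weights are then folded back into the Teacher via ema_update.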



Paper



Unbiased Teacher for Semi-Supervised Object Detection

International Conference on Learning Representations, 2021 (ICLR 2021)
Yen-Cheng Liu, Chih-Yao Ma, Zijian He, Chia-Wen Kuo, Kan Chen, Peizhao Zhang, Bichen Wu, Zsolt Kira, Peter Vajda




Code





Acknowledgements

This project was done partially while Yen-Cheng Liu was interning at Facebook. Yen-Cheng Liu and Zsolt Kira were partly supported by DARPA's Learning with Less Labels (LwLL) program under agreement HR0011-18-S-0044, as part of their affiliation with Georgia Tech.