Object Detectors Emerge in Deep Scene CNNs

New research shows how object detectors emerge inside a CNN trained only to classify scene categories, allowing a single network to handle both tasks without multiple outputs or separate networks.

A recent paper entitled "Object Detectors Emerge in Deep Scene CNNs" examines how deep convolutional neural networks, trained on large labeled image databases, learn to classify scenes. The authors demonstrate that a network trained only for scene recognition can also perform object localization, without ever being taught the notion of objects.

Read paper here.

Abstract

With the success of new computational architectures for visual processing, such as convolutional neural networks (CNNs), and access to image databases with millions of labeled examples (e.g., ImageNet, Places), the state of the art in computer vision is advancing rapidly. One important factor for continued progress is to understand the representations that are learned by the inner layers of these deep architectures. Here we show that object detectors emerge from training CNNs to perform scene classification. As scenes are composed of objects, the CNN for scene classification automatically discovers meaningful object detectors, representative of the learned scene categories. With object detectors emerging as a result of learning to recognize scenes, our work demonstrates that the same network can perform both scene recognition and object localization in a single forward-pass, without ever having been explicitly taught the notion of objects.
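To make the localization idea concrete, here is a minimal, dependency-free sketch of the general technique: take the activation map of a single inner-layer (e.g., conv/pool) unit from one forward pass, upsample it to image resolution, and threshold it to obtain a coarse object mask. This is an illustrative simplification, not the paper's exact procedure; the `localization_map` function, the toy 8x8 activation, and the threshold quantile are all assumptions for the example.

```python
import numpy as np

def localization_map(activation, image_size, threshold_quantile=0.8):
    """Upsample one conv-unit activation map to image resolution and
    threshold it to get a coarse object-localization mask.

    Nearest-neighbor upsampling keeps the example dependency-free;
    a real pipeline would typically use bilinear interpolation.
    """
    h, w = activation.shape
    H, W = image_size
    # Map each output pixel back to its source cell in the activation map.
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    heatmap = activation[np.ix_(rows, cols)]
    # Keep only the most strongly activated pixels as the "object" region.
    mask = heatmap >= np.quantile(heatmap, threshold_quantile)
    return heatmap, mask

# Toy activation map standing in for one unit of a scene CNN's pool layer.
rng = np.random.default_rng(0)
act = rng.random((8, 8))
act[2:5, 3:6] += 2.0  # simulate a unit firing strongly on an object region

heatmap, mask = localization_map(act, (224, 224))
print(heatmap.shape, mask.mean())
```

In the paper's setting, the interesting point is that such strongly responding units arise on their own during scene-classification training, so the same forward pass that yields the scene label also yields these localization maps for free.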