Open World Object Detection in the Era of Foundation Models

Stanford University

Abstract

Object detection is integral to a bevy of real-world applications, from robotics to medical image analysis. To be used reliably in such applications, models must be capable of handling unexpected, or novel, objects. The open world object detection (OWD) paradigm addresses this challenge by enabling models to detect unknown objects and incrementally learn the ones they discover. However, OWD method development is hindered by stringent benchmark and task definitions, which effectively prohibit the use of foundation models. Here, we aim to relax these definitions and investigate the utilization of pre-trained foundation models in OWD.

First, we show that existing benchmarks are insufficient for evaluating methods that utilize foundation models, as even naive integration methods nearly saturate them. Motivated by this result, we curate a new and challenging benchmark comprising five real-world, application-driven datasets, spanning difficult domains such as aerial and surgical images, and establish baselines on it. Finally, we exploit the inherent connection between classes in application-driven datasets and introduce a novel method, Foundation Object detection Model for the Open world (FOMO), which identifies unknown objects based on the attributes they share with the base known objects.

FOMO achieves roughly 3x higher unknown-object mAP than baselines on our benchmark. Nevertheless, our results indicate significant room for improvement, suggesting a substantial research opportunity in further scaling object detection methods to real-world domains. Our code and benchmark are included in the supplementary material and will be released upon publication.

Method

(i) Attributes are generated using an LLM and are then encoded by FOMO's Text Transformer Encoder into the attribute embeddings (E_att). Meanwhile, vision-based object embeddings (e^v) are derived from (image-based) object exemplars via the model's Vision Encoder. (ii) For attribute selection, we update W while freezing E_att using a BCE classification loss, followed by thresholding. (iii) To refine the attributes, we update E_att while freezing W. (iv) During inference, an image is fed into the vision encoder, followed by the bounding box and classification heads. The classification head uses the pre-computed attribute embeddings to produce the attribute logits. To identify unknown objects, we look for object proposals that are in-distribution (ID) with respect to the attributes but out-of-distribution (OOD) with respect to the known classes. s_A denotes the attribute scores between an image and the attribute embeddings.
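
Steps (ii) and (iv) can be summarized in a short sketch. The following is a minimal illustration under assumptions, not the released implementation: the names (vision_embeds, attr_embeds, W), the cosine-similarity scoring, and the max-score thresholds are all placeholders.

    import torch
    import torch.nn.functional as F

    def attribute_scores(vision_embeds, attr_embeds):
        """s_A: cosine-similarity scores between object proposals and attributes.

        vision_embeds: (N, D) proposal embeddings e^v from the vision encoder.
        attr_embeds:   (A, D) attribute embeddings E_att from the text encoder.
        """
        v = F.normalize(vision_embeds, dim=-1)
        a = F.normalize(attr_embeds, dim=-1)
        return v @ a.T  # (N, A)

    def train_attribute_selection(vision_embeds, labels, attr_embeds,
                                  num_classes, steps=100, lr=1e-2):
        """Step (ii): learn a class-attribute weight matrix W with a BCE loss,
        keeping E_att frozen; low-weight attributes can then be thresholded away."""
        W = torch.zeros(attr_embeds.shape[0], num_classes, requires_grad=True)
        opt = torch.optim.Adam([W], lr=lr)
        s_A = attribute_scores(vision_embeds, attr_embeds).detach()
        targets = F.one_hot(labels, num_classes).float()
        for _ in range(steps):
            logits = s_A @ W  # (N, C) per-class logits from attribute scores
            loss = F.binary_cross_entropy_with_logits(logits, targets)
            opt.zero_grad(); loss.backward(); opt.step()
        return W.detach()

    def find_unknowns(vision_embeds, attr_embeds, W, attr_thresh, class_thresh):
        """Step (iv): flag proposals that score highly against the attributes (ID)
        but low against every known class (OOD)."""
        s_A = attribute_scores(vision_embeds, attr_embeds)
        class_scores = torch.sigmoid(s_A @ W).max(dim=-1).values
        attr_fit = s_A.max(dim=-1).values  # assumed ID measure: best attribute match
        return (attr_fit > attr_thresh) & (class_scores < class_thresh)

Step (iii) would mirror train_attribute_selection, making the attribute embeddings the learnable parameter while W stays frozen.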

Results

Qualitative results of (top) BASE-FS and (bottom) FOMO on RWD (blue: unknown, green: known). FOMO shows superior performance on RWD, appearing to have less known-class confusion and better unknown object detection capability. For example, in Aquatic, BASE-FS seems to confuse unknown objects (stingrays) with known objects (sharks), whereas FOMO appears more robust.

Each domain is split into two tasks. In Task 1, a subset of classes is given as known, and known and unknown mAP are evaluated; in Task 2, the remaining classes are revealed, and previously and currently known mAP are evaluated. FOMO and BASE-FS are evaluated in the 100-shot regime, and performance is also reported across shot counts to assess the effect of the number of shots. GT baselines use the ground-truth class names to identify unknown objects, effectively operating in the open-vocabulary paradigm, and serve as an upper bound for the text-conditioned (zero-shot) baselines.
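
For concreteness, the two-task protocol can be sketched as follows. This is a hypothetical outline: the even class split, the evaluate_map helper, and the detector.learn call are placeholders for illustration, not the benchmark's actual API.

    def two_task_protocol(detector, all_classes, evaluate_map):
        # Task 1: only a subset of classes is known; the rest appear as unknowns.
        split = len(all_classes) // 2  # placeholder split point
        task1_known, task1_hidden = all_classes[:split], all_classes[split:]
        known_map = evaluate_map(detector, classes=task1_known)
        unknown_map = evaluate_map(detector, classes=task1_hidden, as_unknown=True)

        # Task 2: the remaining classes are revealed and learned incrementally;
        # previously vs. currently known mAP measure forgetting vs. plasticity.
        detector.learn(task1_hidden)
        prev_known_map = evaluate_map(detector, classes=task1_known)
        curr_known_map = evaluate_map(detector, classes=task1_hidden)
        return known_map, unknown_map, prev_known_map, curr_known_map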

BibTeX

@article{zohar2023fomo,
    title   = {Open World Object Detection in the Era of Foundation Models},
    author  = {Zohar, Orr and Lozano, Alejandro and Goel, Shelly and Yeung, Serena and Wang, Kuan-Chieh},
    year    = {2023},
    journal = {arXiv preprint arXiv:2312.05745},
}