Authors:
(1) Zhaoqing Wang, The University of Sydney and AI2Robotics;
(2) Xiaobo Xia, The University of Sydney;
(3) Ziye Chen, The University of Melbourne;
(4) Xiao He, AI2Robotics;
(5) Yandong Guo, AI2Robotics;
(6) Mingming Gong, The University of Melbourne and Mohamed bin Zayed University of Artificial Intelligence;
(7) Tongliang Liu, The University of Sydney.
Table of Links
Abstract and 1. Introduction
2. Related works
3. Method and 3.1. Problem definition
3.2. Baseline and 3.3. Uni-OVSeg framework
4. Experiments
4.1. Implementation details
4.2. Main results
4.3. Ablation study
5. Conclusion
6. Broader impacts and References
A. Framework details
B. Promptable segmentation
C. Visualisation
5. Conclusion
In conclusion, this paper proposes Uni-OVSeg, an innovative framework for weakly-supervised open-vocabulary segmentation. By using independent image-text and image-mask pairs, Uni-OVSeg effectively reduces the dependency on labour-intensive image-mask-text triplets while achieving impressive segmentation performance in open-vocabulary settings. By employing an LVLM to refine text descriptions and a multi-scale ensemble to enhance the quality of region embeddings, we alleviate the noise in mask-text correspondences, achieving substantial performance improvements. Notably, Uni-OVSeg significantly outperforms previous state-of-the-art weakly-supervised methods and even surpasses the cutting-edge fully-supervised method on the challenging PASCAL Context-459 dataset. This impressive advancement demonstrates the superiority of our proposed framework and paves the way for further research.