Table of Links

- Abstract and 1 Introduction
- Related Works
  - 2.1. Vision-and-Language Navigation
  - 2.2. Semantic Scene Understanding and Instance Segmentation
  - 2.3. 3D Scene Reconstruction
- Methodology
  - 3.1. Data Collection
  - 3.2. Open-set Semantic Information from Images
  - 3.3. Creating the Open-set 3D Representation
  - 3.4. Language-Guided Navigation
- Experiments
  - 4.1. Quantitative Evaluation
  - 4.2. Qualitative Results
- Conclusion and Future Work, Disclosure statement, and References
2. Related Works
2.1. Vision-and-Language Navigation
Vision-and-Language Navigation (VLN) has recently gained considerable traction because of its potential to improve autonomous navigation by combining natural language understanding with visual perception [2, 6, 11, 12]. The development of foundation models like CLIP [9], which is trained on paired image and text data to learn rich joint representations, has spurred considerable progress in vision and language understanding. The multi-modal nature of these foundation models allows them to comprehend concepts in both text and images, and even to connect concepts across the two modalities. Recent work has increasingly addressed gaps left by earlier VLN methods, such as efficient navigation to spatial goals specified by language commands and zero-shot navigation to spatial goals given unseen language instructions [2].
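To make this cross-modal alignment concrete, the minimal sketch below scores a single camera frame against a few candidate text descriptions using a publicly released CLIP checkpoint through the Hugging Face `transformers` API. The image file name and the prompt list are placeholders for illustration and are not taken from the cited works.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Publicly released CLIP checkpoint; the specific model size is illustrative.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("observation.png")  # placeholder: one robot camera frame
prompts = ["a photo of a kitchen", "a photo of a hallway", "a photo of an office"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the scaled image-text similarities; softmax turns
# them into a distribution over the candidate descriptions.
probs = outputs.logits_per_image.softmax(dim=-1)
print({p: round(float(s), 3) for p, s in zip(prompts, probs[0])})
```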
A major challenge in VLN is interpreting language instructions in unfamiliar environments. A significant limitation of previous studies in this domain is their handling of action errors. If a robot agent takes an incorrect action, it risks failing to reach its destination or exploring unnecessary areas, which increases computational cost and may leave it in a state from which recovery is infeasible. State-of-the-art VLN methods employ diverse strategies to handle such scenarios. Some adopt a pre-training and fine-tuning scheme designed explicitly for VLN tasks, built on transformer-based architectures [13], often using image-text-action triplets in a self-supervised learning setting [14]. Others refine the pre-training process to improve VLN performance, for instance by emphasizing the learning of spatiotemporal visual-textual relationships so that past observations are better exploited for future action prediction [15, 16, 17]. Furthermore, contemporary VLN systems rely predominantly on simulation because they depend on panoramic views and region-feature extraction, which can be computationally prohibitive. In contrast, our work demonstrates the efficiency and computational viability of our pipeline on real-world data, underscoring its practical applicability.
Given recent advances in the semantic understanding of images, there has been growing interest in using semantic reasoning to improve exploration efficiency in novel environments and to handle semantic goals specified via categories, images, or language instructions. Most of these methods are specialized to a single task, i.e., they are uni-modal.
Recent work has also explored executing tasks with lifelong learning, i.e., taking advantage of accumulated experience in the same environment for multi-modal navigation [11]. One such task requires a robot to reach any object, however it is specified, and to remember object locations so it can return to them later. Work on these tasks likewise uses CLIP to align image and language embeddings, matching a language goal description against all object instances in the environment via the cosine similarity of their CLIP features, as sketched below.
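A minimal sketch of that matching step is given below, assuming the instance crops come from an upstream instance-segmentation pipeline; the helper name, file names, and checkpoint are illustrative and do not reproduce the implementation of [11].

```python
import torch
import torch.nn.functional as F
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def match_goal_to_instances(goal_text, instance_crops):
    """Return (best index, similarity scores) for a language goal matched
    against CLIP features of previously observed object instances."""
    with torch.no_grad():
        text_inputs = processor(text=[goal_text], return_tensors="pt", padding=True)
        text_feat = model.get_text_features(**text_inputs)        # shape (1, D)
        image_inputs = processor(images=instance_crops, return_tensors="pt")
        image_feats = model.get_image_features(**image_inputs)    # shape (N, D)
    # Cosine similarity between the goal description and every stored instance.
    sims = F.cosine_similarity(text_feat, image_feats, dim=-1)     # shape (N,)
    return int(sims.argmax()), sims

# Usage (placeholder file names):
# crops = [Image.open(p) for p in ["chair_crop.png", "sofa_crop.png"]]
# best_idx, scores = match_goal_to_instances("the red armchair near the window", crops)
```

Scoring against per-instance features rather than whole frames is what lets the agent return to a specific remembered object instead of merely a scene category.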
:::info
Authors:
(1) Laksh Nanwani, International Institute of Information Technology, Hyderabad, India; this author contributed equally to this work;
(2) Kumaraditya Gupta, International Institute of Information Technology, Hyderabad, India;
(3) Aditya Mathur, International Institute of Information Technology, Hyderabad, India; this author contributed equally to this work;
(4) Swayam Agrawal, International Institute of Information Technology, Hyderabad, India;
(5) A.H. Abdul Hafez, Hasan Kalyoncu University, Sahinbey, Gaziantep, Turkey;
(6) K. Madhava Krishna, International Institute of Information Technology, Hyderabad, India.
:::
:::info
This paper is available on arXiv under the CC BY-SA 4.0 DEED (Attribution-ShareAlike 4.0 International) license.
:::
