:::info
Authors:
(1) Hoang Viet Do, Intelligent Navigation and Control Systems Laboratory (iNCSL), School of Intelligent Mechatronics Engineering, and the Department of Convergence Engineering for Intelligent Drone, Sejong University, Seoul 05006, Republic of Korea ([email protected]);
(2) Yong Hun Kim, Intelligent Navigation and Control Systems Laboratory (iNCSL), School of Intelligent Mechatronics Engineering, and the Department of Convergence Engineering for Intelligent Drone, Sejong University, Seoul 05006, Republic of Korea ([email protected]);
(3) Joo Han Lee, Intelligent Navigation and Control Systems Laboratory (iNCSL), School of Intelligent Mechatronics Engineering, and the Department of Convergence Engineering for Intelligent Drone, Sejong University, Seoul 05006, Republic of Korea ([email protected]);
(4) Min Ho Lee, Intelligent Navigation and Control Systems Laboratory (iNCSL), School of Intelligent Mechatronics Engineering, and the Department of Convergence Engineering for Intelligent Drone, Sejong University, Seoul 05006, Republic of Korea ([email protected]);
(5) Jin Woo Song, Intelligent Navigation and Control Systems Laboratory (iNCSL), School of Intelligent Mechatronics Engineering, and the Department of Convergence Engineering for Intelligent Drone, Sejong University, Seoul 05006, Republic of Korea ([email protected]).
:::
Table of Links
Abstract and I. Introduction
II. Related Works
III. Dead Reckoning using Radar Odometry
IV. Stochastic Cloning Indirect Extended Kalman Filter
V. Experiments
VI. Conclusion and References
Abstract— In this paper, we propose a radar odometry structure that directly utilizes radar velocity measurements for dead reckoning while maintaining the ability to update estimates within the Kalman filter framework. Specifically, we employ the Doppler velocity obtained by a 4D Frequency-Modulated Continuous Wave (FMCW) radar in conjunction with gyroscope data to compute poses. This approach helps mitigate the high drift resulting from accelerometer biases and double integration. Instead, tilt angles measured via gravitational force are utilized alongside relative distance measurements from radar scan matching for the filter’s measurement update. Additionally, to further enhance the system’s accuracy, we estimate and compensate for the radar velocity scale factor. The performance of the proposed method is verified on five real-world open-source datasets. The results demonstrate that our approach reduces position error by 47% and rotation error by 52% on average compared to the state-of-the-art radar-inertial fusion method in terms of absolute trajectory error.
OPEN SOURCE CODE
The implementation of the method described in this article is available at https://github.com/hoangvietdo/dero.
I. INTRODUCTION
In recent decades, achieving precise and reliable localization has emerged as a paramount challenge for advanced autonomous robots. A widely employed approach involves integrating an Inertial Measurement Unit (IMU) with supplementary sensors. The rationale behind this fusion is that IMUs provide high-rate but only short-term accurate measurements, so additional drift-free sources are required. Among these methods, the fusion of IMU data with that from a Global Navigation Satellite System (GNSS) stands out as the most widely used, with a rich developmental history [1]. However, GNSS signals are not always available, as in tunnels or indoor environments, and their reliability degrades in densely built urban areas due to operational constraints [2].
To date, common GNSS-denied navigation strategies have primarily relied on visual sensors such as cameras or Light Detection and Ranging (LiDAR) sensors. While fusing these sensors with an IMU has yielded significant advances [3], [4], they do not offer a comprehensive solution for all environmental conditions. Specifically, cameras are susceptible to failure in low-light conditions, such as darkness, and in low-textured scenes. One potential approach to this challenge is the use of thermal or infrared cameras [5]; still, these alternatives are ineffective in the absence of sufficient temperature gradients. Conversely, LiDAR operates independently of lighting conditions but degrades when operating in the presence of fog or smoke [6].
Instead of relying solely on visual sensors such as cameras or LiDAR, another viable option for GNSS-denied navigation is a 4-dimensional (4D) Frequency-Modulated Continuous Wave (FMCW) radar [7]. Radar offers distinct advantages, particularly in adverse weather conditions such as fog or smoke, where the previously mentioned sensors may fail. Moreover, modern FMCW radars are affordable, lightweight, and compact, making them suitable for small robotic applications with space limitations, such as drones. Another significant advantage of 4D FMCW radar is its ability to provide Doppler velocity in addition to the range, azimuth, and elevation angle of each target. This feature holds the potential for achieving superior localization results compared to other sensor systems.
Considerable efforts have recently been dedicated to integrating an IMU with a 4D FMCW radar for indoor localization [8]. Given that radar offers 4D point cloud measurements of targets, two primary approaches can be employed to determine the robot’s poses. The first approach, known as the instantaneous method, utilizes Doppler velocities and angles of arrival from a single scan to estimate ego velocity [9]. The second approach involves scan matching between two radar scans, enabling not only localization but also mapping [10]. However, due to challenges such as signal reflection, ghost targets, and multi-path interference, this method remains demanding and requires extensive tuning and computational resources. On the other hand, in applications where mapping is unnecessary, the former approach proves to be more robust and reliable.
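To make the instantaneous method concrete, the following is a minimal sketch of least-squares ego-velocity estimation from a single radar scan. It assumes all targets are static and omits the outlier rejection (e.g., RANSAC) that a real pipeline such as [9] requires; `RadarDetection` and `estimateEgoVelocity` are illustrative names, not identifiers from the paper's released code.

```cpp
#include <Eigen/Dense>
#include <vector>

// One radar detection: unit line-of-sight vector (built from azimuth and
// elevation) and the measured Doppler (radial) velocity in m/s.
struct RadarDetection {
  Eigen::Vector3d direction;
  double doppler;
};

// For a static target i, the Doppler satisfies
//   doppler_i = -direction_i . v_ego,
// so stacking all detections yields the linear system A * v_ego = -d.
Eigen::Vector3d estimateEgoVelocity(const std::vector<RadarDetection>& scan) {
  Eigen::MatrixXd A(scan.size(), 3);
  Eigen::VectorXd d(scan.size());
  for (size_t i = 0; i < scan.size(); ++i) {
    A.row(i) = scan[i].direction.transpose();
    d(i) = scan[i].doppler;
  }
  // A real pipeline would first reject moving targets, e.g., with RANSAC.
  return A.colPivHouseholderQr().solve(-d);
}
```

With three or more well-distributed static detections the system is overdetermined, which is why a single scan suffices and no temporal matching is needed.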
To the best of the authors’ knowledge, existing literature predominantly treats radar as an auxiliary sensor within a loosely coupled framework, commonly referred to as radar inertial odometry (RIO). In this framework, radar measurements typically serve to update the state estimate, often through an extended Kalman filter (EKF) [11]. Meanwhile, an Inertial Navigation System (INS) algorithm integrates acceleration to derive velocity, and then integrates again to determine the platform’s position [12]. However, this conventional methodology is notably sensitive to the accuracy of accelerometer bias estimation: even a modest bias estimation error, as small as 10%, can lead to large position errors [13].
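As a back-of-the-envelope illustration of this sensitivity (our own worked example, not a figure from [13]): a constant uncompensated accelerometer bias passes through two integrations and produces a quadratically growing position error,

```latex
\delta p(t) = \tfrac{1}{2}\,\delta b_a\, t^{2}
```

so a residual bias of only 0.01 m/s² already accumulates to 0.5 × 0.01 × 60² = 18 m of drift after one minute.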
To address these challenges, the dead reckoning (DR) fusion technique has been proposed and implemented [13]. When a velocity sensor such as an odometer is available, it replaces the accelerometer for position estimation. This substitution offers several advantages. First, it prevents the significant errors caused by accelerometer biases (typical of low-grade IMUs) and eliminates the second integration step otherwise needed to compute position. Additionally, when relying on accelerometer measurements, the gravitational force must be compensated [12]; inaccuracies in determining this constant can lead to notable errors through double integration. Moreover, conventional INS may suffer additional attitude errors caused by vibration, such as that encountered in drones or legged robots. Vibration induces acceleration biases in IMU measurements, whereas in velocity sensors it typically manifests only as noise, and handling noise is generally more straightforward than handling biases.
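To contrast with the INS mechanization above, the sketch below shows a single DR propagation step with a generic body-frame velocity sensor: attitude comes from integrating the gyroscope, and position from a single integration of the measured velocity, with no accelerometer term and hence no gravity compensation. This is a simplified first-order illustration under hypothetical names; the paper's actual mechanization is derived in Section III.

```cpp
#include <Eigen/Dense>
#include <Eigen/Geometry>

// Minimal dead-reckoning state: attitude and position only.
struct DrState {
  Eigen::Quaterniond q{1, 0, 0, 0};  // body-to-navigation rotation
  Eigen::Vector3d p{0, 0, 0};        // position in the navigation frame
};

void propagate(DrState& x, const Eigen::Vector3d& gyro,
               const Eigen::Vector3d& v_body, double dt) {
  // Attitude update: first-order quaternion integration of angular rate.
  const Eigen::Vector3d dtheta = gyro * dt;
  if (dtheta.norm() > 1e-12) {
    const Eigen::Quaterniond dq(
        Eigen::AngleAxisd(dtheta.norm(), dtheta.normalized()));
    x.q = (x.q * dq).normalized();
  }
  // Position update: rotate the body-frame velocity and integrate once.
  x.p += x.q * v_body * dt;
}
```

Because the velocity enters only once, a velocity-sensor bias produces linearly growing position error rather than the quadratic growth shown above for an accelerometer bias.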
Motivated by the aforementioned advantages and the existing empirical support for the DR technique [13], this paper introduces a DR-based structure for radar odometry (RO), as depicted in Fig. 1. In this framework, we integrate the 4D FMCW radar and the gyroscope as the core navigation algorithm, while treating the accelerometer as an aiding sensor. We argue that radar is highly suitable for the DR structure, as it completely eliminates the side-slip error associated with traditional odometer-based DR approaches in automotive applications. Furthermore, with radar, the DR technique extends naturally to non-wheeled robots (e.g., drones, legged robots, and ships).
To sum up, the primary contributions of this article are as follows:
- A framework of dead reckoning based on radar odometry (DeRO), aided by accelerometers, is presented. In this pipeline, the radar’s ego velocity and the angular velocity measured by the gyroscope are used for the IEKF’s time update. Simultaneously, the radar’s range measurements and the tilt angles computed from the accelerometers contribute to the IEKF’s measurement update under the stochastic cloning concept (a tilt-angle sketch follows this list).
- We estimate and compensate for the 3D radar velocity scale factor to optimize the performance of the proposed system.
- We conduct a comparative analysis with state-of-the-art RIO approaches using the same open-source dataset to validate the effectiveness of our proposed system.
- We implement the proposed method using C++ in Robot Operating System 2 (ROS 2) [14] and make our source code openly available for the benefit of the research community.
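As a companion to the first contribution, the sketch below shows one common way to recover tilt angles from a static accelerometer reading. It assumes an NED convention with negligible linear acceleration, so the specific force reduces to f = -g expressed in the body frame; the function name and convention are illustrative assumptions, not taken from the released code.

```cpp
#include <cmath>

// Roll and pitch from a static accelerometer reading (fx, fy, fz).
// NED convention: at rest the specific force is f = -g in the body frame,
// so gravity leaks into each axis as a function of tilt only.
void tiltFromAccel(double fx, double fy, double fz,
                   double& roll, double& pitch) {
  roll  = std::atan2(-fy, -fz);                          // rotation about x
  pitch = std::atan2(fx, std::sqrt(fy * fy + fz * fz));  // rotation about y
}
```

Note that yaw is unobservable from gravity alone, which is why these tilt angles serve only as a partial attitude measurement in the filter update.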
The rest of this article is organized as follows. Section II reviews related work in the context of RIO and DR. Definitions of coordinate frames and mathematical notation, along with the DR-based radar odometry mechanization, are given in Section III. We develop the stochastic-cloning-based indirect EKF (IEKF), starting from the state-space definition, in Section IV. Next, Section V presents real-world experimental results demonstrating our proposed framework. Finally, Section VI concludes this article.
:::info
This paper is available on arxiv under ATTRIBUTION-NONCOMMERCIAL-NODERIVS 4.0 INTERNATIONAL license.
:::