LOCUS: A Multi-Sensor Lidar-Centric Solution for High-Precision Odometry and 3D Mapping in Real-Time



A high-precision odometry and 3D mapping system centered on LiDAR with multi-sensor collaboration. LOCUS is the core perception system used by the CoSTAR team in the DARPA Subterranean Challenge.

Figure 1. Testing the proposed lidar odometry system in the DARPA Subterranean Challenge. (a) Urban Circuit test environment with the Husky wheeled robot, (b) Tunnel Circuit test environment with the Husky wheeled robot, (c) ground-truth map of the Tunnel dataset, (d) Urban Circuit test environment with the Spot quadruped robot, (e) map generated by the LOCUS system on the Husky robot on the Urban Beta course, (f) ground-truth map of the Urban Alpha course.

1. Main work and contributions

1) Architecture: The LOCUS architecture (see Figure 2) enables accurate, robust, real-time odometry in perceptually challenging environments and mitigates the impact of sensor failures. The architecture can be adapted to heterogeneous robot platforms with different sensor suites and computing capabilities.

 2) Resilience: The system remains operational despite the loss or degradation of one or more sensor channels, via a loosely coupled switching scheme between sensing modalities.

 3) Environmental adaptability: The system can additionally fuse domain knowledge when available, such as the flat ground common in man-made structures.

 4) Field experiments: We conduct extensive field experiments to verify the reliability and effectiveness of the proposed system.

Figure 2. Architecture diagram of the proposed LOCUS system.

1. Point cloud preprocessing

Motion Distortion Correction (MDC): We assume one or more 360-degree lidar sensors. Their data is first fed to a motion distortion correction unit, which corrects the Cartesian coordinates of each point by accounting for the robot’s motion during each scan. This correction is especially important for distant points when the robot undergoes fast rotations, and is a commonly used step. The information used for correction comes from the IMU or from an odometry source (e.g., VIO, WIO, KIO); which source to choose depends on its calibration and reliability at the time.
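The de-skewing step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes constant angular velocity over the scan, per-point timestamps relative to the scan start, and it re-expresses each point in the frame at the end of the scan using Rodrigues' rotation formula.

```python
import numpy as np

def deskew_scan(points, timestamps, angular_velocity, scan_period=0.1):
    """Correct motion distortion under a constant-angular-velocity assumption.

    points:            (N, 3) Cartesian points in the sensor frame
    timestamps:        (N,) per-point time offsets in [0, scan_period] seconds
    angular_velocity:  (3,) rad/s from the IMU (or an odometry source)
    Returns the points re-expressed in the frame at the *end* of the scan.
    """
    corrected = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, timestamps)):
        # Rotation accumulated between this point's capture time and scan end.
        dt = scan_period - t
        theta = angular_velocity * dt          # axis-angle vector
        angle = np.linalg.norm(theta)
        if angle < 1e-12:                      # negligible motion: copy as-is
            corrected[i] = p
            continue
        axis = theta / angle
        # Rodrigues' rotation formula from the axis-angle representation.
        K = np.array([[0, -axis[2], axis[1]],
                      [axis[2], 0, -axis[0]],
                      [-axis[1], axis[0], 0]])
        R = np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)
        corrected[i] = R @ p
    return corrected
```

A full implementation would also interpolate translation, but the rotational term dominates the distortion during fast turns, which is the case the paper highlights.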

Point Cloud Merging: On robots with multiple lidars, the point cloud merger uses the known rigid transformations between sensors to merge each motion-corrected point cloud into a single cloud, expanding the robot’s overall field of view.
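Merging with known extrinsics amounts to applying each sensor-to-base rigid transform and stacking the results. A minimal sketch (the transform names and array layout are illustrative assumptions):

```python
import numpy as np

def merge_clouds(clouds, extrinsics):
    """Merge per-lidar clouds into a common base frame.

    clouds:      list of (N_i, 3) arrays, one per lidar, in each sensor frame
    extrinsics:  list of (4, 4) homogeneous transforms, base <- sensor
    """
    merged = []
    for pts, T in zip(clouds, extrinsics):
        R, t = T[:3, :3], T[:3, 3]
        merged.append(pts @ R.T + t)   # apply the rigid transform to every point
    return np.vstack(merged)
```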

Point Cloud Filter: The resulting point cloud is then processed by a point cloud filter to remove noise and out-of-range points, control data volume, and reduce computational load. The filter is a sequential combination of a 3D voxel grid filter and a random downsampling filter, each of which can be individually tuned, activated, and deactivated. The voxel grid filter averages the points in each 3D volume (voxel) to reduce data size while still capturing the main structure of the environment; we use a voxel size of 0.1 m in the tests in this paper. For the random downsampling filter, we use the algorithm of [23] with a downsampling percentage of 90%. For both filters we use the implementations in the Point Cloud Library (PCL) [24].
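The paper uses the PCL implementations; as a self-contained illustration of what the two filters compute, here is a NumPy sketch. Whether "90%" means keep or discard 90% is ambiguous in the text, so the `keep_fraction` parameter below is an assumption.

```python
import numpy as np

def voxel_grid_filter(points, voxel=0.1):
    """Replace all points in each voxel with their centroid (0.1 m, as in the paper)."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    n_voxels = inv.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inv, points)   # accumulate point sums per voxel
    np.add.at(counts, inv, 1)      # and point counts per voxel
    return sums / counts[:, None]

def random_downsample(points, keep_fraction=0.9, rng=None):
    """Keep a uniformly random subset of points (interpretation of the 90% setting)."""
    rng = rng or np.random.default_rng(0)
    n_keep = int(len(points) * keep_fraction)
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]
```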

2. Scanning and matching module

The scan matching unit (light blue box in Figure 2) uses GICP-based scan-to-scan and scan-to-submap matching to estimate the six-degree-of-freedom motion between consecutive scans.

Sensor Integration Module: On robots with multimodal perception, we use initial transform estimates from non-lidar sources, when available, in the scan-to-scan matching stage to improve accuracy and reduce computation.

Health Monitoring: Multiple odometry sources (e.g., VIO, KIO, WIO) and raw IMU measurements are first transformed into the robot coordinate frame and then fed into the health monitor. The monitor selects the best output from the sources it considers healthy. The system is designed to support a variety of health indicators. For example, ongoing work [25] applies different health checks (such as feature counts and observability analysis) to different sources, as well as rate and covariance checks. In our current implementation, we use a simple rate check: if the rate of incoming messages is sufficient (greater than 1 Hz), the source is considered healthy.
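The rate check described above can be sketched as a small monitor class. This is an illustrative assumption about structure, not the paper's code; it estimates the message rate over a sliding window of timestamps and applies the 1 Hz threshold.

```python
import collections

class RateHealthMonitor:
    """Minimal rate-based health check, mirroring the paper's criterion:
    a source is healthy if its message rate exceeds min_rate_hz (1 Hz in LOCUS)."""

    def __init__(self, min_rate_hz=1.0, window=10):
        self.min_rate_hz = min_rate_hz
        # Keep the last `window` message timestamps per source.
        self.stamps = collections.defaultdict(
            lambda: collections.deque(maxlen=window))

    def report(self, source, stamp):
        """Record that `source` delivered a message at time `stamp` (seconds)."""
        self.stamps[source].append(stamp)

    def is_healthy(self, source, now):
        """True if the observed rate up to `now` meets the threshold."""
        s = self.stamps[source]
        if len(s) < 2:
            return False
        span = now - s[0]
        return span > 0 and (len(s) - 1) / span >= self.min_rate_hz
```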

Scan-to-scan matching: In the scan-to-scan matching stage, we use GICP to compute an optimal transformation T_{k-1,k} that minimizes the residual E between the previous scan L_{k-1} and corresponding points in the current scan L_k.
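For reference, the objective has the standard generalized-ICP form; the exact notation below is a reconstruction following the usual GICP formulation, where the C_i are per-point covariances estimated from local neighborhoods and d_i are point-to-point residuals:

```latex
\hat{T}_{k-1,k} \;=\; \arg\min_{T}\; \mathrm{E}\bigl(T\,L_{k},\, L_{k-1}\bigr),
\qquad
\mathrm{E} \;=\; \sum_{i} d_i^{\top}\bigl(C_i^{\,k-1} + T\, C_i^{\,k}\, T^{\top}\bigr)^{-1} d_i,
\qquad
d_i \;=\; p_i^{\,k-1} - T\, p_i^{\,k}.
```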

When the sensor integration module provides an estimate, we use it to initialize GICP. If all other sensors fail, GICP is initialized with the identity transform and the system degrades to pure lidar odometry.

Scan-to-submap matching: The motion estimated in the scan-to-scan matching stage is further refined by a scan-to-submap matching step. Here, L_k is matched against a local submap S_k, extracted from the region of the global map around the robot's current estimated pose in world coordinates.

In this optimization, T is initialized with the result of Equation 1. The global map is a point cloud stored in octree format, built by accumulating a scan after every translation of t meters or rotation of r degrees; for our results we use t = 1 m and r = 30°. We store the map in an octree with a minimum resolution of 0.001 m, which enables fast search while retaining nearly all points. In the experiments in this paper, we uniformly run the GICP algorithm with four threads.
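The accumulation trigger (every t = 1 m of translation or r = 30° of rotation) can be sketched as a check on the relative pose since the last accumulated keyframe; this is an illustrative sketch, with the rotation angle recovered from the trace of the relative rotation matrix:

```python
import numpy as np

def should_accumulate(T_last, T_now, trans_thresh=1.0, rot_thresh_deg=30.0):
    """True when the pose has moved >= 1 m or rotated >= 30 deg (paper settings)
    since the last scan accumulated into the map."""
    dT = np.linalg.inv(T_last) @ T_now          # relative motion
    trans = np.linalg.norm(dT[:3, 3])
    # Rotation angle from the trace: cos(theta) = (tr(R) - 1) / 2.
    cos_angle = np.clip((np.trace(dT[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_deg = np.degrees(np.arccos(cos_angle))
    return trans >= trans_thresh or rot_deg >= rot_thresh_deg
```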

Environmental Adaptability: Flat Ground Assumption: Man-made environments often contain large areas of flat ground, which can aid odometry if known in advance. When such ground is detected or known, the Flat Ground Assumption (FGA) can be activated to limit drift along the Z axis and errors in roll and pitch (blue box at the lower right of Figure 2). The FGA operates on the scan-to-scan and scan-to-submap outputs, zeroing out any Z-axis motion, roll, or pitch in a global gravity-aligned coordinate frame.
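Zeroing out Z, roll, and pitch amounts to projecting each estimated pose onto a planar-motion manifold: keep x, y, and yaw, discard the rest. A minimal sketch of that projection (assuming a gravity-aligned world frame and the ZYX yaw convention):

```python
import numpy as np

def apply_flat_ground_assumption(T, z_ref=0.0):
    """Project a 4x4 pose onto the flat-ground manifold:
    keep x, y and yaw; zero roll and pitch; pin z to the ground height."""
    yaw = np.arctan2(T[1, 0], T[0, 0])   # yaw angle from the rotation matrix
    c, s = np.cos(yaw), np.sin(yaw)
    out = np.eye(4)
    out[:3, :3] = np.array([[c, -s, 0],  # pure rotation about the gravity axis
                            [s,  c, 0],
                            [0,  0, 1]])
    out[:3, 3] = [T[0, 3], T[1, 3], z_ref]
    return out
```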

3. Experimental results

We compare the proposed algorithm against six state-of-the-art open-source algorithms covering different point cloud matching methods and sensor integration schemes; Table 1 summarizes their characteristics.

We compare from three perspectives: 1) accuracy, 2) robustness, and 3) computational efficiency.

Only part of the comparison is shown here: accuracy results appear in Table 2, robustness results in Table 3 and Figure 5, and efficiency results in Figure 6.

Figure 5. Robustness testing on the Beta course. (a) Comparison under WIO/IMU failure, (b) comparison under WIO failure, (c) comparison under LiDAR failure.

Figure 6. Processing-time comparison for the different lidar odometry algorithms. A processing time under 0.1 s supports real-time operation at 10 Hz.

