Abstract

In recent years there have been excellent results in Visual-Inertial Odometry techniques, which aim to compute the incremental motion of the sensor with high accuracy and robustness. However, these approaches lack the capability to close loops, and trajectory estimation accumulates drift even if the sensor is continually revisiting the same place. In this work we present a novel tightly-coupled Visual-Inertial Simultaneous Localization and Mapping system that is able to close loops and reuse its map to achieve zero-drift localization in already mapped areas. While our approach can be applied to any camera configuration, we address here the most general problem of a monocular camera, with its well-known scale ambiguity. We also propose a novel IMU initialization method, which computes the scale, the gravity direction, the velocity, and gyroscope and accelerometer biases, in a few seconds with high accuracy. We test our system in the 11 sequences of a recent micro-aerial vehicle public dataset, achieving a typical scale factor error of 1% and centimeter precision. We compare to the state-of-the-art in visual-inertial odometry in sequences with revisiting, demonstrating the better accuracy of our method due to map reuse and no drift accumulation.

Keywords

Monocular, Reuse, Simultaneous localization and mapping, Artificial intelligence, Computer vision, Inertial frame of reference, Computer science, Environmental science, Robot, Engineering, Physics, Mobile robot

Publication Info

Year
2017
Type
article
Volume
2
Issue
2
Pages
796-803
Citations
783
Access
Closed

Citation Metrics

783 (OpenAlex)

Cite This

Raul Mur-Artal, Juan D. Tardós (2017). Visual-Inertial Monocular SLAM With Map Reuse. IEEE Robotics and Automation Letters, 2(2), 796-803. https://doi.org/10.1109/lra.2017.2653359

Identifiers

DOI
10.1109/lra.2017.2653359