Abstract
In recent years there have been excellent results in Visual-Inertial Odometry techniques, which aim to compute the incremental motion of the sensor with high accuracy and robustness. However, these approaches lack the capability to close loops, and trajectory estimation accumulates drift even if the sensor is continually revisiting the same place. In this work we present a novel tightly-coupled Visual-Inertial Simultaneous Localization and Mapping system that is able to close loops and reuse its map to achieve zero-drift localization in already mapped areas. While our approach can be applied to any camera configuration, we address here the most general problem of a monocular camera, with its well-known scale ambiguity. We also propose a novel IMU initialization method, which computes the scale, the gravity direction, the velocity, and the gyroscope and accelerometer biases, in a few seconds with high accuracy. We test our system on the 11 sequences of a recent micro-aerial-vehicle public dataset, achieving a typical scale-factor error of 1% and centimeter precision. We compare to the state of the art in visual-inertial odometry on sequences with revisiting, demonstrating the higher accuracy of our method due to map reuse and the absence of accumulated drift.
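To make the scale ambiguity mentioned above concrete, the following toy sketch recovers the metric scale of a monocular trajectory by least-squares alignment with IMU-derived metric displacements. This is an illustration on synthetic data only, not the paper's initialization method (which jointly solves for scale, gravity direction, velocity, and both IMU biases); all variable names and numbers here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

true_scale = 2.5
# Up-to-scale camera displacements, as a monocular SLAM system would output
# (arbitrary units due to the scale ambiguity).
p_vis = rng.normal(size=(50, 3))
# Corresponding metric displacements, as IMU integration would provide,
# with a little synthetic noise.
p_imu = true_scale * p_vis + 0.01 * rng.normal(size=(50, 3))

# Linear least squares for the single scale factor s minimizing
# ||s * p_vis - p_imu||^2, whose closed-form solution is
# s = <p_vis, p_imu> / <p_vis, p_vis>.
s = float(np.sum(p_vis * p_imu) / np.sum(p_vis * p_vis))
print(f"estimated scale: {s:.3f}")
```

With low noise the estimate lands very close to the true scale; in the real system this alignment is one ingredient of a larger joint estimation that also accounts for gravity and sensor biases.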
Related Publications
VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator
One camera and one low-cost inertial measurement unit (IMU) form a monocular visual-inertial system (VINS), which is the minimum sensor suite (in size, weight, and power) for th...
ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual–Inertial, and Multimap SLAM
This paper presents ORB-SLAM3, the first system able to perform visual, visual-inertial and multi-map SLAM with monocular, stereo and RGB-D cameras, using pin-hole and fisheye...
LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping
We propose a framework for tightly-coupled lidar inertial odometry via smoothing and mapping, LIO-SAM, that achieves highly accurate, real-time mobile robot trajectory estimatio...
Mobile Robot Localization and Mapping with Uncertainty using Scale-Invariant Visual Landmarks
A key component of a mobile robot system is the ability to localize itself accurately and, simultaneously, to build a map of the environment. Most of the existing algorithms are...
Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization
In this paper we propose an affordable solution to self-localization, which utilizes visual odometry and road maps as the only inputs. To this end, we present a probabilistic mo...
Publication Info
- Year: 2017
- Type: article
- Volume: 2
- Issue: 2
- Pages: 796-803
- Citations: 783
- Access: Closed
Identifiers
- DOI: 10.1109/lra.2017.2653359