Abstract

We present Argoverse, a dataset designed to support autonomous vehicle perception tasks including 3D tracking and motion forecasting. Argoverse includes sensor data collected by a fleet of autonomous vehicles in Pittsburgh and Miami, as well as 3D tracking annotations, 300k extracted interesting vehicle trajectories, and rich semantic maps. The sensor data consists of 360-degree images from 7 cameras with overlapping fields of view, forward-facing stereo imagery, 3D point clouds from long-range LiDAR, and 6-DOF pose. Our 290 km of mapped lanes contain rich geometric and semantic metadata not currently available in any public dataset. All data is released under a Creative Commons license at Argoverse.org. In baseline experiments, we use map information such as lane direction, driveable area, and ground height to improve the accuracy of 3D object tracking. We use 3D object tracking to mine more than 300k interesting vehicle trajectories and create a trajectory forecasting benchmark. Motion forecasting experiments ranging in complexity from classical methods (k-NN) to LSTMs demonstrate that using detailed vector maps with lane-level information substantially reduces prediction error. Our tracking and forecasting experiments represent only a superficial exploration of the potential of rich maps in robotic perception. We hope that Argoverse will enable the research community to explore these problems in greater depth.
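As a concrete illustration of the classical forecasting baselines the abstract mentions, the sketch below implements a map-free k-NN forecaster in plain NumPy: it predicts a future trajectory by averaging the futures of the k training trajectories whose observed pasts lie closest to the query. This is a minimal sketch, not the paper's implementation; the function name `knn_forecast`, the array shapes, and the 20-step observed / 30-step predicted horizons are illustrative assumptions.

```python
# Minimal sketch of a k-NN trajectory forecasting baseline (illustrative;
# not the Argoverse authors' implementation). Shapes and horizons are
# assumptions chosen for the example.
import numpy as np

def knn_forecast(train_past, train_future, query_past, k=5):
    """Predict a future trajectory by averaging the futures of the k
    training examples whose observed (past) trajectories are closest.

    train_past:   (N, T_obs, 2)  observed xy trajectories in the training set
    train_future: (N, T_pred, 2) ground-truth continuations of those pasts
    query_past:   (T_obs, 2)     observed trajectory to forecast from
    """
    # Euclidean distance between the query and every training past,
    # computed over the flattened (T_obs * 2) coordinates.
    diffs = train_past.reshape(len(train_past), -1) - query_past.ravel()
    dists = np.linalg.norm(diffs, axis=1)

    # Average the futures of the k nearest neighbors.
    nearest = np.argsort(dists)[:k]
    return train_future[nearest].mean(axis=0)  # (T_pred, 2)

# Toy usage with random data standing in for mined trajectories.
rng = np.random.default_rng(0)
train_past = rng.normal(size=(1000, 20, 2))    # e.g., 2 s observed at 10 Hz
train_future = rng.normal(size=(1000, 30, 2))  # e.g., 3 s predicted at 10 Hz
pred = knn_forecast(train_past, train_future, train_past[0])
print(pred.shape)  # (30, 2)
```

Distances here are raw Euclidean in xy coordinates; a map-aware variant could instead compare trajectories in lane-relative coordinates derived from the vector map, which is the kind of lane-level information the abstract credits with reducing prediction error.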

Keywords

Computer science, Artificial intelligence, Computer vision, Point cloud, Trajectory, Tracking, Benchmark, LiDAR, Metadata, Video tracking, Structure from motion, Sensor fusion, Ranging, Object, Motion, Geography


Publication Info

Year: 2019
Type: article
Pages: 8740-8749
Citations: 1306 (OpenAlex)
Access: Closed

Cite This

Ming-Fang Chang, Deva Ramanan, James Hays et al. (2019). Argoverse: 3D Tracking and Forecasting With Rich Maps. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 8740-8749. https://doi.org/10.1109/cvpr.2019.00895

Identifiers

DOI: 10.1109/cvpr.2019.00895