ADVIO: An Authentic Dataset for Visual-Inertial Odometry
DOI: 10.5281/zenodo.1476931
Zenodo record: 1476931
MaRDI QID: Q6682442
Dataset published in the Zenodo repository.
Author name not available.
Publication date: 25 July 2018
Data abstract: This Zenodo upload contains the ADVIO data for benchmarking and developing visual-inertial odometry methods. The data documentation is available on GitHub: https://github.com/AaltoVision/ADVIO

Paper abstract: The lack of realistic and open benchmarking datasets for pedestrian visual-inertial odometry has made it hard to pinpoint differences between published methods. Existing datasets either lack a full six degree-of-freedom ground truth or are limited to small spaces with optical tracking systems. We take advantage of advances in pure inertial navigation and develop a set of versatile and challenging real-world computer vision benchmark sets for visual-inertial odometry. For this purpose, we have built a test rig equipped with an iPhone, a Google Pixel Android phone, and a Google Tango device. We provide a wide range of raw sensor data that is accessible on almost any modern-day smartphone, together with a high-quality ground-truth track. We also compare the resulting visual-inertial tracks from Google Tango, ARCore, and Apple ARKit with two recent methods published in academic forums. The datasets cover both indoor and outdoor cases, with stairs, escalators, elevators, office environments, a shopping mall, and a metro station.

Attribution: If you use this dataset in your own work, please cite this paper: Santiago Cortés, Arno Solin, Esa Rahtu, and Juho Kannala (2018). ADVIO: An authentic dataset for visual-inertial odometry. In European Conference on Computer Vision (ECCV), Munich, Germany.
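As an illustration of how the provided tracks can be used for benchmarking, the sketch below compares an estimated track against a ground-truth track by interpolating positions onto common timestamps and reporting a position RMSE. This is a minimal sketch, not the authors' evaluation code: the file names, the time/x/y/z column layout, and the omission of spatial alignment are all simplifying assumptions; the actual CSV formats and dataset layout are documented in the GitHub repository linked above.

```python
# Minimal sketch (hypothetical file names and column layout): compare an
# estimated position track against a ground-truth track by interpolating the
# estimate onto the ground-truth timestamps and computing a position RMSE.
import numpy as np
import pandas as pd


def load_track(path):
    """Load a track assumed to have columns: time [s], x, y, z [m]."""
    df = pd.read_csv(path, header=None, names=["t", "x", "y", "z"])
    return df["t"].to_numpy(), df[["x", "y", "z"]].to_numpy()


def position_rmse(t_gt, p_gt, t_est, p_est):
    """Interpolate the estimate onto ground-truth timestamps, return RMSE in metres."""
    p_interp = np.column_stack(
        [np.interp(t_gt, t_est, p_est[:, i]) for i in range(3)]
    )
    err = np.linalg.norm(p_interp - p_gt, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))


if __name__ == "__main__":
    # Hypothetical paths; see the ADVIO documentation for the real file layout.
    t_gt, p_gt = load_track("ground-truth/pose.csv")
    t_est, p_est = load_track("iphone/arkit.csv")
    print(f"position RMSE: {position_rmse(t_gt, p_gt, t_est, p_est):.3f} m")
```

A full evaluation would additionally align the estimated track to the ground truth (for example with a rigid-body or similarity alignment) before computing errors, since the tracks are generally expressed in different coordinate frames.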