Abstract
Measuring vehicle locations relative to a driver's vehicle is a critical component in the analysis of driving data, both in post-hoc analysis (such as in naturalistic driving studies) and in autonomous vehicle navigation. In this work we describe a method to estimate vehicle positions from a forward-looking video camera using intrinsic camera calibration, estimates of extrinsic parameters, and a convolutional neural network trained to detect and locate vehicles in video data. We compare measurements obtained with this method against ground truth and against radar data available from a naturalistic driving study. We identify regions where video is preferred, regions where radar is preferred, and explore trade-offs between the two methods in regions where the preference is more ambiguous. We also describe applications of these measurements for transportation analysis.
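The abstract does not include code, but the geometry it describes can be illustrated with a short sketch: back-projecting the bottom-center pixel of a CNN-detected vehicle bounding box onto a flat road plane using the intrinsic matrix and estimated extrinsic parameters (camera height and pitch). All values and names below (`K`, `cam_height`, `cam_pitch`, `pixel_to_ground`) are hypothetical placeholders for illustration, not quantities from the study, and the flat-road assumption is a simplification.

```python
import numpy as np

# --- Hypothetical parameters, not from the paper: intrinsics and extrinsics ---
K = np.array([[1000.0,    0.0, 960.0],    # fx,  0, cx
              [   0.0, 1000.0, 540.0],    #  0, fy, cy
              [   0.0,    0.0,   1.0]])
cam_height = 1.4            # assumed camera height above the road surface (m)
cam_pitch = np.deg2rad(2.0) # assumed downward pitch of the optical axis (rad)

def pixel_to_ground(u, v):
    """Back-project a pixel (e.g. the bottom-center of a detected vehicle box)
    onto a flat road plane; return (forward, lateral) distance in meters."""
    # Ray direction in the camera frame (x right, y down, z forward).
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])

    # Un-pitch the ray: rotate from the pitched camera frame to a level frame.
    c, s = np.cos(cam_pitch), np.sin(cam_pitch)
    R_unpitch = np.array([[1.0, 0.0, 0.0],
                          [0.0,   c,   s],
                          [0.0,  -s,   c]])
    ray_level = R_unpitch @ ray_cam

    # Axis permutation to a road-aligned frame (x forward, y left, z up).
    ray_road = np.array([ray_level[2], -ray_level[0], -ray_level[1]])
    if ray_road[2] >= 0:
        return None  # ray is at or above the horizon; it never meets the road

    # Intersect the ray from the camera at (0, 0, cam_height) with the plane z = 0.
    t = cam_height / -ray_road[2]
    ground = np.array([0.0, 0.0, cam_height]) + t * ray_road
    return ground[0], ground[1]  # forward range, lateral offset (m)

# Example usage with a made-up detection pixel:
print(pixel_to_ground(u=1100.0, v=700.0))
```

Under these assumptions, range accuracy degrades as the detected box's bottom edge approaches the horizon, which is one reason the trade-off against radar depends on distance.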
Original language | English
---|---
Article number | 334
Journal | IS&T International Symposium on Electronic Imaging Science and Technology
Volume | 2021
Issue number | 6
DOIs |
State | Published - 2021
Event | 2021 Intelligent Robotics and Industrial Applications Using Computer Vision Conference, IRIACV 2021 - Virtual, Online, United States; Duration: Jan 11 2021 → Jan 28 2021
Funding
We would like to acknowledge the support and assistance of Virginia Tech Transportation Institute. This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan). Work was funded by the Federal Highway Administration of the US Department of Transportation, Exploratory Advanced Research Fund.
Keywords
- Autonomous driving
- Camera calibration
- Data fusion
- Scene understanding