TY - GEN
T1 - Online Point Cloud Super Resolution using Dictionary Learning for 3D Urban Perception
AU - Shinde, Rajat C.
AU - Potnis, Abhishek V.
AU - Durbha, Surya S.
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2020/9/26
Y1 - 2020/9/26
N2 - Real-time embedded vision tasks require extraction of complex geometric and morphological features from raw 3D point clouds acquired using range scanning systems such as lidar, radar, and depth cameras. Such applications are found in autonomous navigation, surveying, 3D mapping, and localization tasks such as automatic target recognition (ATR). Typically, a dataset acquired by remote sensing lidar scanners during surveying, known as a point cloud, (1) is large and requires substantial memory to process in a single pass, and (2) suffers from missing information due to rapid changes in sensor orientation while scanning. In our work, we address both issues jointly by proposing an online point cloud super-resolution approach that translates a low-dimensional point cloud into a high-dimensional dense point cloud by learning dictionaries in the low-dimensional subspace. We present our approach for an urban road scenario by reconstructing dense point clouds of 3D objects and comparing results based on PSNR and Hausdorff distance.
AB - Real-time embedded vision tasks require extraction of complex geometric and morphological features from raw 3D point clouds acquired using range scanning systems such as lidar, radar, and depth cameras. Such applications are found in autonomous navigation, surveying, 3D mapping, and localization tasks such as automatic target recognition (ATR). Typically, a dataset acquired by remote sensing lidar scanners during surveying, known as a point cloud, (1) is large and requires substantial memory to process in a single pass, and (2) suffers from missing information due to rapid changes in sensor orientation while scanning. In our work, we address both issues jointly by proposing an online point cloud super-resolution approach that translates a low-dimensional point cloud into a high-dimensional dense point cloud by learning dictionaries in the low-dimensional subspace. We present our approach for an urban road scenario by reconstructing dense point clouds of 3D objects and comparing results based on PSNR and Hausdorff distance.
KW - 3D vision and perception
KW - lidar point cloud super-resolution
KW - online dictionary learning
UR - http://www.scopus.com/inward/record.url?scp=85102015855&partnerID=8YFLogxK
U2 - 10.1109/IGARSS39084.2020.9323992
DO - 10.1109/IGARSS39084.2020.9323992
M3 - Conference contribution
AN - SCOPUS:85102015855
T3 - International Geoscience and Remote Sensing Symposium (IGARSS)
SP - 4414
EP - 4417
BT - 2020 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2020 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2020 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2020
Y2 - 26 September 2020 through 2 October 2020
ER -