LidarCSNet: A Deep Convolutional Compressive Sensing Reconstruction Framework for 3D Airborne Lidar Point Cloud

Rajat C. Shinde, Surya S. Durbha, Abhishek V. Potnis

Research output: Contribution to journal › Article › peer-review

18 Scopus citations

Abstract

Lidar scanning is a widely used surveying and mapping technique across remote-sensing applications involving topological and topographical information. Unlike images, lidar point clouds lack an inherent consistent structure and store redundant information, and therefore require substantial processing time. The Compressive Sensing (CS) framework leverages this redundancy to generate sparse representations and to accurately reconstruct signals from very few linear, non-adaptive measurements. The reconstruction relies on valid assumptions about two parameters: (1) the sampling function, governed by the sampling ratio used to generate samples, and (2) the measurement function, which sparsely represents the data in a low-dimensional subspace. In our work, we address the following motivating scientific questions: Is it possible to reconstruct dense point cloud data from a few sparse measurements? And what is the optimal limit for the CS sampling ratio with respect to overall classification metrics? We propose a novel Convolutional Neural Network based deep Compressive Sensing Network (named LidarCSNet) for generating sparse representations using publicly available 3D lidar point clouds of the Philippines. We performed extensive evaluations of the reconstruction for different sampling ratios {4%, 10%, 25%, 50% and 75%} and observed that the proposed LidarCSNet reconstructed the 3D lidar point cloud with a maximum PSNR of 54.47 dB at a sampling ratio of 75%. We investigate the efficacy of the novel LidarCSNet framework on 3D airborne lidar point clouds for two domains, forests and urban environments, using Peak Signal to Noise Ratio, Hausdorff distance, Pearson Correlation Coefficient and the Kolmogorov-Smirnov test statistic as evaluation metrics for 3D reconstruction. Forest-related results, such as the Canopy Height Model and the 2D vertical profile, are compared with the ground truth to investigate the robustness of the LidarCSNet framework. For the urban environment, we extend our work to propose two novel 3D lidar point cloud classification frameworks, LidarNet and LidarNet++, achieving a maximum classification accuracy of 90.6% compared to other prominent lidar classification frameworks. The improved classification accuracy is attributed to ensemble-based learning on the proposed novel 3D feature stack and supports the use of LidarCSNet for near-perfect reconstruction followed by classification. We document our classification results for the original dataset along with the point clouds reconstructed by LidarCSNet at five different measurement ratios, using overall accuracy and mean Intersection over Union as evaluation metrics for 3D classification. We envisage that the proposed deep-network-based convolutional sparse coding approach for rapid lidar point cloud processing has broad potential across applications, either as a plug-and-play (reconstruction) framework or as a scalable end-to-end (reconstruction followed by classification) system.
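To make the CS terminology in the abstract concrete, the sketch below illustrates what a sampling ratio and linear, non-adaptive measurements mean in practice, together with the PSNR metric used for evaluating reconstructions. This is a minimal illustration only, not the authors' LidarCSNet: the block size, the random Gaussian sensing matrix, and the helper names (compressive_measure, psnr) are hypothetical choices for demonstration.

```python
import numpy as np

def compressive_measure(x, sampling_ratio, seed=0):
    """Take m = round(sampling_ratio * n) linear, non-adaptive measurements y = Phi @ x.

    Phi is a random Gaussian sensing matrix, a common generic choice in CS examples
    (the paper's learned measurement operator would replace it).
    """
    n = x.shape[0]
    m = max(1, int(round(sampling_ratio * n)))
    rng = np.random.default_rng(seed)
    phi = rng.standard_normal((m, n)) / np.sqrt(m)
    return phi @ x, phi

def psnr(reference, estimate):
    """Peak Signal-to-Noise Ratio in dB, using the reference's dynamic range as the peak."""
    mse = np.mean((reference - estimate) ** 2)
    if mse == 0:
        return np.inf
    peak = reference.max() - reference.min()
    return 20.0 * np.log10(peak) - 10.0 * np.log10(mse)

# Example: a block of 1024 elevation values sampled at a 25% sampling ratio.
x = np.random.default_rng(1).random(1024)
y, phi = compressive_measure(x, sampling_ratio=0.25)
print(y.shape)  # (256,): only 25% as many measurements as original samples
print(psnr(x, x + 1e-3 * np.random.default_rng(2).standard_normal(1024)))
```

A reconstruction network such as the one described in the paper would learn to invert this kind of measurement process, recovering the full signal from y; the PSNR values reported in the abstract (e.g., 54.47 dB at a 75% sampling ratio) quantify how close that recovery is to the original.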

Original language: English
Pages (from-to): 313-334
Number of pages: 22
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Volume: 180
DOIs
State: Published - Oct 2021
Externally published: Yes

Funding

This research is partially funded under the Prime Minister's Research Fellowship issued by the Ministry of Education, Government of India. The high-performance computation for lidar data processing was performed using Google Cloud Platform credits received under the Google Cloud Platform Research Credits Program. The authors express their gratitude to the Training Center for Applied Geodesy and Photogrammetry (UP TCAGP) and the PHIL-Lidar Program of the Philippines for publishing the open lidar data, and to the Google Cloud Team for providing GPU-enabled computing facilities on the Google Cloud Platform for implementing the architectures.

Funders
Google Cloud Team
Training Center for Applied Geodesy and Photogrammetry
UP TCAGP
Ministry of Education, India

Keywords

• 3D airborne lidar point cloud
• Compressive sensing
• Convolutional sparse coding
• Deep learning for point cloud classification
• Deep network-based optimization
• Ensemble deep learning
• Lidar for forests
• Urban environment
