Abstract
With the ubiquitous use of mobile imaging devices, collecting perishable disaster-scene data has become unprecedentedly easy. However, existing computing methods struggle to understand these images, which exhibit significant complexity and uncertainty. In this paper, the authors investigate disaster-scene understanding through a deep-learning approach, focusing on two image attributes: hazard type and damage level. Three deep-learning models are trained and their performance is assessed. The best model for hazard-type prediction achieves an overall accuracy (OA) of 90.1%, and the best damage-level classification model achieves an explainable OA of 62.6%; both adopt the Faster R-CNN architecture with a ResNet50 network as the feature extractor. It is concluded that hazard types are more identifiable than damage levels in disaster-scene images. Further insights are revealed: damage-level recognition suffers more from inter- and intra-class variations, and treating damage leveling as hazard-agnostic further contributes to the underlying uncertainties.
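The abstract names the architecture but not the implementation. The following is a minimal sketch, assuming a torchvision-style Faster R-CNN with a ResNet50-FPN backbone re-headed for disaster-scene classes via transfer learning; the class count and names are hypothetical placeholders and are not drawn from the paper.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Hypothetical placeholder: the paper predicts hazard types (or damage levels),
# but the exact label set is not listed in this record.
NUM_TARGET_CLASSES = 5

def build_detector(num_classes: int):
    # Faster R-CNN with a ResNet50-FPN backbone pretrained on COCO,
    # with the box-classification head replaced for the new label set
    # (a common transfer-learning setup; not necessarily the authors' exact code).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # +1 accounts for the background class used by Faster R-CNN.
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes + 1)
    return model

model = build_detector(NUM_TARGET_CLASSES)
model.eval()
with torch.no_grad():
    dummy_image = [torch.rand(3, 480, 640)]          # one RGB image, CHW
    predictions = model(dummy_image)                  # list of dicts: boxes, labels, scores
    print(predictions[0]["boxes"].shape)
```

In this setup the detector's per-box labels would carry the hazard-type or damage-level prediction, and the backbone weights are fine-tuned rather than trained from scratch, consistent with the transfer-learning keyword below.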
Original language | English |
---|---|
Article number | 3952 |
Journal | Applied Sciences (Switzerland) |
Volume | 11 |
Issue number | 9 |
DOIs | |
State | Published - 2021 |
Externally published | Yes |
Funding
This material is partially funded by the National Science Foundation (NSF) under Award Number IIA-1355406 and by the A.37 Disasters element of the National Aeronautics and Space Administration (NASA) Applied Sciences Disaster Program. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of NSF or NASA.
Funders | Funder number |
---|---|
National Science Foundation | IIA-1355406 |
National Aeronautics and Space Administration | |
Keywords
- Classification
- Convolutional neural network
- Deep learning
- Disaster scenes
- Mobile images
- Object detection
- Transfer learning
- Understanding