TY - GEN
T1 - Verification & validation of a semantic image tagging framework via generation of geospatial imagery ground truth
AU - Gleason, Shaun S.
AU - Dema, Mesfin
AU - Sari-Sarraf, Hamed
AU - Cheriyadat, Anil
AU - Vatsavai, Raju
AU - Ferrell, Regina
PY - 2011
Y1 - 2011
AB - As a result of growing geospatial image libraries, many algorithms are being developed to automatically extract and classify regions of interest from these images. However, limited work has been done to compare, validate, and verify these algorithms due to the lack of datasets with high-accuracy ground-truth annotations. In this paper, we present an approach to generate a large number of synthetic images, accompanied by perfect ground-truth annotation, by learning scene statistics from a few training images through Maximum Entropy (ME) modeling. The ME model [1,2] embeds a Stochastic Context-Free Grammar (SCFG), which models object attribute variations, within Markov Random Fields (MRF), with the final goal of modeling contextual relations between objects. Using this model, 3D scenes are generated by configuring a 3D object model to obey the learned scene statistics. Finally, these plausible 3D scenes are captured by ray-tracing software to produce synthetic images with the corresponding ground-truth annotations, which are useful for evaluating the performance of a variety of image analysis algorithms.
KW - Markov Random Field (MRF)
KW - Maximum Entropy (ME)
KW - Stochastic Context Free Grammars (SCFG)
KW - Synthetic Imagery
UR - http://www.scopus.com/inward/record.url?scp=80955149376&partnerID=8YFLogxK
U2 - 10.1109/IGARSS.2011.6049372
DO - 10.1109/IGARSS.2011.6049372
M3 - Conference contribution
AN - SCOPUS:80955149376
SN - 9781457710056
T3 - International Geoscience and Remote Sensing Symposium (IGARSS)
SP - 1577
EP - 1580
BT - 2011 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2011 - Proceedings
T2 - 2011 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2011
Y2 - 24 July 2011 through 29 July 2011
ER -