TY - GEN
T1 - A comparative study of multi-focus image fusion validation metrics
AU - Giansiracusa, Michael
AU - Lutz, Adam
AU - Messer, Neal
AU - Ezekiel, Soundararajan
AU - Alford, Mark
AU - Blasch, Erik
AU - Bubalo, Adnan
AU - Manno, Michael
N1 - Publisher Copyright:
© 2016 SPIE.
PY - 2016
Y1 - 2016
N2 - Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. One way that image fusion can be particularly useful is when fusing imagery data from multiple levels of focus. Different focus levels can create different visual qualities for different regions in the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit a user through automation, which requires evaluating the fused images to determine whether they have properly fused the focused regions of each image. Many no-reference metrics, such as information-theory-based, image-feature-based, and structural-similarity-based metrics, have been developed to accomplish such comparisons. However, it is hard to scale an accurate assessment of visual quality, which requires validating these metrics for different types of applications. To do this, human-perception-based validation methods have been developed, particularly using receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with image quality and the peak signal-to-noise ratio (PSNR).
AB - Fusion of visual information from multiple sources is relevant for security, transportation, and safety applications. One way that image fusion can be particularly useful is when fusing imagery data from multiple levels of focus. Different focus levels can create different visual qualities for different regions in the imagery, which can provide much more visual information to analysts when fused. Multi-focus image fusion would benefit a user through automation, which requires evaluating the fused images to determine whether they have properly fused the focused regions of each image. Many no-reference metrics, such as information-theory-based, image-feature-based, and structural-similarity-based metrics, have been developed to accomplish such comparisons. However, it is hard to scale an accurate assessment of visual quality, which requires validating these metrics for different types of applications. To do this, human-perception-based validation methods have been developed, particularly using receiver operating characteristic (ROC) curves and the area under them (AUC). Our study uses these to analyze the effectiveness of no-reference image fusion metrics applied to multi-resolution fusion methods in order to determine which should be used when dealing with multi-focus data. Preliminary results show that the Tsallis, SF, and spatial frequency metrics are consistent with image quality and the peak signal-to-noise ratio (PSNR).
KW - Image Fusion
KW - Metrics
KW - Multi-Modal Imagery
KW - Multi-Resolution Transformations
KW - Receiver Operating Characteristic
KW - Validation
UR - http://www.scopus.com/inward/record.url?scp=84989871981&partnerID=8YFLogxK
U2 - 10.1117/12.2224349
DO - 10.1117/12.2224349
M3 - Conference contribution
AN - SCOPUS:84989871981
T3 - Proceedings of SPIE - The International Society for Optical Engineering
BT - Geospatial Informatics, Fusion, and Motion Video Analytics VI
A2 - Dockstader, Shiloh L.
A2 - Seetharaman, Gunasekaran
A2 - Doucette, Peter J.
A2 - Pellechia, Matthew F.
A2 - Palaniappan, Kannappan
PB - SPIE
T2 - Geospatial Informatics, Fusion, and Motion Video Analytics VI
Y2 - 19 April 2016 through 21 April 2016
ER -