TY - GEN
T1 - Multi-focus and multi-modal fusion
T2 - Geospatial Informatics, Fusion, and Motion Video Analytics VI
AU - Giansiracusa, Michael
AU - Lutz, Adam
AU - Ezekiel, Soundararajan
AU - Alford, Mark
AU - Blasch, Erik
AU - Bubalo, Adnan
AU - Thomas, Millicent
N1 - Publisher Copyright:
© 2016 SPIE.
PY - 2016
Y1 - 2016
N2 - Automated image fusion has a wide range of applications across a multitude of fields such as biomedical diagnostics, night vision, and target recognition. Automation in the field of image fusion is difficult because there are many types of imagery data that can be fused using different multi-resolution transforms. The different image fusion transforms provide coefficients for image fusion, creating a large number of possibilities. This paper seeks to understand how automation could be conceived for selecting the multi-resolution transform for different applications, starting with the multi-focus and multi-modal image sub-domains. The study analyzes the effectiveness of the transforms for each sub-domain, identifying the one or two transforms that are most effective for image fusion. The transform techniques are compared comprehensively to find a correlation between the fusion input characteristics and the optimal transform. The assessment is completed through the use of no-reference image fusion metrics, including information-theory-based, image-feature-based, and structural-similarity-based methods.
AB - Automated image fusion has a wide range of applications across a multitude of fields such as biomedical diagnostics, night vision, and target recognition. Automation in the field of image fusion is difficult because there are many types of imagery data that can be fused using different multi-resolution transforms. The different image fusion transforms provide coefficients for image fusion, creating a large number of possibilities. This paper seeks to understand how automation could be conceived for selecting the multi-resolution transform for different applications, starting with the multi-focus and multi-modal image sub-domains. The study analyzes the effectiveness of the transforms for each sub-domain, identifying the one or two transforms that are most effective for image fusion. The transform techniques are compared comprehensively to find a correlation between the fusion input characteristics and the optimal transform. The assessment is completed through the use of no-reference image fusion metrics, including information-theory-based, image-feature-based, and structural-similarity-based methods.
KW - Image Fusion
KW - Information fusion
KW - Multi-Resolution Transforms
KW - Multi-focus
UR - http://www.scopus.com/inward/record.url?scp=84989864027&partnerID=8YFLogxK
U2 - 10.1117/12.2224347
DO - 10.1117/12.2224347
M3 - Conference contribution
AN - SCOPUS:84989864027
T3 - Proceedings of SPIE - The International Society for Optical Engineering
BT - Geospatial Informatics, Fusion, and Motion Video Analytics VI
A2 - Dockstader, Shiloh L.
A2 - Seetharaman, Gunasekaran
A2 - Doucette, Peter J.
A2 - Pellechia, Matthew F.
A2 - Palaniappan, Kannappan
PB - SPIE
Y2 - 19 April 2016 through 21 April 2016
ER -