Towards Interpretable Machine Learning Metrics for Earth Observation Image Analysis

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Machine learning models have been used extensively to analyze Earth Observation images and have played a crucial role in advancing the field. While most studies focus on improving model performance, some aim to understand the model's output. These explainable approaches provide reasoning behind the model's output, establishing trust and confidence in the results. However, the evaluation of these models' performance is based mainly on accuracy. To enhance the fairness and transparency of machine learning models, their evaluation on Earth Observation images should also account for explainability. This work reflects on existing research on explainable AI in Remote Sensing and further outlines the desirable properties of a gold-standard metric for evaluating explainable machine learning models on EO images.

Original language: English
Title of host publication: IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4066-4068
Number of pages: 3
ISBN (Electronic): 9798350360325
DOIs
State: Published - 2024
Event: 2024 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2024 - Athens, Greece
Duration: Jul 7 2024 - Jul 12 2024

Publication series

Name: International Geoscience and Remote Sensing Symposium (IGARSS)

Conference

Conference: 2024 IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2024
Country/Territory: Greece
City: Athens
Period: 07/7/24 - 07/12/24

Keywords

  • earth observation
  • explainability
  • fairness
  • gold standard
  • metrics
  • trustworthiness
  • XAI
