Crowdsourcing Landscape Perceptions to Validate Land Cover Classifications

Kevin Sparks, Alexander Klippel, Jan Oliver Wallgrün, David Mark

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review


Abstract

This chapter analyzes the correspondence between human conceptualizations of landscapes and spectrally derived land cover classifications. Although widely used, global land cover data have known inconsistencies in accuracy across datasets. With the emergence of crowdsourcing platforms, large-scale contributions from the crowd to validate land cover classifications are now possible and practical. If crowd science is to be incorporated into environmental monitoring, we need some understanding of how humans perceive and conceptualize environmental features. We report on experiments that compare crowd classifications of land cover against an authoritative dataset, the National Land Cover Database (NLCD), and that measure agreement among three participant groups: novices, educated novices, and experts. Results indicate that misclassifications are not random but systematic, tied to particular landscape stimuli and particular land cover classes.
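
The validation approach the abstract describes can be illustrated with a small sketch: comparing crowd-assigned labels to authoritative reference labels via chance-corrected agreement (Cohen's kappa) and tallying confusions per class to check whether disagreements are systematic rather than random. The labels and data below are hypothetical placeholders, not the chapter's actual stimuli or analysis pipeline.

```python
from collections import Counter

def cohen_kappa(crowd, reference):
    """Chance-corrected agreement between two equal-length label sequences."""
    assert len(crowd) == len(reference)
    n = len(crowd)
    observed = sum(c == r for c, r in zip(crowd, reference)) / n
    # Expected agreement if both raters assigned labels independently at random
    pc, pr = Counter(crowd), Counter(reference)
    expected = sum(pc[k] * pr[k] for k in pc.keys() & pr.keys()) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical labels for six landscape photos: the majority crowd vote
# versus the authoritative NLCD class for the same locations.
crowd = ["forest", "forest", "shrub", "grass", "forest", "wetland"]
nlcd  = ["forest", "shrub",  "shrub", "grass", "forest", "grass"]

print(f"kappa = {cohen_kappa(crowd, nlcd):.2f}")

# A per-class confusion tally shows whether disagreement concentrates in
# particular classes (systematic) instead of spreading evenly (random).
confusions = Counter((c, r) for c, r in zip(crowd, nlcd) if c != r)
for (crowd_label, nlcd_label), count in confusions.items():
    print(f"crowd said {crowd_label!r}, NLCD says {nlcd_label!r}: {count}")
```

In a study like this one, the same tally would be computed per participant group (novices, educated novices, experts) to compare how agreement with the reference dataset varies with expertise.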

Original language: English
Title of host publication: Land Use and Land Cover Semantics
Subtitle of host publication: Principles, Best Practices, and Prospects
Publisher: CRC Press
Pages: 295-314
Number of pages: 20
ISBN (Electronic): 9781482237405
ISBN (Print): 9781138747999
DOIs:
State: Published - Jan 1 2015
Externally published: Yes

Keywords

  • classification
  • crowd science
  • land cover
