An Exploration of Some Pitfalls of Thematic Map Assessment Using the New Map Tools Resource

Salk, C., Fritz, S. ORCID: https://orcid.org/0000-0003-0420-8549, See, L. ORCID: https://orcid.org/0000-0002-2665-7065, Dresel, C., & McCallum, I. ORCID: https://orcid.org/0000-0002-5812-9988 (2018). An Exploration of Some Pitfalls of Thematic Map Assessment Using the New Map Tools Resource. Remote Sensing 10 (3), 376. DOI: 10.3390/rs10030376.

Full text: remotesensing-10-00376.pdf (Published Version), available under a Creative Commons Attribution License.

Abstract

A variety of metrics are commonly employed by map producers and users to assess and compare the quality of thematic maps, but their use and interpretation are inconsistent. This problem is exacerbated by a shortage of tools that allow easy calculation and comparison of metrics across different maps or as a map’s legend is changed. In this paper, we introduce a new website and a collection of R functions to facilitate map assessment. We apply these tools to illustrate some pitfalls of error metrics and point out existing and newly developed solutions to them. Some of these problems have been noted previously, but all of them are under-appreciated and persist in the published literature. We show that binary and categorical metrics that incorporate information about true-negative classifications are inflated for rare categories, and that more robust alternatives should be chosen. Most metrics are useful for comparing maps only if their legends are identical. We also demonstrate that combining land-cover classes has the often-neglected consequence of apparent improvement in accuracy, particularly if the combined classes are easily confused (e.g., different forest types). However, we show that the average mutual information (AMI) of a map is relatively robust to combining classes and reflects the information that is lost in this process; we also introduce a modified AMI metric that credits only correct classifications. Finally, we introduce a method of evaluating statistical differences in the information content of competing maps, and show that this method is an improvement over other methods in more common use. We end with a series of recommendations for the meaningful use of accuracy metrics by map users and producers.
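To make the abstract's central contrast concrete, the following minimal R sketch computes overall accuracy and average mutual information (AMI, here the mutual information between map and reference labels derived from a normalised confusion matrix) and shows the effect of merging two easily confused classes. The functions and the confusion matrix are illustrative assumptions for this record, not the actual New Map Tools website or R functions described in the paper.

    ## Illustrative sketch only - not the published New Map Tools functions.
    ## Overall accuracy of a confusion matrix (rows = map, cols = reference).
    overall_accuracy <- function(cm) sum(diag(cm)) / sum(cm)

    ## AMI: treat the normalised confusion matrix as a joint distribution
    ## and compute the mutual information (in bits) between map and
    ## reference labels.
    ami <- function(cm) {
      p  <- cm / sum(cm)          # joint probabilities p(map, reference)
      pr <- rowSums(p)            # marginal over map classes
      pc <- colSums(p)            # marginal over reference classes
      terms <- p * log2(p / outer(pr, pc))
      sum(terms[p > 0])           # skip empty cells (0 * log 0 treated as 0)
    }

    ## Hypothetical 3-class confusion matrix; classes 1 and 2 could be two
    ## forest types that are frequently confused with each other.
    cm <- matrix(c(40, 15,  5,
                   12, 38,  5,
                    3,  2, 80), nrow = 3, byrow = TRUE)

    ## Merge classes 1 and 2 into a single class in both map and reference.
    merge12 <- function(cm) {
      m <- matrix(0, 2, 2)
      m[1, 1] <- sum(cm[1:2, 1:2]); m[1, 2] <- sum(cm[1:2, 3])
      m[2, 1] <- sum(cm[3, 1:2]);   m[2, 2] <- cm[3, 3]
      m
    }
    cm2 <- merge12(cm)

    overall_accuracy(cm)   # accuracy with the full legend
    overall_accuracy(cm2)  # apparent "improvement" after merging classes
    ami(cm)                # information content with the full legend
    ami(cm2)               # AMI does not rise; merging discards information

With these invented numbers, overall accuracy jumps from 0.79 to 0.925 simply because the confusable classes were merged, while AMI cannot increase under such a merge, which is the behaviour the abstract describes.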

Item Type: Article
Uncontrolled Keywords: thematic maps; map accuracy; map comparison; overall accuracy; Cohen’s Kappa; producer’s accuracy; user’s accuracy; average mutual information
Research Programs: Ecosystems Services and Management (ESM)
Depositing User: Romeo Molina
Date Deposited: 03 Apr 2018 07:57
Last Modified: 19 Oct 2022 05:00
URI: https://pure.iiasa.ac.at/15185
