Citizen scientists collect large amounts of valuable data, yet data quality remains an important concern. The CrowdWater game shows a playful way to improve the accuracy of these data.
For more than two years, citizens have been collecting data on water levels of rivers and streams all over the world. The aim is to improve water management and forecasts, particularly in regions with a sparse or nonexistent network of conventional measurement stations. But what about the quality of these data? Human error can play an important role, especially in citizen science projects. The CrowdWater game shows a playful way to improve the accuracy of water level class observations that were submitted by citizen scientists using the CrowdWater app.
In this game, players compare two photos: the original photo with the virtual staff gauge and another one taken at the same location at a later time. Both come from the CrowdWater app. The players then vote on a water level class by comparing the water level in the new photo to the virtual staff gauge in the original photo. Each observation is shown to several players and therefore receives multiple votes. The average water level class across the different players' votes is then compared with the value observed by the citizen scientist in the field. "In this way, the game helps us to confirm or correct the values that were submitted through the app", says Barbara Strobl, one of the doctoral candidates responsible for the project.
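The quality-control idea described above can be sketched in a few lines of Python. This is a simplified illustration only: the function names, the rounding rule, and the acceptance tolerance are assumptions for the sketch, not the project's actual implementation.

```python
# Illustrative sketch (not the CrowdWater code): several players vote on a
# water level class for the same photo pair; the consensus of their votes is
# compared with the class the citizen scientist reported through the app.

def consensus_class(votes):
    """Average the players' water level class votes, rounded to the nearest class."""
    return round(sum(votes) / len(votes))

def check_observation(app_value, votes, tolerance=0):
    """Confirm the app value if it matches the players' consensus within a tolerance.

    Returns the consensus class and a boolean flag (confirmed or not).
    """
    consensus = consensus_class(votes)
    confirmed = abs(app_value - consensus) <= tolerance
    return consensus, confirmed

# Example: five players vote on one observation that was reported as class 2.
votes = [2, 2, 3, 2, 2]
consensus, confirmed = check_observation(app_value=2, votes=votes)
# The consensus class (2) matches the reported value, so it is confirmed.
```

In practice, a project would likely also weigh votes by player reliability or require a minimum number of votes before confirming or correcting an observation; the sketch omits such refinements.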
Curious to play the CrowdWater game? Try it out and win great prizes during each championship!
A recent paper focuses on the value of the online CrowdWater game and demonstrates the potential of gamified approaches for data quality control in citizen science projects.
Strobl B, Etter S, van Meerveld I, Seibert J (2019) The CrowdWater game: A playful way to improve the accuracy of crowdsourced water level class data. PLoS ONE 14(9): e0222579. https://doi.org/10.1371/journal.pone.0222579