Clara Valdez Research | Blog post written together with Anna Riley-Shepard

Dip, shake, read, repeat… nitrate concentration: zero?

Kneeling in the grass by the confluence of three irrigation canals into the river Aa, I tried to wrap my head around why my Drinkable Rivers measurement kit wasn’t picking up on the agricultural run-off I knew must be present in the water.

For the past three months, I had been dipping various strips, probes, and disks into river sites in the Netherlands and Belgium, collecting data I hoped would answer the most important question of my bachelor’s thesis: just how accurate are the materials in citizen science water quality measurement kits such as ours at Drinkable Rivers?

The European Union recognizes adequate water quality as a human right. But actually keeping track of whether water quality is, indeed, adequate has proved challenging. Water quality can change rapidly due to unpredictable flow patterns or sudden influxes of pollutants such as industrial effluents, agricultural run-off, or untreated sewage, requiring a measurement frequency far beyond current capacity. That’s where citizen science (CS) comes in. The idea is this: if enough everyday people like you and me take regular measurements of the water sources close to our homes, we can aggregate this mass of data into a much higher-resolution picture of the health of our rivers.

Governments and other research institutions are increasingly embracing CS datasets to fulfill the large-scale, continuous monitoring needed for environmental protection. Although we know this data cannot always be as accurate and precise as data gathered with specialized lab equipment, we do want to meet the standards of the scientific community! So here at Drinkable Rivers, we decided to run a test: how much error should we expect in our data? While previous studies have examined issues arising from volunteer-gathered measurements, we wanted to assess the performance of the more cost-efficient measurement materials used in citizen science initiatives. We decided to compare a subset of the parameters measured in our kit with results from two other CS kits – as well as lab-grade equipment for reference. Far from invalidating citizen science work, we hope that by diving into any problem areas we discover, we can improve our process and help citizen science reach its full potential for impact.

Back to the riverside. I had chosen three different sampling sites to represent different land uses and types of water body. Introducing: the Aa of Weerijs, the Bergsche Maas, and the Donkmeer. The Aa is a small, placid canal in the southern Netherlands surrounded by farm fields. Since it is too narrow for boat transport, I expected the main pollution to come from agricultural fertilizers containing high concentrations of nitrogen and phosphorus – which cause excessive algal blooms that block sunlight, deplete oxygen, may emit toxins, and generally degrade the aquatic ecosystem. The Bergsche Maas, meanwhile, is a large, fast-flowing canal – highly trafficked by container ships and close to many urban centers. Here, I expected to find evidence of industrial compounds, heavy metals, sediments, and wastewater that raise the river’s pH and temperature, reduce oxygen availability, and contaminate the water with chemicals that harm humans, animals, and plants. Finally, the Donkmeer: a tree-lined lake in a Belgian nature reserve filled with rainwater, with no direct agricultural or industrial inputs. I hoped this water could serve as a reference for a “healthy” aquatic environment – save for the inevitable effects of air pollution during rainfall.

My findings with the Drinkable Rivers measurement kit were mixed. On many water quality parameters – such as temperature, pH, conductivity, turbidity, and total hardness – the kit performed well within the 20% error margin accepted for CS data by the scientific community. But for other, equally important parameters – such as nitrate, phosphate, and ammonia – the values read from the CS kit were far from those given by the lab equipment. This raises potential data validity concerns and prompts us to consider using different materials for those specific variables.
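For the curious, here is a minimal sketch of the kind of check behind that verdict: compute the relative error of a kit reading against the lab reference and see whether it falls inside the 20% margin. The function and all paired readings below are hypothetical, invented purely for illustration – they are not values from my thesis.

```python
# Minimal sketch: compare a kit reading against a lab reference value.
# All numbers below are invented for illustration, not thesis data.

def percent_error(kit_value: float, lab_value: float) -> float:
    """Relative error of the kit reading, as a percentage of the lab reference."""
    return abs(kit_value - lab_value) / lab_value * 100

# Hypothetical paired readings: (parameter, kit reading, lab reading)
paired_readings = [
    ("pH", 7.9, 8.1),
    ("conductivity (uS/cm)", 540, 510),
    ("nitrate (ppm)", 0.5, 4.2),
]

for name, kit, lab in paired_readings:
    err = percent_error(kit, lab)
    verdict = "within" if err <= 20 else "outside"
    print(f"{name}: {err:.0f}% error -> {verdict} the 20% margin")
```

Expressing the error as a percentage of the lab value is simply the most direct way to apply the 20% rule of thumb; the real analysis of course involves more parameters and repeated samples per site.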

Before you go, “Uh-oh!”, let’s put my results into context. Citizen science measurements do not actually have to be highly accurate and precise to be meaningful and useful. Here’s why.

  1. The data can still show trends over time. Think: while nitrogen and phosphorus measurements may not be exact, they can still reflect a farm’s switch from industrial agriculture to regenerative practices.
  2. We can still make out differences upstream and downstream of, for example, a factory. So the data could help us track whether the factory is following waste protocols or secretly dumping untreated effluent into the river.
  3. Accuracy may be better for some value ranges than on average. For example, to detect agricultural run-off we are only interested in phosphate concentrations between 0 and 2 ppm, because anything above 2 ppm already means “too high”. While the kit performs poorly at detecting phosphate concentrations above 5 ppm, it does fine within the range we care about, showing us when we cross the healthy threshold.
  4. The strength of citizen science datasets is their massive sample size. For my thesis, I only took 15 measurements of nitrate, phosphate, and ammonia in total – five samples at each of my three locations (by comparison, I took 45 measurements with the pH and EC sensors). Where small samples are more affected by each individual measurement, large datasets have the benefit of smoothing out random error to produce more reliable averages (see the short sketch after this list). All the more reason to keep growing the citizen science movement and taking more measurements with our Drinkable Rivers kits!
  5. Parameters like nitrate, phosphate, and ammonia are measured colorimetrically. These tests rely on chemical reactions that can be influenced by many environmental factors, and on visual color matching, which is subjective and depends on the person interpreting the result. The equipment we use is also used by professionals in the field when they want a quick indication of these parameters. For increased reliability, professionals likewise use test strips as preliminary tools and follow up with laboratory methods for confirmatory, high-precision results when necessary. And that is exactly what our citizen science measurements are intended for.
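To make point 4 above a little more concrete, here is a minimal simulation sketch. The “true” nitrate concentration and the noise level of a single strip reading are invented numbers; the point is simply that the average of many noisy readings lands much closer to the truth than the average of a handful does.

```python
# Minimal sketch of point 4: random measurement error shrinks as you average
# more readings. The "true" concentration and noise level are invented numbers.
import random
import statistics

random.seed(42)

TRUE_NITRATE = 3.0   # hypothetical true nitrate concentration (ppm)
NOISE_SD = 1.0       # hypothetical random error of a single strip reading (ppm)

def mean_of_noisy_readings(n_readings: int) -> float:
    """Average of n simulated strip readings scattered around the true value."""
    readings = [random.gauss(TRUE_NITRATE, NOISE_SD) for _ in range(n_readings)]
    return statistics.mean(readings)

for n in (5, 50, 500, 5000):
    estimate = mean_of_noisy_readings(n)
    print(f"n = {n:>4}: averaged reading = {estimate:.2f} ppm "
          f"(off by {abs(estimate - TRUE_NITRATE):.2f} ppm)")
```

The spread of the averaged estimate shrinks roughly with the square root of the number of readings – which is exactly why a large citizen science dataset can be trusted more than my fifteen thesis samples.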

And how did our Drinkable Rivers kit stack up against the two other CS kits we tested, from the organizations Water Rangers and Freshwater Watch? Well, the other kits were more accurate on a few parameters, but ours tests a much more comprehensive battery of parameters. Moreover, our focus is not only on the test kit itself, but also on the action communities and the global movement we create together. So your choice of kit really depends on your use case. We recommend getting your hands on any of them… and getting your hands wet in the river!

In the meantime, the Drinkable Rivers team has continued researching the ammonia test strips together with my professor, Dr. Renata van der Weijden, to decide whether or not to change this tool.