False positives and false negatives in citizen science monitoring data: should we be worried?
Date:
Summary
Citizen science projects, in which volunteers from the public contribute to data collection, represent a great opportunity for large-scale monitoring programs. The number of such programs is growing rapidly as their potential is realised, yet concerns remain over the reliability of the resulting data. In recent years, progress has been made in developing models that account for aspects of observation bias such as imperfect detection. However, very few analyses have considered the effects of false positives, i.e. species misidentification, despite the fact that such errors can seriously bias resulting population trends. We analysed a dataset from a government-run citizen science monitoring program for amphibians in Switzerland to determine the prevalence and effect of species misidentification. We used hierarchical models to analyse data from over 1000 sites across 14 years, estimating the prevalence of two sources of observation bias: imperfect detection and species misidentification. By comparing models that include these biases with naïve models that ignore them, we demonstrate the impact that unmodelled observer errors have on occurrence, colonisation and extinction parameters and on the resulting population trend estimates. Errors and bias are common in monitoring data; if we are to realise the promise of volunteer monitoring programs, it is essential that we apply appropriate methods to account for these errors in our analyses.
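To illustrate why unmodelled observer errors distort trend estimates, the sketch below simulates single-season detection/non-detection data with both error types and compares a naïve occupancy estimate (species reported at least once) with the true simulated occupancy. This is a minimal illustration, not the authors' model; the parameter values (`psi`, `p11`, `p10`, the numbers of sites and visits) are assumptions chosen for demonstration only.

```python
import random

random.seed(42)

# Illustrative parameters (assumed, not from the study):
psi = 0.4    # true occupancy probability
p11 = 0.6    # detection prob. at occupied sites (1 - false-negative rate)
p10 = 0.05   # false-positive prob. at unoccupied sites (misidentification)
n_sites, n_visits = 1000, 3

# Latent occupancy state for each site.
z = [random.random() < psi for _ in range(n_sites)]

# Observed detection histories: true detections at occupied sites,
# misidentifications at unoccupied sites.
y = [[random.random() < (p11 if zi else p10) for _ in range(n_visits)]
     for zi in z]

# Naïve estimate: a site counts as occupied if the species was ever reported.
naive_psi = sum(any(yi) for yi in y) / n_sites
true_psi = sum(z) / n_sites
print(f"true psi ~ {true_psi:.2f}, naive estimate ~ {naive_psi:.2f}")
```

With these values the false positives at the many unoccupied sites outweigh the missed detections at occupied sites, so the naïve estimate overstates occupancy; hierarchical models of the kind described above instead estimate `p11` and `p10` jointly with occupancy from the repeat-visit structure.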