FROM FIELD TO FINDINGS: THE BIOACOUSTICS PIPELINE OF AUDIODASH

Keke Ray; The Institute for Bird Populations; kray@birdpop.org; Mary Clapp, Jerry Cole, Joe Weiss

At The Institute for Bird Populations, we have spent eight years conducting over a dozen passive acoustic monitoring projects, amassing >450,000 hours of audio. The volume and diverse applications of these data have necessitated a systematic, efficient workflow to address the bottlenecks between raw audio collection and interpretable results. We introduce AudioDash, a web-based platform and standardized data pipeline designed to streamline the management and analysis of large-scale acoustic datasets. Across applications ranging from single-species studies to analyses of avian community responses to wildfire, the methods we have developed over the last three years have generated almost 90,000 human-verified audio samples. Audio and metadata are collected in the field via autonomous recording units (ARUs), then compressed, cleaned, renamed, and stored on a local server. AudioDash was built as a “model-agnostic” platform and can host multiple machine-learning models, including off-the-shelf options such as BirdNET and customizable classifiers such as Perch. BirdNET generates species detections, confidence scores, and audio clips for expert verification, while we use Perch to build custom models for specific targets. Both classifiers support data export for subsequent statistical evaluation. We trace the path of audio from field to findings, illustrating the workflow, user-friendly design, and diverse applications of AudioDash.
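
As a minimal illustration of the detection step described above (not AudioDash's own implementation, which is not shown here), the sketch below uses the open-source birdnetlib wrapper for BirdNET to produce species detections with confidence scores from a single ARU recording; the file name, coordinates, date, and confidence threshold are placeholder values.

    # Illustrative only: run BirdNET over one ARU recording via the open-source
    # birdnetlib wrapper and print detections with confidence scores.
    # File name, coordinates, date, and threshold are placeholder values.
    from datetime import datetime

    from birdnetlib import Recording
    from birdnetlib.analyzer import Analyzer

    analyzer = Analyzer()  # loads the packaged BirdNET-Analyzer model

    recording = Recording(
        analyzer,
        "ARU01_20240510_053000.wav",  # hypothetical cleaned/renamed field file
        lat=37.87,                    # placeholder site coordinates
        lon=-119.35,
        date=datetime(2024, 5, 10),   # constrains the candidate species list by season
        min_conf=0.25,                # confidence threshold for reported detections
    )
    recording.analyze()

    # Each detection is a dict with species, confidence, and clip start/end times,
    # which could then be queued for expert verification.
    for detection in recording.detections:
        print(detection)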

Poster Session