PASSIVE ACOUSTIC SURVEYS AND A NOVEL MACHINE LEARNING TOOL REVEAL DETAILED SPATIOTEMPORAL VARIATION IN THE VOCAL ACTIVITY OF TWO ANURANS - WITH IMPLICATIONS FOR CARNIVORE MONITORING

Connor M Wood; K. Lisa Yang Center for Conservation Bioacoustics, Cornell Lab of Ornithology; cmw289@cornell.edu; Stefan Kahl, Cathy Brown

Passive acoustic monitoring has proven effective for broad-scale population surveys of acoustically active species, making it a valuable tool for conserving endangered species such as many anurans (and some carnivores). However, successful automated classification of anuran vocalizations in large audio datasets has been limited. We deployed five autonomous recording units at three known breeding areas of the declining Yosemite toad to supplement ongoing human-led survey efforts. We analyzed the audio data with the BirdNET algorithm, which was originally developed for birds but has been expanded to include the Yosemite toad and the sympatric Pacific chorus frog, among other non-avian classes. We achieved high classification accuracy for both species and efficiently detected them in thousands of hours of audio data. For both species, (1) vocalization counts were correlated among three co-deployed recording units but varied substantially in magnitude, (2) we obtained phenological data spanning nearly the entire breeding period, and (3) we observed diel cycles in vocal activity. Vocalization counts are a precursor to acoustic-based abundance indices, while phenological data could reveal shifts in breeding linked to climate change; both types of information could improve the conservation of vocally active amphibians. Finally, we extend the results presented here to illustrate how these techniques are relevant to population monitoring of carnivores.
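The vocalization counts, phenology, and diel cycles described above are all summaries of time-stamped detections. Below is a minimal post-processing sketch, assuming BirdNET detections exported to a CSV with hypothetical columns (filename, start_s, common_name, confidence) and recordings named SITE_YYYYMMDD_HHMMSS.wav, a common ARU convention; this is an illustration, not the authors' code:

```python
import pandas as pd

# Keep detections above a confidence threshold chosen during validation.
detections = pd.read_csv("birdnet_detections.csv")
detections = detections[detections["confidence"] >= 0.5].copy()

# Recover each vocalization's absolute time: the recording start time
# encoded in the filename plus the detection's offset within the file.
file_start = pd.to_datetime(
    detections["filename"].str.extract(r"_(\d{8}_\d{6})\.wav")[0],
    format="%Y%m%d_%H%M%S",
)
detections["time"] = file_start + pd.to_timedelta(detections["start_s"], unit="s")

# Hourly vocalization counts per species: the raw material for both
# diel activity cycles and (aggregated by date instead) breeding phenology.
diel = (
    detections
    .groupby([detections["common_name"], detections["time"].dt.hour])
    .size()
    .rename("count")
)
print(diel)
```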

Use of AI for Processing Camera Trap Images  In-Person Presentation

 

CALIFORNIA FISH AND WILDLIFE'S PARTNERSHIP WITH WILDLIFE INSIGHTS FOR STORING, PROCESSING, AND SHARING CAMERA IMAGES

Lindsey Rich; California Department of Fish and Wildlife; lindsey.rich@wildlife.ca.gov

The California Department of Fish and Wildlife (CDFW) deploys thousands of cameras at strategic locations throughout the state to estimate wildlife distributions and population demographics, which is a critical step in detecting declines, managing populations, and understanding ecosystem health. The thousands of cameras produce tens of millions of images, which present data storage, processing, and sharing challenges. To address these challenges, CDFW partnered with Wildlife Insights, an online platform for storing, identifying, and analyzing camera trap data. Wildlife Insights has enabled CDFW to increase the security of its photos and holistically manage photos so that information can be shared across regions and programs, and assessments of wildlife communities can be done at landscape scales using existing camera data. Further, Wildlife Insights’ computer vision model expedites the processing of photos by automatically identifying blank images (e.g., images of moving vegetation), vehicles, and species, which users can then review and manually verify. CDFW staff from across the state have uploaded over 32 million images from across 10,250 camera deployments to Wildlife Insights, and there will be many more to come as historical data and new projects transition to the platform. 
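A minimal sketch of the triage logic such a review step implies (hypothetical field names and thresholds; not Wildlife Insights' actual implementation):

```python
AUTO_ACCEPT = 0.9      # a project-specific confidence threshold
BLANK_LABEL = "blank"

def triage(prediction: dict) -> str:
    """Route one model prediction: filter blanks, accept confident IDs,
    and send everything else to a human for manual verification."""
    label, conf = prediction["label"], prediction["confidence"]
    if label == BLANK_LABEL and conf >= AUTO_ACCEPT:
        return "blank"   # e.g., images of moving vegetation
    if conf >= AUTO_ACCEPT:
        return "accept"  # species ID kept, still open to review
    return "review"      # low confidence: manual verification required

predictions = [
    {"label": "blank", "confidence": 0.97},
    {"label": "Odocoileus hemionus", "confidence": 0.95},
    {"label": "Vulpes macrotis", "confidence": 0.42},
]
print([triage(p) for p in predictions])  # ['blank', 'accept', 'review']
```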

Use of AI for Processing Camera Trap Images  In-Person Presentation

 

ARTIFICIAL INTELLIGENCE-SUPPORTED ANIMAL IMAGE PROCESSING

David P Waetjen; Dudek; dwaetjen@dudek.com; Fraser Shilling, Brock Ortega

Artificial intelligence (AI) and machine learning are terms describing software approaches that can be trained to perform tasks. Pattern recognition is at the core of most AI tools, including the growing suite of approaches for identifying wildlife. We describe the AI Image Toolkit (AIT, https://ait.dudek.com), a web-based system that organizes a series of tasks into an overall workflow: 1) processing large image datasets to identify and isolate images containing animals, 2) managing image files as part of camera trap projects, and 3) providing data useful in occupancy and other modeling. In the first case, raw data from camera traps are uploaded to a cloud location; the tool identifies images containing animals (>95% accuracy) and returns them to the user in a zip file, along with a count of the individual animals detected. In the second case, images containing animals are transferred to a web-based system where the user can tag images with species, number of animals, behavior, demographics, and other information. In the third case, data and metadata are organized so that they can be queried and automatically packaged into formats used in GIS or statistical analysis, for example occupancy models, diversity indices, and assessments of the effectiveness of crossing structures.
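A minimal sketch of the first workflow step, with a hypothetical stand-in for AIT's proprietary detector:

```python
import zipfile
from pathlib import Path

def count_animals(image_path: Path) -> int:
    """Hypothetical stand-in for AIT's detector; a real implementation
    would run a trained model and return the number of animals found."""
    return 0  # placeholder

upload_dir = Path("uploads")  # cloud location where raw images land
with zipfile.ZipFile("animal_images.zip", "w") as archive, \
        open("animal_counts.csv", "w") as counts:
    counts.write("filename,animal_count\n")
    for image in sorted(upload_dir.glob("*.jpg")):
        n = count_animals(image)
        if n > 0:  # empty frames are dropped from the returned archive
            archive.write(image, arcname=image.name)
            counts.write(f"{image.name},{n}\n")
```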

Use of AI for Processing Camera Trap Images  In-Person Presentation

 

OPEN SOURCE COMPUTER VISION MODELS FOR DATA AGGREGATION AND SORTING

William H Duvall; ECORP Consulting, Inc.; wduvall@ecorpconsulting.com; Caroline A. Garcia

A study was conducted using a TrapCam dataset to develop a model to assist biologists with sorting TrapCam pictures of interest from blank pictures. The study utilized a dataset of over 50,000 pictures. Eight categories of interest were identified: baby owls, dogs, owls, people, cars, equipment, trucks, and feeding, with owls being the primary class. The first seven categories are objects detected by the model; the last is a behavior, which was treated the same way as the objects. The model utilized was YOLOv4, an open-source computer vision model designed for object detection using bounding boxes. The model was trained using a dataset from the project site and pictures found on Google. This dataset can be used as a base for other models to be hand-trained as well. Much of the code developed during the training process can be reused for other projects, cutting down significantly on development time. Dataset generation can be conducted by anyone with minimal setup and training. The project was conducted using open-source software along with Google Cloud, which resulted in minimal development costs and a path to a low-cost commercial product via the cloud. Google Cloud can be swapped for other cloud providers if ever needed. Open-source licenses almost always allow use of the code in commercial products, so there should not be any licensing issues for use in project work.
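A sketch of inference with a custom-trained YOLOv4 model via OpenCV's DNN module; the file names and class order below are hypothetical stand-ins for the project's fine-tuned model:

```python
import cv2

# Hypothetical class list matching the eight project categories.
CLASSES = ["baby owl", "dog", "owl", "person", "car", "equipment", "truck", "feeding"]

# Load a Darknet-format YOLOv4 config/weights pair (hypothetical paths).
net = cv2.dnn.readNetFromDarknet("yolov4-custom.cfg", "yolov4-custom.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

def classify_picture(image_path, conf_threshold=0.5):
    """Return detected class names, or [] for a blank picture."""
    image = cv2.imread(image_path)
    if image is None:
        return []
    class_ids, confidences, _boxes = model.detect(image, confThreshold=conf_threshold)
    return [CLASSES[int(i)] for i in class_ids]

# Pictures whose returned list is empty get sorted into the 'blank' pile.
```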

Use of AI for Processing Camera Trap Images  In-Person Presentation

 

WILDLIFE IMAGE AI DETECTION: FROM LAB TESTING TO ENTERPRISE ADOPTION

Martin Slosarik; Picogrid, Inc.; martin@picogrid.io; David Delaney, USACE CERL

Natural resource managers use camera traps and other sensors to monitor and adaptively manage threatened and endangered (T&E) species and other species of interest. However, moving, storing, and analyzing large quantities of photos currently poses significant challenges. The Picogrid Platform addresses these challenges by combining hardware capable of real-time photo collection, cloud-based photo/video management software, and a variety of third-party AI algorithms. This talk will cover the experience of building and testing enterprise-grade software for camera trap image management used by the Department of Defense.

Use of AI for Processing Camera Trap Images  Zoom Presentation

 

DETECTING AND MONITORING RODENTS USING CAMERA TRAPS AND MACHINE LEARNING VERSUS LIVE TRAPPING FOR OCCUPANCY MODELING

Jaran Hopkins; California Polytechnic State University; jhopki05@calpoly.edu; Gabriel Marcelo Santos-Elizondo, Francis Villablanca

Determining the best methods to detect individuals and monitor populations, balancing effort against efficiency, can assist conservation and land management. This may be especially true for small, non-charismatic species such as rodents (Rodentia), which comprise >40% of all mammal species. Given the importance of rodents to ecosystems, and the number of listed species, we tested two commonly used detection and monitoring methods, live traps and camera traps, to determine their efficiency for rodents. An artificial-intelligence machine-learning model was developed to process the camera trap images and identify the species within them, which reduced camera-trapping effort. We used occupancy models to compare detection probability and occupancy estimates for six rodent species across the two methods. Camera traps yielded greater detection probability and occupancy estimates for all six species. Live trapping yielded downwardly biased occupancy estimates, required greater effort, and had a lower probability of detection. Camera traps, aimed at the ground to capture the dorsal view of an individual, combined with machine learning provided a practical, non-invasive, low-effort solution for detecting and monitoring rodents. Thus, camera trapping with AI is a more sustainable and practical solution for the conservation and land management of rodents.
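For context, a minimal sketch of the single-season occupancy model underlying such comparisons (MacKenzie et al. 2002), fit by maximum likelihood to hypothetical detection histories rather than the study's data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # inverse-logit transform

# Hypothetical detection histories: rows = sites, columns = repeat
# surveys; 1 = species detected, 0 = not detected.
y = np.array([
    [1, 0, 1, 0],
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
])

def neg_log_likelihood(params, y):
    """Single-season occupancy likelihood with constant psi and p."""
    psi, p = expit(params)  # occupancy and detection probabilities
    k = y.shape[1]
    d = y.sum(axis=1)
    # Sites with >=1 detection are certainly occupied; all-zero sites
    # may be occupied-but-missed or truly unoccupied.
    lik = np.where(
        d > 0,
        psi * p**d * (1 - p)**(k - d),
        psi * (1 - p)**k + (1 - psi),
    )
    return -np.log(lik).sum()

result = minimize(neg_log_likelihood, x0=[0.0, 0.0], args=(y,))
psi_hat, p_hat = expit(result.x)
print(f"occupancy = {psi_hat:.2f}, detection probability = {p_hat:.2f}")
```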

Use of AI for Processing Camera Trap Images  Student Paper Zoom Presentation

 

USING MACHINE LEARNING TO MANAGE LARGE REMOTE CAMERA DATASETS AND DETECT SAN JOAQUIN KIT FOX IN WESTERN MERCED COUNTY

Ryan B Avery; Development Seed; ryan@developmentseed.org; Steven Avery

As a requirement of the Habitat Conservation Plan prepared for the Wright Solar Park project, ICF has used 10 remote cameras annually since 2020 to determine if San Joaquin kit fox (Vulpes macrotis mutica) are present. Unbaited camera stations were established along the fence line of the solar facility and continuously collected images for 4 months in 2020 (May-August) and for 7 months in 2021 and 2022 (February-August). Tens of thousands of images were collected each year. Traditionally, these large image collections are reviewed by humans, who need to sift through many uninteresting images. To improve this process, we created a data processing pipeline using Microsoft's open-source MegaDetector and species classification machine learning models, which were trained on millions of camera trap images. At the project site, we were able to filter out most images without objects of interest, leaving a manageable number of images for human review. The results of the surveys have confirmed the presence of San Joaquin kit fox at the site each year. There were 5 detections in 2020, 9 detections in 2021, and 19 detections in 2022. We present methods for calibrating and running these models on large image collections typical of long-term monitoring projects.
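A minimal sketch of the filtering step, assuming the standard MegaDetector batch-output JSON (in which category "1" denotes an animal detection); the paths and confidence threshold are placeholders:

```python
import json
import shutil
from pathlib import Path

MD_OUTPUT = Path("megadetector_output.json")  # MegaDetector batch results
SOURCE_DIR = Path("camera_images")
REVIEW_DIR = Path("for_human_review")
CONFIDENCE_THRESHOLD = 0.2  # calibrated against a labeled subset of project images

REVIEW_DIR.mkdir(exist_ok=True)
results = json.loads(MD_OUTPUT.read_text())

# Copy only images with a sufficiently confident animal detection,
# leaving a manageable set for human review.
for entry in results["images"]:
    detections = entry.get("detections") or []
    has_animal = any(
        d["category"] == "1" and d["conf"] >= CONFIDENCE_THRESHOLD
        for d in detections
    )
    if has_animal:
        shutil.copy(SOURCE_DIR / entry["file"], REVIEW_DIR)
```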

Use of AI for Processing Camera Trap Images  In-Person Presentation

 

PANEL DISCUSSION

Use of AI for Processing Camera Trap Images  In-Person Presentation