Soundscape ecology to assess environmental and anthropogenic controls on wildlife behavior
Across North America, Arctic and boreal regions are warming two to three times faster than the global average. At the same time, human development continues to encroach and intensify, driven primarily by demand for natural resources such as oil and gas. The vastness and remoteness of Arctic-boreal regions define their landscapes, environment, wildlife, and people, but that same scale and isolation make it difficult to study how their ecosystems are changing. To overcome these challenges, autonomous recording networks can be used to characterize "soundscapes": the collection of sounds that emanates from a landscape. Unlike traditional observing methods, which are expensive, labor-intensive, and logistically challenging, sound-recording networks provide a cost-effective means to both monitor and understand the response of wildlife to environmental and anthropogenic change across vast areas. One particular challenge of this approach is extracting useful ecological information from the large volumes of soundscape data collected. This project will develop the techniques necessary to overcome this challenge.
The researchers' goal is to understand the influence of both environmental dynamics and increasing anthropogenic activity on the behavior and phenology of migratory caribou (Rangifer tarandus), waterfowl, and songbird communities in Arctic-boreal Alaska and northwestern Canada. Through co-production of knowledge with local land managers and Indigenous communities, the research team will combine field observations, modeling, and analyses that include: (1) soundscape measurements, (2) camera-trap observations, (3) automated soundscape analyses, (4) analyses of camera-trap caribou observations, (5) high-resolution modeling of environmental variables, (6) statistical analyses including wildlife occupancy, diversity, and phenology modeling, and (7) a human-computation game that collects descriptions of the acoustic recordings and invites participation by local and Indigenous players. The project will contribute to understanding of how both avian communities and caribou populations are responding to spatiotemporal variations in environmental conditions and increasing oil and gas development in a region where such comprehensive, large-scale research has rarely been possible. Further, at the request of several Tribal organizations, the research will provide insight into how industrial noise influences traditional practices. In addition, the research will provide baseline data on natural sounds, including bird and caribou activity, in the Arctic National Wildlife Refuge prior to oil and gas development. These datasets will be available to inform Indigenous practices and natural resource management, as well as to facilitate future Environmental Assessments required by land managers and oil and gas developers.
This collaborative project between Boelman (1839198, LEAD, LDEO), Mandel (1839185, CUNY), Liston (1839195, CSU), and Brinkman (1839192, UAF) will use soundscape ecology, combining observations and analyses, to quantify the influence of changing environmental dynamics and increasing anthropogenic activity on the behavior and phenology of migratory caribou, waterfowl, and songbird communities in Arctic-boreal Alaska and northwestern Canada. During late April through early October in each year from 2019 through 2022, research teams of two will deploy soundscape instrumentation, consisting of cameras and acoustic recorders, to collect data from caribou calving grounds in northern Alaska and Canada.
Co-Principal Investigators
Publications
Reinking, A.K., S.H. Pedersen, K. Elder, N.T. Boelman, T.W. Glass, B.A. Oates, S. Bergen, S. Roberts, L.R. Prugh, T.J. Brinkman, M.B. Coughenour, J.A. Feltner, K.J. Barker, T.W. Bentzen, A.O. Pedersen, N.M. Schmidt, and G.E. Liston, 2022: Collaborative wildlife–snow science: Integrating wildlife and snow expertise to improve research and management, Ecosphere, 13(6), https://doi.org/10.1002/ecs2.4094
Project Outcomes
NSF Award #1839185
The overarching goal of the project was to develop state-of-the-art computational approaches to collecting and analyzing audio recordings of large wilderness areas. These recordings contain information about wildlife presence, behavior, and life events, as well as environmental conditions, insect activity, and human activity. These data can be quantitatively modeled to understand the influence of changing environmental conditions and increasing anthropogenic (human-based) activity on the behavioral patterns of migratory caribou, waterfowl, and songbird communities.
To study this in Arctic-boreal Alaska and northwestern Canada, we recorded 168,736 hours (~19.5 years) of audio data from 98 recorders in 5 different regions (along the Dalton Highway, along the Dempster Highway, in Ivvavik National Park, in the Prudhoe Bay region, and in the Arctic National Wildlife Refuge) between 2018 and 2024. We developed and fine-tuned a set of deep-learning-based hierarchical models to identify the occurrences of 11 specific categories of audio activity. These include both soundscape-level labels (e.g., anthrophony [sounds produced by humans], biophony [sounds produced by animals], and geophony [sounds produced by nature]) and more precise categories within these (aircraft in anthrophony, birds and insects in biophony, and rain and wind in geophony). Within the bird category, the models are also capable of predicting bird types, specifically songbird, waterfowl, and upland game bird.
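The two-level taxonomy described above can be sketched as follows. This is a minimal illustration of the label hierarchy named in the text; the `rollup` function, the detection threshold, and the score dictionary are assumptions for illustration, not the project's actual code or model outputs.

```python
# Soundscape-level labels and their finer categories, as named in the text.
HIERARCHY = {
    "anthrophony": ["aircraft"],
    "biophony": ["bird", "insect"],
    "geophony": ["rain", "wind"],
}
# Bird types predicted within the "bird" category.
BIRD_TYPES = ["songbird", "waterfowl", "upland game bird"]

def rollup(fine_scores, threshold=0.5):
    """Aggregate fine-grained detector scores into soundscape-level labels.

    fine_scores: dict mapping a fine category to a detector probability.
    Returns the set of coarse labels whose strongest fine-grained score
    meets the (illustrative) threshold.
    """
    active = set()
    for coarse, fine_classes in HIERARCHY.items():
        if max(fine_scores.get(f, 0.0) for f in fine_classes) >= threshold:
            active.add(coarse)
    return active

# Example: a clip with strong bird and wind activity.
scores = {"bird": 0.92, "insect": 0.10, "wind": 0.71, "rain": 0.05, "aircraft": 0.02}
print(sorted(rollup(scores)))  # ['biophony', 'geophony']
```

Rolling fine detections up to coarse labels keeps the two levels consistent by construction: a clip flagged for birds is automatically flagged for biophony.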
The development of these deep learning models made several contributions to computer science and machine learning. The project collected a large amount of audio data, but it was only feasible for humans to annotate a small portion of it with labels that could be used to train and evaluate models. This is a general problem in the field, and the project made several contributions toward addressing it. We first investigated whether existing general-purpose audio classification models could be repurposed to recognize the categories of interest, which would require less labeled data. We found that they could, to some extent, but that models trained specifically for this task on data labeled from our own recordings were more accurate. We then showed that an innovative data valuation technique could identify both unreliable and redundant examples, and that removing them let us train models that were as accurate or more accurate with substantially less labeled data. We also showed that automatically generated weather model output could be used to train audio classifiers to predict wind and rain at a finer temporal and spatial scale than a strictly audio-based model, although these classifiers were less accurate than those trained with human-generated labels. Finally, we experimented with multimodal large language models that process both text and audio to determine whether they could be taught, from textual descriptions alone, to recognize new sound classes such as species-specific bird songs and calls. We showed that the main obstacle is that current models do not reason about text and audio in similar ways, a problem that must be overcome to enable this low-data application.
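The data-valuation idea above can be illustrated with a leave-one-out sketch: score each labeled example by how much validation accuracy changes when it is removed, then drop examples with negative value. The toy 1-D classifier, data, and pruning rule below are stand-ins chosen for brevity, not the project's actual valuation technique or models.

```python
def nearest_centroid_predict(train, x):
    """Predict the label of x with a toy 1-D nearest-centroid classifier."""
    groups = {}
    for feat, lab in train:
        groups.setdefault(lab, []).append(feat)
    means = {lab: sum(v) / len(v) for lab, v in groups.items()}
    return min(means, key=lambda lab: abs(means[lab] - x))

def accuracy(train, val):
    return sum(nearest_centroid_predict(train, x) == y for x, y in val) / len(val)

def loo_values(train, val):
    """Leave-one-out value of each example: acc(with it) - acc(without it)."""
    base = accuracy(train, val)
    return [base - accuracy(train[:i] + train[i + 1:], val)
            for i in range(len(train))]

# Toy features: class "a" clusters near 0, class "b" near 1.
# The last training point is mislabeled (unreliable data).
train = [(0.1, "a"), (0.2, "a"), (0.9, "b"), (1.1, "b"), (0.95, "a")]
val = [(0.0, "a"), (0.15, "a"), (1.0, "b"), (0.6, "b")]

values = loo_values(train, val)
# Negative value = the example hurts validation accuracy; near-zero values
# flag redundant examples that could also be dropped to shrink the set.
pruned = [ex for ex, v in zip(train, values) if v >= 0]
```

Here the mislabeled point gets a negative value and is pruned, and the classifier trained on the smaller, cleaned set is more accurate than one trained on everything, mirroring the result reported above.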
To make our deep-learning models broadly accessible to scientists from other disciplines (such as ecologists and climatologists), we created and distributed a set of notebooks, along with tutorials to guide users through them. The co-investigators on the project used these tools, in collaboration with local land managers and Indigenous communities, to examine how avian communities are responding to variations across time and space in environmental conditions as well as in human-generated noise and activity. Understanding the interrelationships among these systems is crucial to these species' survival and success in the face of changing climate regimes and an expanding human presence on the landscape.
NSF Award #1839195
This project used cutting-edge observations and analyses of acoustic landscapes (i.e., soundscapes) to quantify the influence of changing environmental dynamics and increasing anthropogenic activity on the behavior and phenology of migratory caribou, waterfowl, and songbirds in Arctic-boreal Alaska and northwestern Canada. We used acoustic and camera-trap monitoring to evaluate wildlife responses in novel, non-invasive ways across broad spatial ranges during ecologically crucial seasons. Our study combined field observations, modeling, and analyses, including (1) soundscape measurements, (2) camera-trap observations, (3) automated soundscape analyses, (4) analyses of camera-trap caribou observations, (5) high-resolution modeling of environmental variables, and (6) statistical analyses of wildlife occupancy, diversity, and phenology.
Colorado State University's role in this project was to develop the environmental (weather and snow) modeling tools required for this research. We used those models to produce weather and snow information and passed the resulting datasets to other team investigators so they could evaluate how environmental factors control the wildlife behavior captured by the project's soundscape observations. We used the MicroMet and SnowModel modeling tools to produce environmental variables such as air temperature, wind speed, snow depth, and snow-free date. Other project researchers then used these variables to determine, for example, how spring snow disappearance relates to waterfowl and songbird arrival on the Arctic tundra; how air temperature, humidity, and wind speed affect mosquito activity; and how those mosquitoes, in turn, influence caribou health.
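One downstream analysis named above, relating the modeled snow-free date to bird arrival, can be sketched as a simple ordinary-least-squares fit. The day-of-year numbers below are made-up toy values, and the `ols` helper is an illustration of the kind of statistical linkage described, not the project's actual analysis or data.

```python
def ols(x, y):
    """Fit y = a + b*x by ordinary least squares; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Toy values: snow-free date at a site vs. mean songbird arrival,
# both in day-of-year. NOT project data.
snow_free = [130, 135, 140, 145, 150]
arrival = [136, 140, 144, 148, 152]

a, b = ols(snow_free, arrival)
# In this toy series, each day of later snowmelt shifts arrival
# by b days (here b = 0.8).
```

A slope near 1 would indicate that arrival tracks snowmelt almost day for day, while a smaller slope (as in this toy series) would suggest arrival is only partially driven by snow conditions.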