
Acoustic salient event annotations
Citable as data publication
Parcerisas, C.; Schall, E.; Aubach, J.; Te Velde, K.; Slabbekoorn, H.; Debusschere, E.; Flanders Marine Institute (VLIZ); Alfred Wegener Institute; Leiden University (2024): Acoustic salient event annotations. Marine Data Archive. https://doi.org/10.14284/667

Availability: This dataset is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).

Description
This repository contains all the data used to produce the results of the publication: Parcerisas, C.; Schall, E.; Te Velde, K.; Botteldooren, D.; Devos, P.; Debusschere, E. (2024). Machine learning for efficient segregation and labeling of potential biological sounds in long-term underwater recordings. Frontiers in Remote Sensing. doi: 10.3389/frsen.2024.1390687. All the scripts needed to reproduce the publication results are available on GitHub: https://github.com/lifewatch/sound-segregation-and-categorization

Studying marine soundscapes by detecting known sound events and quantifying their spatio-temporal patterns can provide ecologically relevant information. However, exploring underwater sound data to find and identify possible sound events of interest can be highly time-intensive for human analysts. To speed up this process, we propose a novel methodology that first detects all potentially relevant acoustic events and then clusters them in an unsupervised way prior to manual revision, and we demonstrate its applicability on a short deployment. To detect acoustic events, a deep learning object detection algorithm from computer vision (YOLOv8) is re-trained to detect any (short) acoustic event. The audio is converted to spectrograms using sliding windows longer than the expected sound events of interest; the model detects every event present in a window and provides its time and frequency limits, so multiple simultaneous events can be detected. To further reduce the human input needed to create the training annotations, we propose an active learning approach that iteratively selects the most informative audio files for manual annotation. The detection models are trained and tested on a dataset from the Belgian Part of the North Sea, and then further evaluated for robustness on a freshwater dataset from major European rivers. The proposed active learning approach outperforms random file selection on both the marine and the freshwater datasets. Once detected, the events are projected into an embedded feature space using the BioLingual model, which is trained to classify different (biological) sounds. The resulting representations are clustered in an unsupervised way into distinct sound classes, which are then manually revised. This method can be applied to unseen data as a tool to help bioacousticians identify recurrent sounds and save time when studying their spatio-temporal patterns: it reduces the time researchers need to go through long acoustic recordings and enables a more targeted analysis. It also provides a framework for monitoring soundscapes regardless of whether the sound sources are known.
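As a rough, non-authoritative sketch of the detect-then-cluster workflow described above (the actual implementation is in the linked GitHub repository), the Python snippet below slices a recording into spectrogram windows, runs a re-trained YOLOv8 detector on each window, and clusters the detections into candidate sound classes. The weight file name, window length, cluster count, and the embed_event helper are hypothetical placeholders; in particular, the paper uses BioLingual embeddings rather than the flattened-patch stand-in shown here.

import numpy as np
import soundfile as sf
from scipy.signal import spectrogram
from sklearn.cluster import KMeans
from ultralytics import YOLO

def spectrogram_windows(wav_path, win_s=10.0):
    # Yield (start time, dB spectrogram) for consecutive sliding windows
    # longer than the expected sound events, as described in the abstract.
    audio, fs = sf.read(wav_path)
    if audio.ndim > 1:                 # mix down multichannel recordings
        audio = audio.mean(axis=1)
    win = int(win_s * fs)
    for start in range(0, len(audio) - win + 1, win):
        _, _, sxx = spectrogram(audio[start:start + win], fs=fs)
        yield start / fs, 10 * np.log10(sxx + 1e-12)

def embed_event(patch):
    # Hypothetical stand-in for the BioLingual embedding used in the paper:
    # just flatten/repeat the detected patch to a fixed-length vector.
    return np.resize(patch.astype(np.float32), 256)

model = YOLO("salient_events.pt")      # hypothetical re-trained weights
features = []
for t0, spec in spectrogram_windows("recording.wav"):
    # YOLOv8 expects a 3-channel uint8 image; rescale the spectrogram.
    lo, hi = spec.min(), spec.max()
    img = ((spec - lo) / (hi - lo + 1e-9) * 255).astype(np.uint8)
    rgb = np.repeat(img[:, :, None], 3, axis=2)
    for box in model.predict(rgb, verbose=False)[0].boxes.xyxy.tolist():
        x1, y1, x2, y2 = map(int, box)  # pixel limits = time/frequency limits
        features.append(embed_event(spec[y1:y2, x1:x2]))

# Unsupervised grouping of the detections into candidate sound classes
# for subsequent manual revision (the cluster count is an assumption).
labels = KMeans(n_clusters=8, n_init="auto").fit_predict(np.stack(features))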

Scope
Themes:
Biology > Acoustics
Keywords:
Marine/Coastal, Fresh water, Acoustic data, Annotations, Event detection, Underwater raw acoustic files, Unknown underwater sounds, Belgian part of the North Sea, Europe

Geographical coverage
Belgian part of the North Sea [Marine Regions]

Parameters
Acoustic detections (methodology: underwater recording)
Human annotations (methodology: Raven Pro)

Contributors
Vlaams Instituut voor de Zee (VLIZ), data creator
Alfred Wegener Institute for Polar- and Marine Research; Ocean Acoustics Group (OZA), data creator
Universiteit Leiden; Faculteit Wetenschappen; Institute of Biology, data creator

Related datasets
Parent datasets:
PhD_Parcerisas: Broadband Acoustic Network dataset
Lifewatch observatory data: Broadband acoustic sensor network in the Belgian Part of the North Sea

Project
Marine Soundscapes in Shallow Water: Automated Tools for Characterization and Analysis

Publication
Based on this dataset
Parcerisas, C. et al. (2024). Machine learning for efficient segregation and labeling of potential biological sounds in long-term underwater recordings. Front. Remote Sens. 5: 1390687. https://dx.doi.org/10.3389/frsen.2024.1390687
Describing this dataset
Flanders Marine Institute (VLIZ) (2024). Multipurpose seabed moorings: Developed for coastal dynamic seas. Oceanography Suppl.: in prep.

Dataset status: Completed
Data type: Data
Data origin: Monitoring: field survey
Metadata record created: 2024-04-08
Information last updated: 2024-06-11