Published January 14, 2019
Julie Cattiau, Google AI Product Manager
Like us, whales sing. But unlike us, their songs can travel hundreds of miles underwater. Those songs potentially help them find a partner, communicate and migrate around the world. But what if we could use these songs and machine learning to better protect them?

Despite decades of being protected against whaling, 15 species of whales are still listed under the Endangered Species Act. Even species that are successfully recovering, such as humpback whales, suffer from threats like entanglement in fishing gear and collisions with vessels, which are among the leading causes of non-natural deaths for whales.

To better protect these animals, the first step is to know where they are and when, so that we can mitigate the risks they face, whether that's putting the right marine protected areas in place or giving warnings to vessels. Since most whales and dolphins spend very little time at the surface of the water, visually finding and counting them is very difficult. This is why NOAA's Pacific Islands Fisheries Science Center, responsible for monitoring populations of whales and other marine mammals in U.S. Pacific waters, relies instead on listening with underwater audio recorders.
NOAA has been using High-frequency Acoustic Recording Packages (HARPs) to record underwater audio at 12 different sites in the Pacific Ocean, some starting as early as 2005. They have accumulated over 170,000 hours of underwater audio recordings. It would take over 19 years for someone to listen to all of it, working 24 hours a day!
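(170,000 hours divided by 24 hours per day is roughly 7,100 days of continuous listening, or about 19.4 years.)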
To help tackle this problem, we teamed up with NOAA to train a deep neural network that automatically identifies which whale species are calling in these very long underwater recordings, starting with humpback whales. The effort fits into our AI for Social Good program, applying the latest in machine learning to the world’s biggest social, humanitarian and environmental challenges.
The problem of picking out humpback whale songs underwater is particularly difficult to solve for several reasons. Underwater noise conditions can vary: for example, the presence of rain or boat noises can confuse a machine learning model. The distance between a recorder and the whales can cause the calls to be very faint. Finally, humpback whale calls are particularly difficult to classify because they are not stereotyped like blue or fin whale calls—instead, humpbacks produce complex songs and a variety of vocalizations that change over time.
We decided to leverage Google’s existing work on large-scale sound classification and train a humpback whale classifier on NOAA’s partially annotated underwater data set. We started by turning the underwater audio data into a visual representation of the sound called a spectrogram, and then showed our algorithm many example spectrograms that were labeled with the correct species name. The more examples we can show it, the better our algorithm gets at automatically identifying those sounds. For a deeper dive (ahem) into the techniques we used, check out our Google AI blog post. To find out how this project was started, read the NOAA Fisheries blog post by Research Oceanographer Ann Allen.
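To make those two steps concrete, here is a minimal sketch, not the actual pipeline described in the Google AI blog post: it converts raw audio into a log-scaled spectrogram with SciPy and trains a small convolutional classifier on labeled examples with TensorFlow. The sample rate, window sizes, model architecture and the synthetic training data are all illustrative assumptions.

```python
# Minimal sketch (not Google's or NOAA's actual pipeline): audio -> spectrogram,
# then a small supervised classifier trained on labeled examples.
import numpy as np
from scipy import signal
import tensorflow as tf


def audio_to_spectrogram(waveform, sample_rate=10000):
    """Convert a 1-D audio waveform into a log-magnitude spectrogram."""
    _, _, spec = signal.spectrogram(waveform, fs=sample_rate,
                                    nperseg=256, noverlap=128)
    return np.log(spec + 1e-6)  # log scale compresses the dynamic range


def build_model(input_shape):
    """A small CNN: spectrogram in, probability of 'humpback present' out."""
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape + (1,)),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])


if __name__ == "__main__":
    # Synthetic stand-ins for annotated clips; real training would iterate
    # over NOAA's labeled HARP recordings.
    rng = np.random.default_rng(0)
    waveforms = rng.standard_normal((32, 10000))            # 32 one-second clips
    labels = rng.integers(0, 2, size=32).astype("float32")  # 1 = humpback call

    specs = np.stack([audio_to_spectrogram(w) for w in waveforms])[..., None]
    model = build_model(specs.shape[1:3])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(specs, labels, epochs=2, batch_size=8)
```

In the real system, the labels come from NOAA's partially annotated recordings rather than random noise, and both the spectrogram front end and the deep neural network are considerably more sophisticated; the sketch only illustrates the overall shape of the approach.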
Now that we can find and identify humpback whales in these recordings, we can better understand where they are and where they are going.