AI for wildlife management
Several projects are using artificial intelligence to analyze camera images, helping researchers quickly leverage field-collected data.
With coyote attacks on humans in cities and suburbs making headlines – coyotes injured two people in Chicago earlier this month – officials could tap into a data repository to get a better handle on what’s bringing the area's animals into such close proximity to humans.
Called eMammal, the tool has been around for several years in one form or another and has helped researchers manage camera-trapping projects. It uses a data pipeline that takes images and metadata from the field through a cloud-based review process and into SIdora, a Smithsonian Institution data repository. To date, eMammal has data on more than 1 million detections of wildlife worldwide, including in cities.
Smithsonian researchers collaborated with others at the North Carolina Museum of Natural Sciences, Conservation International and the Wildlife Conservation Society to develop an open standard for camera trap metadata -- the Camera Trap Metadata Standard -- as part of the eMammal project. Camera traps are ruggedized cameras that researchers place in forests, jungles, grasslands, cities and elsewhere to capture images of mammals. Those images are then tagged with metadata and added to the eMammal website, where anyone can use the data to better understand trends, such as coyotes’ migration.
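In practice, a tagged image under such a standard reduces to a structured record. A minimal sketch of what one record might look like follows; the field names and values here are illustrative assumptions, not the Camera Trap Metadata Standard's exact schema.

```python
# Illustrative camera trap record. Field names and values are assumptions
# for illustration, not the Camera Trap Metadata Standard's actual schema.
camera_trap_record = {
    "project_id": "snapshot-usa-2019",     # hypothetical project identifier
    "deployment_id": "nc-raleigh-cam-07",  # hypothetical camera deployment
    "species": "Canis latrans",            # coyote, recorded as a scientific name
    "camera_type": "Reconyx PC800",        # hypothetical camera model
    "latitude": 35.7796,
    "longitude": -78.6382,
    "timestamp": "2019-10-14T03:22:41Z",   # ISO 8601 date and time of capture
}
```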
Wildlife Insights: Camera Trap Data Network, a collaboration among the four organizations, updates and maintains the data standard, and provides application programming interfaces for sharing and accessing the data.
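To show roughly what programmatic access to such data looks like, the sketch below queries a hypothetical detections endpoint. The URL, parameters and response shape are placeholders, not the actual Wildlife Insights API.

```python
import requests

# Placeholder endpoint -- the real Wildlife Insights API is documented by the
# project itself; this only sketches the general access pattern.
BASE_URL = "https://api.example.org/v1/detections"

response = requests.get(
    BASE_URL,
    params={"species": "Canis latrans", "country": "US", "limit": 100},
    timeout=30,
)
response.raise_for_status()
for detection in response.json().get("detections", []):
    print(detection["deployment_id"], detection["timestamp"])
```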
EMammal works like this: Researchers upload the video from a camera trap to a desktop application that forces them to use the standard metadata structure. They manually enter metadata such as the species, the camera type, the latitude and longitude of the camera trap and the date and time of the image capture. Next, the metadata and images go to the cloud -- currently, eMammal uses Amazon Web Services but is moving to Microsoft Azure -- where an expert reviews the information for accuracy before it is added to the repository and becomes publicly accessible.
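A desktop application that forces a metadata structure is, at bottom, a validator. Here is a minimal sketch of such a check, with field names and rules assumed for illustration:

```python
REQUIRED_FIELDS = ("species", "camera_type", "latitude", "longitude", "timestamp")

def validate_record(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record can be uploaded."""
    problems = [f"missing {field}" for field in REQUIRED_FIELDS
                if record.get(field) is None]
    lat, lon = record.get("latitude"), record.get("longitude")
    if lat is not None and not -90 <= lat <= 90:
        problems.append("latitude out of range")
    if lon is not None and not -180 <= lon <= 180:
        problems.append("longitude out of range")
    return problems
```

Run against a record like the one sketched above, validate_record would return an empty list, clearing the record for upload.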
“EMammal is based on a distributed network of contributors” working on different projects and adding their data to the cloud, according to Bill McShea, eMammal lead and wildlife ecologist at the Smithsonian Institution’s National Zoo and Conservation Biology Institute.
One current eMammal project is called Snapshot USA. Researchers nationwide use the tool to collect data under the same protocol to get a snapshot of the mammals in all 50 states. “That’s my dream for something like eMammal because I’m very jealous of the bird community,” which has thousands of birdwatchers collecting data in the same way and putting it into the same system, McShea said. They can “look at changes over time in bird populations and changes in migratory patterns through their volunteer network,” he said. “They’re able to tell a lot about how bird populations are doing in North America. We have nothing like it for mammals.”
Last September and October, 109 cooperators collected mammal data and submitted it to eMammal, where it is currently in the review stage.
EMammal grew out of Smithsonian WILD! (SI Wild), a collection of wildlife images from institution projects that shared basic information such as species and country where the image was taken.
“We wanted to turn that more into a data repository, something that is the equivalent of the museum specimens that everybody knows the Smithsonian for,” McShea said. “If you can take an image and add metadata to it of the person that got the image, the camera they used, the location, the time, the date, the species, then it becomes like the museum specimens," he said. "That allows you to be much more current with tracking biodiversity and a lot more ubiquitous. Rather than the Smithsonian mounting an expedition to someplace and collecting specimens and bringing them back, you can have a whole bunch of people out there setting up cameras and sending back stuff continuously.”
Wildlife Insights, an online portal launched in December to help wildlife managers around the world identify wildlife from camera traps, is applying artificial intelligence to automatically recognize and categorize image data. The organization, whose members include the Smithsonian, Google and the San Diego Supercomputer Center, has trained models to recognize 614 species, according to its website.
Users upload camera trap data to the Google cloud, where they can run AI models that filter out the blank images and classify animals by species. Those models would reduce the manual metadata work researchers have to do, and eventually eMammal could be rolled into Wildlife Insights, McShea said.
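The pattern described here -- discard blank frames, then classify what remains -- can be sketched as a two-stage triage. The two model callables below are stand-ins for trained models, not Wildlife Insights' actual code:

```python
# Two-stage triage: drop probable blanks, then classify species.
# `blank_detector` and `species_classifier` stand in for trained models.
def triage(images, blank_detector, species_classifier, blank_threshold=0.9):
    labeled = []
    for image in images:
        if blank_detector(image) >= blank_threshold:  # probability the frame is empty
            continue                                   # skip blanks before human review
        species, confidence = species_classifier(image)
        labeled.append({"image": image, "species": species, "confidence": confidence})
    return labeled
```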
“We’ve built an API to move people and their data into eMammal and it moves over to Wildlife Insights on a daily basis. Eventually maybe we’ll be simply a piece inside of Wildlife Insights,” he said, adding that sensors on the cameras also hold promise for additional data collection. For example, acoustic sensors can detect bats, which would broaden the suite of species the repository includes.
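A daily hand-off of that kind typically amounts to a scheduled pull-and-push job. The sketch below shows the general shape; the URLs and record format are hypothetical, not the real eMammal or Wildlife Insights interfaces.

```python
import requests
from datetime import datetime, timedelta, timezone

# Hypothetical daily sync: pull records changed in the last 24 hours from one
# system and post them to another. URLs and payload shape are placeholders.
SOURCE = "https://emammal.example.org/api/records"        # placeholder URL
DEST = "https://wildlifeinsights.example.org/api/ingest"  # placeholder URL

since = (datetime.now(timezone.utc) - timedelta(days=1)).isoformat()
records = requests.get(SOURCE, params={"updated_since": since}, timeout=30).json()
for record in records:
    requests.post(DEST, json=record, timeout=30).raise_for_status()
```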
AI is also being tested on data from the Penguin Watch Project, which taps volunteers to help count penguins in Antarctica from aerial photos taken from drones and planes. A newly developed solution could enable researchers to use computer vision to count penguin populations faster and more accurately.
Data science company Gramener began with data from Zooniverse’s Penguin Watch tool, which consisted of penguin camera trap images that volunteers had manually tagged to indicate where penguins were in each image, according to a report in Datanami. The tagged data – nearly 50,000 images – was fed into a deep learning model built on Microsoft’s deep learning ecosystem to solve the problem of accurately counting the penguins.
The approach used a density-based counting method to estimate the number of penguins within a given group in an image. The results were validated on another dataset of 8,000 images, and eventually, it will give researchers a faster, more reliable, more accurate and more economical way to count penguins, the company said on its blog.
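Density-based counting sidesteps detecting each bird individually: a model predicts a per-pixel density map whose values sum to the estimated count, which holds up better when penguins huddle and overlap. The sketch below shows only that final summing step on a toy density map; the trained model that would produce the map is assumed.

```python
import numpy as np

def count_from_density(density_map: np.ndarray) -> float:
    """Estimated object count is the integral (sum) of the density map."""
    return float(density_map.sum())

# Toy density map: three Gaussian blobs, each normalized to integrate to ~1 penguin.
density_map = np.zeros((64, 64))
y, x = np.ogrid[:64, :64]
for cy, cx in [(10, 12), (30, 40), (50, 20)]:
    blob = np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / 8.0)
    density_map += blob / blob.sum()

print(round(count_from_density(density_map)))  # -> 3
```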