Take the camera trap, a pretty common technique used to study wildlife habits and biodiversity — and one that has been supported by an array of big-name tech companies. Except what researcher has the time or bandwidth to analyze thousands, let alone millions, of images? Enter systems such as Wildlife Insights, a collaboration between Google Earth and seven organizations, led by Conservation International.
Wildlife Insights is, quite simply, the largest database of public camera-trap images in the world — it includes 4.5 million photos that have been analyzed and mapped with AI for characteristics such as country, year, species and so forth. Scientists can use it to upload their own trap photos, visualize territories and gather insights about species health.
Here’s the jaw-dropper: This AI-endowed database can analyze 3.6 million photos in an hour, compared with the 300 to 1,000 images that you or I can handle. Depending on the species, the accuracy of identification is between 80 and 98.6 percent. Plus, the system automatically discounts shots where no animals are present: no more blanks.
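The blank-discarding step described above can be sketched as a simple confidence threshold over per-image classifier scores. This is a minimal illustration only: the function name, record format, and threshold value are hypothetical, not Wildlife Insights' actual pipeline.

```python
# Hypothetical sketch: discard camera-trap frames whose top classifier
# confidence falls below a "blank" threshold. Names/values are illustrative.

BLANK_THRESHOLD = 0.2  # below this, treat the frame as empty (no animal)

def triage(predictions):
    """Split (image_id, top_species, confidence) records into
    animal detections to review and blank frames to discard."""
    detections, blanks = [], []
    for image_id, species, confidence in predictions:
        if confidence >= BLANK_THRESHOLD:
            detections.append((image_id, species, confidence))
        else:
            blanks.append(image_id)
    return detections, blanks

# Example classifier output for four frames:
preds = [
    ("img_001", "jaguar", 0.93),
    ("img_002", "empty", 0.05),
    ("img_003", "tapir", 0.81),
    ("img_004", "empty", 0.10),
]
detections, blanks = triage(preds)
# detections keeps img_001 and img_003; img_002 and img_004 are dropped
```

The point of the sketch is the workflow benefit the article describes: researchers only ever see the frames worth reviewing, which is where the speed-up comes from.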
The net effect: faster analysis, faster insights. And let’s be frank, the rapidity with which global temperatures are rising, and habitats and unique species are disappearing, demands that the sustainability community move much faster than it has to make decisions and take action. If 2019 told us anything, it was that we are running out of time to act on climate change. Indeed, almost three-quarters of business decision-makers believe AI will be instrumental in driving solutions that improve environmental sustainability, according to research published last year by Intel.
At the same time, we are certainly right to be cautious about the potential side effects of AI. That theme comes through loud and clear in five AI predictions published by IBM in mid-December. Two resonate with me the most: first, the idea that AI will be instrumental in building trust and ensuring that data is governed in ways that are secure and reliable; and second, that before we get too excited about all the cool things AI might be able to do, we need to make sure it doesn't exacerbate the problem. That means spending more time on ways to make the data centers behind AI applications less energy-intensive and less impactful from a materials standpoint.
From an ethical standpoint, I also have two big concerns. First, that we put sufficient energy into ensuring the data behind the AI predictions we will increasingly rely on isn't flawed or biased. That means taking the time to make sure a diverse set of human perspectives is represented and that the numbers are right in the first place. And second, we must view these systems as part of the overall solution, not as replacements for human workers.
Regarding AI ethics, the AI World Society (AIWS) has initiated and promoted the design of the AIWS Ethics framework, built on four components for the constructive use of AI: transparency, regulation, promotion and implementation.
The original article can be found here.