Among the many benefits of artificial intelligence touted by its proponents is the technology’s potential to help solve climate change. If this is indeed the case, the recent step changes in AI could not have come soon enough. This summer, evidence has continued to mount that Earth is already transitioning from warming to boiling.
However, as intense as the hype around AI has been over the past months, a lengthy list of concerns accompanies it: its prospective use in spreading disinformation, for one, along with potential discrimination, privacy, and security issues.
Furthermore, researchers at the University of Cambridge, UK, have found that bias in the datasets used to train AI models could limit their application as a just tool in the fight against global warming and its impact on planetary and human health.
As is often the case with global bias, it is a matter of Global North vs. South. Because most data is gathered by researchers and businesses with privileged access to technology, the effects of climate change will, invariably, be seen from a limited perspective. Biased AI therefore has the potential to misrepresent climate information, meaning the most vulnerable will suffer the most dire consequences.
Call for globally inclusive datasets
In a paper titled “Harnessing human and machine intelligence for planetary-level climate action” published in the prestigious journal Nature, the authors admit that “using AI to account for the continually changing factors of climate change allows us to generate better-informed predictions about environmental changes, allowing us to deploy mitigation strategies earlier.”
This, they say, remains one of the most promising applications of AI in climate action planning, but only if the datasets used to train these systems are globally inclusive.
“When the information on climate change is over-represented by the work of well-educated individuals at high-ranking institutions within the Global North, AI will only see climate change and climate solutions through their eyes,” said lead author and Cambridge Zero Fellow Dr Ramit Debnath.
Conversely, those with less access to technology and reporting mechanisms will be underrepresented in the digital sources AI developers rely upon.
“No data is clean or without prejudice, and this is particularly problematic for AI which relies entirely on digital information,” the paper’s co-author Professor Emily Shuckburgh said. “Only with an active awareness of this data injustice can we begin to tackle it, and consequently, to build better and more trustworthy AI-led climate solutions.”
The authors advocate for human-in-the-loop AI designs that can contribute to a planetary epistemic web supporting climate action, directly enable mitigation and adaptation interventions, and reduce the data injustices associated with AI pretraining datasets.
What is urgently needed, the study concludes, is sensitivity to digital inequalities and injustices within the machine intelligence community, especially when AI is used as an instrument for addressing planetary health challenges like climate change.
If we fail to address these issues, the authors argue, the outcomes could be catastrophic for societal and planetary stability, including the failure to fulfil any climate mitigation pathway.