How 4 AI Coalitions Are Powering Sustainable AI Progress


The AI Action Summit has launched four initiatives driving sustainable AI progress through energy transparency, environmental standards, public-interest AI resources, and online safety infrastructure.

At the Paris AI Action Summit, the air buzzed with grand declarations — mostly financial ones. Europe is committing new funds to AI development. So is the United Arab Emirates. Stargate may be the biggest bet in town, but it’s not the only one.

The numbers are substantial: French President Emmanuel Macron has committed €109 billion for domestic AI initiatives, while the UAE is set to invest €50 billion in a European data center campus. Both are dwarfed by Stargate’s $500 billion AI project in the US, spearheaded by SoftBank, OpenAI, Oracle, and MGX.

Yet, beyond the financial pledges, tangible progress in key themes — such as Public Service AI, the Future of Work, Innovation and Culture, Trust in AI, and Global AI Governance — was harder to pinpoint. That’s where four new initiatives — each tackling AI’s sustainability problem from a different angle — come in.

1) AI Energy Score: Bringing Transparency to AI’s Power Hunger

In the U.S. alone, data center electricity consumption is projected to surge from 4.4% of total electricity use in 2023 to a staggering 6.7%–12% by 2028, primarily fueled by AI’s appetite. To meet this soaring demand, the U.S. is adding 46 gigawatts of natural gas capacity by 2030—equivalent to the entire electricity system of Norway.

“AI is increasingly integrating into our daily lives, impacting countless stakeholders, yet its environmental footprint remains unclear,” says Bruna Sellin Trevelin, Legal Counsel at Hugging Face. “Without greater transparency and clear frameworks, we can’t make truly informed decisions. Community-driven and international efforts like the AI Energy Score and the Coalition for Sustainable AI are essential to ensure that sustainability isn’t just a talking point but a real, shared commitment.”

The AI Energy Score is the brainchild of a coalition that includes Salesforce, Hugging Face, Cohere, and Carnegie Mellon University.

It represents the first standardized framework for measuring and comparing AI inference energy consumption — a previously opaque metric. The initiative has launched a public leaderboard of 166 models and a benchmarking portal for companies to submit additional models, both open-source and proprietary.
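For a sense of what “measuring inference energy” means in practice, here is a minimal, hypothetical sketch — it is not the AI Energy Score’s own benchmarking methodology. It polls GPU power draw through NVIDIA’s NVML Python bindings (pynvml) while an inference call runs, then integrates power over time to get joules per query; the function and model names are illustrative placeholders.

```python
# Hypothetical sketch, not the AI Energy Score's actual benchmark harness:
# estimate the GPU energy of one inference call by polling instantaneous
# power draw via NVIDIA's NVML bindings (pynvml) and integrating over time.
import threading
import time

import pynvml


def measure_inference_energy(run_inference, gpu_index=0, poll_interval_s=0.05):
    """Run `run_inference()` and return (elapsed_seconds, energy_joules)."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_index)

    samples = []  # (timestamp, watts) pairs
    start = time.time()

    # Run the workload in a worker thread so this thread can keep sampling power.
    worker = threading.Thread(target=run_inference)
    worker.start()
    while worker.is_alive():
        watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # mW -> W
        samples.append((time.time(), watts))
        time.sleep(poll_interval_s)
    worker.join()

    elapsed = time.time() - start
    # Trapezoidal integration of power over time yields energy in joules.
    energy_j = sum(
        (t2 - t1) * (p1 + p2) / 2.0
        for (t1, p1), (t2, p2) in zip(samples, samples[1:])
    )
    pynvml.nvmlShutdown()
    return elapsed, energy_j


# Example (hypothetical model object): profile a single text-generation call.
# elapsed_s, joules = measure_inference_energy(lambda: model.generate(prompt))
# print(f"{joules:.1f} J over {elapsed_s:.2f} s")
```

Comparable scores also require standardized hardware and workloads across models, which is precisely the gap the leaderboard is meant to close.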

Boris Gamazaychikov, Head of AI Sustainability at Salesforce, highlights its significance: “Until now, there was no clear way to compare AI models based on environmental impact, and proprietary models lacked a secure path for evaluation. The AI Energy Score eliminates these barriers, empowering businesses, AI developers, and policymakers to make informed decisions about low-impact AI models.”

He emphasizes the urgent stakes: “This is critical for the industry as a whole. The construction of new energy infrastructure and increased fossil fuel development to power AI data centers underscores the urgency of addressing this issue.”

2) Coalition for Sustainable AI: Aligning AI with the SDGs

‘Sustainable AI’ can be a slippery term. For some, it’s about reducing AI’s carbon footprint. For the organizations forming the ‘Coalition for Sustainable AI,’ it’s also about “ensuring AI contributes to the UN Agenda 2030 and Sustainable Development Goals (SDGs), with a focus on climate action and environmental protection.”

Led by France in collaboration with the UN Environment Programme (UNEP) and the International Telecommunication Union (ITU), this coalition has assembled an impressive 91 partners—including 37 tech companies, 11 countries, and five international organizations.

Though lacking binding financial commitments, the coalition’s mission is clear: standardize measurements of AI’s environmental impact, incentivize energy-efficient AI development, and create solutions aligned with global sustainability goals.

The Summit also launched complementary initiatives: a scientific position paper on AI’s environmental performance, a ‘frugal AI’ hackathon, and the International Energy Agency’s first global observatory dedicated to monitoring AI and energy.

Sara Hooker, Head of Cohere For AI and VP at Cohere, affirms the coalition’s value: “We look forward to working with the Coalition for Sustainable AI along with other industry leaders as part of our broader commitment to AI efficiency. We’ve never believed that throwing endless compute at a problem is the solution, and as a frontier research lab, our key focus is reducing the amount of compute required for breakthroughs. This also empowers more access for everyday users and innovation.”

3) Current AI: Serving the Public (AI) Interest

Taking a broader view beyond environmental concerns, Current AI aims to fundamentally reshape the AI landscape through large-scale initiatives that serve the public interest.

With an initial $400 million investment from the French government, the AI Collaborative, and other leading governments and industry partners, Current AI is ambitious, targeting $2.5 billion in funding over five years.

The initiative stands on three pillars: expanding access to high-value, locally relevant data in sectors like healthcare, media, and education; promoting open standards to ensure AI systems remain transparent, adaptable, and inclusive; and developing robust frameworks for accountability, transparency, auditing, and public engagement.

“Who benefits from AI will be determined by the choices we make today,” says Vidushi Marda, Director of Partnerships at Current AI. “With the right choices, we can create an open, public AI ecosystem, with access to valuable datasets, that enables collaboration and spurs the innovations that deliver tangible improvements to people’s lives.”

4) ROOST: Safeguarding the Digital Future

While investment dominated the Summit’s narrative, conversations among asset owners and investors from both sides of the Atlantic revealed persistent concerns around AI security and safety. Dr. Johannes Lenhard, CEO of VentureESG, notes: “People are aware of the potential adverse impacts and are keen to identify and mitigate those.”

Navrina Singh, Founder and CEO of Credo AI, underscores the urgency: “What’s becoming increasingly clear from the narrative and outcomes of the AI Summit is that organizations must take ownership of their own governance. While multiple benchmarks and evaluation tools already exist, there is a growing need for use-case and industry-specific evaluations, benchmarks, and standards. A key emerging theme is that companies cannot effectively leverage AI unless they establish proper governance.”

ROOST, a new initiative incubated by Columbia University with over $27 million in funding from tech companies and philanthropic entities, responds directly to these concerns.

Its primary mission focuses on online safety, with particular emphasis on child protection in the AI era. As generative AI increases the risks of harmful content, ROOST aims to create scalable, interoperable safety infrastructure, starting with a suite of free tools to detect and mitigate online child exploitation materials.

“We need real alternatives to closed, black-box AI,” Lenhard adds. “We believe very strongly that early-stage investors, especially, need to scrutinize the AI companies they invest in. Any coalition that raises awareness and spreads tools and resources to get investors onto this track is helpful. We need to have positive alternative visions – for Europe and the world.”


