According to a 2025 study published in Frontiers in Communication, the race for smarter AI has revealed a critical trade-off: advanced reasoning models, such as OpenAI's o3 and DeepSeek's R1, emit up to 50 times more carbon dioxide than simpler large language models (LLMs).
A study led by Maximilian Dauner at Munich University of Applied Sciences (Hochschule München) evaluated 14 large language models (LLMs) on 1,000 benchmark questions and found that reasoning-enabled models generated an average of 543.5 "thinking tokens" per question, versus just 37.7 for simpler models. These extra computational steps drive up energy use, and with it, emissions.
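The gap between those two token counts can be checked with simple arithmetic. The sketch below uses only the figures quoted in the article; the implied proportionality between tokens and energy is an illustration, not a claim from the study itself.

```python
# Back-of-envelope check of the study's token figures (values from the article).
reasoning_tokens = 543.5   # avg "thinking tokens" per question, reasoning models
concise_tokens = 37.7      # avg tokens per question, simpler models

ratio = reasoning_tokens / concise_tokens
print(f"Reasoning models generate ~{ratio:.1f}x more tokens per question")
```

That roughly 14x token overhead is incurred before the user ever sees an answer, which is why the workload is described as "hidden."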
This finding challenges the AI industry's pursuit of ever-higher accuracy. For instance, Cogito, a 70-billion-parameter model, achieved 84.9% accuracy but produced three times more CO₂ than similar-sized models giving brief responses. No model with emissions below 500 grams of CO₂ equivalent surpassed 80% accuracy, signaling a steep environmental cost for precision.
How Do Reasoning Processes Inflate Emissions?
Reasoning models employ extensive computational resources to simulate human-like problem-solving, significantly increasing their carbon footprint. A 2025 Live Science article explains that these models perform iterative calculations, such as chain-of-thought processing, before generating answers.
This "hidden" workload, invisible to users, accounts for their high energy consumption. For example, DeepSeek's R1, when answering 600,000 questions, emits CO₂ equivalent to a London-to-New York round-trip, while the Qwen 2.5 model could handle 1.9 million questions with similar accuracy at the same emissions level.
The study's data underscores the urgency of optimizing AI design. With data centers now consuming 4–5% of U.S. electricity, up from a historical 1–2% according to MIT, AI's growing energy footprint demands solutions that minimize computational overhead while maintaining performance.
Will Industry Giants Shift Priorities?
Major AI developers face mounting pressure to address their environmental impact. Google's 2025 sustainability report disclosed a 48% emissions increase since 2019, largely due to AI-driven data center growth. Meanwhile, a 2025 Indian Express analysis suggests that OpenAI and DeepSeek prioritize model accuracy to remain competitive, often sidelining sustainability. The study's findings show that simpler models, while less accurate, offer a greener alternative for tasks that do not require deep reasoning, and they urge companies to diversify their offerings accordingly.
Some firms are exploring mitigation strategies. A 2025 Time report notes that Microsoft is investing in renewable energy to run AI data centers, while smaller players experiment with energy-efficient algorithms. However, no industry-wide standard exists to cap AI emissions, leaving the balance between accuracy and sustainability unresolved.
Complex Queries Escalate Environmental Toll
The type of query significantly influences AI emissions. The Frontiers study found that complex subjects, like abstract algebra or philosophy, generate up to six times more CO₂ than straightforward topics, such as high school history. This variance stems from the additional reasoning steps required for nuanced answers. The finding suggests that how users formulate queries could play a role in reducing environmental impact, though awareness remains low.
Educating users about the carbon cost of their prompts could drive behavioral change. Dauner emphasized that transparency, such as displaying CO₂ estimates per query, might encourage more selective AI use, particularly for non-critical tasks.
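A per-query CO₂ display of the kind Dauner describes could be as simple as multiplying token counts by an emission factor. The sketch below is a hypothetical illustration: the `EMISSIONS_PER_TOKEN_G` figure is an assumed placeholder, not a number from the study.

```python
# Minimal sketch of a per-query CO2 estimate of the kind Dauner suggests
# surfacing to users. The per-token factor below is an assumption for
# illustration, not a figure from the study.
EMISSIONS_PER_TOKEN_G = 0.002  # grams CO2e per generated token (assumed)

def estimate_query_co2(thinking_tokens: int, answer_tokens: int) -> float:
    """Return an estimated cost in grams CO2e for a single query."""
    return (thinking_tokens + answer_tokens) * EMISSIONS_PER_TOKEN_G

# Compare a reasoning-style query (~544 thinking tokens, per the study's
# average) with a concise one (~38), assuming a 150-token visible answer.
print(f"Reasoning query: ~{estimate_query_co2(544, 150):.2f} g CO2e")
print(f"Concise query:   ~{estimate_query_co2(38, 150):.2f} g CO2e")
```

Whatever the true per-token factor, the ratio between the two estimates is dominated by the hidden thinking tokens, which is exactly the signal such a display would surface.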
Did you know?
In 2008, Google's data centers consumed 0.01% of global electricity, a figure that surged to 1–2% by 2025 due to AI and cloud computing growth. This increase of roughly 100-fold or more underscores the environmental challenge of scaling digital infrastructure.
No Easy Fix for AI's Carbon Surge
The path to sustainable AI is fraught with challenges. While hardware innovations, like AWS's energy-efficient Graviton4 chip, promise incremental gains, they fall short for reasoning-heavy models. Software optimizations, such as pruning redundant computations or using smaller models for specific tasks, show potential but require significant R&D. A 2025 Indian Express report warns that without concerted action, AI's carbon footprint could double by 2030, jeopardizing global climate goals.
What Lies Ahead for AI Reasoning Models?
The Frontiers study exposes a stark reality: AI reasoning models deliver superior accuracy at a staggering environmental cost, emitting up to 50 times more CO₂ than simpler LLMs. While complex queries and computational intensity drive this surge, solutions like efficient algorithms and renewable energy offer hope. Transparency and user awareness could further mitigate impact, but industry-wide action is lacking.