
Tool • AI & Climate

LLM Emissions Calculator

[Interactive calculator: choose a model and a token count (slider from 10 to 2,000, default 1,000) to see the estimated Energy, Water, and CO2e of a response, along with its approximate word count.]

Every question we ask a large language model consumes energy, evaporates water, and releases carbon. These are invisible costs, scattered across the data centers that power our daily conversations with AI.

The Emissions Counter translates this hidden footprint into numbers we can see. It is grounded in the framework proposed by Jegham et al. (2025), the first large-scale study to measure the environmental cost of AI inference (the act of generating text) rather than of training alone.

The research behind this tool combines:

  • Model-specific performance data: speed, latency, and token throughput.
  • Hardware power characteristics: H100, H200, and A100 GPU systems.
  • Regional infrastructure multipliers: PUE for electricity overhead, WUE for cooling water, and CIF for carbon intensity.

Together, these factors reveal how much real-world electricity, water, and carbon are embodied in a single prompt. A short GPT-4o query, for instance, consumes about 0.4 Wh, roughly 40 percent more than a Google search. Reasoning-heavy models such as DeepSeek-R1 or o3 can draw more than 30 Wh per response, equivalent to running a large TV for half an hour.
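In rough formula form, the per-response estimate looks like this (the notation is a sketch of the framework rather than the paper's exact accounting; e_token stands for the measured per-token server energy of a given model on given hardware):

\[
E = N_{\text{tokens}} \cdot e_{\text{token}} \cdot \mathrm{PUE}, \qquad
W = E \cdot \mathrm{WUE}, \qquad
C = E \cdot \mathrm{CIF}
\]

where E is energy in Wh, W is water in mL, and C is carbon in g. Since 1 L per kWh equals 1 mL per Wh, and 1 kg per kWh equals 1 g per Wh, a Wh figure converts directly: a 0.4 Wh query at a WUE of 1.0 L/kWh and a CIF of 0.5 kg/kWh works out to roughly 0.4 mL of water and 0.2 g of CO2e. (Applying WUE and CIF to the PUE-adjusted total is a simplification; conventions differ on whether cooling water scales with server energy or total facility energy.)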

The counter reports three quantities:

  • Energy (Wh) → the electricity used by servers to generate a reply.
  • Water (mL) → the freshwater lost to evaporation as data centers cool their processors.
  • CO2e (g) → the greenhouse gases emitted by the electricity that powers those systems.
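Here is a minimal Python sketch of that arithmetic, under assumed values: the per-token energy figure and the function itself are illustrative placeholders, not the study's published code, and the PUE, WUE, and CIF defaults sit inside the 2025 ranges cited in the notes below.

    def estimate_impact(tokens: int,
                        wh_per_token: float = 0.0004,  # assumed server energy per token
                        pue: float = 1.2,              # facility overhead multiplier
                        wue: float = 0.8,              # L/kWh (== mL/Wh), cooling water
                        cif: float = 0.45):            # kg/kWh (== g/Wh), grid carbon
        """Rough per-response footprint: (energy in Wh, water in mL, CO2e in g)."""
        energy_wh = tokens * wh_per_token * pue  # server draw scaled by overhead
        water_ml = energy_wh * wue               # simplified: WUE applied to total energy
        co2e_g = energy_wh * cif
        return energy_wh, water_ml, co2e_g

    energy, water, co2e = estimate_impact(tokens=1000)
    print(f"{energy:.2f} Wh, {water:.2f} mL, {co2e:.2f} g CO2e")
    # prints: 0.48 Wh, 0.38 mL, 0.22 g CO2e for a 1000-token response

The per-token energy is the quantity that actually distinguishes models: a reasoning model that emits many times more tokens per answer scales every downstream number with it.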

When scaled to hundreds of millions of queries each day, these small units compound dramatically. The study estimates that daily GPT-4o activity alone uses as much electricity as tens of thousands of U.S. homes, evaporates enough freshwater to meet the annual drinking needs of over a million people, and produces carbon that would require a Chicago-sized forest to offset.

As language models become faster and cheaper, our collective usage grows even faster, a pattern known as the Jevons Paradox. Efficiency gains per query cannot offset the environmental load if total demand keeps multiplying.

Understanding these hidden costs helps make AI's physical footprint visible, not to discourage use, but to invite transparency, accountability, and more sustainable infrastructure choices.

All metrics shown are estimates based on 2025 industry averages for data-center efficiency (PUE approximately 1.1-1.3), cooling water intensity (WUE approximately 0.3-1.2 L per kWh), and regional carbon factors (CIF approximately 0.35-0.6 kg CO2e per kWh).
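To see how much the infrastructure assumptions alone move the result, the same query can be bracketed with the low and high ends of those ranges (a hypothetical illustration, treating the 0.4 Wh figure as server-side energy before facility overhead):

    # Bracket one 0.4 Wh query with the cited 2025 infrastructure ranges.
    server_wh = 0.4
    for label, pue, wue, cif in [("low end", 1.1, 0.3, 0.35),
                                 ("high end", 1.3, 1.2, 0.60)]:
        total_wh = server_wh * pue
        print(f"{label}: {total_wh:.2f} Wh, "
              f"{total_wh * wue:.2f} mL, {total_wh * cif:.2f} g CO2e")
    # low end:  0.44 Wh, 0.13 mL, 0.15 g CO2e
    # high end: 0.52 Wh, 0.62 mL, 0.31 g CO2e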

The values shown represent operational (Scope 1 + 2) impacts during model inference and exclude manufacturing emissions. They vary by hardware type, energy source, and deployment region. The counter translates these parameters into real-time approximations of electricity, water, and carbon embodied in each generated response.
