Compare the environmental impact of popular AI models. Estimated CO₂e emissions, energy consumption, and overall rankings based on our calibrated environmental model.
Rank | Model | Provider | Parameters | CO₂e per 1M Tokens (Avg ±; Input / Output) | Energy per 1M Tokens (Avg ±; Input / Output) | Cost per Token (Input / Output; Avg) |
---|---|---|---|---|---|---|
#1 | moonshotai/kimi-vl-a3b-thinking | moonshotai | 0.75B active (3B total) | ~62.32 µg (±25%)🟢 31.16 µg/93.49 µg | ~119.9mWh (±25%) 59.9mWh/179.8mWh | $0.0000/$0.0000 Avg: $0.0000 |
#2 | anthropic/claude-3.5-haiku | anthropic | 13B Est. | ~136 µg (±25%)🟢 67.99 µg/204 µg | ~523.0mWh (±25%) 261.5mWh/784.5mWh | $0.000001/$0.000004 Avg: $0.000002 |
#3 | meta-llama/llama-4-maverick | meta-llama | 4.25B active (17B total) | ~137.5 µg (±25%)🟢 68.73 µg/206.2 µg | ~624.8mWh (±25%) 312.4mWh/937.3mWh | $0.0000/$0.000001 Avg: $0.0000 |
#4 | meta-llama/llama-4-scout | meta-llama | 4.25B active (17B total) | ~137.5 µg (±25%)🟢 68.73 µg/206.2 µg | ~624.8mWh (±25%) 312.4mWh/937.3mWh | $0.0000/$0.0000 Avg: $0.0000 |
#5 | meta-llama/llama-3.1-8b-instruct | meta-llama | 8B | ~226.4 µg (±25%)🟢 113.2 µg/339.6 µg | ~1.0Wh (±25%) 514.6mWh/1.5Wh | $0.0000/$0.0000 Avg: $0.0000 |
#6 | z-ai/glm-4.5 | z-ai | 7B Est. | ~234.6 µg (±25%)🟢 117.3 µg/351.8 µg | ~521.3mWh (±25%) 260.6mWh/781.9mWh | $0.0000/$0.000001 Avg: $0.000001 |
#7 | moonshotai/kimi-k2 | moonshotai | 7B Est. | ~249.3 µg (±25%)🟢 124.6 µg/373.9 µg | ~479.4mWh (±25%) 239.7mWh/719.1mWh | $0.0000/$0.000002 Avg: $0.000001 |
#8 | anthropic/claude-3.7-sonnet | anthropic | 30B Est. | ~340 µg (±25%)🟢 170 µg/509.9 µg | ~1.3Wh (±25%) 653.8mWh/2.0Wh | $0.000003/$0.000015 Avg: $0.000009 |
#9 | anthropic/claude-opus-4.1 | anthropic | 175B Est. | ~340 µg (±25%)🟢 170 µg/509.9 µg | ~1.3Wh (±25%) 653.8mWh/2.0Wh | $0.000015/$0.000075 Avg: $0.000045 |
#10 | openai/o3-mini | openai | 13B Est. | ~360 µg (±25%)🟢 180 µg/540 µg | ~1.3Wh (±25%) 642.9mWh/1.9Wh | $0.000001/$0.000004 Avg: $0.000003 |
#11 | openai/o3-pro | openai | 175B Est. | ~360 µg (±25%)🟢 180 µg/540 µg | ~1.3Wh (±25%) 642.9mWh/1.9Wh | $0.00002/$0.00008 Avg: $0.00005 |
#12 | qwen/qwen3-30b-a3b-instruct-2507 | qwen | 7.5B active (30B total) | ~623.2 µg (±25%)🟢 311.6 µg/934.9 µg | ~1.2Wh (±25%) 599.3mWh/1.8Wh | $0.0000/$0.0000 Avg: $0.0000 |
#13 | x-ai/grok-4 | x-ai | 30B Est. | ~733 µg (±25%)🟢 366.5 µg/1.10 mg | ~1.6Wh (±25%) 814.5mWh/2.4Wh | $0.000003/$0.000015 Avg: $0.000009 |
#14 | openai/gpt-5-nano | openai | 8B Est. | ~960 µg (±25%)🟢 480 µg/1.44 mg | ~3.4Wh (±25%) 1.7Wh/5.1Wh | $0.0000/$0.0000 Avg: $0.0000 |
#15 | mistralai/ministral-8b | mistralai | 8B | ~1.955 mg (±25%)🟢 977.4 µg/2.932 mg | ~4.3Wh (±25%) 2.2Wh/6.5Wh | $0.0000/$0.0000 Avg: $0.0000 |
#16 | meta-llama/llama-guard-4-12b | meta-llama | 12B | ~39.93 mg (±25%)🟡 19.96 mg/59.89 mg | ~181.5Wh (±25%) 90.7Wh/272.2Wh | $0.0000/$0.0000 Avg: $0.0000 |
#17 | google/gemini-2.0-flash-001 | google | 300B Est. | ~96.15 mg (±25%)🟡 48.07 mg/144.2 mg | ~801.2Wh (±25%) 400.6Wh/1201.8Wh | $0.0000/$0.0000 Avg: $0.0000 |
#18 | mistralai/mistral-small-3.2-24b-instruct | mistralai | 24B | ~135.1 mg (±25%)🟠 67.56 mg/202.7 mg | ~300.3Wh (±25%) 150.1Wh/450.4Wh | $0.0000/$0.0000 Avg: $0.0000 |
#19 | meta-llama/llama-3.3-70b-instruct | meta-llama | 70B | ~163 mg (±25%)🟠 81.52 mg/244.6 mg | ~741.1Wh (±25%) 370.5Wh/1111.6Wh | $0.0000/$0.0000 Avg: $0.0000 |
#20 | mistralai/codestral-2508 | mistralai | 22B Est. | ~176.9 mg (±25%)🟠 88.47 mg/265.4 mg | ~393.2Wh (±25%) 196.6Wh/589.8Wh | $0.0000/$0.000001 Avg: $0.000001 |
#21 | openai/gpt-5-mini | openai | 20B Est. | ~197.5 mg (±25%)🟠 98.75 mg/296.3 mg | ~705.4Wh (±25%) 352.7Wh/1058.1Wh | $0.0000/$0.000002 Avg: $0.000001 |
#22 | deepseek/deepseek-chat-v3.1 | deepseek | 37B active (671B total) | ~253 mg (±25%)🟠 126.5 mg/379.5 mg | ~486.6Wh (±25%) 243.3Wh/729.9Wh | $0.0000/$0.000001 Avg: $0.000001 |
#23 | z-ai/glm-4.5-air | z-ai | 100B Est. | ~266.7 mg (±25%)🟠 133.3 mg/400 mg | ~592.6Wh (±25%) 296.3Wh/888.9Wh | $0.0000/$0.000001 Avg: $0.000001 |
#24 | openai/gpt-5-chat | openai | 175B Est. | ~286.5 mg (±25%)🟠 143.2 mg/429.7 mg | ~1023.2Wh (±25%) 511.6Wh/1534.8Wh | $0.000001/$0.00001 Avg: $0.000006 |
#25 | qwen/qwen3-coder | qwen | 120B active (480B total) | ~340.1 mg (±25%)🟠 170.1 mg/510.2 mg | ~654.1Wh (±25%) 327.0Wh/981.1Wh | $0.0000/$0.000001 Avg: $0.000001 |
#26 | google/gemini-2.5-flash | google | 300B Est. | ~343.4 mg (±25%)🟠 171.7 mg/515.1 mg | ~2861.5Wh (±25%) 1430.8Wh/4292.3Wh | $0.0000/$0.000003 Avg: $0.000001 |
#27 | google/gemini-2.5-pro | google | 300B Est. | ~343.4 mg (±25%)🟠 171.7 mg/515.1 mg | ~2861.5Wh (±25%) 1430.8Wh/4292.3Wh | $0.000001/$0.00001 Avg: $0.000006 |
#28 | qwen/qwen3-235b-a22b-2507 | qwen | 58.75B active (235B total) | ~401.8 mg (±25%)🟠 200.9 mg/602.6 mg | ~772.6Wh (±25%) 386.3Wh/1158.9Wh | $0.0000/$0.0000 Avg: $0.0000 |
#29 | qwen/qwen3-235b-a22b-thinking-2507 | qwen | 58.75B active (235B total) | ~401.8 mg (±25%)🟠 200.9 mg/602.6 mg | ~772.6Wh (±25%) 386.3Wh/1158.9Wh | $0.0000/$0.0000 Avg: $0.0000 |
#30 | baidu/ernie-4.5-300b-a47b | baidu | 75B active (300B total) | ~482.6 mg (±25%)🟠 241.3 mg/723.9 mg | ~1072.4Wh (±25%) 536.2Wh/1608.6Wh | $0.0000/$0.000001 Avg: $0.000001 |
#31 | x-ai/grok-3-mini | x-ai | 30B Est. | ~603.2 mg (±25%)🟠 301.6 mg/904.8 mg | ~1340.5Wh (±25%) 670.3Wh/2010.8Wh | $0.0000/$0.000001 Avg: $0.0000 |
#32 | anthropic/claude-opus-4 | anthropic | 400B Est. | ~618.4 mg (±25%)🟠 309.2 mg/927.6 mg | ~2378.4Wh (±25%) 1189.2Wh/3567.6Wh | $0.000015/$0.000075 Avg: $0.000045 |
#33 | x-ai/grok-3 | x-ai | 200B Est. | ~666.7 mg (±25%)🟠 333.3 mg/1.00 g | ~1481.5Wh (±25%) 740.8Wh/2222.3Wh | $0.000003/$0.000015 Avg: $0.000009 |
#34 | anthropic/claude-sonnet-4 | anthropic | 200B Est. | ~773 mg (±25%)🟠 386.5 mg/1.159 g | ~2973.0Wh (±25%) 1486.5Wh/4459.5Wh | $0.000003/$0.000015 Avg: $0.000009 |
#35 | openai/gpt-5 | openai | 400B Est. | ~1.637 g (±25%)🔴 818.6 mg/2.456 g | ~5846.9Wh (±25%) 2923.4Wh/8770.3Wh | $0.000001/$0.00001 Avg: $0.000006 |
#36 | mistralai/mistral-medium-3.1 | mistralai | 500B Est. | ~4.167 g (±25%)🔴 2.083 g/6.25 g | ~9259.7Wh (±25%) 4629.8Wh/13889.5Wh | $0.0000/$0.000002 Avg: $0.000001 |
Help us make this environmental data more accurate. We're constantly working to improve our measurements, and if you're a model provider we'd love to collaborate with you to ensure we're representing your models fairly and accurately.
These environmental impact estimates are produced by our calibrated models using publicly available data and research. Actual emissions may vary significantly based on hardware configuration, data center efficiency, energy sources, and usage patterns. We encourage providers to share more accurate data to improve these estimates. This data is provided for informational purposes and should not be considered a definitive environmental impact assessment.
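The core conversion behind any estimate of this kind is energy consumption multiplied by a grid carbon-intensity factor. A minimal sketch of that relationship follows; the intensity value used here is an illustrative global-average assumption, not the calibration factor behind the table above, and real estimates also fold in hardware efficiency, data-center PUE, and regional energy mix.

```python
def co2e_grams(energy_wh: float, grid_g_per_wh: float) -> float:
    """Convert energy consumption (Wh) into grams of CO2e using a
    grid carbon-intensity factor (grams CO2e emitted per Wh)."""
    return energy_wh * grid_g_per_wh


# Illustrative assumption: ~0.4 gCO2e/Wh is a rough global-average
# grid mix; a data center running on renewables would use a far
# lower factor, and a coal-heavy grid a higher one.
estimate = co2e_grams(energy_wh=1000.0, grid_g_per_wh=0.4)
print(f"~{estimate:.0f} g CO2e per 1M tokens")  # ~400 g CO2e per 1M tokens
```

The same formula explains why identical energy figures can yield very different emissions rankings once region-specific carbon intensity is taken into account.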