4 Comments
XP

Are people actually arguing "normalization"? It wouldn't surprise me, but I hadn't heard that one before. It's a bit late in the game, though. Normalization is now a cultural phenomenon, not a meme secretly transmitted by seeing someone use ChatGPT in public, or by crossing a picket line.

There could actually be a per-prompt anti-normalization effect: "I see everyone around me using AI so much, it really must be destroying the planet!" or "Look at them! I'll never be one of those drones who hands off their thinking to AI!"

Either way, there's a natural ceiling to personal AI use, and personal use already mostly doesn't require today's smartest models. (Though for now ChatGPT-5 Thinking still does a lot better at creating recipes.) It's just that the ceiling is a lot higher than critics are comfortable with.

Richard Y Chappell

Trying to account for "contagion" is tricky. On the one hand, the causal effect doesn't stop after one round of spread, so you end up with exponential causal responsibility. On the other hand, if you have exponential spread happening anywhere, then pretty soon everyone has been infected, so maybe your contribution doesn't make any *counterfactual* difference: unless *every* spreader is neutralized, the end result is inevitable no matter what you in particular do.

[Edited to add: I take this to show that we shouldn't really model "normalization" as a form of unlimited contagion. But it's left a bit of a mystery how we *should* think about it.]

theobserver

"You have to prompt ChatGPT 2000 times in a day to increase your emissions by 1%"

Welp . . . based on my current project . . . good-bye, planet.

Kamran Shah

Am I correct in reading this as saying that the vast majority of the carbon output comes from energy use? If all the energy used throughout the whole process were nuclear, do you have any estimate of how much that would lower the carbon output?