69 Comments
Karen Hao's avatar

Hi Andy, thanks so much for engaging with Empire of AI and taking the time to write up your thoughts. Most of your critiques here are based in philosophical differences with me, and I lay out my philosophical stance transparently in the book, so I won't repeat them here.

But I did want to come back to you on the numbers for Cerrillos. The figure I mention for Cerrillos' water consumption, 5,097,946 liters in 2019, comes not from a publicly available study but from a freedom of information request that was filed with SMAPA (Servicio Municipal de Agua Potable y Alcantarillado), the agency which oversees these figures. I'm not sure how to attach a document to a Substack comment, so I'll send you the original document with their response in an email.

Here is the text from the document (originally in Spanish, translated to English):

---------

Response:

To whom it may concern, please note the following:

1. First, I request information on the total liters of water consumed in residential use within the territory of two municipalities: Cerrillos and Maipú (both belonging to the Metropolitan Region), during the years 2018 and 2019, respectively; as recorded by the Municipal Water and Sewerage Service (SMAPA) of the Maipú municipality. ((Karen's note: this is just a copy and paste of the original request))

A: The following table shows the amounts for residential customers:

CERRILLOS: 2018: 4,911,432.46 | 2019: 5,097,946.72

MAIPU: 2018: 33,079,905.63 | 2019: 35,322,547.1

---------

To your point, these numbers initially seemed strange to me. So I asked my collaborator, a Chilean journalist, to follow up with SMAPA and ask whether they had gotten their units wrong and given the numbers in cubic meters instead of liters. Unfortunately, they never got back to us. So instead we spoke with various Chilean journalists and organizations who had followed this story over the previous years, who corroborated these numbers. Based on the document as well as the other corroborations, we proceeded with the number as written.

Given your questions about this number again, I have asked my collaborator to once again follow up with SMAPA. And if they continue to not respond via email, she is going to go in person to their offices to get to the bottom of this. I will keep you posted on the outcome.

Thanks again for raising these questions.

Andy Masley's avatar

Hey Karen! First of all I want to massively thank you for replying to this. I realize this was a lot of very specific criticism and that you’re very busy. Also thank you for taking the time to find and email me your water source.

First and most importantly, the source you sent me actually strongly confirms my criticism, and unfortunately this does mean your measurement of the data center’s proportion is 1000x too high. My criticism specifically is that you misread cubic meters as liters, and this is making your estimate of how much water the city uses 1000x lower than it should actually be (and thus making the data center seem like it’s using 1000x as much water as the city).

Here's a link to the key part of what I see in the document you sent: https://drive.google.com/file/d/1_O_ohmOgNWVPkdRz5taeGpTFUWagh9Wq/view?usp=sharing

Importantly, these numbers don’t have units. I see you requested the measurement in liters, but your source just says these are the “amounts.”

There are 4 ways we can tell that these numbers are measuring cubic meters and not liters:

-In the document I’m relying on, Cerrillos is about 10% of the water demand of the municipality, and the total water demand of the municipality is 54,148,639 m³. So 10% of that gets us to around 5,400,000 m³, basically the exact same number as the one your source gave without units. This seems to conclusively show that the numbers you sent are measuring meters cubed, NOT liters, and your estimate for the city is 1000x as low as it should be. These numbers are just way too close to be a coincidence. (https://media.smapa.cl/media/documentos/2024/07/Estudio%20de%20Demanda%20FVQ%20V03%20%28SMAPA%20total%29%20Sep%2027.pdf)

-In every document this municipality produces on water I can find, they always measure in meters cubed. (https://www.transparenciamaipu.cl/wp-content/uploads/2020/04/08_Gestion_SMAPA_2019.pdf)

-My argument stands that 5 million liters for a city of 88 thousand people is so little water that people wouldn’t even have enough to drink every day. There are no cities where each individual person only uses 0.12 L per day. During the Day Zero Cape Town water crisis in 2018, the government limited citizens to 50 L per person per day. So even in one of the most extreme water scarcity scenarios that has ever happened, people were still using 500x as much water as your number. There's no way it's correct. https://en.wikipedia.org/wiki/Cape_Town_water_crisis

-This number also brings the water use per person basically perfectly in line with Chile’s average water use per person. 5,000,000 m³ converted to liters, divided by 365, and then divided by 88,000 people gets us to 155 L per person per day, basically exactly in line with Chile's average of 170-180 L per person. So multiplying by 1000 is the only possible way to keep your number in the same order of magnitude as the average amount of water Chileans use. (https://dialogue.earth/en/water/46221-chile-seeks-to-guarantee-water-rights-amid-severe-drought/#:~:text=In%20Chilean%20cities%2C%20average%20water%20consumption%20is,the%20distribution%20network%20because%20of%20poor%20infrastructure.)
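The arithmetic in these bullets is easy to check in a few lines. Here is a quick sketch using the approximate figures from the thread (the 2019 Cerrillos number, ~88,000 residents, and Chile's ~170-180 L/person/day average); the exact per-person result shifts a bit depending on which population estimate you use:

```python
# Sanity check of the unit question: treat the SMAPA figure for Cerrillos
# (2019) first as liters, then as cubic meters, and compare the implied
# per-person daily use against reference points mentioned in the thread.

FIGURE = 5_097_946     # SMAPA's 2019 number for Cerrillos, units disputed
POPULATION = 88_000    # approximate residents of Cerrillos
DAYS = 365

# Reading 1: the figure is liters (as the book assumes)
per_person_if_liters = FIGURE / DAYS / POPULATION          # ~0.16 L/day

# Reading 2: the figure is cubic meters (1 m^3 = 1,000 L)
per_person_if_m3 = FIGURE * 1_000 / DAYS / POPULATION      # ~159 L/day

print(f"If liters: {per_person_if_liters:.2f} L/person/day")
print(f"If m^3:    {per_person_if_m3:.0f} L/person/day")
# Only the m^3 reading lands anywhere near Chile's ~170-180 L/person/day
# average; the liters reading implies less water than a person drinks.
```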

I’m completely confident that this shows you made a mistake and that you need to correct this section of your book. Your estimate is 1000x too high.

I have to say I’m still extremely worried that readers are coming away from this section with massive misunderstandings of basic questions about water use that don’t reduce to philosophical disagreements, and that would really be helped by some corrections in the book itself.

Going down the list:

-It’s not the case that Making AI Less Thirsty predicted that AI would consume that much water. It might be that we have a philosophical disagreement on how to define the word “consume,” where you use it more expansively to mean temporarily diverting water from a source before putting it back, but this clearly doesn’t fit everyday people’s understanding of the word. If I put a water wheel in a river that raised water up and then dropped it back in a few seconds later, I think most people would be confused if I said that the wheel were “consuming” the water. I would really appreciate some kind of correction on the wording there, and a note that only 3% of those 1.1-1.7 trillion gallons is drinkable water. I’m confident that if a reader came away from your paragraph and I then said “90% of that water is returned unaffected to the source, and only 3% is drinkable water,” the reader would be very surprised. Even if this is a philosophical difference, surely you agree that this surprise would happen and is a sign that the average reader is coming away with a wildly off-base understanding of how much water the paper you cite actually predicts AI will use? Your language about data centers “sucking up” the water in your Atlantic piece also seems like it can’t possibly mean “they temporarily use and then return water.” It’s really important to me that fellow environmentalists have a clear picture of where the largest water issues in society are, and I’m worried this section can seriously lead them astray.

-No data centers anywhere use their maximum permitted water allowance regularly. Your calculation implies this one would, even though on average it seems like data centers only use about 20% of the permitted amount. Again, this isn’t a philosophical difference, because your readers are left to infer that the data center would actually use this much water.

I’d be really eager to get a response.

Will Kiely's avatar

Hi Karen, I know it can be frustrating to make a request for a figure in liters and get it in m^3 instead, but the responsibility is on you as the author of the book to make sure that the information you are reporting is accurate.

If some Chilean journalists and organizations say that the number SMAPA gave you is in liters as you requested, not m^3, but you know that can't be right, the responsibility is on you to report the correct units in your book. Reporting the false units as if they are true just because you got someone else to mistakenly tell you that they were correct would be seriously dishonest. You can't just say "well some Chilean journalists and organizations told me it was true, so I'm free to write like it's true."

Doing all of the research for a nonfiction book takes a lot of work and I'd be willing to assume you just made an honest mistake and did not intentionally publish misinformation. But the proper way to respond to someone pointing out that you made a mistake is to own up to it.

It's really disappointing that after Andy went through all of this work to point out the errors you are not accepting responsibility for them, but are just saying you will have your collaborator follow up with SMAPA again as if there is still some chance that the figures may be in liters after all despite the impossibility of that.

It's obvious from Andy's words that your claim in the book is false (there's just no way that those 88,000 residents use only 0.2 liters of water per person per day) and so you should just own your mistake and say that you'll make an effort to correct it rather than blame someone else (SMAPA) by saying you'll go ask them again whether the units are in m^3 rather than liters.

My opinion of you would have been higher had you not responded to Andy's post. Responding in general is good, but your specific response that dodges taking responsibility for the mistake is very bad. I wanted to buy your book to read your reporting of OpenAI's firing of Sam Altman and see what it adds to the reporting from The Optimist, but after seeing this response I feel like I shouldn't financially support your book on principle.

(Society has a bad epistemics problem and I think part of the issue is that people are often financially rewarded for saying false things instead of being financially punished for spreading misinformation. Buying your book despite these egregious errors and your refusal to own up to them would therefore mean I'd be contributing to the bad epistemics problem.)

You can still fix your reputation in my mind. I'd recommend starting by admitting the mistake in Andy's comment section here and committing to adding a "Mistakes" section to your website for the book. You can then start that section by just linking to Andy's blog post and acknowledging that the claims about water that Andy critiques are indeed false. If you do that, I'll happily buy your book.

Sal Marre's avatar

Hi Will, I know it can be frustrating when your righteous comment doesn't get the validation you're seeking, but the responsibility is on you to ensure your tone actually encourages corrections rather than just performative grandstanding.

If you know from basic human psychology that public shaming typically backfires, the responsibility is on you to communicate more constructively. You can't just say "well, the EA forums validated my approach, so I'm free to be as sanctimonious as I want."

It's really disappointing that after all this effort, you're not accepting responsibility for your counterproductive framing. Writing holier-than-thou lectures to published authors when you've accomplished nothing of note in your own career is poor form. My opinion of you would have been higher had you just said "this seems like a significant error" rather than crafting a manifesto about societal epistemics.

You can still fix your reputation in my mind. I'd recommend admitting your comment was unnecessarily harsh and perhaps publishing something yourself before instructing others on how to maintain their reputations. If you do that, I'll happily read your future comments.

—Someone with a career that's not just a brief stint leading an EA org and a couple sales roles

Will Kiely's avatar

Writing with the right tone so my message is well received is hard. I don’t know if my being autistic (and the related desire to communicate directly and literally) makes it harder for me than for most others, but how hard it is for me relative to others doesn’t matter much except insofar as it might inform your expectations of the probability that I would get my tone right. (You seemed to have high expectations for me with respect to my tone, which is good to see, and I’ve obviously disappointed those expectations, sadly.) What’s more relevant is that it’s hard enough for me in an absolute sense that I know I’ve gotten my tone wrong on many occasions in the past. In any case, I agree with you that my tone is mostly my responsibility. While different people can interpret the same words differently, it’s absolutely the case that the words that are used are the primary determining factor of how they will be interpreted, and I’m the one who chose the words.

Same with:

If you know from basic human psychology that public shaming typically backfires, the responsibility is on you to communicate more constructively.

I agree it’s my responsibility to communicate constructively.

And:

It's really disappointing that after all this effort, you're not accepting responsibility for your counterproductive framing.

It’s not clear to me that my wording was counterproductive despite your allegation, but in any case I’m in agreement with you that I’m responsible for my framing.

I’m not sure what to make of all of this besides ‘try to get my tone right’ and ‘try to communicate constructively and frame things productively.’ I do try. I know I don’t always succeed. Thanks for the reminder.

Regarding:

Writing holier-than-thou lectures to published authors when you've accomplished nothing of note in your own career is poor form.

I was not trying to be holier-than-thou. I think I was making an important point. Your previous statements seem to criticize the way in which I communicated that message, not the message itself. Maybe this is still a criticism of the way? Is writing not-holier-than-thou lectures to published authors when I’ve accomplished nothing of note in my own career poor form? If not, then we’re on the same page. (Well actually, I think you would say that the ‘lecture’ dig is also a criticism, one that I’m on board with. I definitely struggle to write concise comments, and when I try writing carefully I often end up focusing too much on ensuring that I’ve precisely conveyed my intended message and too little on being concise. E.g. see my reply to meefburger. So I end up with a long-winded ‘lecture,’ assuming this is what you meant by “lecture.”)

My opinion of you would have been higher had you just said "this seems like a significant error" rather than crafting a manifesto about societal epistemics.

Noted. I’m curious though, would you have understood my point if I had just said “this seems like a significant error”? I’d be worried that you’d conflate my meaning with the claim that the liters versus m³ error is significant (a claim which I don’t agree with; I had a paragraph about that at the end of my reply to meefburger that I deleted for brevity, believe it or not. That’s right, I did not fail with abandon with the length of that comment. To try to be concise here: despite the ~4,500x magnitude of the error, I don’t think that making a simple factual error like that in a book is as big of a deal as not admitting it is an error would be when confronted with a very clear, well-written blog post explaining the error.)

I'd recommend admitting your comment was unnecessarily harsh

To be clear, it was not intended to be harsh. I think the extent to which it is is mostly a consequence of the message I was trying to convey. When I think of times that I’ve been told that I ought to admit something or take responsibility for something it certainly felt hard to hear.

That said, I agree that I could have made it come across less harshly. One way I did consider doing this was by making it more of a “compliment sandwich” by first stating that I think it’s great that she took the time to respond to Andy’s post and share the details about how she arrived at the liters unit. But I skipped doing that because I thought that would make it unnecessarily long. However, there probably were other ways that would have softened the tone without adding to the length. Perhaps I should have taken more time to identify them before posting. I do often not post things because of not being able to identify a good enough way to say them.

and perhaps publishing something yourself before instructing others on how to maintain their reputations.

Hmm. I don’t see why that’d be helpful. Why would my being a published author myself affect the legitimacy or illegitimacy of my comment? Or is the idea that I wouldn’t have written what I wrote if I were a published author, so publishing something would, in your eyes, improve the quality of my comments?

Anyways, I appreciate you mirroring the structure of my comment back to me.

—Someone who never led an EA org and has worked less than half his adult life (I quit my last job a year and a half ago and have just been living on savings since. Definitely not the career impact I wish I had.)

Sal Marre's avatar

This is a much more thoughtful comment than I think my somewhat snarky and facetious post, half written by LLM, probably deserves. I applaud the effort and the sincerity.

I don't care to really dig in on all of this; suffice it to say that I was largely trolling you because your reply to Karen reads (to me) like someone of authority and experience lecturing someone much their junior. Whether the point is correct isn't really material to what I was trolling you for. Call it tone-policing or anything you like, it just didn't sit quite right with me how reminiscent it felt of a bad manager providing (bad) coaching to their recent college grad employee.

That said, it was probably a bit harsh and over the line for me to attack your career or whatever. Best of luck in your future endeavors and substack commenting.

md's avatar

Lol, you told someone else off for how they engaged online with a "somewhat snarky and facetious post, half written by LLM" containing "you can still fix your reputation in my mind" (how self-important of you) and ending with the insult "someone with a career that's not just a brief stint leading an EA org and a couple sales roles".

You got a much more polite response than you deserved!

Tyler Sayles's avatar

Girl take a Valium with a tall glass of water ^3

Meefburger's avatar

I mostly disagree, in that I think it is good and virtuous for Karen to have responded in this way. She was transparent about the path that led to those numbers showing up in the book. And she's indicated she thinks this warrants further investigation to make sure she's getting things right. It does look like she made a pretty bad, unforced error in her reporting, but I think that if you do that, giving a response of the form "thanks for pointing this out, our process for arriving at this number was <this>, I will look into it and get back to you" is good and should be seen as a positive update.

Will Kiely's avatar

I agree that those things are positive.

I just think there is an additional negative update (large enough to outweigh the positives) from not saying more.

My apologies for failing to be concise. In my effort to be precise, my comment became very long-winded:

In particular, the negative update in my mind comes from the fact that she neither (a) admitted that she made an error about the units or (b) stated that she’s still uncertain about whether the units are in error.

Really I think she should have just admitted to making an error, since Andy’s post conclusively demonstrates that the figure she included in the book can’t possibly be correct in liters.

But she also could have just said “Your arguments that the units actually are in m^3, as I initially suspected, seem compelling—you’re probably right—but I still think the correct units might actually be liters.”

My reaction to this would have been surprise at why she is still in doubt, but I would have reserved judgment about her not admitting error until later when she follows up with Andy.

Stating that she still thinks the figure might actually be in liters despite Andy’s arguments would serve the function of explaining why she’s not admitting error now: if it might not be an error, she shouldn’t admit error yet; she should find out for sure the truth and then admit error only if it definitely seems to be an error.

But she did not say that she still thinks what she wrote in the book (liters) may be correct. This omission suggests that she thinks she indeed made an error (just like I assume other readers of Andy’s post think based on the evidence Andy provided), but that she doesn’t want to admit that it’s an error yet.

“To your point, these numbers initially seemed strange to me. So I asked my collaborator, a Chilean journalist, to follow up with SMAPA and ask whether they had gotten their units wrong and given the numbers in cubic meters instead of liters. Unfortunately, they never got back to us. So instead we spoke with various Chilean journalists and organizations who had followed this story over the previous years, who corroborated these numbers. Based on the document as well as the other corroborations, we proceeded with the number as written.”

This reads as a justification of what she wrote in the book. She’s saying she herself suspected that the units were wrong, but that she was very diligent about ensuring that liters were the correct unit before publishing the figure in her book. If the number is an error, she’s saying, it’s not because she was careless.

Okay, she was not careless. That’s good. But can she admit that liters are not the correct unit for the figure?

It sounds like she’s saying she won’t admit that until she can get SMAPA to tell her that the units are in m³ not liters (even though getting SMAPA to respond doesn’t at all seem necessary to tell that the units aren’t in liters, as Andy points out).

“I have asked my collaborator to once again follow up with SMAPA. And if they continue to not respond via email, she is going to go in person to their offices to get to the bottom of this.”

Going to their offices to ask in person if they continue to not respond via email seems like a lot of effort. Again, she is signaling that she is diligent, not careless. But why is it necessary to go through that effort of asking in person? Why is it necessary to get a response from SMAPA at all? Why can’t she just look at the evidence in Andy’s post and conclude that liters can’t possibly be the correct unit for the number on that basis of the evidence he provides?

Because then she can’t use SMAPA as a scapegoat. She can’t blame the error on the fact that SMAPA did not respond to her team’s emails seeking clarification about the units if it’s true that you can tell the units are m³ without asking SMAPA, just by looking at the evidence Andy provides. Whereas if she goes through all the effort to finally get a response from SMAPA, she can say that she now knows the units are m³ because SMAPA said so, implying that before, she didn’t know that only because they didn’t say so.

Maybe it’s really hard for her to acknowledge the error and take responsibility for it, especially given the scale of the error (off by a factor of ~4,500x). If so, major props to her if/when she does acknowledge it and take responsibility for it. My hope was that my comment would raise the probability that she would. If it had the opposite effect by making it feel even harder then I regret it. But my intention is to state as clearly as I can that if she admits error and takes responsibility for it I will do what I can to socially praise and reward her for doing so, not punish her for it.

Sam Barton's avatar

I can't for the life of me see what philosophical difference his critique is based on - care to elaborate?

SVF's avatar

The same “philosophical difference” that creationists and vaccine skeptics use. “I believe what I want and that’s final.”

allanderek's avatar

I also note it's a bit suspicious that one data center in Chile is said to use 1000x as much water as 88k residents, while another data center in Uruguay (detailed later in the same chapter) would use *the same amount* as 55k residents.

I think that using the *same* amount of water as 88k residents is more worrying than using 1000x as much. If the water source can support 1000x the needs of the residents, what is the likelihood that it **cannot** support 1001x (i.e. the data center's 1000x plus the residents' 1x)? Or even, say, 1003x to give the residents a 3x buffer. That's pretty unlikely, and it also means that if there ever were a shortage, the data center would only need to reduce its usage by 0.1 percent to satisfy the needs of the residents.

However, if the data center uses approximately the same amount of water as the residents it seems much more likely that the data center could threaten the needs of the residents.
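The headroom argument above can be reduced to a one-line ratio. A minimal sketch (the `cut_fraction` helper is just for illustration):

```python
# A toy version of the headroom argument: if a data center's draw is R
# times the residents' combined use, the fraction of its own draw it must
# cut to free up one full "residents' worth" of water is 1/R.

def cut_fraction(ratio):
    """Fraction of the data center's draw equal to the residents' use."""
    return 1 / ratio

# If the data center really drew 1000x the residents' water, freeing a
# full residents' share would cost it only 0.1% of its own use:
print(f"{cut_fraction(1000):.1%}")   # 0.1%

# If it draws the *same* amount as the residents, freeing that share
# would mean halting entirely:
print(f"{cut_fraction(1):.0%}")      # 100%
```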

SVF's avatar

So you’re really going with the “honest mistake” excuse huh.

XP's avatar

Karen Hao isn't just any critical journalist who made some sloppy mistakes. Her main bit is that AI is literally (in the most literal sense of "literally") modern colonialism, built on extracting labor and resources from the global South and/or everyone.

To support that thesis, she needs to argue:

- That AI has no benefits at all and is entirely a moneymaking / propaganda / surveillance tool for tech elites.

- That looking and learning "without consent" constitutes a claim on someone's data or work as much as a colonial claim on a territory.

- That cognitive labor (labeling, mostly) to train AI in low-income countries is somehow uniquely exploitative in ways other local labor is not.

- That AI is permanently removing resources from some global resource pool.

Based on what I've unfortunately read of her work in the past, I'm not sure she's actually writing in bad faith here - she's just _ludicrously_ credulous of any claim that's detrimental to AI and _ludicrously_ skeptical of any claim made by an AI company or AI expert.

There are plenty of people in academia with "AI" in their role whose main credential in the field is "hates AI".

Alex's avatar

Could you elaborate on why it's important to prove this point: "That cognitive labor (labeling, mostly) to train AI in low-income countries is somehow uniquely exploitative in ways other local labor is not."?

I don't think exploitation needs to be unique to be taken seriously.

I agree that the AI supply chain is not unique in its exploitative practices, but that doesn't excuse them.

Regarding your first point, I don't think something has to have no benefits to be exploitative; it's just that the benefits don't justify the exploitation. Those benefiting most from the resulting technology are often not the same people who are exploited while it is being built. The beneficiaries are much more likely to justify why someone else's suffering is worth the positive outcomes they are enjoying.

This is a new industry and scaling rapidly, so it seems especially important that it is not built on the same exploitative and extractive practices as many other industries.

For the record, I appreciate the fact-checking about the water in this post. Numerical precision and factual integrity are always important. I'm just not sure I agree with the philosophical points made in this comment.

I do agree with you though, that it's a problem when people readily accept information that confirms their stance while quickly rejecting information that doesn't.

CMar's avatar

How dare they give economic opportunities to people in the global south! It really grinds my gears when white people think they have the right to give jobs to people in underdeveloped countries.

Alex's avatar
Nov 18 · Edited

My goal is to figure out the best ways to improve the wellbeing of people everywhere. If you have sources and facts about how the current methods of data labeling contribute to that goal, feel free to message me directly and I'll engage with that information in good faith. As of now, it's hard for me to see labeling horrific, traumatizing content for barely enough money to eat as an economic opportunity, for anyone, anywhere. It's difficult for me to learn a lot from sarcasm, though.

CMar's avatar

I didn’t say it was the cushiest job ever. Nor do I think it is anywhere near the hardest job. It is probably a lot easier than a lot of other jobs available in developing countries. And it literally is an economic opportunity. I mean, that’s just literally what it is. Those workers aren’t slaves. They chose to work there. There are now more jobs available in some very poor places, making those people at least a bit less poor than they used to be.

Alex's avatar

I appreciate you taking the time to clarify. I'm not an expert in foreign labor markets, and I'm sure it's more nuanced than the jobs simply not existing at all. There are different dimensions to what is "hard": there's mental vs. physical, and then there's skill vs. unpleasantness. I'm guessing we can agree there are few jobs more mentally unpleasant than labeling violent content. I'm also hoping we can agree that there should be protections put in place regarding working hours, and that mental health care should be a top priority? I'm not sure if you've had the chance to read individual accounts of lasting trauma. I just think workers should be cared for, no matter where they are. Again, thanks for taking the time to discuss.

sobriquet_mdear's avatar

This seems like a bad faith reply, clearly trying to ape the values of the left (and failing), in order to shame the commenter. Silly.

One can believe that economic development is good, opportunity is good, and people should have autonomy to choose jobs that others may not like. *And* believe that any large company has choices about how to treat workers and locals, and it would be good if they made good choices. Of course definitions of “good” may vary but treating the topic like it’s a no-brainer (when it is not) doesn’t do much to assuage fears that a company might mistreat its workers and local communities.

There are other ways to do that, usually using data for the truly curious, as Andy’s post did with water.

CMar's avatar

It’s certainly a caricature, but also most certainly an accurate one. I read the part of Empire of AI about data workers. She was not arguing that the AI companies need to treat their workers better. Her argument was that the whole enterprise is colonial and exploitative. The person I was replying to seems to be using the same framing. It’s great that you understand the importance of economic development, opportunity, and autonomy, but that doesn’t contradict the fact that there are plenty of people on the left who actively see Western companies giving jobs to people in third-world countries as a bad thing.

sobriquet_mdear's avatar

The commenter did not seem to be using the same framing to me. It seemed you overshot at them by making a Hao-shaped bogeyman out of someone asking genuine questions. We don’t need to assume that everyone who cares about human/worker welfare believes everything Hao believes, and doing so is not helpful to the goal of helping people correct the misinformation she’s giving.

Another commenter below was much more helpful.

XP's avatar

Agreed with all of the above, and the major AI companies and the industry as a whole can probably do a lot better.

But other industries that benefit from cheap labor don't get accused of (literal) colonialism, yet in Karen Hao's talks and book it's a pillar of her assertions. She is specifically outraged that the labor is used for AI.

I _think_ that she's implying that the exploitation is a systemic aspect of AI, and perhaps even that modern AI could not (continue to) exist at all without ongoing exploitative labor. If so, that comes close to a Guardian article from a while back, which left readers with the impression that Gemini was essentially a fraud, just a database of facts that was constantly being maintained by underpaid fact-checkers.

Alex's avatar

Thanks for the response! I always want to make sure my concerns about AI are grounded in facts, and to place claims about AI within the broader context of other industries.

I know many people had valid issues with the book, but in the last chapter, there were concrete suggestions about ways to develop AI without exploitation. For instance, focusing on specialized models for specific tasks like cancer detection or materials discovery, decentralization of AI development, community-driven model development, proper compensation for labor, consent for data use, regulation, and governance.

Let me know if you would recommend other books/articles on responsible AI development that offer a different point of view. I think it's always healthy to engage with a variety of perspectives, even if I don't end up agreeing with them.

Rune Norderhaug's avatar

Different from OP, but I think a book you may consider reading is AI Ethics by Mark Coeckelbergh

https://direct.mit.edu/books/book/4612/AI-Ethics

This essay you might find interesting

https://en.prolewiki.org/wiki/Essay:Intellectual_property_in_the_times_of_AI

This paper

https://arxiv.org/abs/2403.05104

and perhaps, though not directly related to AI, these are related to different topics that come up

https://archive.org/details/free_culture

https://www.taylorfrancis.com/books/mono/10.4324/9780203602263/uses-heritage-laurajane-smith

To be honest, one of my concerns as someone coming from a social sciences background is that the anti-AI movement seems, ironically, to have become a bit of an ableist, system-justifying movement rather than one simply focused on regulation. In many ways, it reminds me of the populist movements that led to the far right, including some of the conspiracy theories within them, such as great replacement theory.

I think there is some discussion you can definitely have around how any industry acts in a colonial way, but I think what authors like Karen Hao are also purposely leaving out is the colonialist effect their own movement can have by effectively system-justifying in reaction to potential increases in accessibility, and to the ways that countries, including in the Global South, want to develop their infrastructure. Ironically, they fail to consider the perspective that it is just as much a colonialist technique to enforce control over what industries can be developed in such countries as it is for corporations to take and steal.

Part of this is natural, though, because, like I mentioned, there are quasi-conservative elements that she and other anti-AI individuals are promoting successfully.

CMar's avatar

I listened to the first chapter, which was about the coup against Altman at OpenAI, and enjoyed it, so I bought the audiobook. After getting into later chapters I quickly realized it is woke nonsense. I don’t think I’ll be able to get through it. I feel ripped off.

Kenneth E. Harrell's avatar

Yep I already deleted it. It was a waste of time and money honestly.

Kenneth E. Harrell's avatar

Her anti-AI agenda was obvious from the start. Honestly, the truth doesn’t matter; she has already set the narrative for average people and the media PMCs.

allanderek's avatar

Hi Andy, really good post.

> This is the single largest error in any popular book that I’ve found on my own, and to my knowledge I’m the first person to notice it.

Not sure when you noticed it, but I blogged about the exact same error on the third of November: https://blog.poleprediction.com/posts/empire-ai-water-usage/

I pointed out that it was a weird error because later in the same chapter she discusses a similar data center in Uruguay

> I feel like the editor has dropped the ball a bit here, is it likely that one data centre in Chile uses 1000 x more water than 88k people but another one in Uruguay uses 1 x 55k people?
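
The factor-of-1,000 confusion is easy to check with back-of-the-envelope arithmetic. A rough sketch, using the 5,097,946 figure from SMAPA's response and the ~88k population cited in this thread (both taken from the discussion, not independently verified):

```python
# Sanity check on the SMAPA figure for Cerrillos (2019): 5,097,946,
# reported as liters. Population ~88,000 (the figure cited in this thread).
# All inputs are illustrative, taken from the comments, not verified.

reported = 5_097_946
population = 88_000
days = 365

# If the figure really is in liters, per-person daily use is implausibly low:
liters_per_person_day = reported / population / days  # ~0.16 L/person/day

# If the figure is actually cubic meters (1 m^3 = 1,000 L),
# it lands in the normal range for residential consumption:
plausible = reported * 1_000 / population / days  # ~159 L/person/day

print(f"{liters_per_person_day:.2f} L/person/day if liters")
print(f"{plausible:.0f} L/person/day if cubic meters")
```

Read as liters, the figure works out to less than a glass of water per resident per day; read as cubic meters, it lands squarely in the typical range for residential use, which is what makes the cubic-meters reading so much more plausible.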

Andy Masley's avatar

Oh wow! Want to give you credit here, I'll share your blog on my twitter

Andy Masley's avatar

Shared here https://x.com/AndyMasley/status/1992679447770632401

I think in both cases she's still significantly overestimating the water draw because she's using the maximum permitted amount as the average water draw, so I suspect the Uruguay center probably would have actually used 10-20k people's worth of water. Significant, but a lot of farms and industry use way more

allanderek's avatar

Thank you.

Jacco Rubens's avatar

Incredible. What is going on with this industry? Is it some vicious circle of AI mistrust leading to incredibly unfavourable reporting, and repeat?

Andy Masley's avatar

Zero clue at all honestly, I'm flabbergasted

Kenneth E. Harrell's avatar

AI is quickly becoming a hot political / ideological issue like everything else in American society it seems and I am exhausted by it, honestly.

Steven Adler's avatar

Thanks for the service you’re doing by looking into the statistical claims - I find it very grounding

Steven Adler's avatar

I wonder if there’s a scale at which you do expect the environmental issues of AI to become more pronounced? My intuition is that the power-draw _will_ eventually matter, which is different of course from whether it’s significant yet. (There’s a chance you’ve also already written about this, in which case my apologies!)

Jamie Freestone's avatar

You’re doing God’s work here Andy but it won’t be popular. I’m massively anti-Big Tech for various reasons, but most of my pals won’t want to concede that they do anything good or even that they’re not as bad as reported in Hao’s book. The interesting thing is that even the companies themselves don’t care to rebut these claims. Or do they? Have you seen that in your investigations?

Andy Masley's avatar

I haven't actually looked at how companies themselves are considering refuting stuff like this. I've seen a few pro-data-center ads from Meta, but that's it? I'm also otherwise more skeptical of AI companies' motives, so I'm not exactly itching to advise them on how to advertise this stuff.

Jamie Freestone's avatar

Fair!

John's avatar

I wonder if refuting these arguments is worth the effort for the companies. I doubt that the environmental concerns about AI will decrease usage in a meaningful way. 95% of people will use AI if it is useful to them, just as they will get on a plane if they want to go somewhere despite the carbon costs. As you have stated elsewhere, exaggerated claims about the environmental damage caused by AI will mostly generate increased levels of guilt and anxiety for a segment of the population.

Erich's avatar

This genre of post is catnip for me.

Adam Smith's avatar

The timing of this is fitting, as I recently bought this and “If Anyone Builds It, Everyone Dies” to read.

Andy Masley's avatar

What a pairing

Adam Smith's avatar

I was trying to consider different angles and perspectives on AI lol. I was going to start with Eliezer’s book

Uwe PLEBAN's avatar

We now live in a world that goes by "My ideology beats your data any day!".

Sarah's avatar

This article reads like it wasn’t written by AI

Tyler Sayles's avatar

I wonder where these water-keeper yet meat-eater and Amazon-package-delivery-enjoyers and generally non-technical people thought their search engine queries were being routed before big bad AI-augmented results came online?

Nicolás Mladinic's avatar

Hi Andy, thanks for sharing this. I'm from Chile and, based on a few research papers, I made this environmental calculator: https://calculadora-llm.lovable.app/calculator

PS: it's in Spanish and focused on Chile, but since the error that was pointed out was from our country, I thought you'd be interested.

Kenneth E. Harrell's avatar

Makes me look at this interview very differently.

"Hasan sits down with reporter Karen Hao on her new book Empire of AI on how AI companies are taking over the world, what can be done about it..."

Will AI Take My Job? with Karen Hao

https://www.youtube.com/watch?v=e70RT6c01M8

Erik Moeller's avatar

Regarding tax revenue from data centers, I think the story gets a bit more complicated once you look at the state-level picture. 36 states offer often significant tax exemptions for data centers:

https://goodjobsfirst.org/cloudy-data-costly-deals-how-poorly-states-disclose-data-center-subsidies/

These are being criticized as fiscally highly ineffective subsidies. I'm not a subsidy expert, but looking at the numbers above, it looks to me like a tax revenue race to the bottom. When everyone is offering similar incentives, the case that the incentives are needed for a DC to be set up in a specific state becomes pretty weak, and you risk just leaving lots of money on the table for no good reason.

My intuition here is that if states want to avoid a backlash against data centers, one way to do that is to tax them more fairly, and use that revenue to invest in communities and infrastructure.

Rainbow Roxy's avatar

Couldn't agree more. How do books get published with such staggering data errors? Kudos to you for meticulously debugging the claims.

SVF's avatar

No need to bend over backwards to try to rationalize why an ME grad has such an embarrassing lack of understanding of what numbers and units are, and a near total lack of intuition about the physical world.

The goal isn’t accurate reporting, it’s just to vocalize “grrr AI I hate AI!” as loudly as possible, and to drag as many people into your ignorant flailing tantrum as possible. It’s an ideological project. Numbers are only used to give an air of legitimacy, until they’re shown to be nonsense, at which point the author will pretend it was an honest mistake before moving on to the next set of nonsense. Rinse and repeat.

It’s absolutely no different from creationists, flat earthers, vaccine skeptics, etc. They believe what they want to believe. Whether their beliefs conform to reality is completely irrelevant.

The best possible good-faith interpretation is that the author is merely stupid, and has called on her equally stupid social circle to review the book. But I doubt that, and every other interpretation is worse.

It’s a religious text and makes a bit more sense when seen in that light.