"but it’s very hard to imagine anyone having such a strong moral reaction to Wolfram Alpha, or Wikipedia, or YouTube"
People did have a very strong moral reaction to Wikipedia! It's remarkable now that it's seen as something to be defended as a human-created source of truth against the scourge of chatbots
I do remember this! The reaction where I was felt more like a sneer, "Oh haha you trust Wikipedia?" rather than, like "Wikipedia is the end of truth" but I wasn't actually following the broader debate
I'm an LLM centrist. I think they can be much more interesting and useful than their detractors give credit, but they're not as good as their promoters believe them to be. To give a quick qualified defence of these three critiques of LLM chatbots:
1. Chatbots are much worse than their loudest boosters claim. Their capabilities are over-exaggerated; what they can actually do consistently is far more limited. As soon as you try to get them to do something broadly outside their training set, they flounder and (worse!) fabricate. To take one example: they're very bad at fiction (in part because they have no inner eye or long-term consistency), and the studies that seek to show otherwise are laughable in their design.
2. This is the weakest critique, but I think it's true that for many people adding a chatbot to their workflow or daily life wouldn't provide much added value. And for some people it clearly detracts (like when delusional psychotics use it to confirm their false beliefs).
3. Chatbots are in a sense demonic: they promise vast knowledge, they appear as minions willing to do our bidding, and then they lie to us! (I wrote about it here: https://politicalpelorus.substack.com/p/swampbots-and-demonbots) I've seen first-hand a guy in my community get sucked into socialising full time with his bots and it's like he's in a cult.
This is a fantastic piece. The anecdote about your friend spiralling and 'tapping into some higher-dimensional knowledge' or having a tendency towards cults reminds me of some conversations I often have with people when they discover I do open source investigations. They immediately start discussing the regime changes in South Asia over recent years and claim they are orchestrated by Western intelligence. I don't have the heart to tell them they would need to provide very strong evidence for me to believe their version of events, so I just nod along.
Re: Chatbots are demonic — maybe it's implied in their framing that they are demonic because they are built on copyrighted materials or stolen labour? That's what the popular consensus is.
I am working on a similar piece, but it examines how this polarising and contrarian viewpoint has developed over the last decade or so and how similar instances have occurred in much earlier periods. Perhaps we can discuss this on Twitter? I have a draft, but it currently reads very incoherently.
Yeah sounds good!
You are a pretty high status person imo. Thank you for being a Good Guy and letting me know which armchair opinion to adopt🫡🫡
This is great. I think there's a lot in the status idea. My own armchair diagnosis was in terms of how high-status people often call for censorship. They criticise popular sources of (mis)information, because they fear that "others" (of less intellectual/moral calibre than themselves, naturally) will be fooled by the chintzy or vacuous content. So they say that some popular media is not only stupid but demonic because it will lead to society's downfall by fooling less savvy consumers. Hence, we need censorship or whatever. It could be the same with chatbots. They simply fear the contagion of corrupting ideas.
This might also ultimately be about preserving status as a guardian of morals, or simply by controlling others' information and entertainment. But it suggests that 1 & 3 aren't necessarily in contradiction. People might think chatbots are crappy but that others aren't sophisticated enough to see that.
this is spot on. most of this became extremely clear to me recently, having been sort of cancelled by my fellow classmates at a creative writing program after having asked chat gpt to analyse their literary output (as a way of trying to understand the extent to which current LLMs have something like a coherent aesthetic sensibility, or at least the capability of mimicking distinct such sensibilities in a convincing manner). I was accused of having shared their texts with a "person" outside of our class (against the rules), and simultaneously alleged to have overestimated the capabilities of AI (apparently utterly useless, etc). it wasn't exactly clear how an anthropomorphic understanding of LLMs cohered with their complete and utter worthlessness at all conceivable tasks.
aspiring writers constitute an interesting test case, I think: they're excessively status oriented, and the status they seek is bound up with precisely the sort of skill that one can get all Wittgenstein's duck-rabbit about with regards to LLMs: from a certain angle, they seem pretty fantastic at generating linguistic output, and from another they seem incapable of anything beyond stale slop.
lastly, one probably shouldn't underestimate the extent to which chatbots have become associated with fascist oligarchy in the minds of a lot of sophisticates, either. there's certainly a pretty standard political dimension to this, with non-crazy right-wingers probably being less inclined to incoherent hysteria than leftists (and most of this difference probably comes down to boring status games).
Another absolute banger by Andy
I really enjoyed reading this, thank you Andy! It helped me notice how I and others use status in new ways.
The content was v different to what I expected from the title - I thought it would be about the moral panic surrounding AI relationships and friends, which is interesting to think about from the same armchair: Is there a related fear of developing friendships and relationships with something that fundamentally doesn’t have status? Does this undermine the purpose of relationships for some people?
When I read “they don’t themselves have any status to convey”, I was thinking about how using Claude and Grok carry different status associations bc of political leanings, the perceived ethics of their parent companies, etc. I overall agree with your point that a chatbot doesn’t have status in the same way that a human author does and that this might be cruxxy for people’s aversion to them, but I disagree that they’re completely void of status for this reason. Maybe you would call these associations something other than status?
I typed this fast and was sloppy with how I'm using the term status here, definitely agree on the experience of different chatbots. I always come back to claude thinking "Ah yeah my guy"
I'm giving a workshop on LLMs to local startup founders in a few weeks. I gave another version a couple years ago, in the halcyon days of GPT 3.5 — now I'm realizing that I need to come prepared to counter the AI hate that a few folk in the crowd are likely to feel, or at least be tempted to.
If this theory of AI opposition is right, anyone have a notion on how we might leverage it to open people up to using LLMs?
To be honest, my most successful moves here have just been to try not to come across as aggressively pro-AI, and to ask simple, direct, friendly questions like "What's something that, if they were better, chatbots would be helpful with? What would you use them for if they did work as advertised?" and then just have them try using one to do that. A lot of people just haven't poked around with them enough to notice what they're actually good at.
Given the important contrast you identify between how ideas are presented on TikTok and on ChatGPT, what do you think of Sora2, where ChatGPT is turning into TikTok?
No strong thoughts yet; more short-form video's probably bad? I guess I don't see many ideas being presented on Sora compared to TikTok
I think the status argument is onto something.
I think I’ve figured out why some people react so negatively to AI, especially systems like ChatGPT.
So, think about it this way: every person, in every culture, has a hierarchy of belief. At the top of that hierarchy are really abstract, all-encompassing ideas, things like “we are smart”, or “we are good people.” At the bottom are practical behaviors, for example, cooking a meal for someone in need. That kind of action might be a concrete expression of the belief “I’m a good person.”
Now, many intellectual and creative people have reacted negatively to AI. Why? Sure, part of it can be attributed to technical reasons. In the earlier days, say, two and a half years ago, the technology was objectively less capable. Models like GPT-3.5 made errors that were obvious and weird in ways humans typically wouldn’t. So, some skepticism was warranted.
But I think there’s a deeper reason: AI threatens the top of the belief hierarchy. It undermines things like “I’m intelligent,” or “I’m creative.” That’s existential for a lot of people, especially those whose identities and status are built around being “smart” or “original.”
I also suspect many of these people are higher in neuroticism: they often express intense negative opinions online, which could be a signal of emotional reactivity. They might also be higher in disagreeableness, since they seem quite willing to aggressively critique, moralize, or even attack the existence of AI itself (more than average, at least).
So yeah, that’s what I think is going on.
There's so much to unpack about this moral panic, and status is definitely part of it. I've seen commenters respond to a quoted (correct and helpful) ChatGPT response with a terse: "This literally has no worth." That is, if we can label LLMs as not only low-status, but forever outside the realm of any status at all, we can avoid the comparison and thus the competition.
Then, to toss in my own hobbyhorse, almost every moral panic has some "purity/disgust" component and the revulsion is clearly on display here. If chatbots are just "mashing up" or "plagiarizing" the works of other people, are we accidentally letting in the words of a bad person? If chatbots are just "stochastic parrots" and "autocomplete", why are we taking their random gibberish seriously? If chatbots are trained by right-wing/woke billionaires who bias the data, why are we letting them control our thoughts? If chatbots are insidious expert manipulators, why are we letting these deceitful tools pollute our minds? And again, every one of these statements contradicts the others.
There's also human exceptionalism (as in your Cartesian theater series), a sudden emphasis on the moral necessity of effort / suffering / hard work that seems to have come out of nowhere, and an IMO petulant insistence that this can't be "true" AI yet because it's associated with the wrong people and/or humanity hasn't earned it yet - because their mental timeline tells them that we need to achieve the Star Trek future first, solve capitalism, and only _then_ get round to AI.
Endlessly frustrating, but I'll admit - so fascinating as well.
Honestly a lot of the behavior you describe sounds way too insane to happen irl. Eg people using ad hominems in an argument with someone they consider a "friend"
I’ve had some weird experiences! But others are with randos I bump into at parties
As a foundherentist and sociological hobbyist, I was figuring it was a kind of fundamentalist presuppositionalism: one tied to hustle culture and its need to find thought-terminating clichés in order to get back to the grindset. I can only blame the publishers for this indoctrination.
That's the best Marxist analysis I can think of, even though I'm a doctrinaire there.
Status roles, that works too. Keith Johnstone outlined it brilliantly in the first chapter of his book Impro. LLMs cause a status swing.
High status built on symbolic and relational capital, under the long tail of distribution from citizen journalism and pop culture.
I guess neurodivergent people and others are speaking out now for their self-determination.