Weirdly, I agree with most of your reasoning and claims (especially regarding LLMs), and yet I still find myself imagining consciousness as "an (imperfect) observer watching a movie screen". To me this doesn't seem like a much bolder claim than "there is something it is like to be me" – do you fundamentally object to this mental model, and if so, what mental model of consciousness would you replace it with?
I agree, which is why I hedge a lot in the qualia section. I'm pretty agnostic about whether qualia exist, and it's very hard to deny there's "something it's like to be me." David Chalmers has done a lot of creative work around this, and I don't really have a satisfying final answer at all. I've tried to write stuff I think Chalmers would mostly sign off on.
Such a comprehensive overview, especially with Daniel Dennett, who is the only one of the atheists that resonated with me.
It figures with Susan Haack's foundherentism of epistemological justification, and then the Gettier problem with complete justification: https://en.wikipedia.org/wiki/Foundherentism
Within my circles, they've been talking about qualia too, using materialism to dismiss it. There's some similar rationale, same-ish to yours:
https://nicolasdvillarreal.substack.com/p/materialist-semiotics-and-the-nature
Thanks, Andy. I appreciate how you steelman the arguments you address.
Awesome series.
Some scattered thoughts:
- The "feed-forward" criticism always strikes me as odd. Neither humans nor transformer-based LLMs are capable of "taking back a thought," though both can course-correct while responding, and especially when revisiting their earlier words. And then there are various experimental diffusion-based LLMs, which generate/denoise their response as a whole, like an image. Would critics say that diffusion LLMs are capable of some kind of uniquely holistic, "non-linear" introspection? (Obviously not.)
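Purely as a toy illustration of that structural difference (random choices standing in for real model predictions; the vocabulary and function names here are made up, not from any actual model), the two generation regimes might be sketched like this:

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat"]  # toy stand-in vocabulary

def autoregressive(n):
    # Left-to-right, token by token: once emitted, a token is never revised.
    out = []
    for _ in range(n):
        out.append(random.choice(VOCAB))  # appends only; earlier tokens are fixed
    return out

def diffusion_style(n, steps=10):
    # Start from "noise" and iteratively refine the WHOLE sequence:
    # any position may be revised at any step, not just the newest one.
    seq = [random.choice(VOCAB) for _ in range(n)]
    for _ in range(steps):
        i = random.randrange(n)        # pick any position in the sequence
        seq[i] = random.choice(VOCAB)  # "denoise"/revise it in place
    return seq
```

The point of the sketch: in the autoregressive loop there is simply no code path that touches an earlier token, while the diffusion-style loop revises arbitrary positions — yet neither regime buys anything one would call introspection.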
- This article/story by Hofstadter and Dennett rarely gets mentioned, but it seems relevant, despite its weird premise:
https://themindi.blogspot.com/2007/02/chapter-26-conversation-with-einsteins.html
It maps quite nicely onto static AI models. What's interesting is that the authors describe the book's consciousness as a _process_ that the characters carry out, but don't mention _information_. That is, they never really reflect on the "working memory" that their process needs, even if it's just something like keeping track of what page they're on, where they came from, etc. It's all essential information, and it apparently doesn't matter that it's located in the minds (or on the scratchpads) of the characters. Similarly, it's too reductive to locate a model's apparent smarts in its fixed weights - context is the water it needs to swim in.
- Depending on what the brain is doing, there is some measurable (100-400ms) perception/intention lag. Awareness is not immediate, not synchronous, and not complete. Even if we experience our thoughts as a Cartesian theater, we're still completely oblivious to the Cartesian editing room. If we must compare humans to AI, then the brain is the model, but our consciousness is the Google AI summary (or ChatGPT fanfic, take your pick).
Supremely interesting! Your example of a machine that connects people's brains (i.e., to try to produce identical qualia) is very interesting, but I would argue it may be impossible or near impossible to do this "perfectly" between two distinct people, or maybe even between the same person's past and present.
This is because, as far as I know, every brain is unique, with perhaps a nearly infinite number of possible configurations (neuronal connections and counts, supporting cells, spatial distribution/timing of neurons), all of which you would have to reproduce in order to truly replicate a physicalist model of qualia between two brains.
That being said, empirical evidence and the seeming fact that we can "agree upon" a lot of things in this world indicate sufficient similarity for a replicable "shared experience." Like, in the example you provided, a cat is a furry feline with a tail, claws, etc., and oftentimes a wild personality. So I believe the machine, if invented, could mostly work by stimulating similar-ish neuronal pathways/receptors in the brain.
Is this related to the homunculus idea?