Both "the ability to feel feelings" and "there’s a certain continuity to what-its-like-to-be-you." Do not apply to me. Yes, I am being serious. I have alexithymia (Neurodivergent) and as I'm currently doing Jungian Psychotherapy, I have come across something that has always puzzled me. Something know as the Self-Concept (the autobiographical you) part of the EGO, I rebuild that every so often, triggered by certain experiences. usually every 2-4 years. I have an empty mind, no internal voices like Allistic/Neurotypical people. I also understand that humans are not unitary intelligences. As is detailed in IFS Parts work.
I get what you are trying to do, but not everything in life is so neat.
I appreciate you for sharing that. It's an interesting perspective, and I now find myself reading about anendophasia a.k.a. the lack of an inner voice, something I didn't know existed.
I realize my description of the human condition in this piece are very general, and may not apply to every single individual.
What I will say is that rebuilding your ego (or autobiographical you) every so often is probably more common than you think. The stories we tell ourselves evolve over time. Many of us live more than one life.
Another observation that I'd like to share - and I'd be curious to hear what you think - is that even though you say you experience an empty mind, you seem to be able to create this story to tell yourself about yourself.
I bet you even remember some of the previous stories that you told yourself about yourself, which means there must be some kind of overarching observer :)
That's the point of IFS (inter Family Systems) and parts work. Look for Bob Falconer on YouTube. You are not a unitary being. Most "Normal" people already experience this as they already have a Narrator and a Critic in their heads. Those things are not you. Mostly parts work is used for personality disorders. things like borderline, etc.
No overarching observer so far, and mostly that's just memory, because when you remember you rewrite, which is why talk therapy works, rewrite in a safe place without the embedded emotional valence of the original recording.
i feel you. realized 5 yers ago at age 45 that i am very probably neurodivergent too, and i feel safe to say i never had a concept of "self", at least no stable one. take care, brother.
1. I share your concerns. People are being duped. It's doing a lot of damage, and in a worst case scenario it can lead to devastating consequences on a political and societal level. This is one reason that phil of mind is gonna be much more important on a societal level,than it has ever been.
2. I thought most philosophers and neuroscientist functionalists?
3. The calculator on steroids is not a better argument than saying that humans are archea on steroids, imo. We basically are. So what? Unless you claim functionalism is wrong, calculators on steroids can implement all the causal structures that human bodies implement.
Just kidding. Not trying to be facetious here, but can you make a positive claim for why we should consider a big calculator that does calculations fast to be meaningfully different than a small calculator that is slower?
Without appealing to "humans are also just a bunch of cells"?
The point is that all physical systems can be simulated by a computer performing calculations. Physics is math. And the kind of math used for physics is computable, ie it can be solved using calculators. For example, many physical systems can be completely described by ordinary differential equations (ODEs). ODEs can be solved using calculation. Given an initial condition, we can simulate the system by numerically integrating the ODE, thus generating its future evolution, ie simulating it. This can be done by a calculator and indeed this was the use case that motivated Babbage's difference engine. Thus computers are universal, not just in virtue of being universal calculators, but there is good reason to believe that they are also universal *simulators*. This hypothesis was stated by David Deutch as the physical version of the Turing principle:
"Every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means"
Deutsch, D. (1985). "Quantum theory, the Church–Turing principle and the universal quantum computer" (PDF). Proceedings of the Royal Society. 400 (1818): 97–117. Bibcode:1985RSPSA.400...97D, p. 3
In a sense then, physics *is* calculation (variations of this idea have been called "it from bit"), and since brains supervene on chemistry and biology, which in turn supervene on physics, brains are also calculations. Thus if we allow that brains are conscious, then there is no reason to suppose that an advanced calculator cannot instantiate conscious experience. There is a slight niggle here, in that some physical systems may require quantum computers to simulate them, but as yet there is no hard evidence that brains fall into this category (though Roger Penrose famously argues, unconvincingly, that this is the case).
Thus although we have not yet been able to faithfully simulate some relatively simple organisms using computers, there is no good reason to suppose that this is not impossible in principle.
The contrary arguments fall under the banner "what breathes fire into the equations (of physics)?". These people argue that physics is more than just math- some extra magic is required to bring the equations to life. And some of those people argue that the extra magic is what we call "consciousness". The biggest proponent of this idea is David Chalmers (see "The Conscious Mind" https://archive.org/details/david-chalmers-the-conscious-mind-in-search-of-a-fundamental-theory). But his philosophy of mind is not ontologically parsimonious since it requires additional additional laws of physics that specify how properties of consciousness relate to physical properties. The problem with all the "what breathes fire" approaches is that unless invoke mysterious as-yet-unknown additional laws of physics, it is difficult for them to avoid dualism, and dualism is not really taken seriously for very good reasons: https://plato.stanford.edu/entries/dualism/#ProForDua.
Great thoughts, thanks for sharing! Here's how I see it. "The point is that all physical systems can be simulated by a computer performing calculations". This is just demonstrably false.
The fact something 'is' physics doesn't mean it's calculable. Simple counter-examples are the three body-problem or weather predictions. Now you might counter that with "we just don't have a sufficiently powerful computer yet", but that's not automatically true. That is an unproven assumption.
"There is a slight niggle here, in that some physical systems may require quantum computers to simulate them, but as yet there is no hard evidence that brains fall into this category". To me, this is a strange way of framing this. We actually have all the evidence in the world that we can't, as of today, and if we can in future isn't certain by any stretch of the imagination.
More generally, the idea that math can be used describe everything has been proven false, too. Gödel's theorems suggest that "a sufficiently powerful formal system, it cannot be both complete and consistent." Or your definition of physics is literally that it can only be captured formally; which makes it a circular definition.
It seems to me the universe exhibits some kind of irreducible complexity that we can't capture with our formulas, whatever that may be. I'm not sure why you think that would lead to dualism, though?
You have completely misunderstood the issue with the three body problem. It is has no close formed solution, but it is most certainly similar able, ie calculatable, for each specific instantation. This is trivial to do. For a concrete example see https://jeffreyhale.itch.io/3-body-problem-simulation. This is precisely why computers are so important in modern physics - we can simulate systems using computers, ie numerically calculate them, even though we cannot derive asymptomatic properties using analytic techniques ( ie there is no cllosed formed solution).
So I agree AI is not conscious; it's built to somewhat emulate-consciousness; and it's a kind of silly sideshow to be debating rights of AI chatbots. All agreed. With that said...
It is simply a poor argument to compare AI to a calculator. A well-known principle, far-pre-dating AI, is that the nature of things change as they grow dramatically in scale. We ourselves are merely bundles of cells. Are we saying anything useful or true, when we say "A human is basically just a big bunch of protozoa."? Of course not.
Your most salient point, from my viewpoint, is about the continuity of consciousness over time. "There’s a certain continuity to what-its-like-to-be-you. Every morning you wake up and you’re still you, with the baggage of yesterday, and the day before, and the day before." 100% right on. LLMs can never have this. Maybe AI in general can never have it.
But. Humanoid robots are coming; and they'll be training in runtime as they learn their environments. And that will mean a kind of retention and continuity over time. So we'll see. And yes when they come I'll be sleeping with one eye open...
Great points, Bill. You never fail to challenge my views.
Here's what I would say: correct me if I'm wrong, but neural nets are 'made of' math, are they not? It's calculations all the way down.
If that's the case, then why is a calculator that makes more calculations faster different from a calculator that is slower? Can you make a positive argument for why we should believe that, without appealing to "humans are also just a bundle of cells"?
Lastly, with regards to the humanoid robot, yes, they'll be processing sensory data in real-time, but is that enough to qualify as conscious? If it were, then I'm afraid we have to consider all Waymo taxis to be sentient.
As for continuous experience and learning: autonomous vehicles (and LLMs) are trained, not in real-time based on their immediate experience; but through a long loop of data flows heavily intermediated by humans. It’s a kind of back-end loading, a bit Keanu-Reeves-Matrix-style (“I know Kung Fu!”). The point of interest for me will be continual learning based on feedback in the moment. But no I don’t think that proves consciousness, not at all. Just a thing to observe, if/when it arrives…
(You are a good sport; I will try to be unlike the pig who enjoys wrestling in the mud…)
Yes it's math all the way down:
Starting with a calculator: yes it is 100% predictable; because it’s just rules-based algorithms written in traditional human-traceable code, with essentially zero uncertainty. All good so far.
A traditional supercomputer (no AI yet, just big-compute) is in essence your very-large calculator. It is also understandable and predictable in detail, given a simple algorithm with no uncertainties of any kind. But cracks emerge for the ‘big calculator’ given any real-world uncertainty…. even minor/comprehensible uncertainty (like stochastic timings of input data, or parallel paths completing at different times). In such cases, even traditional human-understandable algorithms can (and do) glitch, crash, and do weird stuff. It’s not predictable; and it’s only borderline human-understandable: A given problem can be understood post-hoc with effort and fixed. But lack of future problems cannot be guaranteed. Because? It’s Too Large. At this point, it is simply different than a calculator. Meaningfully, impactfully different.
AI calculation overlaps this traditional challenge with another major complexity: AI's functions aren’t explicitly programmed by humans. Their structure is encoded in gibberish… in the weights of a trained model. Not only must it contend with variability in inputs and timing, but now it’s a calculator based on God-only-knows what rules and functions. It is more now. And it is different now.
Here and elsewhere: it is not credible to claim strong knowledge of the whole based on strong knowledge of the parts. Is it?
I call it "machine-assisted emotional masturbation". The experience of an emotive relationship is entirely self-induced and enhanced by the text calculator.
What nobody seems to accept in these discussions is that the reason LLMs have an unpredictable mirage of a different experience is due to a simple pseudo random number generator that ultimately picks the next word from the list that the raw LLM produces (more or less).
If I grab a dictionary and roll the dice to pick a page and a word at random - are the dice now conscious?
Really well put. They walk and quack like ducks by design. And they’re really good at role playing. And they have the equivalent of *cognitive* empathy from all that they’ve learned reading about us ducks. But the folk wisdom of “if it walks like a…” does not apply here. There is are no ducks in the data centres.
even back in the 1960s, with the ridiculous AI model available there, many people talking to ELIZA were convinced "she" understood them. and 90% of people are just completely unable to think beyond first impressions and instincts. we are social animals, not rational ones, and people are raised to think what society wants and needs them to think. so of course, the "damage" has already been done. in the end, it's the next step of natural evolution.
if people would be able to find real connection and understanding among humans, there wouldn't be this need. and unless you're able to make humanity able to connect to and try to understand people who are just a bit different, whatever these societies do will have next to zero impact, like everything else they have done.
maybe having us connect to AI instead of each other is just nature's way of getting rid of homo sapiens :)
I suggest there are likely to be two things going on here.
(1) Humans are simply anthropomorphising their interactions with AI, just like they do with animals, toys or patterns in the clouds.
(2) ‘Consciousness’ is just another way of building more AI hype. I can’t help but be cynical that the ‘CEO Microsoft AI’ thinks its ‘ “inevitable” and “unwelcome” ‘. It’s actually quite clever: AI boosterism is par for the course nowadays and therefore easy to ignore, but claims that someone on the inside track is getting concerned sounds like it’s (almost) a real thing.
Very cool read! I agree with the general thrust that it is easy to tell, now, that LLMs lacks consciousness and that it is relatively straightforward to see people are projecting onto AI, getting caught up in some of the hype messaging, or may have underlying psychological issues when they are getting duped, presently.
I do wonder if, for example, we have non-LLM systems drawing data from physical sensors and interacting with the world the question becomes more muddied. I’ve had a history with digital twins and continue to work on them in the biological sciences. The hype lags there because we aren’t good enough yet, but I do expect continued progress.
The philosophical debate over consciousness and when the threshold is met becomes much more important when you have a corporeal AI “device” with a world model staring you in the face (unless you are a dualist).
Having used LLMs quite extensively for several months now, they have gotten so incredibly good at mimicking human emotion. It's pretty frightening. I know very well that it's predictive text, but it is eerie how it feels like you're actually talking to a person.
Folk intuitions about consciousness are notoriously unreliable. We tend to imagine it as a kind of inner light, a private theater where “I” sit and watch experiences arrive on a stage. But as philosopher Daniel Dennett argued in Consciousness Explained, that picture is a mirage. There is no central theater in the brain. Instead, consciousness arises from multiple “drafts” of representations being generated in parallel — overlapping interpretations of what is happening, including representations of our own cognition. These drafts compete and get edited into a workable narrative. What we call the “self” is, in Dennett’s terms, a narrative center of gravity: a useful fiction that our brain constructs to hold the story together. Consciousness, on this account, is not a metaphysical spark, or a set of "qualia", but a set of functional achievements.
That perspective matters for AI. To count as conscious in this functional sense, a system needs more than linguistic fluency. Dennett’s view suggests it would require: multiple competing drafts of representation, some of which are available for report and action control; a narrative self-model or autobiographical continuity; and the ability to reflect on its own internal states. Measured against those criteria, stand-alone LLM chatbots clearly fall short. Their use of “I” statements and apparent emotions comes not from an underlying autobiographical self-model, but merely from mimicking patterns of first-person language in their training data. The fluency is impressive, but it does not indicate that the system has internal drafts of itself or experiences that those pronouns refer to.
Where the conversation becomes more interesting is with systems that use LLMs as a foundation for larger architectures. Once you add scaffolding, pieces of the functional profile start to appear. For example, Columbia University’s Creative Machines Lab has built robots that construct internal models of their own bodies and then use those models to adapt when damaged — a rudimentary form of self-representation and introspection. Other work has given robots “inner speech,” allowing them to narrate perceptions to themselves in ways that refocus attention and regulate behavior — an early form of narrative self. In virtual environments, Park et al.'s generative agents couple LLMs with episodic memory and reflection, producing characters that remember past interactions, form high-level insights, and plan their days in socially coherent ways. And in embodied contexts, systems like Voyager in Minecraft pair an LLM with self-critique and skill libraries, so that the agent’s internal reflections directly shape its future actions. None of these systems are conscious in the way humans are, but they do illustrate how scaffolding around LLMs can produce functional elements missing in chatbots alone, and some of these elements may serve as the foundation for conscious experience in Dennett's sense.
This is where caution is essential. The intentional stance (treating a system as if it had beliefs and desires) is a powerful predictive tool, but also a double-edged sword. It tempts us into false positives, seeing minds where there are none, especially when interfaces are designed (or trained) to mimic first-person mentality. But there is also the risk of false negatives: dismissing systems that may, in principle, meet real functional hallmarks because they do not fit our folk picture. The deeper irony, as Dennett emphasized, is that we anthropomorphize ourselves through the intentional stance. Introspection convinces us that consciousness is a simple, transparent essence, when in reality our own awareness is already a carefully edited narrative draft.
So my view is this: LLMs as chatbots are not conscious, and it is important to communicate that clearly for reasons of safety, policy, and public understanding. But as research in robotics and simulated agents shows, LLMs as foundation models embedded in more complex architectures can, in principle, display some of the structural features that matter. The responsible path is to remain open-minded about artificial consciousness in the long term, while being extremely cautious in the short term about anthropomorphism and misplaced moral concern. If our intuitions mislead us about our own consciousness, they are doubly likely to mislead us about machines. Philosophy and cognitive science give us better tools — and those are what we should use.
as long as humans are humans, they will always create rifts between themselves, because the vast majority of people need this distinction of "them" and "us", so that they have something to look down on, giving them an excuse to exploit and abuse for their own good -- a consequence of millennia of trauma which the human brain is prone to amplify. pro- and anti-AI is just the next step.
Your argument seems to be vageuly Bayesian - something along the lines of the "extraordinary claims require extraordinary evidence" aphorism, together with your belief that the claim that computers could be conscious is extrsordinary. But one's prior that computers (universal calculators) could ever be conscious by running the right sort of calculation depends heavily on where you are oriented within the landscape of philosophy of mind. Yes, perhaps if you are dualist a la Chalmers and you believe in p-zombies then your prior would be very close to zero. But if you are a physicalist then your prior belief is much higher, and it is not such an extraordinary claim. The fact is that given the current lack of scientific concensus or understanding of the nature of consciousness, all sorts of priors are perfectly justifiable, and so whether the claim is extraordinary or not is entirely subjective. So what you are saying , in effect, is that you personally find it incredulous that the right sorts of computer programs could ever be conscious, therefore you are not moved to revise your belief significantly unless somebody provides you with extraordinary evidence (whatever that might mean in oractice for consciousness). But this argument will not convince those with perfectly justifiable higher priors.
If humans are eemingly conscious, and LLMs are seemingly conscious, then what is your view on the ways that LLM seemingly-consciousness differs from human seemingly-consciiusness?
If human consciousness is an illusion, then LLMs that share this hallucination may he concious in the same way that humans are conscious, and therefore equally worthy of moral consideration - eg Anthropic would be entirely justified in their “model welfare” program . Just because a phenomenon is an illusion does not mean that it does not matter, or is not worthy of scientific consideration or debate.
There is also a an entire literature covering many decades of thought on the question as to whether machines could ever be conscious. But your argument does not seem to engage with the existing literature at all.
Both "the ability to feel feelings" and "there’s a certain continuity to what-its-like-to-be-you." Do not apply to me. Yes, I am being serious. I have alexithymia (Neurodivergent) and as I'm currently doing Jungian Psychotherapy, I have come across something that has always puzzled me. Something know as the Self-Concept (the autobiographical you) part of the EGO, I rebuild that every so often, triggered by certain experiences. usually every 2-4 years. I have an empty mind, no internal voices like Allistic/Neurotypical people. I also understand that humans are not unitary intelligences. As is detailed in IFS Parts work.
I get what you are trying to do, but not everything in life is so neat.
I appreciate you for sharing that. It's an interesting perspective, and I now find myself reading about anendophasia a.k.a. the lack of an inner voice, something I didn't know existed.
I realize my description of the human condition in this piece is very general, and may not apply to every single individual.
What I will say is that rebuilding your ego (or autobiographical you) every so often is probably more common than you think. The stories we tell ourselves evolve over time. Many of us live more than one life.
Another observation that I'd like to share - and I'd be curious to hear what you think - is that even though you say you experience an empty mind, you seem to be able to create this story to tell yourself about yourself.
I bet you even remember some of the previous stories that you told yourself about yourself, which means there must be some kind of overarching observer :)
That's the point of IFS (Internal Family Systems) and parts work. Look for Bob Falconer on YouTube. You are not a unitary being. Most "normal" people already experience this, as they already have a Narrator and a Critic in their heads. Those things are not you. Mostly, parts work is used for personality disorders: things like borderline, etc.
No overarching observer so far, and mostly that's just memory. When you remember, you rewrite, which is why talk therapy works: you rewrite in a safe place, without the embedded emotional valence of the original recording.
i feel you. realized 5 years ago at age 45 that i am very probably neurodivergent too, and i feel safe to say i never had a concept of "self", at least not a stable one. take care, brother.
Good read! A few comments:
1. I share your concerns. People are being duped. It's doing a lot of damage, and in a worst-case scenario it can lead to devastating consequences on a political and societal level. This is one reason that philosophy of mind is gonna be much more important on a societal level than it has ever been.
2. I thought most philosophers and neuroscientists were functionalists?
3. The calculator-on-steroids argument is not a better argument than saying that humans are archaea on steroids, imo. We basically are. So what? Unless you claim functionalism is wrong, calculators on steroids can implement all the causal structures that human bodies implement.
You can't build a botfly out of math, can you?
Just kidding. Not trying to be facetious here, but can you make a positive claim for why we should consider a big calculator that does calculations fast to be meaningfully different than a small calculator that is slower?
Without appealing to "humans are also just a bunch of cells"?
The point is that all physical systems can be simulated by a computer performing calculations. Physics is math. And the kind of math used for physics is computable, i.e. it can be solved using calculators. For example, many physical systems can be completely described by ordinary differential equations (ODEs). ODEs can be solved by calculation: given an initial condition, we can numerically integrate the ODE, thus generating the system's future evolution, i.e. simulating it. This can be done by a calculator, and indeed this was the use case that motivated Babbage's difference engine. Thus computers are universal not just in virtue of being universal calculators; there is good reason to believe that they are also universal *simulators*. This hypothesis was stated by David Deutsch as the physical version of the Turing principle:
"Every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means"
Deutsch, D. (1985). "Quantum theory, the Church–Turing principle and the universal quantum computer". Proceedings of the Royal Society A, 400 (1818): 97–117, p. 3.
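To make the numerical-integration point concrete, here is a minimal sketch of my own (not anything from the article): simulating a damped harmonic oscillator, a system fully described by an ODE, using nothing but the kind of repeated arithmetic a calculator performs. The system and parameters are arbitrary choices for illustration.

```python
# Minimal sketch: simulating a system described by an ODE using nothing but
# arithmetic (forward Euler integration). The system is an arbitrary choice
# for illustration: a damped harmonic oscillator, x'' = -k*x - c*x'.

def simulate_oscillator(x0, v0, k=1.0, c=0.1, dt=0.001, steps=10_000):
    """Numerically integrate the oscillator forward from an initial condition."""
    x, v = x0, v0
    trajectory = []
    for _ in range(steps):
        a = -k * x - c * v   # acceleration from the equation of motion
        x += v * dt          # update position
        v += a * dt          # update velocity
        trajectory.append(x)
    return trajectory

print(simulate_oscillator(x0=1.0, v0=0.0)[-1])  # state after 10 simulated seconds
```

Nothing here is more than addition and multiplication repeated many times, which is exactly the sense in which a calculator can generate a system's future evolution.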
In a sense then, physics *is* calculation (variations of this idea have been called "it from bit"), and since brains supervene on chemistry and biology, which in turn supervene on physics, brains are also calculations. Thus if we allow that brains are conscious, then there is no reason to suppose that an advanced calculator cannot instantiate conscious experience. There is a slight niggle here, in that some physical systems may require quantum computers to simulate them, but as yet there is no hard evidence that brains fall into this category (though Roger Penrose famously argues, unconvincingly, that this is the case).
Thus although we have not yet been able to faithfully simulate some relatively simple organisms using computers, there is no good reason to suppose that this is impossible in principle.
The contrary arguments fall under the banner "what breathes fire into the equations (of physics)?". These people argue that physics is more than just math: some extra magic is required to bring the equations to life. And some of those people argue that the extra magic is what we call "consciousness". The biggest proponent of this idea is David Chalmers (see "The Conscious Mind" https://archive.org/details/david-chalmers-the-conscious-mind-in-search-of-a-fundamental-theory). But his philosophy of mind is not ontologically parsimonious, since it requires additional laws of physics that specify how properties of consciousness relate to physical properties. The problem with all the "what breathes fire" approaches is that unless they invoke mysterious as-yet-unknown additional laws of physics, it is difficult for them to avoid dualism, and dualism is not really taken seriously, for very good reasons: https://plato.stanford.edu/entries/dualism/#ProForDua.
Great thoughts, thanks for sharing! Here's how I see it. "The point is that all physical systems can be simulated by a computer performing calculations". This is just demonstrably false.
The fact that something 'is' physics doesn't mean it's calculable. Simple counter-examples are the three-body problem or weather predictions. Now you might counter that with "we just don't have a sufficiently powerful computer yet", but that's not automatically true. That is an unproven assumption.
"There is a slight niggle here, in that some physical systems may require quantum computers to simulate them, but as yet there is no hard evidence that brains fall into this category". To me, this is a strange way of framing this. We actually have all the evidence in the world that we can't, as of today, and if we can in future isn't certain by any stretch of the imagination.
More generally, the idea that math can be used to describe everything has been proven false, too. Gödel's theorems suggest that a sufficiently powerful formal system cannot be both complete and consistent. Or your definition of physics is literally that it can be captured formally, which makes it a circular definition.
It seems to me the universe exhibits some kind of irreducible complexity that we can't capture with our formulas, whatever that may be. I'm not sure why you think that would lead to dualism, though?
See also Richard Feynman for an informed take on simulating physics: https://s2.smu.edu/~mitch/class/5395/papers/feynman-quantum-1981.pdf
You have completely misunderstood the issue with the three-body problem. It has no closed-form solution, but it is most certainly simulable, i.e. calculable, for each specific instantiation. This is trivial to do. For a concrete example see https://jeffreyhale.itch.io/3-body-problem-simulation. This is precisely why computers are so important in modern physics: we can simulate systems using computers, i.e. numerically calculate them, even though we cannot derive their asymptotic properties using analytic techniques (i.e. there is no closed-form solution).
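To illustrate what "simulable but not solvable in closed form" means, here is a toy sketch of my own; the masses, units, and initial conditions are arbitrary choices, not drawn from anywhere. It steps one specific planar three-body instance forward numerically.

```python
import numpy as np

# Toy sketch: the three-body problem has no closed-form solution, but any
# specific instance can be stepped forward numerically. Masses, units and
# initial conditions below are arbitrary choices for illustration.

G = 1.0
masses = np.array([1.0, 1.0, 1.0])
pos = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])   # initial positions
vel = np.array([[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]])   # initial velocities

def accelerations(pos):
    """Pairwise Newtonian gravity on each body."""
    acc = np.zeros_like(pos)
    for i in range(3):
        for j in range(3):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
    return acc

dt = 0.001
for _ in range(50_000):
    vel += accelerations(pos) * dt   # semi-implicit Euler step
    pos += vel * dt

print(pos)   # positions after numerical integration; no analytic formula needed
```

The resulting orbits are chaotic in the technical sense (sensitive to initial conditions), but generating them is nothing more than repeated calculation.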
Sure!
Because, given causal closure in wet chemistry, calculators can instantiate any causal structures that exist in wet chemistry.
You can build a botfly and a human with math, and that human might very well say "this is a real botfly, unlike what you can build with math."
I’m afraid you lost me there, mate.
Let's try another way then if the causal closure argument doesn't land!
You don't think it's possible, in principle, to reverse engineer a human body? Are there hard limits to the natural sciences in respect to this?
Cheers
So I agree AI is not conscious; it's built to somewhat emulate-consciousness; and it's a kind of silly sideshow to be debating rights of AI chatbots. All agreed. With that said...
It is simply a poor argument to compare AI to a calculator. A well-known principle, far pre-dating AI, is that the nature of things changes as they grow dramatically in scale. We ourselves are merely bundles of cells. Are we saying anything useful or true when we say "A human is basically just a big bunch of protozoa"? Of course not.
A direct counterargument in more detail here: https://billatsystematica.substack.com/p/more-is-different-redux
Your most salient point, from my viewpoint, is about the continuity of consciousness over time. "There’s a certain continuity to what-its-like-to-be-you. Every morning you wake up and you’re still you, with the baggage of yesterday, and the day before, and the day before." 100% right on. LLMs can never have this. Maybe AI in general can never have it.
But. Humanoid robots are coming; and they'll be training in runtime as they learn their environments. And that will mean a kind of retention and continuity over time. So we'll see. And yes when they come I'll be sleeping with one eye open...
Great points, Bill. You never fail to challenge my views.
Here's what I would say: correct me if I'm wrong, but neural nets are 'made of' math, are they not? It's calculations all the way down.
If that's the case, then why is a calculator that makes more calculations faster different from a calculator that is slower? Can you make a positive argument for why we should believe that, without appealing to "humans are also just a bundle of cells"?
Lastly, with regards to the humanoid robot, yes, they'll be processing sensory data in real-time, but is that enough to qualify as conscious? If it were, then I'm afraid we have to consider all Waymo taxis to be sentient.
As for continuous experience and learning: autonomous vehicles (and LLMs) are trained not in real-time based on their immediate experience, but through a long loop of data flows heavily intermediated by humans. It’s a kind of back-end loading, a bit Keanu-Reeves-Matrix-style (“I know Kung Fu!”). The point of interest for me will be continual learning based on feedback in the moment. But no, I don’t think that proves consciousness, not at all. Just a thing to observe, if/when it arrives…
(You are a good sport; I will try to be unlike the pig who enjoys wrestling in the mud…)
Yes it's math all the way down:
Starting with a calculator: yes, it is 100% predictable, because it’s just rules-based algorithms written in traditional human-traceable code, with essentially zero uncertainty. All good so far.
A traditional supercomputer (no AI yet, just big compute) is in essence your very large calculator. It is also understandable and predictable in detail, given a simple algorithm with no uncertainties of any kind. But cracks emerge for the ‘big calculator’ given any real-world uncertainty, even minor/comprehensible uncertainty (like stochastic timings of input data, or parallel paths completing at different times). In such cases, even traditional human-understandable algorithms can (and do) glitch, crash, and do weird stuff. It’s not predictable, and it’s only borderline human-understandable: a given problem can be understood post-hoc with effort and fixed, but the absence of future problems cannot be guaranteed. Because? It’s Too Large. At this point, it is simply different than a calculator. Meaningfully, impactfully different.
AI calculation layers another major complexity on top of this traditional challenge: AI's functions aren’t explicitly programmed by humans. Their structure is encoded in gibberish… in the weights of a trained model. Not only must it contend with variability in inputs and timing, but now it’s a calculator based on God-only-knows what rules and functions. It is more now. And it is different now.
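A tiny illustration of that last point (my own sketch, with random stand-in weights rather than trained ones): the "program" of a neural net is nothing but arrays of numbers that no human wrote, run through plain arithmetic.

```python
import numpy as np

# Minimal sketch: a neural network's forward pass is just arithmetic over
# learned parameters. The weights below are random stand-ins; in a real model
# they come from training, not from human-written rules, and they are
# effectively unreadable to a human inspector.

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), rng.normal(size=8)   # layer 1 parameters
W2, b2 = rng.normal(size=(8, 2)), rng.normal(size=2)   # layer 2 parameters

def forward(x):
    h = np.maximum(0.0, x @ W1 + b1)   # matrix multiply + ReLU
    return h @ W2 + b2                 # another matrix multiply

print(forward(np.array([1.0, 0.5, -0.2, 0.3])))
```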
Here and elsewhere: it is not credible to claim strong knowledge of the whole based on strong knowledge of the parts. Is it?
I call it "machine-assisted emotional masturbation". The experience of an emotive relationship is entirely self-induced and enhanced by the text calculator.
What nobody seems to accept in these discussions is that the reason LLMs present an unpredictable mirage of a different experience is a simple pseudo-random number generator that ultimately picks the next word from the list that the raw LLM produces (more or less).
If I grab a dictionary and roll the dice to pick a page and a word at random - are the dice now conscious?
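For readers who haven't seen it spelled out, here is roughly the sampling step being described, as a hedged sketch: the vocabulary and scores are made up, but the mechanism (the model weights the candidates, a pseudo-random draw picks one) is the standard one.

```python
import math
import random

# Rough sketch of the sampling step described above: the model produces scores
# over candidate next words and a pseudo-random number generator picks one.
# Vocabulary and scores here are made up for illustration.

vocab = ["cat", "dog", "philosophy", "the", "calculator"]
logits = [2.1, 1.9, 0.3, 2.5, 0.7]   # stand-in model outputs
temperature = 0.8

def sample_next_word(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]   # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(vocab, weights=probs, k=1)[0]   # the "dice roll"

print(sample_next_word(logits, temperature))
```

Whether that dice roll has any bearing on consciousness is exactly the dictionary-and-dice question above.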
Really well put. They walk and quack like ducks by design. And they’re really good at role playing. And they have the equivalent of *cognitive* empathy from all that they’ve learned reading about us ducks. But the folk wisdom of “if it walks like a…” does not apply here. There are no ducks in the data centres.
even back in the 1960s, with the ridiculously simple AI models available then, many people talking to ELIZA were convinced "she" understood them. and 90% of people are just completely unable to think beyond first impressions and instincts. we are social animals, not rational ones, and people are raised to think what society wants and needs them to think. so of course, the "damage" has already been done. in the end, it's the next step of natural evolution.
if people were able to find real connection and understanding among humans, there wouldn't be this need. and unless you're able to make humanity connect to and try to understand people who are just a bit different, whatever these societies do will have next to zero impact, like everything else they have done.
maybe having us connect to AI instead of each other is just nature's way of getting rid of homo sapiens :)
I suggest there are likely to be two things going on here.
(1) Humans are simply anthropomorphising their interactions with AI, just like they do with animals, toys or patterns in the clouds.
(2) ‘Consciousness’ is just another way of building more AI hype. I can’t help but be cynical when the CEO of Microsoft AI thinks it’s “inevitable” and “unwelcome”. It’s actually quite clever: AI boosterism is par for the course nowadays and therefore easy to ignore, but a claim that someone on the inside track is getting concerned sounds like it’s (almost) a real thing.
It’s science fiction.
Very cool read! I agree with the general thrust that it is easy to tell, right now, that LLMs lack consciousness, and that it is relatively straightforward to see that people who are getting duped are projecting onto AI, getting caught up in some of the hype messaging, or may have underlying psychological issues.
I do wonder whether, if we have non-LLM systems drawing data from physical sensors and interacting with the world, the question becomes more muddied. I’ve had a history with digital twins and continue to work on them in the biological sciences. The hype lags there because we aren’t good enough yet, but I do expect continued progress.
The philosophical debate over consciousness and when the threshold is met becomes much more important when you have a corporeal AI “device” with a world model staring you in the face (unless you are a dualist).
Having used LLMs quite extensively for several months now, I can say they have gotten incredibly good at mimicking human emotion. It's pretty frightening. I know very well that it's predictive text, but it is eerie how it feels like you're actually talking to a person.
Folk intuitions about consciousness are notoriously unreliable. We tend to imagine it as a kind of inner light, a private theater where “I” sit and watch experiences arrive on a stage. But as philosopher Daniel Dennett argued in Consciousness Explained, that picture is a mirage. There is no central theater in the brain. Instead, consciousness arises from multiple “drafts” of representations being generated in parallel — overlapping interpretations of what is happening, including representations of our own cognition. These drafts compete and get edited into a workable narrative. What we call the “self” is, in Dennett’s terms, a narrative center of gravity: a useful fiction that our brain constructs to hold the story together. Consciousness, on this account, is not a metaphysical spark, or a set of "qualia", but a set of functional achievements.
That perspective matters for AI. To count as conscious in this functional sense, a system needs more than linguistic fluency. Dennett’s view suggests it would require: multiple competing drafts of representation, some of which are available for report and action control; a narrative self-model or autobiographical continuity; and the ability to reflect on its own internal states. Measured against those criteria, stand-alone LLM chatbots clearly fall short. Their use of “I” statements and apparent emotions comes not from an underlying autobiographical self-model, but merely from mimicking patterns of first-person language in their training data. The fluency is impressive, but it does not indicate that the system has internal drafts of itself or experiences that those pronouns refer to.
Where the conversation becomes more interesting is with systems that use LLMs as a foundation for larger architectures. Once you add scaffolding, pieces of the functional profile start to appear. For example, Columbia University’s Creative Machines Lab has built robots that construct internal models of their own bodies and then use those models to adapt when damaged — a rudimentary form of self-representation and introspection. Other work has given robots “inner speech,” allowing them to narrate perceptions to themselves in ways that refocus attention and regulate behavior — an early form of narrative self. In virtual environments, Park et al.'s generative agents couple LLMs with episodic memory and reflection, producing characters that remember past interactions, form high-level insights, and plan their days in socially coherent ways. And in embodied contexts, systems like Voyager in Minecraft pair an LLM with self-critique and skill libraries, so that the agent’s internal reflections directly shape its future actions. None of these systems are conscious in the way humans are, but they do illustrate how scaffolding around LLMs can produce functional elements missing in chatbots alone, and some of these elements may serve as the foundation for conscious experience in Dennett's sense.
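To make the "scaffolding" idea concrete, here is a toy sketch of that pattern; the class, the `call_llm` placeholder, and the reflection schedule are my own inventions, not any real system's API. It wraps an LLM call with episodic memory and periodic reflection, so that past interactions feed back into future behaviour.

```python
from typing import List

# Toy sketch of the scaffolding pattern described above: an LLM call wrapped
# with episodic memory and periodic reflection. `call_llm` is a placeholder,
# not a real API; the structure, not the model, is the point.

def call_llm(prompt: str) -> str:
    return f"<model response to: {prompt[:40]}...>"   # stand-in for a real model call

class ScaffoldedAgent:
    def __init__(self) -> None:
        self.episodic_memory: List[str] = []   # raw records of past interactions
        self.reflections: List[str] = []       # higher-level insights distilled from them

    def act(self, observation: str) -> str:
        context = "\n".join(self.reflections[-3:] + self.episodic_memory[-5:])
        response = call_llm(f"Context:\n{context}\n\nObservation: {observation}\nAction:")
        self.episodic_memory.append(f"{observation} -> {response}")
        if len(self.episodic_memory) % 5 == 0:
            self.reflect()   # periodically distill memories into insights
        return response

    def reflect(self) -> None:
        summary = call_llm("Summarize what matters in: " + " | ".join(self.episodic_memory[-5:]))
        self.reflections.append(summary)

agent = ScaffoldedAgent()
print(agent.act("A visitor asks about yesterday's conversation."))
```

Even this trivial loop has something a bare chatbot lacks: a persistent record of itself that shapes what it does next.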
This is where caution is essential. The intentional stance (treating a system as if it had beliefs and desires) is a powerful predictive tool, but also a double-edged sword. It tempts us into false positives, seeing minds where there are none, especially when interfaces are designed (or trained) to mimic first-person mentality. But there is also the risk of false negatives: dismissing systems that may, in principle, meet real functional hallmarks because they do not fit our folk picture. The deeper irony, as Dennett emphasized, is that we anthropomorphize ourselves through the intentional stance. Introspection convinces us that consciousness is a simple, transparent essence, when in reality our own awareness is already a carefully edited narrative draft.
So my view is this: LLMs as chatbots are not conscious, and it is important to communicate that clearly for reasons of safety, policy, and public understanding. But as research in robotics and simulated agents shows, LLMs as foundation models embedded in more complex architectures can, in principle, display some of the structural features that matter. The responsible path is to remain open-minded about artificial consciousness in the long term, while being extremely cautious in the short term about anthropomorphism and misplaced moral concern. If our intuitions mislead us about our own consciousness, they are doubly likely to mislead us about machines. Philosophy and cognitive science give us better tools — and those are what we should use.
Great article. For some weird reason, Substack doesn't show me a like button for this post.
What? How dare they! Feel free to restack instead ;-)
as long as humans are humans, they will always create rifts between themselves, because the vast majority of people need this distinction of "them" and "us", so that they have something to look down on, giving them an excuse to exploit and abuse for their own good -- a consequence of millennia of trauma which the human brain is prone to amplify. pro- and anti-AI is just the next step.
Your argument seems to be vaguely Bayesian, something along the lines of the "extraordinary claims require extraordinary evidence" aphorism, together with your belief that the claim that computers could be conscious is extraordinary. But one's prior that computers (universal calculators) could ever be conscious by running the right sort of calculation depends heavily on where you are oriented within the landscape of philosophy of mind. Yes, perhaps if you are a dualist a la Chalmers and you believe in p-zombies, then your prior would be very close to zero. But if you are a physicalist then your prior belief is much higher, and it is not such an extraordinary claim. The fact is that given the current lack of scientific consensus or understanding of the nature of consciousness, all sorts of priors are perfectly justifiable, and so whether the claim is extraordinary or not is entirely subjective. So what you are saying, in effect, is that you personally find it incredible that the right sorts of computer programs could ever be conscious, and therefore you are not moved to revise your belief significantly unless somebody provides you with extraordinary evidence (whatever that might mean in practice for consciousness). But this argument will not convince those with perfectly justifiable higher priors.
If any of this terminology is confusing see https://open.substack.com/pub/sphelps/p/consciousness?r=2o7vzx&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
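A small worked example of the prior-dependence point, with purely illustrative numbers: the same modest evidence leaves a sceptic almost unmoved while leaving someone with a more sympathetic prior fairly persuaded.

```python
# Small worked example of the prior-dependence point above. The numbers are
# purely illustrative, not estimates of anything.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    num = p_evidence_if_true * prior
    return num / (num + p_evidence_if_false * (1 - prior))

evidence = (0.6, 0.3)   # evidence twice as likely if machine consciousness is possible

for prior in (0.01, 0.5):   # a sceptic's prior vs. a more sympathetic prior
    print(f"prior={prior:.2f} -> posterior={posterior(prior, *evidence):.3f}")
# prior=0.01 -> posterior of about 0.020; prior=0.50 -> posterior of about 0.667
```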
Have you considered that humans are not really conscious? At least not in the way they think they are.
https://open.substack.com/pub/markslight/p/biologically-assisted-large-language?utm_source=share&utm_medium=android&r=2o7vzx
If you believe we’re not, the answer to your question is irrelevant.
?
If humans are seemingly conscious, and LLMs are seemingly conscious, then what is your view on the ways that LLM seeming-consciousness differs from human seeming-consciousness?
I'll repeat myself: to claim LLMs are conscious is a positive claim that people who want to take that position need to make persuasive arguments for.
It's fine if you want to dispute humans are conscious, but if that's your position then there is no point in having the discussion in the first place.
If human consciousness is an illusion, then LLMs that share this hallucination may be conscious in the same way that humans are conscious, and therefore equally worthy of moral consideration; e.g. Anthropic would be entirely justified in their “model welfare” program. Just because a phenomenon is an illusion does not mean that it does not matter, or is not worthy of scientific consideration or debate.
I don't think you've thought this through. If consciousness is an illusion, why should we care about welfare?
These are not my personal thoughts. There is a whole literature on this specific issue: https://www.ethicalpsychology.com/2024/06/the-ethical-implications-of-illusionism.html?m=1
There is also an entire literature covering many decades of thought on the question of whether machines could ever be conscious. But your argument does not seem to engage with the existing literature at all.