This is written incredibly well; I agree with all the remarks you made. As a user of artificial intelligence services such as ChatGPT and DALL-E, I sometimes do make remarks about how "intelligent" the service is. You make a fantastic point about humans being the reason ChatGPT, DALL-E, and other services are so smart. They had to be taught by humans, and by books, reports, stories, statistics, and other materials made by humans. I have no disagreement with the fact that artificial intelligence is only as smart as humans, but I believe that is because we are still in the early stages of artificial intelligence development. What will happen when it is much more advanced? I am assuming that it will have to be capable of "thinking" on its own in order for it to actually be smarter than us. What are your thoughts?
I think today’s systems are on par with human experts in different fields, in terms of knowledge. What they lack is experience, of course.
Whether AI can surpass us in generality and learn new skills on the fly remains to be seen. I don’t think we are there yet.
I love the way you write about this, thank you for capturing these points with punch and care. I am going to dive into more of your posts now. What a treat, human-flavoured writing. It's my belief we will always need original writing from humans.
My two cents:
"If I have seen further, it is by standing on the shoulders of giants." - attributed to Issac Newton
GenAI, too, stands on the shoulders of countless billions of humans who have contributed their knowledge, creativity, and insights to the datasets it is trained on. However, the critical question is: can it truly see further? I believe that, in its current form and most real-life applications, GenAI cannot surpass or even match the depth of human understanding for one key reason—it lacks “tacit knowledge”.
Tacit knowledge comes from living in and engaging with the real world. Most of this knowledge is not written in books, digitized, or available in the datasets used to train GenAI. It’s the nuance of context, the perception of subtle cues, and the ability to draw meaning from lived experiences—qualities that GenAI cannot replicate because it is fundamentally disconnected from the physical and social realities in which humans exist. Without experiencing the world firsthand, GenAI will never grasp these nuances, no matter how vast its training data or how sophisticated its algorithms.
This brings me to a quote I recently came across, which I believe aptly describes GenAI’s limitations:
“The difference between the mathematical mind (esprit de géométrie) and the perceptive mind (esprit de finesse): the reason that mathematicians are not perceptive is that they do not see what’s before them, and that, accustomed to the exact and plain principles of mathematics, and not reasoning till they have well inspected and arranged their principles, they are lost in matters of perception where the principles do not allow for such arrangement. . . .
These principles are so fine and so numerous that a very delicate and very clear sense is needed to perceive them, and to judge rightly and justly when they are perceived, without for the most part being able to demonstrate them in order as in mathematics; because the principles are not known to us in the same way, and because it would be an endless matter to undertake it. We must see the matter at once, at one glance, and not by a process of reasoning, at least to a certain degree. ... Mathematicians wish to treat matters of perception mathematically, and make themselves ridiculous... the mind does it tacitly, naturally, and without technical rules.”—PASCAL
GenAI is, at its core, a mathematical mind. It excels at structured, repeatable tasks—generating plausible text, solving concrete problems, or remixing existing ideas—but it falters in areas requiring perception, intuition, and contextual understanding. As Pascal notes, matters of perception require a "delicate and clear sense" to grasp subtle principles, and GenAI, which relies on probabilistic patterns rather than lived experience, cannot replicate this finesse. It can produce text that sounds perceptive, even profound, but this is merely an illusion of understanding. It does not "know" what it generates, and it cannot perceive or judge as humans do.
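To make the "probabilistic patterns" point concrete, here is a minimal sketch in Python of how a language model picks its next word. The toy distribution is invented for illustration, not taken from any real model; it is only meant to show that the step is weighted sampling, with no perception involved:

```python
import random

# Toy next-token distribution, invented for illustration only: a language
# model assigns probabilities like these to candidate continuations of a
# prompt such as "The bird".
next_token_probs = {
    "sings": 0.40,
    "flies": 0.30,
    "is": 0.20,
    "perches": 0.10,
}

def sample_next_token(probs: dict) -> str:
    """Sample one token in proportion to its estimated probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Nothing in this step perceives a bird or understands the sentence
# being built; it is pattern-weighted dice.
print(sample_next_token(next_token_probs))
```

Real systems repeat this step over vocabularies of tens of thousands of tokens with learned weights, but the mechanism is the same: weighted dice over human-written patterns, not perception.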
Even its strengths—synthesizing vast amounts of data or generating creative outputs—are ultimately constrained by the boundaries of its training data. GenAI is impressive for certain use cases, but it is not innovative; it cannot truly "see further" because it cannot create knowledge or insights that go beyond what humanity has already contributed for it to remix. The credit for its achievements belongs to the humans who built the knowledge it draws from, not to the technology itself.
As you pointed out, GenAI’s reliance on human labor goes beyond simply borrowing from humanity’s intellectual history. The development and refinement of these systems depend on a staggering amount of "ghost work"—data labeling, content moderation, and other hidden forms of labor that make GenAI appear seamless. This further underscores that GenAI is not an independent entity but a highly polished reflection of human effort.
Thanks for your thoughtful response.
I agree with every word you just said. And thank you so much for sharing that quote from Pascal — very interesting!
I appreciate your comment and Pascal's reasoning also.
Respectfully, a counter-argument: https://billatsystematica.substack.com/p/brute-force-hume-and-human-ai
Thanks for sharing your counterargument! I'll review the article and get back to you with my thoughts.
My argument was focused on generative AI's current limitations, not its long-term potential. While AI as a field (not GenAI specifically) could eventually surpass human intelligence with advancements in areas like knowledge representation, reasoning, common sense, and real-world knowledge such as perception, causal relationships, and nuanced language understanding, the key challenge lies in replicating the experience and tacit knowledge of top (top-1%) experts. This will likely involve a sophisticated interplay of rule-based systems, reinforcement learning, diverse algorithms, and models we have yet to discover.
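To illustrate what such an interplay might look like, here is a minimal, purely hypothetical Python sketch: explicit rules answer when they apply, and a learned model fills the gaps. Every name and the rule itself are invented for illustration; this is one possible pattern, not a description of any real system.

```python
from typing import Callable, Optional

# A rule maps a query to an answer, or None if it doesn't apply.
Rule = Callable[[str], Optional[str]]

def units_rule(query: str) -> Optional[str]:
    """Hand-written rule: a stand-in for codified expert knowledge."""
    if query == "boiling point of water at sea level":
        return "100 degrees Celsius"
    return None

def learned_model(query: str) -> str:
    """Stand-in for a statistical model trained on human-produced data."""
    return "[model's best guess for: " + query + "]"

def hybrid_answer(query: str, rules: list) -> str:
    # Explicit knowledge wins when a rule applies; otherwise fall back
    # to the pattern-matching component.
    for rule in rules:
        answer = rule(query)
        if answer is not None:
            return answer
    return learned_model(query)

print(hybrid_answer("boiling point of water at sea level", [units_rule]))
print(hybrid_answer("why does my dog greet me at the door", [units_rule]))
```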
My 2 cents after reading your article.
Are knowledge and experience the same thing? I believe they are, to some extent. You can have bookish knowledge about a topic without real-life experience, but the two are not interchangeable. For example, I have theoretical knowledge about AI (it was a course in my MS in Computer Science, and I have read quite a few books and articles about it over the last five years) but no practical experience building AI models. Can I be as skilled as someone who has built and trained AI models in the real world? My answer would be no. This distinction between knowing something abstractly and understanding it through experience is critical—and it brings to mind a famous insight from Richard Feynman (I also have a BS and MS in Physics, so you will see him and other physicists in my comments sometimes):
"You can know the name of a bird in all the languages of the world, but when you're finished, you'll know absolutely nothing whatever about the bird... So let's look at the bird and see what it's doing — that's what counts. I learned very early the difference between knowing the name of something and knowing something."
As Feynman suggests, true knowledge comes from engaging directly with reality, not just memorizing abstractions. That said, there's still a gap between GenAI's knowledge and the tacit knowledge humans acquire through direct interaction with the world. As you stated, AI's "elegance" often lies in its brute force—processing vast amounts of data to arrive at solutions. While this approach is undeniably powerful, it may not fully replicate humans' creative and intuitive leaps when solving complex problems. For AI to genuinely rival human intelligence, it may require more than data—it might need a way to "experience" the world in a more embodied, perceptual sense.
As AI systems accumulate more diverse and relevant data, human-like intuition may be only a matter of time, perhaps arriving sooner than we think. We will need other models, including rule-based systems and/or agents specific to particular tasks and areas, to provide real-world experience. I believe the path to human-like AI lies in expanding its "experience" beyond language to include richer, multimodal inputs. However, whether AI can ever truly "know" the bird—and not just its name—remains an open question.
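As a rough sketch of what "richer, multimodal inputs" could mean architecturally, here is one common pattern (encode each modality, then fuse the embeddings), written in Python with stand-in encoders and invented dimensions rather than any real system:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_text(text: str) -> np.ndarray:
    """Stand-in for a learned text encoder (real ones are trained networks)."""
    return rng.standard_normal(128)

def encode_image(pixels: np.ndarray) -> np.ndarray:
    """Stand-in for a learned vision encoder."""
    return rng.standard_normal(128)

def fuse(text_vec: np.ndarray, image_vec: np.ndarray) -> np.ndarray:
    # Simplest possible fusion: concatenate the per-modality embeddings
    # into one joint vector that downstream layers would reason over.
    return np.concatenate([text_vec, image_vec])

joint = fuse(
    encode_text("a dog greeting its owner"),
    encode_image(np.zeros((64, 64, 3))),
)
print(joint.shape)  # (256,): one vector standing in for "experience"
```

Whether stitching modalities together like this ever amounts to genuine experience is, of course, exactly the open question above.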
Fully agree with you. In another article I’ve described LLMs as booksmart. They can write a three-page essay about a dog, but will never know what it feels like to come home to one.
After reading Thomas Nagel’s “What Is It Like to Be a Bat?” earlier this year, here is another thing that I think a lot about:
He raises fundamental questions about subjective experience and its limitations—a relevant question when considering whether AI can genuinely understand what it means to be human. Nagel argues that while we might imagine aspects of a bat’s life, such as echolocation or flying, we can't know what it is like for the bat itself. This is because the bat's consciousness, shaped by its unique biological and sensory experience, is fundamentally inaccessible to us. We can theorize but not bridge the gap between our subjective experience and the bat’s.
This argument presents a challenge when applied to AI's ability to "understand" humanity. While AI can analyze and mimic human behavior, it operates from a perspective entirely different from human subjective experience. Without consciousness, emotions, or lived experience, AI cannot access the first-person perspective that defines what it is to be human. It can process human culture, values, and actions as data patterns but cannot feel or know them like we do.
Nagel’s argument also exposes the limitations of trying to reduce human experience to objective, codifiable rules:
- Values and Morality: Human values are shaped by empathy, emotions, and shared experiences over time. They are deeply personal and culturally contextual, often defying strict logic or universal rules—something AI struggles to grasp beyond surface-level patterns.
- Common Sense: Much of what we call "common sense" comes from lived experience, intuition, and social context. These are not easily translated into algorithms, even with vast amounts of data.
- Motivation and Purpose: Why humans do what we do is often tied to emotions, desires, fears, and aspirations—states of being that AI cannot authentically experience, no matter how well it simulates behavior.
Nagel’s insight—that true understanding requires the subjective, first-person perspective of being—underscores the fundamental barrier for AI. Just as we can never fully know what it is like to be a bat, AI cannot understand what it is like to be human. It can simulate understanding, predict behavior, and even appear empathetic. Still, it lacks the essence of human experience because it does not live, feel, or exist like we do.
This doesn't mean AI cannot be incredibly valuable or transformative in human society. But it suggests there is a profound difference between mimicking understanding and actually understanding. No matter how advanced AI becomes, it may never cross that gap—a gap rooted in the unique, subjective experience of being human.
Gentlemen - you all seem to be in agreement that the human essence requires human-lived experience. You may be right, and I guess we'll see what happens in the next decade or so. But I do not cede the argument. Many of the would-be 'human' characteristics cited (intuition, desire and fear, even purpose) are trainable, in my humble opinion.
My point is not to convince you that AI will become fully interchangeable with us humans. (That is, assuming we are all actually humans in this chat.) My point is that AI will come much closer than you may think.
The world evolves in ways that seem impossible or deeply counter to our intuition. My great-grandfather was told a machine heavier than air could never fly. Such a statement was truth to him, because he didn't know otherwise and had to rely on experience and intuition. As we are doing now.
I will close this conversation with the following thoughts. We can revisit this in 3-5 years to see where we stand.
First, I am not 100% sure of anything in life, and I firmly believe that we should always leave the door ajar for the possibility that we are wrong or that there is something we do not know today. This is very well stated by Richard Feynman in the following quote:
"I can live with doubt and uncertainty and not knowing. I think it is much more interesting to live not knowing than to have answers that might be wrong. If we will only allow that, as we progress, we remain unsure, we will leave opportunities for alternatives. We will not become enthusiastic about the fact, knowledge, and absolute truth of the day, but remain always uncertain … In order to make progress, one must leave the door to the unknown ajar."
AI will get closer to humans over time, but not as soon as we are being sold today. The bigger question is, how much closer, and what does that mean?
The airplane analogy—a feat once thought impossible—illustrates how human imagination and science can overcome seemingly insurmountable challenges. However, the development of flight was based on predictable, universal physical laws (as far as we know today). Once the principles of aerodynamics were discovered, achieving flight became an engineering challenge.
In contrast, the mysteries of consciousness, intuition, and intelligence are not governed by clearly understood or universally accepted laws. Instead, they exist in a realm of uncertainty similar to that of atomic-level physics, where our knowledge remains incomplete. As of today (and "today" is a crucial qualifier), we lack the scientific understanding and imagination to define these phenomena comprehensively. Without clear definitions, progress is limited. If we cannot define consciousness, intuition, or intelligence, how can we hope to recreate or replicate them?
Let's take an example: at a broad level, consciousness is an emergent phenomenon tied to our subjective experience, self-awareness, and the ability to reflect on one's existence. No matter how convincingly AI mimics human behavior, it does so without internal experience. When it generates creative works or simulates intuition, it performs a task without awareness or understanding. In this sense, AI is like a mirror: it reflects human patterns but does not become human. Without conscious experience, AI cannot truly replicate the depth of human traits like desire, purpose, or fear.
I am not saying we should dismiss AI's potential or stop exploring or using it. But it means we should approach the question with humility, recognizing the limits of our current knowledge and leaving the door open to the unknown. The challenge is not just technical—it is philosophical, scientific, and deeply tied to our understanding of what it means to be human. Until we can define and understand the essence of consciousness and intelligence, AI will remain a reflection of ourselves rather than a true counterpart.
Somehow I don't disagree with any of the specific points in this post. And yet my own mind packages them into a positive picture rather than a negative one.
After all, as other commenters have noted, much (or maybe all?) of human creation is re-mixing of past creation, and that fact is itself part of the big story of humanity. Sometimes it's annoying and small (movie theaters playing only sequels, for example, with no new movie ideas being made). But sometimes it's big, sweeping, optimistic, and sad... I think of the books in Sanskrit in "One Hundred Years of Solitude," for example.
It seems AI repackages existing human material, faster and more thoroughly than most individual humans can, and yet with many laughable errors born of misunderstanding. In the end I consider this and conclude: I'm fine with that.
I am just getting started on Substack (26 subscribers :)) but am writing something this week that's pretty relevant to this post. Take a look if you get a chance. BTW, I always love "Teaching Computers How to Talk"; it's one of my favorites.
Thank you for your thoughtful comment! Feel free to share your piece when it's published, I'd love to read it :)
Thanks for this post. One of the refrains in education is that because AI is capable, previous assessment is obsolete. But weren't all previous assessments asking students to think through what small or large new and novel idea they could bring onto the page and share?
In my opinion, teachers shouldn't focus on having students reproduce knowledge (because AI can do that faster and better than anyone else), but on critical thinking instead.
I'm not saying reproducing knowledge is the goal or outcome. A lot of writing assignments are about discovery, making connections, and deriving insights. I don't have illusions that everything I write is unique. But I do believe in the value of writing, so that I have the opportunity to create, write, or introduce a new concept.
Yes, completely agree. I think students should be writing, preferably without the help of AI. Writing is thinking. Writing is discovering. Writing is finding connections. The act of writing is crucial to learning.
https://terryu.substack.com/p/from-stuck-to-unstuck-using-ai-as?r=1gjalj
The idea that all writers who use AI depend on it to “discover” insights wholesale and deliver them wrapped in bubble wrap is not accurate. I use Claude every day to help me think through ideas that in the past I wouldn’t have spent time on. For example, I’m in a stream-of-consciousness phase, trying to improve my skills in writing interior monologues. So why is it wrong for me to take 15 minutes from time to time and build up a backlog of ideas? Say I ask Claude: help me dredge up some historically plausible thoughts William Faulkner may have had about Macbeth. What might a dialogue between Faulkner and Macbeth look like if they could get together to talk about the pregnant phrase Sound and Fury?
So I just did, Joseph. I asked Claude, and this is what I got. Did I stupidly injure my psyche? Did I hurt someone? I’m not going to write a paper—much of what Claude outputs is very close to the surface of what I would have expected. It took three minutes to get this. The machine is here. I’m an old man with a world of burning interests. I’ve got a long-standing relationship with Faulkner’s books.
FROM CLAUDE SONNET
*A humid evening in Mississippi, 1947. William Faulkner sits on his porch, bourbon in hand, when the air thickens with an unnatural fog. From it emerges the former King of Scotland, his crown askew, his hands restless.*
MACBETH: They told me a man in this strange land had taken my words—that tale told by an idiot, full of sound and fury—and made them his own.
FAULKNER: *[taking a slow sip]* Borrowed them, more like. The way we all borrow from the great river of human suffering. Your words spoke to something I saw in my own South—a place haunted by its past, by its defeats.
MACBETH: You speak of defeat, writer? What do you know of it? My armies were forests that walked, until they weren't.
FAULKNER: In Mississippi, we know something of defeated armies too. Of pride that burns long after the battle's lost. The sound and fury of it echoes through generations—the way your tale echoes through time.
MACBETH: And what tale did you tell with my words?
FAULKNER: A family's dissolution. A brother who loved his sister too deeply, a mother clinging to empty status, a father drinking himself to death. The idiot boy who perhaps sees more truth than any of them. Time flowing backward and forward like your witches' prophecies.
MACBETH: *[laughs bitterly]* Time. Yes, tomorrow and tomorrow and tomorrow. You understand something of that, don't you? The way past and present bleed together until you can't tell which is which?
FAULKNER: The past isn't dead. It isn't even past. Your ghosts at the banquet—we have those too. Every Southern family has its own Banquo, sitting in the empty chair at dinner.
MACBETH: But you didn't write of kings and battles.
FAULKNER: The Compsons were a kingdom unto themselves, like all families are. Their fall was as complete as yours. Sometimes I think every story worth telling is about the fall of a house—whether it's a royal house or just a rotting mansion in Jefferson.
MACBETH: *[looking at the bourbon]* Does it help? The drinking?
FAULKNER: About as much as it helped you to be king. *[pauses]* We're both tale-tellers, you and I. You told yours to the air at the end, I told mine to paper. Both of us trying to make sense of the sound and fury.
MACBETH: And did you? Make sense of it?
FAULKNER: No more than you did. But maybe the telling itself is what matters. The sound and fury might signify nothing, but we signify something by bearing witness to it.
*The fog begins to thicken again.*
MACBETH: The night calls. It always does.
FAULKNER: *[raising his glass]* To the tale-tellers, then. And to the sound and fury that drives us to tell them.
*Macbeth fades into the mist, leaving only the heavy Mississippi night and the clink of ice in a bourbon glass.*
Never said it was wrong :)
I just think AI needs to be used responsibly. And there is a difference between you, who uses it for exploration, and a student who doesn't know much yet and doesn't have the critical thinking skills to engage like this with AI.
Right, but Joseph did. My response is linked to his comment. Joseph doesn’t understand the real issue. It’s not an AI problem. It’s a human problem. Kids need teaching, not neglect. Do you think I’m not aware of that, having been a fourth-grade teacher? Kids need teachers who are expert in AI, not teachers who are quaking in their boots. Check out my post “Stuck and Unstuck”; it’s the most recent on my site.
Lots of assumptions about me not understanding.
You write, "If you believe the dominant narrative, echoed by the majority of tech leaders and AI researchers, you could get the impression that AI systems have now reached “PhD-levels of intelligence” and are well on track of surpassing us."
Well, AI systems have definitely far surpassed us already in regards to what we share on social media, our current human obsession.
Yes, which brings us to a bigger question. Who owns internet information? The creators, or the database owners who store it and then make trillions?
You write, "In other words, artificial intelligence is human intelligence, remixed."
Humans are chimpanzee intelligence, remixed. Evolution continues on its journey. Seriously, as always, I will urge readers to meet our closest genetic cousins in this excellent documentary.
https://www.youtube.com/watch?v=NjgL7Pumb4Q
I agree the corporate hype machine has seriously distorted this topic, but so have commentators when they forget that AI technology is at a stage similar to the first Ford Model T automobile. Yes, AI has many limitations and problems. What did you expect?
As a quick aside, it's interesting to note that Nietzsche, a philosopher, and thus presumably an expert on the use of human reason, made his grand claim based on no proof of any kind. That ironic tradition lives on today in popular speakers like Richard Dawkins. For some reason, it's said to be very wrong for theologians to make claims based on no proof, but perfectly fine for atheists to do the same. The triumph of ideology over reason.
Nietzsche made no claims about the existence of God (that's also not what I wrote). He wrote about a society becoming increasingly disenchanted: that without God, the center of meaning and morals we had for centuries would fall away, and humanity would need to find new meaning and new foundations to ground its morals in, in the absence of divine justice. I can highly recommend reading Also Sprach Zarathustra if you want to get an idea of his thinking, which was absolutely beyond his time.
Can you see god? No.
Can you hear god? No.
Can you taste god? No.
Can you touch god? No.
Can you smell god? No.
There is no god🤦♀️
This formula works for Santa Claus, the Easter Bunny, elves, fairies etc. God is IMAGINARY.