11 Comments

Great article! As I argued here (https://blog.apiad.net/p/can-machines-talk), and as you correctly claim at the beginning, the Turing test is not a scientific protocol but a thought experiment. Turing actually shifts the conversation from intelligence to thinking. What the Turing test is meant to show is that thinking is a functional concept, in the sense that anything that performs the function of thinking *is* thinking, regardless of implementation. So far, we can safely say that none of the existing language models perform this function to the level Turing intended in his test. Maybe GPT-5 will, and that will be something to behold!

author

Thanks for your comment, Alejandro! I actually read your piece last night and found it highly informative. Great breakdown of how Alan Turing approached the topic of machine intelligence. It's even more impressive because, in his time, the systems we have today must have been hard to fathom.

Thanks for your kind words ;)

Yes! Many of the x-risk AI doomsters claim that AI will out-compete us because it is way more intelligent than us (humans going up against AI is like "a 10-year-old trying to play chess against Stockfish 15" - Yudkowsky, 2023).

But the big risk from AI is not its intelligence, but its charm.

Daniel Dennett wrote earlier this year:

"..Our natural inclination to treat anything that seems to talk sensibly with us as a person—adopting what I have called the “intentional stance”—turns out to be easy to invoke and almost impossible to resist, even for experts. "

D. Dennett, "The Problem with Counterfeit People"

https://tufts.app.box.com/s/894vdcbyxr1ic468jcxseckuo2ebkvsk

You point out that "humans are the single most adaptive species on Earth". One of the reasons for our success is that we *cooperate* with each other on a much larger scale than other mammals (Dunbar, 1998); our civilization is built entirely on trust (Nowak, 2006). If counterfeit people fundamentally undermine our trust in each other, then our civilization risks collapse. You claim that we will adapt to the new status quo, but your argument that we will do so because we have adapted in the past suffers from the problem of induction (https://en.wikipedia.org/wiki/Black_swan_theory). The intentional stance may turn out to be our species' Achilles heel.

Dennett, D. C. "The problem with counterfeit people." The Atlantic (2023).

Dunbar, Robin I. M. "The social brain hypothesis." Evolutionary Anthropology: Issues, News, and Reviews 6.5 (1998): 178-190.

Nowak, Martin A. "Five rules for the evolution of cooperation." Science 314.5805 (2006): 1560-1563.

Yudkowsky, Eliezer. "Pausing AI Developments Isn't Enough. We Need to Shut it All Down." Time (2023). https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/

author
Dec 10, 2023 (edited)

Thanks for your thoughtful response, Steve. Also thank you for pointing me to Dennett's piece on "The Problem With Counterfeit People". I had not read that yet.

It's similar to Yuval Harari's argument, paraphrased here: "Modern democracy is built on our ability to have a conversation, and AI is threatening the breakdown of this conversation." (https://youtu.be/7JkPWHr7sTY?feature=shared&t=1086)

When I say that I'm confident that we will adapt, that expression comes from a place of hope and not a place of certainty. I don't know if we will succeed in mitigating the risks and maybe we won't, but I know that we will try.

The silver lining in our beguilement with AI chatbots is that benign chatbots may be able to facilitate greater cooperation among humans. Our research has shown that LLMs are able to operationalise concepts such as altruism, selfishness, competitiveness and cooperation in social dilemma experiments such as the prisoner's dilemma (a minimal sketch of that setup follows below). This suggests they have some "understanding" of these concepts and might be able to apply them in other task environments. One idea our group is exploring is whether chatbots can be fine-tuned to act as third-party intermediaries in negotiations that resemble public-goods games, e.g. climate negotiations. It is well known that pre-play communication can enhance cooperation in such settings, and that the final level of cooperation depends on the quality of the communication. We are currently designing experiments to see whether games conducted under a chatbot-facilitated pre-play communication treatment yield more cooperation than those without. More details at https://github.com/phelps-sg/llm-cooperation and https://sphelps.net/llm-cooperation-slides.pdf.
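To make the experimental setup concrete, here is a deliberately minimal Python sketch of repeated prisoner's-dilemma rounds. The `llm_choose` function is a hypothetical stand-in: in the actual studies the move would come from prompting a language model with a persona and the game history, not from the hard-coded rules below, and this is not the API of the llm-cooperation repository linked above.

```python
# Minimal sketch of an LLM-as-player social dilemma experiment.

PAYOFFS = {  # (my move, their move) -> (my payoff, their payoff)
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # sucker's payoff vs. temptation
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def llm_choose(persona: str, history: list) -> str:
    """Hypothetical stand-in for an LLM call. In a real experiment the
    model would be prompted with the persona and game history, and its
    reply parsed into 'C' (cooperate) or 'D' (defect)."""
    if persona == "altruistic":
        return "C"
    if persona == "selfish":
        return "D"
    # Default: reciprocate the opponent's last move (tit-for-tat).
    return history[-1][1] if history else "C"

def play_round(persona_a: str, persona_b: str, history: list):
    move_a = llm_choose(persona_a, history)
    # The second player sees the history from its own perspective.
    move_b = llm_choose(persona_b, [(b, a) for a, b in history])
    return move_a, move_b, PAYOFFS[(move_a, move_b)]

history = []
for _ in range(5):
    a, b, (pay_a, pay_b) = play_round("altruistic", "selfish", history)
    history.append((a, b))
    print(f"A: {a}  B: {b}  payoffs: {pay_a}, {pay_b}")
```

The pre-play communication treatment would slot in before the round loop, with the facilitating chatbot's messages added to the context each player sees.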

"These systems don’t learn from first principles and experience, like us, but by crunching as much human-generated content as possible"

I've been trying to get my head around this as much as possible (outside of this Substack). What did you mean by systems not learning from first principles [like humans]? How do humans learn from first principles? Do we not just "crunch as much human-generated data as possible" when we are learning? I'd be interested in your view. Cheers

author

Humans make sense of the world not just by reading, but by experiencing the world around them. We take in new information all the time. We reason about the world, we form ideas, test them and update our beliefs.

LLMs do not have this capability. They do not form beliefs, opinions, or judgments. They process and generate text based on statistical patterns rather than any form of critical thinking or conscious understanding.
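To make "statistical patterns" concrete, here is a deliberately tiny sketch: a bigram model that learns nothing except which word tends to follow which in its training text, and then generates by sampling from those counts. Real LLMs are transformers of an entirely different scale and sophistication, but the underlying principle of predicting the next token from observed patterns is the same in spirit.

```python
import random
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a training
# text, then generate by sampling the next word from those counts.
corpus = "the cat sat on the mat and the dog slept on the rug".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # no observed continuation for this word
        choices, counts = zip(*options.items())
        # Sample in proportion to how often each continuation was seen.
        words.append(random.choices(choices, weights=counts)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the mat and the dog"
```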

To give you an analogy (albeit an imperfect one): it is as if you put someone in a room from birth, never let them outside, and all they had was a big library of books about everything out there in the world. Their knowledge of the world would be entirely second-hand, and that is exactly the gap LLMs have: they don't experience anything.

I'd be interested in reposting this as a guest post, perhaps with extended analysis from the paper. Let me know. https://aisupremacy.substack.com/p/guest-posts-on-ai-supreamcy

author

Hey Michael! Sure, could you send me an email at jurgen@cdisglobal.com so we can discuss? Happy to hear what you would like to see added to it.

I seem to have the opposite problem: I tend to think humans are bots... unless my gut knows something my brain doesn't :/
