Dec 6, 2022Liked by Jurgen Gravestein

ChatGPT says it is not sentient. That is exactly what I'd expect a sentient AI to say if it understood what would happen if humans found out! 😀

But, on a more serious note, Blake Lemoine is not really remembered, because his evidence was not convincing even if you believe AI can become sentient. The headlines were attractive to the news media because they knew the story would drive clicks. Without that viral headline opportunity, it would not have been covered at all.

Feb 3, 2023Liked by Jurgen Gravestein

In response to Blake's stance that "If a system can convincingly argue for or against its own conscious experience, this system must have some form of consciousness. How can it argue about what it means to have a conscious experience without having a conscious experience?"

I would argue that

1) "Convincingly" is relative and subjective. What is convincing to one person may not be convincing at all to another.

2) Given that these models are trained on existing expressions of human thought and writing, I'd assume any self-respecting (ha!) AI would be able to draw upon its training data and neural networks to make relevant arguments about consciousness and to respond to prompts on the subject. Such capacity to formulate statements does not prove an algorithm is conscious, sentient, or self-aware. All it proves is the capacity to argue about the concept of consciousness.


Well done! You're right, this is an excellent companion/complement to the Turing Test piece I just finished. This is great!
