
You write, "I’d even go as far as to say that language alters our perception."

To be more precise, what alters our perception is the nature of what both the speaker and their speech are made of. Thought. I don't mean the content of thought, but the nature of the medium itself, the way it works. Language reflects the properties of what language is made of.

Thought operates by dividing the single unified reality into conceptual parts. The noun is the easiest example of this. More here:

https://www.tannytalk.com/p/article-series-the-nature-of-thought

It could be interesting to speculate about the degree to which AI will also inherit and reflect the properties of thought. As I'm typing this, I'm realizing I haven't really thought about that enough.

Hmm.....


I sympathise with your view, Phil. I've not thought about it in much detail either, but I'm a fan of Wittgenstein's view on language, which can be summed up as: "The meaning of a word lies in its use." Everything is context. And only by engaging in a "conversation" with the world can we learn about it in a meaningful way. AI, of course, does NOT engage in this conversation with the world; it just has access to the words.


True, but language is used to communicate thought, and not-necessarily-true "truths" like "single unified reality" can then end up stored as "facts" in millions of models of reality (models which are more commonly known as "the reality").


AI doesn’t have to be intelligent to actively participate in generating new understanding. https://open.substack.com/pub/cybilxtheais/p/the-hall-of-magnetic-mirrors?r=2ar57s&utm_medium=ios


How many laws of the universe have you discovered?

Personally, I've discovered 0, and so don't much hold it against AI for having discovered the same number.

As for "hallucinations", we do have a better word for that, it just never caught on. That word is "confabulation."

In either case it has nothing to do with biases in the training data, just with output unwarranted by either the context or the data.


I’m not sure I like confabulation either. Although it’s probably better than hallucination, it’s still a very human psychological phenomenon, and by calling it that you also adopt the other connotations and beliefs that come with it.

LLMs cannot lie nor can they be genuine. I personally would opt for something more neutral.


I am very purposely adopting those connotations, as I strongly suspect approximately the same mechanisms are at play in LLM and human confabulation.

Specifically, the mechanism of behaving as if you are an entity that has the information needed to keep generating output that can withstand scrutiny when you are, in fact, an entity that lacks that information and fills the void with plausible outputs instead.
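To make the suspicion concrete: here is a toy sketch (made-up logits and a four-token vocabulary, not any actual model) of why the void gets filled. The decoding loop always emits the most probable continuation; declining to answer is not a privileged outcome, just another string that has to out-score the plausible ones.

```python
# Toy decoding step: invented scores over an invented vocabulary.
import numpy as np

vocab = ["Paris", "London", "Atlantis", "I-don't-know"]
logits = np.array([2.1, 1.9, 1.7, -3.0])   # made-up logits, not a real model

probs = np.exp(logits - logits.max())
probs /= probs.sum()                       # softmax

# Something plausible always comes out, stated with full confidence;
# no step asks "do I actually have this information?"
print(vocab[int(np.argmax(probs))])        # -> "Paris"
print(dict(zip(vocab, probs.round(3))))
```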


Hmm, not sure if I agree. Why would you compare the two when memory in humans works very differently from memory in LLMs? The mechanisms are not the same: people don’t do approximate retrieval, we don’t plot knowledge on a curve in vector space, and brains are not made up of 0s and 1s.

Analogies are useful to understand the limitations of certain technologies, but I feel we shouldn’t conflate the analogy with the real thing.


Why do you think people don’t do approximate retrieval? Certainly you must have had instances of tip-of-the-tongue syndrome in your life, or instances of vaguely recalling something but struggling with the exact details until you get a few more hints/reminders.

I don’t think the 1s and 0s thing is a relevant point, since that’s more a question of substrate than of mechanism.
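To be concrete about the model side of the analogy, here is a toy sketch of "approximate retrieval": invented vectors, not a real embedding model. Recall is a nearest-neighbour match in a vector space, so a noisy cue lands confidently on the closest stored item, whether or not it is the right one.

```python
# Toy "approximate retrieval": invented vectors standing in for embeddings.
import numpy as np

memory = {
    "Berenstain Bears": np.array([0.90, 0.10, 0.30]),
    "Berenstein Bears": np.array([0.88, 0.12, 0.31]),  # near-duplicate
    "Paddington Bear":  np.array([0.20, 0.90, 0.50]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recall(cue):
    # The closest stored item wins -- there is no exact-match check
    # and no built-in signal of doubt.
    return max(memory, key=lambda k: cosine(memory[k], cue))

# A noisy cue for "Berenstain" confidently retrieves the misspelling.
print(recall(np.array([0.875, 0.125, 0.315])))  # -> "Berenstein Bears"
```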


When I vaguely recall something or have a tip-of-the-tongue moment, I don’t confidently produce an entirely false answer. (Some folks maybe do, but do so intentionally, I guess.)

We might agree to disagree here, but in my opinion the mechanism is everything. Again, I think the comparison to human cognitive processes is useful in the context of making something understandable, but we should be careful not to conflate.


The literature on confabulation is pretty clear on this, I think. Even if you personally don’t confidently produce a wrong answer as a result of approximate recall, most people do, to the extent that memes have been born out of things like the Mandela Effect and the spelling of the Berenstain Bears. (But it’s much more likely that you too do in fact confidently produce wrong answers just like everyone else, and don’t realize you’re doing it, for the same reason no one else does.)


I wonder if one of the reasons that thing people say all the time about language limiting creativity never seemed to fit for me is that I'm one of those weirdos without an internal dialogue. Words (also numbers, dates, etc. - it's a pain in the arse) don't stick in my head like that.


Besides intelligence and learning, there is a case to be made around AI not being capable of imagination or intuition: https://open.substack.com/pub/unexaminedtechnology/p/the-two-is-we-need-to-include-in?r=2xhhg0&utm_medium=ios


Pray that someone with even a little depth in set theory doesn't decide to oppose your case.


I found the passage describing people as increasingly machine-like, and vice versa, very interesting. In fact, reading different papers, I happen to come across very similar titles. That is probably partly because we want to emphasize those characteristics of these machines that are easiest to understand. And perhaps it is also a stylistic choice, to reduce friction between new users and chatbots. Don't you think?


"Most people don’t realize it this, but the humanization of AI was baked into the language from the very start."

How else might it have been done that would have been better, do you think?


Is the Ricky Gervais clip meant to demonstrate how human hallucination exists even within the (so it is claimed) premier human cognitive domain: The Science?

This rabbit hole is endlessly deep, and seems to have substantial cloaking abilities!!


The point of this clip was that science books come back once they are destroyed, because humans are smart enough to figure out nature's principles again, and therefore all the scientific theories that we have today will at some point be rediscovered.

LLMs would not be able to do such a thing. Because they don't learn. They don't discover anything new on their own, in other words: an LLM can't make progress.


> The point of this clip was that science books come back once they are destroyed

Ricky is speculating (hallucinating, speaking in an epistemically unsound manner, etc.), *necessarily*, because counterfactual reality is inaccessible in fact (though "not" *in experience*).

> ...because humans are smart enough to figure out nature's principles again, and therefore all the scientific theories that we have today will at some point be rediscovered.

lol...humans are highly subject to hallucination.

> LLMs would not be able to do such a thing. Because they don't learn.

They can currently reproduce massive amounts of scientific literature, though with some errors. Humans also very often do not &/or cannot learn *certain things*, fwiw.

> They don't discover anything new on their own...

1. Define "new".

2. Present a proof please. The one you read before upgrading this Belief to Knowledge will do (I presume you did read one first?).

> ...in other words: an LLM can't make progress.

See also: Humans, particularly Allistic ones. Have you ever been formally diagnosed?
