Discussion about this post

hexheadtn:

Melanie Mitchell affected me and my pursuit of a Master's degree in genetic algorithms. Her work is recommended.

Ilia Kurgansky:

I liked the calculator-for-text analogy. I'm not so sure about it now, but not for the reason you give, Jurgen.

An LLM is fully reliable as a piece of technology: it never fails to execute the expected calculations to produce the next token. I'd say calling its outputs "failures" is another example of anthropomorphism. "Failure" means something else in the context of a human-like system (which an LLM is not).

When we call LLM hallucinations a failure, we assume the system was built for truthfulness, which it was not and cannot be. My preferred explicit position is that an LLM produces true things by accident, just as its confabulations are accidents. The same calculations are performed at the same scale in exactly the same way; the system functions perfectly, without fail.

An LLM with no randomness, at temperature 0, is also perfectly predictable in its responses, so even the "reroll" mystique is taken out of the equation.
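To make that point concrete, here is a minimal sketch (a hypothetical toy decoder, not any real model's code) of why temperature 0 removes the "reroll": decoding collapses to an argmax over the logits, so the same prompt always yields the same token, while any positive temperature reintroduces weighted sampling.

```python
import math
import random

def next_token(logits, temperature):
    """Pick the next token index from raw logits.

    At temperature 0 this is greedy argmax -- fully deterministic.
    At temperature > 0 it samples from the softmax distribution.
    """
    if temperature == 0:
        # Greedy decoding: always the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax sampling, numerically stabilized by subtracting the max.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

# Toy logits for a 4-token vocabulary; index 1 scores highest.
logits = [1.2, 3.4, 0.5, 3.3]

# Two calls at temperature 0 always agree -- no reroll variability.
assert next_token(logits, 0) == next_token(logits, 0) == 1
```

The same calculation runs either way; the only thing temperature changes is whether a random draw sits on top of it.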

It is a reliable calculator that does not solve the problems we try to use it for.
