Discussion about this post

Shon Pan:

I think the argument that they hallucinate does not in fact mean that they don't build a world model - I mean, look at this:

https://www.anthropic.com/research/mapping-mind-language-model

This is a world model of some sort. Hallucinations just mean that the world model can be wrong, and I think that even in the Claude example, it does some odd things like relating "coding error" to "food poisoning" in conceptual space.

But I go with Hinton here that it is grasping some sort of meaning (enough to scare me, obviously), and perhaps this should be seen as a matter of degree, with errors. Of course, there are people who also claim that LLMs are discovering Platonic truth, or at least converging to something (maybe all of the same biases?).

https://arxiv.org/pdf/2405.07987

Alan Stockdale:

Other works that cover Wittgenstein and AI include:

Graham Button, Jeff Coulter, John Lee, Wes Sharrock: Computers, Minds and Conduct

https://www.politybooks.com/bookdetail?book_slug=computers-minds-and-conduct--9780745615714

Stuart Shanker: Wittgenstein's Remarks on the Foundations of AI (the preface and the first chapter are available for preview):

https://www.taylorfrancis.com/books/mono/10.4324/9780203049020/wittgenstein-remarks-foundations-ai-stuart-shanker

And, of course, there’s also Hubert Dreyfus, who covers both Wittgenstein and Heidegger in his many critiques of AI. In a YouTube video somewhere, Dreyfus comments that the AI people inherited a lemon, a 2,000-year-old failure. By this he means that from the very beginning AI uncritically adopted assumptions from ancient and early modern philosophy about language, mind, and cognition that are complete nonsense and had already been shown to be such. (This is also true of much of cognitive science, philosophy of mind, neuroscience, etc.) The critique was there long before the famous Dartmouth Workshop. Wittgenstein was debating with Turing at Cambridge in the late 1930s and, after the war, Michael Polanyi was debating these issues with Turing at Manchester.

An engineer in the field of AI who is completely unfamiliar with this literature might do best to start with Peter Hacker's new introductory book:

https://anthempress.com/anthem-studies-in-wittgenstein/a-beginner-s-guide-to-the-later-philosophy-of-wittgenstein-pb

or try his paper on the PLA and the mereological mistake/fallacy:

https://www.pmshacker.co.uk/_files/ugd/c67313_778964f8a7e44b16ac8b86dbf954edda.pdf

Hacker, as far as I know, hasn't written directly about AI, but he and Maxwell Bennett (a neuroscientist) have written lengthy critiques of cognitive science (Dennett, Searle, Churchland, Fodor, et al.):

https://www.wiley.com/en-us/Philosophical+Foundations+of+Neuroscience%2C+2nd+Edition-p-9781119530978

It puzzles me that this literature isn't brought up more in current debates about AI. It isn't as if these long-standing critiques have been successfully addressed. The field appears to carry on in either complete ignorance or willful avoidance. "Blah, blah, blah. I can't hear you!" Language is a social institution. A person is not a mind or a brain. Being an AI researcher is like carrying on as an alchemist in a world where the grounds for a science of chemistry have already been laid out for all to see.
