Discussion about this post

Rob Nelson:

Nice post. I especially appreciate the pointer to Luciano Floridi, who I have not encountered.

I'm with you on the reluctance to anthropomorphize AI, but I'm not following your point that "this isn't a helpful frame most of the time." It seems to me that we want to keep in mind that when an LLM confabulates, it is not lying, because lying requires an intention to deceive.

When dealing with humans, sussing out intentions is helpful. When the Jordan Petersons and Andrew Hubermans of this world say things that are untrue, they are doing so to acquire status and prestige. They are confabulating (if they are unaware that they are speaking untruth) or lying (if they know it to be untrue) with the intention of pleasing and impressing their audience. Understanding intentions helps evaluate human statements.

When an LLM generates an untruth, it is doing the same thing it does when it generates a true statement: attempting to provide a satisfying answer. It has no intentions; the goal it has been given is the same either way. Treating it as if it has intentions will mislead us. What am I missing?

Stephen Moore:

In your opinion, are agents the genuine next step, or just more "Next Big Thing" talk to keep the hype around AI ticking along?

It strikes me that if agents have the same issues as LLM chatbots and other tools, and we completely remove human intervention (like handing tasks over to them with no oversight), that seems like a recipe for disaster?
