As you may know, the difference between assistants and agents, and the evolution from one to the other, has been one of my favorite topics since 2018. I'd caution people to check how "agent" is defined anytime someone uses the term. As you mention, Altman is using it as a repositioning technique. The most basic definition of a user agent is that it does something for you without your direct oversight. You may have initiated the activity, but the agent works on your behalf to fulfill a known or predicted goal. An assistant also does so, but operates on a request-response model with a limited scope per task. An assistant can fetch things for you or execute tasks, but always under explicit instructions as opposed to standing instructions or capabilities. Many people like to call assistants agents when they don't actually have agency.
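To make the distinction concrete, here is a minimal, self-contained sketch in Python (all names and toy data are hypothetical, not any real product's API): the assistant executes exactly one explicit request per call, while the agent, once initiated, keeps choosing its own steps toward a standing goal without per-step oversight.

```python
# Toy knowledge base for the assistant (hypothetical data).
FACTS = {"Naboo": "Mid Rim planet, capital Theed."}

def assistant(request: str) -> str:
    """Request-response: answers exactly what was asked, then stops."""
    return FACTS.get(request, "Unknown.")

def agent(goal: int, position: int = 0, max_steps: int = 20) -> int:
    """Standing goal: keeps choosing its own moves until the goal
    state is reached, with no further instructions from the user."""
    steps = 0
    while position != goal and steps < max_steps:
        position += 1 if position < goal else -1  # agent picks each step itself
        steps += 1
    return position

print(assistant("Naboo"))  # one explicit instruction, one answer
print(agent(goal=5))       # initiated once; works autonomously toward 5
```

The shape is the point: the assistant is a single function call per instruction, while the agent is a loop that owns its own control flow between initiation and goal.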
Sometimes agents can be assistants. When R2D2 responds to Anakin Skywalker with information about a planet, it is acting as an assistant. When Luke Skywalker sends R2D2 into an enemy spaceship and asks him to find a port where he can download a map of the ship and shut off access to all turbolifts except the ones that will lead the team to the right location, it is acting as an agent, navigating a complex set of variables to achieve a stated goal. There is another discussion here about imperative and declarative programming, but we can leave it at agency for now.
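Since the imperative/declarative aside maps neatly onto the same divide, here is a rough toy sketch (again hypothetical, not anyone's real system): imperative code dictates every step, the way explicit instructions drive an assistant; declarative code states the desired end state and leaves the "how" to the executor, which is closer to handing an agent a goal.

```python
# Imperative: the caller spells out exactly how to reach the result.
def imperative_sort(items: list) -> list:
    result = list(items)
    for i in range(len(result)):              # we dictate every step:
        for j in range(len(result) - 1 - i):  # compare adjacent pairs...
            if result[j] > result[j + 1]:     # ...and swap when out of order
                result[j], result[j + 1] = result[j + 1], result[j]
    return result

# Declarative: the caller states the goal ("sorted"); the runtime
# decides how to get there.
def declarative_sort(items: list) -> list:
    return sorted(items)

print(imperative_sort([3, 1, 2]))   # [1, 2, 3]
print(declarative_sort([3, 1, 2]))  # [1, 2, 3]
```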
There are also system agents, which work on behalf of systems rather than individual users.
Agents are hard. The broader the scope of responsibility, capability, and variability, the harder they get. I am a big believer in agents. Simple agents already exist, and we will see more of them in 2024. However, the agents that are extensions of assistants will take longer to develop. The improvement of assistants will necessarily precede the improvement of agents. At least, that is the view from this corner.
You might enjoy this 2023 paper by Luciano Floridi, "AI as Agency Without Intelligence: On ChatGPT, Large Language Models, and Other Generative Models" (https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4358789).
He writes that LLMs represent the decoupling of intelligence and agency, which is unlike anything we’ve seen before.
A short excerpt: “We have gone from being in constant contact with animal agents and what we believed to be spiritual agents (gods and forces of nature, angels and demons, souls or ghosts, good and evil spirits) to having to understand, and learn to interact with, artificial agents created by us, as new demiurges of such a form of agency. We have decoupled the ability to act successfully from the need to be intelligent, understand, reflect, consider, or grasp anything. We have liberated agency from intelligence. So, I am not sure we may be “shepherds of Being” (Heidegger), but it looks like the new “green collars” (Floridi 2017) will be “shepherds of AI systems”, in charge of this new form of artificial agency.
The agenda of a demiurgic humanity of this intelligence-free (as in fat-free) AI – understood as Agere sine Intelligere, with a bit of high school Latin – is yet to be written. It may be alarming or exciting for many, but it is undoubtedly good news for philosophers looking for work.”
Interesting idea. However, intelligence without agency would strip intelligence of a key part of its value. In addition, intelligence introduced into a non-intelligent system has questionable or no value. It might still be information, but it hardly qualifies as intelligence.
Clippy is over here, like:
https://www.youtube.com/watch?v=CdqoNKCCt7A
That’s hilarious. Part of me still hopes Microsoft is planning to bring Clippy back.
What if OpenAI is in on the prank, and GPT-5 is really just a fancy version of Clippy? That would melt so many brains.