Key insights of today’s newsletter:
The Association for Mathematical Consciousness Science (AMCS) calls for increased funding for consciousness and AI research, citing its absence in recent AI safety discussions.
A comprehensive study by 19 researchers presents criteria to assess AI consciousness and concludes that no current AI systems are conscious.
Opposing views exist on whether AI can indeed become conscious at some point, but it’s probably safer to assume that it is possible than to dismiss the idea.
↓ Go deeper (8 min read)
In recent comments to the United Nations, the Association for Mathematical Consciousness Science (AMCS) called for more funding to support research on consciousness and AI. They expressed concern about the absence of the topic in recent AI safety discussions, including the AI Safety Summit and the Biden administration’s executive order.
While it’s easy to dismiss or ridicule concerns over AI consciousness, part of being skeptical is being open to the possibility of something being true. So what does the science say?
In a comprehensive paper, Consciousness in Artificial Intelligence, published in August, 19 researchers attempt to tackle the subject. As part of their research, they came up with a checklist of criteria to assess whether a system has a high chance of being conscious. Looking at the AI systems of today, they concluded:
“…that no current AI systems are conscious, but that there are no obvious technical barriers to building AI systems which satisfy these indicators.”
There’s no one test for consciousness
I’ve read the paper in full, so you don’t have to, and collected the most interesting bits and pieces. Sprinkled with some personal commentary, of course.
The executive summary reads:
The question of whether AI systems could be conscious is increasingly pressing.
Is it? The broad consensus seems to be that today’s systems are not conscious. Also, there’s no reason to assume further scaling could suddenly ‘spark’ AI consciousness. Therefore, it’s debatable whether this is a pressing topic. I do think it has captivated the collective imagination, as it has for many decades, and we’re seeing a major surge of interest because of the recent progress made in deep learning and especially large language models.
Progress in AI has been startlingly rapid…
Yes and no. The idea of an artificial intelligence take-off has been floated by some as an existential risk. However, a recent study, Are Emergent Abilities of Large Language Models a Mirage?, suggests the results of scaling LLMs are a lot more predictable than initially thought. The researchers show that emergent abilities do not instantaneously transition from absent to present, and therefore may not be a fundamental property of scaling AI models.
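A rough way to see the mirage argument, with my own toy numbers rather than anything from the paper: if a model’s per-token accuracy improves smoothly with scale, an all-or-nothing metric such as exact match on a multi-token answer can still look like a sudden leap.

```python
# Toy illustration (my own numbers, not the paper's): a smoothly improving
# skill can look "emergent" when measured with an all-or-nothing metric.

def exact_match_rate(per_token_accuracy: float, answer_length: int = 10) -> float:
    """Probability of getting every token of a 10-token answer right."""
    return per_token_accuracy ** answer_length

# Per-token accuracy improving gradually as models scale up.
for per_token in [0.50, 0.70, 0.85, 0.95, 0.99]:
    print(f"per-token accuracy {per_token:.2f} -> "
          f"exact match {exact_match_rate(per_token):.3f}")

# Prints roughly 0.001, 0.028, 0.197, 0.599, 0.904: the smooth curve on the
# left side of the scale turns into what looks like a sudden jump on the right.
```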
…and leading researchers are taking inspiration from functions associated with consciousness in human brains in efforts to further enhance AI capabilities.
That’s correct. Large language models are huge neural nets, which are loosely inspired by the human brain. When you’re talking to ChatGPT, you’re essentially communicating with a vast network of artificial neurons that has been trained on a wide range of text data.
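For anyone who hasn’t peeked under the hood: an artificial neuron is just a weighted sum pushed through a nonlinearity, and LLMs stack billions of them. A minimal sketch, purely illustrative and nothing like the actual ChatGPT architecture:

```python
import math

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """One artificial neuron: a weighted sum of inputs, squashed by a nonlinearity."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-weighted_sum))  # sigmoid activation

# A large language model chains layers of billions of such units;
# everything it "knows" lives in the learned weights.
print(neuron([0.2, 0.7, 0.1], [1.5, -0.8, 0.3], bias=0.05))
```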
Meanwhile, the rise of AI systems that can convincingly imitate human conversation will likely cause many people to believe that the systems they interact with are conscious. In this report, we argue that consciousness in AI is best assessed by drawing on neuroscientific theories of consciousness. We describe prominent theories of this kind and investigate their implications for AI.
Because chatbots like Inflection’s Pi or OpenAI’s ChatGPT are so good at imitating us, some folks may jump to the conclusion that these systems are conscious, simply because they come across as lucid.
For that reason, the researchers deliberately chose not to use behavioural tests to assess AI consciousness. It’s too easy to game the existing tests and benchmarks that we have; one recent piece even goes as far as to say that LLM evals are nothing more than marketing tools.

The consciousness checklist
Instead of putting a model to the test, the researchers suggest that we look at different theories of consciousness and collect a list of “indicator properties”.
Much as a thermometer gauges temperature by proxy, each of these indicator properties serves as evidence pointing toward consciousness. The more indicators an AI system possesses, the more likely it is to be conscious, the researchers write.
The theories of consciousness they included were recurrent processing theory, global workspace theory, computational higher-order theories and a few others.
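To make the approach concrete, here is how I picture the checklist working. The indicator names below are my own loose paraphrases drawn from those theories, and the scoring is a simplification, not the paper’s actual rubric:

```python
# Hypothetical sketch of the "indicator properties" idea -- the indicator
# wording and the tallying are my own simplification, not the paper's rubric.

indicators = {
    "recurrent processing of inputs": False,
    "global workspace broadcasting information to specialised modules": False,
    "higher-order representations of the system's own states": False,
    "agency: learning from feedback in pursuit of goals": True,
    "embodiment: modelling how its outputs affect its inputs": False,
}

satisfied = sum(indicators.values())
print(f"{satisfied}/{len(indicators)} indicator properties satisfied")
# The more indicators a system satisfies, the more seriously we should take
# the possibility that it is conscious -- but no single indicator, or even
# all of them together, amounts to a definitive test.
```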
The researchers took a look at Transformer-based large language models; the Perceiver architecture, a general-purpose perception model that builds on Transformers; DeepMind’s Adaptive Agent, a reinforcement learning agent operating in a 3D virtual environment; PaLM-E, which has been described as an “embodied multimodal language model”; and several others.
Upon careful review, the researchers found that some of these systems possess some of the indicator properties, but that none of the existing AI systems is a strong candidate for consciousness.
If anything, this means we can safely emotionally manipulate GPT-4 to get better answers out of it, without feeling guilty about hurting its feelings.
The big question remains unanswered
The conclusions may sound definitive, but they are far from the final word on the topic. The researchers urgently call for further research on the science of consciousness, and especially its application to AI, which supports the message of the Association for Mathematical Consciousness Science (AMCS).
I do see the value of such research. If you believe conscious AI systems can be built (and it’s safer to assume that it is possible someday than to disregard that possibility), then we should also take into consideration the moral and social risks of doing so. I’m not sure, however, if I agree on the urgency of the matter.
It’s worth noting that opposing views exist. Believing AI can become conscious requires something called ‘computational functionalism’: the view that our mental states, like beliefs, desires, thoughts and so forth, are ultimately computational states of the brain.
This is not the only view. The Chinese room argument, articulated by John Searle in 1980, was an early objection to the functionalist movement. And mathematician and Nobel Laureate Roger Penrose has argued that subjective experiences cannot be achieved through computation, because the processes of the mind are not algorithmic and no sufficiently complex computer could ever replicate them. He goes as far as to say that consciousness begets understanding, because understanding presupposes having an internal experience.
I happen to sympathize with this view. The defining quality of being conscious indeed seems to be having an internal experience.
But we also have to acknowledge that whether or not it is possible to conjure up something like that in a computer is far from settled science. As such, the elusive question in the back of our heads remains: what if?
Join the conversation 🗣
Leave a comment with your thoughts. Or like this article if it resonated with you.
Get in touch 📥
Have a question? Shoot me an email at jurgen@cdisglobal.com.
It's fascinating to see some of the toughest philosophical questions, like consciousness or the nature of our existence, coming to the forefront. Our tech is sparking all kinds of classic thought experiments (and maybe launching some new ones), kind of helping us think in a more abstract way about those problems.
Using ChatGPT frequently for a variety of tasks will convince you that it’s not anywhere near human consciousness.
Part of it is how easily we can manipulate ChatGPT through prompting. I’m repulsed by the idea of consciousness being so malleable.
That malleability is what makes ChatGPT a good tool. What makes humans good friends is the opposite: they’re solid and different from us, which is why it’s so meaningful when they choose to be with us.