On May 22, 2023, Sam Altman, Greg Brockman, and Ilya Sutskever published a joint statement on OpenAI’s blog that has been largely overlooked. A shame, because it was revealing in many ways.
The tagline of the piece reads: “Now is a good time to start thinking about the governance of superintelligence—future AI systems dramatically more capable than even AGI.”
Building safe and beneficial artificial general intelligence (AGI) has been part of OpenAI’s mission from the start. However, among philosophers, scientists, and industry leaders there is no commonly held definition or set of requirements that constitutes an AGI system. Everyone kind of operates under the same notion: we’ll know it when we see it.
This is, however, the first time OpenAI has mentioned anything beyond AGI, and it is unclear what it alludes to. Talking about the emergence of systems dramatically more capable than AGI seems… a little bit dramatic.
OpenAI’s call for regulation
Anyway, in the opening paragraph the writers explain:
“Given the possibility of existential risk, we can’t just be reactive. Nuclear energy is a commonly used historical example of a technology with this property; synthetic biology is another example.
We must mitigate the risks of today’s AI technology too, but superintelligence will require special treatment and coordination.”
To mitigate risk, OpenAI advocates governmental oversight. They believe it is time for regulators to step in and implement measures such as licenses or audits for companies that develop models above a significant capability threshold. On top of that, they say, leading efforts to develop superintelligence should be aligned, preferably under the oversight of an international body similar to the IAEA.
The call for regulation is in line with Sam Altman’s appearance at the US Senate hearing last week and stems, I believe, from genuine concern. I’d like to echo something Gary Marcus said during that hearing that spoke to that:
“Let me just add for the record that I'm sitting next to Sam closer than I've ever sat to him except once before in my life, and that his sincerity in talking about those fears is very apparent physically, in a way that just doesn't communicate on a television screen.”
And FYI, Gary Marcus has been openly critical of OpenAI and of the widespread integration of large language models in society, and he did not bite his tongue during this hearing either. You can rewatch the full recording here.
Arguments going around online that OpenAI is advocating regulation only to suffocate its open-source competition have not been particularly convincing to me.
Accelerating AI development
Beyond laying out their thoughts on regulatory oversight and global governance, part of the blog post speaks to future potential. In it, OpenAI takes an openly accelerationist stance. They ask themselves the rhetorical question of why build artificial general intelligence in the first place, and provide two main reasons:
OpenAI believes it’s going to lead to a much better world than what we can imagine today. The economic growth and increase in quality of life will be astonishing, and;
OpenAI believes it would be unintuitively risky and difficult to stop the creation of superintelligence. The cost to build it decreases each year, and the number of actors building it is rapidly increasing, so the question is not if it’s going to happen but when.
What they’re basically saying is: if someone’s going to do it eventually anyway, we’d rather do it ourselves. At least we’re committed to doing it right.
It is also the strongest argument against alarmists who are calling for a pause on the development of increasingly capable AI systems: slowing down is not an option, because if ‘we’ slow down, someone else will push forward regardless.
The unprecedented level of coordination necessary for a truly global governance framework (the only thing I can think of that comes close is the Paris Agreement on climate change) is ultimately a political endeavour that is up to world leaders to initiate and enact, not the responsibility of one company.
You might ask how OpenAI reconciles its clearly accelerationist stance with its call for regulatory oversight. In all honesty, I think the one does not exclude the other. Although there is an obvious tension, it is possible to simultaneously hold the position that regulation is necessary and believe you should keep propelling the field forward.
Don’t panic!
The alarmists and the accelerationists have one thing in common, by the way: they both believe we’re close to a tipping point, convinced that AGI (whatever it is you’re imagining when you hear that word) is inevitable in the medium to long term.
Refreshing takes from scientists like Gary Marcus, Walid Saba, and others challenge that narrative. Although they voice concerns about the short-term impact of this technology on society, they tend to focus less on the runaway, out-of-control superintelligence so ingrained in our collective consciousness through dystopian novels and movies. They focus less on it because current systems and architectures do not exhibit any real intelligence at all, and there is no reason to believe we are close to emulating it.
A brilliant paper on the topic, titled “AI as Agency Without Intelligence”, was published by Oxford professor Luciano Floridi and can be found here. A recent opinion piece by Walid Saba can be found here.
A final note to end on: with regard to technology and the future in general, it is easy to let our imagination run wild. The number of possible future worlds we can imagine is infinite, but the number of probable future worlds is much smaller and harder to figure out. Maybe, just maybe, it won’t all go as fast as we think.
Jurgen Gravestein is a writer, consultant, and conversation designer. Four years ago, he stumbled into the world of chatbots and voice assistants. He was employee no. 1 at Conversation Design Institute and now works for the strategy and delivery branch CDI Services helping companies drive business value with conversational AI.
Reach out if you’d like him as a guest on your panel or podcast.
Appreciate the content? Leave a like or share it with a friend.