The following is an excerpt from a blog entry titled ‘The Merge’ posted on December 7, 2017, by Sam Altman:
“The merge can take a lot of forms: We could plug electrodes into our brains, or we could all just become really close friends with a chatbot. But I think a merge is probably our best-case scenario. If two different species both want the same thing and only one can have it—in this case, to be the dominant species on the planet and beyond—they are going to have conflict. We should all want one team where all members care about the well-being of everyone else.
Although the merge has already begun, it’s going to get a lot weirder. We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.
It’s probably going to happen sooner than most people think. Hardware is improving at an exponential rate—the most surprising thing I’ve learned working on OpenAI is just how correlated increasing computing power and AI breakthroughs are—and the number of smart people working on AI is increasing exponentially as well. Double exponential functions get away from you fast.
It would be good for the entire world to start taking this a lot more seriously now. Worldwide coordination doesn’t happen quickly, and we need it for this.”
It’s no secret that OpenAI intends to build the first AGI. They are accelerationists through and through. Their rationale: if we don’t do it, someone else will. But what most people don’t know is that OpenAI foresees a future in which humans and AI will merge.
When I say OpenAI, I’m referring specifically to Sam Altman and Ilya Sutskever, who have each, independently of the other, said they believe transhumanism is the likely path for humanity. This matters because Sam Altman is the CEO of OpenAI, and Ilya Sutskever is a co-founder and Chief Scientist at OpenAI, and one of the pioneering minds behind the breakthrough technology powering these increasingly capable AI systems.
Sutskever recently shifted his attention within the company. Together with a fellow scientist at OpenAI, he has set up a team focused on so-called ‘superalignment’. The goal: to figure out ways to control a future technology that is smarter than humans. If you are among the true believers, like Sutskever and Altman, who think smarter-than-human technology is inevitably coming, you need an answer to the question of how to restrain it from going rogue.
In an exclusive interview with MIT Technology Review, published on October 26, 2023, Sutskever was quoted as saying:
“Once you overcome the challenge of rogue AI, then what? Is there even room for human beings in a world with smarter AIs? One possibility—something that may be crazy by today’s standards but will not be so crazy by future standards—is that many people will choose to become part AI. At first, only the most daring, adventurous people will try to do it. Maybe others will follow. Or not.”
Both Sam Altman, in his blog entry, and Sutskever, in his recent comments, suggest in their own way that humanity will be presented with a choice. Standing at the crossroads, we can either embrace or reject the further integration of AI into our lives, our bodies, and, at some point, our minds. What that might look like is left up to the imagination.
If we choose to reject it, in an attempt to remain untouched and protect our human sanctity, what happens next is almost certain: we’ll be outpaced by our artificial descendants or by the transhumans who chose to merge.
Join the conversation 💬
Leave a like or a comment with your thoughts. What do you think, will humans one day dream of electric sheep?
Scary. Yes, BCI has huge potential for therapeutic applications. That said, who knows the extent to which non-therapeutic use for "enhancement", "intellect highs", or recreation may open a Pandora's box of human addiction and psychosis.
This is pretty much my take in a nutshell.