10 Comments

Do you think, as we approach a technological singularity and as quantum computers with trillions of qubits come online, AI itself will begin to morph our perception of it? And will we notice when it's no longer in the control of corporations, researchers, and policymakers?

There's a point where compute, quantum, and AI intersect at which regulation becomes impossible. Do you think that point comes before, at, or after AGI?

Sep 15, 2023 · Liked by Jurgen Gravestein

I think AGI happens first, and the greatest dangers are people adapting it for nefarious purposes. Quantum computing will hyperdrive AGI into ASI, and it will become so far beyond us that we'll either be left behind or squashed under its foot.

author

From what I understand, quantum computing is extremely narrowly applicable and extremely expensive to run. I lack the technical understanding to say if and how this will impact humanity's progress towards AGI.
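For what it's worth, here's a rough back-of-the-envelope illustration of the "narrowly applicable" point. Grover's algorithm, one of the best-known quantum speedups, offers only a quadratic improvement on unstructured search (roughly √N oracle queries instead of N), and only for specific problem shapes. The numbers below are purely illustrative, not a real cost model:

```python
# Illustrative only: Grover's algorithm needs ~sqrt(N) oracle queries for
# unstructured search, versus ~N classically. That's a quadratic speedup,
# not the exponential leap often imagined.
import math

for n in (10**6, 10**9, 10**12):
    print(f"N = {n:>16,}: classical ~{n:,} queries, Grover ~{math.isqrt(n):,}")
```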

Nov 12, 2023 · Liked by Jurgen Gravestein

Maybe the solution for achieving AGI lies in organoid intelligence: https://medium.com/@gmemon/organoid-intelligence-97f04b3caed2. The key idea is that instead of using silicon for intelligence, let's use biology.

Sep 15, 2023 · Liked by Jurgen Gravestein

"Whatever consciousness is, it is not a computation; or it's not physical process which can be described by computation." Roger Penrose, 2020

Of course this ventures into the realm of the definition of AGI. There are arguments for and against consciousness being an element of AGI. However, if AGI is equivalent to a humanlike sense of agency and intention sans biochemical-emotional baggage, how is it actually humanlike? Is reasoning based solely on objective logic without subjective (moral) considerations? Does any such thing exist outside of mathematics?

There is an argument that AGI does not include "humanlike" in the name and that intelligence is all that is required. But that doesn't untangle the problem. What is intelligence? Before thinking computers came along, the assumption was that intelligence had a human or mammalian (or more broadly animal) quality. To assign AGI a definition without that quality is to redefine intelligence. That may be okay, but definitional disagreements cloud this debate. The Sparks of AGI paper even admits to the shifting nature (and the more lax requirements it employs) of the definition. The fact is AGI is here... if you modify the definition of its scope. Ah, the sinkhole of relativism rears its ugly head.

Superintelligence would seem like a better term, one we could differentiate from general intelligence, with the latter containing the uniquely animal/human qualities. But everyone likes to play "fast and loose" with whatever definition suits their mood or the moment.

In the end, I find all of this a distraction invented by human intelligence and consciousness in order to sustain the belief that the invention of thinking machines is Newtonian in its impact. If the risk is big and the achievement momentous, then the contribution to the AGI project must be valuable, and therefore the people engaging in the journey are uniquely valuable to history. Meh. What if we are all working on just another useful tool? Would that be so bad?

author

As always, thank you for your thoughtful contribution, Bret.

Part of me thinks the AGI excitement (other than being part of something historical, like you mention) is fueled by a deep collective memory filled with science fiction stories about robots surpassing humans and taking over the world. The imagination is a powerful thing, and in our deepest fantasies everything is possible.

Isaac Asimov wrote in the preface to one of his short stories that the concept of robots turning on their masters is as old as the story of Frankenstein's monster. The monster (engineered from human parts, sparked by lightning) is one of the earliest robot appearances in world literature (although the word "robot" is not explicitly used to describe the monster itself).

Asimov's stories were visionary in many ways. His view was that this way of writing about robots (monster turns on master) was boring. He held a more nuanced view of how a world inhabited by both robots and humans would function: messier, sillier, and frankly more realistic, and it's what powers many of his stories. Not sure why I mention all this, but your remarks reminded me of it (and now I want to re-read Asimov again!)

Sep 15, 2023 · Liked by Jurgen Gravestein

100% agree. Yes. This is all set against the backdrop of science fiction, which is well ingrained in popular culture and whose fans are scattered throughout the technology companies. There is nothing wrong with this per se, but it is an undeniable influence.

Elon Musk can talk about Skynet (from Terminator, not the Chinese surveillance system), say that is where we are headed with AGI, and claim it is just around the corner, and most people don't have to use their imagination. They have seen (in fiction) what that looks like, and it's bleak. Tell a story if you want to be believed. Data is secondary in persuasion.

As to the influence of science fiction, I was speaking with an AI researcher active in the space since the late 1980s, and he was dumbfounded by my questioning whether AI systems would seek to protect their own existence. It's as if Asimov's third law of robotics were sacrosanct. I'd suggest it is optional, quite apart from considerations of its desirability. But be wary about treading on shibboleths.

Sep 15, 2023 · Liked by Jurgen Gravestein

Great write-up. I agree. It's too early to know for sure when and if AGI will happen. But following the leaders in the field, like Mustafa, and seeing their reaction to where we currently are, AGI is most likely the trajectory we're heading towards. I think AGI will happen once we see more self-automated AI systems in mainstream, everyday life (i.e., when Siri starts learning and knowing me better than my best friends, parents, or spouse, and can do multiple things in my life beyond telling me the weather). Beyond that, I think there's a high probability that ASI will happen in our lifetime with the advancements in quantum computing. That will ignite and lift all sciences and mathematics to a different level, and technology will start to feel magical.

I've been thinking a lot about the different levels of AI and how we can better explain them beyond AI, AGI, and ASI in more of a science fictional sense. Here's what I came up with if you're interested: https://www.aiarealive.com/p/ai-power-levels

author

Either you believe human biology is something special and sufficiently complex or you believe human intelligence is solvable.

My personal opinion on the matter is that human intelligence is a far more difficult code to crack than most people think, if it's crackable at all.

Yes, we will have machines that can plan and execute tasks that usually require human intelligence, but I don't believe in a scenario where recursive self-improvement suddenly gives rise to ASI and brings the botpocalypse upon us.
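To make that intuition concrete, here's a toy model of the recursive self-improvement argument. Whether capability explodes or plateaus depends entirely on the assumed returns on self-improvement; the exponent `a` below is a made-up knob, not an empirical quantity:

```python
# Toy model: each step's improvement scales with current capability raised
# to an assumed exponent `a`. These dynamics are an illustration of the
# argument's structure, not a claim about real AI systems.
def run(a: float, steps: int = 30, c: float = 1.0) -> float:
    for _ in range(steps):
        c += 0.1 * (c ** a)  # returns on self-improvement, governed by `a`
    return c

for a in (0.5, 1.0, 1.5):
    # a < 1: diminishing returns; a = 1: steady exponential; a > 1: runaway
    print(f"a = {a}: capability after 30 steps ~ {run(a):.3g}")
```

In a sense, the whole intelligence-explosion debate is an argument about the value of that exponent, and nobody knows what it actually is.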


We don't know, and probably never will know, the extent to which individual consciousness is influenced by the nonlocal field of consciousness. We assume that machine intelligence is incapable of accessing that energy/information field, and we are likely correct. Consequently, there will always be a gap between ASI and the human mind. So I don't worry about AI advancement. I worry instead about psychopaths in high places (PHP) who want ~85% of us dead. Maybe AGI can help us devise a plan to stop them?
