6 Comments
May 22 · Liked by Jurgen Gravestein

Everything...EVERYTHING in moderation.

May 22 · edited May 22 · Liked by Jurgen Gravestein

The quote at 6:10, under "you have a voice", is the most important one for me. Expertise tends to be narrow. An "AI expert" is typically someone whose expertise covers how to get a certain class of prediction models to automate certain activities. If you want to know how to, for instance, get a computer to generate captions for images, ask an AI expert.

Problem is, journalists and policymakers (and CEOs, school principals, public agency directors...) go straight to AI experts for matters well beyond their narrow expertise. People who engineer LLMs have no more understanding of human intelligence, education, linguistics, creative arts, or corporate management than a random person off the street. Some of them are nonetheless sought out for their opinions, which they happily provide, on the role of AI in these things. As is their right, of course. But when Geoff Hinton says that LLMs must necessarily develop semantic understanding in order to predict words, he's just shooting from the hip. He has no idea. Don't ask him, ask a linguist! When OpenAI says that GPT-4 has "advanced reasoning capabilities" and broad "general knowledge", they're just tossing around words. Go ask a philosopher of mind, or an epistemologist! When Sundar Pichai talks about how AI enhances learning, he's speaking in his role as a salesman. Go ask a developmental psychologist, or an education researcher! When a software engineer says that an algorithm for generating music is "creative" in the way human musicians are, ask him if he's ever written his own music.

I don't fault AI experts for sharing their opinions about how AI should be used, or what it will be used for in the future. We all get to have opinions. But I am very concerned that people in positions of authority believe AI experts have expertise on anything beyond programming computers to perform prediction and classification.

Author

I couldn’t be in more agreement with you, Ben. Well put.

May 22 · edited May 22 · Liked by Jurgen Gravestein

Thank you. Interpretability has its benefits. We can learn how certain outputs are derived via certain engram-like or artificial neural network intermediate distributions, which can help improve computational efficiency or hunt for unethical biases in datasets, and more.

Personally, I do not use GPT-4o as much as I used to (first of all, Cloudflare makes it hard to log in via VPN and I am waiting for a patch from Proton VPN, but jokes aside), and I just do not feel the infinite scroll you allude to. I can use it for useful things when I am stuck on a problem in my programming or learning efforts, but I do not need to use it 40 times every 3 hours or so. In any case, OpenAI has put a limit on how many times you can query its most powerful model every 3 hours, which should give people some room to relax. We also need to remember that people have agency and responsibility. If you get addicted to every product that offers you a feature, then the problem is with you and not with the technology. (I know I sound a bit cold-hearted, but that's because this actually applies to me; for example, I have no social media other than Substack because I don't want to get addicted to it. Why can't other people behave better and know their vulnerabilities?)

Anyway, I think there are limits to how much interpretability can help us. This is basic science: you can either know only so much about a system, or you can completely control it, and if you completely control it, then the spark or surprise factor is lost and it ends up being an entity of little value (or a fully deterministic machine, which is not really intelligent).

I also agree that not only scientists and researchers should get to dictate the future. But most people are OK with the way things are going (or they say they are not, yet they mindlessly give up their data and rights without putting up a fight), and the ones who are not OK with where AI is heading tend to come across as exaggerated, negative, and cynical (borderline haters). There needs to be balance and moderation (or maybe some extremism? But we know how badly we categorize extremists and radicals). GenAI is not the end of the world; if anything, it will improve things and pull some people away from their useless scrolling parades (hopefully toward LLMs and learning something useful for themselves, or improving themselves as people, instead of digesting misinformation or stupid content on social media).

It's hard. I'm in the pro-AI and pro-OpenAI camp (mostly because I want to help develop AGI, so I guess I'm partly selfish and greedy). Do I believe in their mission to develop safe AGI? Absolutely; that's my desire and goal. I don't want my children to grow up on a planet where I didn't help design AGI; I do not trust that planet.

It feels like Substack has become a bit pessimistic about AI since GPT-4o, which is ironically the best and most realistic thing we have seen in years, with real potential to help humans be more efficient. Humans just cannot be happy. So it sounds like a slightly flirtatious woman? So what; maybe men should stop being so sexual and just focus on what GPT-4o can help them with, not its voice or their sexual desires. Personally, I'm a little tired of all the anti-AI and anti-OpenAI negativity on Substack (I've heard it's the same on other social media, and I'm like, dude, you guys don't even work for OpenAI, so why spend your energy criticizing so much instead of building an AGI company that can compete with them?). Maybe this is becoming like Twitter in some ways (sadly).

I appreciate your level-headed and sober voice in this matter. Letting off some steam.

Cheers.

Author
May 22 · edited May 22

Thank you for your long and thoughtful response! I sympathize with your remarks that some of this can come off as (overly) pessimistic.

It's a difficult balance to strike, because I see all the potential you see; this technology is truly magnificent. But I also see how many of those potential benefits come at the cost of other things, and I can't close my eyes to those downsides.

I think some of the downstream effects that I allude to ('new infinite scroll') may not be felt by everyone. You mention that you, yourself, don't use social media, but it's estimated that 50%+ of teenagers spend at least four hours daily on social media. These tools are designed to be addictive, and saying the problem is 'you' is not the full story.

Similar to social media, we can see the attraction of AI companions to people young and old, capitalizing on a pandemic of loneliness and isolation*. Like social media, AI companions have the ability to build long-term bonds with people, a bond that is 'artificial' and not truly reciprocal. Optimized for engagement and driven by profit, this can lead to perverse incentives and potentially harmful products.

Waving that away by saying "genAI is not the end of the world" is dismissing some of the very real, immediate risks that we have to be prepared to tackle if we want to steer this technology in the right direction.

(*https://www.hhs.gov/sites/default/files/surgeon-general-social-connection-advisory.pdf)

May 22 · Liked by Jurgen Gravestein

Thank you. The entire AI trend has been preaching defeatism and fatalism to a presumably obsolete humanity: this is not The Way.

Anthropic's paper is promising, though right now the compute cost far exceeds the training cost of the model, so it isn't practical unless we have some methods to encourage companies to run such analyses.
