Everything...EVERYTHING in moderation.
The quote at 6:10, under "you have a voice", is the most important one for me. Expertise tends to be narrow. An "AI expert" is typically someone whose expertise covers how to get a certain class of prediction models to automate certain activities. If you want to know how to, for instance, get a computer to generate captions for images, ask an AI expert.
The problem is, journalists and policymakers (and CEOs, school principals, public agency directors...) go straight to AI experts for matters well beyond their narrow expertise. People who engineer LLMs have no more understanding of human intelligence, education, linguistics, creative arts, or corporate management than a random person off the street. Some of them are nonetheless sought out for their opinions, which they happily provide, on the role of AI in these things. As is their right, of course. But when Geoff Hinton says that LLMs must necessarily develop semantic understanding in order to predict words, he's just shooting from the hip. He has no idea. Don't ask him, ask a linguist! When OpenAI says that GPT-4 has "advanced reasoning capabilities" and broad "general knowledge", they're just tossing around words. Go ask a philosopher of mind, or an epistemologist! When Sundar Pichai talks about how AI enhances learning, he's speaking in his role as a salesman. Go ask a developmental psychologist, or an education researcher! When a software engineer says that an algorithm for generating music is "creative" in the way human musicians are, ask him if he's ever written his own music.
I don't fault AI experts for sharing their opinions about how AI should be used, or what it will be used for in the future. We all get to have opinions. But I am very concerned that people in positions of authority believe AI experts have expertise on anything beyond programming computers to perform prediction and classification.
I couldn’t be in more agreement with you, Ben. Well put.
Thank you. The entire AI trend has been preaching defeatism and fatalism to a presumably obsolete humanity: this is not The Way.
Anthropic's paper is promising, though right now the compute cost far exceeds the training cost of the model, so it isn't practical unless we find ways to encourage companies to run such analyses anyway.
Thank you for your long and thoughtful response! I sympathize with your remarks that some of this can come off as (overly) pessimistic.
It's a difficult balance to strike, because I see all the potential you see; this technology is truly magnificent. But I also see how many of those potential benefits come at the cost of other things, and I can't close my eyes to those downsides.
I think some of the downstream effects that I allude to ('new infinite scroll') may not be felt by everyone. You mention that you, yourself, don't use social media, but it's estimated that 50%+ of teenagers spend at least four hours daily on social media. These tools are designed to be addictive, and saying the problem is 'you' is not the full story.
Similar to social media, we can see the pull of AI companions on people young and old, capitalizing on a pandemic of loneliness and isolation*. Similar to social media, AI companions have the ability to build long-term bonds with people, bonds that are 'artificial' and not truly reciprocal. Optimized for engagement and driven by profit, this can lead to perverse incentives and potentially harmful products.
Waving that away by saying "genAI is not the end of the world" dismisses some of the very real, immediate risks that we have to be prepared to tackle if we want to steer this technology in the right direction.
(*https://www.hhs.gov/sites/default/files/surgeon-general-social-connection-advisory.pdf)