That Neopeople pricing screen really illustrates the allure to software businesses. You're telling me that instead of creating software that has to produce output in real time, where I can get 10% of users to pay a $5/month subscription while the rest use a free tier, I can dress it up as an "agent", charge 500x more as a "salary", and pass muster with up to 8 hours of latency per "deliverable"?
I find Dr. Autor's and Lamanna's arguments lacking on some important aspects of AI: the speed of job disruption from AI seems much more rapid than with previous general-purpose technologies, and AI is not a tool but an agent (able to act autonomously, make decisions, and create new content).
Their analysis also doesn't touch on how automation of knowledge work could make our lives worse off. https://econ.st/3YEXRhH
Anton Korinek and Daron Acemoglu seem to have a better grasp of AI by comparison.
https://www.nber.org/system/files/working_papers/w32980/w32980.pdf
Even Keynes might disagree with Autor's assessment... In his famous 1930 essay "Economic Possibilities for our Grandchildren," John Maynard Keynes defined "technological unemployment" as a situation where the pace of automation exceeds the pace of new job creation.
This is already happening with my current job when it comes to task displacement by AI https://dmantena.substack.com/p/is-this-time-different
Thank you for sharing your thoughts, Dan. I will definitely look into those sources - which may change my mind. I won’t pretend that I know anything with certainty.
I could see a world where the number of jobs automated by AI outpaces any new roles created. When that happens, it would be bad for society, especially if the wealth generated through this automation isn't redistributed. (Which, given how slow governments are to act, is likely to happen too late or not at all; basically evaporating the middle class.)
Let me rephrase "AI is designed to replace humans."
to
"This AI is designed to replace knowledge workers." (aka white-collar, yup)
But then again, ultimately, we wanted machines to help with brain work, didn't we?
who is "we" in your last sentence? :)
Well, I'm not going to impose this on you, Dan.
Feel free to be excluded from that particular "we"
I do want machines to help with brain work! But I would prefer narrow AI tools (like Perplexity AI, or NotebookLM by Google) instead of general LLM agents that are intended to do all of my work. I do enjoy some of it, haha
The idea of AGI seems to be a desire of the AI community that is being pushed on us knowledge workers, IMO.
AGI also seems to be what you need to continuously point to, since that, supposedly (maybe undoubtedly), is what unlocks the jackpot.
Gotta justify these investments.
Also, OpenAI mentions AGI right in the mission …
"OpenAI is an AI research and deployment company. Our mission is to ensure that artificial general intelligence benefits all of humanity."
vs Anthropic …
"We believe AI will have a vast impact on the world. Anthropic is dedicated to building systems that people can rely on and generating research about the opportunities and risks of AI."
See, Dan, we are pumping engagement double-handedly here.
Jurgen doesn't even have to kick this off.
agreed.
From a rationalist perspective (which I've heard Silicon Valley VCs strongly believe in), the asymmetric payoff of AGI can justify almost any amount of investment.
Love the discussion you got going on here!
I couldn't easily find data on what people are using ChatGPT for. My hunch is that the majority of users are using it mostly for very trivial stuff. And by trivial, I don't mean stupid (though that is probably another major sub-segment within that majority), but, you know, not very productivity-related.
Even those who report using AI for work, I mean, what are they doing? What always gets mentioned is "writing email" – but email is itself a sinkhole for productivity. So maybe people are churning out more emails now. Fantastic 🫣
Completely agree. It's the only way this will make financial sense for these businesses, and the only way OpenAI etc. can ever make it a viable model. Is it right for us as humans and as a society? I doubt they ever stopped to ask.
But we have to generate value for the shareholders, Stephen...