So right! I have created a new Reddit sub r/croissanthippo to document the rise of AI crap slopped by the gullible.
I will definitely check that out! Thanks for your comment, David.
My 2 cents:
If AI training relies on AI-generated content, the phrase 'Garbage in, Garbage out!' comes to mind. By prioritizing quantity over quality, we may be undermining human creativity and the ability of artists and writers to make a living. This could lead to a future where we are left with only legacy art and literature created by humans who are no longer producing new work or alive.
Moreover, our ability to create art and write is fundamental to human identity. By outsourcing human creativity to machines, we are undermining our unique ability and risking the homogenization of artistic expression.
Furthermore, relying on AI for decision-making and problem-solving could do even more damage, with long-term consequences for human agency and critical thinking. As we increasingly rely on machines to make decisions, we may lose the ability to think for ourselves and make informed choices.
It's all part of reducing humans from valuable participants to cosmic baggage. Hardly something we should support.
I found the author and ChatGPT interaction very interesting:
https://billdembski.substack.com/p/arguing-about-taste-with-large-language?utm_source=%2Finbox&utm_medium=reader2&utm_campaign=posts-open-in-app&triedRedirect=true
William Dembski? I am a fan of yours!
I think rather the opposite to this.
As AI pushes further toward homogenization, the value of human creativity increases.
Real paintings, theatre, live music all become more desirable. They deliver a human connection, they mediate our communities in a way that the internet never can.
In my opinion, slop is a feature not a bug.
While AI is definitely going to undermine the ability of artists to make a living on the internet, it is going to increase their value to their real life community.
Let's come back to this in a year, when your social media feeds and search results are mostly filled with AI slop, and see if you would still call it a feature ;)
All jokes aside, I do agree that it could lead, best case scenario, to a re-appreciation of stuff made by humans.
Any social media feeds that are going to be filled with slop are probably already filled with pre-slop and/or garbage. I don’t follow them now; it’s unlikely I’ll follow them then.
It might become harder to find authentic and sincere content like yours, but I don’t think you’ll stop writing.
I think AI just gets less interesting to people over time. It’s all novelty and no content right now. Once the novelty wears off, there’s not much compelling about it.
I truly hope you're right.
The real threat is the energy and resource usage. Instead of working to create mining, transportation, and manufacturing systems that don’t need fossil fuels, we’re enabling the craziest FB meme publishers who don’t want to pay writers.
I personally do not like flaws being considered features when they degrade the overall quality of the product or outcome, but that is a discussion for another day, as I deal with this situation at my work, too.
Let's wait another 3-5 years, unless we hit another AI winter where scaling stops working, and we will know where we are heading. If it did not come across in my message above: I am not saying that real art, etc., will not become more popular, but most Renaissance artists did not make money in their lifetimes, and this may be no different.
Couldn't agree more with this. :)
What do you think about this?
https://www.theartnewspaper.com/2024/10/22/sothebys-ai-da-robot-auction-alan-turing-portrait-artificial-intelligence
“Sotheby’s will sell its first work credited to a humanoid robot using artificial intelligence (AI) later this month. A.I. God. Portrait of Alan Turing (2024) was created by Ai-Da Robot, the artist robot and brainchild of Oxford gallerist Aidan Meller. The painting is estimated by Sotheby’s to sell for between $120,000 and $180,000 on 31 October. Fittingly, Sotheby’s will accept cryptocurrency for the transaction. Meller told CBS MoneyWatch that his share of proceeds will be reinvested back into the Ai-Da project.”
Great marketing. Other than that, I find it sad and boring.
True. Reminds me of:
https://www.cnn.com/style/article/beeple-first-nft-artwork-at-auction-sale-result/index.html
An ouroboros to describe ChatGPT
Will be doing a piece on Synthetic Data soon. It's a very rich field
Some people have found they can upload more than 100 YouTube videos per day. I believe YouTube had to set a limit of 200 videos per day per account. How do you think they're generating all this content? It takes me a week to make one good YouTube video.
What will happen when we all realize that nearly all the content on social media is not from a real human? Will that be the end of social media? Perhaps by that time, social media platforms will be able to generate custom content for each user to fit their preferences and desires. And if AI is digesting its own brain, as you suggest in the article, then we will have to admit that AI is only capable of creating fiction—highly convincing fiction born of madness.
Fun fact: in an attempt to curb the number of AI books on Amazon, the company limited publishers to only three books A DAY.
I liked the article, but I didn’t agree with the emphasis on the MAD reference, as it implies that models might 'go crazy' (I know it’s catchy but it’s a bit misleading). In reality, the situation is much less dramatic than what the authors suggested. In their idealized model, precision and recall only saw a slight decrease. In the actual human-AI landscape, performance would likely plateau at worst. By that point, we’ll have developed solutions to address the issue – such as recombining models with human data 'cooked up' in parallel, or temporarily integrating a 'symbolic agent' that structures data, builds ontologies, and creates epistemologies for the large model to draw from. Other iterative solutions may also emerge. As you rightly pointed out, 'You’d be surprised, but training models on synthetic data is actually very trendy right now. Done well, it can actually enhance performance.' That’s exactly where things are headed. :)
That's a fair nuance to the paper I've referenced. However, the premise still holds: if bigger models require exponentially more data, AI companies are forced to rely increasingly on synthetic data (since there is just no unique human data to train on anymore). That doesn't seem sustainable, because "without enough fresh real data in each generation of an autophagous loop, future generative models are doomed to have their quality (precision) or diversity (recall) progressively decrease."
You're absolutely right in saying that AI companies would probably not let it come this far, and indeed, the most likely scenario is a capabilities/knowledge plateau.
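The "autophagous loop" quoted above can be illustrated with a toy simulation (my own sketch, not the referenced paper's actual experiment): repeatedly fit a simple Gaussian "model" to samples drawn from the previous generation's fit, with no fresh real data entering the loop. Sampling noise compounds generation after generation, and the estimated spread — a stand-in for the diversity (recall) of the data — drifts toward zero.

```python
import random
import statistics

random.seed(42)

# Generation 0: "real data" drawn from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(25)]

sigmas = []
for generation in range(2000):
    # "Train" a model: estimate mean and spread from the current data.
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    sigmas.append(sigma)
    # The next generation trains ONLY on the previous model's samples;
    # no fresh real data ever enters the loop.
    data = [random.gauss(mu, sigma) for _ in range(25)]

print(f"generation 0 spread:    {sigmas[0]:.3f}")
print(f"generation 1999 spread: {sigmas[-1]:.6f}")
```

The spread tends to collapse because each generation's estimation error is baked into the data the next generation sees — a small-scale analogue of the quality/diversity decay the quote describes. Mixing even a modest batch of fresh real data into each round largely arrests the drift, which is why the plateau scenario above seems plausible.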
Synthetic Data sounds great for physics, mathematics, etc. But what do you do about human historical information? For example, if there's a politically polarized divide about what year the War of 1812 was fought, and there's rampant conflicting AI misinformation about the date, what do you do about this type of information? This can't be Synthetic Data, can it?
I’m a bit more worried about the humans going mad in this world, than AI 🤔
Humans are absolutely superior to AI at going increasingly mad.
Something we can be really proud of, as a species 😁🎉
I loathe AI and I refuse to use it.
Why should MADness be expected to occur slowly, when everything else about LLMs and 'AI' is meant to be exponential?