David Marx (of Ametora fame) draws parallels between the cultural trajectory of polyester in textile manufacturing and today’s rise of generative AI.
When polyester first emerged in textiles, it was a way to make clothes less wrinkly, but over time it came to be associated with clothing that was a bit trashy (I’m paraphrasing). Poly lost its luster as a modern fabric and came to be perceived as cheap and gaudy.
Here’s Marx (highlights his):
I rehash the rise and fall of polyester because I believe it presages what will happen to generative AI art. At the moment, computer scientists are creating software that reduces time-consuming inefficiencies in creating new designs, sounds, and other aesthetic products. Companies are already beginning to use AI-art in their advertising and product design, especially small businesses without design teams. Larger firms are planning to layoff employees or curb hiring in the belief that they can do more with GenAI. While polyester took a few decades to lose its appeal, GenAI is already feeling a bit cheesy.
Anecdotally, this feels right: lots of genAI output, especially generated images, feels bland in a highly specific kind of way. This is likely to change, of course, with every major model update, but right now there are certain aesthetic markers that just give off genAI vibes.
Please indulge me for a moment, because I think it’s important to be clear about the potential for creativity when working with genAI:
One, I don’t believe for a second that AI cannot be used to make “real” art. For well over 100 years (Duchamp’s Readymades, anyone?), we’ve had art that has often been criticized as “anyone can do that!” And we’ve collectively agreed that art doesn’t necessarily need a craft component: Conceptual work and intention can suffice. So: AI can be a tool to create art, no doubt about it.
Two, most art is produced not for galleries but for personal enjoyment, and hence most art is amateur art. Most of it is not good art, and that is totally fine. A small percentage of art is created with the intention of making great art, some successfully, some less so. Bad art is still art. As for good art, by whatever measure of “good”, I’d argue it can very well be made with AI tools if the artist knows what they’re doing, if the concept is interesting, and if the execution works for whatever their context might be.
For example, “Teddy bear in the style of Picasso” is probably going to be trash, unless the artist has some underlying concept that somehow elevates it tremendously. As a personal artifact it might just be a non-art decoration for an invite card, or it might be some sort of amateurish, low-level expression of creativity: Not high art, but not nothing. It’s a scale, and context is everything.
A whole lot of what we all perceive as AI slop online today is neither one nor the other: It’s not high art, and it’s not personal creativity either. It’s mass-produced content churned out with intentions ranging from selling stuff to just flooding the channels. In many cases it’s simply a way to appease the algorithms that demand images or videos before content gets promoted and amplified. And that’s where things get interesting. Back to Marx:
Of course, GenAI is much worse for society than polyester. Synthetic fibers were bad for the environment, but so was the famously diabolical cotton industry. GenAI will first wreck the labor market for design professionals. But moreover, the tools are already being used to undermine the entire information structure of society in assisting the creation of disinformation that looks identical to reality. This, too, will damage its status value. Who wants to wear a T-shirt designed by the same software that powers the fake imagery used in authoritarian propaganda?
And that’s it right there, isn’t it? GenAI is at its most problematic when it undermines our media ecosystem (as I’ve explored here). GenAI floods the zone, filling our digital channels with slop. And what’s worse, we have to assume that AI recommender systems disproportionately elevate and amplify certain types of content, the very types that are likely easier to create with genAI. There’s a good chance we’re headed into a negative loop in which genAI creates slop custom-tailored for recommender systems to go viral, which creates stronger incentives to produce more slop, all the while drowning out human-created and higher-quality content. Which we need, because participation and democracy require us to have good information.
In an alternate universe, we could amplify human-created content algorithmically and filter out the slop, but since the same companies that run the search and social media near-monopolies are also the leaders in genAI, that seems unlikely to happen.
Anyway, I highly recommend reading David Marx’s piece. It’s good!