Why vocabulary is important when thinking about AI.
I used Stable Diffusion to make my Christmas cards last year. It was – theoretically – a quick way to make something throwaway. I actually spent about six hours pissing about with the inputs before I arrived at some stuff I liked enough, but that’s beside the point.
Cards are probably a decent vector for AI-generated images. They’re there to show you care in a low-investment but simultaneously personal fashion. You can throw them away and nobody cares. That’s why Hallmark exists. Dumb, snackable sentimentality. AI-generated images paired with personalised card services such as Moonpig just democratise the means of production. If you want something with more worth than that, but that isn’t a whole gift, then a letter is probably a better option.
People may also have noticed that I have started to use AI-generated images in this blog. Arguably I could pay an artist to do images for every post, or I could do it myself, but frankly I have limited money and time, and the cost-effectiveness in both of those resources just isn’t justifiable for something that will be absorbed in less than a second, and is less important than the words.
But I absolutely refuse, point blank, to consider AI-generated images to be ‘art’.

Why even use images? Because they’re good algorithm fodder. Because they’re attention-grabbing. The content on this blog is not remotely ‘optimised’ for search engines. If it were, then I’d have about ten different blogs, because I stick my fingers in a lot of disparate pies and SEO rewards micro-specialising your content to fit specific themes. So me moving from babbling about dragons to ranting about the books I’ve been reading is not great for SEO. Suck it. I don’t care. But that doesn’t mean I’m not going to take the low-hanging fruit.
So, what is the problem with conflating AI-generated images with ‘art’?
The conflation of product with value, regardless of origin, is not surprising in a world full of Silicon Valley tech bros who believe value is derived solely from profit. If profit defines value, then low-cost mass output drives higher profit, and higher profit therefore equates to greater value. Loop indefinitely.
One fascinating illustration of this is provided courtesy of Sam Bankman-Fried, the crypto-exchange owner recently convicted of fraud, who furnished us with this pearl of wisdom:
“… the Bayesian priors are pretty damning. About half of the people born since 1600 have been born in the past 100 years, but it gets much worse than that. When Shakespeare wrote almost all of Europeans were busy farming, and very few people attended university; few people were even literate – probably as low as about ten million people. By contrast there are now upwards of a billion literate people in the Western sphere. What are the odds that the greatest writer would have been born in 1564? The Bayesian priors aren’t very favorable.”
So if a piece of art has a low profit margin, then how can it be valuable? Because value does not derive solely from the cost of a thing in pounds and pence. Perhaps we need to divide the concepts of value and worth: value being linked to monetary and trade exchange rates, and worth being linked to non-financial attachments.
For example: I can generate a hundred images in an hour, and with a bit of fiddling and prompt tuning I enjoy the results. But it’s not art. It will never be art. It’s just another product. Anybody who does consider AI-generated content to be ‘art’ is fundamentally ignorant of what ‘art’ is as a concept.
Art has intention or impetus behind it. The output is filtered through the fundamentally irreplicable series of experiences, interpretations, personality, and capability of the person producing the art. Bluntly, art contains, in some capacity, ‘meaning’. There’s a lot of philosophising to be done with regard to the relation of meaning to creation, and what meaning is, and so on and so forth. But that’s not what I’m here for.
You can apply this to books, you can apply it to music, you can apply it to any and all output that requires some manner of creativity. As previously noted – AI language models will, at some point, be able to whole-cloth generate very simplistic stories – Michael Bay action films that you can shove popcorn into your face to for an hour and a half while things blow up. Cool. When an AI can produce Blade Runner, I’ll reconsider my refusal to call anything it produces ‘art’. When AI can write Crime and Punishment, I’ll think about calling it art. I can ask ChatGPT to write me hundreds of thousands of words in the style of Crime and Punishment, but it will never be art, because it has no intent. I don’t actually like Crime and Punishment, but Dostoevsky clearly had intentions to do more than just follow one word with another until he reached a point at which it seemed rational to put a full stop. That’s what makes Crime and Punishment art.
AI fundamentally can’t do subtext, but then Silicon Valley bros don’t understand subtext either. Code cannot make intuitive leaps of the most basic kind. If you’ve ever tried to write a basic script to do a basic task, you’ll be well aware that a seemingly obvious step – one a person would perform without any kind of problem – is completely impossible for a computer if it requires any kind of intuitive or creative thinking. The kind of creative thinking so baseline simplistic that you wouldn’t even consider it to constitute creative thinking.
This is the major difference between “AI” and people – AI cannot intuit. It cannot create. You need to be able to intuit to create. Taking various memories or images or experiences that you know, and pulling something semi-tangible out of them to form something completely new, requires the ability to conceive of something that you don’t already have, know, or necessarily understand.
Which may seem like what Stable Diffusion et al. are doing, but it’s not. If you mess around with these things for about an hour, and you browse other images, you start to see a lot of the same patterns, the same layouts, etc. It’s clearly got things it’s ingesting as a sort of ‘template’ image and then layering other stuff into it – whether intentional or not. It’s a very obvious mechanical process. The result is the creative equivalent of McDonald’s – it’s quick and dumb, and if you just want an image or some text or some music, then AI can totally give you all of that. But it has zero substance, and when you – very quickly – get to the point where you see the cogs churning behind the curtain, you start to understand how limited it is, and how it cannot actually produce anything of value beyond the Silicon Valley heuristic of mass content output-to-dollar ratio.
This is basically the autotune problem all over again. You can’t sing, you refuse to accept that fact, and you get autotune to do the singing for you. T-Pain was famous for this, but T-Pain can actually sing, and so his use of it to produce an entire catalogue of ridiculous songs revolving around autotune can be interpreted as a big satirical slap across the face of a music industry that gave us manufactured artists who didn’t write their own music, had no talent or creative quality, and existed on hype and branding alone. That’s why people look down on the weird boardroom-created “music artists” that come out of Japan. And they should.
AI is effectively a utility. If you need a quick, dirty gap filled for minimal cost, then use it. Absolutely. It doesn’t matter. But don’t sit there and call it art. You are not an artist because you taught an algorithm to recognise coronas and mimic highlighting. It still doesn’t understand what those things are; it’s pixel-matching. Pixel-matching is not art.
Suffice it to say that if I want to write a noir thriller, an AI might be able to mimic the noir style, but it has no ability to put words down with an intent outside of just completing a sentence from a pool of words that it filters out based on the number of times that they’re likely to appear in any text with the meta tag ‘noir’. Because of this, it can only get at the veneer of any creative output.
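To make the “pool of words” point concrete, here’s a toy sketch of that mechanism – a first-order Markov chain over a tiny, hypothetical ‘noir’-tagged corpus. Real language models are vastly more sophisticated (learned probability distributions over tokens, not raw frequency counts), but the principle the paragraph describes is the same: each next word is drawn from observed successors, with no intent behind any of it.

```python
import random

# A tiny hypothetical 'noir'-tagged corpus; real models train on vastly more text.
noir_corpus = (
    "the rain fell on the empty street and the detective lit a cigarette "
    "the night was cold and the city was silent and the rain fell again"
).split()

# Record which words follow which (a first-order Markov chain).
following = {}
for current_word, next_word in zip(noir_corpus, noir_corpus[1:]):
    following.setdefault(current_word, []).append(next_word)

def continue_sentence(start, length=8, seed=0):
    """Pick each next word from the pool of observed successors."""
    random.seed(seed)
    words = [start]
    for _ in range(length):
        pool = following.get(words[-1])
        if not pool:
            break
        words.append(random.choice(pool))
    return " ".join(words)

print(continue_sentence("the"))
```

The output reads vaguely ‘noir’ because every word genuinely appeared in noir text – which is exactly the veneer being described: style without intent.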
Let me make it clear: if you think that a veneer constitutes art, then you are a lamentable excuse for an idiot.
The art is in the subtext, the intention and the intangible stuff.
We need to update the vocabulary we use when referring to AI-generated content, because if we don’t, human-created meaning becomes meaningless: conflated and agglomerated into an indistinct pool of both meaningful and meaningless output. Losing the distinction between the two is not beneficial to anybody apart from coked-up brats in glass towers.
AI is fundamentally incapable of creating meaning. Thus AI cannot create art. Thus AI-generated content is fundamentally not art.
But could you not?
If you are an artist and you’re sick of algorithms stealing your art, just know that the courts aren’t going to help you. However, you can poison your art with a program called ‘Nightshade’. In theory, this adds a distortion layer to an image that is mostly invisible to a human but completely trashes an AI’s ability to make sense of what it’s looking at. For a better explanation, try this video.
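The core idea is worth seeing in miniature. This is not Nightshade’s actual algorithm – that uses optimised, targeted adversarial perturbations aimed at a model’s feature space – but a minimal sketch of the underlying concept: a per-pixel change small enough to be hard for a human to notice, yet fully present in the data a scraper ingests.

```python
import numpy as np

rng = np.random.default_rng(42)

def poison(image, strength=2):
    """Add a small random perturbation, clipped to valid 0-255 pixel values.

    Illustrative only: real poisoning perturbs pixels in a *targeted* way,
    not randomly, so that models misread the image's content.
    """
    noise = rng.integers(-strength, strength + 1, size=image.shape)
    return np.clip(image.astype(int) + noise, 0, 255).astype(np.uint8)

# A stand-in 64x64 RGB 'artwork'.
original = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
poisoned = poison(original)

# The per-pixel change never exceeds `strength` (at most 2 here),
# which is far below what a human eye would register.
print(np.abs(poisoned.astype(int) - original.astype(int)).max())
```

The trick in the real tool is that the perturbation is chosen adversarially rather than randomly, so that these imperceptible changes land exactly where the model’s pattern-matching is most fragile.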