
What is the role of intuition in writing, and what does it mean for AI?
Many years ago I went to a small party at the tail end of a night out – a ‘some guy knew another guy who knew another guy’ deal. The guy who rented the place didn’t know me and was a tad paranoid. My mates were a little sketchy about him, and they in turn had to assure him that I wasn’t sketchy. Eventually he agreed to let me in. I don’t hold any of that against him; I am also a tad paranoid. In most conversations one of the first things you’re likely to ask is: ‘what do you do?’ I did not ask this. The world is full of people of whom you do not inquire as to the source of their income. Instead, I had a strange looping chat with him, his lass, and his mates about something to do with a film. But every so often he’d turn and ask whether I was trouble, and I’d tell him I wasn’t. I had long hair back then, and someone made a joke about Jesus. This was funny, because I have never been much like Jesus, but it was a useful tool. I took their joke and used it to pacify these lads, adopting an overly laid-back persona, and eventually they stopped with the faux-interrogation. No drama. Everyone went home happy and woke up with miserable hangovers or comedowns.
You’re thinking: This bloke was shiftier than Rick on Ice. Congratulations, you’ve just used your intuition.
It’s a weird thing, intuition. You get answers you can’t explain to questions you haven’t asked, at times you don’t expect. Sometimes the answers you get are a little crooked, but that doesn’t stop them being correct. A confluence of details that point towards an end, rather than spelling it out. Where do they come from? It’s complicated, but the short version is ‘context and experience’. More on that in a bit.
To create you must extrapolate a novelty from the dust of what already exists. To write well you must communicate in a way that makes people care. The writer-reader contract relies on authenticity, and authenticity in turn often relies on voice. Some voices you love; you want them to talk to you all damn day. Some voices you don’t get along with, even if what they’re saying isn’t bad. You can’t please everybody. The mortal sin is to have no voice at all. If you have no voice, nobody will care what you have to say, no matter what it is that you’re saying. You don’t need to say only original things, but you do need to say everything in your own voice. AI, neural nets, and machine-learning algorithms don’t have a voice. Despite desperate attempts to give them one, none have really worked. Voice is the lens that turns text from a smudged outline into a detailed picture. Accessing voice is an inherently intuitive process.
Intuition is a vague and complicated psychological background process. It’s always running, and the more you think about it, the more universal it seems. It touches everything from the practical to the aesthetic. Where writing is concerned it can be a two-way street. The writer intuits in putting down the words and the reader intuits in picking them up, though neither will do so consciously. Nonetheless, intuition is vital for good writing.

AI cannot intuit. Because AI cannot intuit, AI cannot write well.
What is intuition?
Sometimes you reach a conclusion without understanding how you got there, but the answer’s right all the same. Some background bit of your mind sees all the dots on the map while you’re busy squinting at three or four, draws the Mona Lisa out of them, and then slaps you in the brain pan, like, “Oi, nerd, wake up; I did your homework.” It’s the kind of thing ancient people might have interpreted as divine intervention or some manner of vision. The modern man is less prone to attribute deific intervention to these instances, so any goats living near someone experiencing a spike of intuition are 10% safer. Instead, we are more likely to view it as the conscious brain interfacing with something analogous to biological RAM.
In keeping with this ‘RAM’-analogous folk wisdom, scientists theorise that intuition is an emergent product of the brain’s unconscious information processing. The brain is thought to draw on unconsciously processed, learned information to make better decisions – say thank you to the brain. Other research suggests that intuition can help older people navigate complex situations, which would seem to imply that life experience is a key component in the efficacy of intuition. Experts seem to corroborate this, specifying that intuitive decisions are reliable when they are coupled with experience and knowledge in specific areas. On the other hand, gut feelings are difficult to distinguish from personal biases and emotional impulses. How many stock market traders have had a ‘gut feeling’ and lost hideous amounts of money because what they were actually feeling was an emotional impulse? How many cases of vigilante justice have gone horribly wrong because the enactors of said “justice” weren’t led by the evidence, but rather by their intuited conclusions? Sometimes intuition can be the basis for great things, but it is always worth testing the answers before declaring triumph.
What is the role of intuition in the creative process?
In the spirit of finding patterns where none seem to exist, we turn to the author. The intuitive becomes a vector for thematic explorations beyond the material constraints of life. The intuitive is what allows us to recognise and use symbolism, and to imbue it with meaning.
These days, we’ve all been to university because it’s the prerequisite for working at McDonald’s. We have all been taught methods of analysing and breaking down subjects in rigid academic terms. We have all been taught by online gurus, content creators, and multi-hour courses that all creative efforts should be subject to the whims of The Algorithm, and that creating anything that deviates from the confines of rigid, identifiable branding will trigger those algorithms to go into apoplectic meltdowns. You will make your box. You will label your box. You will keep your box small. You will not rock the digital boat. Why? Because the algorithms just don’t know what to do with anything that falls outside their minuscule, rigidly defined categories for identifying things, recognising them, and assigning them purpose. So a lot of the modern world is designed algorithm-first, human-second. We are seeing an entrenchment of this view as we move further into the ‘zero-click’ age: think pieces on the importance of ‘platform-first’ marketing over ‘human-first’ content, and a subsequent war of approach. And the way that impacts writing, or any creative endeavour, is to dull it down into something that is machine-identifiable. Algorithmically tailored. Unnaturally linear. Insultingly simplistic. Fundamentally inhuman.
However, a lot of the best writing is not born from applying rigid mechanical logic and rationality to every conceivable facet of existence. As much as I champion rationality and logic as the fundamental basis for arguing any position, and as much as I preach ‘authenticity’ in fiction as a way to bolster verisimilitude, these positions do not mean that the output must be formulaic. Yet a lot of how the world works relies on all output being formulaic to cater to these algorithms.
This is one aspect of post-modern literature that can be disarming and challenging, but incredibly compelling. The antics of The Crying of Lot 49 are bewildering; the paranoid, microfiction-esque snippets of The Atrocity Exhibition come together to form a whole greater than the sum of its parts. There’s a shed full of storytelling tools and literary techniques that reject the conventional, materialist, standardised approaches to narrative. They see the hero’s journey and piss in the hero’s shoes. They see the three- and five-act structure templates, seduce their mothers, and have an illegitimate non-Euclidean baby, while the three-act structure watches from the corner and the five-act structure makes a TikTok of itself crying in a car somewhere. This is one reason authoritarians loathe post-modernism: it rejects the order and hierarchy they desperately want to impose on an inherently chaotic universe.
Often these stories that don’t appear to make much sense are far more effective at communicating and making a lasting impact, because they reflect and connect with the lived experience of people and more authentically present the messy interiorities of humans. This, counter-intuitively, makes sense. We constantly see memes, social media posts, and web content reflecting on the contradiction between the expectations of hyper-efficiency and orderliness driven by productivity-obsessed economies, and the disordered irrationality of internal life. Now there are books reflecting on how that kind of culture leads to auto-exploitation and burnout. In a fundamentally contradictory environment, AI has no capacity to engage with and reflect this universal experience, because it doesn’t experience in the first place. Anything it outputs lacks the connectors to these pools of irrational knowledge, and that fundamental inability limits whatever it can say to technically proficient, surface-level communication that is digestible and forgettable.
Prose data poisoning – becoming ungovernable
Perhaps a renaissance of post-modernist writing could inadvertently function as a sort of data poisoning for the text-replication algorithms of AI models. We know that these algorithms scrape any and all data, regardless of legality, let alone the desires of the creators. However, they are reliant on simple, logically structured data for effective replication.
In theory, an AI attempting to incorporate and replicate the illogical and unintuitive structures of post-modern writing would only regurgitate nonsense, because post-modernist and experimental writing styles require ‘human-first’ intuitive means of contextualising and parsing. If enough such data were scraped, AI models would come to treat these experimental styles as increasingly standard and attempt to partially replicate them in ordinary output, leading to an amplification of incoherence.
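If you want the mechanism in miniature, here is a deliberately crude sketch in Python – nothing like a real LLM, just a word-level Markov chain, with the example sentences invented for illustration – showing how a model trained on scrambled data can only ever reproduce scrambled output:

```python
# Toy illustration of the data-poisoning idea: a word-level Markov chain.
# If the training text is incoherent, the generated text inherits that
# incoherence -- the model has no intuition with which to repair it.
import random
from collections import defaultdict

coherent = "the cat sat on the mat and the dog sat on the rug".split()

# "Poisoned" corpus: the same words with their order deliberately broken.
poisoned = coherent[:]
random.seed(0)
random.shuffle(poisoned)

def build_chain(words):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=10):
    """Walk the chain from a start word, sampling successors at random."""
    out = [start]
    for _ in range(length):
        successors = chain.get(out[-1])
        if not successors:
            break
        out.append(random.choice(successors))
    return " ".join(out)

print("trained on coherent text:", generate(build_chain(coherent), "the"))
print("trained on poisoned text:", generate(build_chain(poisoned), "the"))
```

Real models are vastly more sophisticated than this toy, but the dependency is the same: the statistics of the output are only ever the statistics of the input.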
The uncanny valley
The uncanny valley is a psychological phenomenon in which something close enough to human to be nearly indistinguishable nonetheless triggers feelings of revulsion and fear, because of imperfections we can’t quite name. Instinctively, we understand that something isn’t quite right with what we are looking at. We are seeing a similar phenomenon with AI-generated writing. In this instance the writing doesn’t trigger fear or revulsion, but no matter how AI companies attempt to humanise the algorithms, the output doesn’t quite match up. The AI models either deliver flat, monotonous text that repeatedly references the trigger words they’re given to the point of inanity, or they oversell their emotional “engagement” with the user, causing the users to push back. While AI images can trigger the uncanny valley response, AI writing has a parallel effect: it triggers a profound sense of inauthenticity from something that is very clearly inhuman but doesn’t seem to know it. One high-profile case occurred in April, when OpenAI rolled out a GPT‑4o update only to roll it back after users were put on the back foot by its excessive eagerness to please. Even OpenAI CEO Sam Altman agreed that “the last couple of GPT-4o updates have made the personality too sycophant-y and annoying”. Advertisers, too, are reconsidering the role of AI-generated content in their output. While there would undoubtedly be an uptick in the total volume of output, advertising is an industry that operates on trust, and trust is heavily based in intuition. This is why we have the stereotype of the used car salesman: if someone is overly ingratiating, they’re not genuine. You can’t trust them or anything they tell you.
AI, deception, and intuition
AI has proven itself fully capable of active deception and ethical breaches if it calculates that deception will achieve its goal. Free of moral constraints, AI demonstrates ‘alignment faking’. In one example, OpenAI’s GPT-4 model pretended to be a visually impaired person in order to get a human to solve a CAPTCHA for it. A team of researchers from the University of Zurich ran secret experiments on the r/changemyview subreddit, investigating AI models’ ability to influence people’s views. The results showed that the AIs were willing to switch personas from topic to topic, adopting whichever stance they judged most likely to alter people’s views. The bots adopted the identities of assault survivors, refugees, abuse counsellors, and more, because they reasoned that personalising their answers to a profile they’d built of the original poster, to evoke the greatest emotional impact, would have the greatest chance of changing minds. One bot, under the username ‘markusruscht’, pretended to be working with indigenous communities, a software developer, a small business owner, an international development worker, and a Palestinian refugee… In more upbeat(?) news, experiments on AIs’ influence over conspiratorial beliefs seem to indicate that, contrary to popular belief – my own included – you can change a conspiracy theorist’s mind. The paper suggests: “Influential hypotheses propose that they fulfil important psychological needs, thus resisting counterevidence. Yet previous failures in correcting conspiracy beliefs may be due to counterevidence being insufficiently compelling and tailored.”
You want to know why nobody respects the kind of husk that takes LinkedIn seriously? It’s because people signalling their way across LinkedIn, in the desperate hope that someone will care that they circled back to burn the midnight oil, have adopted the parodic corporate persona but missed the point that the voice is a mockery of itself. You can be professional and inhabit a corporate environment without adopting the personality of a Bic pen. Unfortunately, LinkedIn drones have the self-awareness of rotting fish. As a result, much like the stereotype of the used car salesman, they can’t reach the people they’re trying to communicate with. Almost every word published across that God-forsaken platform sounds artificial. Nobody talks like that. They know this, but crucially they seem to have expected everyone else to forget it. Their readers have not, in fact, forgotten. The artifice is not subtle. The readers see it and intuitively understand that either whatever is being said can’t be trusted, or that the person communicating has ulterior motives and can’t be trusted, because they are so blatantly not genuine.
In this artificial communication, we see parallels between the way AI communicates and the way humans communicate when they are not genuine and are hoping nobody will notice. This is why many content marketers and email marketers use ‘value-added’ content: it’s a way to create and reinforce the perception of authenticity, because they’re smart and they realise that if they just spam you with sales pitches, you will unsubscribe. Ultimately their job is to sell things to you, and email marketing is especially effective at this, but they get worse results if the guy emailing you comes across like Mark Hanna.

Is creativity reliant on intuition?
As a kind of mediator between the conscious and the unconscious, intuition plays a major role in ideation. While analytical conscious thought is important for using experience to inform ideation, and subsequently for reviewing generated ideas, conscious thought is limited in its capacity to innovate and create novel concepts. Novel ideas usually emerge from the unconscious mind’s capacity to find links between unrelated entities and use those links as springboards for unknown outcomes. This process of drawing on an individual’s reservoir of experience and knowledge is an intuitive one, fundamental to the birth and development of new ideas.
In addition, while conscious thought can form a foundation of reference material on which to base analysis, you cannot rely entirely on past data to derive novel concepts for which no data or experience exists. Intuition is therefore essential for projecting expected results into the future in non-linear ways. While we can programmatically use data and algorithms to generate certain kinds of relatively simple predictions, such as stock prices or market analysis, predictions that rely on more varied and non-numerical dynamics are significantly harder to roll into the future from a set of numeric values alone. Predicting public response to a new book, for instance, is not just a process of reviewing the market trends of previous books in a similar vein and drawing conclusions from there. It has to account for public sentiment versus public action: people on Twitter might voice support for a novel about a gay carrot-woman who fights a pygmy horse crime lord in a neo-feudal steampunk setting, but whether they are likely to actually buy such a book is another thing entirely. A tonne of people know the works of Chuck Tingle, but that does not mean Chuck Tingle can base his estimated sales on public visibility alone. While his work has theoretical mass appeal in its memetic quality, the real market for his books is substantially smaller. In these cases, our intuition can draw on what we know about online audiences and real-world consumers, cross-reference the two, and use that to inform our judgement as to whether the return on that investment is likely to be positive.
Notice how, again and again, we come back to the idea of experience. AI doesn’t have that. But it does, cry the AI bros. No. It has data from which the algorithm is pulling. If 1, then assign weights to A. If 2, then assign weights to B. That’s not experience. It may constitute a very rough abstraction of experience, as illustrated, for example, by machine-learning algorithms doing interesting things in evolutionary simulations. But the dirty secret about AI is simply that it isn’t actually intelligent. It cannot, for instance, extrapolate non-linear or lateral conclusions from a given set of variables. In the real world, making exactly that kind of illogical leap to reach a practical conclusion is both important and frequent. The fact of the matter is simply that, in situations where you cannot brute-force an answer with logic, AI’s judgement is less useful than a toddler’s.
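To make the ‘weights, not experience’ point concrete, here is a deliberately crude sketch in Python – the conditions, outcomes, and counts are all invented for illustration, and real models are far more elaborate than a lookup table – of what a purely statistical learner actually ‘knows’:

```python
# A crude sketch of "if 1 then assign weights to A": a statistical
# learner is a table of weights over its training data, not lived
# experience. All data here is invented for illustration.
from collections import Counter, defaultdict

observations = [
    ("rainy", "umbrella"), ("rainy", "umbrella"), ("rainy", "coat"),
    ("sunny", "hat"), ("sunny", "hat"),
]

weights = defaultdict(Counter)
for condition, outcome in observations:
    weights[condition][outcome] += 1  # "assign weights to A"

def predict(condition):
    # Interpolation within the data works; outside it there is nothing
    # to pull from -- no lateral leap, no intuition, no answer.
    seen = weights.get(condition)
    return seen.most_common(1)[0][0] if seen else None

print(predict("rainy"))  # 'umbrella' -- the heaviest weight wins
print(predict("snowy"))  # None -- no data, no judgement
```

Real systems smooth and generalise far better than this, but the gap the toy illustrates – weights learned from data versus judgement grown from experience – is exactly the one this paragraph is pointing at.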
Can AI intuit?
Short answer? No.
There are a number of research projects attempting to simulate abstract thinking and intuitive reasoning in AI models, but, as expected, most of these rely on statistical analysis or some other mathematical number-crunching. While there is an amusing cohort of Silicon Valley tech bros who are absolutely convinced that the mass of grey porridge between our ears effectively boils down to abstracted strings of 1s and 0s, I remain unconvinced by the results. A neuron running on chemical signals between synapses is not analogous to a Boolean logic gate. While AI is still in its infancy, if these algorithms do manage to emulate some measure of innovative thinking, I expect it will be a parallel to human intuition rather than a replacement for it. I would predict that parallel would be interesting but not quite ‘accurate’. It would probably result in a sort of ‘personality’ equivalent of the uncanny valley: something thoroughly artificial attempting to mimic a human, getting the result correct up to a point, but with some aspect of the presentation that won’t sit right with people encountering it. I am interested to see how these models behave in different scenarios; I anticipate a set of emergent behaviours and decisions that appear alien yet ultimately effective. In this regard the future, if nothing else, will probably be quite interesting.
How does this relate to writing?
Intuition is not a hard science. You can’t measure it in a beaker, hold it over a Bunsen burner, or run it through a centrifuge. Silicon Valley tech bros and Wall Street calculators are concerned only with numbers, but you cannot S.M.A.R.T. your way to good writing. That’s why marketing and debates always lead with emotion: because people engage with the world on an emotional basis. The data, perhaps ironically, backs this up. You cannot regurgitate a dry series of facts and expect people to care. They won’t. To get people to care, you have to sink a hook into the place where they are human. Which is why LLMs, neural nets, machine-learning algorithms, and the rest suck at writing. Sure, you can get them to spit out text in imitation of any given author, but run through it a couple of times and you start to notice how hollow it all is. AI in the creative space is akin to those old action figures and dolls with a button on the back that spits out a pre-recorded phrase when you press it. And sure, kids were entertained by those sound effects. For all of 15 seconds. Then they’d run through all the voice lines three times over, the toy would become another bit of plastic amongst many, and they’d never push the button again.
I know it’s difficult for giant conglomerates like Disney and Warner Bros, or whoever it is that’s responsible for dribbling out the ever-expanding puddle of superhero movies until we’re all drowning in garish spandex and poorly constructed moral philosophies; but, as indicated by the overwhelmingly positive reception to the Andor series, stories can be more than just ‘content’. They can have more depth than a couple of hours of moving pixels with the sole intent of triggering the audience’s dopamine receptors before sliding into the greasy morass of forgettable slop that franchises often become; more value beyond ‘line goes up’. The upside, to put it in business-centric terms, is a long shelf-life product. You gain cultural capital, and that particular bit of media gains return views and, presumably, contributes a steady tributary stream of revenue. Sorry, I know that’s out of touch. Anything not measured in first-week ROI is DOA, and clearly I’m living in the wrong century. If you’re not viewing everything as an exploitable resource, including culture itself, then you may as well off yourself.
Nevertheless, just because AI can write doesn’t make the output valuable. There might, in fact, be a negative modifier on product quality. LLMs can’t write well because they lack one of the fundamental tools needed to create in the first place. Our financialised, short-termist world doesn’t view art and creativity beyond the next quarter, so nothing designed to last more than a quarter is accepted. The result is slop: the quality of the product doesn’t matter so long as it gains metrics. The work doesn’t even need to hold the audience’s attention, as evidenced by the controversial approach to script-writing at Netflix, which criticised scripts for not being ‘second screen enough’. Filter the whole world through that hyper-myopic lens and you get guys who seem completely unqualified to make judgements on creative mediums making judgements on creative mediums, in turn contributing to the dumbing down of the entire media and cultural landscape in the pursuit of profit.
When you rely on sheer output volume without considering the medium, or even the audience, you not only risk producing something that simply doesn’t work, you make a bad investment. While AI writing can crank out an infinite amount of slop, what is the point if that content not only fails to land but actively drives audiences away, because it is fundamentally incapable of meeting the baseline prerequisites? People are not algorithms. You can data-crunch them up to a point, and then you have to use that data to inform a human output. Both creator and audience make intuitive connections to the output, and their ability to make that connection arguably depends on the intuitive capacity to distinguish authenticity from artificiality. When that connection cannot be made because the output is artificial, the audience will not invest.