Why? Because they were often written – or edited – by AI.
Since ChatGPT was released, the length of LinkedIn posts has increased by 107%, according to a study by Originality.ai that came out late last year. Which does not surprise me one bit.
In contrast, an experienced editor usually cuts whatever they’re working on: a basic sub-editing rule is that almost any first draft can be halved without losing any meaning.
(Having said that, SEO consultants used to say longer website content was better because it added more keywords to your page – but now they’re saying AI search prefers shorter content, with the word of the moment being “chunks”. I might make this the topic of a future post – but for now I’ll simply say you should give the reader what they need and nothing more, which is also what Google recommends.)
But I digress.
Without rhyme or reason, AI tends to make content too long when left to its own devices.
That’s because it has no judgement. Generative AI systems such as ChatGPT and Claude use large language models – and these are not capable of critical thinking. If they were, humans would probably be extinct by now.
Instead of thinking, LLMs string words together in recognised patterns, not knowing what to keep, what to cut, and when to stop.
For example, I asked ChatGPT to tighten an early draft of this article. In all fairness, it did improve half of one sentence – but I had to ignore the rest of what it created because it didn’t sound like me. Or human, for that matter. It also changed some words and phrases without realising that meaning was lost in the process, and some of its changes – while sounding okay at first – didn’t make logical sense once I took an extra moment to think about them.
I can imagine people arguing that I needed a better prompt, or that AI improves the more you train it on your own writing. But AI’s pattern matching is not thinking, no matter how much you train it. I suspect people keep forgetting that because it gives the illusion of intelligence. And you need to think, reason and understand in order to know what to put in and what to leave out of each sentence, let alone an entire article.
There are several ways you can use AI as an editing tool, but that only works if you already know how to write and edit well in the first place – and only if you use it in limited ways, such as finding simpler words and checking for grammatical issues. Even then, I find that on average only one out of every 10 “mistakes” it spots is an actual mistake – so you need to be knowledgeable and confident enough to make your own judgement calls about what it suggests.
For example, earlier in this article I used the phrase “without rhyme or reason”. AI said that was wrong, suggesting I replace it with “Not out of judgement, but out of pattern continuation”.
I rest my case.
People keep saying it’s okay to use AI because they review what it creates afterwards, or that it’s their initial idea that matters and not who (or what) expresses it – but that’s not how good writing works. You think as you write and you write as you think. You can’t outsource half that equation to a technology that can’t think.
That’s why half the points in this article only came about after I started writing. If I had simply fed my initial story ideas into AI, I wouldn’t have thought of them.
Quite frankly, the only time you should use AI to write is to create something simple and unoriginal that’s already been written before – such as step-by-step instructions on how to upload a video to YouTube. Or if you don’t mind feeding it so many instructions that it spits out generic copy that feels like it came from a template. And I might write a future post on what I think about most templated content.
But this article was originally about the hazards of long articles, and I’ve just written one myself. So I’ll stop here. It’s not the nice and neat conclusion AI likes to add, and that works just fine for me.