AI is a tool. Not a replacement.
- Sam Schofield
AI has been a hot topic for a while now. ChatGPT burst onto the scene at the back end of 2022 with great fanfare.
Beyond a few headlines, the first I heard of this game-changing technology was from a family friend. He was using the bot to create bedtime stories for his two small children. He'd ask the kids for a few plot points - is the story about a knight on a mission to slay an evil dragon, a young explorer rescuing his teddy from the wilds of the back garden, or a princess's tea party - and in seconds it would magically write a beguiling story to a specified length.
There were immediate concerns that ChatGPT and its ilk were more than just a tool for parents tired of reading the same bedtime story every night. Professionals who write for their supper (copywriters, PRs, authors, journalists, even coders) were seemingly in the immediate firing line. This bot could do their jobs faster, cheaper, better. If not now, then in the very near future, months or a year at most.
However, severe limitations were soon discovered, expectations tempered, and progress slowed. ChatGPT and others got a reputation for making things up. Sheer, bald-faced lies. Statistics plucked from thin air. Supposed "facts" conjured on the spot. When caught in a lie, the bot would merely apologise and make up another lie, all the while pushing it with utmost confidence.
Not that this obvious issue saved everyone. Stories emerged of the technology being incorporated by news organisations and businesses to cut corners... and jobs. AI output is now prevalent across the web and, most blatantly, social media. Most responsible organisations have humans checking the output, making its usefulness questionable, but concerns about accuracy remain to this day, almost three years since ChatGPT's introduction.

In Cision's 2025 State of the Media report, 3,000 journalists were quizzed on a variety of topics, one being their concerns over AI-generated press releases and pitches from PR professionals. 72% said they worry about potential factual errors. One was quoted: "I try it from time to time but its inaccuracy concerns me too much." Another said: "I use it to help summarise some research, but have found it to be mind-bogglingly inaccurate on multiple occasions, requiring further research, making it counterproductive."
The factual errors inherent in AI content are not the only issue, of course. To the trained eye, AI-produced content is obvious almost immediately. Left to its own devices, with little in the way of rewrites and edits from a competent copywriter, it produces banal, unimaginative, regurgitated, plagiarised, formulaic drivel - completely lacking in authenticity or creativity. Its hallmarks are obvious, and there is accompanying reputational damage for those who rely on it heavily.
If a company doesn't value its customers enough to take the time to produce well-thought-out, meticulously crafted, authentic content, why should those customers spend their time reading AI-generated output they can find on a million other websites or get straight from the AI horse's mouth themselves? If you don't value your customers, they won't value your business or services.
That is why professional copywriters and PRs are essential. ChatGPT and other AIs have their uses, of course. There is no denying it's a remarkable technology, and I've previously talked about how it can and should be used as a tool by professionals.
Beyond the quality of written output, tools like ChatGPT need professional human oversight to avoid inherent risks and reputational damage. Much like the advent of search engines before them, they are great for ideation and elements of research (provided the output is rigorously cross-referenced with trustworthy sources), but greater reliance can lead to damaged trust and, ultimately, a negative impact on results.
There is more to writing than just "content". It's strategy, positioning, risk management, and brand protection. It's about connecting with your customers, human beings, in a way that sounds human. It's about originality. It's about thought leadership, not thought mimicry. It's about being a brand that stands out, not a faded carbon copy.
AI can rehash content it read elsewhere and serve it up as original thought, but it will never truly be original. AI can't pitch an idea to a journalist, pre-empt a nuanced reputational risk in a line of copy, spot a strategic opportunity in an offhand comment during a Teams meeting, offer an experienced opinion on a contentious topic, or put itself in your customers' shoes while weighing the implications for your business.
But don't just take my word for it. There are numerous examples of businesses now frantically backtracking on widespread AI adoption. "Over half of UK businesses who replaced workers with AI regret their decision," according to an article on TechRadar. Swedish fintech giant Klarna is one of the most high-profile proponents of AI in business, having made 700 redundancies in favour of OpenAI solutions, according to Vice.com. The company made $10m in savings by slashing its customer service and marketing departments - and now it's rehiring humans after a disastrous drop in quality.
Here at Schofield Communications, we are not averse to AI tools (for example, as part of the aforementioned ideation process), as we believe in using everything at our disposal to get results for clients. It's exceedingly difficult to avoid AI in its entirety these days anyway, as integration continues apace, but the vast majority of what we do is still human-led, human-produced, human-researched, and human-written. That’s the part that builds trust with our clients and their customers - and drives the desired results.
AI is a tool and, like any tool, it's only as effective as its operator.