“If only he had used InferenceCloud, it wouldn’t have been garbage”
First, read this article from The Atlantic, “At Least Two Newspapers Syndicated AI Garbage” (or have ChatGPT summarize it for you), and then come on back for some analysis.
AI is here to stay. Deal with it.
The first main challenge – which is not unique to the article but is well represented by it – is that much of the media landscape treats AI as anathema, as if the world would somehow be a better place without it. I guess the media had similar reactions to cars replacing horses, nuclear power replacing coal, or even guns replacing arrows. But technology goes in one direction – up. And if it doesn’t go up, that’s the definition of an apocalypse. I learned that from virtually every TV show about the apocalypse since the 80s.
All that to say: AI is here to stay. We need to learn to live with it. Unless you’re actively rooting for an apocalypse. In that case you can stop reading.
The real question is: what does the future of AI look like? Within that framework, one of the lessons to be learned from the Atlantic article is that we need to ensure AI output is governed by humans, factual, and high quality.
The problem with using generic tools to solve specific problems
ChatGPT, Google Gemini, Claude and other such “generically” powerful AI tools are all amazing resources – and if you’re not using them on a daily basis you’re really missing out on some superpowers.
Just in the last few days I had ChatGPT research how to build a soundproof fence, generate an award-winning smoked brisket recipe, answer a truly obscure question about the central pit in Roman-era arenas, translate a long and complicated German legal document to English and then output it as a PDF, generate a timeline of Michelangelo’s work after the Sistine Chapel, and help me determine the exact amount of mass needed to overcome buoyancy for a specific weight of wood and plastic. That research would have taken me ages on my own.
Awesome. But. All such generic AI tools share one big weakness – as evidenced in the Atlantic article. Using them to solve specific problems doesn’t always work well, because humans are largely out of the loop and AI hallucinations are a fact. In addition, generic AI tools offer generic solutions, meaning all AI-generated content starts to sound the same. This has come to be known as AI slop.
Generic AI solutions work much like a hive mind. Hive minds are great when you want to integrate and analyze all data known to humans, but when it comes to human conversations they are a terrible solution. Humans are independent thinkers, and human conversations are as individual as each human engaged in them. If you want to use AI to communicate with humans and retain some individuality in that communication, you need a different solution.
The risk to brands and individuals is real
There is huge risk to both brands and individuals in using generic AI tools to solve specific problems. It’s great until the audience starts to realize some of it is completely made up, or that your content sounds so generic as to make it pointless – just another brick in the muddy, sloppy wall of content. It’s great until you get labeled as an AI slop factory.
I’m sure the writer at the center of the Atlantic article was just trying to do his job (his SECOND job, by the way), happy that he was scraping by as a content creator, thrilled he had established a relationship with one of the largest syndicators in the US. He had found a system that seemed to work flawlessly: do the work of 10 people by supplementing his own research and writing with ChatGPT’s. But perhaps he got lulled by his own success, assuming the system really was flawless – until it wasn’t. Unfortunately for him, his world has just come crashing down. As a former content creator who, a long time ago in a galaxy far, far away, was scraping by, I feel for him.
But the reality is this is the stage all AI-generated content is currently at. Almost all brands and a huge portion of individuals trying to make it work in the media and communications landscape are using AI already, whether they say it or not. And if you’re not using it, you should be. It’s quickly becoming a fundamental requirement to survive in our industry.
Along with this shift comes the onslaught of AI slop. If nothing changes in the current trajectory, almost all content you read online will become essentially pointless and homogenous. From a messaging and communications standpoint, brands will become undifferentiated from other brands in the same segment. And then what’s the point of communicating at all?
On top of that, once audiences start to realize what’s going on, it will largely be too late to fix the damage done to your image and trustworthiness. We need to do something now. If you are a brand or an individual working in content creation, YOU need to do something now.
Building your brand without AI slop
The real problem isn’t AI per se, it’s the AI tools most people are using. Moving beyond generic tools to solve specific business problems is the first, and largely the ONLY, step you need to take. It’s that simple, really. Purpose-built AI tools aimed at providing concrete business value to solve specific problems are the future.
In other words, for communications professionals and brands: stop using generic AI tools for communication that is meant to be individual.
When done right, these purpose-built tools remove the risk of AI hallucinations, properly source materials, keep the human in charge, are traceable, offer much higher quality for a specific type of output, ensure consistency of output and deliver unique-to-you content.
I guess it should be obvious, but InferenceCloud is just such a tool, aimed at media, marketing and communications professionals. It works by maintaining a source material library cultivated by you and your team, and using that as the foundation for your talking points and messaging. It combines that foundation with externally sourced talking points and builds communications strategies – created by our AI but guided by you – to achieve specific objectives with target audiences. It analyzes billions of points of data to map all human conversation on the internet, making connections and inferences to objectively decide which topics and strategies will most likely resonate with your target audience. Finally, it builds content and content plans loaded with your own trusted talking points and well-sourced materials, and delivers them in formats that are engineered to connect with your audience and can be used in a wide variety of media channels.
And all that takes just minutes. Is it more effort than asking ChatGPT to craft an article on “Summer Fun”? Yes. But it’s orders of magnitude less effort than doing all that yourself, and the result is that you don’t get labeled as AI slop. The result is that your reputation remains intact. The result is that you can remain competitive and differentiated in a world that has suddenly developed superpowers.
So, if that sounds like a future you want to live in as a marketing and communications professional, please drop us a line, watch a short video of how InferenceCloud works, book a demo, and see how simple the transition to superpowers can be – without descending into a world of AI slop.