When I was teaching English in a local language school I used to ask my students the question, ‘Let’s imagine you’re using a learning platform, a website or a resource to study English. How would you feel if you suddenly discovered that the content you were consuming had been created entirely by generative AI?’ It’s a question that has become increasingly salient in recent months, not least because the quality of AI-generated content has improved so dramatically that it is now virtually indistinguishable from content generated by humans.
Interestingly, all my students gave more or less the same answer. It was something along these lines: ‘The moment I find out that what I’ve been reading, watching, or listening to was generated by AI, I immediately stop.’ Students described feeling angry, frustrated, cheated, disappointed and disgusted. The list of negative adjectives goes on.
When I decided that I was going to start a podcasting business and shared my vision with other people, it raised a few eyebrows. I knew what people were thinking and – to some extent – I thought it myself. What earthly point is there in investing thousands of hours writing my own content when I can ask an AI to populate a complete branded network on my behalf, replete with technically perfect lessons, all read by a digitised voice clone that can perfectly replicate my voice with every flaw and quirk?
Almost on a daily basis, I read some scaremongering article about how we’re sleepwalking into complete catastrophe; how the vast majority of jobs currently done by people will soon be fully automated and that, therefore, most white-collar workers will become redundant and lose their jobs. It’s ironic, of course, that the writers of these articles (or warnings) are often software engineers who have been partly responsible for programming the very AIs that will take their jobs.
But I digress.
Let’s return, for a moment, to the reactions of my students. Their negativity about being duped was so strong it felt almost militant, but it gave me a little hope for my own business. You see, I was polling my students because I had a hypothesis about how parts of this situation might play out in the long run. Perhaps it wouldn’t be the total doomsday scenario that these software engineers were envisaging.
Not too long ago, I felt I needed to brush up on my Finnish, which had become rather rusty from lack of speaking and listening. I stumbled on a podcast that looked perfect for my needs and I subscribed. At first it seemed fine: an inoffensive and informative podcast in native-speed Finnish. But as I marched through the episodes a seed of doubt began to grow within me. Something was not quite right. The voice narrating the episodes had all the usual human flaws, but there was something too perfect – almost robotic – about those flaws. Then, there was the diction. Sometimes the voice rushed through sentences and phrases when pauses would have been appropriate. Furthermore, at the end of each episode there was a ‘language focus’ that looked at the ‘key’ words from the episode, but there was something quite strange – baffling even – about the words that were chosen for the focus. They seemed to have been picked entirely at random, and certainly did not represent sensible or rational choices based on the content of the podcast. Most of them were either elementary or very low-frequency words. I started to have a sneaking suspicion that the entire podcast (content and voice) was AI generated. Then, when I was in the middle of a particular episode, the female voice narrating the content suddenly degenerated into gibberish and started reeling off a series of random numbers and code that made absolutely no sense. Not only had the podcast been generated by AI, but whoever had devised the prompts hadn’t even bothered to check the integrity of the content by listening back to it before publishing. I was really furious.
It took me a while to get a handle on my feelings. It’s not often that I get disproportionately angry about something as innocuous as a podcast, but I’ve learned the lesson that when I feel enraged about anything, it’s definitely worth digging into why. That’s a basic application of metacognition. So, I took a step back and analysed the situation rationally and dispassionately. Why did I feel the way I did? Sure, there was the usual feeling of having been cheated by some hidden human grifter looking to capitalise on my gullibility. I think everyone would feel that; I certainly see a lot of anger and frustration in the comments sections beneath sensationalist AI-slop photos of car accidents or natural disasters. What intrigued me was a deeper intangible and elusive feeling behind the anger. I can only describe this feeling as a combination of longing for real human connection and a vague sense of panic that the internet – previously a space inhabited by genuine human voices – was fast becoming a digital wasteland; a place where exponential volumes of content were being pumped out by generative AI and then commented on by AI bots, creating an endless feedback loop of ever-diminishing quality; huge echo chambers inhabited by millions upon millions of bots discussing real-life situations that they had never experienced.
And that was it – I had successfully identified that elusive feeling. It was a fear of losing the opportunity to share the experience of being human with another person. Now I understood why my students were enraged. At the core of our soul we long for connection with other people because, of course, other people have experienced real life. Until now, no other generation in human history has had the privilege of being able fully to understand this because no other generation has been faced with a massive proliferation of content that purports to have experienced life but, in reality, has no understanding of what life really is. It’s both sobering and terrifying when you think about it. Whether we are capable of articulating it or not, we are angry when people foist AI-generated content on us because it is fraudulent. It is fraudulent because it claims human experience when it has no human experience. It’s a deliberate, scurrilous deception.
It’s wrong for us to be angry with AI per se. In my daily scrolling I see a lot of vague anger channelled towards AI and I think it’s misdirected. As yet – to my knowledge – AI has not become fully sentient, nor has it launched some Terminator-like offensive against humanity. As far as I know, AI does not currently have a vendetta against the human race.
I do, however, believe there is a fairly clear dichotomy between, on the one hand, AI that people use in order to assist them in doing things that will generally build the user’s skillset and, on the other, AI that bad actors exploit to generate content that they pass off as having been created by people when it was not. The problem here is the people who harness AI to deceive others. That’s not the fault of the AI; it’s the fault of the people who prompt it. Taking this further, even AI bots that act autonomously, flooding comment sections with fake, agenda-driven or deliberately misleading comments, are only there, and only exist at all, because they have been programmed and initiated by people with bad intentions. We have every right to be furious with the fraudulent bad actors who attempt to trick us into believing that the AI content we’re seeing, hearing and reading is founded on human experience. An extreme example of this is a female writer with the pseudonym ‘Coral Hart’ (anonymous, of course), who boasted that she was making a six-figure salary annually by getting AI to write novels in as little as 45 minutes, which she then published, passing them off as written by humans.
On the other side of the dichotomy is what I would call ‘good’ or ‘beneficial’ use of AI. I wouldn’t describe myself as a particularly technical person and, when I was a child, I used to dream of a time when it might be possible simply to use plain English to ask a computer to help me solve problems and learn new things. That time has now come. Well, almost. When I bought the domain name linguacade.com I decided to host it with a company that offered to create a free website in minutes using its own proprietary AI. It failed spectacularly. I realised that, in order to create a learning platform with its own functioning ecosystem, I would have to roll up my sleeves and start learning some technical things. So, I set about building the site myself, this time with AI as a guide. I asked it dumb question after dumb question about the most basic elements of the WordPress block editor. I imagined it rolling its eyes at the stupidity and repetitiveness of my questions, which it certainly would have done if it were sentient. But – and here’s the thing – at the end of the process, I had actually learned how to build a basic website and would arguably be able to do it again more quickly and without so much hand-holding from AI.
I went through almost exactly the same process in creating a font for the Linguacade brand. Initially, I was wooed by the clickbait ads offering an AI solution to create a font on my behalf in a matter of minutes. Then, when I realised that I would never really own or control the AI-generated font, I set about designing it myself, but this time using AI as a guide. FontForge, the open-source software I used to create my font, is not for the faint of heart and it’s certainly not designed for people without technical expertise. But, by the end of the process and with the endless patience of my AI assistant, I had managed to create and publish a font of my own, which now appears in the podcast cover art and on my site. If I were asked to design another font in the future, I would probably be able to do it with significantly less help from AI.
That’s when I finally understood the fundamental dichotomy in AI use. It is the difference between AI used to develop a prompter’s skillset and AI used by a prompter to deceive for personal gain. In the first case, a user leverages AI for personal betterment. They want to assimilate the knowledge provided by AI so that they improve as a person and do it themselves next time. In the second case, a user exploits AI to deceive others in order to gain some kind of advantage, which is usually financial. In this case there is no learning, no self-improvement, and no added value for anyone in the process.
Getting AI to do something for me when I could, with some guidance, learn how to do it myself is a bit like deciding to go on a hike with a guide only to change your mind at the last minute when you see the challenging terrain ahead and tell the guide, ‘Listen, why don’t you do this on your own? I’ll wait here and have a coffee and a doughnut while you climb those steep hills. Take some photos and when you come back we can just pretend that it was I who did the hike. We’ll photoshop me into a few of the images, make them look like selfies and I can post online about how great the hike was.’ What’s the point of doing that? It completely defeats the object, as anyone who has handed in an AI-generated essay with their name on it will know. But if I decide to do the hike anyway, acknowledging that (a) I’m doing it with a guide and (b) my guide will always be there to help and support me when it gets hilly, then I return from the experience stronger and probably more willing to repeat the experience in the future.
I know that, for many people, AI is the guide they always wished they’d had, and that their experience with it has genuinely helped them learn new skills that they can use in the future. But, unfortunately, it doesn’t seem like that’s the general trend for AI use. Time and again, our online experience shows us that many people have abdicated and handed over all creative control to AI. In other words, a lot of people are choosing to send the guide on the hike, pretend that they did it themselves and go out of their way to lie about it on any platform that will publish the resulting slop.
And the statistics seem to back this up.
There’s a conspiracy theory that started to gain traction about ten years ago. It’s called the dead internet theory. It postulates that, whereas in the past, the vast majority of online content was generated by people and only a tiny proportion was created by bots, now, with the advent of hyperintelligent generative AI, exactly the reverse is true. But not only that: now the relative proportion of human content online is shrinking exponentially. It’s as though the current evolution of the internet reflects one of those videos that presents the sun in our solar system as something enormous and majestic and then zooms out until it becomes an almost-invisible dot alongside red hypergiant stars such as VY Canis Majoris or UY Scuti.
We currently have no way of knowing for certain how much of the internet is a dead wasteland of artificially generated content. According to some reports, we have already reached a tipping point at which approximately 51% of total web traffic is generated by bots. Allegedly, this figure is predicted to rise to 99.9% of all internet traffic in the not-too-distant future. That is to say, it’s entirely possible that human online traffic may account for only 0.1% of all internet activity, and synthetic AI traffic may account for the other 99.9%. By the time our children have grown up, human-generated content may represent the equivalent volume of our sun next to artificial content of such unimaginable volume that it could be likened, in relative size, to the ultramassive black hole TON 618.
So, on the one hand, you have utter frustration from people who want genuine connection with content that tells a human story and is the fruit of personal sacrifice and, on the other, you have the likelihood that, within a few years, virtually all online content will have been generated by bad actors using AI to deceive directly or programming bots to do it autonomously. It’s crystal clear what this means: content created by people will become a very rare commodity. People want human content. Ipso facto, people will not become redundant in every area.
When I say there is ‘hope’, I don’t mean that we will somehow avoid the doomsday scenario of AI taking the vast majority of clerical, administrative and data jobs. But I am saying that there is still a conversation to be had about how much people value something personal, even if it’s flawed and imperfect. Considering the fact that publicly available AI models have only very recently been able to produce something sufficiently close to digitally perfect representations of reality that we can barely tell the difference, it’s all the more surprising how quickly people have grown tired of such representations – and how angry they are when they are duped by them. Again, I must stress that I am talking specifically about those uses of AI that falsely claim to be human experiences or to represent human creativity.
We need to talk more about how we can channel the motivation generated by this anger and frustration into finding ways to encourage and enshrine flawed, meaningful content made by real people who sacrificed their time to create it. Some of my friends tell me that attempting to distinguish fact from fiction in their social media feeds has left them so discombobulated that they have switched off in disgust and now often think twice about accessing social media at all. Good! It’s ironic and somehow satisfying that all these deceptive attempts to monopolise our attention may actually backfire on those who devised them, to the point that swathes of people stop giving them any attention at all.
It is a strange and disruptive time to be a digital creator but at least our remit is 100% clear. Even if the volume of our content turns out to be infinitesimally small in the grand scheme of things, and even if the hike to create it takes hours, we owe it to our fellow human beings to be genuine: to go the extra mile and do it ourselves.
Thank you for listening. You can find the full transcripts for every episode at linguacade.com.
If you want to completely transform your English language expression, join me on Patreon for the Linguacade Deep Dives. In these sessions, I break down the meanings, nuances, applications and etymology of the phrases highlighted in bold throughout the transcripts. Subscribers get full access to the complete masterclass archive across all levels. I look forward to seeing you there.
