Brace Yourselves: The AI Winter is Coming

AI is here to stay. But it won't revolutionise your workday or help you do more with less. It will get some people fired though. And then it will more or less fade from the collective consciousness.

Sean Bean as Ned Stark in Game of Thrones (photo: HBO)

Artificial intelligence, or AI, is the latest in a long line of technologies coming out of Silicon Valley that have been hyped up to change the world. The internet was supposed to revolutionise how society functioned and then unite and liberate us all. The mobile phone (“the supercomputer in your pocket”) was going to completely change how we live. VR goggles, and later AR glasses, would merge the real with the digital and dominate all kinds of entertainment. Voice recognition and digital assistants would change how we interact with computers forever. Self-driving electric cars would be the future of transport. Everything was going to run on blockchains … I’ve probably forgotten some shorter-lived, supposedly revolutionary tech in this list. The point is: All of this made a couple of Silicon Valley bigwigs the richest men on the planet and made their companies successful beyond what was thought possible before. But none of it has ever really revolutionised how we live. Sure, lots of little things have changed quite drastically, but in the grand, historical scheme of things, our day-to-day lives are remarkably similar to what they were like in 1925, or even 1825.

Today, we are at a point in history where replicating technology from 1968 (which was 57 years ago, by the way) is considered an ambitious, and even historic, achievement.

The latest technology that will change absolutely everything forever — this time for real! — is AI. A misnomer if there ever was one. There is nothing intelligent about AI. It is a marketing term that has, in the past, been used to sell everything from office software to the latest real-time strategy game. Nowadays, what’s branded as AI is a wide range of technologies, from machine learning to neural networks and, the latest fad, large language models. None of it has anything to do with actual intelligence.

What is AI?
At its heart, so-called “artificial intelligence” is a very old method, something that computers are traditionally good at, coupled with a new resource. AI is a bunch of rather dumb algorithms that perform brute-force computations on huge data sets that the tech companies have been harvesting for decades. In many ways, AI is simply fulfilling the promise of “big data”. The large tech companies have been collecting every scrap of information they can for more than twenty years, but until recently, there were only limited ways to use all of this stuff. What the new AI approach brings to the table is the implementation of some rather old statistical algorithms, coupled with advances in parallel computing hardware, to finally make use of all of this data. That specialised computing power now enables us to supercharge this brute-force statistical analysis to generate results that, at first glance, seem rather amazing. That can mean dressing up a simple web search as a human conversation (think ChatGPT), using massive amounts of images scraped from the web and mushed together to generate pictures or video of almost everything imaginable (Midjourney), doing the same with music (Suno), or even specialised applications like slurping up decades of programming Q&As on Stack Exchange to solve relatively simple programming problems (GitHub Copilot). It looks revolutionary, but mostly just delivers on things the industry told us were possible decades ago. And like a good circus performer, it does it with enough panache to make you think you’re seeing something extraordinary when that is really not the case.
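
To make that point concrete, here is a deliberately dumb little sketch in Python (a toy of my own, with a made-up corpus, not how any of the commercial systems are actually built). It constructs a “language model” out of nothing but word-pair counts and then generates text by sampling whatever tends to come next. The big models use neural networks, astronomically more data and specialised hardware, but the underlying principle is the same: predict the next token from statistics over harvested text, with no understanding anywhere in the loop.

    import random
    from collections import Counter, defaultdict

    # A tiny made-up corpus. The real systems use terabytes of scraped text,
    # but the principle of counting what follows what stays the same.
    corpus = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog . the dog chased the cat ."
    ).split()

    # Brute-force statistics: count how often each word follows each other word.
    follow_counts = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        follow_counts[current_word][next_word] += 1

    def generate(start_word, length=12):
        """Generate text by repeatedly sampling a statistically likely next word."""
        word = start_word
        output = [word]
        for _ in range(length):
            counts = follow_counts.get(word)
            if not counts:
                break  # dead end: this word was never followed by anything
            words, weights = zip(*counts.items())
            word = random.choices(words, weights=weights)[0]
            output.append(word)
        return " ".join(output)

    # Prints something like: "the dog sat on the mat . the cat chased the dog"
    print(generate("the"))

Swap the toy corpus for a sizeable chunk of the internet and the word-pair counts for a neural network that looks at much longer contexts, and you get something that sounds eerily fluent while still doing nothing fundamentally different from this.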

The flashy results of these applications of mass statistical analysis, in the case of LLMs cleverly couched in human-seeming interactions, have blinded many users to what this content is at heart: uninspired slop. Yes, it’s incredibly handy if you want to illustrate the Dungeons & Dragons world you’ve created for your friends, and it neatly solves the issue of getting teaser images for blog posts and podcasts if you can’t afford a stock photography subscription. And it can be good for a laugh among friends to create songs about your video game characters, for example. But let’s be honest here: none of this is inspired or real art. Sure, you can save effort at work by having AI write some of your code for you, or have it create that pesky spreadsheet your boss wants so desperately but that you think is a total waste of time. But does that say something about how great a technology AI is? Doesn’t it rather reflect on how many redundant tasks middle management has us slaving over that really aren’t necessary? When people talk about all the jobs that will go away because of AI, what they are talking about are largely jobs that weren’t necessary even before the AI hype came around.

You can’t replace an actual artist with AI, because AI can only mimic what it finds on the internet. If you are creating something sufficiently new to be considered remarkable, you need a human for the job. The same goes for writers. Sure, AI can turn out the ten-thousandth James Patterson-like thriller, or churn out some generic crime novel set in your small town in Idaho nobody has ever heard of. But it won’t write the next series to rival A Song of Ice and Fire or Harry Potter. AI can write a passable mashup of a news story that has already been covered by other outlets, but it can’t do research or actual reporting. It doesn’t have ethics and it can’t have a human gut feeling for what makes a story important to society. You can pass off its writings as journalism, of course, but that’s basically fraud. You can use an AI assistant to bounce ideas off of and have it counsel you on your life’s problems, but be honest: Wouldn’t an actual human with experience in whatever you are trying to do be better? The bottom line is: You can make do with AI in a pinch, if you can’t afford the real thing or you aren’t terribly concerned that the results will be of generally low quality, or at least derivative. But a human who knows what they are doing is a better choice every time. The tragedy of our times is that we aren’t using technology to connect with these humans, but to replace them with a computer that does kind of the same thing, just badly. And cheaper. Well, kind of…

The big promise of the AI companies is that their technology is on the verge of actually becoming intelligent — marketing bullshit with no actual basis in reality that they can point to — and that the real revolution is then about to begin. I have been interested in all kinds of technology, and have tried to be at the forefront of it, since I was six years old in 1989, which means I have heard all of this again and again from people like these. Bill Gates and Steve Ballmer told me Windows would be crash-proof pretty soon. Video game AI would actually become intelligent some time in the 2000s. Steve Jobs promised apps would be revolutionary and not just a flashy but cheap UI over web app functions that suck all the data from your phone, do every calculation in the cloud and have fewer features than programs running on MS-DOS back in the day. VR headsets would be in every home and most video games would use them by default. Amazon’s Alexa was supposed to be intelligent and basically replace your PC and your smartphone, run your complete household, do all your shopping for you and educate your kids. I am still waiting for every single one of these promises to be fulfilled. What exactly makes people think that the AI hype will have a different outcome? In a couple of years, Silicon Valley will have found its next hobby horse, which will be the greatest thing ever, and all of this AI stuff will go by the wayside — like all the other promises have.

Tech-journalism luminary and irreverent Silicon Valley critic John C. Dvorak, who, if anything, has gotten even more sharp-witted with age, said in a recent podcast episode about AI:

This is, to me, the never-ending battle between personal computing and mainframes. This is just an attempt to get us, get the desktop computer, hooked up to something else. So you don’t have any real autonomy. The autonomy is what you want. You want desktop AI! I mean, this is no good!

I completely agree with him on that. Companies have latched on to this supposedly new and revolutionary technology not because it’s so great, but because they see great ways to make money off of it. By blinding users with parlour tricks, they aim to finally extract value from all the data they have been collecting on everyone. And instead of giving users the power of this technology, they are trying, again and again, to dumb their computing devices — and the users themselves — down enough that they accept having to use a service they have no power over in exchange for some flashy, but actually rather mediocre, results. This is the service economy equivalent of using Instagram to market cheaply produced, crappy Chinese products to people. They look great in the ads. But then they take ages to reach you from China, and the actual product barely resembles the flashy ads you saw online. But because you’ve been brainwashed by society into thinking doing this shit is a great idea, you don’t learn from your mistakes and keep buying it.

The good news is this will all come to an end, sooner or later. Sure, people will lose their jobs, but some of that would have happened anyway, with AI just being the excuse. Let’s face it: Middle management is hopelessly bloated in any company you’ve ever seen from the inside with a critical eye. If your job entails pushing figures around in spreadsheets and being in meetings the rest of the day, of course it can be done more efficiently by a computer. If you’re in HR and all you’re doing is using AI to get summaries of résumés that applicants have slapped together with the same AI tools, what the hell is the point of your job? If you’re a writer who can be replaced by AI, you’re simply not a good enough writer. Sorry, but it’s the truth. Could AI write this column? Maybe, but only by mimicking another writer’s style, not with some unique character of its own. And would it have given you the idea to write this column if you didn’t have an inkling of it in the first place? No chance.

The tech industry, too, is now slowly waking up to the fact that AI may not be all it’s cracked up to be. It uses an enormous amount of energy. Last year, data centres in the US used about as much power as all of Thailand. A quarter to almost half of that is estimated to have been used by AI companies. That is enough power for 7.2 million households in the US. The much-feared carbon footprint, or equivalent in nuclear waste, of this notwithstanding, the actual problem for the companies behind the AI boom is that this is very expensive. The big tech companies have invested huge amounts of money in this technology, but have no chance of recouping it from service fees alone. They will have to come up with alternative ways of monetising it, like going the advertising route as Google did with search, which reduced the quality of their service so much that it caused things like ChatGPT to become popular in the first place. Going the advertising route will just perpetuate that cycle. ChatGPT is currently trying to avoid this fate by becoming the new Amazon. But this just seems like trying to delay the inevitable. The problem these companies have is that they are already too big, and the usual Silicon Valley trick of holding out by operating at a loss long enough to get bought by a much bigger company won’t work for them. No, people are now realising that their product is shit and that the technology has no realistic chance of getting much better any time soon. It’s simple enough: The “intelligence” part of the branding is a lie, and they have already sucked up all of the internet, so they have no way of somehow getting better data.

A recent article in the Harvard Business Review gets it about right when it describes the workplace impact of low-quality AI content as “workslop”:

The number of companies with fully AI-led processes nearly doubled last year, while AI use has likewise doubled at work since 2023. Yet a recent report from the MIT Media Lab found that 95% of organizations see no measurable return on their investment in these technologies.

In collaboration with Stanford Social Media Lab, our research team at BetterUp Labs has identified one possible reason: Employees are using AI tools to create low-effort, passable looking work that ends up creating more work for their coworkers. On social media, which is increasingly clogged with low-quality AI-generated posts, this content is often referred to as “AI slop.” In the context of work, we refer to this phenomenon as “workslop.” We define workslop as AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task.

As AI tools become more accessible, workers are increasingly able to quickly produce polished output: well-formatted slides, long, structured reports, seemingly articulate summaries of academic papers by non-experts, and usable code. But while some employees are using this ability to polish good work, others use it to create content that is actually unhelpful, incomplete, or missing crucial context about the project at hand. The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver.

According to our recent, ongoing survey, this is a significant problem. Of 1,150 U.S.-based full-time employees across industries, 40% report having received workslop in the last month. Employees who have encountered workslop estimate that an average of 15.4% of the content they receive at work qualifies. The phenomenon occurs mostly between peers (40%), but workslop is also sent to managers by direct reports (18%). Sixteen percent of the time workslop flows down the ladder, from managers to their teams, or even from higher up than that. Workslop occurs across industries, but we found that professional services and technology are disproportionately impacted.

Cognitive offloading to machines is not a novel concept, nor are anxieties about technology hijacking cognitive capacity. In 2008, for instance, the technology journalist Nicholas Carr published a provocative essay in The Atlantic that asked “Is Google Making Us Stupid?” The prevailing mental model for cognitive offloading—going all the way back to Socrates’ concerns about the alphabet—is that we jettison hard mental work to technologies like Google because it’s easier to, for example, search for something online than to remember it.

Unlike this mental outsourcing to a machine, however, workslop uniquely uses machines to offload cognitive work to another human being. When coworkers receive workslop, they are often required to take on the burden of decoding the content, inferring missed or false context. A cascade of effortful and complex decision-making processes may follow, including rework and uncomfortable exchanges with colleagues.

The most alarming cost may be interpersonal. Low effort, unhelpful AI generated work is having a significant impact on collaboration at work. Approximately half of the people we surveyed viewed colleagues who sent workslop as less creative, capable, and reliable than they did before receiving the output. Forty-two percent saw them as less trustworthy, and 37% saw that colleague as less intelligent.

I don’t see this as alarming. I would think it’s good that these people are identifying the colleagues who do this kind of thing as less creative and less trustworthy. I feel that is an accurate assessment. The people who pick up on that are probably the people you want to keep at your company. AI is certainly changing our lives. But if reports like these are accurate, it’s hardly for the better. And I think they are accurate, because I don’t think you need a study to realise that this is what’s going on. If you look rationally, and without being overly influenced by political or sales propaganda, at where this technology came from, how it works behind the scenes and how the results compare to the alternatives, you could have figured this out by yourself a while ago. And many people did. It’s just that it was almost taboo to talk about it and you’d risk being branded a luddite, or even worse, a boomer (i.e. an old white man), as the new trendy insult would have it.

Funnily enough, hyping “artificial intelligence” is such an old pastime that the phase that’s coming up soon has happened before, at least three times: in the ’60s, the ’70s and the late ’80s. The phenomenon has occurred so often, there is even a generally accepted term for it: the AI winter. So brace yourselves, the next AI winter is coming. And after the industry stayed away from hyping this technology on a large scale for three decades, this one is going to be a doozy. After all the investment sunk into the current generation of AI technology, its massive operational costs and energy footprint, and the unbelievable hype and hyperbole, there will be a very rude awakening for those who think these technologies have anything to do with actual intelligence, or that they will fundamentally change our society. For better or worse, AI is here to stay. It will integrate into everything we do to a lesser or greater extent, like Alexa and the smartphone, and it will also provide the excuse to cut a lot of jobs, but your life will probably be more or less the same afterwards. Some day, future generations will laugh at this as much as we laugh at Microsoft’s vision of the future from 1995 today.

– 30 –