No AI — Time to Take a Stand

I will be removing all AI-generated images from my website. My writing, as has always been the case, will not utilise any AI tools.

Eule auf schmucklosem Baum (Owl on a Bare Tree), painting by Caspar David Friedrich (1834)

No writing I have ever published has been produced with the help of any artificial intelligence tools — no neural networks, machine learning, large language models, nor any other algorithms that perform mass statistical analysis or the like. I don’t use AI technology to write texts, or parts of texts, nor do I use it to give me inspiration for things I write.

In the past, I have experimented with AI. I am a tech journalist and need to stay current on new technologies, after all. I have used AI to translate texts from English to German (or vice versa), for handwriting recognition and to generate images to illustrate articles. I will continue to experiment with AI tools, but I have decided I will not publish any of this content on my website anymore — with the possible exception of articles where I report on artificial intelligence and need to illustrate something I am writing about.

As of December 2025, there will be no AI content published on this website.

I am working to replace most of the AI-generated images on this site with drawings or photos I have created myself. Additionally, I will continue to use photos and graphics from sources that specifically prohibit AI-generated content. Obviously, replacing even the limited number of AI-generated images that are on the blog as of this writing will take a while. But why am I doing this?

Well, the technologies collectively referred to these days under the moniker “artificial intelligence” (or AI) are fascinating. But they are not, contrary to what Silicon Valley salespeople would have you believe, new or even particularly revolutionary. What we call AI today is simply the application of decades-old algorithms and programming techniques to the mountains of “big data”, vast collections of information, that companies have amassed in the last twenty years. Hitherto, this mass of information — on our habits, our private lives, our creative endeavours and also on the world around us — had been largely unusable. Mass statistical analysis of this data by relatively dumb algorithms now enables the “wonders” AI companies have been hyping up in recent years.

Some of the things created with AI are useful, a lot of it is simply silly and some of it is either hilariously or horribly bad, depending on the context it is used in. I begrudge nobody the use of this technology. And I am under no illusion whatsoever that it can be stopped. AI is here to stay. I simply wish that people made an effort to understand even the most rudimentary basics of how this technology works.

Artificial “intelligence” is not intelligent. Most crucially, it utterly lacks one of the most important features of human intelligence: creativity. It is also not magic and should not be portrayed as a revolutionary paradigm shift in technology that is almost impossible to understand. AI can produce astonishing results, but their genesis can be well understood by learning how the algorithms actually work. So can the limits of the technology, whose failures are often mischaracterised as “hallucinations” — probably to intentionally downplay the very real hard limitations of the technology as “bugs” that can be easily fixed.

The lack of creativity of AI cannot be fixed — at least not until someone comes up with a radically different technology that replaces what we understand by the term today. And it is precisely this lack of creativity, and also the lie that AI companies propagate to the contrary, that I object to. Because it is dangerous. It means that what AI companies are essentially doing is leeching off products of human ingenuity and creativity that are freely available on the internet to produce the systems they are selling.

This is ultimately a self-destroying practice, because it removes the incentive for humans to be creative in mediums accessible by these AI companies. When humans stop being creative on the internet, digital works of art and other outlets of human ingenuity on the web will be replaced with a torrent of derivative, mediocre, uninspired slop. Even now, only a few years into the widespread adoption of AI technologies, we are already seeing this effect everywhere. And at the moment, this problem is growing at an almost exponential rate.

And this is not only due to the generative uses of LLMs and other AI technologies. A similar effect can be observed as a result of agentic AI. As the use of these technologies replaces classic web searches, a destructive effect can be observed here as well.

Where Google pioneered the business model of using the mass of content produced by other people on the internet to make money, its service also rewarded creators by making their content visible to, and thus accessible by, the public. And by building an advertising network around this service, Google even created a whole new way for creators to make money from their work on the web. Granted, this was less money than these people could have earned in the times before the widespread adoption of the internet, but it also opened up these possibilities to many people who would otherwise have been kept from ever finding an audience.

While the old Google model had plenty of downsides, it pales in comparison to what AI companies are doing to creators on the web. ChatGPT and its competitors do not incentivise their users to click through to original content. These products are designed to pass off other people’s work as results of their own “intelligence”. Even when these chatbots provide source links, these are often limited to a tiny fraction of the actual sources the algorithm used in assembling its answer. And users are being intentionally trained by the user experience of these systems to simply accept what the AI says without doing their own reading and research. Bypassing this step is one of their selling points, after all. These things are built to have “conversations” with their users, which is another way of saying they are meant to keep the user within their own ecosystems.

The result of this, once again, is to disincentivise people from putting their work on the internet. Their work will simply be ripped off shamelessly by AI agents passing it off as their own product, while at the same time users are prevented from finding the actual creator, and the creator is thus prevented from finding an audience, let alone earning some money from their work.

In the last thirty years, we thus went from creatives selling their work offline to forcing them to sell it online for much less money. But whereas that shift at least came with possible exposure to an audience whole orders of magnitude bigger than in the offline world, and the possibility of flipping this into making even more money than before, we are now taking their money and their audience away with AI. As it stands right now, creatives will either have to work for free, give up being creative altogether, or move their work completely offline.

While I have been thinking about the implications of these technologies for years, I was finally galvanised into action when I recently discovered this article.

The [Economist lead article] states crisply that “the danger is that, as answer-engines take readers away, they are removing the incentive for content to be created,” concluding that “if nothing changes, the risk is of a modern-day tragedy of the commons. The shared resource of the open web will be over-exploited, leading to its eventual exhaustion.”

The problem lies in what it so appropriately refers to as Google’s change from being a search to an answer engine, a subject further explored by the Pew Research Center’s timely report of their study summarised here by humans. That demonstrates that Google “users are less likely to click on links when an AI summary appears in the results”.

Nothing that I publish here has come from AI or answer engines. Every word that is written comes from this human. Where there are errors or mistakes, they are mine and not the product of an LLM. I do not use AI such as Writing Tools for proofreading, to summarise articles, or to write comments. The same applies to content that I write for publication elsewhere, including my regular sections in MacFormat and MacLife magazines, whose publisher explicitly forbids the use of AI to produce content.

The time has come to stand up and make it clear that none of the content of this blog is prepared with the aid of Writing Tools, Apple Intelligence, Large Language Models, or Artificial Intelligence. What you read here is researched, composed and corrected by a human.

To make this distinction clear, this site will now display an owl emoji. Owls have long and deeply ingrained associations with the night, and with wisdom and learning, going back to the classical civilisations of the Mediterranean […] And I encourage others who write blogs and online content free from AI to do likewise. It’s time to stand together.

I agree with many of the points raised there and love the author’s idea of using an owl and a “no AI content” label. I will therefore be starting to adopt this label myself going forward.

Many people will probably see this as a hopeless last stand. “You can’t stop progress” and all that jazz. But since I have never used AI in my writing before, and never had the intention to do so going forward, it really doesn’t cost me much to make this attitude explicit. It’ll make it a bit harder to source teaser images for posts, and that’s about it.

It was always a matter of personal integrity and professional ethics for me, a writer more by calling than by profession, to never use these technologies. After all, if you can paint like Simon Stålenhag, you don’t use AI to generate images. If you’re Sabaton, you don’t need algorithms to write a power metal banger. And if you consider yourself a good writer, which I guess I do these days, your aspirations should be along the same lines. Writing, as I see it, is not about making the process as easy as possible. It is about producing the best result. And AI never produces the best result; only human creativity can do that.

From an ethical standpoint, even though the atom bomb was inevitable ever since we figured out how to split the atom, I’d rather not be Oppenheimer. Fame like that comes at too high a price, if you ask me. Just because a development is inevitable doesn’t mean you have to be part of it. Given how I see the future developing from here, I’d rather wash my hands of this technology. I might use it when creating D&D settings for my players and in other private applications where I can see no harm in doing so, but on this website, where I showcase journalism and my own creativity, I see the harm and I don’t want to be part of the problem.

So I guess I’m taking this last stand right here: NO AI CONTENT


It’s the nature of time that the old ways must give in
It’s the nature of time that the new ways come in sin

– 30 –