My stance on AI

Attempting to chart an ethical course through impossible waters

One of the most interesting, and in hindsight inevitable, developments in the world of AI in the past year or so is how tribal it has become.

I love Bluesky. I spend an uncomfortably large amount of time on it, and I follow mostly left-of-centre arty folks. Within that group, the standard view of AI is that it’s a “planet-destroying, lying plagiarism machine”. AI is immoral, because it’s trained on copyrighted works without permission and because it’s being used to replace human creativity. It’s useless, because it’s just “fancy autocomplete” and will happily hallucinate anything it fancies. It will never have any useful applications. It’s being pushed hard by a VC-backed Silicon Valley elite to justify the investment they’ve already made in it, and at some point soon the bubble will burst. Anyone who uses it or advocates for it is a con artist, a sociopath, an idiot, or some combination of all three.

I think that’s a reasonable characterisation of what is a fairly mainstream viewpoint. I also don’t think any of those points is entirely without merit. Nevertheless, I think there is something deeply unhelpful – on both an individual and a societal level – about shunning AI entirely, or only engaging with it at the level of cynical, Zitron-esque sniping.

I worry that by taking an absolutist and refusenik stance on AI, we leave the questions of how to implement one of the most interesting technological developments of my lifetime to poorly socialised techbro libertarians, bonkers accelerationist post-humanists, and venal and amoral venture capitalists. I do not think their vision of the world is one I particularly want to live in, and so I do not think we should leave this technology in their hands.

I think the refusenik attitude is partly driven by the sense that AI will, at some point, destroy itself; that, because it’s fundamentally useless, we can ignore it and wait for it to collapse under the weight of its own inutility. But I don’t think that’s going to happen. AI is demonstrably useful now, even if it doesn’t improve further – which I’m certain it will.

So I find myself left with this heterodox view of AI that will probably please absolutely no one, since I think the boosters and the doomsters are both fundamentally mistaken. But, partly to get my own thoughts straight, partly in the interests of potentially helping others to tread a less extreme path through this complex issue, partly because I think there’s a moral urgency to doing something with AI that is for the common good, and partly just to shout into the void, I share my views here.


My beliefs about AI, as things stand in the middle of 2025, are:

AI has original sin • AI is overhyped • AI is useful • AI is useful even without AGI/superintelligence • Leaving AI to the techbros is a strategic error • AI is not that bad for the environment • AI is economically viable • ChatGPT was a mistake • Using AI for creative work is uninteresting • AI video generation tools should be banned • Many applications of AI won’t work • AI will boss maths, science and software engineering • Implementing AI will be hard and slow

AI has original sin. How AIs were trained – by consuming the entirety of the human digital “commons” – is by itself immoral. As Robin Sloan says, this transgression can be excused by society only if AI leads to enormous social benefits, benefits that far outweigh this original sin of AI. We must hold AI companies and their investors to this high standard, and not trade the commons for slop.

AI is overhyped. It can’t do many of the things its boosters claim it can do, and it might never be able to. I don’t think we’ll achieve artificial general intelligence (AGI) via large language models (LLMs). Loads of applications have insurmountable practical obstacles, even if AI is theoretically suitable for them. Some of the boosterism is completely embarrassing and incorrect.

AI is useful. Despite the hype, AI is not useless. It’s not NFTs, the metaverse, or blockchains. We shouldn’t dismiss it out of hand as we correctly did with those technologies. I understand why people have this dismissive instinct: they’ve been burned by a decade of over-hyped, bullshit technologies and have grown weary of a tech industry that has incinerated any trust it once had. But I think AI is not like those other technologies.

AI is useful even if we don’t get artificial general intelligence or superintelligence. If advancement in LLMs stopped today, they would already have the power to transform countless parts of countless industries. We made machines fluent in the symbolic layer of language! We used this to approximate reasoning in ways that are genuinely useful at problem solving! We’ve made huge leaps forward in the translation of human languages, sentiment analysis, transcription, summarisation, and the general ability of computers to deal with natural language! This is a remarkable achievement. A refusal to admit this is probably the thing that cheeses me off the most about a default-cynical approach to AI.

Leaving AI to the techbros is a strategic error. I remember well enough the early days of the internet, full of competing visions of what it could be and the human connections it could foster. Sure, lots of those things faltered; social media became a black hole of awfulness and a small number of platforms ended up controlling everything. But why speed-run to that conclusion with AI? Charities, universities, libraries, museums, and other forces of cultural good in the world should be releasing models. We should be finding ethical sources of training data. We should be thinking up new interfaces for AI that don’t have engagement and monetisation as their primary goals. We should be making machinery helpful to commonality. But nowhere near as much of this is happening as should be, because so many of the people who have a humanistic temperament have rejected AI wholesale, leaving it to the techbros and the sociopaths.

AI is not that bad for the environment. Last year you couldn’t move for breathless claims of how awful AI was for the environment – “each chat with ChatGPT uses a bottle of water!” and so on. I think we now know this to be false. Training has a high carbon footprint, but inference has a very low footprint – low enough that we don’t need to think about it, just like we don’t think about the carbon footprint of searching Google. The average query to AI uses one fifteenth of a teaspoon of water and the same amount of energy as running a domestic oven for one second. All of this is entirely electrified and so can be powered easily by nuclear and renewables. It’s not perfect – and the energy use for generating images and especially video is obviously much higher than for generating text – but I don’t think AI is a significant contributor to our energy usage or to climate change, and there are much bigger priorities to fix.

AI is economically viable. A central part of the critique of people like Ed Zitron is that somehow the AI industry is a bonfire of investor cash and that it is structurally and inescapably impossible to do AI profitably. I think that’s already fairly nonsensical and will only become more so in the future. Training a new foundational model costs single-digit millions of dollars; inference costs are falling all the time. Many AI-based business models are already perfectly profitable and will continue to be so. The unit economics of AI make sense. That doesn’t mean every big AI company will survive, or that everything that’s been invested in the industry has been invested wisely, or that company valuations make sense. But this is not something that only works when it’s being subsidised by VCs, as some people claim.

ChatGPT was a mistake. Both making LLMs a consumer product and making them available through a chat interface was a profound mistake. I’m sceptical about some of the more lurid accusations of its impact, e.g. that ChatGPT is unleashing a wave of psychosis. (Scott Alexander has a good take on this.) But the amount of misinformation and misunderstandings already created by this will take years to undo. Even just the practical one, that “AI” means “chatbot”, is an annoying drag and a constraint on both imagining new use-cases and getting people to adopt AI.

Using AI for creative work is uninteresting. It’s immoral, because of how AI was trained and because it generally puts human creatives out of work, but moreover it’s not very good. Beyond generating, like, clipart levels of imagery, I’ve decided after much experimentation that I have zero interest in creating or consuming AI-generated creative work. It’s a shame that this is what most creative people think AI is used for.

AI video generation tools should be banned. I cannot see how the pros outweigh the obviously enormous cons around misinformation that are layered on top of the existing immorality of generating static images. I and everyone I know has been taken in by at least one AI-generated video that we know of, and presumably many others that we don’t. It’s not possible to eliminate this – the genie is out of the bottle – but I would like to see some sort of prohibition on the commercialisation of AI video generation, or at least the major labs agree not to promote this as a product.

Many applications of AI won’t work. Simon Willison defined the “lethal trifecta”: LLMs that have access to private data, exposure to untrusted content, and the ability to communicate externally are profoundly, and potentially inescapably, insecure. So we’re not going to have AI personal assistants any time soon, and Siri probably isn’t going to suddenly become good because of LLMs. What LLMs can do well and badly is often unintuitive, and often more to do with security constraints than raw capabilities.

AI is and will be great at maths, science, and software engineering. AI excels at things with clearly and immediately self-verifiable answers and tight feedback loops. I don’t care that it isn’t technically “reasoning”; what it does is so effective an approximation of human reasoning that it’s already extremely useful at many cognitive tasks within these domains. The experience of pair programming with, for example, Claude Code – especially in a technology that’s new to you – is a truly remarkable one. I think it’s perfectly reasonable to think that LLMs will lead to breakthroughs in scientific research and that they will transform software development.

Implementing AI is a hard, human problem even where it’s obviously viable. Businesses will be slow to adopt AI because they’re slow to adopt everything, but also because replacing human processes with either AI or AI-generated software is something that takes alignment, understanding of the processes, and spotting of the opportunities. Organisations will differ significantly in their ability and willingness to do this, and will experience wildly differing competitive pressures on them to do so. I suspect “software consultancy” will feel intense pressure to adopt and will adopt quickly; I suspect “widget manufacturer” or “care home” will not.


So, that’s that. It leaves me feeling medium-term bullish on LLMs as a technology, moderately sceptical about the current crop of large AI companies’ valuations, excited about the potential of AI within business and especially software development, worried about the impact on the creative industry and the media, and desperate for the future of these technologies not to be shaped by amoral narcissists.

I’m interested in anyone who’s trying to build LLM machinery that’s helpful to commonality rather than hurtful, who’s focusing on human-scale things with localised impacts, and who’s generally approaching this from an ethical and humanist perspective. If that’s you, send me an email.