AI's Cultural Battlefield
Art, consent, and power in the age of probability
Many artists I respect and admire argue that AI is undermining creativity and corroding culture. They see mimicry without meaning, scale without care, and extraction without consent. I recognise the validity of their critiques, but I remain fascinated by AI's power and by what seems an endless range of ways it can be deployed.
The extraction of “content” for training data (at scale, for profit, without permission) seems to provoke the biggest outrage. It is framed as theft. My instincts come from a different place. I grew up relatively poor, so buying books, music and films was rarely an option. They were things you accessed through workarounds: books came from the library, music was taped off the radio, films were recorded from TV onto VHS. That early experience of constant borrowing and copying left me with a default belief that culture should be free and widely accessible. It resists neat ownership because it spreads when people connect with it and share it. Technology granted me access by lowering the cost of entry.
I know that instinct clashes with the fact that creators need rights and income, and I do not resolve that tension neatly here. The question of consent and compensation for training data remains open for me. But it does explain why I find it hard to feel clean moral outrage at the idea of culture being “taken” in the abstract, even while I worry about who profits, on what terms, and with what accountability.
The tension makes me ask other questions instead. Are these artists failing to summon interesting outputs from a new tool that grants access to a latent cultural space of potential and probability? Are they aggrieved by how little effort the new method demands, just as painters were when photography first appeared? Or are they simply reacting to the abundance of low-quality, high-volume “AI slop” on social media?
I think a major part of the friction lies less in the capabilities of AI and more in the corporate terms attached to the interface. I wouldn’t want to buy a notepad that I had to keep at WHSmith, one I was only allowed to write in while inside their shop, with the nagging suspicion that they might be reading my notes as soon as I left - even though I ticked the box telling them not to. That is the current reality of AI platforms. And because a handful of firms control the interfaces, they also shape who benefits from the value created. But is that enough to justify boycotting note-taking itself?
One of the most common objections I encounter concerns influence. Critics say it is misleading to compare large language models (LLMs) and their training to a creator being shaped by predecessors. The difference, they argue, is qualitative. A person studies, interprets, and transforms. An LLM maps patterns and probabilities. Human influence is interpretive; machine influence is computational.
That difference is real, but it is not decisive. What matters most is not the origin of a work but whether it has aesthetic force. Rejecting a work solely because of its provenance risks replacing individual taste and judgement with blanket rules and preconceptions. We should acknowledge that LLMs lack subjectivity, yet still allow that their outputs can be judged on their aesthetic qualities. The harder question is where authorship sits.
An LLM does not mean anything in the human sense, and meaning is integral to art. But audiences already ascribe meaning beyond what artists consciously intend. A workable standard is to locate authorship in the human decisions that set the aim, shape the material, and accept responsibility for the result. The machine is a tool, albeit one with extraordinary generative power. The human remains accountable.
Economic questions run deeper. Much of what trains these models is the cumulative labour of billions of people. Artworks, captions, reviews, and compositions contribute to a vast archive. This is not merely raw material; it is uncompensated contribution. Treating it as a neutral resource obscures a stark imbalance of market power: individual creators cannot negotiate on equal terms with large platforms and model developers. The remedy - whether through licensing models, data trusts, collective negotiation, or public-interest frameworks - is not just a technical fix but a question of policy and governance. Without safeguards, the likely trajectory is further concentration of wealth and more uncertainty for creators.
When critics describe the collapse of creative markets, they often present it as the end of art. It is perhaps more accurate to say that value will move. If the cost of copies falls close to zero, scarcity will be found elsewhere: in live performance, direct engagement, trusted curatorship, or the craft of long-form projects where the author is accountable. History shows that art has always been tied to economic structures. The task is to design the next form of support.
In lazy hands, LLMs tend to reproduce statistical averages, risking a flattening of culture. The role of the human creator is to push against that gravity, using their tools - AI or otherwise - to deliberately move beyond the average and bring surprise back into creative culture.
Currently, expertise and knowledge of craft remain vital. Long-form writing, for instance, still demands control of voice, pacing and theme. Models can now retain vast amounts of text, but long-form work depends on more than recall. Sustaining thematic tension, subtext, pacing and restraint over hundreds of pages remains, for now, a human-led craft. Systems can track a plot point, but they do not reliably weigh the emotional accumulation of a scene, or decide what to withhold, compress, or let echo across a whole work. The difference is not information. It is authorship: sustained judgement about emphasis, pacing, feeling, and meaning - and accountability for the result.
The most pressing threat is perhaps not creation but discovery. In an age when both the cost of generation and the time it takes are near zero, attention becomes the bottleneck and curation becomes a scarce good. The trusted guide grows more valuable than the manufacturer. Without funded, credible forms of curation, distinctive human work will not fail on merit - it will fail quietly, unseen, under the weight of abundance.
So, my question is what fairness looks like when culture becomes automated, commercially captured and abundant. A workable cultural settlement rests on transparency and value. Perhaps it requires machine-readable disclosure to distinguish synthetic output from accountable human work, alongside shared frameworks that treat training data as a valuable input rather than a free quarry. Crucially, I think we must build discovery mechanisms that reward distinction and difference.
The strongest critiques of AI-generated work point to real risks: misattribution, market concentration, and homogenisation. Meeting them does not mean rejecting the tool. It means building rules, norms, and institutions that make its use fair. Cameras and samplers once provoked similar objections. Each time, art and culture adjusted. What is different now is the speed and opacity of the instrument. That raises the stakes. But if we judge works on merit, align incentives, and check excessive market power, AI can widen the field rather than narrow it.


