The Ghost in the Archive
Art, Value, and the Algorithmic Turn
[Deep Dive Disclaimer: These deep dives are a mixture of original drafting, AI deep research, and editing. I have been creating a rough draft of an op-ed-style article, using that rough draft as a prompt for deep research, using that deep research to improve the draft, then using the improved draft as a deep research prompt – to create a more stylised and readable ‘deep dive’. I send out the improved op-eds through my Substack email list. These deep dives are the endpoint of the process. I was creating them for my own enjoyment but realised other people might enjoy them too. I’m not sure I can claim authorship of them – though I don’t think anybody else could have created them either…]
The Anxiety of the Image
It is a peculiarity of the present moment that the most fervent discussions about the metaphysics of the soul are taking place not in the hushed seminar rooms of the Sorbonne or the coffee houses of Vienna, but in the server farms of Northern Virginia and the subreddit threads of Silicon Valley. The prompt for this renewed philosophical panic is the sudden, vertiginous rise of generative artificial intelligence – systems capable of producing text, images, and sounds that are, at a cursory glance, indistinguishable from the products of human labour. To many, this capability signals not merely a technological shift but an existential crisis, a “corrosion of culture” where the very mechanism of meaning-making is outsourced to a statistical probability engine. The objections are fierce, deeply felt, and articulated with the desperation of a class that sees the ground dissolving beneath its feet: that AI represents mimicry without meaning, scale without care, and extraction without consent. It is a critique that sees in the Large Language Model (LLM) not a tool, but a tomb for the human spirit.
Yet, to dismiss these technologies merely as engines of cultural collapse is to miss the far more interesting, and difficult, terrain they reveal. The question is not whether AI is “good” or “bad” in the abstract – moral categories that technology has a habit of dissolving in the acid bath of utility – but how we might judge its outputs, organise the power it consolidates, and design the incentives that will allow culture to survive its own automation. The defence of the human in the age of the algorithm does not require us to deny the machine its power, nor to retreat into a Luddite rejection of the inevitable, but to locate, with greater precision than ever before, where the human actually resides in the creative process.
The anxiety is rooted in a specific and unsettling recognition: the “black box” of the neural network is uncomfortably similar to the “black box” of the human mind, at least in its output. When we ask an artist to draw a landscape, they draw upon a lifetime of seeing landscapes, studying other artists, and internalising the rules of perspective and light. When we ask an LLM to describe a landscape, it draws upon a dataset of billions of descriptions, mapping the statistical probability of a “shimmering” lake following a “golden” sunset. Critics argue that this similarity is superficial – a trick of the light. They say it is misleading to compare the training of a neural network to the education of a human creator. The difference, they insist, is qualitative. A person studies, interprets, and transforms; they impose a biography, a trauma, a specific and unrepeatable “I” onto the material. An LLM maps patterns and probabilities. Human influence is interpretative; machine influence is computational. That difference matters.
But does it matter to the viewer? Or does it only matter to the maker? The point of defence for the new technology is not to insist the processes are the same – they clearly are not – but to shift the question entirely. What matters most, the proponents argue, is not the origin of a work but whether it has aesthetic force. Rejecting a work solely because of its origin risks replacing judgement with provenance – with the question of where a piece came from standing in for whether it is any good. We should acknowledge that LLMs lack subjectivity, yet still allow that their outputs can be judged on their aesthetic qualities. The harder question is where authorship sits when the model has no intention of its own.
I. The Mechanical Eye: A History of Aesthetic Panic
To understand the scale of the disruption, and to find our footing in this shifting sand, one must first look backward. The history of art is, in many ways, a history of technology inducing panic in the established order of creators. In the spring of 1859, the poet and critic Charles Baudelaire visited the Salon in Paris – the grand, state-sponsored exhibition that defined the boundaries of acceptable taste in the French art world. He found himself confronted by a new “industrial madness” that had begun to encroach upon the sacred territory of the image: photography.
In a letter to the Revue Française, later published as “The Modern Public and Photography,” Baudelaire unleashed a vitriolic attack on the medium that echoes, with uncanny precision, the critiques of AI today. Photography, Baudelaire argued, was the “refuge of every would-be painter, every painter too ill-endowed or too lazy to complete his studies”. It was a technology that rewarded exactitude over imagination, a mechanical process that threatened to supplant the “indefinable” quality of the soul with a brutal, material truth. If photography were allowed to encroach upon the domain of the impalpable and the imaginary, he warned, “art is lost”.
Baudelaire’s terror was rooted in a specific conception of provenance: the value of an image lay in the human hand that crafted it, the “spirit” that infused the pigment, the struggle of the artist to capture the ideal. A machine that could capture the world without the mediation of the artist’s interpretative eye was, to him, a “mortal enemy” of art. He saw the public’s fascination with photographic realism as a sign of their “stupidity,” a degradation of taste where the “exact reproduction of nature” became the only creed.
Yet, history did not vindicate Baudelaire’s apocalyptic forecast. Photography did not destroy painting; it liberated it. Freed from the obligation of strict representation – the need to capture the exact likeness of a merchant’s face or the precise rigging of a ship, tasks the camera performed better and cheaper – painting was forced to move inward. It moved into Impressionism, capturing the subjective quality of light; into Expressionism, capturing the subjective quality of emotion; and finally into Abstraction, abandoning the object entirely. Photography, meanwhile, developed its own distinct aesthetic language, its own masters, its own “aura.”
This concept of “aura” was central to the work of Walter Benjamin, the German Jewish philosopher who, writing in 1935 under the gathering storm clouds of fascism, grappled with similar anxieties in his seminal essay, The Work of Art in the Age of Mechanical Reproduction. Benjamin argued that mechanical reproduction (photography, film) withered the “aura” of the original work – its unique presence in time and space, its connection to ritual and tradition. A photograph of a cathedral is not the cathedral; it lacks the specific “here and now” of the stone.
But Benjamin also saw in this destruction a democratic potential. By detaching the reproduced object from the domain of tradition, the mechanical age substituted a “plurality of copies for a unique existence”. It brought art out of the church and the palace and into the living room and the cinema. It politicised the image. Today, we face a disruption of a different order. The AI does not merely reproduce; it generates. It does not copy the Mona Lisa; it infers the statistical probability of a smile and the pixel distribution of sfumato to create a billion new Mona Lisas, none of which ever existed, yet all of which haunt the viewer with a sense of familiarity.
This is not the loss of the aura; it is the hyper-inflation of the aura. It is a flooding of the market with the “style” of the master, severed entirely from the master’s hand. When an image generator can produce a “Rembrandt” in seconds, the “aura” of the Rembrandt style – the heavy impasto, the dramatic chiaroscuro – is detached from the historical figure of Rembrandt and becomes a floating signifier, a data token that can be applied to a cat, a hamburger, or a politician. This feels like a violation because it commodifies the one thing the artist thought was inalienable: their subjectivity. But if we follow Benjamin, we might see that this “loss of aura” forces us to locate value elsewhere. If “style” is cheap, then what is expensive? Perhaps it is context, provenance, and intent.
II. The Synthesiser Wars and the Definition of the Musician
If the 19th century gives us the visual parallel, the late 20th century gives us the auditory one. In the early 1980s, the music world was convulsed by the arrival of the synthesiser and the sampler. These were machines that could mimic the sounds of acoustic instruments – strings, horns, drums – with increasing fidelity.
In 1982, the UK Musicians’ Union, alarmed by the sight of pop stars touring with banks of keyboards instead of full orchestras, took a stand. They passed a motion to ban the use of “synths, drum machines, and any electronic devices” in live performances and recording sessions. The specific trigger was Barry Manilow, who had the temerity to tour the UK with a synthesised orchestra, replacing the dozens of unionised string players who would normally have accompanied him. The Union’s argument was economic – saving jobs – but it was also aesthetic. They argued that the machine was producing a “fake” sound, a soulless mimicry of the real vibration of wood and wire.
The ban, of course, failed. It was impossible to police. But more importantly, it missed the point of the technology. The synthesiser did not simply replace the orchestra; it created entirely new genres of music that were unimaginable within the constraints of the orchestral tradition. Techno, House, Hip-Hop, Synth-Pop – these forms relied on the specific, “artificial” timbre of the machine. They turned the “defect” of the machine – its rigid timing, its lack of human “swing” – into a feature.
However, the analogy between the synthesiser and the AI is imperfect, and the difference is instructive. The synthesiser was an instrument; the AI is an agent. The synthesiser required a player; the AI requires only a prompt. This shift from “playing” to “prompting” represents a compression of skill that is deeply unsettling. It suggests that the technical mastery of the craft – the years spent learning to draw a hand, or voice a chord, or structure a paragraph – can be bypassed.
But as the initial novelty of AI generation fades, we are rediscovering that “expertise remains vital”. A system can track a plot point, but it cannot weigh the emotional accumulation of a scene. It can generate a million words, but it cannot decide what to withhold. The machine produces text; the human produces literature. The difference is not information. It is taste, intention, and responsibility.
III. The Chinese Room: Meaning, Mind, and the “Slop”
The philosophical root of the “mimicry without meaning” objection is best captured by John Searle’s famous “Chinese Room” thought experiment, first proposed in 1980. It is a counter-argument to the idea of “Strong AI” – the notion that a computer can truly understand.
Searle imagined a person locked in a room with a vast manual of rules for manipulating Chinese symbols. People outside the room slip in questions written in Chinese; the person inside, who knows no Chinese, follows the manual (if you see symbol X, produce symbol Y) to produce answers. To the observer outside, the room understands Chinese perfectly. It passes the Turing Test. But to the person inside, it is merely symbol manipulation – syntax without semantics. There is no understanding of what the symbols mean.
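The mechanics can be made literal in a few lines of Python. This is a deliberately trivial sketch – two hard-coded rules standing in for Searle’s vast manual, with every symbol and response invented for illustration:

```python
# A deliberately trivial "Chinese Room": a rule table maps input
# symbols to output symbols. The program produces fluent-looking
# answers without any representation of what the symbols mean.

RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",        # "How are you?" -> "I am fine, thanks."
    "今天天气好吗?": "今天天气很好.",   # "Is the weather good?" -> "It is."
}

def room(symbols: str) -> str:
    """Follow the manual: look up the input shape, emit the output shape.
    No step in this function involves knowing Chinese."""
    return RULE_BOOK.get(symbols, "请再说一遍.")  # "Please say that again."

print(room("你好吗?"))  # fluent output, zero understanding
```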
Generative AI is the Chinese Room writ large, scaled to the petabyte. It processes the “syntax” of human creativity – the brushstrokes, the rhythmic cadences, the narrative tropes – without ever accessing the “semantics,” the lived experience of love, grief, or the specific quality of light on a wet pavement that motivates the human artist. This absence of “intentionality” is what leads critics to dismiss AI art as “slop” or “gibberish”. It is form without content, a “hallucination” of meaning.
However, this critique runs headlong into a major strand of 20th-century literary theory: the “Death of the Author.” In his 1967 essay, Roland Barthes argued that the author’s intention is not the definitive anchor of the text’s meaning. “The birth of the reader must be at the cost of the death of the Author,” Barthes wrote. Meaning is created in the act of reception, not in the act of production. If a reader is moved to tears by a poem generated by an AI, is that emotion “fake”? If an audience finds meaning in a synthesised image, does the lack of human intent invalidate that experience?
We are perhaps witnessing the “Birth of the Prompter,” where the locus of authorship shifts to the human decision to set the aim, shape the material, and accept responsibility for the result. An LLM does not mean anything in the human sense, and meaning is integral to art. But audiences already ascribe meaning beyond what artists consciously intend. A workable standard is to locate authorship in the human decisions that frame the output. The machine is a tool, albeit one with extraordinary generative power. The human remains accountable.
Slavoj Žižek, the Slovenian philosopher, offers a twist on this. He argues that the “meaning” of a work of art is often a retroactive illusion. In a recent commentary on AI, he suggests that what we fear is not that the machine will be intelligent, but that it will reveal that we are not as intelligent as we think – that much of our “creativity” is also just pattern matching and symbol manipulation. The AI holds up a mirror to our own robotic nature.
IV. The Great Enclosure: Surveillance Capitalism and the Digital Commons
If the aesthetic questions are thorny, the economic ones are brutal. The foundational sin of the current AI boom, according to its critics, is “extraction without consent.” The large language models that power ChatGPT, Claude, and Gemini were not born from the ether; they were born from the “cumulative labour of billions of people”. The “training set” is nothing less than the digital record of human civilisation – books, articles, code, photographs, forums, and emails – scraped, cleaned, and ingested by private corporations.
This process represents a massive transfer of value from the public commons to private equity. It is a modern-day Enclosure Movement. In 18th and 19th-century Britain, the “commons” – land that was collectively used by villagers for grazing and subsistence – was systematically fenced off by the landed gentry, backed by Acts of Parliament. This “primitive accumulation” created the conditions for industrial capitalism by forcing the peasantry off the land and into the factories. The land, once a shared resource, became capital.
Today’s “Digital Commons” – the open web – is being similarly enclosed. For decades, we operated under a tacit social contract: we posted our work online to be shared, read, and remixed by other humans. We did not consent to have it ingested by a machine that would then sell a statistical simulacrum of our work back to us. The “Paradox of Open,” as described by researchers at Open Future, is that the very openness of the web made it vulnerable to this strip-mining. The “open internet” became the “free quarry” for the closed AI model.
This extraction is justified by the language of “fair use” and “democratisation,” but it operates on the logic of Surveillance Capitalism, a term coined by Shoshana Zuboff. Zuboff argued that tech giants mine “behavioural surplus” – the digital traces of our lives – to predict and modify our behaviour for the benefit of advertisers. With AI, the commodity is not just our behaviour, but our creativity. The “prediction” is not just what ad we will click, but what image we would have drawn, or what sentence we would have written.
The result is a profound imbalance of power. Individual creators – the illustrator in Jakarta, the fan-fiction writer in Ohio, the journalist in London – cannot negotiate with a trillion-dollar company on equal terms. Their data has already been taken. Redistribution – whether through data trusts, collective bargaining, or public institutions – is not a “technical fix” to be patched in the next software update; it is a political struggle. Without reform, the likely trajectory is greater concentration of wealth in the hands of the model owners, and greater precarity for the creative middle.
V. The Swing Riots of the Digital Age
The resistance to this dispossession has been visceral. We see it in the development of tools like Glaze and Nightshade by researchers at the University of Chicago. Glaze applies a subtle “cloak” to an image that tricks AI models into misinterpreting the style (e.g., making a charcoal drawing look like an oil painting to the AI), while Nightshade “poisons” the training data, causing the model to break down if it ingests too much of it.
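The trick underneath these cloaks is adversarial perturbation. The sketch below is a toy, not the Glaze algorithm itself – the real system targets the deep feature extractors inside image generators, whereas this stand-in uses a simple linear “style classifier” whose loss gradient is known in closed form:

```python
import numpy as np

# Toy illustration of the idea behind style cloaks such as Glaze:
# add a small, hard-to-see perturbation that pushes an image across
# a classifier's decision boundary. A linear "style classifier"
# stands in for the deep models the real tool attacks.

rng = np.random.default_rng(0)
w = rng.normal(size=64 * 64)              # stand-in classifier weights
image = rng.uniform(0, 1, size=64 * 64)   # flattened 64x64 greyscale "artwork"

def style_score(x):
    return float(w @ x)                   # >0: "charcoal", <0: "oil paint" (toy labels)

# FGSM-style step: move each pixel a tiny amount (epsilon) in the
# direction that most changes the score. For a linear model the
# gradient is simply the weight vector w.
epsilon = 0.01
cloaked = np.clip(image - epsilon * np.sign(w), 0, 1)

print(style_score(image), style_score(cloaked))  # the score drops sharply
print(np.max(np.abs(cloaked - image)))           # per-pixel change <= epsilon
```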
Glaze and Nightshade are the digital equivalents of the “sabots” (wooden shoes) thrown into the gears of industrial machinery, or the sledgehammers of the Swing Riots of 1830. Across southern England, agricultural labourers rose up to destroy the threshing machines that were displacing them during the winter months. Under the mythical banner of “Captain Swing,” they burned hayricks and smashed machinery, not because they hated technology in the abstract, but because they hated the poverty it brought in the absence of a social contract.
The threshing machine, like the AI model, was a labour-saving device that directed all the “savings” to the capital owner while leaving the worker destitute. Recent economic history suggests the Swing rioters were rational: the introduction of the machines did indeed correlate with higher unemployment and lower wages in the affected areas. The rioters were brutally suppressed – hanged or transported to Australia – but their rebellion forced a recognition of the “social cost” of efficiency.
Today’s digital Swing Riots – the lawsuits, the data poisoning, the strikes by the Writers Guild of America – are similarly attempting to force a renegotiation of the terms of trade. They are asserting that the “raw material” of AI is not raw at all; it is cooked, crafted, and owned.
VI. The Entropy of Plenty and the Ecology of Attention
Even if we solve the economic problem, we face the problem of discovery. We are moving from an economy of scarcity to an economy of infinite abundance. When an AI can generate a passable novel in an hour, the value of “generating” drops to zero. The bottleneck shifts to “attention.”
In The Ecology of Attention, Yves Citton argues that attention is a limited resource that must be cultivated, not just mined. The “attention economy” of the last decade – driven by likes, clicks, and outrage – has already degraded our cognitive environment. AI threatens to drown it completely. We face the prospect of the “Dead Internet Theory” becoming real: a web populated by bots talking to bots, generating content for algorithms to rank, with no human consciousness involved.
This flooding of the zone has a thermodynamic consequence. Claude Shannon, the father of Information Theory, defined the information content of a message by its “surprise”: a message that tells you what you already know contains zero information, while a message that disrupts your expectation contains a great deal. Entropy, in Shannon’s terms, is the average surprise of a source. LLMs are, by definition, probabilistic engines designed to predict the most likely next token. They are machines of the “average.” While they can be prompted to be “creative,” their default tendency is to revert to the mean – to produce the most statistically probable, and therefore the least surprising, output.
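Shannon’s measure is easy to make concrete. A minimal sketch, using his standard formulas (the probabilities below are invented for illustration):

```python
import math

# Shannon: the information (surprisal) of an outcome with probability
# p is -log2(p) bits. Certain outcomes carry 0 bits; improbable ones
# carry many. A model that always emits the most probable next token
# is, in this precise sense, minimally informative.

def surprisal_bits(p: float) -> float:
    return -math.log2(p)

print(surprisal_bits(1.0))    # 0.0 bits   -- "the sun rose this morning"
print(surprisal_bits(0.5))    # 1.0 bit    -- a fair coin flip
print(surprisal_bits(0.001))  # ~9.97 bits -- a genuinely surprising sentence

# Entropy is the expected surprisal over a whole distribution:
def entropy_bits(probs) -> float:
    return sum(-p * math.log2(p) for p in probs if p > 0)

print(entropy_bits([0.9, 0.05, 0.05]))  # ~0.57 bits: a predictable voice
print(entropy_bits([0.25] * 4))         # 2.0 bits: maximal for four outcomes
```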
If we feed AI-generated content back into AI models – a phenomenon known as Model Collapse – the result is a progressive degradation of quality. Researchers have found that models trained on synthetic data eventually lose the “tails” of the distribution – the rare, quirky, distinctively human variances – and collapse into a “gibberish” of sameness. The “grey goo” of the average consumes the system. In one study, a model trained on its own outputs forgot the nuances of medieval architecture and began spewing nonsense about jackrabbits within nine generations. This is the “heat death” of culture: a state of maximum entropy where everything is a variation of the same statistical mush.
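The dynamic can be demonstrated in a dozen lines. A minimal sketch – a Gaussian standing in for a full generative model – fits the data, samples from the fit, refits to the samples, and repeats:

```python
import numpy as np

# Minimal sketch of model collapse: each generation "trains" only on
# the previous generation's synthetic output. Estimation error
# compounds, and the distribution's tails (the rare, distinctive
# cases) are the first information to go.

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                 # generation 0: the "human" distribution

for generation in range(1, 31):
    synthetic = rng.normal(mu, sigma, size=50)     # the previous model's output
    mu, sigma = synthetic.mean(), synthetic.std()  # refit; discard the real data
    if generation % 5 == 0:
        print(f"generation {generation:2d}: sigma = {sigma:.3f}")
# sigma performs a downward-biased random walk: run long enough and the
# fitted distribution narrows toward a single point of statistical mush.
```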
The role of the human creator, then, is to push against that gravity. To use AI tools deliberately to move beyond the average and bring surprise back into the system.
VII. Towards a Workable Settlement: Provenance, Compensation, Curation
How, then, do we organise this new reality? If we reject the Luddite fantasy of smashing the servers, and the accelerationist fantasy of surrendering to the machine, we are left with the hard work of building institutions. A workable settlement rests on three pillars: Provenance, Compensation, and Curation.
The Pillar of Provenance
We need a “truth in labelling” standard for digital reality. The Coalition for Content Provenance and Authenticity (C2PA) has developed a technical standard known as “Content Credentials”. This acts like a digital nutrition label, embedding cryptographically secure metadata into a file that records its origin and edit history. It does not prevent someone from making a fake image, but it allows the viewer to see that it is fake – or at least, that it lacks the “digital signature” of a trusted source like the BBC or the New York Times.
This is a shift from “detecting” fakes (which is an arms race the detectors will eventually lose) to “authenticating” truth. It allows us to distinguish “synthetic output” from “accountable human work”. It is the digital equivalent of the hallmark on silver – a guarantee not of quality, necessarily, but of origin.
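The shape of the scheme can be sketched in code. What follows is an illustration of the idea, not the C2PA specification – the real standard signs a rich manifest with public-key certificates, whereas this toy uses a shared-secret signature and an invented publisher key:

```python
import hashlib, hmac, json

# The shape of a content credential: bind a manifest (origin and edit
# history) to the exact bytes of the file, so tampering with either is
# detectable. HMAC with a shared secret stands in for a publisher's
# public-key signing certificate. Not the real C2PA format.

SIGNING_KEY = b"publisher-secret"         # hypothetical publisher key

def issue_credential(image_bytes: bytes, origin: str, edits: list) -> dict:
    manifest = {
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "origin": origin,                  # e.g. "BBC News photo desk"
        "edit_history": edits,             # e.g. ["crop", "colour balance"]
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return manifest

def verify(image_bytes: bytes, manifest: dict) -> bool:
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        signature, hmac.new(SIGNING_KEY, payload, "sha256").hexdigest())
    ok_hash = claimed["content_sha256"] == hashlib.sha256(image_bytes).hexdigest()
    return ok_sig and ok_hash              # valid only if bytes and history match

photo = b"...raw image bytes..."
cred = issue_credential(photo, "BBC News photo desk", ["crop"])
print(verify(photo, cred))                # True
print(verify(photo + b"tampered", cred))  # False: provenance chain broken
```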
The Pillar of Compensation
The “free quarry” model of training data must end. We have solved similar problems before. In 1914, the American Society of Composers, Authors and Publishers (ASCAP) was formed because new technologies and venues were “stealing” music: the dance hall, the phonograph, and soon the radio. Before ASCAP, songwriters had no way to track or collect payment when their songs were performed in restaurants or played on the air. One famous anecdote tells of the composer Victor Herbert hearing his own song played in Shanley’s Restaurant in New York; when he asked for payment, the owner refused, arguing the music was merely “ambiance”. Herbert sued, the case went to the Supreme Court, and in 1917 the concept of the “public performance right” was upheld.
ASCAP created a “blanket licence” model – venues paid a fee to the collective, which then distributed royalties to members based on usage. We need a “Data ASCAP” – or, more precisely, a network of Data Trusts.
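The settlement mechanics are simple enough to sketch, and a “Data ASCAP” would divide licence fees the same way. A minimal illustration (all names and figures invented):

```python
# How a blanket licence settles up, in miniature: the collective takes
# one fee from the licensee and splits it pro rata by logged usage.
# Names and numbers are invented for illustration.

def distribute(blanket_fee: float, plays: dict) -> dict:
    total = sum(plays.values())
    return {member: blanket_fee * n / total for member, n in plays.items()}

usage_log = {"Herbert": 1200, "Berlin": 800, "Sousa": 400}  # plays per quarter
print(distribute(24_000.0, usage_log))
# {'Herbert': 12000.0, 'Berlin': 8000.0, 'Sousa': 4000.0}
```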
A Data Trust is a legal structure where individuals pool their data rights into a collective, which is managed by a fiduciary (a “trustee”) who is legally bound to act in their interests. The Aapti Institute and legal scholars like Salome Viljoen have championed this “Democratic Data” model. Instead of every artist trying to sue OpenAI individually (a hopeless asymmetry), they would assign their training rights to a Trust. The Trust negotiates licences with the AI companies. If the companies refuse to pay, the Trust withholds the data – using tools like Glaze/Nightshade as leverage.
Viljoen argues that we must move beyond the “property” view of data (my data is my house) to a “relational” view (data is a flow that affects us all). A Data Trust allows for “collective bargaining” with the algorithm. It shifts the power dynamic from “extraction” to “negotiation”.
The Pillar of Curation
Finally, we must recognise that the market alone will not support the kind of “inefficient” human creativity that culture requires. If the cost of copies falls to zero, the market price of content will collapse. We need “funded discovery.” This means public investment in the institutions of curation – public broadcasters, libraries, arts councils, and non-profit platforms – that reward “quality and distinction” rather than “engagement metrics”.
Just as we subsidise national parks to protect them from industrial development, we must subsidise “human attention parks” – spaces where the logic of the algorithm is suspended. This is not about preserving the past in amber; it is about creating the conditions for the next avant-garde. As Boris Groys reminds us, the curator’s power is the power to “valorise” the useless, to make space for the “sacred” in a secular world. Hans Ulrich Obrist, the artistic director of the Serpentine Galleries, envisions the curator as a “change agent” who builds bridges between art and technology, rather than a gatekeeper. The “trusted guide” grows more valuable than the manufacturer.
Conclusion: The Human Signal
The strongest critiques of AI art point at real risks: appropriation, oligopoly, and homogenisation. Meeting them does not mean rejecting the tool. It means building the institutions that make its use fair. Cameras and samplers once provoked similar objections. Each time, art adjusted. What is different now is the speed and opacity of the instrument. That raises the stakes. But if we judge works on merit, realign incentives, and re-balance power, AI can widen the field rather than narrow it.
Many artists fear that AI represents mimicry without meaning. Yet what will matter is not the origin of the work but how it is used. The question is less “Is AI bad?” and more “What conditions will allow AI to become good?” As Byung-Chul Han observes, “Beauty neither conveys itself to direct empathy nor to naïve contemplation... The only way to view beauty as a secret is through knowledge of the veil”. The AI generates the veil; it is up to us to imbue it with the secret.
The path forward is not a return to the past, but a “new deal” for the digital age. It requires us to be as creative with our institutions as we are with our prompts. The artist is dead; long live the artist.


