The Digital Enclosure
A Constitutional Sleepwalk
[Deep Dive disclaimer: These deep dives are a mixture of original drafting, AI deep research and editing. My current process for writing a non-fiction article is: creating a rough draft, using that rough draft as a prompt for deep research, using that deep research to improve the draft, then using the improved draft as a deep research prompt to create a ‘deep dive’. I send out the improved drafts through my Substack email list. These deep dives are the endpoint of the process. I was creating them for my own enjoyment but realised other people might enjoy them too. I’m not sure I can claim authorship of them - though I don’t think anybody else could have created them either…]
I. The Chaos of the Ether
In the winter of 1922, the British state found itself staring into the abyss of a new medium, paralyzed by a peculiar form of administrative dread. The technology was “wireless” telephony—soon to be christened “broadcasting”—and to the sober minds of the General Post Office (GPO), it represented a terrifying unravelling of order. For decades, the GPO had maintained a rigid monopoly over the King’s communications, viewing the electromagnetic spectrum as a utilitarian conduit for point-to-point messages, military signals, and imperial governance. But suddenly, the “ether” was crowded with ghosts.
Across the Atlantic, the United States had surrendered its airwaves to the chaotic energies of the free market. The result, viewed from the refined corridors of Whitehall, was a cacophonous warning: a “chaotic expansion” of competing stations, unregulated advertising, and a “race to the bottom” in cultural standards.1 American radio was a babel of commerce, where the citizen was reimagined as a consumer to be sold soap and tonics. In Britain, however, a different impulse stirred—a desire to enclose this new technological commons not for profit, but for the “national interest”.3
This historical juncture, now a century old, serves as the only adequate mirror for our current predicament. We are once again sleepwalking into a constitutional crisis, though we persist in sanitizing it with the banal language of a “technology upgrade.” Artificial Intelligence (AI) is hardening into the bedrock of our national infrastructure. It is beginning to mediate our healthcare, adjudicate our welfare claims, and curate our public knowledge. Yet, unlike the wireless revolution of 1922, the foundations of this new infrastructure are being laid not by a sovereign public corporation, but by a handful of private, transnational entities—the “Cloudalists”—whose economic and political interests are orthogonal to our own.4
To understand the magnitude of what we are ceding, we must first revisit the moment we chose differently. The creation of the British Broadcasting Corporation (BBC) was not inevitable; it was a deliberate, ideological intervention. In 1922, the GPO, anxious to avoid the “unregulated scramble” seen in America, issued a single broadcasting license to a consortium of radio manufacturers, the British Broadcasting Company.2 This initial entity was a commercial compromise, funded by a tariff on wireless sets and intended to stimulate hardware sales. It was a mechanism of the market.
But the market failed the medium. Listeners, displaying a characteristic British ingenuity for thrift, bypassed the tariffs by constructing their own “home-made sets,” tinkering with crystals and wires to conjure “speech and music out of the air” without paying the corporate dues.2 The company floundered, and the state was forced to intervene. Two seminal inquiries—the Sykes Committee of 1923 and the Crawford Committee of 1925—were convened to determine the metaphysical status of broadcasting.3
The Crawford Committee’s report, delivered in 1926, remains a radical document. It concluded that the power of broadcasting was too great to be left to the caprice of commerce. It recommended that the private Company be liquidated and replaced by a public corporation, a “Trustee for the national interest,” established by Royal Charter.3 This was an act of enclosure, certainly, but it was a public enclosure. It asserted that the infrastructure of the public mind—the distribution of news, culture, and education—must be insulated from both the profit motive and the direct control of the government.10
The architect of this moral monopoly was John Reith, an austere Scottish Calvinist who famously regarded the commercial model as a contagion. “Somebody introduced Christianity into England and somebody introduced smallpox, bubonic plague and the Black Death,” he told the House of Lords. “Somebody is minded now to introduce sponsored broadcasting”.11 For Reith, the “ether” was a sacred trust. To sell it was a desecration. The Royal Charter of 1927 codified this vision, creating an institution that was funded by a hypothecated tax (the license fee) and mandated to “inform, educate, and entertain”—in that strict order of priority.1
Today, as we stand on the precipice of the AI age, the Reithian impulse is conspicuously absent. There is no Royal Commission debating the public ownership of “compute.” There is no proposal for a “British Artificial Intelligence Corporation.” Instead, the UK government’s strategy is one of “sovereign capability” that paradoxically relies on renting infrastructure from American giants.12 We are attempting to build a sovereign state on rented land.
The “ether” has been replaced by the “cloud,” but the dynamics of enclosure and control remain. The difference is that while the GPO in 1922 feared the chaos of the market, the modern state seems to fear the responsibility of ownership. We have accepted the “Americanization” of our digital infrastructure as a fait accompli, retreating to the role of a regulator—a safety inspector for a building we do not own.
II. The Rise of the Cloudalists
To grasp the economic reality of this new order, we must look beyond the gleaming interfaces of chatbots and see the physical and economic structures that underpin them. The Greek economist and politician Yanis Varoufakis has proposed a provocative thesis: that we have moved beyond capitalism entirely and entered a phase of “Techno-feudalism”.4
In classical capitalism, profit is derived from production—the making and selling of goods. In a feudal system, wealth is extracted through rent—the ownership of land upon which others must toil. Varoufakis argues that the “Cloudalists” (Amazon, Microsoft, Google) are the new feudal lords. They have enclosed the digital commons, transforming the internet from a decentralized network into a series of private fiefdoms.4
The “land” in this analogy is “cloud capital”—the vast server farms, the oceanic datasets, and the proprietary algorithms upon which the modern economy depends. When a startup trains a new AI model, when the NHS processes patient records, or when a government department analyzes welfare claims, they are doing so on infrastructure owned by these entities.15 They are vassals, paying rent for the privilege of existence.
This is not a metaphor of mere dominance; it is a description of a specific economic mechanism. The production of “Foundation Models”—the massive AI systems like GPT-4 or Claude—is characterized by a “make once, rent forever” structure.17 The barrier to entry is financial. As Table 1 illustrates, the cost of training a frontier model has exploded from a few thousand dollars to hundreds of millions in less than a decade.
This exponential rise in capital expenditure creates a “moat” that no start-up, university, or even mid-sized nation-state can easily cross.21 The “compute” required to train these models is becoming a scarce resource, akin to uranium or arable land. Amazon Web Services (AWS) alone controls 31% of the global cloud market.16 By controlling the physical infrastructure, these firms exert a form of “algorithm exclusion” or “rentiership” that dictates the terms of innovation.5
Critics of the techno-feudal hypothesis argue that these firms are simply successful capitalists leveraging economies of scale.5 They point out that Google sells advertising and Amazon sells logistics—traditional services. However, this misses the structural shift. When the UK government speaks of “sovereign AI,” it is often referring to software that resides physically and computationally within the “digital manors” of US firms.17
The “rent” is not just financial; it is epistemic. By relying on proprietary models like GPT-4, we accept their “view from nowhere” as the baseline for our own intelligence.24 We import the biases, the censorship, and the cultural assumptions of the Californian ideology directly into the British public sector. A Reithian analysis would see this as a catastrophic surrender of “editorial independence.” If the algorithm that triages NHS patients or marks A-level essays is a trade secret owned by a foreign corporation, in what sense can we claim to have a “national” infrastructure?
The current UK strategy, the so-called “Third Way,” attempts to navigate this by focusing on regulation rather than ownership.25 The government’s “AI Opportunities Action Plan” and the creation of the AI Safety Institute are laudable attempts to set “rules of the road”.13 But a safety inspector cannot dictate the architecture of a building they do not own. As Peter Kyle, the Secretary of State for Science, Innovation and Technology, announces partnerships with OpenAI to “share technical information,” the dynamic is clear: the UK is petitioning the lord of the manor for a look at the books.13 It is a relationship of vassalage, disguised as diplomacy.
III. The Battle for the Body Politic
If the “ether” was the battleground of the 1920s, the “body” is the battleground of the 2020s. Specifically, the data-body of the British citizen, aggregated within the National Health Service (NHS). The NHS holds one of the most valuable datasets on the planet: the longitudinal, cradle-to-grave health records of 65 million people.28 In the bio-economy of the future, this data is the primary resource—the oil, or perhaps the soil—from which value will be extracted.
The conflict over this resource has crystallized around the “Federated Data Platform” (FDP), a massive IT infrastructure project intended to integrate the fragmented data silos of the NHS into a single, operational brain.29 The contract for this nervous system, worth £330 million, was awarded to Palantir, a US data analytics firm founded by the libertarian venture capitalist Peter Thiel, with deep origins in the CIA-backed In-Q-Tel fund.29
The selection of Palantir provoked a fierce immune response from civil society. Campaign groups like Foxglove, the Doctors’ Association, and Just Treatment launched legal challenges, arguing that the contract represented a stealth privatization of the NHS’s most valuable asset.32 The critique was fundamentally Reithian: the NHS is a trust, and Palantir, with its background in “spy tech” and aggressive corporate ethos, was viewed as a violation of that trust.31
The government’s defense of the Palantir deal rested on a single, powerful argument: efficiency. The NHS is drowning in administrative friction. Clinicians spend hours navigating archaic, disconnected systems—a “Time Tax” that subtracts from patient care.29 Palantir promised a solution: a “single pane of glass” that would allow hospitals to manage beds, waiting lists, and supplies in real-time. Pilot programs at the Chelsea and Westminster Trust reportedly reduced waiting lists by 28%.35
This is the siren song of the Cloudalists: surrender your data sovereignty, and we will give you the efficiency you crave. We will fix the plumbing, but we will own the pipes.
However, the tragedy of the Palantir affair is that it presented a false dichotomy between “inefficient public privacy” and “efficient private surveillance.” There was, and is, a third option—a path not taken that aligns with the democratic values of the NHS.
Research led by Dr. Andrew Soltan at the University of Oxford has demonstrated the viability of “Federated Learning” (FL) within the NHS.36 In a traditional AI model (like Palantir’s approach), data is extracted from local hospitals and pooled in a central “lake” to train the model. This centralization creates the privacy risks and the “honey pot” for hackers.
In a Federated Learning system, the logic is inverted: the AI model travels to the data. The model is sent to the local hospital server, learns from the patient records in situ, and then sends only the mathematical “updates” (the learnings) back to the center, leaving the sensitive data behind.
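The mechanics of this inversion are simpler than they sound. A minimal sketch of federated averaging (the canonical FL algorithm, not the Oxford team's specific implementation) can be written in a few dozen lines: each "hospital" fits a toy model on its own records and ships back only a weight update, which the centre averages. All names here (`local_update`, `federated_round`, the synthetic data) are illustrative assumptions, not code from the NHS pilot.

```python
import random

def local_update(w, xs, ys, lr=0.1, epochs=20):
    """Train in situ on one hospital's records.

    Only the weight delta (the 'learning') is returned;
    the raw patient data never leaves this function's scope.
    """
    w_local = w
    for _ in range(epochs):
        # Gradient of mean squared error for a one-parameter model y = w*x.
        grad = sum(2 * x * (w_local * x - y) for x, y in zip(xs, ys)) / len(xs)
        w_local -= lr * grad
    return w_local - w  # the mathematical update, not the data

def federated_round(w, hospitals):
    """One round of federated averaging: collect each site's delta, average, apply."""
    deltas = [local_update(w, xs, ys) for xs, ys in hospitals]
    return w + sum(deltas) / len(deltas)

# Four synthetic "hospitals" observing the same underlying signal.
# Their datasets are never pooled.
random.seed(0)
TRUE_W = 2.0
hospitals = []
for _ in range(4):
    xs = [random.gauss(0, 1) for _ in range(100)]
    ys = [TRUE_W * x + random.gauss(0, 0.1) for x in xs]
    hospitals.append((xs, ys))

# The central model starts from scratch and converges toward the true
# signal purely by aggregating updates.
w = 0.0
for _ in range(30):
    w = federated_round(w, hospitals)

print(f"federated estimate: {w:.3f} (true value {TRUE_W})")
```

The design point the sketch makes concrete is that the centre only ever handles numbers like `w`: a single averaged parameter (in practice, millions of model weights), from which the individual patient records cannot be read back directly. Production systems add further protections, such as secure aggregation and differential privacy, because even weight updates can leak information in adversarial settings.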
The Oxford pilot, deployed across four NHS Trusts during the COVID-19 pandemic, used inexpensive Raspberry Pi micro-computers (costing £45-85 each) to train a screening tool for the virus.37 The results were stunning. The federated model improved performance by 27.6% compared to models trained on single-hospital data, and crucially, “patient data never left the hospitals’ premises”.37
Why, then, did the government choose the heavy, centralized, private option over the agile, decentralized, public one? The answer lies in the erosion of state capacity. The NHS, after decades of outsourcing, lacks the internal engineering talent to build and maintain a “sovereign” federated network.38 It is easier to sign a cheque to Palantir than to rebuild the technical competence of the state.
The result is that the NHS is becoming a tenant in its own house. The “Federated Data Platform” is federated in name only; in practice, it is a consolidation of power in the hands of a vendor. The legal victories by Foxglove—forcing the government to admit it cannot grant Palantir a long-term role without consultation—are significant, but they are rear-guard actions.32 The infrastructure is being laid, cable by cable, contract by contract.
IV. The Administrative Gaze and the Time Tax
The “Time Tax” is not merely an inconvenience; it is a mechanism of inequality. As the behavioral scientists Sendhil Mullainathan and Eldar Shafir have demonstrated, poverty imposes a massive “cognitive load” on the mind.39 The poor are forced to constantly calculate trade-offs that the wealthy can ignore. When the state adds to this load with complex bureaucracy—40-page forms, endless hold music, opaque eligibility rules—it is effectively levying a tax on the cognition of its most vulnerable citizens.34
The promise of “Public Service AI” is that it could repeal this tax. Generative AI has the potential to act as a “universal concierge,” guiding citizens through the labyrinth of the state. It could pre-fill forms, translate jargon into plain English, and triage demands on public services.41 The UK government’s “Blueprint for Modern Digital Government” explicitly aims to reduce the “time and energy” citizens spend accessing services, envisioning a future where services are “designed around people”.43
In a survey by Global Government Fintech, more than half of UK public servants in digital roles reported already using Generative AI in their work.42 The efficiency gains are real. But there is a dark side to this automation.
If the AI used to repeal the Time Tax is a “black box” owned by a private vendor, we risk replacing a “bureaucracy of humans” with a “bureaucracy of algorithms.” A human civil servant, however weary, has a moral agency; they can bend a rule, or at least explain it. An algorithm, trained on historical data, simply executes a probability.
We have already seen the catastrophic consequences of “algorithm exclusion” in the public sector. In Brazil’s “Universal Basic Income” trials, an automated review system wrongly excluded 23% of rural low-income applicants because the algorithm, trained on urban data, did not recognize their consumption patterns.22 In the UK, the DWP and the Home Office have faced repeated scandals over algorithmic bias in fraud detection and visa processing.45
When the state outsources its cognition to private AI, it outsources its accountability. A citizen can petition their MP about a bad law; how do they petition a neural network? The “Time Tax” may be reduced, but it is replaced by a “Justice Tax”—the cost of fighting a decision made by an opaque system that the government itself does not fully understand.46
The “British Third Way” in AI regulation—championed as a “pro-innovation” alternative to the EU’s heavy hand and the US’s laissez-faire—risks becoming a hollow middle.25 It focuses on “safety” (preventing existential risk) and “ethics” (bias frameworks) but ignores the fundamental question of power.47
A true Reithian approach would not just regulate the algorithm; it would own it. It would demand that the AI systems used to adjudicate welfare or healthcare be “Public Service Algorithms”—open-source, transparent, and optimized for the citizen’s well-being rather than the vendor’s engagement metrics. It would recognize that in the 21st century, the code is the policy.
V. The Illusion of Sovereignty
The government’s rhetoric on “Sovereign AI” betrays a fundamental misunderstanding of the supply chain. Sovereignty in AI is not a declaration; it is a stack. It requires independence at three layers: the physical (chips and data centers), the model (the weights and training), and the application (the interface).
The UK possesses significant strength at the application layer (fintech, biotech) and retains academic prestige at the model layer (DeepMind, though US-owned, is London-based). But at the physical layer—the “compute”—we are destitute.
The “National AI Strategy” and the “Compute Roadmap” acknowledge this deficit but offer tepid solutions.12 The government has announced “AI Growth Zones” and small pots of funding (£33m here, £100m there) to stimulate infrastructure.12 But compared to the CapEx of the Cloudalists—Microsoft alone spends more on data centers in a quarter than the entire UK science budget—these are rounding errors.
The “Sovereign AI” discussed in Whitehall is often a “fine-tuned” model: a British skin draped over an American skeleton. We might train a model on British laws or NHS data, but if that training happens on AWS GPUs and the base model is GPT-4, the sovereignty is illusory.27 If the US government were to impose export controls on “inference” (the running of the model), or if OpenAI were to change its terms of service, the UK’s “sovereign” capability would evaporate overnight.
This dependency has geopolitical implications. In the 1920s, the GPO feared that foreign control of the wireless spectrum would compromise military communications.6 Today, the Ministry of Defence is exploring AI for “sovereign capability,” yet the very hardware it relies on is part of a global supply chain choked by US-China tensions.49 The “Third Way” attempts to thread the needle between these superpowers, but without its own “compute” capacity, the UK is not a player; it is a playground.
VI. Towards a Digital Commons
Is there an alternative? Can we imagine a 21st-century equivalent to the Royal Charter of 1927?
The seeds of such a future exist, scattered in the margins of the state. The “Digital Public Infrastructure” (DPI) movement, pioneered by nations like India (with its “India Stack”) and Estonia (X-Road), demonstrates that the state can build and own the core rails of the digital economy.50 These systems are “open, interoperable,” and publicly governed, serving as a counter-weight to the “walled gardens” of Big Tech.
In the UK, we see glimpses of this potential in the “Living with Machines” project—a collaboration between the British Library and the Alan Turing Institute.52 Here, AI is used not to extract rent, but to “enrich” the public commons. Researchers are using computer vision to digitize millions of 19th-century maps and newspapers, creating a “data commons” that belongs to the nation.54
The National Archives is similarly experimenting with “AI as Infrastructure,” using machine learning to make vast legal and historical records accessible to the public.55 These projects embody the Reithian spirit: using the latest technology to “Inform and Educate,” treating the citizen as a scholar rather than a user.
The Tony Blair Institute has proposed a “National Data Library” (NDL) as a central asset to drive growth.57 If constituted correctly—as a public corporation with a Charter, rather than a government department subject to political meddling—this NDL could be the kernel of a “British Digital Corporation.”
Imagine a National Compute Cloud, funded by the state, free at the point of use for researchers and public bodies. This would break the rentier cycle of the Cloudalists. Imagine a “Public Service Algorithm” for the NHS, built on the Federated Learning model, owned by the Trusts, and transparent to the patient. Imagine a BBC for the AI age—not a broadcaster, but a builder of “Digital Public Goods.”
VII. The Sleepwalking State
In 1952, looking back on the creation of the BBC, Lord Reith wrote with characteristic melancholy: “I am... disinclined to proceed: as to myself, whatever any others might do, letting the issue go by default”.58 He feared that apathy would allow the commercialization he despised to creep back in. He was right to worry.
Today, we are letting the issue go by default. The “constitutional problem” is not that the technology is evil, but that the governance is absent. We have ceded the power to design our digital environment to entities that do not share our interests, nor answer to our laws.
A “technology upgrade” implies a faster version of the same thing. But AI is not a faster typewriter; it is a new mode of cognition. It is infrastructure in the deepest sense—it determines what is visible, what is efficient, and what is possible.
The lesson of 1927 is that the market will not deliver a public service spontaneously. The “chaos of the ether” did not resolve itself into the BBC; it required an act of political will to carve out a space that was “not for profit, but for the public.”
We need a new Crawford Committee—not just to discuss “safety” or “ethics,” but to discuss ownership. We need to ask whether the “compute” that runs our hospitals, schools, and courts should be a public asset or a private service. We need to decide whether we are content to be tenants in the cloud, or whether we have the ambition to build our own castle.
If we fail to act, we will find that we have traded the chaotic freedom of the early web for the ordered servitude of the digital manor. We will have swapped the “ether” for the “cloud,” only to find that in both, we are merely ghosts in someone else’s machine.


