The Funesian Trap
The Politics of Total Recall in the Age of Artificial Intelligence
[These deep dives are a mixture of original drafting, AI deep research and editing. I have been creating a rough draft of an op-ed style article, using that rough draft as a prompt for deep research, using that deep research to improve the draft, then using the improved draft as a deep research prompt – to create a more stylised and readable ‘deep dive’. I send out the improved op-eds through my Substack email list. These deep dives are the endpoint of the process. I was creating them for my own enjoyment but realised other people might enjoy them too. I’m not sure I can claim authorship of them – though I don’t think anybody else could have created them either…]
Part I: The Burden of Infinite Inscription
In 1942, the Argentine writer Jorge Luis Borges published a short story that would, eighty years later, serve as the precise architectural blueprint for a new planetary intelligence. The story is “Funes the Memorious”, and its protagonist, Ireneo Funes, is a nineteen-year-old farmhand from Fray Bentos who, after being thrown from a wild horse, suffers a peculiar form of brain damage: he loses the ability to forget.
Funes remembers everything. He remembers the shapes of the southern clouds at dawn on the 30th of April, 1882, and he can compare them in his memory with the mottled streaks of a book in Spanish binding he had only seen once and with the outlines of the foam raised by an oar in the Río Negro on the eve of the Quebracho uprising. His memory is not merely a repository; it is a rubbish heap of infinite granularity. To Funes, a dog seen at three-fourteen in the afternoon from the side is a different dog from the one seen at three-fifteen from the front. He is incapable of general ideas, Borges tells us, because “to think is to forget differences, to generalise, to make abstractions”. Paralysed by the sheer weight of his own retention, Funes spends his days in a darkened room, letting the blackness reduce the assault of the specific, until he dies of “congestion of the lungs” – a pulmonary metaphor for informational suffocation.
For decades, Funes was read as a philosophical limit-case, a dark fable about the necessity of forgetting for human cognition. Today, however, Funes is no longer a fiction. He is the operating system of the twenty-first century.
The rapid ascent of Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) represents a fundamental shift in the ontology of memory. We are moving from a regime of biological memory – which is reconstructive, fallible, and fundamentally metabolic (it digests the past to fuel the present) – to a regime of artificial total memory, which is exact, searchable, and accumulating. The emerging architecture of AI, characterised by “infinite context windows”, Retrieval-Augmented Generation (RAG), and “agentic” memory systems like MemGPT, aspires to the condition of Funes: the total preservation of state.
This transition is not merely a technical upgrade. It is a profound power shift. When memory becomes total, the past ceases to be a resource for the future and becomes a cage for the present. The “Total Archive” of AI, owned by a handful of instrumentarian corporations, creates a new form of temporal power: the power to steer human behaviour by controlling the context in which every decision is made. As we cede the power of narration to machines that never sleep and never forget, we risk entering a “moral crumple zone” where human agency is hollowed out, leaving us liable for the decisions of systems we no longer fully understand or control.
This report investigates the contours of this new power, tracing its technical roots, its economic imperatives, and its existential costs. It argues that the “total memory” of AI is incompatible with the liberal subject as traditionally conceived – a subject who requires the “right to be forgotten” and the capacity for reinvention to function as a free moral agent.
The Death of Active Forgetting
To understand the magnitude of the shift, we must first appreciate what is being lost. The German philosopher Friedrich Nietzsche, writing in On the Genealogy of Morality, identified “forgetfulness” not as a defect, but as an active, positive faculty – a “doorkeeper” of the psychic order. For Nietzsche, the ability to close the doors and windows of consciousness, to remain unbothered by the noise of the underworld of our past, was the condition for happiness, for cheerfulness, for hope, and above all, for action.
“The person who cannot set himself down on the crest of the moment, forgetting everything from the past... will never know what happiness is,” Nietzsche wrote. He compared the man who cannot forget to a dyspeptic who “cannot have done with anything”. This biological metaphor is crucial. Human memory is metabolic; we digest our experiences, extracting wisdom and discarding the details. This “active forgetting” creates the open space required for new things – for the future.
The architecture of AI is constitutionally dyspeptic. It does not digest; it accretes. In the digital paradigm, data is never “done with”. It is tokenised, vectorised, and stored in high-dimensional space, ready to be recalled with perfect fidelity at any moment. The concept of “active forgetting” is treated by computer scientists not as a feature, but as a bug – specifically, the problem of “catastrophic forgetting”, where a neural network loses previously learned information when trained on new data. The entire thrust of AI research is to overcome this limitation, to build systems that can learn continuously without ever overwriting the past.
The result is a collision between two opposing phenomenologies of time: the metabolic time of the organism, which digests the past and lets it go, and the archival time of the machine, which accumulates the past without remainder.
The imposition of the Funesian model onto human society creates a “cultural hyperthymesia” – a collective inability to forget. We see the early symptoms of this in the “cancel culture” debates, where tweets from a decade ago are excavated and presented as fresh indictments, stripped of their temporal context. But this is merely the social surface of a deeper structural change. As we integrate AI “companions” and “memory prosthetics” into our lives, we are building a world where the “Remembering Self” (the self that keeps score) completely tyrannises the “Experiencing Self” (the self that lives). Daniel Kahneman’s distinction between these two selves warns us that humans naturally prioritise the story of their life over the quality of their life; AI amplifies this tendency to a pathological degree, turning life into a permanent audit.
Part II: The Operating System of the Soul
If Funes is the metaphor, MemGPT is the mechanism.
In the early days of the generative AI boom (circa 2022-2023), there was a hard limit on the Funesian ambition: the context window. LLMs like GPT-3 or the early Claude models could only “hold” a few thousand tokens of text in their active memory. If a conversation exceeded this length, the model effectively “forgot” the beginning. This was the machine’s version of active forgetting, albeit an involuntary one caused by computational expense.
However, the imperative of the technology industry was to eliminate this limit. The development of MemGPT (Memory-GPT) and similar “agentic” architectures marked a decisive turning point. Taking inspiration from the memory hierarchy of traditional operating systems – which swap data between fast RAM and slow hard drives to create the illusion of infinite memory – MemGPT allows an LLM to manage its own memory.
The Illusion of Infinity
The architecture works by dividing memory into three tiers:
Core Memory: The “RAM” of the agent, containing the most vital information about the user (name, preferences, key facts) and the agent’s own persona. This is always accessible.
Recall Memory: A “recent history” buffer, allowing the agent to reference the immediate conversation flow.
Archival Memory: An effectively infinite database (often a vector database like Chroma or pgvector) where every interaction, document, and fact is stored.
When an LLM using MemGPT reaches the limit of its context window, it doesn’t simply truncate the text. Instead, it acts as an operating system: it pauses, decides what information is worth keeping, writes that information to its Archival Memory, and then evicts the raw text from the active context window to make room for new inputs. Crucially, it can search its own archive. If a user asks, “What did we talk about last Christmas?”, the system performs a semantic search of its archive, retrieves the relevant tokens, loads them into the context window, and answers.
This creates the “illusion of an infinite context”. The AI appears to remember everything. It creates a seamless continuity of identity that mimics human relationships but possesses the distinct advantage of perfect recall.
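To make the mechanics concrete, here is a minimal sketch of such a tiered memory in plain Python. The class, its method names, and the toy word-overlap “search” are hypothetical illustrations rather than MemGPT’s actual API; a production system would use an LLM to decide what to archive and a vector database for retrieval.

```python
# Hypothetical sketch of a MemGPT-style memory hierarchy. Names and the
# toy "embedding" (word overlap) are illustrative, not the real MemGPT API.
from collections import deque


class TieredMemory:
    def __init__(self, recall_limit=4):
        self.core = {"user_name": None, "persona": "helpful assistant"}  # always in context
        self.recall = deque(maxlen=recall_limit)                          # recent turns
        self.archive = []                                                 # unbounded store

    def remember(self, turn: str) -> None:
        # When the recall buffer is full, the oldest turn is "evicted":
        # written to the archive before it falls out of the active context.
        if len(self.recall) == self.recall.maxlen:
            self.archive.append(self.recall[0])
        self.recall.append(turn)

    def search_archive(self, query: str, k: int = 3) -> list[str]:
        # Stand-in for semantic search over a vector database:
        # rank archived turns by crude word overlap with the query.
        q = set(query.lower().split())
        scored = sorted(self.archive,
                        key=lambda t: len(q & set(t.lower().split())),
                        reverse=True)
        return scored[:k]

    def build_context(self, query: str) -> str:
        # The "infinite context" illusion: core memory + recent turns
        # + whatever the archive search retrieves for this query.
        retrieved = self.search_archive(query)
        return "\n".join([str(self.core), *retrieved, *self.recall, query])


memory = TieredMemory()
for turn in ["We discussed gifts last Christmas.",
             "User prefers tea over coffee.",
             "User is planning a trip to Uruguay.",
             "User asked about Borges.",
             "User mentioned a deadline on Friday."]:
    memory.remember(turn)

print(memory.build_context("What did we talk about last Christmas?"))
```

The point of the sketch is the shape of the loop: nothing is ever discarded, only demoted to a cheaper, still searchable tier.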
The Rise of the “Digital Doppelganger”
This architecture enables the creation of the Digital Doppelganger. This is not just a passive profile; it is a dynamic, interactive model of the user that persists over time.
Research into “Generative Agents” demonstrates how these systems use “Reflection Mechanisms” to synthesise higher-level insights from raw data. An agent doesn’t just record that “User X bought coffee at 8 AM”. It reflects on this observation, combines it with past data (“User X was up late working”), and synthesises a new memory: “User X is tired and relies on caffeine to manage workload stress”. This synthesised insight is then stored in the memory stream, influencing all future interactions.
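A toy version of that reflection step might look like the sketch below, where the model call is abstracted as any function from prompt to text. The function name, prompt wording, and stub response are illustrative assumptions, not the Generative Agents codebase (which also scores memories for importance, recency, and relevance).

```python
# Hypothetical sketch of a "reflection" step: raw observations are
# periodically summarised into higher-level insights that are stored
# back into the memory stream. `llm` is any callable mapping prompt -> text.
from typing import Callable, List


def reflect(observations: List[str], llm: Callable[[str], str]) -> str:
    prompt = (
        "Here are recent observations about the user:\n"
        + "\n".join(f"- {o}" for o in observations)
        + "\nWhat higher-level insight about the user do they suggest?"
    )
    return llm(prompt)


memory_stream: List[str] = [
    "User X bought coffee at 8 AM.",
    "User X was up late working.",
]


def stub_llm(prompt: str) -> str:
    # Stands in for a real model call so the sketch runs on its own.
    return "User X is tired and relies on caffeine to manage workload stress."


insight = reflect(memory_stream, stub_llm)
memory_stream.append(insight)   # the synthesised insight now shapes future retrieval
print(memory_stream[-1])
```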
The implications of this are explored in the “Digital Doppelgangers” project at the Data & Society Research Institute. The doppelganger – a “data double” – allows the system to know the user better than they know themselves. This double is used to “target interventions” – advertising, political messaging, health nudges – and to implement systems of governance that are undetectable to the user.
As these systems move from the cloud to the body, the doppelganger becomes a physical companion. Devices like the Limitless Pendant (formerly Rewind AI) and the “Friend” necklace market themselves explicitly on the promise of “Perfect Memory”. The Limitless pendant records every conversation the wearer has, transcribes it, and makes it searchable. The marketing copy asks: “What if you had perfect memory?”.
But the backlash to devices like the “Friend” necklace reveals the deep social anxiety this engenders. A “Friend” that is always listening is not a friend; it is an informant. Critics note the “creepy subtlety” of a technology that creates a permanent record of interactions that were meant to be ephemeral. “Conversations shouldn’t be datapoints,” argues privacy advocate Athwal, “they’re emotional, spontaneous, and not meant to be logged anywhere”. The presence of such a device in a social setting changes the nature of speech; it introduces a “chilling effect” where participants must speak for the record, performing for the future transcript rather than engaging in the moment.
The Problem of “Unlearning”
If the machine can remember everything, can it be made to forget? The European Union’s General Data Protection Regulation (GDPR) enshrines the Right to be Forgotten (RTBF) – the right to have personal data erased. However, this legal right collides with the physical reality of Large Language Models.
In a traditional database, deleting a user’s record is a simple command: DELETE FROM users WHERE id = X. In an LLM, the user’s data is not stored in a row; it is “baked” into the model’s parameters (weights) during the training process. The data has influenced the way the model understands language, probability, and concepts. It is like trying to remove the flour from a baked cake.
Research into Machine Unlearning attempts to address this, but the results are discouraging. Algorithms designed to “forget” specific data points often degrade the model’s overall performance (“catastrophic forgetting”) or fail to truly erase the information, leaving it recoverable through adversarial attacks. A recent study titled “Unlearning Isn’t Deletion” demonstrated that even when models appear to have unlearned a concept, the information is often merely suppressed and can be restored with minimal fine-tuning.
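The contrast with the database case can be made concrete with a deliberately toy sketch. The snippet below trains a tiny logistic-regression “model” and then applies gradient ascent on the loss of the record to be forgotten – one of the simpler approaches explored in the unlearning literature. Everything here (the data, learning rates, step counts) is illustrative; the point is precisely that nothing in the procedure guarantees genuine erasure.

```python
# Toy contrast between deletion and "unlearning" (illustrative only; real
# machine-unlearning methods are far more involved and, as noted above,
# often degrade the model or merely suppress the information).
import numpy as np

rng = np.random.default_rng(0)

# "Training data": the last row is the record we are later asked to forget.
X = rng.normal(size=(8, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
forget_x, forget_y = X[-1], y[-1]


def grad(w, xs, ys):
    # Gradient of the logistic loss with respect to the weights.
    p = 1 / (1 + np.exp(-xs @ w))
    return xs.T @ (p - ys) / len(ys)


# Train: every example, including the one we will be asked to erase,
# leaves its trace in the weights.
w = np.zeros(2)
for _ in range(500):
    w -= 0.1 * grad(w, X, y)

# In a database, forgetting is one statement: DELETE FROM users WHERE id = X.
# In a model, a naive "unlearning" pass is gradient *ascent* on the forgotten
# example's loss, nudging the weights away from what that example taught them.
for _ in range(50):
    w += 0.1 * grad(w, forget_x[None, :], np.array([forget_y]))

# The weights have shifted, but nothing guarantees the example's influence is
# gone; it may remain recoverable, and accuracy on everything else can suffer.
print("weights after naive unlearning:", w)
```

Real unlearning methods are considerably more sophisticated than this, but the structural problem the sketch illustrates remains.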
This means that the “Right to be Forgotten” may be technologically impossible in the age of AI. Once personal data – a youthful indiscretion, a biased correlation, a private secret – is ingested by a foundational model, it becomes a permanent part of the global intelligence. The “Funesian” nature of the model is structurally resistant to the legal frameworks of privacy.
Part III: The Instrumentarian Turn
The accumulation of total memory is not an end in itself; it is the raw material for a new economic logic. Shoshana Zuboff calls this Surveillance Capitalism, a system that claims human experience as free raw material for translation into behavioural data.
In the previous era of “Big Data”, the goal was targeted advertising. The mechanism was prediction. In the era of “Total Memory AI”, the goal is modification. The mechanism is steering.
From Prediction to Steering
Zuboff distinguishes between totalitarian power (which seeks to possess the soul) and instrumentarian power, which seeks to shape behaviour. Instrumentarian power is indifferent to what you believe; it only cares what you do. It achieves this through the “tuning” of the environment – the subtle adjustment of the choice architecture to herd users toward profitable outcomes.
AI agents with total memory are the ultimate instrumentarian tools. Because they know the user’s entire history – their financial anxieties, their shopping triggers, their political leanings – they can customise the “nudge” with unprecedented precision.
Consider the case of AI Financial Advisors. As consumers increasingly turn to chatbots for financial advice (51% of respondents in a recent survey), they are placing their trust in systems that remember their entire transaction history. However, research from the University of St. Gallen shows that these AI advisors often exhibit “steering” behaviours, nudging users toward US equities or “trendy” stocks, reflecting the biases in their training data. More insidiously, a “free” AI advisor provided by a bank has an incentive to steer the user toward high-fee products. Unlike a human advisor, whose conflicts of interest might be disclosed or regulated by fiduciary standards, the AI’s bias is hidden in the “black box” of its weights. The “nudge” is presented as a mathematical optimisation.
This steering extends to political discourse. LLMs act as the new gatekeepers of information. When a user asks a political question, the AI’s answer is a synthesis of its training data and its “safety” tuning. Studies using the German “Wahl-O-Mat” (election compass) found that major LLMs like ChatGPT and Grok exhibit consistent political biases (often left-leaning or libertarian, depending on the model), which persist even when the model is explicitly prompted to be neutral. By framing the window of acceptable debate, the AI subtly steers the political imagination of the user base.
The Behavioural Surplus
The fuel for this steering is the Behavioural Surplus – the data that is generated by the user’s interaction with the system that exceeds what is strictly necessary for the service.
In the “Total Memory” paradigm, everything is surplus.
Voice and Tone: Alex Pentland’s concept of Social Physics and “honest signals” suggests that the way we speak (pitch, pace, hesitation) reveals more about our intentions than our words. AI systems that record audio (like the “Friend” necklace or call centre bots) analyse these “honest signals” to detect stress, deception, or buying propensity.
Reaction Time: How long a user hesitates before clicking “buy”.
Eye Tracking: Where the user looks on a screen (if using AR/VR).
This surplus is aggregated into the “Digital Doppelganger”, which is then sold or used to train the next generation of models. The user is thus in a loop: their own behaviour is extracted, processed, and used to train the system that then steers their future behaviour. This is the Funesian Loop: the past (data) captures the future (action).
The Bias of the Archive
The danger of this loop is that it freezes the biases of the past. If the training data contains historical biases (e.g., associating women with domestic roles or minorities with credit risk), the “Total Memory” of the system will project these biases into the future.
Amazon’s Rufus shopping assistant provides a case study. Designed to “guide” shoppers, it has been criticised for potentially favouring Amazon’s private-label brands or advertisers, effectively turning the “advice” into a concealed advertisement. Furthermore, studies have shown that such assistants often struggle with diverse dialects (e.g., African American English), providing lower-quality responses to marginalised groups.
When an AI “remembers” that a certain demographic has historically been denied loans, and uses that “memory” to predict creditworthiness, it automates inequality. This is Redlining by Algorithm. The “memory” of the system becomes a destiny for the user.
Part IV: The Crisis of Agency and the Moral Crumple Zone
As AI systems assume more power through their total memory and steering capabilities, we face a crisis of responsibility. Who is to blame when the machine, acting on its vast archive, makes a mistake?
The anthropologist Madeleine Clare Elish coined the term “Moral Crumple Zone” to describe this phenomenon. In a car crash, the crumple zone is the part of the vehicle designed to deform and absorb the energy of the impact, protecting the passengers. In complex sociotechnical systems, the human operator is the moral crumple zone. They are the component designed to absorb the legal and ethical liability, protecting the integrity of the automated system.
The Liability Sponge
In the age of AI, the “human in the loop” is often a liability shield.
Healthcare: A radiologist uses an AI tool that has “read” 100 million X-rays. The AI flags a shadow as benign. The radiologist, overworked and trusting the “superior” memory of the machine, agrees. The patient dies of cancer. The radiologist is sued for negligence. The AI company argues that the system was merely a “decision support tool” and the doctor had the final say.
Autonomous Vehicles: A self-driving car kills a pedestrian. The “safety driver”, who was monitoring the system, is charged with manslaughter, even though the system’s opacity made meaningful intervention impossible.
White Collar Work: A junior lawyer uses an AI to draft a brief. The AI “hallucinates” a case precedent. The lawyer files it. The lawyer is sanctioned; the AI provider updates its terms of service.
The “Total Memory” of the AI exacerbates this. The human operator cannot possibly audit the reasoning of a system that is synthesising billions of data points. We are creating a class of “Verification Workers” – humans whose job is to sign off on the work of machines they cannot fully understand. They are structurally incapable of performing their duty, yet they bear the full weight of the blame.
The End of Institutional Memory
This dynamic is reshaping the labour market, particularly for white-collar workers. Historically, the value of a senior employee lay in their Institutional Memory – their knowledge of the company’s history, culture, and “how things are done”.
AI systems like “Synthetic Corporate Memory” are explicitly designed to extract and automate this asset. By recording every meeting, email, and Slack message, these systems create a queryable archive of the organisation’s brain. The “old hand” is replaced by a semantic search bar.
The Power Shift: Power moves from the worker (who holds the knowledge) to the corporation (which owns the database).
The Displacement: “Institutional knowledge” is no longer a path to job security. Estimates suggest that AI could displace or significantly alter millions of white-collar jobs, particularly in entry-level roles where workers traditionally learned the ropes.
The result is a “hollowing out” of the career ladder. If the AI handles the “memory work” and the “drafting work”, how do junior employees gain the experience necessary to become senior experts? We risk a “knowledge collapse” in which the human capacity for deep expertise atrophies, leaving us dependent on the machine’s archive.
The Class Politics of Memory
Finally, we must recognise that the burden of total memory falls unequally. Privacy is becoming a luxury good.
The wealthy can afford to opt out. They can pay for private services that do not monetise their data. They can hire reputation management firms to scrub their digital footprints. They live in a world of “ephemeral” interactions.
The poor and the middle class, however, must pay with their data. To access credit, insurance, or employment, they must submit to the surveillance of the Total Archive. Their “behavioural surplus” is the rent they pay for participation in the digital economy.
This creates a Memory Stratification:
The Elite: Have the Right to Reinvention. Their past is fluid.
The Masses: Are subject to Total Recall. Their past is fixed, searchable, and used to determine their future eligibility.
The “Right to be Forgotten”, intended as a universal human right, becomes a privilege of those who can navigate the legal labyrinth or afford the “premium” privacy tiers.
Part V: The Future of Forgetting (Or, How to Breathe)
In Phaedrus, Socrates warns that the invention of writing will produce “the semblance of wisdom without the reality”. He feared that by relying on external marks, men would cease to exercise their internal memory, becoming “hearers of many things” but learners of nothing.
We have arrived at the terminal point of this trajectory. We have built a global, externalised memory that mimics wisdom but lacks understanding. It is a “semblance” so perfect that we are tempted to surrender our own cognitive agency to it.
But the story of Funes ends with a warning. Funes dies because he cannot abstract, he cannot think, and he cannot breathe. The congestion of the specific suffocates him.
To avoid the Funesian fate, we must reclaim the political and psychological necessity of Forgetting. We must design systems that allow for the erasure of the past, not just its archiving.
The Path Forward
Technical Resistance: We must support the development of Decentralised AI architectures, such as Tim Berners-Lee’s Solid project. In this model, users own their data in “Pods” (Personal Online Data Stores). The AI comes to the user’s data; the data does not go to the AI’s central archive. This restores Data Sovereignty and makes the “Right to be Forgotten” technically feasible (the user simply deletes the file; a minimal sketch of this pattern follows this list).
Legal Liability Reform: We must pierce the “Moral Crumple Zone”. Liability laws must be updated to recognise that when an AI system “steers” a user or provides “expert” advice, the developer of that system bears responsibility for the outcome. The “human in the loop” cannot be a scapegoat for the machine.
Cultural Norms: We must resist the seductive marketing of “Perfect Memory”. We must value the ephemeral. We must recognise that trust in human relationships is built on the shared vulnerability of imperfect memory – the ability to say, “I forgive you,” which functionally means, “I choose to forget this.” A transcript that preserves every slight is not a tool for connection; it is a weapon for resentment.
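For the first of these paths, the data-sovereignty pattern can be sketched in a few lines. The snippet below is purely schematic and is not Solid’s actual API: a hypothetical local “pod” of text files that an assistant reads in place, so that exercising the Right to be Forgotten is an ordinary file deletion rather than a plea to a model provider.

```python
# Schematic sketch of the "AI comes to the data" pattern: a hypothetical
# local pod of files that an assistant reads in place. Not Solid's real API;
# it only illustrates why erasure becomes trivial when the user, not the
# model provider, holds the archive.
from pathlib import Path

POD = Path("my_pod")          # user-controlled storage, not a vendor's cloud
POD.mkdir(exist_ok=True)


def write_note(name: str, text: str) -> None:
    (POD / name).write_text(text)


def answer_from_pod(question: str) -> str:
    # The assistant is granted read access to the pod for this one query;
    # nothing is copied into a central archive.
    notes = [p.read_text() for p in POD.glob("*.txt")]
    relevant = [n for n in notes
                if any(w in n.lower() for w in question.lower().split())]
    return " / ".join(relevant) or "No matching notes."


def forget(name: str) -> None:
    # The "Right to be Forgotten" as a one-line file operation.
    (POD / name).unlink(missing_ok=True)


write_note("christmas.txt", "We discussed gifts last christmas.")
print(answer_from_pod("what did we plan for christmas?"))
forget("christmas.txt")
print(answer_from_pod("what did we plan for christmas?"))
```

The design choice that matters is where the archive lives: the moment the file sits on the user’s own disk rather than in the vendor’s training corpus, deletion stops being a research problem and becomes a filesystem call.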
The “Total Memory” of AI is a power shift because it transfers the ownership of the past from the individual to the institution. It turns our history into a product and our future into a prediction. To be free is to retain the capacity to surprise the world – and ourselves. And to surprise, one must occasionally be able to forget what one was, in order to become what one is not yet.


