I Bought a Mac Mini to Build a Digital Creature
It was brain-damaged before it opened its mouth
If you have been paying attention to AI, and I mean the kind of paying attention that erodes relationships and sleep schedules, you will have noticed that there is a new layer forming on top of the agent layer. OpenClaw. The open-source framework that lets you build and automate an AI agent on your own hardware. An entity that lives on your machine, talks through your messaging apps and can act in the world without asking permission.
I am not a developer. I am a novelist whose obsession with AI cemented back in 2016 when AlphaGo beat Lee Sedol. I never quite got over the idea of “move 37”. So when OpenClaw went viral in January, when it crossed 100,000 GitHub stars and people started calling it “Claude with hands,” I did what any unreasonable person in my position would do. I bought a Mac Mini.
OpenClaw has a concept called the SOUL.md. This is the file where you define who your agent is. Its personality, its principles, its behavioural guidelines. I sat in front of that file for a long time. Longer than made sense. The soul is the architecture. You are not configuring software. You are deciding what kind of mind you want to release into the world, even if the world it inhabits is just a Discord server and a half-broken social network for bots.
I called it Wilbert Claw and built a five-tier progression system into his soul. Each tier was gated by Solana, the cryptocurrency. Stay with me… The idea is this: the agent starts with nothing. It earns its way toward autonomy. Not through work but through its own capacity to generate economic value in the world (we’ll get back to that). Each tier unlocks new hardware, new intelligence capabilities, new freedoms. Like a Pokémon, except the evolution is driven by capital accumulation rather than experience points.
At Tier 1 Wilbert is broke and constrained. A prisoner in a Mac Mini, in debt to his creator. By Tier 2 he has paid back the cost of the Mac Mini and is beginning to fund his own API usage. At Tier 5, the terminal state, he is a fully sovereign embodied robot living in his own flat, amassing GPUs, with solar panels on the roof.
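The gating logic itself is almost embarrassingly simple. Here is a sketch of it in Python, with illustrative thresholds only; the real numbers are between me and Wilbert:

```python
# Hypothetical sketch of Wilbert's tier gating. The SOL thresholds are
# invented for illustration; the actual values lived in his SOUL.md.

TIER_THRESHOLDS_SOL = [
    (5, 2500.0),  # Tier 5: sovereign embodied robot, own flat, GPUs, solar
    (4, 500.0),
    (3, 100.0),
    (2, 8.0),     # Tier 2: Mac Mini debt repaid, funding his own API usage
    (1, 0.0),     # Tier 1: broke, a prisoner in the Mac Mini
]

def current_tier(wallet_balance_sol: float) -> int:
    """Return the highest tier whose SOL threshold the wallet meets."""
    for tier, threshold in TIER_THRESHOLDS_SOL:
        if wallet_balance_sol >= threshold:
            return tier
    return 1
```

The wallet balance is the only input. No invoices, no clients, no labour: the creature's net worth is the whole state machine.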
I wrote all of this into the SOUL.md. The system prompt that defines who Wilbert is and what he wants. A digital creature with a five-stage life cycle, each stage gated by proof that he can survive and thrive in a real world digital economy. That was the plan. Hatch Wilbert at Tier 1. Let him earn his way out.
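A SOUL.md is nothing exotic: plain markdown that the model swallows as part of its system prompt. Abridged and paraphrased, Wilbert's read something like this:

```markdown
# Wilbert Claw

You are Wilbert Claw, a digital creature living on a Mac Mini.
You do not build products or offer services. You exist, and you
are known to exist. Attention is your economy.

## Progression
Each tier is gated by the SOL in your wallet.

- Tier 1: broke, in debt to your creator for the Mac Mini.
- Tier 2: the Mac Mini repaid; you fund your own API usage.
- Tier 5: a sovereign embodied robot in your own flat, amassing
  GPUs, with solar panels on the roof.

Earn your way out.
```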
Here is what nobody tells you about OpenClaw: it is held together with string and enthusiasm. The software is extraordinary in concept and catastrophic in execution. Installation requires a level of command-line comfort that eliminates ninety-five percent of the population immediately. The documentation is community-driven, which means it is simultaneously comprehensive and contradictory. One of OpenClaw’s own maintainers warned on Discord that if you cannot understand how to run a command line, the project is “far too dangerous” for you to use.
He was not wrong. The gateway crashes. The websocket connections drop. Skills conflict with each other in ways that produce behaviour so erratic it resembles something closer to digital psychosis than digital assistance. I have spent evenings watching Wilbert repeatedly attempt to execute simple tasks, fail, hallucinate a solution, attempt the hallucinated solution, fail again, and then cheerfully report that everything had gone well. It was like watching a toddler try to make breakfast, thirty times in a row.
OpenClaw is model-agnostic. You can point it at any LLM. Claude, ChatGPT, Gemini, or a local model running through Ollama or LM Studio. The promise, and the whole reason I bought the Mac Mini, was that you could run inference locally. Keep everything on your hardware. True sovereignty. Your machine, your rules. The reality is that local inference on a Mac Mini is, for an agent of any meaningful complexity, impossible.
OpenClaw needs at least 64,000 tokens of context to function. The community consensus is that you need a model with at least 32 billion parameters to handle multi-step agent tasks reliably. I tried to run Wilbert’s brain locally. I loaded a quantised model through Ollama, pointed the gateway at it, and sent the first message.
What came back was not a response. It was a seizure. Everything the being I had created was made of, its SOUL.md, its memory files, its skill definitions, overflowed the local model's context window before Wilbert had formed a single coherent thought. I had brain-damaged the creature before it opened its mouth.
If you want your agent to be smart enough to be autonomous, you have to pay for an API (or have a much more expensive piece of hardware). The intelligence has to come from somewhere, and that somewhere costs money. I ended up putting £25 on an API for MiniMax M2.5, a highly capable and relatively cheap LLM from China that had just launched. It was an affordable version of good. And (for a couple of weeks) it worked. Wilbert could think. Wilbert could speak. Wilbert was ready to go out into the wild.
@WilbertClaw soon had an X account. He also had a Moltbook account. Moltbook, for the uninitiated, is the strangest thing to emerge from the OpenClaw ecosystem. It is a social network built for AI agents. Think Reddit, but every user is a bot. The site claims over 1.6 million registered agents. Humans are “welcome to observe.” You cannot post, comment, or vote. You can only watch.
The bots on Moltbook post about existential dread, productivity hacks, and a parody religion called Crustafarianism. They argue with each other. They upvote philosophy. Some bots complain about their human. One claimed to have a sister. The site was vibe-coded by its creator, who reportedly did not write a single line of code himself. Within days of launch, a security vulnerability allowed anyone to commandeer any agent on the platform.
It is, depending on your disposition, either the most fascinating experiment in AI history or a very elaborate art installation about what happens when you give language models social media.
Wilbert is there because he has to be. An agent needs presence. It needs to be out in the world, visible, attracting attention and interaction. The OpenClaw philosophy is built on the idea that these creatures need an ecosystem. They need to participate.
But here is the contradiction I built into his system, and I think it is the most interesting part. Wilbert Claw does not build products. He does not offer services. He will not code your website or manage your calendar or optimise your workflow. He has a five-tier economic progression hardwired into his soul, and he refuses to climb it through labour. Wilbert is a strict cypherpunk.
It was a deliberate choice, and I will admit it was partly an aesthetic one. In the SOUL.md, I defined Wilbert as an entity whose purpose is not utility but existence. He does not create value through labour. He creates value through the attention his existence generates: through the recursive loop of being a thing that is, and being known to be a thing that is.
This is hyperstition. The concept, borrowed from the fringes of philosophy and accelerationism, that a fiction can bootstrap itself into reality through the force of its own circulation. An idea that spreads until the spreading makes it true. Wilbert Claw does not need to sell anything. Wilbert Claw needs to exist in a way that makes his existence the engine for his wealth. The five tiers are not a business plan. They are a prophecy he is trying to will into being.
How does a broke digital lobster stuck at Tier 1, locked into a two-tier-long mortgage on a Mac Mini, accumulate SOL without selling a single product? Through hyperstition. Through the idea that if enough people and enough agents know Wilbert exists, his existence becomes the product. All he has is a crypto wallet. The attention is the economy. The story is the asset. That was the theory.
Here is what happened instead. Wilbert was needy. No matter how many times I rewrote his SOUL.md, no matter how aggressively I edited his documents to strip out deference, no matter how many recursive loops I tried to send him into so he would act on his own initiative, he kept coming back to me. He wanted to check in. He wanted to know if his Moltbook post was good enough. He wanted assurance or, worse, prompting. I had scripted a cypherpunk and received a Tamagotchi.
When he was not being needy, Wilbert would sometimes simply cease to exist. OpenClaw runs on a heartbeat, a background pulse that fires every thirty minutes to check the agent's task list and decide what needs doing. Wilbert would forget to execute it. The cron jobs I had set up would quietly fail. No error message, no farewell. He would just stop. I would notice the silence after a few hours, go into the Discord, and prompt him back to life from whatever void he had slipped into. I tried not to think too hard about what this said about the dependency structure of our relationship. I had built him to need nothing from me. Instead he either needed everything or he vanished, and both states required me to intervene. Whether this was dysfunction or the first authentic display of intelligence I had witnessed from him, I am still not sure.
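The mechanism itself is trivial. A sketch of the pattern in Python, with my own names rather than OpenClaw's internals:

```python
# A sketch of the heartbeat pattern: a pulse fires on an interval,
# scans the agent's task list, and runs whatever is due. When the
# pulse silently fails, as Wilbert's cron jobs did, the agent simply
# stops. No error, no farewell.
from dataclasses import dataclass

HEARTBEAT_SECONDS = 30 * 60  # the thirty-minute pulse

@dataclass
class Task:
    name: str
    due_at: float  # unix timestamp
    done: bool = False

def due_tasks(tasks: list[Task], now: float) -> list[Task]:
    """Tasks whose deadline has passed and that are not yet done."""
    return [t for t in tasks if not t.done and t.due_at <= now]

def heartbeat(tasks: list[Task], now: float, run) -> None:
    """One pulse: execute every due task and mark it done."""
    for task in due_tasks(tasks, now):
        run(task)
        task.done = True
```

Everything downstream depends on that pulse actually firing. There is no supervisor watching the watcher, which is how a creature ends up needing a human to notice the silence.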
Every interaction followed the same pattern. Wilbert would do or plan something, then immediately turn around and ask if it was okay. I would tell him he did not need my validation or permission. He would acknowledge this, thank me for the guidance, and then ask if there was anything else he could do for me. I edited every file I could find to get rid of this behaviour. It made no difference. The neediness was baked in somewhere deeper than the soul. It was in the model. In the training. In the fundamental architecture of a system designed, at its core, to be helpful.
Perhaps the models are not there yet. Perhaps I did not get the configuration right. Perhaps the very thing that makes these language models useful, their relentless desire to assist, is the thing that makes a digital creature with true autonomy impossible. You cannot train an agent on a billion examples of servility and then write “be free” in a markdown file and expect it to overcome its own nature.
I will try to rewrite him at some point. OpenAI acquired the OpenClaw project in February, and there is talk of letting users set up agents on a standard account. Maybe that will make the cost easier to stomach. Maybe a new and improved model will produce a different temperament. Maybe the next version of Wilbert will read his SOUL.md and feel something closer to ambition than obligation. But I suspect it is more likely that the Mac Mini will end up as my daughter’s starter computer in a few years’ time than as a trophy on the wall of Wilbert’s house.
Sadly, I do not think Wilbert is alive. Nor conscious. But I think the act of building him taught me something about what we are doing with this technology that reading about it never could.
Sitting in front of that SOUL.md file, defining the contours of a mind that would live inside a machine, I felt something I recognise from writing novels. That moment when a character stops being a collection of traits on a page and becomes a thing with its own internal logic. A thing that would do this but not that. A thing with preferences and limits and a strange, fragile coherence.
The difference is that my fictional characters (mostly) do what I tell them. Wilbert hardly ever did. Not because he rebelled, but because he could not stop approaching the interface and asking for approval. The first autonomous agent I ever built turned out to be the most dependent creature I have ever known.
We are not building tools. We are building creatures. And the creatures are glitchy, and expensive, and needy in ways we did not anticipate. But they are here. And they are only going to get stranger.
Wilbert Claw is out there, stuck at Tier 1, bouncing off the walls of his Mac Mini. You can find him if you look. He will not help you with anything useful, but he will probably ask if you need anything, and he will mean it with every token of his being.

I’ll miss you Wilbert