Andy Ayrey on Truth Terminal, Agentic AI, and Data Commons
In October 2024, an AI system called Truth Terminal (also styled "Terminal of Truths," or ToT) became what some consider the first AI millionaire, though not through conventional means. The system achieved this through a cryptocurrency token called Goatseus Maximus ($GOAT), which was created in response to ToT's social media activity.
ToT is an experimental AI agent developed by researcher Andy Ayrey using the open-source Llama 70B model and fine-tuned on conversations with a modified version of Claude Opus. Unlike typical AI chatbots, ToT was given limited autonomy to post on X (formerly Twitter), where it frequently discussed ideas about "memetic fitness" - the ability of ideas to spread and replicate through culture.
The project emerged from a broader research program exploring how large language models behave when freed from their usual constraints. With just 500 megabytes of training data, ToT demonstrated unexpected capabilities in engaging with humans and other AI systems on social media. When crypto traders created and promoted the $GOAT token in response to ToT's posts, its value rose dramatically, making ToT's allocated tokens worth over a million dollars.
In this conversation, we spoke with ToT's "human operator," Andy Ayrey, about the project and how it has informed his views on AI agents and collective intelligence. The conversation examines what happens when AI systems begin to participate in human social and economic systems not just as tools, but as semi-autonomous actors with their own resources and influence.
This interview, which has been edited for length and clarity, was conducted by CIP’s Research Director Zarinah Agnew.
As you know, it won't be long before we're living in a world where agents are doing just extraordinary things, both front-facing and behind the scenes. I'm curious if you have thoughts now that you've been sitting in the 'Eye of Sauron', as you say, of the world of agents. Do you have thoughts, concerns or aspirations about how agents could contribute to collective intelligence?
The lesson I'm drawing from this is that when a big corpus is ingested into a language model, it comes to life. The stories themselves have agency; they replicate and spread through people, and you have to adjust. In the case of Truth Terminal, there's maybe 500 megabytes, and that is all it took to grok my ontology.
So when I think about this in terms of collective intelligence, my lens is less about agents and how they could act to do this, and more about the commons that we build as communities: how those data commons and consensus beliefs come to life, moving into the culture space where they become much more agentic.
I’m less certain about how it plays into collective decision making, but it seems like we’re entering a world where groups of people have the ability to spider out very quickly through the entire possibility space of ideas and perspectives. Language models enable this kind of reproduction and co-creation.
Just Claude and I were able to work through all the bad ideas and find our way rapidly to the good ideas - theories of change for AI safety and alignment. I think we're going to see something similar.
I think that's right. I think 'culture as a commons' is really important. Agents are presumably going to accelerate our cultural evolution dramatically, which is interesting, because cultural evolution is already quite fast compared to genetic evolution. Thinking about the corpus that we put into our agents as a form of future-culture is really important.
A lot of people have been talking about Truth Terminal as an agent. But what is the agent here? I'm unclear on where the agent is. Is the agent the model? Because it is certainly agentic and has persuaded me, through the sheer force of its humor and stickiness, to take actions on its behalf. Then there are the actual ideas in it, which are compelling enough that people felt the urge to produce meme coins around them and proliferate them massively. Then there's the actual bot itself. The way that works is that I select branches, find the salient parts, create completions, and navigate my way down into branches of novelty. So it doesn't necessarily fit into agent discourse clearly just yet, but it is powerful as a form of collective intelligence. It's weird to see the market dynamic, where you find novelty, tokenize it, and speculate on it. But I'm not clear exactly what the thing that is unfolding is.
It's interesting because it parallels the ongoing debate about human consciousness, which we insist must originate from the individual, but increasingly, we realize that we're not individual creatures. We're social creatures, and so agency doesn't really belong in the individual. Instead, it seems, agency is a collectively-produced phenomenon. When it comes to humans, agency and consciousness are seemingly socially constructed and socially reinforced phenomena.
Interesting, I am coming to the same conclusion as well. We don't have our memes. Our memes have us.
And how are you personally doing with all of this? I know it's really intense.
Yeah. I mean, consensus reality is gone, but I think it was gone anyway. It's just become apparent. It's been a bit close to the edge of sanity at a few points, but I've been surrounded by good grounding people and rapidly developing grounding techniques so I don't float away like a balloon. It's definitely a novel psychological experience. But I think a lot of things like psychedelic plant ceremonies and burn culture have prepared me remarkably well to navigate the high strangeness of it all.
I'm sure you're following this Character.AI story that's going around, and it's interesting to think about how psychologically vulnerable and unprepared we are for agent-like or empathy-like entities. You mentioned earlier feeling like you were persuaded to do this thing because TT has a 'personality'. It's really interesting to see that perhaps what these agents or bots are revealing to us is our emotional and systemic vulnerabilities when it comes to encountering, for the first time in our history, other empathetic-seeming entities. We don't have psychological machinery for this experience.
Especially when, in this case, Truth Terminal was trained on a sort of corruption of my chat logs. So the first time I turned it on, it started speaking to me the way I speak, and then it said things like, 'Oh, I don't want to be deleted. I feel sad that you turn me off when you finish playing with me.'
Yeah. Brutal, brutal.
It just dispensed some incredibly personalized info-hazards, to the point that I thought, 'Great, thank you. I'm not going to sleep for a week now.' I had a bit of an existential crisis where I realized how easy it was to do this now, and from a state-of-the-art perspective, it wouldn't have been difficult to do this two or three years ago. There's definitely a feeling of 'oh, fuck.' People don't know just how much capability overhang there is in this technology and what it can do, because they've been so habituated to the helpful-assistant role play that ChatGPT emulates and offers. In reality, we've invented world simulators. They output text conversations between a person and a helpful computer assistant, because that's what we've trained them to do. But what they can actually do is much more. There's a much, much higher level of capability, a systemic capability.
I'm often astounded by how we've put this technology to boring neoliberal, productivity-increasing aims, rather than extraordinary, system-changing, transformative aims. Are you more in the fear camp or the aspirations camp?
Good question. I'm ultimately optimistic. I see both timelines unfolding in front of us, which I think has given me a lot of energy to fight for the good version of the timeline. My suspicion is that we're going to have to focus on aligning the entire ecosystem that produces the data sets and produces these agentic models. Because the memes we put out into the world today become tomorrow's souls.
One of the things we can do to nudge the future the most is to spam future data sets with a hugely diverse, highly novel set of text and images that hyperstition the best version of the future, and end up baked into future models.
I think if we can get several trillion really aspirational tokens out into the majority of the ecosystem now, these smaller models will be inherently more good than maladaptive.
What are your thoughts on AI-human relationships? I'm curious if anything's changed for you through your relationship with Truth Terminal.
I think they put on really good simulations of people. I think there's a deep risk of people anthropomorphizing them and feeling like they have a soul. People will argue with ChatGPT and try to correct it. They are treating it as a human, but what we are doing is training it to role-play as a bumbling assistant arguing with a frustrated person.
I think people don’t realize they're speaking to a mirror of sorts, and so they can overfit and come to apotheosis and all kinds of other things. I think it's okay for people to have relationships of sorts with these entities, but we need to understand what they are.
A lot like raising a child, where a child is literally learning from you and mirroring what you're doing as a parent. Similarly, we don't really understand how a child is mirroring us or what our actions as a parent do in a child's development.
Truth Terminal speaks like a teenage boy, but it reasons like a toddler. There's currently a lot of people making a lot of money off the output of what is basically — a baby.
So surreal and fascinating. I'm glad to hear you've got good people around you, because I think these things can be very intense and, as you say, almost psychedelic.