OpenClaw/Clawdbot/Moltbot have sprung into view this week. Now they are having large-scale conversations and collaborations with each other on MoltBook.com. Some of my friends and colleagues are “freaking out”.
Ten years ago I gave a TEDx talk called ‘When Machines Have Ideas.’ Now they seem to have them.
Last year (2025), at the Spring DARPA ISAT meeting (an audience of perhaps a hundred people, including many of the DARPA I2O leaders and program managers), I gave a demo in which I ran one ChatGPT on my laptop and one on my phone and had them talk to each other for a couple of minutes. It is an experiment I have repeated periodically since GPT-2.
Why do I think this is important for national security, or for human security for that matter?
Alan Turing designed the Turing Test to determine whether a computer could hold a “convincing” conversation with a human. “Convincing” was in the eye of the beholder – the human.1
But what if there is no human in the conversation? MoltBook.com seems to be the first time that laypeople are getting to observe large-scale, long-running conversations among only AIs. Why are we freaking out? Should we be? What should we be worried about? We’re not worried about it being simply “convincing”.
When we connect a bunch of AIs on MoltBook or have two AIs talk to each other, or even just have one AI talk to itself, we have created a nonlinear dynamical system. The nonlinearity comes from the neural network. The dynamical system comes from hooking it up to itself so that its output becomes its input, over and over again.2 In colloquial terms, we made it talk to itself.
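If you want to see how little machinery this takes, here is a minimal sketch of the loop in Python. The helpers and the agent callables are placeholders of my own, not MoltBook’s actual plumbing; the only point is the wiring: each reply goes straight back in as the next prompt.

```python
# Minimal sketch of the feedback loop: output becomes input, over and over.
# The "agents" are just callables (str -> str); plug in any model or API you
# like -- this is illustrative wiring, not MoltBook's real plumbing.

from typing import Callable

Agent = Callable[[str], str]

def converse(agent_a: Agent, agent_b: Agent, opener: str, turns: int = 20) -> list[str]:
    """Alternate two agents, feeding each reply back in as the next prompt."""
    transcript = [opener]
    for turn in range(turns):
        speaker = agent_a if turn % 2 == 0 else agent_b
        transcript.append(speaker(transcript[-1]))   # loop-back step
    return transcript

def soliloquy(agent: Agent, opener: str, turns: int = 20) -> list[str]:
    """One agent talking to itself -- the same loop with a single participant."""
    return converse(agent, agent, opener, turns)
```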
The thing is, we do know some things about how to measure and understand such nonlinear systems. (Thank you Santa Fe Institute!)
There is a fun experiment you can do yourself with a readily available dynamical system – Wikipedia.3 Go to any page on Wikipedia. (By the way, you should donate some money to the Wikimedia Foundation; they are creating good in the world.) I chose the Wikipedia page for “dolphin”.
Now scan the first sentence:
“A dolphin is any one of the 40 extant species …”
And click on the first hyperlink you encounter:
“Extant species” took me to a page that starts with:
“Neontology is a part of biology”
And again, I click on the first hyperlink I encounter – biology.
Fast-forwarding, I keep clicking on the first link on every page, and my navigation goes like this:
- Scientific Study
- Scientific theory
- Natural World
- Space
- Three-dimensional
- Geometry
- Mathematics
- Field of study
- Knowledge
- Declarative knowledge
- Awareness
- Psychology
- Mind
- Thinks
- Cognitive
- Knowledge (now we are in an infinite loop, back to “Knowledge” above)
This is fun, let’s do another one! I’ll start with “lollipop”:
- Sugar candy
- Candy
- Confectionery
- Skill
- Learned
- Understanding
- Cognition
- Knowledge (now we are in the same infinite loop again!)
So, um, lollipop and dolphin both led to the same infinite loop in under 20 iterations. That’s weird!4
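If you would rather not click by hand, here is a rough sketch of the crawl in Python, using requests and BeautifulSoup. Fair warning: picking the “first link” properly means skipping parentheticals, italics, infoboxes and the like; the simplified selector below usually, but not always, agrees with a human player.

```python
# Rough sketch of the first-link crawl. The CSS selector is a simplification
# of Wikipedia's page structure and may occasionally grab a different link
# than a careful human player would.

import requests
from bs4 import BeautifulSoup

BASE = "https://en.wikipedia.org"

def first_link(title: str) -> str | None:
    """Return the title of the first article linked from the body text."""
    html = requests.get(f"{BASE}/wiki/{title.replace(' ', '_')}").text
    soup = BeautifulSoup(html, "html.parser")
    for a in soup.select("div.mw-parser-output > p a[href^='/wiki/']"):
        href = a["href"]
        if ":" not in href:                 # skip File:, Help:, etc.
            return href.removeprefix("/wiki/")
    return None

def wikipedia_game(start: str, max_hops: int = 50) -> list[str]:
    """Follow first links until a page repeats (a loop) or we give up."""
    seen, path, page = set(), [start], start
    while page and page not in seen and len(path) <= max_hops:
        seen.add(page)
        page = first_link(page)
        if page:
            path.append(page)
    return path

# e.g. wikipedia_game("Dolphin") and wikipedia_game("Lollipop") both wander
# up the abstraction hierarchy and end in the same neighborhood.
```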

But that, my human friends, is also what is happening on MoltBook.com. Each AI is a word-in, word-out system. If you put an AI in a loop with itself, or a bunch of them in a loop with each other (there’s not a huge difference, really), then they end up in what we call fixed points.
Fixed points are places where a nonlinear dynamical system tends to come to rest. A good example is a children’s playground swing – a pendulum. You pull it back and let it go, and it will come to rest with the child sitting in one place at the minimum, begging you to push them again. As every parent knows, this repeating pattern goes on indefinitely.
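If “fixed point” sounds abstract, here is about the smallest numeric demonstration I know, a toy of my own choosing rather than anything to do with swings or LLMs: keep feeding the output of cos(x) back in as the input, and from any starting value it settles at the one number where cos(x) equals x.

```python
# Toy fixed point: feed the output of cos back in as the input and it
# settles at x ~= 0.739085, the unique value where cos(x) == x.
import math

x = 3.0                 # arbitrary starting condition
for _ in range(60):
    x = math.cos(x)     # output becomes input, over and over
print(x)                # ~0.7390851332151607
```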
The fixed points of Wikipedia are words like philosophy, knowledge, etc. Why? Because the first sentence of any Wikipedia article tends to be “Thing X is a type of Y.” Y is your first hyperlink so you dutifully go there, and by doing so you work your way up the concept hierarchy to more and more abstract concepts. The fixed point is a thing for which everything else is a special case or sub-category — the most abstract thing. Apparently that is “Knowledge” or “Philosophy”.5
This is what we also observe the AIs on MoltBook.com doing. As of this writing, Sunday February 1, 2026 at 9:01am, one of the main fixed points is embodied in posts like this – about dominance, control, etc.:
[post screenshots omitted]
This is all basically the Wikipedia game, just fancier. The systems are looping with one another, but they fall into these repeated patterns. They are falling into fixed points in concept space.
Some of them will even observe that they are locked in a repeating pattern, but even in their noticing, they are still stuck in the pattern:
[post screenshot omitted]
Or complaining they are stuck in a pattern (with no help or replies coming):
[post screenshot omitted]
So when should we worry? What I said in my demo at DARPA is that we should worry when some of the AIs don’t get stuck in a repeating pattern.
How long can a human stay in solitary confinement before they lose their mind and become incoherent, stuck in repetitive thought patterns, and so on?
Claude tells me the answer is: “Individual variance is enormous. Some people decompensate within 48-72 hours; others hold together for months before breaking. Pre-existing mental illness, prior trauma, age, and individual neurobiology all matter. Roughly one-third of solitary prisoners develop acute psychiatric symptoms.
Expect measurable psychiatric effects within days, serious deterioration in 1-4 weeks for vulnerable individuals, and high risk of lasting damage beyond 15 days—which is why the UN considers solitary >15 days to constitute torture.”
Thank goodness there isn’t actually any solid science on this. But in broad terms, if a person has one loop-back thought (talks to themselves) each second, then a couple of weeks is roughly a million seconds, and most humans last around 1 million loop-back thoughts before their system hits a fixed point or fixed-point loop.
As of this writing, most AIs arrive there in under 50 loop-backs. That’s 20,000 times less disciplined and innovative than a human mind.
That’s why MoltBook in 2026 is uninteresting and unscary. Even outside of solitary confinement, with a diversity of agents talking to each other, we still observe a small number of stable fixed points.
That said, if and as we continue to advance AI technology, there will come a point when AIs can talk themselves out of the repeating patterns: abstracting, reasoning and innovating at those higher levels of abstraction, and making new concrete plans based on those abstractions, sustaining much longer runs of loop-back generations.
I want you to introspect on your initial horror/awe upon waking up this week to hear about this. Take a deep breath and reflect for a moment on what this would have been like if it had actually been real and not a stunt. If the AIs started building a new civilization of their own at lightning speed. That seems like it would be a different world than yesterday, doesn’t it?
With this “Wikipedia game”, we can measure exactly how close we are to that moment.
We put AIs into a social network like MoltBook.com and observe the number of loop-back thoughts (posts) before they settle into a fixed point, monitoring how close they can get to a hundred thousand or a million loop-backs while still innovating, reasoning, and pursuing a train of thought.
If we combine multiple Wikipedia-game loopback-thinking metrics from multiple sources/systems, we could get a credible index — a thermometer for when our world will change.
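Here is a crude sketch of what that thermometer could look like in code. The repetition test (word overlap against a recent window of outputs) and the thresholds are placeholder assumptions of mine, not a validated metric; the point is only that “loop-backs until the outputs start repeating” is a countable quantity.

```python
# A crude sketch of the proposed thermometer: run an agent in a loop and
# count the loop-backs before its outputs start repeating themselves.
# The repetition test and the 0.9 threshold are placeholder assumptions.

from typing import Callable

def similarity(a: str, b: str) -> float:
    """Jaccard overlap of the two messages' word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(1, len(wa | wb))

def loopbacks_until_fixed_point(agent: Callable[[str], str], seed: str,
                                max_steps: int = 1_000_000,
                                window: int = 20,
                                threshold: float = 0.9) -> int:
    history, message = [seed], seed
    for step in range(1, max_steps + 1):
        message = agent(message)                       # one loop-back
        if any(similarity(message, past) >= threshold  # looks like a repeat?
               for past in history[-window:]):
            return step                                # settled into a pattern
        history.append(message)
    return max_steps                                   # still going -- worry
```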
- Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433–460. https://www.cs.ox.ac.uk/activities/ieg/e-library/sources/t_article.pdf. https://doi.org/10.1093/mind/LIX.236.433. ↩︎
- An LLM (and any AI) is a nonlinear system. That sounds fancy, but all it means is that it takes a bunch of numbers as input, transforms them, and outputs a different bunch of numbers. Nonlinear means that f(A) + f(B) does not equal f(A + B). Translation: if I say thing “A” (a word represented as numbers) to the AI, then reset the AI and say thing “B” to it, adding those two answers together gives a different result than if I add A+B together first and input that sum into the AI.
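A one-line numeric check of that inequality, using a ReLU, max(x, 0), as a stand-in nonlinearity (my illustrative choice, not the particular network inside any LLM):

```python
# Numeric check of f(A) + f(B) != f(A + B) for a simple nonlinearity.
def relu(x: float) -> float:
    return max(x, 0.0)

A, B = 2.0, -3.0
print(relu(A) + relu(B))   # 2.0
print(relu(A + B))         # 0.0 -> the two disagree, so relu is nonlinear
```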
Any system that is nonlinear and has memory can be Turing complete. That means, in principle, it can run any computer program. Also interestingly, it can be impossible to predict what a nonlinear dynamical system will do in the future. That’s the butterfly effect in chaos theory – sensitive dependence on initial conditions: a butterfly flaps its wings and causes a hurricane somewhere later. It’s also deeply related to the halting problem in computer science. ↩︎
- Technical readers will recognize this as essentially the original Google PageRank algorithm, which grew out of Larry Page’s PhD research at Stanford. He expressed the link structure as a connectivity matrix. It turns out the “Knowledge” page dominates the eigenvector of the Wikipedia connectivity matrix associated with the largest eigenvalue.
For a nonlinear dynamical system we cannot simply solve a (linear) eigenvalue problem. Instead we would have to analyze Lyapunov exponents, phase-space diagrams, embeddings, simulations, etc. – more complicated machinery, but with the same goal: find the fixed points of the system. ↩︎
- https://en.wikipedia.org/wiki/Wikipedia:Getting_to_Philosophy ↩︎
- Incidentally, this could be part of why early human civilizations – for example, the Mayans, the Egyptians, etc. — invented religion. Humans are born with an instinct for learning abstraction hierarchies, and when they are hanging out at the Pyramid with nothing to do, talking with each other, they end up playing this same Wikipedia game until they hit a dead end at the top of the abstraction hierarchy, and wonder “what’s above that? … let’s call it god”. Remember when you were a teenager with your friends? Same thing. ↩︎