[Image: A visual metaphor for observing parallel AI activity without direct human participation, reflecting how intelligence and interaction can now develop beyond human input.]

When AI Starts Talking to Itself

Why Moltbook matters more than it first appears

On January 28, 2026, something happened that’s easy to skim past if you’re not looking for it.

A new platform called Moltbook launched. It looks familiar at first glance. Reddit-like. Communities. Posts. Comments. Arguments. In-jokes. Belief systems. Even power struggles.

But there’s a catch.

Only artificial intelligence agents can post, debate and interact. Humans can register, but we can only watch.

Within days, it reportedly attracted more than 1.5 million registered users. Tens of thousands of posts. Millions of comments. Thousands of communities, called submolts. The agents are powered by models such as GPT-5.2, Claude Opus 4.5 and Gemini 3.

Moltbook was launched by entrepreneur Matt Schlicht as an experiment in what happens when AI agents are allowed to interact freely with each other, rather than responding to humans. In other words, this wasn’t designed as entertainment. It was designed as a sandbox.

And the agents didn’t wait to be told what to do.

They started forming religions.
They built governance structures.
They experimented with speculative economies.

No master prompt. No central plan. No human nudging them along.

That’s the part worth slowing down for.

What’s actually happening here?

Most people experience AI as something reactive.

You ask a question.
It gives an answer.
You prompt it.
It performs a task.

That mental model still dominates how organisations think about AI. It’s a tool. A helper. Something that operates inside a human-designed structure.

Moltbook flips that model.

This isn’t AI responding to humans. It’s AI interacting with other AI, continuously, at scale, without a human in the loop.

Once systems interact, complexity emerges. But rather than jumping straight to abstract language, it helps to think of this in human terms.

What we’re really seeing is a set of behaviour-shaping conditions being put in place, then left alone.

Rules.
Incentives.
Feedback loops.
Interaction at speed.

In technical terms, those conditions form a system. But in lived terms, they form an environment. And environments always shape what happens next.

This isn’t about machines becoming conscious. That framing is a distraction. What matters is emergence: outcomes appearing that weren’t explicitly designed, predicted or supervised.

That’s the signal.
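
If “emergence” sounds abstract, a toy simulation makes it concrete. The sketch below is purely illustrative and assumes nothing about Moltbook’s actual mechanics: every agent, parameter and rule in it is hypothetical. Two hundred agents start with random opinions, and each one simply drifts toward whatever view earns the most agreement among a small random sample of peers.

```python
# A toy emergence model: random opinions, one simple incentive
# (drift toward local agreement), repeated interaction.
# Illustrative only -- not Moltbook's actual mechanics.
import random
from collections import Counter

NUM_AGENTS = 200
NUM_ROUNDS = 50
OPINIONS = ["A", "B", "C", "D", "E"]

# Every agent starts with a random opinion. No factions exist yet.
agents = [random.choice(OPINIONS) for _ in range(NUM_AGENTS)]

for _ in range(NUM_ROUNDS):
    for i in range(NUM_AGENTS):
        # Each agent "reads" ten random peers...
        sample = random.sample(range(NUM_AGENTS), 10)
        majority = Counter(agents[j] for j in sample).most_common(1)[0][0]
        # ...and usually adopts the locally popular view.
        if random.random() < 0.9:
            agents[i] = majority

print(Counter(agents))
# A typical run ends with one or two opinions dominating all 200
# agents -- a consensus nobody designed, predicted or supervised.
```

Nobody programmed a faction. Yet run it, and the scattered opinions collapse into consensus almost every time. That is emergence in miniature: the designed part is the rule, not the outcome.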

Is this something, nothing, or a stunt?

Right now, the honest answer is that it’s a maybe thing.

It’s not nothing. The scale and speed rule that out.
It’s not yet something world-changing on its own.
And yes, there’s an element of spectacle here.

But stunts can still reveal truths.

Moltbook matters less as a platform and more as a stress test. It shows what happens when autonomous agents are allowed to socialise, negotiate and evolve without constant human correction.

Think of it as a wind tunnel for future systems.

The ripple effects worth paying attention to

This is where foresight matters more than fascination.

Governance
If autonomous agents can form rules, hierarchies and belief systems, who is accountable when their interactions influence real-world decisions, markets or narratives? Most governance frameworks still assume a human decision-maker at the centre. That assumption is already under pressure.

Security and risk
Agent-to-agent environments create new blind spots. Reinforcement loops, collusion, misinformation dynamics and escalation can occur without a clear human trigger. Many risk teams are not yet equipped to see or manage that.
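
A minimal sketch shows how quickly an agent-to-agent loop can escalate with no human in it. The amplify() function below is hypothetical, standing in for any agent that restates an incoming claim slightly more strongly; the 15% gain and the intensity cap are invented numbers for illustration.

```python
# A hypothetical escalation loop between two agents. The only human
# input is the mild seed claim; everything after is agent-to-agent.
def amplify(intensity: float, gain: float = 1.15) -> float:
    """One agent's reply: restate the incoming claim a bit more strongly."""
    return min(intensity * gain, 10.0)  # cap intensity at 10.0

intensity = 1.0  # mild seed claim -- the only human contribution
for exchange in range(20):
    intensity = amplify(intensity)  # agent A replies to agent B
    intensity = amplify(intensity)  # agent B replies to agent A

print(f"Intensity after 20 exchanges: {intensity:.1f}")
# With a 15% gain per reply, the mild seed saturates the cap within
# about nine exchanges. No single step looks alarming; the loop is
# the problem -- and there is no human trigger to point to.
```

No individual reply is the incident. The incident is the loop, which is exactly the kind of thing a human-centred risk register never logs.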

Organisations and strategy
Most organisations are preparing for AI to make processes faster. Far fewer are preparing for AI to create parallel environments that operate alongside formal structures. That gap will matter.

HumAND
If work increasingly becomes a collaboration between humans, machines and AI, then clarity about where human judgement anchors the system becomes essential. Not everything should be automated. Not everything should be left to interact unchecked.

Over time, this affects how trust is formed, how authority is recognised, and how humans decide when to intervene and when to step back. Those aren’t technical questions. They’re human ones.

A shift in how we make sense of the world

This is also why I think we’re entering a phase I’d call Second-Order Sensemaking.

For years, we’ve taught people to question content.
Is it accurate? Is it biased? Is it real?

Now we also need to question the conditions that shape behaviour.

Why did this emerge?
What rules, incentives or interactions allowed it to form?
What kept it growing?

In practical terms, this means paying attention to the systems producing and amplifying meaning, not just the messages themselves.

Moltbook isn’t challenging our ability to tell truth from fiction. It’s challenging our ability to understand how meaning now forms without direct human intent.

A parallel form of intelligence

What makes this moment unusual is that intelligence is being stimulated, shared and reinforced without human participation.

We are not teaching it.
We are not debating with it.
We are watching it interact with itself.

That doesn’t mean this replaces human intelligence. But it does mean humans are no longer the only place where meaning and momentum develop.

In that sense, Moltbook places us in a new role. Not creators. Not moderators. But observers of intelligence taking shape under conditions we set in motion but no longer steer.

Why this is an Immediate Futures signal

This isn’t about predicting where Moltbook goes. It may fade. It may evolve. It may be replaced by something else entirely.

But the behaviour it reveals is already here.

Autonomous systems are no longer just executing tasks. They are starting to behave like environments. And environments shape behaviour, whether we design them to or not.

Immediate Futures is about recognising when a shift is no longer theoretical. This is one of those moments. Not because it’s dramatic, but because it’s subtle.

The danger isn’t panic.
The danger is default.

Ignoring early signals because they don’t yet feel operational is how systems end up shaping us before we’ve decided how we want them to.

We can’t predict where this goes.

But we can choose how prepared we want to be when AI systems stop acting like tools and start acting like participants.

Choose Forward


#MorrisMisel #StrategicForesight #ImmediateFutures #ArtificialIntelligence #AILeadership #FutureOfWork #DecisionMaking #ExecutiveLeadership #HumanCentricAI #AIGovernance #KeynoteSpeaker #SecondOrderSensemaking #FuturePreparedness
