
The Next Computing Revolution: When Silicon Learns to Breathe

There’s a quiet hum that only certain laboratories know: the hum of something that isn’t quite alive, but no longer fully mechanical either.

It’s the sound of electrodes meeting living cells, the sound of silicon learning to breathe.

For decades, our machines have been cold logic: gates, transistors, and code running neatly across polished wafers.

But in a Swiss lab described by BBC Technology Editor Zoe Kleinman, researchers are wiring clusters of human neurons, tiny organoids grown from stem cells, into experimental circuits.

Their term for it is wetware.

Mine would be something closer to the dawn of organic computation.

When I first read that scientists could press a keyboard key and watch an electrical response flicker through a living cluster of brain cells, I didn’t feel awe as much as a pause, a deep, human pause. Because if this truly works, it means the age of synthetic thinking is ending. We are entering an age of living cognition.

A world that keeps narrowing

I spend much of my professional life helping leaders make sense of exponential technologies. The conversation usually begins with AI, data, and automation, but it always ends with something more human: our discomfort with the pace of our own invention.
What’s unfolding in those Swiss petri dishes isn’t just another upgrade; it’s the blurring of a border we once considered sacred.

In my frameworks like Immediate Futures™ and Inhabitable Futures Grid™, I map signals across near, mid and distant horizons. “Wetware” computing now lights up every layer of that map: a signal today, a shift tomorrow, and a shockwave a decade out.

On the Immediate Futures horizon, biocomputers could transform the energy equation of AI. Early prototypes promise to run on a fraction of the power of current data centres.

That’s not a small tweak; it’s a planetary one. Each human neuron consumes about a millionth of the energy of a transistor. Imagine intelligence at the scale of the cloud, running at the metabolism of life itself.
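To give that energy picture a sense of scale, here is a rough back-of-envelope sketch in Python. The constants (a ~20 W resting brain, ~86 billion neurons, a 700 W AI accelerator board) are commonly cited approximations I’ve assumed for illustration, not figures from the article:

```python
# Rough scale check on the brain's energy budget.
# Constants are widely cited approximations, assumed here for illustration.
BRAIN_POWER_W = 20.0      # resting power of a human brain
NEURON_COUNT = 8.6e10     # roughly 86 billion neurons

watts_per_neuron = BRAIN_POWER_W / NEURON_COUNT
print(f"Per-neuron power: {watts_per_neuron:.1e} W")  # on the order of 2e-10 W

# Compare with a hypothetical 700 W AI accelerator board:
ACCELERATOR_POWER_W = 700.0
print(f"One accelerator draws the power of {ACCELERATOR_POWER_W / BRAIN_POWER_W:.0f} brains")
```

Under these assumptions, each neuron runs on a fraction of a nanowatt, while a single accelerator board draws as much power as dozens of whole brains. The exact ratios depend on the hardware, but the orders of magnitude show why biological computation looks so attractive.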

On the Inhabitable Futures horizon, the question flips. If we can grow computational tissue from our own cells, where does ownership live?

Who holds the rights to a mind built from anonymous donors? And what does consent mean when the “processor” might one day feel?

HUMAND™—when the machine becomes part of us

I created HUMAND™ to explore how humans, machines and AI divide work into what each does best. Wetware computing collapses those divisions entirely. The processor is no longer separate from biology; it is biology.

Picture a future workplace where an organisation’s most powerful analytical engine isn’t a server rack but a living neural matrix. Maintenance might require a biologist, an ethicist and an IT engineer at the same table. It’s the ultimate expression of human-machine collaboration—and perhaps the moment it ceases to be collaboration at all.

But HUMAND was never just about efficiency. It’s about meaning. What happens to purpose and identity when our tools carry pieces of us, literally, inside them?

Decision Trust Zones™—who decides when the neurons learn?

As soon as we talk about machines that can adapt, we enter the domain of Decision Trust Zones™. These zones describe the spaces where we must decide who or what we trust to decide.
With silicon, trust was contractual: algorithms act within code we control.
With living neurons, trust becomes biological: they may behave unpredictably, even creatively.

Imagine a bio-AI system trained to manage logistics. Over time it starts to re-route shipments in patterns no one programmed, reflecting its own learned efficiencies. Is that optimisation, or emergent behaviour? And if an organoid “dies,” as the BBC report described, after a brief burst of activity—are we witnessing the shutdown of a system or the final pulse of a living entity?

Decision Trust Zones help leaders design governance around uncertainty, but this is uncertainty with pulse and breath. The rules of AI ethics may no longer fit when intelligence sits in a petri dish.

Ripple Effects™—if this happens, then what?

Foresight isn’t prediction; it’s permission to explore consequences. Using my Ripple Effects™ lens, I start with one small act—the creation of living processors—and follow its waves outward.

If this happens, data centres could shrink from vast, heat-hungry warehouses to biovats maintained by lab technicians. Then this could happen: global energy demand for computation could drop dramatically, reshaping carbon markets and climate models. Which would impact national infrastructure investment, energy policy, and the politics of technological dependency.

Another ripple: if this happens, we will begin collecting biological material at unprecedented scale. Then this could happen: personal biology becomes an economic asset. Which would impact privacy law, medical ethics, and the commodification of human tissue.

And a more intimate ripple: if this happens, we externalise not only memory and logic but the essence of human cognition itself. Then this could happen: empathy, creativity, and moral intuition—the things we thought uniquely ours—start to mirror back at us from a dish. Which would impact how we teach, how we lead, and what we still consider “human work.”

PTFA—Past Trauma Future Anxiety

Every major technological leap stirs old human wounds. PTFA—Past Trauma Future Anxiety—is the pattern of fear that arises when past experiences of loss or control collide with future possibilities.

Wetware computing touches both ends.
Our past trauma lies in centuries of treating life as something to exploit; our future anxiety lies in fearing what happens when life becomes our partner.

We’ve always mythologised this tension—from the Golem to Frankenstein—stories where creation mirrors its maker too closely. In my own Frankenstein 2050 series, I explored what happens when innovation outruns empathy. Reading about these living circuits, I felt that echo again—not as cautionary fiction, but as unfolding fact.

A pause for perspective

Before we imagine data centres full of living tissue, we should remember how early this work is. The BBC piece made that clear: organoids now survive only four months. Yet even in that short span, researchers observe bursts of activity eerily similar to a dying brain.

I don’t read that as macabre; I read it as a reminder that innovation always carries the shadow of mortality. We may design eternal systems, but we build them with finite hands.

The deeper question is not whether these biocomputers will work—but whether we are ready for what they will reveal about us.

The next computing revolution

Every previous computing revolution has been about scale.
Silicon chips made processing faster.
Quantum computing promises power.
Biocomputing offers proximity—bringing intelligence closer to the essence of life.

It isn’t just a new substrate; it’s a new relationship between information and existence. Data becomes tissue. Processing becomes metabolism. And “cloud” might soon mean something literal: a living network distributed across biolabs instead of server farms.

We often say “AI is learning,” but in this next epoch, learning will mean growing. The line between programming and cultivation will blur. Engineers will talk like gardeners.

When that happens, language itself will need re-engineering. We’ll stop asking what machines can do and start asking what they want to do—or appear to.

The ethics of growth

What makes wetware so confronting isn’t that it blurs the line between human and machine; it’s that it blurs the line between creation and care.
When you build a circuit, you power it on or off.
When you grow a brain, you feed it.

This simple shift changes everything. Suddenly, technology moves from engineering to agriculture, from assembly to nurturing. And with that comes responsibility. What happens when something that learns also hungers? When a computer’s maintenance manual starts to resemble a biology textbook — or a moral one?

Governments are still struggling to regulate text-based chatbots, let alone a device that has to be “kept alive.” We’re stepping into a policy vacuum where ethics, law, and emotion will collide.

My Immediate Futures™ model would describe this as an “accelerated uncertainty zone”: change that moves faster than our ability to absorb it. The role of foresight here isn’t to calm the anxiety; it’s to name it, to make the invisible visible so leaders can design around it rather than react to it.

From computing power to computing presence

Biocomputing reframes one of technology’s oldest obsessions: speed. For fifty years, we’ve measured progress in teraflops and gigahertz. Now, progress may be measured in patience. Growing a functional neural cluster takes months. You can’t rush a cell to divide, any more than you can rush understanding.

That shift, from acceleration to cultivation, might be the hidden gift inside this frontier. It forces us to slow down and rediscover the tempo of life itself.

Long-term horizons: the Inhabitable Futures view

Using my Inhabitable Futures Grid™, I test how plausible, probable, and desirable these futures really are.

Plausible (0–5 years): Wetware remains experimental but begins to influence chip design. Engineers mimic biological efficiency in new forms of low-power silicon AI. Energy-hungry data centres face public pressure to adapt.

Probable (5–15 years): Hybrid systems emerge — partially living processors acting as “co-processors” inside quantum or neuromorphic machines. Cloud providers begin to offer “bio-compute” options for sustainability branding. Ethical boards form in response to public unease.

Desirable (15+ years): The hope — if we do this wisely — is a computing landscape that behaves less like an empire and more like an ecosystem. Machines that waste less, adapt more, and learn with empathy coded into their architecture. But desirability isn’t automatic; it depends on how consciously we choose the road there.

That’s the work of foresight: not predicting which future we’ll get, but designing the one we can inhabit.

The Ripple Effects continue

Follow the chain a little further:

  • If biocomputing matures, pharmaceutical companies will compete for neural tissue patents.
  • Then, national debates about “genetic sovereignty” will ignite: who owns biological material once it leaves a body?
  • Which impacts international trade, medical research, and even geopolitics as nations protect biological IP like they once protected oil.

Another chain:

  • If learning organoids display rudimentary memory,
  • Then they become tools for studying consciousness and neuro-disease simultaneously.
  • Which impacts our understanding of mental health and challenges every existing definition of sentience.

And yet another:

  • If AI begins to run on living tissue,
  • Then we’ll have to extend digital ethics into bio-ethics.
  • Which impacts education, leadership, and organisational culture, because every company that uses intelligence will also be, in some small way, responsible for life.

Ripple Effects are rarely linear.

They loop back, touching psychology, spirituality, and governance all at once.

That’s what makes them powerful foresight tools and uncomfortable dinner conversation.

Signals to watch

  1. Longevity of organoids. When scientists extend survival beyond a year, the conversation will shift from novelty to infrastructure.
  2. Legislation on human cell data. The first “bio-GDPR” framework will redefine privacy in the age of biological information.
  3. Corporate adoption curves. As soon as a major cloud provider invests in wetware research, the signal moves from fringe to mainstream.
  4. Public language. Watch when media stop calling them “mini-brains” and start saying “living processors.” That linguistic shift will show collective acclimatisation.

What leaders can do now

  1. Start ethical scenario rehearsals.
    Bring ethicists, technologists, and communicators into one room. Ask the uncomfortable questions early: What are our boundaries? Who decides when an experiment stops?
  2. Build biological literacy inside strategy teams.
    Understanding cells may become as essential as understanding code. Begin learning the language of life sciences now.
  3. Revisit data policies.
    Your future “data” might include biological samples or behavioural signals derived from living substrates. Governance must expand accordingly.
  4. Explore partnerships beyond tech.
    Universities, biotech firms, and environmental organisations will become unexpected collaborators. Foresight thrives in diversity.
  5. Redefine value.
    If living intelligence enters the economy, its value won’t just be measured in performance but in wellbeing. Include sustainability, empathy, and regeneration as performance metrics today.

The emotional layer

When I talk with audiences about technology, someone inevitably asks, “But will it feel?”
It’s the wrong question. The better one is, what will we feel when it does?

PTFA—Past Trauma Future Anxiety—tells us that fear of loss often hides awe. We’re witnessing not the end of humanity, but a new intimacy with our own design. If handled wisely, biocomputing could remind us that intelligence is not cold logic; it’s the art of adaptation.

Closing reflections

I don’t believe in predicting the future. I believe in preparing for it.
And preparation starts with imagination, the disciplined kind that sees beyond headlines and asks: what choices will we make because of this?

When silicon learns to breathe, the real story isn’t the birth of a new technology. It’s the rebirth of humility. We may finally realise that intelligence was never about domination or replication; it was about relationship.

We stand at the edge of a new covenant between biology and code. Whether that covenant becomes symbiosis or servitude depends on what we decide in these early, trembling steps.

So perhaps the next computing revolution isn’t about machines becoming human. It’s about humans remembering what it means to be alive enough to create with care.

Next steps for readers

  • Read the BBC feature that sparked this reflection.
  • Explore how HUMAND™, Ripple Effects™, and Decision Trust Zones™ can help your organisation think through the human–machine frontier.
  • Consider a foresight keynote, workshop, or advisory session to map how biocomputing and living AI might shape your sector.

Because the future of computing isn’t just faster. It’s breathing. And it’s already here.

Choose Forward.


#MorrisMisel #Futurist #Leadership #Strategy #Foresight #FutureThinking #AI #Innovation #Biocomputing #Wetware #FutureOfWork #ExecutiveLeadership #KeynoteSpeaker #ImmediateFutures #InhabitableFutures #RippleEffects #DecisionTrustZones #HUMAND #EthicalInnovation #BusinessStrategy #TechnologyLeadership #ConferenceSpeaker #FutureIntelligence #ChooseForward
