We Imagined AI Long Before We Built It


How old science fiction fantasies are becoming engineering problems, and why that matters more than most people realise

A lot of the future arrives twice.

The first time, it arrives as fantasy.

A comic book.
A cartoon.
A children’s TV show.
A late-night movie.
A wild idea someone laughs at, then quietly remembers for the next thirty years.

The second time, it arrives as engineering.

Not as magic.
Not as spectacle.
Not as a shiny promise floating above us.

As something awkward, expensive, imperfect, patent-filled, commercially motivated, and suddenly just plausible enough to make us sit up and realise that what once felt ridiculous has started becoming practical.

That’s what struck me in this week’s Hong Kong radio segment.

Phil and I began, as these conversations often do, with the big old sci-fi fantasies. The death ray. Teleportation. Flying cars. Invisible cloaks. Robot helpers. The sort of things many of us grew up watching, reading about, or imagining before we had any vocabulary for artificial intelligence, advanced robotics, or orbital infrastructure. In the segment, I found myself saying that AI isn’t magically making those things happen, but it is shifting them from impossible to plausible by speeding up the engineering side of the equation. That, to me, is the real story.

And it is a very human story.

Because the deeper truth is not that we have suddenly become obsessed with robots, machine minds, or power beamed from space.

We have always been obsessed with them.

We have always wanted help.
Always wanted ease.
Always wanted something beyond us that could think, carry, organise, protect, predict, or simply make life less heavy.

AI didn’t invent that desire.

It inherited it.


Listen to the radio segment

This article grew out of my Hong Kong radio conversation with Phil Whelan this week.

If you’d rather listen first and read second, insert the audio here. (17 minutes 24 seconds)


Old sci-fi got the details wrong, but the cravings right

One of the mistakes people make when they talk about science fiction is assuming it was meant to be accurate.

Most of it wasn’t.

It wasn’t a blueprint.

It was a wish list.

Or a warning.

Or a projection of what people wanted, feared, or hoped technology might one day do.

And in that sense, old sci-fi was often more insightful than we give it credit for.

It got a lot of the details wrong.

But it got the cravings right.

We wanted robot helpers.
Machines that understood us.
Instant communication.
Strange new sources of energy.
Tools that removed friction.
A world in which effort could be outsourced, danger reduced, and wonder made ordinary.

That’s why so much of this still feels familiar.

When we talk now about humanoid robots, AI copilots, energy beamed from orbit, or machines acting on our behalf, it does not feel entirely new. It feels remembered.

That matters.

Because when something feels remembered, adoption gets easier. People may still resist it, fear it, question it, or mock it. But it does not land in a cultural vacuum. It lands on top of decades of imagination.

I’ve been writing and speaking about this for a long time. Back in 2012, I was already recording radio conversations under the simple banner of Science fiction coming to life, talking about how old imagined futures were steadily becoming everyday technology. Even that short archive piece is useful now, not because it says everything, but because it reminds me that this signal has been around for years.
https://www.morrisfuturist.com/science-fiction-coming-to-life/

That’s one of the reasons I care so much about keeping this archive alive.

Signals often return.


AI is not magic. It is an accelerator

Let’s be clear.

AI is not making teleportation real in the Star Trek sense.

It is not conjuring humanoid robots out of thin air.

It is not waving a wand over old science fiction props and bringing them to life.

What it is doing is much more powerful than that.

It is accelerating engineering.

It is helping humans model, compare, test, optimise, simulate, refine, and rethink complicated systems faster than we could before. That includes robotics, manufacturing, logistics, communication, and increasingly, energy. And it matters because once fantasy becomes an engineering problem, it becomes something investors can fund, inventors can prototype, and corporations can try to scale. That is exactly the shift I was pointing to on air.

This is also why I’m wary of the word “breakthrough” when people talk about the future.

Most of the time what looks like a breakthrough from the outside is actually a thousand tiny advances inside the system:

  • better materials
  • faster computing
  • sharper sensors
  • more useful models
  • tighter feedback loops
  • cleaner manufacturing
  • better energy access

Put enough of those together and something that looked fanciful last year starts looking commercially plausible this year.

That’s not magic.

That’s momentum.


Tesla’s Optimus is not really a robot story

It’s very tempting to make this whole discussion about Elon Musk and Tesla’s Optimus humanoid robot.

That is understandable.

It is dramatic.

It photographs well.

It fits the old fantasy of the human-shaped machine.

And yes, Tesla is taking Optimus seriously. In Tesla’s Q1 2026 investor update, the company said it is planning a first-generation production line in Fremont designed for up to 1 million robots a year and a second-generation Texas line designed for much larger annual capacity over time. Tesla also continues to describe Optimus as a general-purpose humanoid robot intended for dangerous, repetitive, or boring tasks. (Tesla Investor Relations)

That’s interesting.

But to me, the real story is not the robot itself.

It is what it tells us about where fantasy becomes practical.

Because Optimus is not being sold, at least initially, as your charming home companion or witty synthetic housemate. It is being positioned as labour. Factory labour. Warehouse labour. Repetitive task labour. Work that is tiring, dangerous, expensive, difficult to staff, or hard to do continuously with human beings.

That’s a very different reality from the old dream.

Sci-fi gave us robot companions.

Business wants robot productivity.

And that gap between fantasy and function is worth paying attention to.


The hand is where the fantasy gets expensive

One of the best parts of this week’s segment was that we got to talk about something most people would never normally care about.

Robot knees.
Robot fingers.
Robot joints.

And yet these are exactly the places where the future either works or falls apart.

Because if you want a humanoid machine to exist in a human world, it needs to do more than think.

It has to move.

It has to crouch, balance, recover, lift, turn, grip, hold, adjust pressure, pick up something fragile without crushing it, and something heavy without failing.

In the segment, I said what many engineers have known for years: we take our own bodies for granted. Human hands and knees are extraordinary pieces of design. Robots have historically struggled not just with intelligence, but with dexterity, recovery, balance, and adaptation in messy environments. Tesla’s recent knee patent and the hand patent reporting around Optimus matter because they are trying to close exactly that gap. (RoboHorizon)

This is not trivial.

It is the difference between a machine performing one scripted action in a controlled factory cage and a machine operating in a world built for people.

And it ties directly to something I wrote earlier this year:

robots do not need to be like us to be useful.

That article matters here because it pushes against one of the laziest assumptions in all this conversation, the idea that the future of robotics depends on perfect imitation of the human form. It doesn’t. The more useful question is what humans, machines, and AI each do best, and how we design around that. That is the heart of my HUMAND thinking.
https://www.morrisfuturist.com/robotics-humand-future-ai-collaboration/

And yet there is also something psychologically powerful about human-shaped machines.

We are drawn to them.

Which tells us just as much about ourselves as it does about the robots.


Why do we want them to look like us?

This was another thread in the segment that I think deserves more attention than it usually gets.

Phil rightly pointed out that people did not just want a machine.

They wanted something that looked like a person.

That’s important.

Because it means our future fantasies are not only about utility.

They are about relationship.

In the radio conversation I used the word anthropomorphic. Fancy word, simple idea. We keep wanting non-human systems to look human, sound human, tilt their heads, smile slightly, move in familiar ways, and make us feel as if we are interacting with something more relatable than a screen or a box.

That desire runs deep.

It is why people gave names to early voice assistants.
It is why children talk to machines.
It is why robots unsettle us more when they are nearly human than when they are obviously mechanical.
It is why old stories filled the world with gods, spirits, talking animals, mechanical men, haunted objects, and intelligent forces outside ourselves.

We have always projected mind into matter.

AI is just the latest place we are doing it.

And that is why the conversation about robots is never only technical.

It is emotional.

Cultural.

Psychological.

Mythic, even.

That’s one of the reasons I wanted this piece to exist as more than just a transcript recap. Because beneath the weekly radio banter there is a much larger signal sitting here.

We are not merely building machines.

We are externalising imagination.


Space energy sounds ridiculous until it doesn’t

Then there is the wonderfully strange part of this week’s conversation.

Meta wanting to power future AI data centres with energy beamed from space.

That sounds like comic-book nonsense.

Which is exactly why it matters.

Meta has reportedly secured early access to up to 1 gigawatt of potential space-based solar power through Overview Energy, with demonstration targeted for 2028 and commercial delivery discussed around 2030. The core idea is simple enough to explain, even if it is still years from maturity: collect solar energy in orbit where sunlight is more continuous, then beam that energy back to Earth to support power-hungry AI infrastructure. (Tom’s Hardware)

If that sounds absurd, good.

A lot of futures do at first.

But what matters is not whether this particular system becomes routine tomorrow. What matters is that serious money is now chasing ideas that used to live mostly in pulp fiction and speculative engineering.

Why?

Because AI needs an extraordinary amount of energy.

That is one of the less romantic truths about the current AI push. All of this intelligence, generation, prediction, modelling, orchestration, and digital productivity sits on top of a very physical problem.

Power.

Compute needs electricity.
Data centres need electricity.
Continuous inference needs electricity.
Agentic systems need electricity.

And if the energy equation does not work, the fantasy stalls.

So once again, the future becomes less about magic and more about infrastructure.

That’s one of my favourite patterns in foresight. The future often arrives disguised as glamour, but underneath it is plumbing.


Teleportation is still mostly a no

For the record, no, we are not teleporting people.

Not you.
Not me.
Not your suitcase.

But even here, there is a useful distinction to make.

Human teleportation remains fantasy. Quantum teleportation, meaning the transfer of quantum information, is a real scientific field and continues to advance, but that is a long way from disassembling Morris in Melbourne and reassembling me in Hong Kong. (RoboHorizon)

I mention this because it points to another rule of future thinking.

Words stay the same. Meanings shift.

Teleportation means one thing in science fiction and another thing in physics.

Intelligence means one thing in humans and another in machines.

Memory means one thing in brains and another in databases.

That matters because people often react to the word before they understand the category.

Which is why the role of interpretation remains so important.


This is where HUMAND becomes practical

Whenever I write about AI, robotics, or the future of work, I eventually come back to HUMAND.

Not because I want to wedge a framework into everything.

Because it genuinely helps.

HUMAND is my shorthand for thinking clearly about how Humans, Machines, and AI work together. Not as enemies. Not as hype categories. As task partners.

The reason that matters here is that this whole article is really about allocation.

What should humans still do?

What should machines do better?

What can AI now coordinate, model, detect, compare, or accelerate that used to be too slow or too complicated?

That is the design challenge underneath all of this.

And I have been feeling that more and more personally too. In Me + My Machine, I wrote about what it feels like to work alongside an AI trained on my own archive. Not as a replacement, but as a provocation, a structure, a way of surfacing patterns and memory faster so I can stay in my lane of synthesis, intuition, and strategic questioning.
https://www.morrisfuturist.com/human-ai-collaboration-morris-misel/

That matters here because it is the same principle at a different scale.

Whether we are talking about a strategist with an AI foresight engine, a factory with humanoid robots, or an energy company trying to power artificial intelligence from orbit, the question is not “Will the machine replace us?”

It is:

How do we redesign the relationship?

That is where the real future work is.


The ripple effects are already visible

One of the reasons I prefer writing like this rather than just reacting to the weekly headline is that it lets us see beyond the novelty.

If Tesla improves robot hands and knees, the ripple effects are not just about Tesla.

They touch:

  • manufacturing design
  • labour economics
  • insurance and workplace safety
  • warehousing
  • aged care support
  • consumer trust in machines
  • the legal boundaries of machine error
  • education and training pathways
  • what kinds of bodies we design technology for

If Meta or others genuinely move space-based solar from weird idea to infrastructure play, the ripple effects are not just about energy.

They touch:

  • geopolitical competition
  • grid resilience
  • AI economics
  • access and inequality
  • the environmental cost of compute
  • new forms of orbital dependency
  • who gets cheap power and who does not

And if AI keeps making fantasy feel closer to practical reality, the ripple effects are cultural too.

We may start wanting different things.

Expecting different things.

Trusting different systems.

Becoming more comfortable with synthetic partners in daily life, not because we suddenly changed, but because the things we once imagined as distant start becoming ordinary.

That is how a signal becomes an operating condition.


So what do we do with this?

This is the part I always care about most.

Not just the fascination.

Not just the story.

The next move.

If I were talking to a leadership team, a conference audience, a board, or a group of strategists, I would say this.

1. Watch where fantasy becomes infrastructure

Don’t just dismiss strange ideas because they sound theatrical. Watch for the moment they become engineering problems. That is usually when money enters, patents appear, prototypes improve, and timelines shorten.

2. Separate the craving from the form

Old sci-fi fantasies often get the object wrong but the need right. Ask what human desire sits underneath the technology. Help? Ease? Safety? Companionship? Power? Speed? Once you identify the desire, the future becomes easier to read.

3. Stop treating AI as a screen-only story

The next wave is increasingly physical. Robots, devices, interfaces, energy systems, mobility, materials. AI is moving off the page and into the world.

4. Redesign work through HUMAND, not fear

Do not ask only whether a machine can do something. Ask whether it should. Ask what still needs human judgement, trust, care, or creativity. Ask which tasks belong where.

5. Prepare culturally, not just technically

The hardest part of many future shifts is not the engineering. It is whether people are ready to accept, trust, and live alongside the thing being built.


Final thought

We like to imagine the future arriving with a soundtrack.

A dramatic entrance.
A glowing beam.
A machine stepping into the room and changing everything in a single theatrical moment.

Most of the time it does not happen that way.

Most of the time the future arrives in fragments.

A patent filing.
A design tweak.
A small engineering advance.
A company reserving energy from a technology that does not yet fully exist.
A robot that still cannot quite move like us, but is beginning to move well enough to matter.
A cultural memory being quietly reactivated by a tool we once thought belonged only in cartoons.

That is what this week’s segment really reminded me of.

We imagined much of this long before we could build it.

Now some of it is becoming practical.

Not all at once.
Not cleanly.
Not perfectly.

But enough to matter.

And once fantasy becomes engineering, it is no longer just a story.

It becomes a choice.

If you want your team, audience, or organisation thinking more clearly about the signals shaping what comes next, that’s the work I do.

On stage.
In boardrooms.
In strategy sessions.

Choose Forward.

Morris Misel
Foresight Strategist | Keynote Speaker | Advisor
morrismisel.com


Morris Misel is a foresight strategist, keynote speaker, and advisor who helps leaders understand the signals reshaping work, technology, decision-making, and the future.

Heard by millions each year in the media and onstage, he works across industries to turn uncertainty into clarity and preparation.


#MorrisMisel #ArtificialIntelligence #Robotics #ScienceFiction #FutureOfWork #StrategicForesight #HUMAND #FutureSignals #Innovation #TeslaOptimus #HumanMachineCollaboration #FutureOfTechnology #SpaceEnergy #KeynoteSpeaker
