By the end of this year, technology won’t feel smarter. It’ll feel closer.
I was on radio this week with Hong Kong Radio 3’s Phil Whelan, doing what we usually do. Talking ideas out loud. Following the conversation where it wanted to go. Letting curiosity do most of the work.
At one point Phil asked a simple question.
“What’s something we might actually see by the end of this year? Something real. Something you could almost take out of a box.”
It’s a better question than it sounds.
Because most people don’t experience the future as a headline or a launch event.
They experience it when something they use every day suddenly works a bit differently to how it did last month.
When a system responds in a way they weren’t expecting.
When something that used to take effort now doesn’t.
That’s how the future usually arrives. Sideways. In personal, ordinary moments.
And what’s coming now isn’t louder technology. It’s more personal technology.
It’s technology moving closer to us.
This isn’t about smarter tech. It’s about proximity.
For a long time, the conversation has been about smarter systems. Faster systems. More powerful systems.
The signal I’m watching now is different.
It’s proximity.
Technology is moving:
- closer to our bodies
- closer to how we think and respond
- closer to the decisions we make every day
That’s why a lot of people feel unsettled without being able to explain why. The change isn’t abstract anymore. It’s right there, next to them.
Let me walk you through a few examples we explored on air.
Emotion-sensitive AI isn’t about feelings. It’s about reading the situation.
One of the clearest signals this year is what’s being called emotion-sensitive AI.
The name makes people nervous. It sounds like machines developing emotions.
They’re not.
What they’re getting better at is reading context.
Not just what we type or say, but how we do it. The pace. The pauses. The subtle signs that someone is confused, tired, overloaded or disengaged.
Think about education.
An AI tutor that just keeps pushing information is only marginally useful. It’s a faster worksheet. But a system that notices when a student is struggling, slows down, explains things differently, or changes its tone becomes something else entirely.
Not human. But far more helpful.
The same idea applies to cars that notice driver fatigue, software that adapts when users are overwhelmed, or services that stop treating everyone like an average case.
The upside is obvious. Less unnecessary effort.
The risk is quieter. Being interpreted without realising it.
This is where Decision Trust Zones start to matter. Who is interpreting behaviour, and what are they allowed to do with that interpretation?
Brain-computer interfaces arrive quietly, and that’s deliberate.
We also talked about brain-computer interfaces. Or BCIs.
They sound dramatic. In reality, they’re already here.
They just tend to show up in very practical places.
Think of a lightweight headset that reads neural signals. Not mind reading. Signal translation. Turning intention into action.
These systems have been used for years to help people with limited mobility move wheelchairs, type, speak, or control devices using thought alone.
That’s the real shift.
This isn’t about making humans superhuman. It’s about removing friction between intent and execution.
From a HUMAND perspective, which is how I frame the future of work as tasks shared between humans, machines and AI, this is a clean example.
Humans decide. Machines do. AI translates between the two.
The question isn’t whether we can do this.
It’s where we should and where we shouldn’t.
Screens don’t disappear. They stop being flat.
Another signal that keeps coming up is spatial computing becoming cheaper and easier to access.
This isn’t about living in virtual worlds. It’s about information stepping off the screen and into the space around us.
Training is where this lands first.
Learning how something works by seeing it in front of you. Walking through a process instead of reading about it. Collaborating in shared environments rather than staring at rectangles on a screen.
The risk here isn’t technology. It’s excess.
Just because we can layer information everywhere doesn’t mean it helps. Sometimes it just gets in the way.
Good foresight isn’t about adding more. It’s about knowing when enough is enough.
Energy shifts change behaviour before they change beliefs.
Not everything that matters looks futuristic.
Night-time solar and solid-state batteries are good examples. They don’t feel exciting, but their ripple effects are significant.
When energy becomes reliable and continuous, people stop thinking about it. Homes behave differently. Vehicles charge faster. Outages become less common.
The biggest shift isn’t technical. It’s psychological.
Stable infrastructure creates a sense of calm that people rarely notice until it disappears. That’s an Inhabitable Futures issue, not just an energy one.
Invisible systems create invisible responsibility gaps.
AI-driven power grids are another quiet shift.
They work by acting faster than humans can. That’s the advantage.
But it also raises a harder question. When systems make decisions no one sees, who is accountable when something goes wrong?
These are the kinds of technologies you only notice when they fail.
Which is why preparation matters more than prediction. Clarity about oversight needs to exist before it’s tested.
The common thread is closeness.
When you zoom out, the pattern is clear.
Technology isn’t just getting smarter. It’s getting closer.
Closer to our bodies.
Closer to our thinking.
Closer to our judgement.
That’s why the future feels more uncertain for some people at the moment. It’s not arriving as spectacle. It’s arriving quietly in everyday moments.
The work now isn’t to guess what comes next. It’s to decide, deliberately, where humans stay involved, where we accept assistance, and where we draw boundaries rather than drifting into them.
So what do you actually do with this?
This isn’t about quick wins or tactical fixes. It’s about orientation.
Here are three moves that matter right now.
1. Track closeness, not just capability.
Most organisations track technology by features and performance.
Start tracking something else.
Notice where systems are moving closer to people:
- reading more about behaviour
- acting earlier in the decision chain
- removing effort without asking
That’s where real change is happening.
If something suddenly feels easier, ask what moved closer to make that possible.
That question builds foresight muscle.
2. Decide early where humans must stay in the loop.
Don’t wait until a system is already embedded.
Have the conversation early:
- where judgement matters more than speed
- where context matters more than efficiency
- where interpretation shouldn’t be automated
If you don’t decide intentionally, it gets decided by default.
That’s not anti-technology. It’s responsible leadership.
3. Make decisions you’re willing to revisit.
One of the quiet shifts underneath all of this is timing.
Decisions don’t hold for as long as they used to.
That doesn’t mean they’re wrong. It means they’re provisional.
The posture I come back to is simple:
I’m making the best decision I can right now with what I know and have, knowing I’ll make another one when I know and have more.
That mindset reduces anxiety and increases adaptability.
It’s also how humans stay valuable in systems that keep learning.
Listen to the radio conversation (17 minutes 59 seconds)
These aren’t abstract futures. They’re already shaping daily life and work.
You can’t predict exactly how they’ll show up for you.
But you can prepare for how close they’re going to feel.
Choose Forward.
#MorrisMisel #StrategicForesight #FutureOfWork #LeadershipThinking #HumanCentredAI #DecisionMaking #PreparingForTheFuture #EmergingTechnology #ExecutiveStrategy #KeynoteSpeaker #Foresight #BusinessLeadership #CLevel #EventOrganisers #FutureReady #MediaCommentary