What next for virtual me?
Since writing my last piece, a lot of people have come up to me to talk about it - it’s clearly piqued some interest. Some curious, some slightly unsettled, a few clearly wondering if this is where things start to get a bit weird.
It even fooled a couple of members of my own family, which, depending on your perspective, is either a great demo… or mildly concerning.
Because once you can make a digital version of yourself, the next question isn’t can you? — it’s when should you?
The difference between a tool and a proxy for being human
Let’s start with the obvious.
There are entirely reasonable, even brilliant, use cases for digital human twins:
Scaling educational content without endless re-recording
Localising communication across languages
Creating consistent internal comms for global teams
Accessibility — giving people a voice they might otherwise struggle to project
Used like this, a digital twin is closer to a microscope than a mask. It enhances what’s already there. It doesn’t pretend to be something else.
But there’s a subtle line here, because the more convincing these avatars become, the more they drift from being tools of communication into stand-ins for presence. And that’s where things get more interesting.
When the message keeps talking after the person stops
Let’s take that idea one step further - what happens when the person behind the avatar is no longer around?
This is where things move from clever to complicated.
In a recent episode of AI Confidential, mathematician and communicator Hannah Fry speaks to Justin Harrison — founder of an AI startup called You Only Virtual. You can see the clip here.
His idea sits within what’s increasingly being called “grief tech”: using AI to recreate aspects of people who have died. In the case of You Only Virtual, it’s their voice and conversational style, which you can even access by phone.
In other words, this is a digital twin that doesn’t just scale your communication but outlives you.
On one level, you can see the appeal. Humans are storytellers. We leave voicemails, letters, photos, videos — all attempts to stretch our presence beyond the limits of time.
This is just a more advanced version of that instinct.
But it also raises a question that feels less technical and more philosophical:
Are we preserving memory… or simulating existence?
The emotional physics of “still being there”
Grief, at its core, is about absence.
It’s one of the few truly universal human experiences — a process that has played out, in different forms, across every culture and every era. Rituals change. Beliefs vary. But the underlying arc is remarkably consistent.
Someone is here, and then they’re not, and we learn, slowly and imperfectly, how to live in that new reality. I’m all too familiar with this, having lost my mother 18 months ago.
Now imagine introducing a system that can speak back.
Not perfectly, but convincingly enough.
Grief has always involved maintaining a connection to people who are no longer there. People talk to loved ones internally; I know that I do. We revisit memories, we keep objects, we maintain a sense of connection.
So in principle, the idea of “staying connected” isn’t new; it’s deeply human. But traditionally, that connection doesn’t talk back.
When it does, we’re no longer just remembering; we’re interacting. And I’m no psychologist, but that subtle shift might be where things stop being comforting and start becoming complicated.
On one hand, this could be comforting — a way to revisit stories, hear a familiar voice, keep a connection alive. A kind of digital echo.
But surely grief isn’t just about remembering; it’s about processing loss and allowing the mind to reconcile with absence.
So what happens if absence never fully arrives?
If you can message, hear, or “interact” with a version of someone who’s gone, are you preserving memory or are you interrupting the process that helps you move forward?
At what point does comfort become dependency?
And at what point does that dependency become something closer to harm?
There’s a slightly uncomfortable analogy here.
It’s a bit like picking at a scab - there’s a strange relief in it, a sense of returning to something familiar, even if it slows healing. Do it occasionally, and maybe no real damage is done. But done repeatedly, compulsively, it can stop the wound from ever properly closing.
Grief needs space to settle. To scar, in the healthiest sense of the word.
If technology keeps reopening that space — even gently, even with good intentions — we have to ask what the long-term outcome looks like.
Not just emotionally, but behaviourally.
Does it prolong grief?
Does it reshape attachment?
Does it create a kind of emotional limbo — where someone is neither fully gone nor meaningfully present?
It’s a bit like Schrödinger’s cat, but emotional rather than quantum.
The person is both gone and not quite.
And humans, historically, aren’t great at holding those two states at once.
Authenticity, consent, and the long tail of identity
Then there are the practical — and slightly more uncomfortable — questions.
Who owns your digital likeness after you die?
Who decides how it’s used?
What happens if it says something you never would have said?
Because a digital twin isn’t you; it’s a model trained on patterns of you, quite likely from a very limited data set.
Close enough to feel real at first, but without authentic training data to anchor it, it will drift over time.
Which means we’re not just talking about preserving identity — we’re talking about extending it into a space where it can evolve without you.
Historically, your story ended when you did. Now it might continue.
Semi-autonomously. Iteratively. Potentially indefinitely.
It’s less like writing a memoir, more like releasing an open-source version of yourself.
So… what is “appropriate” use?
This is where I think a useful distinction emerges.
Appropriate use feels like amplification.
Helping a real, living person communicate more clearly, more consistently, more accessibly.
Problematic use starts to look like substitution.
Replacing presence, simulating relationships, or extending identity beyond meaningful consent.
The closer a digital twin gets to standing in for a human relationship, the more ethically fragile it becomes.
And that fragility isn’t just technical; it’s emotional, social, even cultural.
Because communication isn’t just about information transfer; it’s about trust.
The storytelling lens
From a storytelling perspective, this is fascinating.
We’ve always used technology to stretch narrative — from cave paintings to cinema to social media.
But this might be the first time we can stretch the storyteller themselves.
Not just what they said, but how they said it.
That’s powerful, but it doesn’t necessarily come with a built-in sense of restraint.
A quiet question to sit with
So here’s the question I keep coming back to:
If you could create a version of yourself, or of a loved one, that keeps speaking after the person is gone, would you?
I would dearly love to speak to my mum again, hear her voice and have a chat, but I don’t think this technology is one I’ll be using.
Curious to hear how others are thinking about this.