I can now make a virtual me, but should I?
It’s something we get asked increasingly often - can we reproduce a real person using AI?
It started small: an interviewee getting a line wrong or mis-saying an acronym, fixed with simple voice amends. That alone raised obvious questions about where you draw the line, but it has since led on to the video side of things in a much bigger way.
Questions like:
Our CEO hates presenting - can you make an AI version of him so in future we can just feed it the script?
and
Someone has left the company - can you replace her in that sales film with an AI version?
If you’ve come across platforms like HeyGen, Synthesia, or 2wai, you’ve seen them: digital avatars so lifelike that even if you knew the person well, you’d still have to look hard to realise it’s not really them.
Well, we love an experiment here, so we thought: why not? Meet virtual Chris!
Now these aren’t cartoon characters or stylised animations. These are digital twins — AI-generated representations that look, speak, and move like a real person, trained on existing footage and voice recordings. Virtual Chris was trained on just 3 minutes of me talking to camera. That’s all. Nothing else.
These avatars aren’t just a sci-fi novelty anymore. They’re rapidly entering corporate training, marketing, entertainment, and even commerce — and are now influencing how billions of people perceive reality in motion. But beyond the technological marvel lies a set of profound cultural and ethical quandaries: what does it mean when a digital ghost can act, speak and sell as if it were you?
From CGI to Continuity
Traditional CGI characters are clearly artificial. Even the most advanced animation carries perceptual distance: we know we’re watching something constructed.
Digital twins are different.
They trade not on spectacle but on continuity. They resemble real people closely enough that viewers assume alignment between the on-screen performance and the off-screen individual. In fact, digital twins are now so realistic that audiences may not realise that the person speaking to them isn’t the “real” human at all. Research into human–AI interaction shows that people respond socially to synthetic agents when visual and behavioural cues cross a realism threshold, even when they know the agent is artificial (Nass & Moon, 2000; Reeves & Nass, The Media Equation).
That assumption does most of the work.
Technically, these systems combine generative models for video, voice synthesis, and language, allowing an avatar to deliver new content from text prompts alone. Practically, this means a person can now “appear” in thousands of videos without being present for any of them.
That’s powerful. It’s also destabilising.
What Digital Twins Are Being Used For
The use cases are easy to sell.
Digital twins promise scale without exhaustion: multilingual explainers, consistent brand messaging, personalised content delivered on demand. In corporate settings, they’re pitched as tireless presenters. In education, as endlessly patient tutors. In marketing, as brand ambassadors who never miss a briefing. That CEO who hates presenting? A digital twin might be the ideal solution: film them once, then simply feed it scripts.
Studies on virtual influencers suggest that perceived authenticity and expertise strongly affect trust and persuasion—even when users are aware that the influencer is synthetic (Miao et al., 2022; Jin et al., 2019). The problem isn’t that people are fooled. It’s that realism still works even when we know better.
For creators, digital twins offer something even more tempting: decoupling presence from time.
But the more interesting question for me isn’t what they can do.
It’s what they carry with them.
When the Twin Borrows Your Authority
There’s another layer to the digital twin debate that tends to get less airtime than consent forms and contracts, but may be more dangerous in practice: borrowed authority.
A digital twin doesn’t just replicate your face or voice. It inherits something far more valuable — the trust capital you’ve accumulated over years, sometimes decades, of professional work. Reputation is slow to build and easy to damage, but digital systems don’t respect that asymmetry.
If a digital version of me appears on screen — speaking confidently, framed professionally, perhaps introduced with a title — viewers don’t see “a generated model”. They see continuity. They assume alignment between the digital performance and the real person behind it.
That assumption is where the risk lies.
Authority doesn’t need to be explicit to be effective. It’s often implied through context. A figure in a lab coat, standing at a bench surrounded by equipment, carries an inherited scientific legitimacy regardless of whether they are real, qualified, or even human. The visual language does the work. The same is true of professional titles. A name prefixed with “Dr” subtly shifts how information is received — it signals expertise, credibility, and trustworthiness before a single word is spoken.
Digital twins can exploit these signals with unsettling efficiency.
An AI avatar that looks like a scientist, sounds confident, and resembles a known professional can influence behaviour, opinions, or decisions in ways that feel natural rather than coercive. The viewer isn’t persuaded through argument alone, but through symbolic authority — the quiet social shorthand that tells us who to believe.
The ethical problem is that this authority can be mobilised with or without the ongoing consent of the person it’s borrowed from.
Even if a digital twin was created legitimately, its future use may drift. Contexts change. Messages evolve. A twin trained to explain research findings today could, tomorrow, be used to endorse a product, support a political stance, or communicate claims the real person would never make. The appearance of endorsement can exist even when the intent does not.
This creates a form of reputational leverage that is uniquely powerful — and uniquely hard to monitor.
In traditional media, misuse of a person’s authority required their presence, or at least their explicit participation. With digital twins, authority becomes decoupled from agency. The face remains; the decision-making does not. That separation opens the door to manipulation, whether commercial, ideological, or informational.
And crucially, audiences are not well equipped to defend themselves against this. Humans are cognitively wired to trust familiar faces, professional cues, and institutional symbols. We don’t interrogate them every time — we can’t. When those signals are synthesised and scaled, they can be used to nudge belief subtly rather than persuade overtly.
This is where misuse shades into something more systemic.
If digital twins can trade on accumulated professional trust, then we’re no longer just talking about identity rights or image licensing. We’re talking about the ethics of delegated credibility. Who gets to speak as you? Under what constraints? And how clearly does the audience understand that the authority they’re responding to may be simulated?
Without strong norms of disclosure, contextual limitation, and revocable control, digital twins risk becoming tools that launder influence through familiarity. They don’t invent authority — they reuse it.
And once trust becomes a programmable asset, the incentives to stretch, distort, or quietly repurpose it become very hard to resist.
Authority at Scale: The Khaby Lame Inflection Point
This is why the recent news around Khaby Lame matters far beyond influencer culture.
Khaby’s operating company was reportedly acquired in a deal valued close to a billion dollars, with a key component being the rights to deploy an AI version of his likeness—capable of producing multilingual content and appearing continuously across markets.
The value here isn’t technological novelty. It’s trust at scale.
Khaby’s persona was built slowly, through millions of small moments of recognition and consistency. His digital twin doesn’t just resemble him—it trades on the audience’s emotional familiarity with him. That’s what makes it commercially explosive.
But it also exposes the core risk.
Once authority is separated from agency, reputation becomes a reusable asset. Something that can be redeployed, reframed, or extended long after the person it originated from would have chosen differently.
This mirrors concerns raised in emerging research on pre-mortem digital twins, which warns that identity replication without ongoing control risks misalignment between representation and intent (Öhman & Floridi, 2023; arXiv 2502.21248).
This isn’t a bad-actor problem. It’s a structural one.
So Where Do We Go From Here?
We’re at a crossroads:
On one hand, digital twins and AI avatars offer efficiency, creative freedom, and personalised engagement.
On the other, they challenge legal rights, personal identity, authenticity, and the nature of trust itself.
This isn’t a technological problem alone — it’s a societal design challenge. The frameworks we establish now — around consent, transparency, contract structures, and rights over one’s digital likeness — will shape how people relate to AI for decades.
If creators like Khaby can turn their likeness into a corporate asset valued close to a billion dollars, imagine what will happen when every public figure, educator, CEO or parent faces the same choice.
The question isn’t whether digital twins will exist — it’s how we treat them once they do.
Should a Digital Twin Say What It Is?
At some point, every debate about digital twins runs into a deceptively simple question: should they have to say they’re not real?
On the surface, the answer feels obvious. Of course they should. Transparency builds trust. Deception erodes it. Problem solved.
But in practice, it’s more complicated — and more revealing — than that.
The issue isn’t just whether a digital twin declares itself, but how, when, and in what form that declaration is made. A tiny disclaimer buried in metadata or terms of service may satisfy legal requirements, but it does very little to inform an audience in the moment where trust is actually being formed.
Human perception doesn’t operate on footnotes.
If a digital twin looks directly into the camera, speaks fluently, and occupies a familiar professional context, the viewer’s brain processes that interaction as social first, analytical second. By the time a disclaimer is noticed — if it’s noticed at all — the authority has already landed.
That means disclosure has to compete with realism. And realism usually wins.
This raises an uncomfortable possibility: a disclosure that doesn’t meaningfully interrupt the illusion isn’t really disclosure at all. It’s theatre. A box ticked after the fact.
So what would meaningful declaration look like?
One approach is persistent visual signalling — a visible marker that remains on screen, not just at the start or end. This could be as simple as a clear label (“AI-generated digital twin”) or a recognisable visual frame that distinguishes synthetic performances from human ones. The key is persistence. If the marker disappears, so does the reminder.
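To make this concrete, here is a minimal sketch of one way persistent signalling could work in practice: burning a permanent disclosure label into every frame of a finished video. It assumes ffmpeg is installed with the drawtext filter and a usable default font (on some builds you may need to specify a font file), and the file names are placeholders; this is one illustrative route, not a standard.

```python
# A minimal sketch of persistent visual signalling: burn a permanent
# "AI-generated digital twin" label into every frame of a video.
# Assumes ffmpeg is on PATH and built with the drawtext filter; on some
# builds you may need to pass fontfile= explicitly.
import subprocess

def burn_in_ai_label(src: str, dst: str, label: str = "AI-generated digital twin") -> None:
    """Overlay a disclosure label that stays on screen for the full duration."""
    drawtext = (
        f"drawtext=text='{label}':"
        "x=20:y=20:fontsize=28:fontcolor=white:"
        "box=1:boxcolor=black@0.5:boxborderw=10"  # translucent backing box for legibility
    )
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-vf", drawtext, "-c:a", "copy", dst],
        check=True,  # fail loudly rather than silently shipping an unlabelled video
    )

# Example (hypothetical file names):
# burn_in_ai_label("virtual_chris_raw.mp4", "virtual_chris_labelled.mp4")
```

The detail that matters is that the label is rendered into the pixels themselves, so it travels with the video through re-uploads and re-encodes in a way that metadata flags or end-cards don’t.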
Another is contextual declaration. Not just what the digital twin is, but what it is authorised to do. Is it delivering pre-approved educational material? Representing a brand in limited contexts? Speaking only within defined subject boundaries? This shifts disclosure from identity alone to scope of agency, which is where most ethical ambiguity lives.
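As an illustration of what a scope-of-agency declaration might contain, here is a hypothetical sketch in Python. None of the field names come from an existing standard or product; the point is simply that disclosure can describe permissions and boundaries, not just identity.

```python
# A hypothetical "scope of agency" record for a digital twin. The structure
# and field names are illustrative only, not an existing standard.
from dataclasses import dataclass, field

@dataclass
class TwinScope:
    subject_name: str              # the real person the twin is derived from
    disclosure_label: str          # the declaration shown to viewers
    approved_contexts: list[str]   # where the twin may appear
    allowed_topics: list[str]      # subject boundaries it may speak within
    prohibited_uses: list[str] = field(default_factory=lambda: [
        "product endorsements",
        "political statements",
        "claims presented as personal opinion",
    ])
    consent_review_date: str = "TBC"  # consent should be time-limited and revocable

# A hypothetical scope for "virtual Chris":
virtual_chris = TwinScope(
    subject_name="Chris",
    disclosure_label="AI-generated digital twin",
    approved_contexts=["internal training videos", "pre-approved explainers"],
    allowed_topics=["video production process"],
)
```

Whether something like this lives in a contract, a content-credentials manifest or an internal approvals workflow matters less than the fact that it exists, is visible, and can be checked against how the twin is actually used.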
There’s also the question of tone. If disclosure is delivered in the same confident, authoritative voice as the content itself, it risks being absorbed as part of the performance. Paradoxically, a more neutral or system-level declaration may be more honest — even if it feels less elegant from a production standpoint.
Some argue that explicit declaration breaks immersion, undermining the very usefulness of digital twins. That’s true — and it’s also the point.
If a digital twin’s effectiveness relies on the audience forgetting what it is, then we should be honest about the kind of influence being exercised. Transparency that slightly degrades performance may be ethically preferable to realism that quietly trades on misplaced trust.
Ultimately, the question isn’t whether audiences can be told. It’s whether we’re willing to accept a small loss of persuasion in exchange for a large gain in trust.
Because once the default assumption becomes that on-screen humans might not be human at all, silence stops being neutral.
It becomes misleading.
And in that environment, declaration isn’t just a courtesy.
It’s the ethical minimum.
What This Means for Video Practice, Not Just Theory
For those of us working in video production, this isn’t an abstract future problem — it’s already a practical one. Every decision about framing, casting, context, and disclosure shapes how authority is perceived on screen. As digital twins and AI avatars enter the production toolbox, ethical responsibility doesn’t stop at whether something can be made convincingly. It extends to whether it should, how clearly its nature is communicated, and whose trust is being invoked in the process.
In our own practice, that means treating credibility as something to be protected, not exploited; designing transparency into the creative choices, not relegating it to legal footnotes; and recognising that realism is never neutral.
If video is one of the primary ways people now decide what — and who — to believe, then ethical production isn’t about resisting new tools. It’s about using them in ways that preserve trust rather than quietly cashing it in.
Sidenotes:
For marketers
Digital twins can scale trust quickly—but misuse risks long-term brand damage. If audiences feel misled, even technically “ethical” deployments can backfire. Disclosure isn’t a legal safety net; it’s a reputational one.
For educators
AI avatars can widen access and personalise learning, but authority matters. A digital lecturer inherits credibility by default. Clear signalling about what is human-led, AI-assisted, or fully synthetic is essential to preserve epistemic trust.
For scientists and experts
Titles, environments, and professional cues amplify perceived truth. A digital twin in a lab coat speaks with authority whether it deserves it or not. Guardrails around context, scope, and endorsement are no longer optional—they are professional safeguards.