Could a Drone Walk My Dog?
A January conversation about wet pavements, flying robots, and why the hardest problems start with the smallest questions.
January has a particular talent for draining enthusiasm.
The lights are gone, the novelty of the New Year has worn thin, and the weather feels like it’s actively discouraging ambition. The kind of cold that doesn’t inspire bracing walks or fresh starts, just damp socks and regret.
My dog, however, remains unmoved by seasonal reality.
So there I am, standing in the doorway, rain tapping insistently on the pavement, dog staring up at me with the quiet confidence of someone who knows they won’t be the one getting soaked.
And that’s when the thought arrives. Not a sensible thought. Not a useful thought. But a persistent one.
Could a drone walk my dog?
Not should. Just… could.
It’s the sort of question you ask half-jokingly, assuming common sense will deal with it later. Drones don’t have hands. Dogs pull. Trees exist. Laws exist. End of conversation.
Except the question doesn’t go away. It sits there. It pokes.
So I do what we all do. I Google it.
Which is how I end up reading a LinkedIn post by Ciprian Zamfir that mentions a dog-walking drone, and suddenly I’m no longer sure if I’m being silly or if I’ve stumbled into something much bigger.
A few messages later, we’re on a Google Meet. Indoors. Dry. Sensible.
“That’s already the wrong question”
I ask him straight out.
“So… could it?”
Ciprian doesn’t answer. Not immediately. Instead, he smiles in the way people do when they recognise a trap they’ve spent years learning to avoid.
“That’s already the wrong question,” he says.
This is my first clue that I’m not talking to someone who builds things by instinct. I’m talking to someone who builds them by resisting instinct.
“Most people,” he continues, “jump straight to a solution. A system engineer doesn’t. We slow down and ask what problem we’re actually trying to solve.”
It’s mildly frustrating. I want a yes or a no. But I also know—uncomfortably—that he’s right.
The dog-walking drone isn’t a product idea. It’s a thinking exercise.
Because the moment you say “a drone walking a dog,” you’ve already made assumptions. About the dog. About the drone. About what “walking” even means.
And systems engineering, it turns out, lives in those assumptions.
The dog is not cooperating
“Let’s start with something simple,” Ciprian says. “Who is this system for?”
Easy, I think. Me. The owner. The soggy human.
He shakes his head.
“You’re a stakeholder,” he says. “Not the stakeholder.”
There’s the dog, for a start. Not a passive payload, but a moving, reacting, occasionally irrational participant with its own agenda. There are regulators, manufacturers, service engineers, insurers, neighbours, local authorities, aviation bodies.
And then there’s the bit I hadn’t considered at all.
“Your dog has intent,” Ciprian says.
Intent. Not behaviour. Not movement. Intent.
That single word shifts the ground under the whole idea.
Because now we’re not talking about a flying leash. We’re talking about a system that has to interpret another living thing’s decisions in real time. A thing that might suddenly stop. Or lunge. Or decide—without warning—that today is the day we die fighting that squirrel.
The problem just got bigger. Much bigger.
Everything is fine, until it isn’t
Ciprian starts listing scenarios. Not dramatically. Just methodically.
The dog walks calmly. The dog stops. The dog pulls. The dog really pulls. A squirrel appears. A bin lorry appears. It starts raining harder. Then wind. Then fog.
Each one is a small twist. None of them are unlikely. All of them matter.
This, he explains, is where systems live or die. Not in the perfect scenario, but in the awkward edges—the moments designers quietly hope won’t happen.
Most people design for the “happy path.” System engineers don’t trust it.
“What happens,” he asks, “if the dog suddenly applies more force than expected? What’s the maximum the system can respond with before it causes harm?”
I picture a drone hovering uncertainly while my dog drags it sideways into a hedge.
The thought experiment has officially stopped being funny.
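To make his question concrete, here’s a rough sketch of the kind of rule a leash controller would need. It’s my own illustration, not anything Ciprian described, and the force numbers are invented. The point is the shape of the logic: match the pull, but never escalate past a ceiling you’ve decided is safe for the dog.

```python
# Hypothetical sketch: limiting how hard a "drone leash" may pull back.
# Names and numbers are illustrative assumptions, not a real design.

MAX_SAFE_FORCE_N = 30.0   # assumed ceiling before the system risks harming the dog
SLACK_FORCE_N = 2.0       # below this, just keep gentle tension on the lead

def corrective_force(measured_pull_n: float) -> float:
    """Return the counter-force (in newtons) to apply for a measured leash pull."""
    if measured_pull_n <= SLACK_FORCE_N:
        return SLACK_FORCE_N                     # keep the lead from going fully slack
    # Match the pull, but never exceed the safety ceiling.
    # If the dog out-pulls the ceiling, the system yields rather than escalates.
    return min(measured_pull_n, MAX_SAFE_FORCE_N)

print(corrective_force(5.0))    # calm walking: match it
print(corrective_force(80.0))   # squirrel: clamp at 30.0 and yield
```

Everything hard lives in choosing that ceiling, and in deciding what the drone does once it has to yield.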
The bit everyone avoids
The conversation shifts again. This time into territory people don’t like to linger in.
Cyber security.
“People ignore it,” Ciprian says, “because they don’t want to think about it.”
Not because it’s too complex—but because it forces you to confront how exposed you really are.
This reminds me of a demonstration I once saw while filming at the National Cyber Security Centre, where everyday connected devices—children’s toys, smart home gadgets—were compromised and used to unlock doors. Not through cinematic hacking, but through mundane oversights.
Now apply that thinking to autonomous systems.
Vehicles in ports. Trucks in freight depots. Drones in public spaces.
Suddenly cyber security isn’t about stolen data. It’s about physical consequence. A single vulnerability cascading into real-world paralysis.
And crucially, this isn’t IT security transplanted into new places. Automotive and robotic systems behave differently. The stakes are different. The feedback loops are faster.
A spreadsheet error is inconvenient. A compromised autonomous system is dangerous.
Why autonomy keeps tripping over reality
We end up talking about autonomous vehicles, because of course we do.
On paper, the problem seems solvable. Sensors get better. Software gets smarter. Compute gets faster.
But the real world refuses to cooperate.
Road signs change by country. Lane markings fade. People behave unpredictably. Sensors all fail in different ways. Cameras struggle with glare. Radar struggles with certain materials. Weather interferes with everything.
So systems rely on fusion—combining imperfect signals and deciding which ones to trust, moment by moment.
And sometimes, that trust is misplaced.
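To give a flavour of what “deciding which ones to trust” means in practice, here’s a toy sketch of confidence-weighted fusion. It illustrates the idea only; the sensors, readings, and confidence numbers are made up, and real pipelines are far more elaborate.

```python
# Hypothetical sketch: confidence-weighted fusion of distance estimates.
# Sensor names, readings, and confidences are invented for illustration.

def fuse(estimates):
    """Combine (value, confidence) pairs into one weighted estimate."""
    total_weight = sum(conf for _, conf in estimates)
    if total_weight == 0:
        return None  # nothing trustworthy: the system should slow down, not guess
    return sum(value * conf for value, conf in estimates) / total_weight

# Glare has crushed the camera's confidence; radar sees little of the obstacle.
readings = [
    (120.0, 0.1),   # camera: distance in metres, low confidence in bright sky
    (40.0, 0.3),    # radar: struggles with this material
    (38.0, 0.8),    # lidar: currently the most trusted
]
print(round(fuse(readings), 1))  # weighted toward whichever sensor is trusted most
```

The catch, of course, is that the weights themselves are estimates. Trust the wrong one and the fused answer looks confident while being wrong.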
Ciprian mentions a well-known accident where a camera couldn’t distinguish a white trailer from a bright sky. A single blind spot. A single wrong assumption.
The consequence was fatal.
This is why full autonomy keeps retreating into controlled environments—ports, airports, depots—places where unpredictability is reduced, not eliminated.
The long road, and the slow one
There’s another force dragging everything down to a crawl: responsibility.
At higher levels of autonomy, liability shifts. From driver to manufacturer. From manufacturer to supplier. From software to hardware to everyone in between.
That changes the question again.
It’s no longer “can we make it work?” It’s “can we guarantee it won’t fail?”
And guarantees are hard when vehicles last decades, share roads with older systems, and operate in environments that change faster than regulation can keep up.
Ciprian is blunt about it.
“Full Level 4 autonomy everywhere?” he says. “Not for a long time. Twenty, thirty years.”
Not because engineers aren’t capable. But because the system is bigger than the technology.
Back in the rain
Later that day, I’m back where I started. At the door. Lead in hand. Dog vibrating with optimism.
No drone. No autonomy. Just a coat that isn’t quite waterproof enough.
And yet, the walk feels different.
Because the question was never really about the dog.
It was about how easily we leap to solutions. How rarely we slow down to understand the system we’re stepping into. How often the hardest problems are hiding inside the most mundane frustrations.
Could a drone walk my dog?
Maybe. Under the right conditions. With the right safeguards. In the right environment.
But the more interesting question—the one worth sitting with—is:
What would it take to trust it?
And that’s a January question if ever there was one.
I'll release Part 2, from the dog's POV, tomorrow!