Is Seeing Believing?

Might Transparency be the Missing Gear in Trusting Autonomous Vehicles?

BLUESKYE AI

If you want people to trust a vehicle with their lives, how about showing them how it thinks?

From the video production work we’ve done with Zenzic on their groundbreaking CAM Scale-Up UK programme, I’ve seen first-hand some of the incredible technologies that will enable the development and deployment of autonomous mobility, particularly from the most recent cohort, which is leveraging AI methodologies across the sector.

However, what has also become clear to me is that in order for us to see the full benefit of autonomous mobility, we need widespread adoption and acceptance by the general public.

Public trust in autonomous vehicles doesn’t hinge purely on performance — it also hinges on perception. A flawless safety record means little if people still feel uneasy about what’s happening under the hood. For me, trust in this context isn’t built on blind faith. It’s built on understanding.

No matter how good the data looks, people won’t hand over control to something they can’t comprehend. Courses that treat a fear of flying, for example, often hinge on explaining, and then genuinely understanding, how a plane stays in the air, however unlikely flight looks.

That’s where a functional visualisation can help. It’s a way of showing how a system works, not through exhaustive technical accuracy but through meaningful clarity. In my view, a functional visualisation doesn’t have to be perfectly accurate; it just needs to make sense to the person looking at it. The goal isn’t to expose every line of code, but to help people understand what’s going on at a level they can comfortably take in.

The best visualisations translate complexity into something intuitive — showing how an autonomous car “notices” a cyclist, “thinks” through a junction, or “chooses” a safe route. The point isn’t to simulate machine vision, but to offer a human-scale logic we can follow.
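As a rough illustration of what such a visualisation could draw on, here is a small, hypothetical sketch that turns simplified planner outputs into plain-English narration. The data structure, field names, and example values are invented for the purpose of the example; they aren’t taken from any real autonomy stack.

```python
from dataclasses import dataclass

# Hypothetical, heavily simplified perception/planning output. A real AV
# stack exposes far richer (and far less readable) data than this.
@dataclass
class Detection:
    kind: str          # e.g. "cyclist", "pedestrian", "vehicle"
    distance_m: float  # approximate distance ahead, in metres
    action: str        # what the planner intends to do in response

def narrate(detections: list[Detection]) -> list[str]:
    """Translate planner state into human-scale sentences a passenger could follow."""
    lines = []
    for d in detections:
        lines.append(
            f"I can see a {d.kind} about {d.distance_m:.0f} metres ahead, "
            f"so I am going to {d.action}."
        )
    return lines

if __name__ == "__main__":
    frame = [
        Detection("cyclist", 32.0, "hold back and leave extra room"),
        Detection("pedestrian", 12.5, "slow down before the crossing"),
    ]
    for line in narrate(frame):
        print(line)
```

The point of a sketch like this isn’t engineering fidelity; it’s that the output reads as reasoning a human can follow, which is exactly the job of a functional visualisation.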

SAIF Autonomy testing at Horiba Mira

Of course, the problem here is that AI is a black box; by its nature it is stochastic, or non-deterministic. This is why the excellent people at SAIF Autonomy and Deontic are developing systems that add a deterministic layer to autonomy, so that the black box is filtered through human-defined, and therefore human-understandable, rules and regulations.
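To make that idea concrete, here is a minimal, purely illustrative sketch of a deterministic rule layer sitting over a stochastic planner. The rules, thresholds, and function names are my own assumptions for the example; they are not drawn from SAIF Autonomy’s or Deontic’s actual systems.

```python
# Illustrative only: a deterministic, human-readable rule layer that vets
# whatever a stochastic planner proposes. All names and numbers are assumed.

def stochastic_planner_proposal() -> dict:
    """Stand-in for the black box: in reality this would be an ML model
    whose output can vary from run to run."""
    return {"manoeuvre": "overtake", "target_speed_kph": 68, "gap_to_cyclist_m": 1.2}

# Human-defined rules, applied the same way every time, described in plain language.
RULES = [
    ("keep to the 60 kph limit here", lambda p: p["target_speed_kph"] <= 60),
    ("leave at least 1.5 m when passing a cyclist", lambda p: p["gap_to_cyclist_m"] >= 1.5),
]

def filter_proposal(proposal: dict) -> tuple[bool, list[str]]:
    """Accept the proposal only if every deterministic rule is satisfied;
    otherwise report which rules it broke, in words a person can read."""
    violations = [name for name, check in RULES if not check(proposal)]
    return (len(violations) == 0, violations)

if __name__ == "__main__":
    proposal = stochastic_planner_proposal()
    ok, why_not = filter_proposal(proposal)
    if ok:
        print("Proposal accepted:", proposal)
    else:
        print("Proposal rejected because:", "; ".join(why_not))
```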

And when people can see a system’s reasoning, even in a hugely simplified form, it shifts from feeling alien to feeling trustworthy.

That’s not just good communication — it’s probably good ethics too. As decision-making shifts from human drivers to algorithms, do we have a moral duty to make those systems visible and interpretable? Without that transparency, autonomy becomes a black box — one that demands compliance rather than earning consent.

If we want society to embrace autonomous systems — on the roads, in healthcare, or beyond — and the benefits of doing so are very clear, we need to design for trust, not just performance. That means building technologies that don’t simply work, but show their work.

Because public confidence won’t come from promises of perfection. It will come from openness — from seeing how the logic unfolds in real time, and knowing that it makes sense.

One of the most encouraging aspects of the recent Zenzic Pathfinder launch, for me, was the breadth of stakeholders at the event: the tech developers you’d expect, Government policy driven by the Centre for Connected and Autonomous Vehicles, world-leading testbeds such as HORIBA MIRA, and also Local Authorities. If we are going to deploy CAM broadly, we need everybody to be welcome in the CAM room, and this needs to include the Local Authorities who will deal with deployment at street level, along with the educators and communicators.

When it comes to CAM I’m an enthusiast, not an expert. I certainly don’t know the answers, but I do have some questions:

How much does understanding influence trust when it comes to emerging tech like autonomous vehicles?

Do we need to see how machines think before we’ll ever feel comfortable sharing the road with them?

How can we use storytelling and visualisation to make complex systems feel more human and trustworthy?

If transparency is the new trust currency, should it be required — not optional — for technologies that make decisions on our behalf?

Would clearer communication change how society adopts AI-driven systems like autonomous cars?

I’m curious to hear your thoughts, and how others are tackling the challenge of showing not just what technology does, but how it does it.

Because in the end, seeing really might lead to believing — especially when the thing doing the seeing is an autonomous vehicle.


Footnotes

Having said all of the above, maybe the general public are happy with blind overtrust. This 2016 study published in New Scientist showed that almost every one of a 30-person sample chose to follow a robot out of a simulated burning building, even though it was taking them away from the real, clearly marked exit!
