I have always been someone who was inclined to trust, perhaps at times more readily than I should have.
That instinct was not formed in ignorance. My earliest encounters with the realities of human behaviour were not sheltered ones. I volunteered with Amnesty International, translating reports that documented torture and political violence, and I come from a family shaped by a civil war and displacement. I have known from an early age that cruelty exists, that people can act with indifference to suffering, and that integrity is not a given.
And yet, despite this, I have always believed that more often than not people are trying to do good, and to behave in ways that are thoughtful and considered. There is, in fact, a substantial body of research that supports this.
Trust has always been my starting point, but in recent months I have found that something has changed.
Where I might once have taken something at face value, I now find myself pausing, almost automatically, and asking a series of questions. Is this true? Who has produced it? On what basis should I believe it? And, perhaps most importantly, is there someone who stands behind it in a way that can be understood and, if necessary, challenged?
Information is continuous and persistent. Hundreds of hours of video are uploaded every minute, while millions of pieces of content circulate across platforms each day. The question is not only what is being said, but how it is encountered, at what speed, and with what opportunity for reflection. The pace at which information appears and moves leaves little room for considered judgment, and over time alters how we think.
The nature of what is being presented has changed dramatically. Virtual influencers have audiences in the millions and are embedded within mainstream advertising ecosystems. For example, Lil Miquela has over two million followers and collaborates with global brands.
Peer-reviewed studies published in the Journal of Business Research between 2022 and 2024 show that audiences form parasocial relationships with these entities and respond to them in ways that are strikingly similar to how they respond to human influencers.
A similar shift is now visible in the production of media itself. AI-generated presenters have been deployed by organisations such as Xinhua News Agency. More significantly, content production is beginning to operate at an industrial scale. Inception Point AI reports operating more than 4,000 podcast shows with 10 million listeners. These shows are generated through automated systems, including AI-written scripts and synthetic narration: content produced without a responsible human author.
Political leaders now make statements that are demonstrably untrue, not as isolated incidents, but as part of a pattern in which contradiction does not necessarily carry consequence. These statements are repeated, amplified, and defended, often alongside attempts to undermine the institutions (the press, the courts, independent expertise) that would historically have acted as points of challenge. The effect is that consequences are weakened.
None of this is new. Propaganda has always existed. Political leaders have always misrepresented facts. Media has always framed events and narratives.
In earlier media environments, however imperfect, there were clearer structures of responsibility. A newspaper had an editor whose name could be identified. A broadcaster operated under licensing regimes and regulatory frameworks that imposed obligations, however inconsistently applied. In the United Kingdom, for example, Ofcom requires due accuracy and impartiality in broadcast news, while the Editors’ Code enforced by the Independent Press Standards Organisation requires that inaccurate or misleading information be corrected promptly and with due prominence. Comparable frameworks exist across Europe. These systems did not prevent falsehood, but they made it possible, at least in principle, to locate responsibility.
Today, that chain is far less visible. A piece of content can be generated in one context, amplified in another, monetised through platforms, and encountered globally without any clear line of accountability. The issue is not only that falsehood is easier to produce. It is that it has become more difficult to attach it to someone who can be held responsible.
Trust is not disappearing. It is becoming conditional. The Reuters Institute Digital News Report 2024 found that only 40% of people across 47 markets say they trust most news most of the time. Mistrust of the media seems to have become the new norm.
The World Economic Forum has identified misinformation and disinformation as among the most significant short-term global risks. A large-scale study published in Science in 2018 found that false information is 70% more likely to be shared than true information and reaches large audiences far more quickly. The 2024 Edelman Trust Barometer indicates that among those aged 18 to 34, a substantial minority – up to 40% – report being comfortable relying on AI for structured advice and decision support rather than on humans.
We have become accustomed to describing the present moment as “post-truth”, as though it were a condition to be observed rather than a shift to be examined or corrected. The term does not fully capture what is happening. What is changing is not only the relationship between truth and falsehood, but the expectation that truth matters.
A thriving society depends on forms of trust that are rarely articulated because they are so deeply embedded. We trust that schools will act in the interests of children, that medical advice is grounded in expertise, that infrastructure will function as expected, and that systems of communication are not fundamentally deceptive. These are not abstract ideals but conditions that make ordinary life possible.
When those conditions begin to weaken, institutions do not collapse in any immediate or visible sense. What changes instead is how they operate. Schools find themselves needing to verify whether work has genuinely been produced by students. Organisations introduce additional layers of review, not because decisions are inherently more complex, but because confidence in the material on which those decisions are based has diminished. Leaders are required not only to articulate their thinking, but to demonstrate that it is their own. Energy shifts away from acting towards validating.
This is the space in which Saviesa is working. The question is not whether AI will advance, because it will, but what happens to human judgment, responsibility, and meaning as it does. Saviesa’s work brings together inquiry and application, developing frameworks, convening leaders, and piloting programmes that translate human values into practice across education, leadership, and culture. It is concerned not only with how organisations make decisions, but with how they think, what they prioritise, and how they act in this new world. In that sense, the challenge is not simply to manage technological change, but to ensure that the human capacities on which society depends remain visible, active, and operational within the systems we are building.
In a world where so much can be generated, the question is no longer what is said, but who is prepared to stand behind it.
Leonor Diaz Alcantara
April 2026
Leonor Diaz Alcantara is the founder of Saviesa, an independent European think and do tank examining how leadership, education and culture are changing in an age of artificial intelligence.
Saviesa brings together global leaders, educators and researchers to explore how human judgement, responsibility and values can remain visible and operational within the systems being built today.