Trust, Transparency, and the EU AI Act
November 18, 2025

At the latest ETH Zurich Global Lecture, Liisa-Ly Pakosta, Estonia’s Minister of Justice and Digital Affairs, and Professor Effy Vayena, digital ethicist and incoming Vice President for Knowledge Transfer and Corporate Relations, joined Chris Luebkeman to discuss how trust, transparency, and practical regulation are shaping the future of AI governance in Europe. Drawing on Estonia’s digital transformation and Switzerland’s sectoral approach, the panel explored the challenges and opportunities of implementing the EU AI Act.
How do we govern artificial intelligence in a world where technology evolves faster than regulation? This was the central question at the recent Global Lecture, “The EU AI Act in Practice: Cross-Border Perspectives on AI Governance,” hosted by Chris Luebkeman, Head of the Strategic Foresight Hub at ETH Zurich. The room was packed from the start, making it one of the best-attended lectures of the year: alongside a curious ETH audience, a strong Estonian delegation and Estonian students turned out to support their country’s inspiring representative.
Liisa shared Estonia’s journey as a digital pioneer, explaining, “After regaining independence in 1991, we had to build an efficient state with limited resources. We chose a digital path, making transparency central to our approach.” Estonia’s investment in universal internet access and digital literacy laid the groundwork for a government where citizens own their data and can see exactly who accesses it—a model that has fostered deep public trust. “Trust is fundamental. People own their data and control who accesses it. This transparency builds trust and security,” she emphasised.

Effy reflected on the broader European context, noting that while many countries have the technical capacity for digital transformation, “regulatory and citizen engagement challenges remain.” She highlighted that transparency and clear benefits are key to building trust, regardless of a country’s starting point.
As AI systems become more embedded in daily life, the question of trust extends beyond government to technology itself. “We typically trust people, not machines,” Effy observed. “For AI, we seek reliability, safety, accountability, and transparency. Citizens want assurance that systems are safe and that someone is accountable.”

Liisa described Estonia’s evolving AI strategy: “We realised the need for full government control over sensitive data. We oppose backdoors in systems, even for law enforcement, to maintain trust and security.” She also stressed the importance of defining what AI should not do: “Declaring what is off-limits is crucial for maintaining trust.”

The EU AI Act aims to regulate AI in line with European values, but its complexity and the challenge of keeping pace with technological change were recurring themes. “The Act is complex, and simplification efforts are already underway,” Effy explained. In contrast, Switzerland, not an EU member, has opted for sectoral rather than horizontal regulation.
Liisa added, “Estonia advocated for more sectoral rules and a better balance between innovation and regulation. While rules are necessary for trust and clarity, the Act’s implementation is unclear, especially regarding prohibited and high-risk areas. Regulatory burdens often fall hardest on small and medium enterprises.” To address this, Estonia is developing a law to guarantee compliance for companies, shifting the burden from businesses to the state, and offering a compliance sandbox and state guarantees for AI products.
To close the discussion, Chris invited the speakers to a series of rapid-fire questions, offering concise insights on some of the other pressing dilemmas in AI governance:

- What builds trust faster: strong regulation or strong transparency? “Transparency,” said Liisa.
- How do you manage security and backdoor access? “Use separate, secure architectures like X-Road.”
- How will Estonia navigate differences with the EU? “With courage.”
- Why is there a discrepancy between attitudes toward pharma and AI regulation? “Pharma regulation has a longer history. We’re still defining our goals in AI,” Effy noted.
- Do we need a UN convention on AI? “Statements exist, but implementation and enforcement are key.”
Looking Ahead
The Global Lecture made clear that responsible AI governance is not just about rules—it’s about building systems that earn and deserve public trust. As Liisa concluded, “Despite technological possibilities, regulation must enable innovation while protecting individual freedom and imperfection. Estonia welcomes collaboration and research in this area.”
The conversation underscored that while the EU AI Act is a significant step, the journey toward trustworthy AI is ongoing—and will require courage, transparency, and a willingness to learn across borders.
Chris closed the event by thanking the panel for their “clarity, depth, and inspiration,” adding, “It’s been a joy, and I’ve learned a lot.” His words captured the spirit of the session: open dialogue, shared learning, and a commitment to shaping a trustworthy digital future.
Check out the pictures from the event here: Meet ETH Flickr