Leading for Digital Trust in the Age of AI
Drawing on discussions from the eco Future Skills Meet-up on AI & Leadership, this article distills key insights on how organizations can translate digital trust from abstract principle into operational practice – and why leadership, not technology alone, determines whether AI creates real value.
© OpenAI (generated with ChatGPT)
Artificial intelligence is no longer peripheral; it has become structural. Across the Internet industry, AI supports decision-making, recruitment, compliance, product development, and customer operations. Yet with speed comes responsibility: Who owns AI-generated outcomes, and how can trust be ensured when systems operate faster than humans can verify?
These questions were at the heart of the first eco Future Skills Meet-up on AI & Leadership, which I had the pleasure of moderating in my role as a Board Member of the eco Association and initiator of the Future Skills initiative. Rather than focusing on tools or use cases alone, the discussion repeatedly returned to a core insight: digital trust cannot be engineered purely through technology. It must be built through governance, culture, and clearly assigned responsibility. In my dotmagazine article, “Why Transparent Leadership Matters in an AI World,” I argue that transparency and clarity serve as stabilizing forces in times of technological uncertainty.
Panel Discussion at the eco Future Skills Meet-up
From productivity gains to measurable value
A recurring theme was the growing gap between perceived productivity gains from generative AI and actual business value. In her keynote contribution, Professor Eugenia Schmitt, Professor of Business Administration and Data Science, cited industry data showing that while 74% of executives expect AI to boost productivity, only 11% see measurable ROI. That 63-percentage-point gap represents value disappearing between activity and outcome.
“What gets lost between activity and value is sense-making,” Schmitt argued. AI systems can generate impressive outputs at scale, but without contextual understanding and human judgment, speed alone does not translate into trust or competitiveness. This distinction between output and outcome is central to digital policy debates, particularly as regulators increasingly emphasize accountability over raw performance.
Governance in practice: risk-based oversight
To address this gap, speakers emphasized the need for practical governance models that go beyond abstract principles. One framework discussed by Prof. Schmitt was a risk-based classification system for AI outputs – what she called the traffic light model. Internal, low-risk uses may require limited oversight, while externally visible or legally relevant content demands stricter human-in-the-loop controls. High-risk outputs – such as contractual commitments or compliance-related statements – require mandatory human verification.
Risk-based AI governance – the traffic light model
These proportional approaches mirror the logic of the EU’s risk-based regulatory framework for AI. A similar argument is developed in Alexander Rabe’s article “Digital Trust Is Built on Infrastructure, Governance, and Clear Policy”, which stresses that trust only emerges when governance frameworks are aligned with technical realities and policy objectives. As Schmitt noted during the panel discussion, “Digital resilience does not come from speed alone. It comes from knowing when to stop, review, and take responsibility.” This mindset shifts AI governance from a binary decision – use it or ban it – toward a nuanced assessment of impact and risk.
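As an illustration only, the traffic light logic can be sketched as a simple classification step in an AI workflow. The tier names and the two criteria below are assumptions drawn from the panel's description, not an implementation used by any of the speakers:

```python
from enum import Enum

class RiskTier(Enum):
    GREEN = "internal, low risk: limited oversight"
    YELLOW = "externally visible: human-in-the-loop review"
    RED = "legally or compliance-relevant: mandatory human verification"

def classify_output(externally_visible: bool, legally_relevant: bool) -> RiskTier:
    """Assign an AI output to a review tier following the traffic
    light logic. The two boolean criteria are illustrative, not an
    exhaustive governance policy."""
    if legally_relevant:
        return RiskTier.RED
    if externally_visible:
        return RiskTier.YELLOW
    return RiskTier.GREEN

def requires_human_review(tier: RiskTier) -> bool:
    # Green outputs may ship without review; yellow and red must not.
    return tier is not RiskTier.GREEN
```

The point of such a sketch is not the code itself but the shift it encodes: review effort scales with impact and risk, rather than being applied uniformly or not at all.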
Psychological safety as a trust safeguard
Digital trust, however, is not sustained by processes alone. Several speakers highlighted the role of organizational culture, particularly psychological safety, in managing AI-related risks. Employees must feel able to question AI-generated results, flag inconsistencies, or raise concerns without fear of negative consequences.
Legal expert Nina Hiddemann, Attorney Specializing in Data Protection and AI Compliance, warned that unmanaged AI use often creates more risk than structured experimentation. “Shadow AI is the real danger,” she said, referring to employees using unapproved tools in the absence of clear guidance. In practice, this means that unclear rules and silent tolerance of informal AI use can undermine both compliance and trust.
From a policy perspective, this insight is critical: internal rules, approved tools, and training do not restrict innovation – they enable trust, compliance, and responsible use.
Trust by design: examples from practice
Concrete examples illustrate how trust can be embedded into organizational processes. In people operations, AI-supported workflows are increasingly common, but their acceptance depends on transparency and clearly defined boundaries. At Adacor Hosting, AI agents support specific steps along the employee journey, while responsibility for decisions remains explicitly with human teams.
According to Kiki Radicke, Head of People & Culture at Adacor Hosting, acceptance grows when employees are involved early and understand both the possibilities and limits of AI. “People engage when they see where AI helps – and where it clearly does not belong,” she explained.
From an infrastructure and innovation perspective, Alexander Grau, representing cloud provider OVHcloud, highlighted the importance of enabling experimentation within clearly defined boundaries. At OVHcloud, hackathons test AI applications for customer challenges – such as multi-agent systems providing context on startup business models. But experiments run in secured environments with explicit rules: certain tools can access internal data, while Internet-connected public tools are forbidden from touching customer information.
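A guardrail of the kind described above can be expressed as a minimal access-policy check. This is a hedged sketch: the tool names and data classes are hypothetical examples, not OVHcloud's actual tooling or policy:

```python
# Tools are grouped by connectivity; data classes gate what each
# group may process. All names below are illustrative placeholders.
APPROVED_INTERNAL_TOOLS = {"internal-rag", "sandbox-agent"}
PUBLIC_INTERNET_TOOLS = {"public-chatbot"}

def may_access(tool: str, data_class: str) -> bool:
    """Return True if `tool` may process data of `data_class`.

    Public, Internet-connected tools must never touch customer data;
    approved internal tools may, inside the secured environment.
    """
    if data_class == "customer":
        return tool in APPROVED_INTERNAL_TOOLS
    # Non-customer data (e.g. synthetic test data): any approved tool.
    return tool in APPROVED_INTERNAL_TOOLS | PUBLIC_INTERNET_TOOLS
```

Encoding the rule as an explicit check, rather than relying on informal guidance, is precisely what distinguishes structured experimentation from the "shadow AI" risk raised earlier.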
Bridging policy and organizational reality
Across its work on artificial intelligence, digital infrastructures, and cybersecurity, eco consistently highlights a risk-based approach that balances innovation with responsibility. This perspective is further developed in eco’s white paper “AI as the Key to Cyber Resilience,” which examines how structured, risk-based AI deployment strengthens resilience across digital infrastructures and security-critical environments. A broader policy context is provided by Philipp Ehmann in his article “Building European Digital Sovereignty,” which links trust to strategic autonomy, governance capability, and the ability to act responsibly within interconnected digital ecosystems.
Insights from the Future Skills Meet-up underscore that this alignment must occur inside organizations. Regulation can define the framework, but digital trust is operationalized through three interdependent dimensions:
- Governance structures – Clear decision rights, documented review processes, and defined escalation paths that anchor accountability.
- Organizational culture – Psychological safety that enables employees to question AI outputs and flag risks without hesitation.
- Leadership literacy – A sufficient conceptual understanding of AI systems at senior levels to make informed strategic decisions.
AI literacy, clear decision rights, and transparent escalation paths are not optional. They are prerequisites for sustainable digital transformation.
What this means for the Internet industry
As AI adoption accelerates, digital trust is increasingly becoming a defining factor of competitive advantage. Organizations that treat trust as a compliance checkbox risk falling behind. Those that invest in governance, culture, and human oversight are better positioned to navigate regulatory requirements while maintaining agility.
The Future Skills Meet-up made it clear that AI is not just a technical challenge. It is a leadership and policy challenge that requires organizations to rethink how responsibility, trust, and value are created in a digital economy.
About the Future Skills initiative
The eco Future Skills initiative brings together practitioners, researchers, and decision-makers from across the Internet industry to explore how organizations can build resilience, trust, and competence in times of rapid technological change.
📚 Citation:
Kanes, Silke. (February 2026). Digital Trust in the Age of AI Leadership. dotmagazine. https://www.dotmagazine.online/issues/digital-trust-policy/digital-trust-age-ai-leadership
Silke Kanes is the newly elected Board Member for Software as a Service at the eco – Association of the Internet Industry. Having spent many years in executive positions at software manufacturers, where she was responsible for product development, she now works as a strategic advisor to entrepreneurs on digital and corporate culture transformations.