Today we’re joined by Chloe Maraina, a business intelligence expert with a unique talent for transforming vast datasets into compelling visual stories. For years, she has been at the forefront of data management and integration, and today she helps us navigate the complex landscape where AI strategy, corporate governance, and digital trust intersect. The conversation will explore why public trust in AI has hit a wall, stalling in what she calls a “trust rut.” We’ll delve into the critical perception gap between those who build AI and those who use it, examining why users are more concerned with ethical issues like privacy and accountability than with technical reliability. Finally, we’ll discuss actionable mandates for leaders to move from passive observation to actively architecting trust, using their own organizations as models for responsible innovation.
The AI Trust Index has stalled around 307, with an 11-point gap in concern between end-users and providers. Beyond acknowledging this “trust rut,” how can a CIO begin to tangibly close that perception gap? What specific metrics would you use to track progress?
It’s a crucial question because that 11-point gap isn’t just a statistic; it’s a chasm in confidence. Providers are feeling optimistic: 83% believe the benefits outweigh the risks, while only 65% of end-users feel the same way. A CIO can’t just issue a press release about trustworthiness. They have to make it a measurable, core part of their strategy. The first step is to shift the dashboard away from purely technical KPIs. Instead of only tracking system uptime or model accuracy, let’s start measuring “trust proxies.” This could include tracking the volume and sentiment of customer feedback that specifically mentions AI fairness or privacy. You could also track the percentage of AI projects that undergo a formal ethical review before deployment. Another powerful metric is what I call the “Transparency Engagement Rate”: how many users actually read your AI usage policies or engage with explainability features? When you start measuring these things, you make trust an operational priority, not just a vague corporate value.
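To make that concrete, here is a minimal Python sketch of how two of those trust proxies might be computed on a reporting dashboard. The data structures, field names, and reporting-period framing are illustrative assumptions, not a prescribed implementation or any particular BI platform’s API.

```python
from dataclasses import dataclass

@dataclass
class AIProject:
    name: str
    deployed: bool
    ethical_review_done: bool  # formal ethical review signed off before deployment

def ethical_review_rate(projects: list[AIProject]) -> float:
    """Percentage of deployed AI projects that passed a formal ethical review."""
    deployed = [p for p in projects if p.deployed]
    if not deployed:
        return 0.0
    return 100 * sum(p.ethical_review_done for p in deployed) / len(deployed)

def transparency_engagement_rate(policy_readers: set[str],
                                 explainability_users: set[str],
                                 active_users: set[str]) -> float:
    """Percentage of active users who read the AI usage policy or used an
    explainability feature during the reporting period."""
    if not active_users:
        return 0.0
    engaged = (policy_readers | explainability_users) & active_users
    return 100 * len(engaged) / len(active_users)

# Hypothetical quarter: 2 of 3 deployed projects reviewed, 240 of 1,000 users engaged.
projects = [AIProject("resume-screener", True, True),
            AIProject("chat-summarizer", True, False),
            AIProject("forecasting", True, True)]
print(ethical_review_rate(projects))  # ~66.7

users = {f"u{i}" for i in range(1000)}
readers = {f"u{i}" for i in range(120)}
explainers = {f"u{i}" for i in range(60, 240)}
print(transparency_engagement_rate(readers, explainers, users))  # 24.0
```

Tracked quarter over quarter, numbers like these give a CIO a trend line for trust rather than a one-off assertion of it.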
The report shows user concern is highest for NIST attributes like privacy (63%) and accountability (61%), rather than technical reliability. Could you walk us through a step-by-step example of how a development team can shift its focus from purely functional goals to these ethical outcomes?
Absolutely. It’s a fundamental mindset shift from “Can we build it?” to “How will it behave?” Let’s imagine a team developing a new AI-powered HR tool for resume screening. Traditionally, their goal would be to maximize the accuracy of identifying qualified candidates. To pivot toward ethical outcomes, the process changes completely. First, during the design phase, they would conduct a “Bias and Fairness” workshop. Here, they would actively brainstorm how the tool could inadvertently discriminate, using the 59% of users concerned about fairness as their guide. Second, during data sourcing, the focus isn’t just on volume but on representation. They would set a specific goal to ensure their training data reflects diverse demographics, directly addressing potential biases. Third, in the testing phase, they move beyond simple accuracy tests. They’d implement “fairness testing,” where they measure the model’s performance across different gender, ethnic, and age groups to ensure no single group is unfairly disadvantaged. The goal is no longer just a high accuracy score; it’s a high fairness score. This embeds accountability directly into the development lifecycle.
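As a sketch of what that fairness-testing step could look like in practice, the snippet below compares per-group selection rates from the screening model and flags disparities using the common “four-fifths” rule of thumb. The group labels, data shape, and threshold are illustrative assumptions, not the report’s methodology.

```python
from collections import defaultdict

def selection_rates(results):
    """results: iterable of (group_label, shortlisted) pairs from the screening model.
    Returns each group's selection rate (fraction of candidates shortlisted)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [shortlisted, total]
    for group, shortlisted in results:
        counts[group][0] += int(shortlisted)
        counts[group][1] += 1
    return {group: picked / total for group, (picked, total) in counts.items()}

def fairness_flags(rates, threshold=0.8):
    """Compare every group's selection rate to the best-served group's rate.
    Ratios below the threshold (the 'four-fifths' rule of thumb) get flagged."""
    best = max(rates.values())
    if best == 0:
        raise ValueError("no group has a nonzero selection rate")
    return {group: {"ratio": rate / best, "flagged": rate / best < threshold}
            for group, rate in rates.items()}

# Hypothetical screening outcomes: (demographic group, was the candidate shortlisted?)
outcomes = [("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
            ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False)]
print(fairness_flags(selection_rates(outcomes)))
# group_a: ratio 1.0, not flagged; group_b: ratio ~0.33, flagged for review
```

In a real pipeline the same per-group comparison would be repeated for accuracy, false-negative rate, and other error metrics, so the “fairness score” sits alongside the accuracy score on the release checklist.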
Given that trust in workplace AI is relatively high while trust in public media scenarios is low, how can a CIO use their enterprise as a “model for responsible deployment”? Please describe two or three internal AI policies that could be shared publicly to build customer confidence.
This is a massive opportunity. The data shows people are far more comfortable with AI in the workplace, with a low concern score of 289, compared to the deep skepticism they have for AI in media, which scored a high 339. CIOs can leverage this internal trust as their best marketing tool. The first policy to publicize would be an “Internal AI Bill of Rights.” This document would clearly state how employee data is used, the principles of fairness applied to internal AI systems, and a clear process for employees to question or appeal an AI-driven decision. Making this public shows customers you hold yourself to the highest standard, even when no one is watching. A second, equally powerful policy would be a “Responsible AI Procurement Standard.” This would outline the strict ethical and transparency requirements you demand from any third-party AI vendor. By publishing this, you’re not only demonstrating your own commitment but also pushing the entire industry toward a higher standard. You’re telling your customers, “We don’t just build responsibly; we buy responsibly.” This transparency becomes a powerful market differentiator.
The article suggests co-design programs and a Chief AI Ethics Officer to build “experiential trust.” In practical terms, what would a successful co-design session look like, and what would be the first major initiative for a newly appointed ethics officer to prove their value?
Experiential trust is about showing, not telling, and these two initiatives are central to that. A successful co-design session is not a typical focus group where users just react to a finished product. Imagine a workshop where developers, product managers, and a diverse group of end-users are all in the same room, mapping out a new AI feature on a whiteboard. The users are empowered as “red team” experts, actively trying to find ways the technology could fail them or their community, especially regarding privacy, which is a top concern for 69% of end-users. Their insights are captured and integrated right there, in real time. It’s about sharing power. For a new Chief AI Ethics Officer, the first initiative must be tangible and impactful. Instead of starting with a lengthy governance document, they should identify an existing, high-stakes AI system within the company and lead a public-facing “Ethical Audit.” They would transparently assess it for bias and privacy risks, report the findings, good and bad, and then oversee the remediation. This single act would prove the role has authority and is committed to action, not just theory, and it would immediately build credibility both internally and externally.
What is your forecast for the AI trust gap over the next two to three years?
I believe the overall trust gap is likely to get a bit worse before it gets better. The Trust Index score is already high at 307, and as AI becomes more embedded in our daily lives, particularly in sensitive areas like media and personal finance, we’ll see more high-profile failures. These incidents will naturally deepen the skepticism of end-users, potentially widening that 11-point perception gap with providers in the short term. However, I am optimistic that this pressure will force a change. We will start to see a divergence. Companies that ignore these trust signals will falter, but the organizations that embrace these mandates will begin to stand out: the ones that build experiential trust, prioritize ethical outcomes like privacy and accountability, and use their own workplace as a transparent model. They will create “islands of trust” in a sea of skepticism. So while the overall index might stagnate, the real story will be in the performance gap between the companies that architect for trust and those that don’t. True leadership in the AI era will be defined not by the most powerful algorithm, but by the most trusted one.
