I’m thrilled to sit down with Chloe Maraina, our resident Business Intelligence expert, whose passion for weaving compelling visual stories from big data has redefined how businesses approach data management and integration. With a keen eye for data science and a forward-thinking vision, Chloe has helped countless organizations navigate the complex landscape of AI adoption. In this conversation, we dive into the importance of a metrics-driven strategy for meaningful AI implementation, explore why so many initiatives fall short, and uncover how businesses can lay the groundwork for lasting value through structured, measurable approaches.
How would you describe a ‘metrics-driven approach’ to AI adoption, and why is it so essential for businesses?
A metrics-driven approach means grounding every step of AI adoption in clear, measurable indicators that reflect the organization’s current state and desired outcomes. It’s about having a roadmap where you can track progress, spot issues early, and align efforts across teams. Metrics are essential because they cut through the hype and provide a reality check—without them, businesses risk pouring resources into AI without understanding if it’s solving real problems or just creating new ones. It’s the difference between guessing and knowing whether you’re on the right path.
What sets a metrics-driven strategy apart from simply diving into AI implementation without a plan?
Diving in without a plan is like building a house without a foundation—you might get something up quickly, but it won’t last. A metrics-driven strategy starts with visibility into your operations, processes, and pain points before any tech is deployed. It prioritizes preparation over speed, ensuring AI amplifies strengths rather than weaknesses. Without metrics, you’re often just reacting to pressure or trends, which leads to misaligned goals and wasted effort.
Given that 70 to 80% of AI initiatives fail to meet their objectives, what do you see as the biggest reasons behind this staggering statistic?
The high failure rate often comes down to a lack of readiness. Many companies jump into AI because of competitive pressure or fear of missing out, without first understanding their own operations. They don’t have a clear baseline of how things work—or don’t work—internally. This rush skips critical steps like defining success, aligning teams, or even ensuring data quality. As a result, AI becomes a shiny tool that magnifies existing chaos rather than resolving it.
You’ve mentioned that AI acts as a multiplier of what a company already has. Can you unpack that idea for us?
Absolutely. AI isn’t a magic fix; it’s a force that scales whatever you feed it. If your processes are streamlined and your data is solid, AI can turbocharge efficiency and insights. But if you’ve got inefficiencies, silos, or poor data quality, AI will amplify those flaws, making problems more visible and costly. It’s why getting your house in order before adoption is non-negotiable—AI won’t clean up the mess for you; it’ll just make it bigger.
Why is having visibility into operations such a critical step before adopting AI?
Visibility is everything because you can’t optimize what you can’t see. If you don’t have a clear picture of how your systems function, where bottlenecks are, or how decisions are made, AI implementation becomes a shot in the dark. Without that insight, you risk automating broken processes or solving the wrong problems. Metrics give you that visibility—they act as a lens to understand your operations and ensure AI is applied where it can drive real value.
Can you walk us through the key areas that businesses should measure to prepare for AI adoption?
Sure. First, assess your current state: how systems and processes actually work, not just how they're supposed to. That means looking at things like the number of manual interventions or time-to-decision. Then, identify cross-department gaps, such as inconsistent data definitions or workflows that don't align. Next, define early failure signals, metrics like rising error rates or missed service levels that warn you things are off track. Finally, track AI adoption levels, like the percentage of workflows using AI or the quality of training data. These measurements ensure you're not just experimenting but building toward sustainable impact.
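To make those four areas concrete, here is a minimal sketch of how a team might organize them as a readiness scorecard in Python. This is an illustration rather than Chloe's own framework; every metric name, value, and target in it is an assumption chosen for the example.

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Metric:
    name: str
    value: float
    target: float
    higher_is_better: bool = True

    def on_track(self) -> bool:
        # Compare against the target in whichever direction counts as "good".
        return self.value >= self.target if self.higher_is_better else self.value <= self.target


@dataclass
class ReadinessScorecard:
    current_state: list[Metric] = field(default_factory=list)          # how things actually work today
    cross_department_gaps: list[Metric] = field(default_factory=list)  # inconsistent definitions, misaligned workflows
    early_failure_signals: list[Metric] = field(default_factory=list)  # warning indicators such as error rates
    adoption_levels: list[Metric] = field(default_factory=list)        # how far AI has actually spread

    def summary(self) -> dict[str, float]:
        # Share of metrics currently on track in each area, as a quick readiness snapshot.
        areas = {
            "current_state": self.current_state,
            "cross_department_gaps": self.cross_department_gaps,
            "early_failure_signals": self.early_failure_signals,
            "adoption_levels": self.adoption_levels,
        }
        return {
            name: (sum(m.on_track() for m in metrics) / len(metrics)) if metrics else 0.0
            for name, metrics in areas.items()
        }


# Illustrative values only: lower is better for interventions, decision time, and error rates.
scorecard = ReadinessScorecard(
    current_state=[
        Metric("manual_interventions_per_week", value=42, target=10, higher_is_better=False),
        Metric("avg_time_to_decision_hours", value=36, target=24, higher_is_better=False),
    ],
    early_failure_signals=[
        Metric("error_rate_pct", value=2.1, target=1.0, higher_is_better=False),
        Metric("sla_misses_per_month", value=3, target=0, higher_is_better=False),
    ],
    adoption_levels=[
        Metric("workflows_using_ai_pct", value=15, target=30),
        Metric("training_data_quality_score", value=0.7, target=0.9),
    ],
)
print(scorecard.summary())
```

The summary here simply reports the share of metrics on track per area; a real scorecard would pull these values from monitoring and BI systems rather than hard-coded numbers.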
The idea that legacy systems don’t have to be a barrier to AI adoption is intriguing. How can companies tackle challenges with outdated infrastructure?
Legacy systems are often seen as a dead end, but they don’t have to be. The key is a structured modernization approach where metrics guide the process. Start by mapping out what’s working and what’s not—measure things like system downtime or data access delays. Then prioritize upgrades or integrations that align with AI goals, using metrics to track improvement. I’ve seen companies turn legacy challenges into opportunities by focusing on small, measurable wins that build momentum for broader transformation.
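As a companion to that answer, here is a hedged sketch of how measured baselines might drive that prioritization: each legacy pain point gets a baseline value, a target, and an effort estimate, and the ratio of relative gap to effort surfaces the small, measurable wins first. The metrics, targets, and effort figures below are hypothetical, not drawn from the interview.

```python
# A rough illustration of prioritizing legacy modernization work by measured impact.
# All numbers are hypothetical; metric names and effort estimates are assumptions.

baseline = {
    "monthly_downtime_hours": 14.0,       # measured today
    "data_access_delay_minutes": 45.0,
    "batch_job_failure_rate_pct": 6.5,
}

targets = {
    "monthly_downtime_hours": 2.0,        # what AI-ready operations would require
    "data_access_delay_minutes": 5.0,
    "batch_job_failure_rate_pct": 1.0,
}

effort_weeks = {
    "monthly_downtime_hours": 8,          # estimated engineer-weeks to close the gap
    "data_access_delay_minutes": 3,
    "batch_job_failure_rate_pct": 5,
}


def win_score(metric: str) -> float:
    """Relative gap to target divided by effort: a crude 'small, measurable win' score."""
    relative_gap = (baseline[metric] - targets[metric]) / baseline[metric]
    return relative_gap / effort_weeks[metric]


# Rank candidate fixes so low-effort, high-impact work goes first and progress stays measurable.
for metric in sorted(baseline, key=win_score, reverse=True):
    print(f"{metric}: baseline={baseline[metric]}, target={targets[metric]}, score={win_score(metric):.3f}")
```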
What’s your forecast for the role of metrics in AI adoption over the next few years?
I think metrics will become the backbone of any successful AI strategy as more companies realize that adoption without measurement is just gambling. We’ll see a shift toward standardized frameworks for assessing readiness and progress, with a heavier emphasis on real-time data to course-correct quickly. As AI becomes more embedded in everyday operations, metrics will evolve from a planning tool to a continuous feedback loop, ensuring that transformation isn’t just a one-time project but a sustainable, value-driven journey.