Meet Chloe Maraina, a Business Intelligence expert with a deep passion for crafting compelling visual stories through big data analysis. With her sharp expertise in data science and a forward-thinking vision for data management and integration, Chloe has become a trusted voice in the realm of enterprise AI. In this interview, we dive into the evolving landscape of AI in business, exploring the shift from experimental projects to profit-driven strategies, the critical role of data access and quality, and innovative approaches like bringing AI to the data. Chloe shares her insights on overcoming integration challenges, building unified architectures, and unlocking the true potential of AI for measurable returns.
How has the focus of enterprise AI shifted from experimentation to profitability, and what’s driving this change?
Over the past few years, enterprise AI started with a sense of wonder—businesses were asking, “What can AI do?” This led to a flurry of pilot projects, some creative and others a bit quirky, all aimed at exploring possibilities. But now, the conversation has pivoted to “How do we make AI profitable?” The driver behind this shift is simple: costs are piling up, and stakeholders want to see tangible returns. Companies are under pressure to move beyond proofs of concept and ensure AI delivers real business value, whether that’s through efficiency gains, better decision-making, or revenue growth.
What were some of the standout lessons from those early AI pilot projects?
One of the biggest takeaways was that AI’s potential is directly tied to the data it can access. Early pilots often revealed that without comprehensive, reliable data, AI outputs were limited or even misleading. We also learned that scalability is a challenge—many pilot projects worked in isolated environments but struggled when applied to broader operations. Finally, there was a realization that cultural buy-in matters just as much as the technology itself; teams needed to trust and understand AI before they could integrate it effectively into their workflows.
With 96% of IT leaders saying AI is at least somewhat integrated into core processes, what does ‘somewhat integrated’ typically mean for most organizations?
‘Somewhat integrated’ often means AI is being used in specific, siloed areas of the business—like customer service chatbots or predictive maintenance in manufacturing—but it’s not yet woven into the fabric of the entire organization. It might be deployed in a few departments or for particular use cases, but it lacks the breadth to impact end-to-end processes. This partial integration often stems from technical limitations or a cautious approach to scaling until results are proven.
Why do you think so few companies have achieved full AI integration across their operations?
Full integration is tough because it requires a level of data readiness and organizational alignment that most companies just don’t have yet. Data is often scattered across multiple systems—clouds, data centers, even edge locations—with inconsistent formats and governance. Plus, there’s a skills gap; not every team has the expertise to manage AI at scale. There’s also a fear of disruption—fully integrating AI means rethinking workflows, and that can feel risky without guaranteed outcomes.
Data access seems to be a major hurdle, with only 9% of organizations having all their data available for AI. Why is this such a significant issue?
When AI doesn’t have access to all the data, it’s like trying to solve a puzzle with missing pieces. The outputs are incomplete or skewed, which can lead to poor decisions in critical areas like customer targeting or risk assessment. Scattered data also means delays—teams spend more time wrangling information than acting on insights. Ultimately, limited access undermines AI’s ability to be a strategic tool, turning it into more of a novelty than a value driver.
How does having data spread across different clouds and data centers impact AI performance?
Fragmented data creates bottlenecks. AI models need to pull from various sources, and when those sources are spread across public clouds, private data centers, or edge environments, you get latency issues and higher compute costs. There’s also the problem of inconsistency—different systems might have varying data formats or governance rules, which forces extra reconciliation work before a model can even use the data. The result is slower insights and outputs that might not reflect the full picture, reducing trust in the AI’s recommendations.
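To make the inconsistency problem concrete, here is a minimal Python sketch of the reconciliation work Chloe describes. The source names, field layouts, and units are hypothetical; the point is that nothing can reach a model until someone maps each system’s shape onto a single canonical one:

```python
from datetime import datetime, timezone

# Hypothetical raw records from two systems with inconsistent schemas:
# the cloud warehouse uses ISO timestamps and cents; the on-prem system
# uses epoch seconds and dollars.
cloud_record = {"customer_id": "C-1001", "amount_cents": 1299,
                "updated": "2024-05-01T12:30:00+00:00"}
onprem_record = {"cust": "C-1001", "amount_usd": 12.99,
                 "updated_epoch": 1714566600}

def normalize(record: dict) -> dict:
    """Map either source's schema onto one canonical shape."""
    if "amount_cents" in record:                      # cloud warehouse shape
        return {
            "customer_id": record["customer_id"],
            "amount_usd": record["amount_cents"] / 100,
            "updated_at": datetime.fromisoformat(record["updated"]),
        }
    return {                                          # on-prem shape
        "customer_id": record["cust"],
        "amount_usd": record["amount_usd"],
        "updated_at": datetime.fromtimestamp(record["updated_epoch"],
                                             tz=timezone.utc),
    }

# Only after normalization does the model see a consistent picture.
features = [normalize(r) for r in (cloud_record, onprem_record)]
```

Multiply this by dozens of systems and thousands of fields, and the latency and cost overhead she mentions becomes clear.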
Can you share an example of how incomplete or outdated data has affected AI outcomes in a real-world situation?
Absolutely. I’ve seen this in retail, where a company used AI for inventory forecasting but relied on outdated sales data from only one region. The model predicted demand inaccurately, leading to overstock in some stores and shortages in others. This not only hurt sales but also frustrated customers and increased logistics costs. It was a clear case of how partial data can cascade into operational headaches, showing why comprehensive, current data is non-negotiable for AI success.
The concept of data lineage comes up a lot when discussing trustworthy AI outputs. Can you explain what data lineage means in this context?
Data lineage is essentially the ability to trace data from its origin to its final use in an AI model. It’s about knowing where the data came from, who touched it, when it was updated, and how it’s been transformed along the way. Think of it as a detailed breadcrumb trail—every step is documented. This transparency is crucial for validating AI outputs, ensuring they’re based on accurate, reliable information rather than guesswork or corrupted inputs.
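As a rough illustration of that breadcrumb trail, here is a minimal Python sketch of a lineage record. The asset name and processing steps are invented, and real lineage platforms capture far richer metadata, but the structure shows what “every step is documented” means in practice:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageStep:
    actor: str        # who or what touched the data
    operation: str    # what was done to it
    at: datetime      # when it happened

@dataclass
class DataAsset:
    source: str                               # where the data originated
    steps: list[LineageStep] = field(default_factory=list)

    def record(self, actor: str, operation: str) -> None:
        """Append one breadcrumb to the trail."""
        self.steps.append(
            LineageStep(actor, operation, datetime.now(timezone.utc)))

    def trail(self) -> str:
        """Render the full origin-to-model history for an audit."""
        lines = [f"origin: {self.source}"]
        lines += [f"{s.at.isoformat()} | {s.actor} | {s.operation}"
                  for s in self.steps]
        return "\n".join(lines)

# Trace a dataset from ingestion through to model training.
asset = DataAsset(source="crm.events.v2")
asset.record("etl-job-42", "deduplicated and anonymized")
asset.record("feature-store", "joined with transaction history")
asset.record("churn-model-v7", "used as training input")
print(asset.trail())
```

With a trail like this, validating an AI output becomes a lookup rather than guesswork: you can see exactly which inputs fed the model and when they were last touched.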
Why is data lineage especially important in industries like healthcare or finance?
In regulated sectors like healthcare and finance, accountability is everything. If an AI model makes a recommendation—say, a clinical diagnosis or a loan approval—you need to trace back to the exact data points that influenced it. Was it a patient’s medical history or a credit score? Was the data current? Without lineage, you can’t defend the decision or comply with strict regulations. It’s also about trust—stakeholders need assurance that the AI isn’t pulling from flawed or biased sources, especially when lives or livelihoods are at stake.
There’s a growing emphasis on bringing AI to the data instead of moving data to the AI model. Can you unpack what this approach entails?
Bringing AI to the data means deploying AI algorithms directly where the data lives—whether that’s in a cloud, a data center, or at the edge—rather than hauling massive datasets to a centralized model. It flips the traditional approach on its head. Instead of dealing with the logistical nightmare of moving petabytes of data, you embed intelligence at the source, allowing AI to process information in its native environment. It’s a game-changer for efficiency and security.
How does this method of bringing AI to the data help tackle challenges like latency or high compute costs?
When you move data to a model, especially across networks or between clouds, you introduce latency—delays that slow down insights. You also rack up compute costs from transferring and storing redundant copies. By bringing AI to the data, you cut out those middle steps. Processing happens locally, so results come faster, and you’re not burning budget on unnecessary data movement or duplicate storage. It’s a leaner, quicker way to get value from AI.
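A minimal sketch of the pattern, with hypothetical site names and a stand-in scoring function: the model logic travels to each location where data lives, and only small summaries cross the network rather than the raw datasets:

```python
def score(record: dict) -> float:
    """Stand-in for a model inference step that runs wherever data lives."""
    return 1.0 if record["usage_hours"] > 100 else 0.0

def run_at_site(site_records: list[dict]) -> dict:
    """Executes locally at a cloud region, data center, or edge site."""
    flagged = sum(score(r) for r in site_records)
    return {"rows_seen": len(site_records), "flagged": flagged}

# Hypothetical data locations; in reality each would be a remote runtime.
sites = {
    "edge-plant-a": [{"usage_hours": 120}, {"usage_hours": 40}],
    "cloud-eu":     [{"usage_hours": 300}],
}

# Only the tiny per-site summaries move, not the underlying records.
summaries = {name: run_at_site(records) for name, records in sites.items()}
total_flagged = sum(s["flagged"] for s in summaries.values())
print(summaries, total_flagged)
```

The design choice is the same one Chloe describes: pay to move a few kilobytes of results instead of petabytes of inputs, and keep the raw data in its native environment.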
What does a unified data and AI architecture look like in practice, and why is it so beneficial?
A unified architecture is like a single, seamless ecosystem where data and AI work hand in hand, regardless of where the data sits. In practice, it means having a platform that connects all your data sources—clouds, on-prem systems, edge devices—and lets AI operate across them with consistent rules. The benefit is huge: you get faster model deployment, fewer blind spots, and a single source of truth. It also simplifies governance, so policies around security and compliance aren’t patchwork but uniform across the board.
How does a unified architecture support consistent governance and policy enforcement?
When everything operates within one architecture, you can apply the same security protocols, access controls, and compliance standards everywhere. There’s no worrying about one cloud having lax rules while another is locked down tight. It centralizes oversight—whether data is at the edge or in a data center, the same policies govern how AI accesses and uses it. This consistency reduces risk and builds trust, especially in industries where a single misstep can lead to hefty fines or reputational damage.
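To illustrate what “the same policies govern everywhere” can look like, here is a minimal sketch with hypothetical roles, datasets, and fields: a single policy table gates and masks every read, no matter which environment the request comes from:

```python
# One central policy definition, applied uniformly across cloud,
# on-prem, and edge. All names here are illustrative.
POLICY = {
    "pii.customer_profile": {
        "allowed_roles": {"analyst", "ml-service"},
        "mask_fields": {"ssn", "email"},
    },
}

def enforce(dataset: str, role: str, record: dict) -> dict:
    """Apply the same rules regardless of where the request originates."""
    rules = POLICY.get(dataset)
    if rules is None or role not in rules["allowed_roles"]:
        raise PermissionError(f"{role} may not read {dataset}")
    # Mask sensitive fields uniformly instead of per-system exceptions.
    return {k: ("***" if k in rules["mask_fields"] else v)
            for k, v in record.items()}

row = {"name": "Ada", "email": "ada@example.com", "ssn": "123-45-6789"}
print(enforce("pii.customer_profile", "analyst", row))
# enforce("pii.customer_profile", "intern", row)  # raises PermissionError
```

Because every access path funnels through one definition, there is no cloud with lax rules sitting next to a locked-down data center; changing a policy once changes it everywhere.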
What are your thoughts on the future of enterprise AI, and where do you see it heading in the next few years?
I’m optimistic about enterprise AI, but I think its future hinges on solving the data challenge. Over the next few years, we’ll see more companies adopting unified architectures and bringing AI to the data as a standard practice. There’ll be a bigger push for real-time insights, especially as edge computing grows. I also expect AI governance to become a hot topic—trust and transparency will be non-negotiable as AI touches more critical decisions. Ultimately, the organizations that treat data as the foundation of AI, not an afterthought, will be the ones leading the pack.
