Today, we’re thrilled to sit down with Chloe Maraina, a visionary in Business Intelligence with a deep passion for telling compelling stories through big data. With her expertise in data science and her forward-thinking approach to data management and integration, Chloe has become a leading voice in how organizations harness data for strategic impact. In this conversation, we dive into why people and processes must come before technology, the power of starting small with targeted use cases, the value of strategic partnerships, and the art of celebrating incremental wins to drive broader adoption of data quality practices.
How did you come to understand that forcing a data quality tool across an organization often leads to resistance rather than results?
Early in my career, I saw firsthand how top-down mandates for tool adoption created more frustration than progress. Teams felt overwhelmed and disconnected from the purpose of the technology. At one organization, we had a powerful data quality tool, but the push for universal adoption ignored the unique needs of different business units. It became clear that without buy-in from the people using it, even the best tools would just sit unused. That experience taught me the importance of focusing on understanding the human and operational challenges before introducing a technical solution.
What inspired you to prioritize people and processes over immediately jumping to a technological fix for data quality issues?
I realized that technology is just an enabler, not the solution itself. If the people using the data don’t trust it or don’t understand its value, no tool can fix that. I’ve found that taking the time to align processes—how data is collected, interpreted, and used—and engaging with the teams who interact with it daily creates a foundation for sustainable change. It’s about building a culture where data quality is everyone’s responsibility, not just a mandate from leadership.
Can you share some of the early hurdles you encountered when trying to implement a data quality initiative in a large organization?
One of the biggest hurdles was the lack of alignment across different teams. Each group had its own way of handling data, often with manual workarounds that they’d grown comfortable with. When I introduced a data quality tool, there was skepticism about whether it would actually make their jobs easier or just add another layer of complexity. Additionally, there was a gap in data literacy—some teams didn’t even realize why their data was considered ‘bad’ until we dug into their specific pain points. Overcoming that meant a lot of listening and patience.
Why do you think starting with just a couple of focused use cases is more effective than a broad, organization-wide rollout?
Starting small lets you prove value without overwhelming the organization. When you focus on one or two use cases, you can really understand the specific data challenges and tailor the solution to fit. It’s easier to get quick wins that build credibility. For instance, targeting a supply chain issue allowed us to show tangible improvements in decision-making, which then created a ripple effect. Other teams saw the success and became curious about how it could help them, rather than feeling forced into adoption.
How do you go about identifying the right starting point for a data quality project within a complex business environment?
I start by talking to as many stakeholders as possible to uncover where the pain points are most acute. It’s about finding an area where data quality issues are directly impacting business outcomes—like delays in supply chain decisions due to inconsistent data. I also look for teams that are already trying to address these issues on their own, even if informally. Those are often the best places to start because there’s already motivation to improve, and you can build on their existing efforts.
What’s your approach to building partnerships with different teams to support data quality initiatives?
Building partnerships is all about listening and aligning with their priorities. I make it a point to understand what keeps each team up at night and show how better data quality can help solve those problems. For example, working with a legal team wasn’t something I initially expected, but by connecting data quality to their need for accurate vendor contracts, we found common ground. It’s about being open to unexpected allies and demonstrating value in their terms, not just pushing a tool for the sake of it.
Why is celebrating small successes so critical in gaining momentum for data quality efforts across an organization?
Small successes create a story of progress that people can rally around. When teams see a win—like automating a tedious manual process and getting better insights—they start to believe in the initiative. Celebrating these moments, whether through a company newsletter or a quick shout-out in a meeting, builds excitement and trust. It shifts the perception from data quality being a burden to something that actually makes work easier and more impactful.
How do you ensure that business context remains the focus rather than getting distracted by the latest data quality tools on the market?
I always start with the ‘why’ behind the data issues. I work with teams to define what ‘bad data’ means to them—whether it’s missing information or inconsistent formats—and how it affects their goals. Only then do we look at tools, and even then, it’s about finding something that fits the problem, not chasing the newest shiny thing. Keeping the business problem at the core ensures that any solution we adopt is relevant and actually solves real issues, not just theoretical ones.
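To make that concrete, here is a minimal sketch of what business-defined quality rules can look like once a team has articulated them in plain language. The table, column names, and rules below are illustrative assumptions, not Chloe’s actual implementation; the pattern is simply that each check traces back to a problem a team named, not to a tool’s default checklist.

```python
import pandas as pd

# Hypothetical vendor records, the kind a supply chain or legal team
# might point to when describing their "bad data."
records = pd.DataFrame({
    "vendor_id": ["V001", "V002", None, "V004"],
    "contract_date": ["2024-01-15", "15/01/2024", "2024-02-01", None],
    "amount_usd": [12000.0, -500.0, 8300.0, 4100.0],
})

def quality_report(df: pd.DataFrame) -> dict:
    """Score each business-defined rule as the fraction of rows passing it."""
    total = len(df)
    return {
        # "Missing information": required identifiers must be present.
        "vendor_id_present": df["vendor_id"].notna().sum() / total,
        # "Inconsistent formats": dates must use the one agreed format.
        "date_format_ok": df["contract_date"].fillna("")
            .str.match(r"^\d{4}-\d{2}-\d{2}$").sum() / total,
        # A domain rule stated by the team: contract amounts are never negative.
        "amount_non_negative": (df["amount_usd"] >= 0).sum() / total,
    }

print(quality_report(records))
# {'vendor_id_present': 0.75, 'date_format_ok': 0.5, 'amount_non_negative': 0.75}
```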
Can you walk us through how improving data literacy among teams contributes to a shared understanding of data quality needs?
Data literacy is the foundation of any successful data quality effort. When teams understand what good data looks like and why it matters, they’re more likely to care about maintaining it. I’ve facilitated workshops and discussions to help teams articulate their data challenges in a common language. For example, one team might consider a record complete once its own required fields are filled in, while a downstream team needs additional fields to run its workflows at all. Bridging that gap through education helps everyone see the bigger picture and work toward the same standards.
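To illustrate that gap, here is a hypothetical sketch of how two teams can both call the same table “complete” while requiring different fields. The table, team names, and required-field lists are all invented for illustration:

```python
import pandas as pd

# Hypothetical orders table shared by several downstream consumers.
orders = pd.DataFrame({
    "order_id": ["A1", "A2", "A3"],
    "customer": ["Acme", "Globex", None],
    "ship_date": ["2024-03-01", None, "2024-03-04"],
    "tax_code": [None, "EU-19", "US-CA"],
})

# Each team's definition of "complete," made explicit and comparable.
required_fields = {
    "sales": ["order_id", "customer"],
    "finance": ["order_id", "customer", "tax_code"],
    "logistics": ["order_id", "ship_date"],
}

for team, fields in required_fields.items():
    complete = orders[fields].notna().all(axis=1)
    print(f"{team}: {complete.mean():.0%} of rows are complete")
# sales: 67%, finance: 33%, logistics: 67% -- the same table scores
# differently per team, which is the conversation a shared standard resolves.
```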
What is your forecast for the future of data quality management, especially with emerging technologies like AI and machine learning?
I believe data quality management will become even more critical as we integrate AI and machine learning into business processes. These technologies rely heavily on clean, reliable data to deliver accurate results, so the demand for robust data quality practices will only grow. I foresee a shift toward more automated, proactive data quality monitoring, where AI itself helps identify and even correct issues before they impact decisions. However, the human element—understanding context and building trust—will remain just as important. We’ll need to balance these advanced tools with a continued focus on people and processes to truly unlock their potential.
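As a toy sketch of the proactive monitoring Chloe anticipates, imagine tracking a simple quality metric over time and flagging days that drift outside recent history. The daily null rates below are fabricated for illustration, and a rolling z-score stands in for the ML-based detectors a production system might use:

```python
import pandas as pd

# Hypothetical daily null rate for a critical column; the last day spikes.
null_rate = pd.Series(
    [0.02, 0.03, 0.02, 0.04, 0.03, 0.02, 0.03, 0.15],
    index=pd.date_range("2024-06-01", periods=8, freq="D"),
)

# Compare each day against the preceding week's behavior.
window = null_rate.rolling(7, min_periods=3)
zscore = (null_rate - window.mean().shift(1)) / window.std().shift(1)

# Flag days that sit far outside recent history, so the issue is
# investigated before it reaches a dashboard or a model.
alerts = null_rate[zscore > 3]
print(alerts)  # flags 2024-06-08, the anomalous spike
```

Even in a more sophisticated version, the alert is only the starting point: deciding whether the spike is a pipeline bug or a real business change still takes the human context Chloe emphasizes.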
