Chloe Maraina is a powerhouse in the world of business intelligence, known for her unique ability to weave complex big data into clear, visual stories. With a background that bridges the gap between technical data science and high-level management strategy, she has spent years helping enterprises navigate the treacherous waters of digital transformation. Today, she joins us to discuss the seismic shifts in the analytics industry, from the move to cloud-first models to the sudden, explosive rise of agentic AI.
Since 2018, the shift from traditional business intelligence to a cloud-first SaaS model has redefined the industry. What were the primary hurdles in migrating legacy enterprise customers to managed cloud environments, and how did those efforts lay the groundwork for today’s data integration capabilities?
When leadership changes hands after an eight-year tenure, you really start to appreciate the scale of the cloud migration journey that began back in January 2018. The biggest hurdle wasn’t just the technology; it was convincing legacy customers to move away from the comfort of on-premises servers to a fully managed SaaS environment. The industry’s 2018 move to a more robust cloud business platform forced a necessary evolution in how we handle data at scale. These efforts were essential because they broke down the silos that had existed for decades, allowing for the deep data integration capabilities we see now. Without that five-year process to reorganize and expand, we wouldn’t have the foundation required to feed the hungry, data-intensive AI applications that are currently taking center stage.
Expanding a specialized analytics tool into a comprehensive data platform requires a rigorous acquisition and integration strategy. How do you assess the technical success of merging disparate data management tools into a unified ecosystem, and what metrics prove that a platform is truly “full-featured”?
The true test of a platform’s success lies in how seamlessly it can transition from being a simple visualization tool to a full-featured data powerhouse. Over the last several years, we’ve seen a concerted effort to add data integration and AI capabilities through a series of strategic acquisitions that took nearly half a decade to fully bake into the ecosystem. You know a platform is “full-featured” when it no longer just generates a pretty report but actually manages the entire lifecycle of data, from the lakehouse to the final insight. It’s a tangible shift for the user; they stop worrying about where the data lives and start focusing on the speed of the outcome. The metrics that matter here aren’t just uptime or query speed, but the diversity of workloads—like agentic AI and automated data pipelines—that the platform can now support under one roof.
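The workload-diversity metric described above can be made concrete with a minimal sketch. This is purely illustrative: the job records, type names, and function are invented for this example and do not correspond to any vendor's actual telemetry API.

```python
from collections import Counter

# Hypothetical sketch: instead of measuring a platform on uptime or query
# latency alone, measure how many distinct workload types it actually runs
# "under one roof" and how the mix breaks down. All names are invented.

def workload_mix(jobs):
    """Return (number of distinct workload types, share of each type).

    `jobs` is an iterable of dicts, each with a 'type' key such as
    'bi_report', 'agentic_ai', or 'data_pipeline'.
    """
    counts = Counter(job["type"] for job in jobs)
    total = sum(counts.values())
    shares = {t: n / total for t, n in counts.items()}
    return len(counts), shares
```

A platform whose job log shows only `bi_report` entries would score 1 here, while one also running agent and pipeline workloads scores higher, which is roughly the "full-featured" signal the answer describes.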
The emergence of generative AI has moved the spotlight away from standard reports and toward automated agents and insights. How should organizations restructure their data pipelines to support these agentic technologies, and what specific steps ensure that the resulting AI applications are both trusted and actionable?
The launch of ChatGPT in November 2022 was a seismic event that forced every analytics vendor to pivot from being a provider of reports to an enabler of AI. Organizations now have to restructure their pipelines to prioritize “trusted data,” because an AI agent is only as good as the information it’s fed. To make these applications actionable, companies are now deploying tools specifically designed to develop, deploy, and manage AI agents that can work autonomously. It’s a transition that requires moving beyond static dashboards and toward a lakehouse architecture that can handle real-time data flows. When you see a company unveiling these features at a major user conference in Kissimmee, you realize the goal is to turn massive data sets into meaningful action that a business can actually bank on.
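The idea that "an AI agent is only as good as the information it's fed" often takes the form of a validation gate early in the pipeline. Below is a minimal sketch of such a gate, assuming invented record fields and source names; it is not any specific product's implementation, just one way to enforce a trusted-data rule before records reach an agent.

```python
from dataclasses import dataclass

# Hypothetical illustration: a "trusted data" gate a pipeline might run
# before handing records to an autonomous agent. Field names, thresholds,
# and source names are assumptions made for this sketch.

@dataclass
class Record:
    source: str             # where the record originated
    freshness_seconds: int  # age of the record when it reaches the gate
    complete: bool          # whether all required fields are present

def is_trusted(record: Record, max_age_seconds: int = 300) -> bool:
    """A record is 'trusted' only if it is complete, fresh, and from a known source."""
    known_sources = {"warehouse", "lakehouse"}
    return (
        record.complete
        and record.freshness_seconds <= max_age_seconds
        and record.source in known_sources
    )

def gate(records):
    """Split records into those an agent may consume and those to quarantine."""
    trusted = [r for r in records if is_trusted(r)]
    quarantined = [r for r in records if not is_trusted(r)]
    return trusted, quarantined
```

The design point is that trust checks run before the agent sees anything, so a stale or unknown-source record is quarantined for review rather than silently shaping an autonomous action.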
Economic volatility and the rapid pace of innovation often complicate the transition from private equity ownership back to public markets. What are the strategic benefits of remaining private during a massive technological pivot, and how does that status impact a company’s ability to compete with hyperscale AI providers?
Remaining under the wing of a private equity firm, as in the $3 billion Thoma Bravo acquisition of 2016, provides a vital shield against the relentless scrutiny of the public markets. This privacy allowed for a massive, multi-year cloud transformation that might have been impossible if leadership had to answer for every quarterly dip in revenue. Even when a company filed paperwork for an initial public stock offering in January 2022, economic uncertainty and the sudden shift toward generative AI proved that being private offered the flexibility to delay and recalibrate. This status is a huge competitive advantage when going up against hyperscale providers because it allows for long-term R&D without the pressure of immediate stock price fluctuations. It gave the space needed to evolve from a BI vendor into an AI-first organization during one of the most volatile periods in tech history.
Launching a suite of AI tools and lakehouse features just before a major leadership transition creates a unique set of challenges. How can a company maintain its innovation roadmap during a change in the CEO role, and what are the immediate priorities for ensuring that new AI initiatives gain traction with global customers?
It is certainly a bold move to announce a leadership change just two weeks after a massive annual user conference where new AI tools and lakehouse features were the stars of the show. To maintain momentum, the priority must be on demonstrating the reliability of these new agentic technologies to global customers who are often skeptical of rapid changes. The roadmap stays intact if the vision of turning trusted data into action is deeply embedded in the company culture, rather than just being the mandate of a single person. You have to ensure that the transition in the front office doesn’t slow down the technical support and deployment of these new AI assistants. It’s about keeping the focus on what those eight years of growth achieved, ensuring the new leadership can hit the ground running with a platform that is already positioned for the future.
What is your forecast for the future of data analytics and AI integration?
I believe we are entering an era where the distinction between “data management” and “AI” will disappear entirely. We are moving toward a reality where data is not just something we store and analyze, but something that actively works for us through autonomous agents that can anticipate business needs before a human even looks at a dashboard. The organizations that will win are the ones that spent the last few years building a foundation of trusted data, rather than just chasing the latest AI hype. My forecast is that within the next three years, the most successful enterprises will be those that have fully integrated their lakehouse architectures with agentic AI, turning their entire data ecosystem into a self-correcting, insight-generating machine. It’s a thrilling time to be in this field, as we watch the legacy of cloud transformation finally meet the limitless potential of artificial intelligence.
