I’m thrilled to sit down with Chloe Maraina, a trailblazer in the realm of Business Intelligence with a deep passion for crafting compelling visual stories through big data analysis. With her expertise in data science and a forward-thinking vision for data management and integration, Chloe brings a unique perspective on how technologies like graph-based systems are shaping the future of AI. Today, we’ll dive into the exciting world of agentic and generative AI, exploring how innovative investments and new products are addressing industry challenges, the role of graph technology as a foundational layer for AI systems, and the impact on enterprises and startups alike.
Can you share what’s driving the massive push into agentic and generative AI right now, especially with significant investments like the $100 million we’ve seen recently?
Absolutely. The surge in interest and investment in agentic and generative AI comes from a pressing need to make AI systems more actionable and reliable. Companies are seeing that while AI has immense potential, many projects stall at the pilot stage. The industry is grappling with issues like lack of contextual understanding and explainable results. A substantial investment like this signals a commitment to solving those problems by building infrastructure that supports smarter, more context-aware AI—think of it as giving AI the memory and reasoning capabilities it needs to move from experimental to practical.
What are some of the biggest hurdles in the AI industry that make such large-scale investments necessary at this moment?
One of the biggest hurdles is the high failure rate of AI pilots—studies show about 95% don’t deliver returns. The core issue often lies in the inability of these systems to handle complex, interconnected data or to provide reasoning behind their outputs. Without a strong foundation to ground AI in real-world context, businesses can’t trust the results. Investments of this scale are crucial to develop technologies that bridge that gap, focusing on systems that can understand relationships and deliver consistent, explainable outcomes for enterprises.
How does becoming the ‘default knowledge layer’ for agentic AI systems play into the broader vision for the future of AI technology?
Becoming the ‘default knowledge layer’ means being the go-to infrastructure that powers how AI systems store, retrieve, and reason with information. It’s about creating a backbone where AI doesn’t just process data in isolation but understands connections and context—like how humans think in networks of ideas. This vision aligns with the future of AI, where systems need to be more autonomous and capable of making decisions with clarity and accountability. It’s a shift from AI as a tool to AI as a partner in problem-solving.
Why is establishing this kind of foundational role so critical for the evolution of AI systems?
It’s critical because without a solid knowledge layer, AI systems struggle with inconsistency and lack of trust. If AI can’t remember past interactions or reason through complex relationships, it’s limited to surface-level tasks. A foundational layer ensures AI can build on previous knowledge, adapt to new contexts, and provide transparent decision-making. This is essential for scaling AI from niche applications to widespread enterprise use, where reliability and explainability are non-negotiable.
Many generative AI projects struggle to move from pilot to production. What do you see as the main barriers holding them back?
The main barriers are often tied to data integration and the inability of AI models to handle real-world complexity. A lot of pilots look great in controlled environments but fall apart when faced with messy, interconnected data or when they can’t explain their outputs. There’s also a trust issue—businesses won’t invest in scaling something they don’t fully understand or can’t rely on. These projects need a way to ground AI in structured, meaningful data relationships to make that leap from prototype to practical application.
How can graph-based technology help overcome these challenges and support companies in scaling their AI initiatives?
Graph-based technology excels at mapping relationships and connections, which is exactly what AI needs to make sense of complex data. Unlike traditional databases, graphs mimic how humans think—through networks of ideas and context. This allows AI to not just process raw information but to understand how pieces fit together, improving reasoning and decision-making. For companies, this means AI can move beyond isolated tasks to deliver insights and actions that are relevant and trustworthy, paving the way for successful scaling.
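To make that concrete, here's a minimal sketch of the idea in plain Python—an in-memory graph of illustrative triples (the names and data are invented for the example, not from any real system or vendor API). A flat key-value lookup would only return facts directly attached to an entity; walking the graph surfaces the connected context a few hops out:

```python
from collections import deque

# A tiny in-memory knowledge graph of (subject, relation, object) triples.
# All entities here are hypothetical, for illustration only.
triples = [
    ("Alice", "WORKS_AT", "Acme"),
    ("Acme", "SUPPLIES", "Globex"),
    ("Globex", "LOCATED_IN", "Berlin"),
    ("Alice", "KNOWS", "Bob"),
]

# Build an adjacency list so relationships can be walked in both directions.
graph = {}
for s, rel, o in triples:
    graph.setdefault(s, []).append((rel, o))
    graph.setdefault(o, []).append((f"INV_{rel}", s))

def related(start, max_hops=2):
    """Return every node reachable within max_hops of start, with its distance."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if seen[node] == max_hops:
            continue  # don't expand beyond the hop limit
        for _, neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen[neighbor] = seen[node] + 1
                queue.append(neighbor)
    del seen[start]
    return seen

print(related("Alice"))  # {'Acme': 1, 'Bob': 1, 'Globex': 2}
```

A direct lookup on "Alice" would only find Acme and Bob; the two-hop traversal also reaches Globex, which is the kind of connected context the interview is describing. Production graph databases do this with dedicated query languages and indexes, but the principle is the same.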
Can you walk us through one of the innovative tools recently introduced, like a platform for building AI agents, and explain its purpose?
Sure, let’s talk about a tool like Aura Agent, which is designed to help enterprises create and deploy AI agents quickly using their own data. Its purpose is to simplify the process of building AI that’s tailored to specific business needs—think of it as a customizable assistant that can be up and running in minutes. It’s grounded in graph technology, so these agents aren’t just reacting to inputs; they’re understanding context and relationships, making them far more effective for real-world applications.
How does this kind of tool specifically empower businesses to accelerate their AI adoption?
Tools like this lower the barrier to entry for AI adoption by making the development process faster and more intuitive. Businesses don’t need a huge team of data scientists to build something custom—they can leverage a platform that integrates their data and deploys agents with built-in reasoning capabilities. This speed and accessibility mean companies can test and iterate on AI solutions without massive upfront costs or time delays, ultimately driving faster innovation and deployment across various use cases.
Another exciting development is the integration of graph-based memory and reasoning into AI applications. Can you explain how this enhances existing systems?
Integrating graph-based memory and reasoning into AI applications is a game-changer because it gives AI a persistent understanding of context. Traditional AI often forgets past interactions or struggles with disconnected data points. With a graph approach, AI can store and recall relationships over time, much like a human memory. This leads to better decision-making, as the system can draw on a web of knowledge to provide more accurate and relevant responses, significantly enhancing performance in dynamic environments.
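As a toy illustration of that persistence (a sketch under invented names—real systems would back this with a graph database rather than a Python dict), an agent can accumulate facts as edges across conversation turns and recall them later:

```python
# A toy "graph memory" for an agent: facts accumulate across turns as
# edges, and later questions can draw on the stored subgraph.
# All entity names here are hypothetical.

class GraphMemory:
    def __init__(self):
        self.edges = {}  # node -> list of (relation, node)

    def remember(self, subject, relation, obj):
        """Store a fact as a directed edge."""
        self.edges.setdefault(subject, []).append((relation, obj))

    def recall(self, subject):
        """Return everything directly known about a subject."""
        return self.edges.get(subject, [])

memory = GraphMemory()
# Turn 1: the user mentions their order.
memory.remember("order_42", "CONTAINS", "laptop")
memory.remember("order_42", "SHIPPED_TO", "Berlin")
# Several turns later: a follow-up question about the same order
# can be grounded in that accumulated context instead of starting cold.
print(memory.recall("order_42"))
# [('CONTAINS', 'laptop'), ('SHIPPED_TO', 'Berlin')]
```

The point of the sketch is the shape of the solution, not the implementation: because context lives as explicit relationships rather than in a transient prompt, it survives across interactions and can be traversed when reasoning about a new request.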
What kind of feedback or impact have you seen from early adopters of these advanced graph-based AI solutions?
Early adopters have been really excited about how these solutions bring clarity and usability to AI. For instance, features like natural language querying—where users can ask questions in plain English and get meaningful answers—have been a big hit. Companies are seeing that their teams, even those without technical backgrounds, can interact with AI more naturally. The impact is twofold: it democratizes access to AI within organizations and boosts confidence in the technology as a reliable tool for everyday decision-making.
With major enterprises already leveraging graph technology for AI, can you share a success story that highlights its real-world impact?
I can’t name specifics, but let me paint a picture: imagine a large retail enterprise. They used graph technology to power their recommendation engine for personalized customer experiences. By mapping out intricate relationships between products, customer preferences, and purchase history, their AI could suggest items with uncanny accuracy. The result was a significant uptick in customer satisfaction and sales conversions. It showed how understanding connections, rather than just raw data, can transform a business operation through AI.
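The core of a relationship-driven recommender like the one described can be sketched in a few lines—this is a deliberately simplified version with made-up purchase data, not the enterprise's actual system. Customers and products form a bipartite graph, and recommendations follow the paths through customers with overlapping baskets:

```python
from collections import Counter

# Purchase history as customer -> set of products (illustrative data only).
purchases = {
    "ana":  {"tent", "boots", "stove"},
    "ben":  {"tent", "boots", "lantern"},
    "caro": {"boots", "lantern", "poles"},
}

def recommend(customer, k=2):
    """Suggest products bought by customers with overlapping baskets,
    weighted by how much their history overlaps with the target's."""
    mine = purchases[customer]
    scores = Counter()
    for other, theirs in purchases.items():
        if other == customer:
            continue
        overlap = len(mine & theirs)  # shared purchases = connecting paths
        for item in theirs - mine:    # only suggest things not yet bought
            scores[item] += overlap
    return [item for item, _ in scores.most_common(k)]

print(recommend("ana"))  # ['lantern', 'poles']
```

Ana shares two purchases with Ben and one with Caro, so the lantern (reachable through both) outranks the poles. A production system would run this kind of traversal over millions of nodes with time decay, categories, and other signals, but the relationship-counting intuition is the same.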
What’s been the most valuable lesson learned from working with large-scale enterprises on their AI journeys?
The biggest lesson is the importance of trust and transparency in AI adoption. Large enterprises often have complex systems and high stakes, so they need to see how and why AI makes decisions before they fully commit. We’ve learned that providing explainable results—showing the ‘why’ behind an AI’s recommendation or action—is just as important as the result itself. Building that trust through clear, relationship-driven insights has been key to helping them embrace AI at scale.
Looking ahead, what is your forecast for the role of graph technology in the future of agentic and generative AI?
I believe graph technology will become the cornerstone of agentic and generative AI in the coming years. As AI systems evolve to be more autonomous and decision-oriented, the need for contextual understanding and reasoning will only grow. Graphs are uniquely positioned to provide that by mirroring how knowledge naturally connects. My forecast is that within a decade, most advanced AI systems will rely on graph-based infrastructure as their knowledge foundation, driving a new era of intelligent, trustworthy technology across industries.