ThoughtSpot Tackles AI Costs With New Agentic Tools

Passionate about creating compelling visual stories through the analysis of big data, Chloe Maraina is our Business Intelligence expert with an aptitude for data science and a vision for the future of data management and integration. Today, we’re diving deep into the seismic shifts happening in enterprise data platforms, exploring how a new generation of agentic AI is moving beyond simple analytics to automate entire data workflows, from preparation to insight. We’ll discuss how these platforms are tackling the crippling problem of unpredictable cloud costs, what truly sets a data agent apart from a simple copilot, and what the future holds as these intelligent systems learn to work together.

Enterprises often face unpredictable cloud costs when scaling AI-driven queries, which can hinder adoption. How does a caching capability like SpotCache address this financial uncertainty for leaders, and what key performance metrics typically improve for data teams after implementation? Please provide a step-by-step example.

This is a fantastic question because it gets right to the heart of one of the biggest barriers to widespread AI adoption. For C-suite leaders, the fear of an explosive, unpredictable cloud warehouse bill can put a complete stop to promising innovation. A capability like SpotCache is a sleeper hit because it introduces cost certainty. The magic is in creating cached representations of the data that can be queried an unlimited number of times within the platform. That completely decouples analysts' exploratory work from the pay-per-query model of the underlying cloud data warehouse. Suddenly, a leader can greenlight an AI initiative knowing the cost is contained.

For a data team, the impact is immediate. First, query performance skyrockets because they are hitting a highly optimized, local cache. Second, their productivity soars. Imagine a team is building a new AI-driven sales forecast. Step one would be using SpotCache to create a cached snapshot of terabytes of historical sales, product, and customer data. Step two, the data scientists and analysts can then run thousands of iterative queries—profiling, testing hypotheses, and training models—directly against that snapshot without ever touching the live cloud warehouse. The result is a predictable budget and, more importantly, a culture of fearless exploration that accelerates adoption and innovation.
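
To make that two-step pattern concrete, here is a minimal Python sketch. The WarehouseClient and SpotCacheClient classes are hypothetical stand-ins, not ThoughtSpot's actual API; the point is simply that one billed extraction materializes the snapshot, after which a thousand exploratory queries add nothing to the warehouse bill.

```python
# Illustrative sketch only: class names and behavior are assumptions, not a vendor API.
from dataclasses import dataclass

@dataclass
class WarehouseClient:
    """Simulates a pay-per-query cloud warehouse."""
    cost_per_query: float = 5.00
    total_cost: float = 0.0

    def query(self, sql: str) -> list[dict]:
        self.total_cost += self.cost_per_query       # every hit against the warehouse is billed
        return [{"region": "EMEA", "revenue": 1_200_000}]   # placeholder result rows

@dataclass
class SpotCacheClient:
    """Simulates a cached snapshot: one paid extraction, then unlimited local reads."""
    warehouse: WarehouseClient
    snapshot: list[dict] | None = None

    def materialize(self, sql: str) -> None:
        self.snapshot = self.warehouse.query(sql)    # step one: a single billed extraction

    def query(self, sql: str) -> list[dict]:
        assert self.snapshot is not None, "call materialize() first"
        return self.snapshot                         # step two: iterative reads, no warehouse spend

warehouse = WarehouseClient()
cache = SpotCacheClient(warehouse)
cache.materialize("SELECT * FROM sales_history")
for _ in range(1000):
    cache.query("SELECT region, SUM(revenue) FROM sales_history GROUP BY region")
print(f"Warehouse spend after 1,000 exploratory queries: ${warehouse.total_cost:.2f}")
```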

Many data prep tools now offer “smart suggestions” or copilots. How does a natural language data prep agent fundamentally change an analyst’s daily workflow compared to these assistants, and what specific, complex data preparation tasks can it fully automate? Please share a detailed use case.

The difference is truly night and day; it’s the difference between having a helpful backseat driver and having an expert chauffeur. A copilot or a “smart suggestion” feature observes your work and offers tips—it might highlight a column with dirty data or suggest a join. It’s helpful, but the analyst is still manually performing every step. An agentic model, on the other hand, takes instructions and executes the entire workflow. The analyst transitions from being a manual laborer to being a manager of data tasks.

Let’s walk through a common, complex scenario. An analyst receives several new data sources to blend for a customer churn analysis—a few massive CSV files from a marketing campaign, a direct connection to a cloud application, and a Google Sheet with manual adjustments. Instead of painstakingly inspecting each source, they can simply tell the data prep agent: “Profile these new datasets, identify and flag all schema discrepancies, generate the SQL to blend them with our primary customer table based on the ‘customer_id’ field, and troubleshoot any errors in the process.” The agent doesn’t just suggest these steps; it performs them autonomously. This frees the analyst from hours of tedious, error-prone work to focus on the higher-value strategic analysis that actually drives business decisions.
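
As a rough illustration of the work the agent takes on, here is a hedged Python sketch with toy data. The helper names (profile, flag_schema_discrepancies, generate_blend_sql) are assumptions for illustration, not any vendor's real interface.

```python
# Illustrative sketch: an agent-style prep workflow on toy stand-ins for the three sources above.
import pandas as pd

def profile(df: pd.DataFrame, name: str) -> dict:
    """Basic profiling an agent might run on each new source."""
    return {
        "source": name,
        "rows": len(df),
        "columns": df.dtypes.astype(str).to_dict(),
        "null_counts": df.isna().sum().to_dict(),
    }

def flag_schema_discrepancies(profiles: list[dict], key: str = "customer_id") -> list[str]:
    """Flag sources that are missing the join key or disagree on its type."""
    issues, key_types = [], set()
    for p in profiles:
        if key not in p["columns"]:
            issues.append(f"{p['source']}: missing join key '{key}'")
        else:
            key_types.add(p["columns"][key])
    if len(key_types) > 1:
        issues.append(f"join key '{key}' has conflicting types: {key_types}")
    return issues

def generate_blend_sql(sources: list[str], key: str = "customer_id") -> str:
    """Emit the join SQL the agent would hand back for review."""
    base, *rest = sources
    joins = "\n".join(f"LEFT JOIN {s} USING ({key})" for s in rest)
    return f"SELECT *\nFROM {base}\n{joins};"

marketing = pd.DataFrame({"customer_id": [1, 2], "campaign": ["A", "B"]})
app_data  = pd.DataFrame({"customer_id": [1, 2], "logins": [12, 3]})
sheet     = pd.DataFrame({"customer_id": [2], "manual_adjustment": [-50.0]})

profiles = [profile(df, n) for df, n in
            [(marketing, "marketing_csv"), (app_data, "cloud_app"), (sheet, "google_sheet")]]
print(flag_schema_discrepancies(profiles))
print(generate_blend_sql(["customers", "marketing_csv", "cloud_app", "google_sheet"]))
```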

Some analytics vendors are moving “upstream” into data preparation, collapsing prep, modeling, and analysis into a single workflow. What are the primary advantages for a business in adopting this unified agentic model, and what organizational shifts are needed to fully capitalize on it?

The primary advantage is the radical reduction of friction. In a traditional setup, you have this disjointed assembly line. A data engineer preps the data in one tool, hands it off to a data modeler who uses another, who then passes it to a BI analyst for visualization in a third. Each handoff is a potential point of failure, miscommunication, and delay. By collapsing this into a single, agentic workflow, you’re creating a cohesive “AI readiness” pipeline. The business gains tremendous speed and agility. Insights that once took weeks can now be generated in hours because the entire process is streamlined and orchestrated by AI.

To fully capitalize on this, however, organizations need to rethink their team structures. The old model of siloed data teams becomes a bottleneck. The shift is toward more integrated, cross-functional teams that own the data lifecycle from end to end. You need analysts who understand data prep and engineers who understand the business context of the analysis. This new operating model requires breaking down walls and empowering teams with unified platforms that reflect this more holistic approach to data. It’s less about specialized tool jockeys and more about versatile data strategists.

Looking beyond current capabilities, a key challenge is coordinating multiple specialized agents, such as a prep agent communicating with a security agent. What are the main technical and governance hurdles to creating this “hive” of agents, and how can they be overcome to ensure secure, autonomous operations?

This is the next frontier, moving from a single agent to a coordinated “hive” of specialists. The technical hurdles are significant. You need a robust communication protocol and a central orchestrator that allow these agents to interact securely and efficiently. Imagine a prep agent needing to transform a dataset; it can’t just act alone. It must first query a security agent to understand the row-level permissions for the user requesting the data. The security agent must then respond with the appropriate policies, which the prep agent then applies during the transformation process, all without human intervention. This requires a sophisticated, context-aware foundation, like a Model Context Protocol server, that governs these interactions.
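
To ground that handshake, here is a minimal Python sketch. The PrepAgent and SecurityAgent classes and their message shapes are hypothetical, standing in for whatever orchestration layer actually brokers the exchange.

```python
# Illustrative sketch of the prep-agent/security-agent handshake described above; names are assumptions.
from dataclasses import dataclass

@dataclass
class RowPolicy:
    column: str
    allowed_values: set

class SecurityAgent:
    """Answers policy lookups; in a real system this would sit behind an orchestrator."""
    def policies_for(self, user: str, dataset: str) -> list[RowPolicy]:
        # Toy rule: regional analysts only see their own region's rows.
        if user == "emea_analyst":
            return [RowPolicy(column="region", allowed_values={"EMEA"})]
        return []

class PrepAgent:
    def __init__(self, security: SecurityAgent):
        self.security = security

    def transform(self, user: str, dataset: str, rows: list[dict]) -> list[dict]:
        # Step 1: ask the security agent for row-level policies before touching the data.
        policies = self.security.policies_for(user, dataset)
        # Step 2: apply those policies during the transformation, with no human in the loop.
        for policy in policies:
            rows = [r for r in rows if r.get(policy.column) in policy.allowed_values]
        return rows

rows = [{"region": "EMEA", "revenue": 10}, {"region": "APAC", "revenue": 20}]
prep = PrepAgent(SecurityAgent())
print(prep.transform("emea_analyst", "sales", rows))   # only the EMEA row survives
```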

The governance hurdles are just as challenging. Who is responsible when an autonomous process goes wrong? How do you audit a decision made through a chain of five different agents? Overcoming this requires building trust and transparency into the system from day one. Every action taken by an agent must be logged and explainable. We need to establish clear rules of engagement and robust security frameworks that ensure data is handled securely and ethically, even when the process is fully autonomous. It’s about creating a system where we can grant autonomy because we have provable, built-in guardrails.
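
One way to picture the “logged and explainable” requirement is a chained audit record, sketched below with assumed field names rather than any standard schema; following the triggered_by links reconstructs the full decision path across agents.

```python
# Illustrative sketch of per-action audit records that make a chain of agents auditable.
import json
import uuid
from datetime import datetime, timezone

def audit_record(agent: str, action: str, inputs: dict, triggered_by: str | None) -> dict:
    """A single explainable entry; chaining 'triggered_by' reconstructs the full path."""
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "action": action,
        "inputs": inputs,
        "triggered_by": triggered_by,   # event_id of the upstream action that caused this one
    }

request = audit_record("orchestrator", "user_request", {"prompt": "blend churn data"}, None)
policy  = audit_record("security_agent", "policy_lookup", {"user": "emea_analyst"}, request["event_id"])
prep    = audit_record("prep_agent", "apply_row_policy", {"dataset": "sales"}, policy["event_id"])
print(json.dumps([request, policy, prep], indent=2))
```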

Platforms are becoming very effective at descriptive and diagnostic AI, explaining what happened and why. To create more value, what is the next step toward predictive and prescriptive intelligence, and what foundational data capabilities must be in place to make that leap successfully?

The industry has largely mastered explaining the past and the present. We are excellent at building dashboards that show what happened and using AI to diagnose why it happened. The true value, however, lies in the future. The next leap is into predictive and prescriptive intelligence—telling a business user what could happen next and, more importantly, recommending what actions they should prioritize to achieve the best outcome. This is the shift from being a rearview mirror to being a GPS for the business.

Making this leap successfully requires two foundational pillars. First, you need an incredibly robust, context-aware data foundation. The agents need to operate on a rich semantic and modeling layer that understands the relationships, hierarchies, and nuances of the business. Without that deep context, any prediction is just a guess. Second, the platform must be able to not just read data but also write back. A prescriptive insight is useless if it’s trapped in a dashboard. The system needs the ability to trigger actions—like automatically adjusting a marketing budget or flagging an at-risk customer for outreach—to close the loop between insight and action.
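
As a simple illustration of closing that loop, the sketch below uses hypothetical stand-ins (score_churn_risk, create_outreach_task) to show a prediction crossing a prescriptive threshold and becoming an operational task rather than staying trapped in a dashboard.

```python
# Illustrative sketch: closing the loop from prediction to action. All names are assumptions.
def score_churn_risk(customer: dict) -> float:
    """Placeholder for a real predictive model's output."""
    return 0.9 if customer["logins_last_30d"] < 2 else 0.1

def create_outreach_task(customer_id: int, reason: str) -> dict:
    """Write-back step: push the recommended action into an operational system."""
    return {"customer_id": customer_id, "task": "retention_call", "reason": reason}

customers = [{"id": 101, "logins_last_30d": 0}, {"id": 102, "logins_last_30d": 25}]
actions = [
    create_outreach_task(c["id"], f"churn risk {score_churn_risk(c):.0%}")
    for c in customers
    if score_churn_risk(c) > 0.5        # prescriptive threshold, not just a chart
]
print(actions)
```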

What is your forecast for the evolution of agentic data platforms over the next three to five years?

Over the next three to five years, I forecast a dramatic shift from human-in-the-loop systems to human-on-the-loop systems. Today, we work inside the loop, steering the AI at every step. Soon, a “hive” of coordinated agents will manage the entire data-to-insight pipeline autonomously, and our role will be to supervise, set strategic direction, and handle the exceptions. We will see the rise of the “AI workload optimizer” as a dominant platform category, where the system’s core value is not just generating charts but efficiently managing the entire data ecosystem for cost, performance, and security. The interface will become completely conversational, and the distinction between data prep, analysis, and action will blur into a single, fluid experience, making every employee a sophisticated data user.
