Atlassian Optimizes AI Efficiency to Reduce Token Burn Costs

Navigating the complex landscape of enterprise AI requires more than just deploying the latest large language models; it demands a surgical approach to data management and a keen eye on the bottom line. As organizations transition from simple chatbots to autonomous agentic workflows, the hidden costs of “token burn” and data egress are becoming the new battleground for CTOs and cloud architects. In this discussion, we explore how shifting from raw data ingestion to structured relationship graphs can stabilize operational budgets and why the industry is reaching a tipping point regarding the cost of AI self-supervision.

AI agents often pull excessive raw data to fill context windows, leading to massive token burn. How can using structured relationship graphs instead of raw data dumps change the financial ROI for enterprise AI, and what technical hurdles do teams face when shifting to these more refined protocols?

When agents operate without a map, they resort to “stuffing” context windows with every bit of raw data they can find, which is essentially like trying to find a needle in a haystack by buying more hay. By utilizing structured relationship graphs, like the Teamwork Graph, agents can query for specific objects and their connections rather than performing a blind data dump. This shift is transformative for ROI, as we’ve seen it produce 44% more accurate search results while simultaneously slashing token costs by up to 48%. The primary technical hurdle is moving away from “noisy” protocols that treat all data as equal; teams must now learn to broker access so that agents understand the semantic layer of an organization. It’s a shift from brute-force reasoning to precision automation, and while the initial setup requires more architectural discipline, the long-term savings prevent that nightmare scenario of waking up to a million-dollar bill for unproductive compute.
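The idea of querying a relationship graph for a specific object and its connections, rather than dumping the whole corpus into the context window, can be illustrated with a minimal sketch. The `TeamworkGraph` class, its methods, and the 4-characters-per-token heuristic below are illustrative assumptions, not Atlassian's actual API:

```python
# Sketch: scoped graph query vs. raw data dump.
# All names here (TeamworkGraph, neighborhood) are hypothetical.

class TeamworkGraph:
    """Toy adjacency-list graph: node payloads plus typed relations."""
    def __init__(self):
        self.nodes = {}   # node_id -> payload text
        self.edges = {}   # node_id -> list of (relation, node_id)

    def add_node(self, node_id, payload):
        self.nodes[node_id] = payload
        self.edges.setdefault(node_id, [])

    def add_edge(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def neighborhood(self, node_id):
        """Return only the node and its direct relations -- the context
        an agent actually needs, instead of the whole corpus."""
        parts = [self.nodes[node_id]]
        for relation, dst in self.edges.get(node_id, []):
            parts.append(f"[{relation}] {self.nodes[dst]}")
        return "\n".join(parts)

def rough_tokens(text):
    # Crude heuristic: roughly 4 characters per token.
    return len(text) // 4

g = TeamworkGraph()
g.add_node("PROJ-1", "Project: migrate billing service to new region")
g.add_node("TICKET-9", "Ticket: billing latency spike after deploy")
g.add_node("DOC-3", "Runbook: billing service rollback procedure")
for i in range(200):  # unrelated noise a "data dump" agent would ingest
    g.add_node(f"NOISE-{i}", f"Unrelated page {i} " * 10)
g.add_edge("TICKET-9", "belongs_to", "PROJ-1")
g.add_edge("TICKET-9", "documented_by", "DOC-3")

dump = "\n".join(g.nodes.values())      # "buying more hay"
scoped = g.neighborhood("TICKET-9")     # precision query
print(rough_tokens(dump), rough_tokens(scoped))
```

Even in this toy example, the scoped query carries orders of magnitude fewer tokens than the dump while retaining every fact relevant to the ticket.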

Many organizations find that using supervisor agents to evaluate the work of other autonomous agents can effectively double operational costs. What strategies can companies use to balance output quality with these compounding expenses, and how do you determine when the accuracy justifies the extra token consumption?

The trend of “agents watching agents” is a double-edged sword: it drives quality to the necessary enterprise standard, but it also makes the entire operation twice as expensive as initially projected. To balance this, companies should move away from flat consumption-based pricing and toward models that tie costs to specific outcomes, though the industry is still struggling to perfect this transparency. You justify the extra token consumption only when the task has a high blast radius—such as an autonomous Level 1 service desk agent interacting with customers—where the cost of a mistake far outweighs the cost of a second agent’s review. For internal, low-risk tasks, organizations should lean on “lighter-weight” requests and bypass the heavy supervisor layer to keep overhead manageable. It’s about creating a tiered governance structure where the level of supervision matches the criticality of the workflow, ensuring you aren’t spending premium tokens on trivial administrative reasoning.
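A tiered governance structure like the one described can be sketched as a simple routing policy: only high-blast-radius tasks pay for a full supervisor pass, medium-risk tasks get sampled review, and low-risk internal tasks skip it. The tier names, sampling rate, and relative costs below are illustrative assumptions, not vendor pricing:

```python
# Sketch of tiered supervision by blast radius (illustrative costs).

BASE_COST = 1.0        # relative token cost of one worker-agent run
SUPERVISOR_COST = 1.0  # full review roughly doubles the spend

def route(task):
    """Return (pipeline, relative_cost) for a task dict with a
    'blast_radius' of 'low', 'medium', or 'high'."""
    radius = task["blast_radius"]
    if radius == "high":   # e.g. customer-facing Level 1 service desk
        return (["worker", "supervisor"], BASE_COST + SUPERVISOR_COST)
    if radius == "medium": # spot-check only a sample of outputs
        return (["worker", "sampled-supervisor"],
                BASE_COST + 0.2 * SUPERVISOR_COST)
    return (["worker"], BASE_COST)  # low-risk internal task, no review

tasks = [
    {"name": "reset customer password", "blast_radius": "high"},
    {"name": "summarize standup notes", "blast_radius": "low"},
]
for t in tasks:
    print(t["name"], route(t))
```

The point of the sketch is the shape of the decision, not the numbers: supervision cost scales with the consequence of a mistake, so the doubling only applies where it is actually justified.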

Platforms are beginning to separate agent-specific data requests from standard per-gigabyte egress charges to manage overhead. How should infrastructure leaders renegotiate data terms with cloud vendors to support agentic traffic, and what risks arise when multiple third-party agents attempt to ingest an entire internal knowledge graph?

Infrastructure leaders need to look closely at innovations like ServiceNow’s Access Fabric, which differentiates between high-volume data migration and the lighter, frequent “pings” of AI agents. When renegotiating with cloud vendors, the goal should be to isolate agentic traffic from traditional per-gigabyte egress fees to avoid being penalized for the very connectivity that makes AI useful. There is a palpable tension here because if a third-party agent attempts to ingest an entire knowledge graph to “learn” the company, it can trigger massive, unexpected costs and security red flags. The risk isn’t just financial; it’s a matter of data sovereignty, as allowing external agents to download a full map of internal inferred relationships can lead to a loss of proprietary intelligence. Leaders must put a “price tag” on these massive ingestions and implement guardrails that prevent third-party startups from scraping the entire organizational context under the guise of “integration.”
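The guardrail distinguishing light, frequent agent “pings” from a full knowledge-graph scrape can be sketched as a simple access broker with per-request and per-day quotas. The `AccessBroker` class and its thresholds are hypothetical, and this is not ServiceNow’s Access Fabric API:

```python
# Sketch: guardrails that allow scoped agent pings but block bulk
# graph ingestion. Class name and limits are assumptions.
from collections import defaultdict

class AccessBroker:
    def __init__(self, max_nodes_per_request=50, max_nodes_per_day=5000):
        self.max_per_request = max_nodes_per_request
        self.max_per_day = max_nodes_per_day
        self.usage = defaultdict(int)  # agent_id -> nodes served today

    def request(self, agent_id, node_ids):
        """Serve small, scoped requests; refuse anything that looks like
        an attempt to download the whole organizational graph."""
        if len(node_ids) > self.max_per_request:
            return {"allowed": False, "reason": "bulk ingestion blocked"}
        if self.usage[agent_id] + len(node_ids) > self.max_per_day:
            return {"allowed": False, "reason": "daily quota exceeded"}
        self.usage[agent_id] += len(node_ids)
        return {"allowed": True, "served": len(node_ids)}

broker = AccessBroker()
print(broker.request("vendor-bot", [f"n{i}" for i in range(10)]))
print(broker.request("vendor-bot", [f"n{i}" for i in range(500)]))
```

In practice the “price tag” on a massive ingestion could be enforced here too, by metering the refused request into a separate billing tier rather than rejecting it outright.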

Effective agent orchestration requires a deep understanding of ground truth through asset discovery and configuration management. How do you successfully integrate data from various third-party discovery tools into a unified teamwork graph, and why is this reconciliation process critical for preventing agents from acting on outdated info?

Integrating data from third-party tools like Lansweeper or Flexera into a unified graph is the only way to move from “hallucination-prone” AI to “grounded” enterprise agents. This reconciliation process is critical because without a single source of truth—essentially a modern CMDB—an agent might try to resolve an incident using a server that was decommissioned three months ago or apply a patch to a legacy system that no longer exists. By using specialized tools for data cleansing and analysis, such as those gained through the AirTrack acquisition, organizations can ingest and scrub data from multiple sources to ensure the “intelligence layer” is accurate. This “ground truth” acts as a tether for the AI; it ensures that when an agent acts, it is doing so based on the current state of the infrastructure, not a stale snapshot. Without this rigorous discovery and reconciliation, you aren’t building an automated workforce—you’re building an expensive engine for generating errors at scale.
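The reconciliation step can be illustrated with a minimal last-write-wins merge: each discovery source reports asset records, and the unified view keeps the most recently observed state so an agent never acts on a stale snapshot. The record fields and the last-write-wins rule are simplifying assumptions; real CMDB reconciliation weighs source reliability as well:

```python
# Sketch of reconciling asset records from multiple discovery tools
# into one "ground truth" view (illustrative fields and merge rule).
from datetime import date

records = [
    {"source": "lansweeper", "asset": "srv-billing-01",
     "status": "active", "seen": date(2025, 5, 1)},
    {"source": "flexera", "asset": "srv-billing-01",
     "status": "decommissioned", "seen": date(2025, 8, 1)},
    {"source": "lansweeper", "asset": "srv-web-02",
     "status": "active", "seen": date(2025, 7, 15)},
]

def reconcile(records):
    """Keep, per asset, the most recently observed record -- so an agent
    never tries to patch a server decommissioned months ago."""
    truth = {}
    for rec in records:
        current = truth.get(rec["asset"])
        if current is None or rec["seen"] > current["seen"]:
            truth[rec["asset"]] = rec
    return truth

truth = reconcile(records)
print(truth["srv-billing-01"]["status"])
```

Here the two sources disagree about `srv-billing-01`, and the fresher record wins; an agent consulting this view would correctly skip the decommissioned server.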

The shift toward no-code agent builders allows non-technical users to automate complex, multi-step tasks across different software suites. What governance controls must be in place before deploying these agents at scale, and how do you ensure they maintain organizational context across different platforms?

Before unleashing no-code agents built in tools like Rovo Studio, organizations must establish strict permissions that mirror their existing security posture, ensuring a marketing bot can’t accidentally wander into sensitive HR records. Governance isn’t just about restriction; it’s about providing a “contextual sandbox” where the agent understands the relationships between projects, teams, and goals across different platforms like Jira or Salesforce. We must use a common model, such as the Model Context Protocol (MCP), to broker how these agents talk to each other while maintaining a unified audit log to see exactly who authorized which action. To maintain context across suites, the agents should rely on a shared intelligence layer that infers relationships, so a task started in one tool carries its “why” and “how” into the next. This prevents the “fragmented agent” problem, where a bot becomes useless the moment it leaves its native platform, losing the thread of the multi-step task it was assigned to complete.
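The permission-brokering idea, where an agent’s scopes mirror the organization’s security posture and every action lands in a unified audit log, can be sketched in a few lines. The scope names, the `authorize` function, and the in-memory log are all illustrative assumptions, not Rovo Studio or MCP internals:

```python
# Sketch: scope-checked, audit-logged authorization for no-code agents.
# Scope names and the broker itself are hypothetical.

AUDIT_LOG = []

AGENT_SCOPES = {
    "marketing-bot": {"marketing", "public-docs"},
    "hr-assistant": {"hr", "public-docs"},
}

def authorize(agent_id, resource_domain, action):
    """Allow an action only if the agent's scopes cover the resource's
    domain, and record every attempt for later review."""
    allowed = resource_domain in AGENT_SCOPES.get(agent_id, set())
    AUDIT_LOG.append({"agent": agent_id, "domain": resource_domain,
                      "action": action, "allowed": allowed})
    return allowed

print(authorize("marketing-bot", "marketing", "read"))  # in scope
print(authorize("marketing-bot", "hr", "read"))         # blocked
```

Because denied attempts are logged alongside approvals, the audit trail answers the governance question the text raises: exactly who authorized which action, and which agents are probing beyond their sandbox.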

What is your forecast for cross-domain AI agent orchestration?

The future of orchestration lies in being completely agnostic to where an agent or model originates, allowing a “fleet” of specialized agents from various vendors to work together as a single, cohesive team. While this level of sophisticated deployment is currently beyond the reach of most enterprises—who are largely focused on simpler, single-platform workflows like those in Salesforce—the next eighteen months will see a rapid shift toward these cross-domain ecosystems. We will move away from the fear of the “$4 million token burn” as tools for monitoring and reducing noisy data exchange become standard in every AI Control Tower. Eventually, the measure of a successful orchestration strategy won’t be how many agents you have, but how efficiently they navigate the teamwork graph to deliver outcomes without human intervention. The winners in this space will be the companies that treat their organizational context as a living asset, allowing agents to dip in and out of data streams with surgical precision and minimal financial friction.
