Scaling Agentic AI Through Strategic Work Redesign

In the rapidly evolving landscape of enterprise technology, the leap from experimental artificial intelligence to full-scale operational integration remains one of the most daunting hurdles for modern leadership. Chloe Maraina brings a unique perspective to this challenge, combining her deep passion for big data visualization with a visionary approach to data management and integration. As a seasoned Business Intelligence expert, Chloe specializes in transforming raw numbers into compelling narratives that drive business strategy. Today, we sit down with her to discuss the current state of “agentification”—the transition toward autonomous AI agents—and how organizations can navigate the messy middle ground between small-scale pilots and enterprise-wide ROI.

Our conversation explores the critical shift from simply training employees in AI fluency to fundamentally redesigning the nature of work itself. We delve into the frustrations of “pilot purgatory,” where only a small fraction of AI initiatives successfully scale, and examine why technical decisions made during the experimentation phase often become roadblocks later on. Chloe sheds light on the necessity of disciplined governance, the nuances of industry-specific AI deployment, and the importance of visualizing future workflows before a single line of code is written. By focusing on the balance between bottom-up innovation and top-down strategy, she provides a roadmap for leaders looking to move beyond productivity gains toward true business transformation.

Many AI pilots solve narrow productivity issues but fail to scale across the entire enterprise. What specific technical or data-related criteria should be evaluated during the initial pilot phase to ensure viability at scale, and what metrics indicate it is time to move beyond the experimentation stage?

The reality for many organizations right now is a sense of deep frustration because the percentage of pilots they have been able to take to scale is often less than 20%. This “pilot purgatory” happens because many projects are constructed to solve a very specific, narrow productivity problem in isolation rather than being framed against the broader enterprise infrastructure. When you are in the pilot phase, you must evaluate whether your data foundations and technical choices—which might seem manageable in a small room—can actually survive the weight of the entire company. A data-quality issue that is just a minor annoyance in a pilot of ten people becomes a catastrophic failure when you try to scale it to ten thousand. You have to address the scale questions as part of the pilot, looking at things like multi-agent orchestration and whether your current security protocols can handle agents that need to collaborate across different departments. A clear sign that it is time to move beyond experimentation is when you have established a disciplined, evaluative process where the proposed value is no longer just a guess but a calculated estimate grounded in consistent standards.
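The disciplined, evaluative process Chloe describes can be sketched as a simple scoring rubric. This is a hypothetical illustration: the criterion names, the 1-5 scale, and the readiness thresholds are assumptions for the sketch, not a standard she prescribes.

```python
# Hypothetical scale-readiness rubric for an AI pilot.
# Criteria and thresholds are illustrative assumptions, not a standard.

CRITERIA = [
    "data_quality",               # survives enterprise data volume and variety
    "security_protocols",         # handles cross-department agent collaboration
    "multi_agent_orchestration",  # agents coordinate beyond the pilot team
    "integration",                # fits the broader enterprise infrastructure
    "measured_value",             # value is calculated, not guessed
]

def scale_readiness(scores: dict[str, int]) -> tuple[float, bool]:
    """Average 1-5 scores across criteria and flag readiness to scale.

    A pilot is 'ready' only if every criterion scores at least 3 and the
    average is 4 or higher -- one weak foundation (e.g. data quality)
    blocks scaling regardless of strengths elsewhere.
    """
    values = [scores[c] for c in CRITERIA]
    avg = sum(values) / len(values)
    ready = min(values) >= 3 and avg >= 4.0
    return avg, ready

pilot = {
    "data_quality": 2,   # minor annoyance at ten users, catastrophic at ten thousand
    "security_protocols": 4,
    "multi_agent_orchestration": 3,
    "integration": 4,
    "measured_value": 5,
}
avg, ready = scale_readiness(pilot)
print(f"average={avg:.1f}, ready_to_scale={ready}")
```

The hard floor on every criterion encodes her point that a single weak foundation, not the overall average, is what breaks pilots at enterprise scale.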

Research suggests that training employees for AI fluency does not capture full value without redesigning actual roles. When a position is reduced from seventeen tasks to ten through automation, how should organizations reassign that freed-up time, and what steps ensure human judgment remains the primary focus?

Redesigning work is perhaps the most underrated aspect of AI adoption, but it is where the real value is hidden. If you simply apply AI to the work people already do, you make them more efficient, but you haven’t actually changed the outcome or the strategic value of their time. Consider a role that currently involves seventeen distinct tasks; through agentic AI, we might find that only ten tasks remain for the human because the others have been automated or streamlined. This is a moment of significant opportunity where you can give that person different, higher-level work that focuses on cognitive enhancement rather than just speed. The goal is to reach a state of cognitive parity where the employee isn’t just doing the work faster but is producing a product with greater quality and robustness. To keep human judgment at the center, the role must shift toward “managing the agent,” where the human oversees the quality and ethics of the output while the AI handles the repeatable, data-heavy functions.

As agents transition from assisted tools to autonomous systems that execute tasks without seeking permission, security risks increase. What specific governance protocols must be established to manage these independent actions, and how can teams maintain high data quality when deploying multi-agent orchestration across different departments?

The shift toward autonomy is a massive technical and cultural leap because an autonomous agent, by definition, is not going to seek permission before it executes a task. This independence necessitates a much higher level of security and governance than the “assisted” tools we have used in the past. Organizations need to move away from the “rogue IT” mentality, where employees are downloading unauthorized apps, and instead implement a top-down enterprise approach with very strict protocols. Governance must be established at the start of the journey, focusing on data quality as a foundational pillar, because these agents rely on that data to make decisions in real-time. If you have agents from the marketing department trying to collaborate with agents in the supply chain, they must be operating on a unified “data truth” or the orchestration will fail. You maintain quality by ensuring that every stakeholder, no matter their department, follows a consistent and disciplined evaluative process for how these agents interact with the company’s core information.

Agentification impacts industries differently, ranging from supply chain management in hardware to creative storytelling in entertainment. How does a strategy for internal efficiency differ from one focused on transforming core products, and what unique challenges do leaders face when agentic AI begins to impact creative output?

It is fascinating to see how agentification creates a fork in the road depending on the industry; for some, it is an internal shield, while for others, it is a sword for growth. In the hardware industry, the focus is often on the back-office and the physical supply chain, where agents create efficiency by predicting delays or optimizing logistics. However, in the entertainment industry, AI is touching the very heart of the business—the creativity of storytelling—which brings a much more emotional and existential set of challenges for leadership. When AI begins to impact creative output, the strategy must shift from simple cost containment to value creation, asking how these tools can change the actual experience of the product for the consumer. Leaders in these creative fields face the unique challenge of maintaining the “soul” of their product while utilizing agents to handle the massive amounts of research and data analysis that inform modern storytelling. The key is to be clear about your vision from day one: are you trying to mitigate costs, or are you trying to fundamentally change what you sell to the world?

Organizations often lack a clear roadmap, leading to “rogue IT” scenarios where employees use unauthorized apps. How can leadership reconcile top-down tool mandates with organic, bottom-up innovation, and what specific goal-setting process ensures that AI investments prioritize long-term growth over simple cost mitigation?

We often see a tension between the organic way pilots grow—where people just grab an LLM to help with their daily duties—and the need for a coherent, top-down corporate strategy. According to recent surveys of over 3,200 business and IT leaders, a major challenge is that more than half of companies are still in the early stages or have no strategy at all, which leaves them without a roadmap to judge progress. To reconcile this, leadership needs to implement a disciplined goal-setting process that asks whether the investment is for “now,” “next,” or the “future.” You have to be incredibly clear about whether your primary goal is cost containment or if you are looking to create value through new products and services. By establishing a periodic evaluation process, you can harvest the best bottom-up ideas from your employees while ensuring they align with the standardized toolsets and security protocols defined at the enterprise level. This prevents the organization from splintering into different directions where everyone is calculating benefits differently and executing against a different vision.

High-performing organizations often visualize future workflows and operating models before the first line of code is ever written. Can you walk through the step-by-step process of mapping out a future-state workflow, and how does this foresight prevent the common pitfalls associated with “pilot purgatory”?

The most successful companies are those that have the discipline to propose their future workflow and see it clearly before they even begin the technical development. This process starts with a rigorous mapping of current tasks—literally listing out those seventeen tasks we mentioned earlier—and then identifying which ones are ripe for agentification based on repeatable patterns. Once you have that “future state” map, you can see exactly how the operating model needs to change and, more importantly, how the workforce needs to be reskilled to fill the new gaps. This foresight is the ultimate antidote to pilot purgatory because it forces you to solve the scale and integration questions before you are already buried in the project. When you know what the future looks like, you can make smarter choices about which LLMs or AI tools to utilize, ensuring they fit the long-term roadmap rather than just providing a temporary fix. It allows leaders to move forward with a sense of confidence, knowing that the “change management” aspect—the training and role shifts—is already accounted for in the plan.
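The mapping exercise described above can be sketched in code. This is a hypothetical illustration: the task names, the `Task` fields, and the "repeatable and not judgment-heavy" heuristic are assumptions made for the sketch, not a method from the interview.

```python
# Hypothetical future-state mapping: classify each task in a role as
# staying with the human or moving to an agent. Task data is illustrative.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    repeatable: bool      # follows a predictable, data-heavy pattern
    needs_judgment: bool  # requires human quality or ethics oversight

def map_future_state(tasks: list[Task]) -> dict[str, list[str]]:
    """Split a role's task list into agent-owned and human-owned work.

    Repeatable tasks that don't hinge on human judgment are candidates
    for agentification; everything else stays with the person, whose
    role shifts toward managing the agent's output.
    """
    plan: dict[str, list[str]] = {"agent": [], "human": []}
    for t in tasks:
        owner = "agent" if t.repeatable and not t.needs_judgment else "human"
        plan[owner].append(t.name)
    return plan

role = [
    Task("compile weekly report", repeatable=True, needs_judgment=False),
    Task("reconcile data sources", repeatable=True, needs_judgment=False),
    Task("approve vendor exceptions", repeatable=False, needs_judgment=True),
    Task("review agent output quality", repeatable=True, needs_judgment=True),
]
plan = map_future_state(role)
print(f"{len(plan['agent'])} tasks to agents, {len(plan['human'])} stay human")
```

Listing the current tasks explicitly, as in `role` above, is the first step she names; the resulting `plan` is the "future state" map from which reskilling gaps and operating-model changes can be read off before any development begins.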

What is your forecast for agentic AI?

My forecast is that we are moving toward a world where the distinction between “using a tool” and “managing a partner” will completely blur. I believe we will see a rapid shift where the “assisted” AI we see today—the chatbots that help us write emails or summarize meetings—will be replaced by multi-agent systems that autonomously handle entire business cycles from end to end. We will stop measuring success by how many employees are “AI fluent” and start measuring it by how many roles have been successfully redesigned to leverage cognitive enhancement. However, this transition will create a massive divide between companies that have built a disciplined data foundation and those that haven’t; the latter will find themselves stuck in a permanent state of experimentation while their competitors scale. Ultimately, the winners will be the organizations that treat agentification not as a technology project, but as a fundamental evolution of their corporate strategy and human potential.
