The landscape of enterprise technology is currently defined by a paradox: while AI development moves at a breakneck pace, the organizational structures required to support it often struggle to keep up. Research shows that 56% of CEOs have yet to see tangible revenue or cost benefits from AI, highlighting a significant gap between technical potential and business reality. As companies move beyond the initial hype, the focus has shifted from the “how” of technology to the “why” of change management. This conversation explores the strategies of industry leaders who are successfully navigating the friction between rapid prototyping and rigorous enterprise governance.
AI allows for rapid prototyping in weeks, yet the full vetting process for security and cost controls often takes months. What are the specific risks of bypassing these established enterprise structures, and how can leaders streamline governance without stifling the creative speed of their development teams?
The primary risk of bypassing established structures is the creation of “fiscally irresponsible” innovation that lacks long-term viability. For example, at Insight Enterprises, we developed a minimum viable product for a website AI agent in just three weeks, but the full vetting process for security and cost controls took three months. If we had ignored those 50 or 60 years of foundational IT principles, we would have faced severe risks around data privacy and unmanaged cloud spend. To streamline this without killing creativity, leaders must remind developers that teams like FinOps and security aren’t there to prevent innovation, but to ensure it can scale safely. We have to keep rigor at the front end of the application development cycle, ensuring that speed never comes at the expense of the verification required for enterprise-grade software.
Employees often range from over-enthusiastic adopters to skeptical opponents who fear for their roles. How do you tailor training for the “silent middle” that wants to help but lacks a manual, and what specific metrics indicate that a training program is actually moving the needle on productivity?
To reach the “silent middle”—the majority of our 14,000 employees who were curious but confused—we launched a platform called Flight Academy to lower the barrier to entry. This program starts with the absolute basics, such as “What is a prompt?”, and uses gamification to encourage individuals and teams to compete as they link AI outputs to their actual daily tasks. We move the needle by watching how skeptical employees, who initially wanted to “burn it with fire,” transition into realizing the tool is necessary for their continued relevance. Success is measured by the progression of users through these deeper prompting levels and the organic adoption of AI to remove “toil” or repetitive tasks. When employees start reporting that they are leveraging AI to enhance their specific outcomes rather than just playing with a chatbot, we know the training is working.
Organizations are often overwhelmed by a flood of AI ideas that lack a clear business case. What objective criteria should be used to filter these into a formal onboarding process, and how do you determine when a high-potential prototype simply isn’t worth the long-term financial investment?
Filtering “a zillion good ideas” requires an objective onboarding engine that can strip away the hype to find the ROI. We utilize a platform called Insight Prism that runs ideas through an engine to generate a specific business case, calculating exactly how much revenue an idea will generate or how much money it will save. If the numbers don’t provide a clear business justification, we simply do not invest, no matter how technically impressive the prototype might be. This prevents the “starting gate” stall where projects remain stuck in perpetual pilot mode. By forcing every AI concept to survive a financial and operational audit early on, we ensure that our resources are only spent on tools that deliver a measurable impact on the bottom line.
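As a rough illustration of that financial gate, the filter can be thought of as a simple ROI threshold: projected benefit (revenue plus savings) must clearly exceed projected run cost before an idea advances. The sketch below is hypothetical; the class fields, threshold, and idea names are illustrative assumptions, not the actual Insight Prism schema.

```python
from dataclasses import dataclass

@dataclass
class IdeaCase:
    """Hypothetical business-case record; fields are illustrative,
    not the real Insight Prism data model."""
    name: str
    annual_revenue_gain: float   # projected new revenue, dollars/year
    annual_cost_savings: float   # projected savings, dollars/year
    annual_run_cost: float       # cloud, licensing, and support cost

def passes_gate(idea: IdeaCase, min_roi: float = 1.5) -> bool:
    """Invest only when projected benefit clearly exceeds run cost."""
    benefit = idea.annual_revenue_gain + idea.annual_cost_savings
    if idea.annual_run_cost <= 0:
        return benefit > 0
    return benefit / idea.annual_run_cost >= min_roi

# Illustrative pipeline: only ideas that survive the audit get funded.
ideas = [
    IdeaCase("chat summarizer", 0, 40_000, 50_000),
    IdeaCase("quote-to-order agent", 250_000, 80_000, 100_000),
]
funded = [i.name for i in ideas if passes_gate(i)]
```

The point of the sketch is the ordering: the financial check runs before any engineering investment, which is what prevents the perpetual-pilot stall described above.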
Companies frequently face a choice between decentralized “citizen development” and a centralized engineering approach. How do you decide which use cases are “core DNA” requiring deep monitoring, and what practical steps prevent a decentralized model from turning into unmanaged shadow AI?
The decision depends on where the use case sits in what we call the “adoption pyramid.” In a decentralized model, you let users build and experiment with niche tools on a hosted platform, but the moment you see a specific application rising fast in popularity, you must evaluate if it is “core DNA.” If a tool is essential to the organization’s unique value proposition, the central engineering team takes over to provide end-to-end control and deep monitoring. To prevent shadow AI, we use gamification and rating systems, similar to GitHub stars, to see what is being built in the wild. This allows us to “bubble up” the most useful cases into the managed enterprise environment before they become a security liability.
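The “bubble up” mechanism can be pictured as a threshold rule over adoption signals: sandbox tools cross into the managed tier, and eventually the core tier, as usage and ratings grow. The tier names, fields, and cutoffs below are illustrative assumptions, not an actual governance policy.

```python
def promotion_tier(active_users: int, stars: int) -> str:
    """Map a sandbox tool's adoption signals to a governance tier.
    Thresholds are hypothetical examples of a 'bubble up' policy."""
    if active_users >= 500 or stars >= 100:
        return "core"     # central engineering: end-to-end monitoring
    if active_users >= 50:
        return "managed"  # hosted platform with guardrails
    return "sandbox"      # free experimentation, niche tools

# Illustrative scan of what is "being built in the wild".
tools = {
    "workflow-agent": promotion_tier(1300, 40),
    "pdf-tagger": promotion_tier(12, 3),
}
```

The design choice worth noting is that promotion is driven by observed signals (usage, star-style ratings) rather than by up-front approval, which is what keeps the decentralized model from hardening into shadow AI.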
Internal hackathons often reveal organic needs for specific automation tools that gain hundreds of users overnight. When a niche tool proves its value this quickly, what is the step-by-step process for moving it into a licensed, enterprise-supported environment without killing its original agility?
The process must be driven by observed user demand rather than top-down mandates. At Telus, a hackathon revealed that of roughly 100 ideas, about 15 teams had independently built on the n8n automation platform, which led to 1,300 users in just three days. Once we saw this organic “off the rails” growth, our step-by-step approach was to immediately validate the need, secure enterprise licensing, and then integrate it into our internal developer platform. By hosting these tools within a controlled environment like a Backstage-based IDP, we provide the necessary guardrails and security while keeping the “hackathon mentality” alive. This allows the platform team to be seen as an enabler of speed rather than a bottleneck, because we are supporting the tools the employees have already proven they need.
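In a Backstage-based IDP, the “integrate it” step typically means registering the tool in the software catalog via a descriptor file. The fragment below is a hypothetical sketch of such a registration; the component name, owner, and tags are invented for illustration, though the `apiVersion`/`kind`/`metadata`/`spec` structure follows Backstage's standard catalog format.

```yaml
# Hypothetical catalog-info.yaml for a hackathon-born n8n workflow
# being onboarded into a Backstage-based IDP. Names are illustrative.
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: invoice-automation-workflow
  description: n8n workflow promoted from the internal hackathon
  tags:
    - n8n
    - automation
spec:
  type: service
  lifecycle: experimental   # keeps the hackathon agility visible
  owner: platform-team      # guardrails and support now centralized
```

Marking the lifecycle as `experimental` while assigning a real owner is one way to add enterprise support without signaling that the tool has been frozen.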
AI is increasingly handling entry-level tasks, forcing junior staff to focus on the “why” rather than the “how” of building systems. How must leadership redefine career paths for these workers, and what new skills are required to ensure they don’t lose the foundational craft of their profession?
This is a profound shift because AI is now performing the simple tasks that historically served as the “learning grounds” for junior developers. We have to mentor these workers to move past the “how” of syntax and start mastering the “why” of system architecture and business logic much earlier in their careers. The new required skills are less about rote execution and more about critical thinking, prompt engineering, and holistic system design. As leaders, we must be mindful that we are asking them to skip the traditional apprenticeship of “toil” and move straight into high-level oversight. This requires a more robust educational foundation within the company to ensure they still understand the underlying mechanics of what the AI is generating for them.
Some veteran staff find themselves shifting from technical creators to roles that resemble product management, which can lead to friction. How can managers repurpose that deep technical expertise effectively, and what strategies help senior employees embrace a shift toward high-level oversight and strategy?
Friction occurs when a senior developer who loves the “craft” of coding feels forced into a bureaucratic role. To repurpose this expertise, we must show them that their deep technical knowledge is the only thing that allows for effective AI governance and high-level strategy. If a pure product management role doesn’t suit them, we look for ways they can lead the technical “verification” and security vetting that AI-generated code desperately needs. The strategy is to help them through the “unknown” of this transition by providing education that pays dividends later. We emphasize that they aren’t losing their roots; rather, they are applying decades of IT wisdom to ensure this new, faster-moving technology doesn’t break the fundamentals of the business.
What is your forecast for enterprise AI?
My forecast is that the industry will move away from the current “hype” phase into a period of deep operational refinement where the winners are those who master the “people” side of the equation. We are going to see a massive consolidation of AI tools, moving from “a zillion ideas” to a few core, high-impact agents that are deeply integrated into the “core DNA” of the business. The “silent middle” of the workforce will become the primary drivers of productivity as AI training becomes as standard as basic computer literacy. Ultimately, the successful enterprise will be one that treats AI not as a magic wand, but as a sophisticated tool for removing toil, requiring the same rigorous governance and human oversight that has guided IT for the last sixty years.
