Is Shadow AI Putting Your Company at Risk?

As a Business Intelligence expert with a sharp eye for the stories hidden within massive datasets, Chloe Maraina has built a career on translating complex data into clear, actionable strategies. Her work at the intersection of data science and management gives her a unique perspective on one of the most pressing challenges in corporate America today: the unchecked rise of artificial intelligence in the workplace. With organizations racing to adopt AI, she focuses on the critical need for governance to manage the very real security risks that follow.

The conversation explores the paradox of declining personal AI use alongside a surge in sensitive data leaks, delving into why employees often sidestep corporate-approved tools. We’ll discuss the concrete, practical steps security leaders must take to move beyond policy and into active enforcement, and look ahead at how the nature of “shadow AI” is set to evolve in the coming years.

The most recent Netskope report presents a fascinating contradiction: while the use of personal AI accounts at work has dropped to 47%, incidents involving employees sending sensitive data to AI apps have actually doubled. Could you help us understand the critical security gaps this trend exposes, perhaps by illustrating how a single unmonitored tool could spiral into a significant breach?

It’s a really interesting paradox, isn’t it? On the surface, it looks like progress, but the doubling of sensitive data incidents tells a much more alarming story. The core issue is a complete loss of visibility. When an employee uses a personal AI account, their activity exists in a black box, completely outside the company’s security purview. Imagine a financial analyst working on a quarterly earnings report. They’re under a tight deadline and decide to use their personal ChatGPT account to help draft some commentary. They paste a draft containing non-public revenue figures and strategic forecasts directly into the chat window. That data is now stored on a consumer-grade server, linked to a personal account that might be protected by a weak, reused password. It completely bypasses every sophisticated corporate firewall and data loss prevention tool the company has invested in. The breach doesn’t have to be a loud, dramatic hack; it can be as quiet as a credential stuffing attack on that employee’s personal email, giving an attacker a key to a treasure trove of sensitive corporate data. That number—223 sensitive data incidents per company, per month—shows this isn’t a hypothetical fear; it’s a daily, relentless bleed.
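To make that visibility gap concrete, here is a minimal Python sketch of the kind of content inspection a corporate DLP layer could apply to outbound prompts, if it could see them at all. The patterns and the list of unsanctioned domains are illustrative assumptions, not any vendor's actual rules.

```python
import re

# Hypothetical, deliberately crude patterns -- a real DLP engine uses far
# richer classifiers, but even these would flag the earnings-draft scenario.
SENSITIVE_PATTERNS = {
    "revenue_figure": re.compile(r"\$\s?\d[\d,\.]*\s?(million|billion|M|B)\b", re.IGNORECASE),
    "forecast_language": re.compile(r"\b(Q[1-4]\s?20\d{2})\s+(forecast|guidance|projection)\b", re.IGNORECASE),
    "confidential_marker": re.compile(r"\b(confidential|internal only|non-public)\b", re.IGNORECASE),
}

# Consumer AI endpoints the company has no enterprise agreement with (assumed list).
UNSANCTIONED_AI_DOMAINS = {"chat.openai.com", "gemini.google.com"}

def inspect_outbound_prompt(destination: str, prompt: str) -> list[str]:
    """Return the reasons this prompt should be flagged, if any."""
    if destination not in UNSANCTIONED_AI_DOMAINS:
        return []
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

if __name__ == "__main__":
    draft = "Confidential: Q3 2024 forecast shows revenue of $48.2 million."
    print(inspect_outbound_prompt("chat.openai.com", draft))
    # -> ['revenue_figure', 'forecast_language', 'confidential_marker']
```

The point of the sketch is simply that none of these checks can run when the prompt travels through a personal account the company cannot inspect.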

The data also reveals that a small but growing number of employees, now up to 9%, are actively switching between their work and personal AI accounts. This suggests that the corporate-provided tools aren’t quite hitting the mark. From your perspective, what specific conveniences or features are employees looking for in consumer tools, and what are the very first steps an IT team should take to bridge this gap?

That 9% figure is a huge red flag because it signals a fundamental disconnect between corporate policy and user experience. Employees aren’t being malicious; they’re just trying to get their work done efficiently. They might find the corporate-approved AI tool is slower, has stricter content filters that hinder their queries, or lacks a specific plugin or integration they’ve grown accustomed to on their personal account. The consumer versions of tools like Google Gemini or ChatGPT are often perceived as faster and more flexible. It’s the path of least resistance. The first step for any IT or security team isn’t to issue another mandate or block more sites. It’s to listen. They need to engage with the workforce to understand the friction points. Why are you switching? What is the enterprise tool failing to do? Once they have that feedback, the next step is what the report calls “better provisioning”—not just handing out licenses, but actively configuring the enterprise tool to mirror the convenience of the consumer version while maintaining security. Make the sanctioned path the easiest and most effective path, and you’ll see that switching behavior decline naturally.

With the average company experiencing 223 sensitive data incidents a month through AI apps, the report rightfully emphasizes the need for strong governance. Moving beyond simply writing a policy that might sit unread, what are the crucial, step-by-step actions a security leader must implement to gain real visibility and effectively enforce these rules?

That statistic—223 incidents a month—is staggering. It translates to more than seven data leaks every single day. A policy document alone is completely powerless against that kind of volume. The first, non-negotiable step is to establish visibility. You cannot govern what you cannot see. This means deploying modern security solutions that can actually inspect cloud traffic and identify not just the use of an AI app, but whether it’s a personal or corporate instance, and what kind of data is being transmitted. The second step is to create granular, intelligent policies. A blunt “no AI” rule is unrealistic and will just drive usage further into the shadows. Instead, the policy should be nuanced: “Yes, you can use our corporate AI to summarize articles, but no, you cannot upload any document tagged as ‘confidential’ or containing customer PII.” The third and most critical step is automated enforcement. The same tool that provides visibility should be configured to act on these policies in real-time. It should automatically block an employee from uploading a sensitive file to their personal AI account and, ideally, provide a pop-up message explaining why the action was stopped and redirecting them to the approved corporate tool. This creates a feedback loop that educates users while actively preventing the 223 monthly incidents from ever happening.
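As a sketch of what a “granular, intelligent policy” could look like in practice, here is a simplified rule evaluator in Python along the lines described above. The instance types, data classifications, and coaching message are assumptions for illustration, not any specific product's policy language.

```python
from dataclasses import dataclass

@dataclass
class AIRequest:
    app: str              # e.g. "chatgpt", "copilot"
    instance: str         # "corporate" or "personal"
    action: str           # "prompt" or "file_upload"
    classification: str   # "public", "internal", "confidential"
    contains_pii: bool

COACHING_MESSAGE = (
    "This action was blocked because the content is confidential or contains "
    "customer PII. Please use the approved corporate AI tool for this task."
)

def evaluate(request: AIRequest) -> tuple[str, str | None]:
    """Return (decision, message): block risky traffic and coach the user."""
    # Personal instances never receive confidential data or PII.
    if request.instance == "personal" and (
        request.classification == "confidential" or request.contains_pii
    ):
        return "block", COACHING_MESSAGE
    # Even on the corporate instance, confidential or PII-bearing uploads are blocked.
    if request.action == "file_upload" and (
        request.classification == "confidential" or request.contains_pii
    ):
        return "block", COACHING_MESSAGE
    return "allow", None

if __name__ == "__main__":
    print(evaluate(AIRequest("chatgpt", "personal", "file_upload", "confidential", True)))
    # -> ('block', 'This action was blocked because ...')
    print(evaluate(AIRequest("copilot", "corporate", "prompt", "internal", False)))
    # -> ('allow', None)
```

The design choice worth noting is the message returned alongside the block: that is the real-time feedback loop described above, turning every prevented incident into a small piece of user education.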

What is your forecast for the evolution of shadow AI?

I believe shadow AI won’t disappear, but it will become far more sophisticated and harder to detect. The current battle is centered on unsanctioned platforms—employees using personal ChatGPT instead of the corporate Copilot. As more companies roll out enterprise-grade AI tools, that specific problem will likely shrink. However, the next frontier of shadow AI will be in unsanctioned integrations and APIs. We’ll see employees connecting their approved corporate AI to a whole ecosystem of other third-party apps, plugins, and custom scripts to automate their workflows. Each of these connections is a potential data leak, creating a complex, spiderweb-like attack surface that is much more difficult to map and control than just blocking a few URLs. The risk will shift from the shadow platform to the shadow connection. Security teams will have to evolve from being gatekeepers of applications to being monitors of a dynamic and constantly shifting flow of data between a multitude of services, both sanctioned and unsanctioned.
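If the risk does shift from shadow platforms to shadow connections, the monitoring problem starts to look like the sketch below: comparing observed egress destinations against an allowlist of sanctioned integrations. The log format and the allowlist are hypothetical; a real deployment would draw on proxy or CASB telemetry.

```python
from collections import Counter

# Integrations the security team has actually reviewed and sanctioned (assumed list).
SANCTIONED_INTEGRATIONS = {
    "api.openai.com",          # corporate AI tenant
    "graph.microsoft.com",     # approved Copilot connector
}

def flag_shadow_connections(egress_log: list[dict]) -> Counter:
    """Count calls to API hosts that are not on the sanctioned list.

    Each log entry is assumed to look like:
        {"user": "...", "host": "plugin.example-automation.io", "bytes_out": 18234}
    """
    shadow = Counter()
    for entry in egress_log:
        if entry["host"] not in SANCTIONED_INTEGRATIONS:
            shadow[entry["host"]] += 1
    return shadow

if __name__ == "__main__":
    log = [
        {"user": "a.chen", "host": "api.openai.com", "bytes_out": 512},
        {"user": "a.chen", "host": "plugin.example-automation.io", "bytes_out": 18234},
        {"user": "r.patel", "host": "plugin.example-automation.io", "bytes_out": 9021},
    ]
    for host, count in flag_shadow_connections(log).most_common():
        print(f"unsanctioned connection: {host} ({count} calls)")
```

Mapping that spiderweb of connections, rather than blocking a handful of URLs, is the work security teams will be doing in the next phase of shadow AI.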
