Google Buys Intersect to Control Its AI Power Supply

In a world where artificial intelligence is demanding unprecedented levels of computational power, the physical infrastructure that supports it is straining at the seams. At the heart of this challenge is a growing disconnect between the speed of technology and the pace of the energy grid. To unravel this complex dynamic, we sat down with Chloe Maraina, a business intelligence expert whose work focuses on the intersection of big data, infrastructure, and future-proofing technological growth. We explored the strategic chess moves being made by tech giants to secure their energy future, the new risks of “stranded capacity,” and what this power-centric landscape means for enterprise leaders who must now think as much about megawatts as they do about microchips.

The article highlights Google’s acquisition of Intersect to meet AI demand from services like Gemini. How does Intersect’s model of co-locating power generation with data centers solve the problem of grid timelines being slower than compute deployment? Please walk me through the key differences from the traditional utility model.

It’s a fundamental shift in thinking, moving from being a consumer to a co-creator of the energy ecosystem. The traditional model is reactive; you find a piece of land, you build a data center, and then you get in line and wait, sometimes for years, for the local utility to navigate interconnection queues, substation upgrades, and permitting cycles. The speed of AI development has completely broken that model. What Intersect does is flip the script entirely. Instead of waiting for the grid to come to you, you bring the generation directly to the load. By building dedicated gas and renewable power sources right alongside the data center, you create a symbiotic relationship where both are orchestrated together. It’s the difference between waiting for a train that might be delayed and building your own private track where compute and power arrive at the station at precisely the same time. This gives a company like Google immense sequencing control and, most importantly, time certainty in an era where every second counts.

Google is also pursuing geothermal with NV Energy and CO2 batteries with Energy Dome. How do these diverse investments create a more resilient energy strategy than simply relying on utilities? Can you share an anecdote or metric that illustrates the “stranded capacity” risk Google is trying to avoid?

This is all about de-risking and avoiding a single point of dependency. Relying solely on one utility or one energy source is fragile. Google’s strategy, by embracing everything from a 115 MW geothermal project to innovative CO2 batteries, is like building a financial portfolio; you diversify to insulate yourself from volatility. When one technology pathway stalls, perhaps due to regulatory hurdles or supply chain issues, another can carry the load. The “stranded capacity” risk is the nightmare scenario that keeps infrastructure planners up at night. Imagine spending billions to construct a state-of-the-art facility, perfectly designed and ready for racks of servers, only to have it sit half-empty because the promised power delivery is late. You have this gleaming, enormously expensive asset that is severely underutilized, its potential just stranded. It’s not a hypothetical; we’re seeing builders complete facilities on schedule, yet they run under capacity for months because the power simply isn’t there. It’s an incredibly inefficient use of capital, and in the race for AI dominance, it’s a fatal flaw.

The text advises that “time to power” is now a crucial factor in site selection. What practical, step-by-step process should an enterprise CIO follow to conduct energy due diligence for a new facility, ensuring that power delivery timelines align with the buildout and avoiding unexpected bottlenecks?

The first step for any CIO is a mental one: you must elevate energy from an operational afterthought to a core strategic pillar of your technology decisions. The old model of choosing a site based on network latency and real estate cost is dangerously outdated. Your due diligence process must now start with a deep “time to power” analysis. This means engaging with energy experts early, not after the site is chosen. You need to investigate the health and capacity of the local grid, understand the typical duration of permitting and environmental approvals in that jurisdiction, and map out the entire energy supply chain. You have to assume that supply will be constrained and that energy contracting will become a much longer and more complex process, even as modular data center designs allow you to build the physical structure faster than ever. It’s about front-loading that entire energy conversation, making it part of the initial site selection matrix, not a problem for the facilities team to solve later.
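As a purely illustrative sketch of what folding “time to power” into that initial site selection matrix might look like, the snippet below scores candidate sites on latency and real estate cost alongside the gap between how long the power takes to arrive and how long the building takes to stand up. The site names, weights, and month figures are assumptions invented for this example, not data from the interview.

```python
# Hypothetical site-selection sketch: "time to power" weighted alongside the
# traditional criteria. All names, weights, and month figures are illustrative
# assumptions for this example only.
from dataclasses import dataclass


@dataclass
class Site:
    name: str
    network_latency_ms: float      # traditional criterion
    real_estate_cost_index: float  # traditional criterion (lower is better)
    months_to_power: float         # interconnection + permitting + substation work
    months_to_build: float         # modular buildout of the facility itself


def power_gap(site: Site) -> float:
    """Months the finished facility would sit waiting for energized capacity."""
    return max(0.0, site.months_to_power - site.months_to_build)


def score(site: Site, weights=(0.25, 0.25, 0.5)) -> float:
    """Lower is better. The weights deliberately front-load the energy question."""
    w_latency, w_cost, w_power = weights
    return (w_latency * site.network_latency_ms
            + w_cost * site.real_estate_cost_index
            + w_power * power_gap(site) * 10)  # penalize stranded-capacity months heavily


sites = [
    Site("Region A", network_latency_ms=12, real_estate_cost_index=80,
         months_to_power=36, months_to_build=14),
    Site("Region B", network_latency_ms=18, real_estate_cost_index=60,
         months_to_power=16, months_to_build=14),
]

for s in sorted(sites, key=score):
    print(f"{s.name}: score={score(s):.1f}, stranded months={power_gap(s):.0f}")
```

In this toy example, the lower-latency, better-located Region A still loses because its facility would sit roughly 22 months ahead of its power, which is exactly the stranded-capacity exposure the due diligence process is meant to surface before the site is chosen.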

Analysts warn that when cloud regions hit power ceilings, capacity gets rationed, leading to delays. What specific contract terms or contingency plans should enterprise buyers now negotiate with cloud and co-location providers to gain transparency and protection against these energy-related service disruptions?

This is where the rubber meets the road for most enterprises that aren’t building their own infrastructure. The abstraction of the cloud can hide these energy risks until it’s too late. When a region hits its power and GPU ceiling, your project can be delayed or you might be nudged to a more expensive, less ideal region. To protect yourself, you need to move beyond standard SLAs and demand far greater transparency in your contracts. You should be asking for specific language around capacity commitments for your projected growth. What are the provider’s publicly stated region expansion goals, and how are they powered? Most importantly, you must negotiate clear contingency plans. If capacity is rationed, what is the process? Is there a priority queue? What are the service credit implications? You need to understand their power resilience strategy so you can align it with your own. It’s no longer enough to just lease space or cloud instances; you are now indirectly investing in your provider’s energy strategy, and your contracts must reflect that reality.

What is your forecast for how the integration of energy development and data center operations will impact cloud pricing and availability for enterprises?

I predict we are moving into a multi-tiered cloud market, where both price and availability will be explicitly linked to energy certainty. The hyperscalers and providers who successfully integrate their own power generation, as Google is doing with Intersect, will be able to offer a premium tier of service—one with stronger guarantees on capacity, availability, and deployment timelines. For customers on standard tiers or with providers still wholly dependent on a constrained grid, availability will become less of a sure thing. We’ll see scarcity pricing become more common, where costs fluctuate based on regional power availability. The quiet truth of the next decade is that kilowatts, permits, and even local politics will have as much to say about your cloud-first roadmap as your software developers do. Your ability to execute your digital strategy will be directly determined by your provider’s ability to keep the lights on.
