With a keen eye for turning vast datasets into compelling visual stories, Chloe Maraina has become a leading voice in the push for sustainable enterprise AI. As a Business Intelligence expert with deep roots in data science, she champions a future where data management and operational excellence are intrinsically linked to environmental responsibility. In this conversation, we explore her practical, five-layer framework for implementing Green AI. Chloe breaks down how to move sustainability from a corporate talking point to a core engineering discipline, discussing the operationalization of “GreenOps,” the surprising power of simple energy monitoring, and how to foster a culture where efficiency becomes a competitive sport for engineering teams.
You argue that by 2025, treating sustainability as a corporate initiative is no longer viable. For a CTO just starting this journey, what is the first tangible step to embed this as a core design constraint, and how do you overcome initial resistance from teams?
The very first step is to take sustainability out of the abstract and make it a concrete, measurable part of your engineering leadership’s accountability. Don’t start with a vague mission statement; start with specific Objectives and Key Results. For instance, set a clear goal like “Reduce model training emissions by 30% year-over-year.” This isn’t assigned to a separate sustainability office; it belongs to the CTO. The key to overcoming resistance is to integrate these goals into the processes your teams already live and breathe. Weave sustainability readiness into your standard release checklists, right alongside security and performance reviews. When an engineer sees that a deployment can’t be approved without a carbon-efficiency metric, just as it can’t be approved with an open security vulnerability, the mindset shifts. It stops being a “nice-to-have” and becomes a non-negotiable part of building professional, production-ready systems.
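That release-checklist gate can be as small as one scripted check in the pipeline. Here is a minimal sketch of what it might look like in Python; the metrics file, the field names, and the 5% regression limit are illustrative assumptions, not a prescribed format.

```python
# release_gate.py: illustrative carbon-efficiency release check.
# Assumes an earlier pipeline stage wrote energy measurements to a JSON file;
# the file name, fields, and threshold below are hypothetical examples.
import json
import sys

REGRESSION_LIMIT = 1.05  # block the release if energy/request regresses >5%

def main() -> int:
    with open("energy_metrics.json") as f:
        metrics = json.load(f)

    baseline = metrics["baseline_joules_per_request"]
    current = metrics["current_joules_per_request"]
    ratio = current / baseline

    print(f"Energy per request: {current:.2f} J (baseline {baseline:.2f} J)")
    if ratio > REGRESSION_LIMIT:
        print(f"FAIL: {ratio:.0%} of baseline exceeds the allowed limit.")
        return 1  # non-zero exit blocks the release, like a failed security scan
    print("PASS: carbon-efficiency check met.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```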
Your five-layer framework highlights both infrastructure location and model right-sizing as key levers. Could you describe a real-world scenario where a team must weigh the trade-offs between these two and what specific metrics, beyond carbon intensity, help them make the final architectural decision?
Absolutely. Imagine a team building a new customer service chatbot. They have two main architectural choices. Option A is to use a very large, powerful, general-purpose language model but run it in a data center region powered almost entirely by renewables, which could cut emissions by up to 40%. The latency might be slightly higher due to the distance. Option B is to use a much smaller, task-specific model that is perfectly “right-sized” for chatbot queries, but the only available serverless endpoint with the required low latency is in a region with a higher carbon-intensity grid.
This is where a balanced scorecard becomes critical. Beyond just the carbon intensity (kg CO₂e/workload), the team needs to look at “joules per inference” to understand the model’s raw energy efficiency. They also need to track latency, because a greener but slower chatbot frustrates users. And, of course, they look at the cost per million requests from a FinOps perspective. The final decision isn’t about picking the single lowest carbon number; it’s a strategic choice. They might find the smaller model is so efficient that its lower energy use outweighs the dirtier grid, making it the overall winner when considering cost, performance, and sustainability together.
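To see how the arithmetic can flip the intuitive answer, here is a toy version of that scorecard in Python. Every figure below is invented for illustration; the point is the unit conversion from joules and grid intensity to emissions per million requests, not the numbers themselves.

```python
# scorecard.py: toy balanced-scorecard comparison of the two options.
# All figures are illustrative, not measurements from any real system.
OPTIONS = {
    "A: large model, green region": {
        "grid_intensity_g_co2e_per_kwh": 50,   # mostly renewable grid
        "joules_per_inference": 900,           # big general-purpose model
        "p95_latency_ms": 420,
        "cost_per_million_requests_usd": 180,
    },
    "B: right-sized model, dirtier grid": {
        "grid_intensity_g_co2e_per_kwh": 400,
        "joules_per_inference": 60,            # small task-specific model
        "p95_latency_ms": 120,
        "cost_per_million_requests_usd": 25,
    },
}

def g_co2e_per_million_requests(opt: dict) -> float:
    total_kwh = opt["joules_per_inference"] * 1_000_000 / 3.6e6  # J -> kWh
    return total_kwh * opt["grid_intensity_g_co2e_per_kwh"]

for name, opt in OPTIONS.items():
    print(f"{name}: {g_co2e_per_million_requests(opt):,.0f} g CO2e/M req, "
          f"{opt['p95_latency_ms']} ms p95, "
          f"${opt['cost_per_million_requests_usd']}/M req")

# With these toy numbers, Option B emits roughly half of Option A's CO2e per
# million requests: the small model's efficiency outweighs the dirtier grid.
```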
You introduce “GreenOps” as the sustainability counterpart to FinOps. Beyond just tracking carbon, how do you operationalize this? Can you detail the first three steps an organization should take to build a GreenOps function and create the dashboards that change engineering behavior?
Operationalizing GreenOps is all about making sustainability data visible, actionable, and routine. The first step is integration, not isolation. Don’t build a separate “green” dashboard that no one looks at. Instead, pipe your energy and carbon data directly into the cloud cost reporting dashboards that your FinOps teams and engineers already use every day. Place the carbon cost right next to the dollar cost.
Second, form a GreenOps guild or community of practice. This brings together passionate engineers, finance partners, and product managers to define what you’ll measure and how. Their first task should be to create a simple, high-impact dashboard widget: for example, a leaderboard showing energy per inference for the top ten services. This immediately gamifies efficiency.
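That leaderboard can start life as a few lines of aggregation over telemetry you already collect. A minimal sketch, where the telemetry dict stands in for whatever metrics store you actually run; the service names and values are invented.

```python
# leaderboard.py: sketch of an energy-per-inference leaderboard widget.
# The telemetry dict is a stand-in for a real metrics store; all values
# are invented for illustration.
telemetry = {
    "fraud-scoring":  {"joules": 8.1e6, "requests": 2_400_000},
    "chat-assist":    {"joules": 6.6e7, "requests": 1_100_000},
    "search-ranking": {"joules": 1.9e6, "requests": 5_300_000},
}

rows = sorted(
    ((svc, m["joules"] / m["requests"]) for svc, m in telemetry.items()),
    key=lambda row: row[1],
)

print("Energy-per-inference leaderboard (lower is better)")
for rank, (svc, jpr) in enumerate(rows[:10], start=1):
    print(f"{rank}. {svc}: {jpr:.2f} J/request")
```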
Third, you automate. Once the data is flowing and visible, you can build automated policies. A classic example is carbon-aware scheduling, where your CI/CD pipeline automatically deploys a non-urgent model training job to whichever global region has the lowest carbon intensity at that moment. The dashboards are what drive the change. When a developer can see a clear visual—“Model X: 75% carbon-efficient vs. baseline”—it stops being an abstract concept and becomes a clear, actionable engineering challenge.
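The core of carbon-aware scheduling is only a few lines once you have a carbon signal. In the sketch below, get_carbon_intensity() is a placeholder for a real signal provider (services such as Electricity Maps or WattTime publish this data) and submit_training_job() stands in for your own dispatcher; the regions and intensity values are illustrative.

```python
# carbon_aware_dispatch.py: sketch of routing a non-urgent training job to
# the greenest region. Both helpers are hypothetical stand-ins: in practice
# you would call a carbon-signal API and your own job scheduler.
CANDIDATE_REGIONS = ["eu-north-1", "us-west-2", "ap-southeast-2"]

def get_carbon_intensity(region: str) -> float:
    """Current grid intensity in g CO2e/kWh (placeholder values)."""
    return {"eu-north-1": 35.0, "us-west-2": 210.0, "ap-southeast-2": 520.0}[region]

def submit_training_job(region: str, job_spec: dict) -> None:
    print(f"Submitting {job_spec['name']} to {region}")  # stand-in dispatcher

def dispatch_greenest(job_spec: dict) -> str:
    region = min(CANDIDATE_REGIONS, key=get_carbon_intensity)
    submit_training_job(region, job_spec)
    return region

if __name__ == "__main__":
    dispatch_greenest({"name": "nightly-model-retrain", "gpus": 8})
```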
The article notes that a 15% power reduction in inference was achieved in two sprints just by implementing energy monitoring. Can you elaborate on what this “energy per inference” monitoring looks like in a CI/CD pipeline and give an example of waste that engineers immediately noticed?
It sounds complex, but it’s surprisingly straightforward. In our CI/CD pipeline, right after the performance tests run, we added a new stage for sustainability regression. This stage uses profiling tools to hit the newly deployed inference endpoint with a standardized batch of, say, 1,000 queries. It measures the total energy consumed by the hardware during that run and calculates an average “joules per request.” This number is then logged and tracked with every single build, just like we track latency or memory usage. If the number suddenly spikes, the build can even be flagged for review.
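Chloe doesn’t name the profiling tooling, but a bare-bones version of that stage can be assembled from standard pieces: sample device power while replaying a fixed query batch, integrate over time, and divide by the request count. A sketch assuming an NVIDIA GPU host (so nvidia-smi is available) and an HTTP inference endpoint; the URL and payload are illustrative.

```python
# energy_regression.py: sketch of a CI stage measuring joules per request.
# Assumes an NVIDIA host (nvidia-smi) and an HTTP inference endpoint; the
# endpoint URL and payload are illustrative, not from the interview.
import json
import subprocess
import threading
import time
import urllib.request

ENDPOINT = "http://localhost:8080/infer"   # hypothetical endpoint
N_REQUESTS = 1000
SAMPLE_PERIOD_S = 0.5

samples: list[float] = []  # instantaneous power draw in watts
stop = threading.Event()

def sample_power() -> None:
    while not stop.is_set():
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=power.draw",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True,
        ).stdout
        samples.append(sum(float(w) for w in out.split()))  # sum across GPUs
        time.sleep(SAMPLE_PERIOD_S)

sampler = threading.Thread(target=sample_power, daemon=True)
sampler.start()
start = time.monotonic()

payload = json.dumps({"text": "standardized benchmark query"}).encode()
for _ in range(N_REQUESTS):
    req = urllib.request.Request(
        ENDPOINT, data=payload, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req).read()

elapsed = time.monotonic() - start
stop.set()
sampler.join()

avg_watts = sum(samples) / len(samples)   # mean power over the run
joules_per_request = avg_watts * elapsed / N_REQUESTS
print(f"{joules_per_request:.2f} J/request over {N_REQUESTS} queries")
```

In a real pipeline this number would be logged per build and compared against the baseline, with a spike flagging the build for review, as Chloe describes.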
When we first turned this on, the impact was immediate. The 15% reduction came from engineers seeing waste that was previously invisible. For instance, one team noticed their inference servers had a huge energy draw even when idle between requests. The GPUs weren’t entering a low-power state. A simple configuration change fixed it. Another team found that their data preprocessing function, which ran before the AI model, was surprisingly inefficient and consumed almost as much power as the inference itself. They optimized that code, and the energy-per-inference metric dropped instantly. It wasn’t about a massive refactoring project; it was about shining a light on the small, cumulative sources of waste.
You mentioned that teams began competing to reduce emissions, turning compliance into innovation. Can you share an anecdote of this in action? How can leaders use the “recognition and storytelling” you advise to make sustainability a genuine point of competitive pride for their engineers?
I have a favorite story about this. We had two different teams working on fraud detection models for different product lines. When we introduced the Green AI scorecard, the first team, Team A, spent an optimization sprint on model quantization, a technique to make the model smaller and faster. They managed to cut their joules-per-inference by 25% and presented their results at our monthly engineering all-hands. You could feel the buzz in the room.
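The interview doesn’t say which framework Team A used; for illustration, here is what a dynamic-quantization pass looks like in PyTorch, one common way to shrink a model’s linear layers to 8-bit integers.

```python
# quantize_sketch.py: illustrative PyTorch dynamic quantization pass.
# The model here is a toy stand-in; Team A's actual model isn't described.
import torch
import torch.nn as nn

model = nn.Sequential(          # hypothetical fraud-scoring network
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
model.eval()

# Convert Linear weights to int8; activations are quantized on the fly.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized(x))  # same interface, smaller and cheaper per inference
```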
Two weeks later, Team B, not wanting to be outdone, came up with a brilliant caching strategy. They realized that many of the same checks were being run repeatedly, so they built a system to cache recent results, avoiding redundant computation. This not only cut their energy consumption below even Team A’s level, but it also dramatically improved their service’s latency. They became the new leaders on the efficiency scorecard. Leaders can foster this by deliberately creating that kind of stage. You have to celebrate these wins publicly through spotlights and all-hands meetings. Frame the narrative around innovation, not mere compliance. It’s not about “meeting the sustainability target.” It’s about “Who built the smartest, most efficient fraud detection engine in the company?” When sustainability becomes a metric for engineering excellence, it becomes a point of pride, and that’s when the real, lasting change happens.
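To make Team B’s idea concrete: the heart of such a cache is a keyed store with a time-to-live, so repeated checks inside a short window skip the model entirely. A minimal sketch; the function names, the TTL, and the scoring stub are all illustrative assumptions.

```python
# result_cache.py: sketch of a TTL cache for repeated fraud checks.
# Names, the TTL, and the scoring stub are illustrative assumptions.
import time

TTL_SECONDS = 300
_cache: dict[str, tuple[float, float]] = {}  # key -> (expiry, score)

def expensive_fraud_score(key: str) -> float:
    time.sleep(0.1)  # stand-in for a costly model inference
    return 0.02

def cached_fraud_score(key: str) -> float:
    now = time.time()
    hit = _cache.get(key)
    if hit and hit[0] > now:
        return hit[1]                      # cache hit: no model invocation
    score = expensive_fraud_score(key)
    _cache[key] = (now + TTL_SECONDS, score)
    return score

cached_fraud_score("txn:card-1234")  # computes and stores
cached_fraud_score("txn:card-1234")  # served from cache
```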
What is your forecast for sustainable AI?
My forecast is that within the next few years, sustainability metrics will become as fundamental to evaluating AI models as accuracy and latency are today. We’re moving past an era defined purely by who can build the largest model. The future will be led by those who can run them the smartest. I foresee public model leaderboards and platforms like Hugging Face including carbon footprint and energy-per-inference as standard benchmarks. Enterprise customers will start demanding this data as part of their procurement process, and investors will look to a company’s “efficiency portfolio” as a sign of operational resilience and innovation. The ultimate competitive edge won’t just be performance; it will be performance-per-watt. This shift is inevitable because it aligns technological progress with both business logic and planetary health.
