Passionate about creating compelling visual stories through big data analysis, Chloe Maraina is our Business Intelligence expert, with an aptitude for data science and a vision for the future of data management and integration. Today, she unpacks the latest advancements in enterprise storage, focusing on how a combination of custom-designed hardware and intelligent, agent-driven AI is reshaping the landscape for mission-critical workloads, offering unprecedented density, security, and operational simplicity.
With the introduction of 105 TB custom FlashCore Modules, how does building hardware in-house help navigate memory supply chain issues versus sourcing third-party SSDs? Please share some specific engineering or cost benefits this approach provides to customers.
That’s a fantastic question because it gets right to the heart of the strategy. When you see the ongoing global memory shortages, you realize how fragile a dependency on a single source or type of SSD can be. By designing our own FlashCore Modules, we fundamentally de-risk our supply chain. We aren’t beholden to one supplier; we can source NAND flash from multiple vendors. That removes a layer of the supply chain, which in turn removes a layer of cost for us, and we can pass that stability and value on to customers. From an engineering standpoint, it’s a game-changer. Instead of being constrained by a standard SSD form factor, we control the entire architecture. We lay out the NAND flash at the wafer level ourselves, which lets us engineer the module to use the physical space far more efficiently. That’s how we can pack such immense capacity, like the new 105 TB module, into our systems.
The new systems feature agentic AI for automated data placement and migration. Could you walk us through how FlashSystem.ai proactively identifies a performance issue and then automates moving a workload, including the rationale it presents to the human operator?
Imagine a core banking application experiencing a subtle, creeping latency during peak processing hours. The old way involved an admin getting an alert, digging through logs, and manually figuring out the problem. FlashSystem.ai is designed to be proactive. The AI agent is constantly monitoring performance metrics at a granular level. It might detect that a specific workload’s I/O patterns are creating a hot spot on a particular set of drives. Before this ever becomes a user-impacting problem, the AI formulates a solution. It won’t just act blindly; it will present a clear recommendation to the operator, something like: “Move this database workload from Array A to Array B to alleviate contention and improve response time.” The best part is the transparency. There’s a “view rationale” button that explains its reasoning in plain English, detailing the performance trends it identified and the expected outcome of the migration. Once approved, it automates the entire non-disruptive migration, making a complex optimization feel effortless.
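To make that loop concrete, here is a minimal, hypothetical sketch (in Python, not actual FlashSystem.ai code) of an approval-gated agent: it watches latency samples, drafts a migration recommendation with a plain-English rationale, and only acts once the operator approves. The thresholds, array names, and helper functions are illustrative assumptions.

```python
# Hypothetical sketch of an approval-gated recommendation loop.
# Names, thresholds, and metrics are illustrative, not FlashSystem.ai internals.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Recommendation:
    action: str
    rationale: str


def analyze_latency(samples_ms: list[float], baseline_ms: float) -> Recommendation | None:
    """Flag a creeping latency trend before it becomes user-impacting."""
    recent = mean(samples_ms[-5:])
    if recent > baseline_ms * 1.5:
        return Recommendation(
            action="Move this database workload from Array A to Array B",
            rationale=(
                f"Average latency over the last 5 samples is {recent:.1f} ms, "
                f"{recent / baseline_ms:.1f}x the {baseline_ms:.1f} ms baseline; "
                "Array B has spare headroom to absorb the workload."
            ),
        )
    return None


def run_agent(samples_ms, baseline_ms, approve) -> str:
    rec = analyze_latency(samples_ms, baseline_ms)
    if rec is None:
        return "no action needed"
    print("Recommendation:", rec.action)
    print("Rationale:", rec.rationale)   # the "view rationale" step
    if approve(rec):                      # human stays in the loop
        return "non-disruptive migration started"
    return "recommendation deferred by operator"


if __name__ == "__main__":
    history = [2.1, 2.3, 2.2, 3.8, 4.1, 4.5, 4.9, 5.2]
    print(run_agent(history, baseline_ms=2.2, approve=lambda rec: True))
```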
While some vendors apply AI to file and object storage, the new FlashSystem focuses on block storage for workloads like core banking applications. What specific operational and scalability challenges does agentic AI solve for these structured data environments that it might not for unstructured data?
This is a crucial distinction. AI for unstructured data, like what you see with some competitors, is often focused on data curation or easing the deployment of AI training models. Our focus with FlashSystem.ai on block storage is about solving a different, more foundational set of problems. Think about structured data environments—databases, core banking systems, transaction processing. The performance and availability of these systems are non-negotiable. The challenges here are about operational integrity and massive scalability under strict service-level agreements. Agentic AI in this context is built to manage the underlying health and performance of the infrastructure itself. It’s not about analyzing the data in the files; it’s about ensuring the data is served with blistering speed and reliability. It addresses the operational burdens of performance tuning, capacity planning, and security auditing, which are paramount for these regulated, high-stakes applications.
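As an illustration of the kind of operational bookkeeping that matters here, the toy sketch below audits a set of block workloads against latency and capacity targets, the sort of SLA checking an agent would automate. The field names and thresholds are assumptions for the example, not product APIs.

```python
# Illustrative only: a toy SLA audit for block workloads.
# Field names and limits are assumptions, not a product API.
from dataclasses import dataclass


@dataclass
class BlockWorkload:
    name: str
    p99_latency_ms: float
    used_capacity_pct: float


@dataclass
class Sla:
    max_p99_latency_ms: float
    max_capacity_pct: float


def audit(workloads: list[BlockWorkload], sla: Sla) -> list[str]:
    """Return the findings an agent would turn into tuning or capacity actions."""
    findings = []
    for w in workloads:
        if w.p99_latency_ms > sla.max_p99_latency_ms:
            findings.append(f"{w.name}: retune placement, p99 {w.p99_latency_ms} ms exceeds target")
        if w.used_capacity_pct > sla.max_capacity_pct:
            findings.append(f"{w.name}: plan capacity, pool is {w.used_capacity_pct}% full")
    return findings


if __name__ == "__main__":
    fleet = [
        BlockWorkload("core-banking-db", 4.2, 71.0),
        BlockWorkload("batch-reporting", 1.1, 93.0),
    ]
    for finding in audit(fleet, Sla(max_p99_latency_ms=2.0, max_capacity_pct=85.0)):
        print(finding)
```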
The latest FlashCore Modules use FPGAs to run functions directly on the drive. How does this distributed approach to security and data management create a deeper root of trust compared to system-level management? Could you provide an example of a threat this might mitigate?
This is where the design gets incredibly powerful. By embedding FPGAs directly into each FlashCore Module, we distribute intelligence down to the lowest possible level. Instead of leaving every security and management function to the system’s operating system, which can be a single point of attack, we push those functions onto the drives themselves. This creates what we call a deeper root of trust: the drive itself becomes an active participant in its own security. For instance, imagine a sophisticated malware attack that manages to compromise the storage array’s main OS. In a traditional system, that could give the attacker broad control. With our approach, even if the OS is compromised, the FPGAs on the individual drives are still running their own security checks, like ransomware detection and data-integrity validation. It’s a much more resilient and fortified architecture because security isn’t just a layer on top; it’s woven into the very fabric of the hardware.
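For a sense of what a drive-local check can look like independently of the host OS, here is a conceptual sketch of entropy-based write screening, one common ransomware signal. It is written in Python for readability; in a FlashCore Module this class of logic would live in the FPGA, and the threshold here is purely illustrative.

```python
# Conceptual sketch of a drive-local integrity check. On real hardware this
# class of logic runs in the FPGA; the 7.5 bits/byte threshold is illustrative.
import math
import os
from collections import Counter


def shannon_entropy(block: bytes) -> float:
    """Bits per byte; bulk-encrypted data (e.g. ransomware rewrites) trends toward 8."""
    if not block:
        return 0.0
    total = len(block)
    counts = Counter(block)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())


def flag_suspicious_writes(blocks: list[bytes], threshold: float = 7.5) -> list[int]:
    """Return indices of write blocks whose entropy looks like mass encryption."""
    return [i for i, b in enumerate(blocks) if shannon_entropy(b) > threshold]


if __name__ == "__main__":
    normal = b"account,balance\n" * 256   # structured, low-entropy application data
    encrypted_like = os.urandom(4096)     # what a ransomware rewrite tends to look like
    print(flag_suspicious_writes([normal, encrypted_like]))  # -> [1]
```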
Many now consider AI-driven management to be table stakes, replacing older rules-based systems. Beyond individual arrays, how does this AI-powered, dynamic workload placement set the stage for predictive movement across entire hybrid cloud environments? What are the key steps to achieving that vision?
You’re absolutely right; intelligent, real-time management is the new standard. The work we’re doing with FlashSystem.ai is the essential foundation for a much grander vision. Today, the AI is making smart decisions within an array or across a fleet of arrays in a data center. The next logical evolution is to extend that intelligence across the entire hybrid cloud ecosystem. The key is to move from being reactive or proactive to being truly predictive. The first step is what we have now: dynamic placement based on real-time system health and SLAs. The next step involves the AI learning workload patterns over time to forecast future needs—predicting that a certain application will need more performance next Tuesday because it’s the end of the quarter. The ultimate vision is for the AI to orchestrate the predictive movement of entire workloads between on-premises infrastructure and public clouds, optimizing for cost, performance, and compliance automatically. It’s about creating a truly autonomous IT infrastructure.
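To sketch what that predictive step might look like, the hypothetical example below forecasts a workload’s next peak and then scores placement options on cost, latency, and compliance. The growth factor, fields, and figures are assumptions for illustration, not how the product models it.

```python
# Hypothetical placement scorer for the predictive hybrid cloud step.
# Weights, fields, and figures are illustrative assumptions, not product logic.
from dataclasses import dataclass


@dataclass
class Placement:
    name: str
    monthly_cost_usd: float
    p99_latency_ms: float
    in_compliance_region: bool


def forecast_peak_iops(history: list[int], quarter_end_growth: float = 1.10) -> int:
    """Naive forecast: next peak = recent observed peak scaled by expected growth."""
    return int(max(history[-30:]) * quarter_end_growth)


def choose_placement(options: list[Placement], latency_budget_ms: float) -> Placement:
    """Pick the cheapest compliant option that still meets the latency budget."""
    eligible = [
        p for p in options
        if p.in_compliance_region and p.p99_latency_ms <= latency_budget_ms
    ]
    if not eligible:
        raise ValueError("no placement satisfies compliance and latency constraints")
    return min(eligible, key=lambda p: p.monthly_cost_usd)


if __name__ == "__main__":
    print("forecast peak IOPS:", forecast_peak_iops([40_000, 52_000, 48_000]))
    options = [
        Placement("on-prem FlashSystem pool", 9_000, 0.8, True),
        Placement("public cloud block tier", 6_500, 2.4, True),
    ]
    print("chosen:", choose_placement(options, latency_budget_ms=1.0).name)
```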
What is your forecast for agentic AI in enterprise storage?
I believe we are on the cusp of a major shift. We’re moving beyond AI as a tool for simple optimization and into an era where agentic AI becomes a genuine partner to IT operations. In the near future, I forecast that these AI agents will not only predict and resolve storage issues but will also engage in sophisticated, multi-step problem-solving across the entire IT stack—from the application to the network to the storage. We’ll see AI agents that can automatically model the financial impact of moving a workload to the cloud versus keeping it on-prem, present the business case to an operator, and then execute the entire migration if approved. The future of enterprise storage isn’t just about faster or denser hardware; it’s about creating self-managing, self-healing, and self-optimizing data ecosystems, and agentic AI is the intelligence that will make that a reality.
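As a closing illustration of that last point, here is a back-of-the-envelope sketch of the kind of financial comparison an agent might assemble into a business case before asking for approval. All figures and the three-year horizon are made-up placeholders.

```python
# Back-of-the-envelope business case an agent might present before a migration.
# All figures and the 3-year horizon are made-up placeholders.
def three_year_tco(monthly_cost: float, one_time_migration: float = 0.0) -> float:
    return one_time_migration + monthly_cost * 36


def business_case(on_prem_monthly: float, cloud_monthly: float, migration_cost: float) -> str:
    on_prem = three_year_tco(on_prem_monthly)
    cloud = three_year_tco(cloud_monthly, migration_cost)
    delta = on_prem - cloud
    verdict = "move to cloud" if delta > 0 else "stay on-prem"
    return (
        f"3-year TCO: on-prem ${on_prem:,.0f} vs cloud ${cloud:,.0f} "
        f"(delta ${delta:,.0f}) -> recommend: {verdict}"
    )


if __name__ == "__main__":
    print(business_case(on_prem_monthly=12_000, cloud_monthly=9_500, migration_cost=40_000))
```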
