Chloe Maraina has spent her career turning messy operational data into clear, visual stories that leaders can act on. As a Business Intelligence expert with a data science toolkit, she’s been inside the planning cycles that connect product roadmaps, AI-enabled workflows, and vendor economics to hard outcomes. In this conversation, she unpacks how job cuts, agentic AI, and surging memory costs intersect—and what that means for enterprise buyers navigating refreshes, contracts, and service levels.
Summary of key themes: We explore why HP is targeting $1 billion in savings by 2028 and how those dollars show up on financial statements; where agentic AI is being embedded first and what process redesign really looks like; how $650 million in restructuring is phased to avoid delivery shocks; and how the company intends to split savings across innovation, customer satisfaction, and productivity. We also dive into protecting roadmap velocity amid headcount changes, separating AI narrative from cost containment, modeling a $0.30 EPS hit from memory inflation, and managing the handoff from covered H1 FY2026 inventory to rising costs in the back half. Along the way, Chloe shares lessons from earlier reductions affecting 1,000–2,000 roles in February and 9,400 in the 2022 program, and looks across the market as vendors simplify portfolios and centralize operations. She closes with how to measure AI’s real impact and the risks that could still throw plans off course.
HP plans to cut 4,000–6,000 jobs by 2028 to save $1 billion. What core problems is this solving right now, and how will the savings show up on the P&L and balance sheet? Walk us through milestones, with examples from product development and customer support.
The cuts are aimed at three immediate pain points: soft PC demand pressuring revenue, surging memory costs compressing gross margin, and people-heavy processes that slow cycle times in product development and customer support. On the P&L, you’ll see the gross run-rate savings flow through operating expenses over three years as the company targets $1 billion, while restructuring charges—about $650 million total, with $250 million in fiscal 2026—hit below the line during the transition. On the balance sheet, working capital should tighten as teams simplify SKUs and shorten service loops, which pulls down inventory days and makes payables more predictable. Milestones include redesigning product change control (e.g., reducing cross-functional sign-offs from weeks to days) and consolidating support tiers so that front-line agents resolve a higher share on first contact; I’ve sat in reviews where a single workflow had six handoffs—this program is about cutting those in half.
You’ve piloted AI for two years and now plan full deployment using “agentic AI.” What processes are being redesigned first, and why? Share a step-by-step example of a workflow before-and-after AI, plus the metrics you’re tracking to prove impact.
We’re starting where latency is visible to customers and where data quality is mature: case triage in support, test automation in product development, and demand/supply reconciliation in operations. Before AI in support triage: a ticket arrived, a human read logs, searched a knowledge base, routed to Tier 2, and waited; customers felt the dead air. After agentic AI: an agent parses device telemetry, validates known issues, suggests a fix or parts list, and only escalates if confidence falls below a pre-set threshold. Metrics we watch include first-contact resolution rate, average handle time, and escalation ratio; in engineering we track test coverage per build and defect escape rate; in ops we look at forecast error and exception rates. The point isn’t just automation—it’s fewer handoffs and faster, higher-confidence answers.
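The confidence-gated escalation step described above can be sketched in a few lines. This is a minimal illustration, not HP's actual system; the threshold value, names, and fix strings are all hypothetical:

```python
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.80  # hypothetical pre-set escalation threshold


@dataclass
class TriageResult:
    suggested_fix: str  # fix or parts list proposed by the agent
    confidence: float   # agent's confidence in the suggestion, 0.0-1.0


def route_ticket(result: TriageResult) -> str:
    """Auto-resolve when the agent's confidence clears the threshold;
    otherwise escalate to a human Tier 2 queue."""
    if result.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-resolve: {result.suggested_fix}"
    return "escalate: Tier 2"


print(route_ticket(TriageResult("replace DIMM", 0.92)))   # high confidence: auto-resolve
print(route_ticket(TriageResult("unknown fault", 0.45)))  # low confidence: escalate
```

The design point matches the interview: the agent handles the high-confidence majority, and humans see only the cases where confidence falls below the pre-set bar, which is what reduces handoffs and dead air.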
The company expects $650 million in restructuring costs, with $250 million in fiscal 2026. How do you phase those expenses without disrupting delivery? Describe the sequencing, contingencies you built in, and any lessons from the 2022 “Future Ready” program.
Sequencing starts with back-office consolidations and process redesign before touching customer-facing capacity. We phase costs alongside capability drops: design the new workflow, stand up interim tooling, train teams, then execute role changes. Contingencies include parallel run periods and capacity buffers in regional hubs so service levels don’t dip during cutovers. From the 2022 program that affected 9,400 people, we learned to decouple platform migrations from peak refresh windows and to pre-negotiate vendor support for data migrations; in plain terms, don’t swap the engine while the plane is taking off. That’s why the heaviest fiscal 2026 spend is matched to windows where we can monitor and correct.
HP aims for $1 billion in run-rate savings in three years, allocating 20% to innovation, 40% to customer satisfaction, and 40% to productivity. What specific initiatives sit in each bucket? Give concrete KPIs, timelines, and one anecdote of a trade-off you made.
For the 20% innovation slice, think faster product cycles and AI-infused features: we focus on release cadence and test coverage as near-term KPIs. The 40% for customer satisfaction prioritizes support responsiveness and predictability—first-contact resolution and time-to-RMA are front and center. The remaining 40% hits productivity through portfolio simplification, centralized operations, and automated workflows; internally we watch cycle time and rework rates. A trade-off we made: delaying a lower-volume SKU refresh to free engineering capacity for test automation; it meant saying no to a shiny launch, but it improved defect detection in the core line—exactly where customers feel quality.
Cuts will hit product development, internal ops, and customer support. How do you protect roadmap velocity and service levels during headcount changes? Share metrics like release cadence, defect rates, response times, and how you’ve adjusted team structures.
We redesigned team topology around smaller, cross-functional pods with explicit ownership of outcomes, not activities. In engineering that means pods own a feature from design through validation, which helps maintain release cadence even as roles shift; in support, pods own a segment and its knowledge base, keeping response times consistent. We track cadence at the feature level and defect rates post-release, and in support we monitor response time and backlog age daily during the transition. The mantra is fewer handoffs and clearer accountability; we’ve learned that clear ownership beats scale when headcount is in motion.
Analysts say this looks more like cost containment than AI-driven gains. How do you separate narrative from reality inside the business? Walk us through the data you used to decide, the thresholds for greenlighting cuts, and one decision you reversed.
We built a simple rule: no cuts greenlighted without a redesigned process and instrumentation to measure it. The decision set included demand trends, component cost scenarios, and pilot results from two years of AI experiments. We used thresholds like stable or improving service KPIs under the new workflow for a full cycle before staffing changes. One reversal: we paused a planned consolidation in a region after pilot telemetry showed warranty turnaround volatility; instead, we added a temporary buffer team, stabilized the process, then resumed the plan.
Memory chip costs could hit EPS by $0.30 in H2 FY2026. How are you modeling that impact across PCs and peripherals? Explain the mitigation stack—supplier diversification, reduced memory configs, and price actions—with expected margin effects by product line.
The model flows from bill-of-materials sensitivity to memory pricing, layered by mix and channel. We simulate the $0.30 per-share headwind in the back half by product line, then stack mitigations: qualify lower-cost suppliers, reduce memory configurations where performance envelopes allow, and apply targeted price actions. PCs feel the brunt; peripherals are less memory-dense, so margin pressure is milder. We don’t expect to fully neutralize the hit, but the goal is to slow the pace of margin compression and keep critical SKUs competitive while being transparent about trade-offs.
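A toy version of that bill-of-materials sensitivity can make the mechanics concrete. The $0.30 figure is the only input taken from the discussion; the share count, unit volumes, per-unit cost deltas, and mitigation recovery rates are all invented for illustration:

```python
# Illustrative EPS-headwind model: per-unit memory cost increases flow
# through product-line volumes, then mitigations claw part of it back.
SHARES = 950_000_000  # hypothetical share count

# name: (units, memory cost increase per unit, mitigation recovery share)
product_lines = {
    "pcs":         (20_000_000, 13.00, 0.35),  # memory-dense, feels the brunt
    "peripherals": (40_000_000,  0.70, 0.20),  # less memory-dense, milder hit
}


def eps_impact(mitigated: bool) -> float:
    """Total cost increase across lines divided by shares outstanding."""
    total = sum(units * delta * ((1 - recovered) if mitigated else 1.0)
                for units, delta, recovered in product_lines.values())
    return total / SHARES


print(f"gross headwind: ${eps_impact(False):.2f}/share")   # ~ the $0.30 scenario
print(f"after mitigation: ${eps_impact(True):.2f}/share")  # reduced, not neutralized
```

With these made-up inputs the gross headwind lands near $0.30 and mitigations shrink it without eliminating it, mirroring the "slow the compression, don't pretend to erase it" framing above.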
You said inventory covers H1 FY2026, with pressure rising as it depletes. What’s your detailed playbook for the handoff from old to new cost bases? Share timelines, inventory aging assumptions, and how you’ll communicate price moves to enterprise buyers.
The playbook staggers transitions by SKU and region: consume covered inventory through the first half, then blend in new-cost builds to avoid cliff effects. Aging assumptions flag lots approaching cost crossover, so pricing, promotions, and channel allocations are sequenced to minimize write-downs and surprises. Communications to enterprise buyers happen in planned windows with clear effective dates and options—multi-year pricing, configuration choices, or alternative bundles—to maintain trust. The aim is steady, predictable shifts rather than abrupt changes that disrupt IT plans.
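The "blend rather than cliff" handoff can be sketched as a weekly blended-cost curve. Quantities, costs, and the ramp schedule are invented; the point is the shape, where unit cost steps up gradually instead of jumping the week covered inventory runs out:

```python
# Sketch: blended unit cost as covered (old-cost) inventory is consumed
# and new-cost builds are phased in. All numbers are illustrative.
OLD_COST, NEW_COST = 100.0, 130.0
COVERED_UNITS = 600_000   # hypothetical H1 inventory coverage
WEEKLY_DEMAND = 100_000


def blended_cost_by_week(weeks: int, ramp_start: int = 4) -> list[float]:
    """Consume covered inventory outright early, then taper the old-cost
    share linearly so unit cost ramps instead of hitting a cliff."""
    costs, remaining = [], COVERED_UNITS
    for w in range(weeks):
        old_share = 1.0 if w < ramp_start else max(0.0, 1.0 - (w - ramp_start + 1) / 4)
        old_used = min(remaining, WEEKLY_DEMAND * old_share)
        remaining -= old_used
        new_used = WEEKLY_DEMAND - old_used
        costs.append((old_used * OLD_COST + new_used * NEW_COST) / WEEKLY_DEMAND)
    return costs


print(blended_cost_by_week(8))  # steps up smoothly from 100.0 toward 130.0
```

One side effect worth noticing: a deliberate taper can leave some covered lots unconsumed, which is exactly why the aging assumptions flag cost-crossover lots for pricing and promotion decisions before they become write-downs.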
Some clients report slower warranty turnarounds and less predictable inventory updates since regional teams changed. What’s causing the friction, step by step? Describe the fixes in flight, target SLAs, and any interim workarounds account teams can offer.
Friction tends to come from re-routed escalations and immature data syncs after regional reorganizations. Tickets bounce when ownership boundaries aren’t crisp, and inventory feeds lag when connectors change. Fixes include re-binding escalation paths in the service desk, tightening integration with parts depots, and tuning the AI triage thresholds to reduce false escalations. We’re aligning to target SLAs that keep warranty turnarounds predictable and inventory updates on a known schedule; in the meantime, account teams can pre-book parts for high-risk fleets and schedule proactive check-ins so customers see fewer surprises.
For CIOs renegotiating support terms during this transition, what should they ask for specifically? List concrete clauses, measurement intervals, credit triggers, and escalation paths. Share one example of a customer who secured better outcomes by pushing on details.
Ask for SLAs tied to measurable milestones—response time, first-contact resolution, and RMA shipment time—with weekly reporting for the first quarter post-change. Include credit triggers when thresholds are missed over a defined interval, and insist on a clear escalation path with named owners. Add clauses for data access—telemetry and case history—so you can audit outcomes and help the vendor spot issues earlier. A customer recently pushed for dual-path escalation coverage during regional consolidation and secured steadier warranty outcomes by ensuring backups were documented, not implied.
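A credit trigger of the kind described is easy to specify mechanically, which is worth doing before it goes into a contract. This sketch uses hypothetical metric names, thresholds, and intervals; the structure, a threshold missed over a defined consecutive interval, is the clause to push for:

```python
from dataclasses import dataclass


@dataclass
class SlaTerm:
    metric: str
    threshold: float   # minimum acceptable weekly value
    breach_weeks: int  # consecutive misses before a credit fires
    credit_pct: float  # percent of monthly fee credited on breach


def credit_due(term: SlaTerm, weekly_values: list[float]) -> bool:
    """True when the metric missed threshold for the last `breach_weeks`
    consecutive weeks of reporting."""
    recent = weekly_values[-term.breach_weeks:]
    return len(recent) == term.breach_weeks and all(v < term.threshold for v in recent)


# Hypothetical term: first-contact resolution >= 75%, credit after 2 missed weeks.
fcr = SlaTerm("first_contact_resolution", 0.75, 2, 5.0)
print(credit_due(fcr, [0.80, 0.73, 0.71]))  # two consecutive misses -> True
```

Defining the trigger this precisely also forces agreement on the measurement interval and the data feed behind it, which is where the telemetry-access clause earns its keep.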
Earlier reductions hit 1,000–2,000 roles in February and 9,400 under the 2022 program. What did those waves teach you about operational continuity? Give a before-and-after comparison of org design, handoff maps, and how you institutionalized change management.
Before, we were organized around functions with many interlocks; handoff maps looked like spaghetti. After, we shifted to cross-functional pods with tighter scope and accountability, which made handoff maps shorter and more legible. We institutionalized change management by standardizing playbooks—role transitions, tooling cutovers, and KPI guardrails—so teams knew what “good” looked like. The lesson was simple: structure and instrumentation trump heroics when the organization is moving under your feet.
Across HP, Dell, Lenovo, and HPE, vendors are simplifying portfolios and centralizing operations. How will that reshape refresh cycles, pricing transparency, and service models over the next 12–18 months? Share benchmarks, likely pitfalls, and what “good” looks like.
Expect fewer SKUs, longer-lived platforms, and clearer roadmaps, which should make refresh plans easier to sequence. Pricing transparency tends to improve as portfolios shrink, but you may see more dynamic adjustments tied to component markets—especially memory. Service models will lean into AI, with centralized knowledge and lighter regional footprints; the pitfall is uneven execution during the transition. “Good” looks like predictable release calendars, consistent SLAs, and vendors that communicate impacts early, not after the fact.
How are you measuring AI’s real contribution to productivity versus headcount cuts? Walk us through your attribution model, the baseline you chose, the control groups you’re using, and how you’ll publish results to customers and investors.
We built a matched-control framework: comparable teams, one with agentic AI deployed, one without, and a stable baseline period from our two-year pilots. We track output per labor hour, cycle time, and quality measures like defect escape or repeat tickets, attributing gains only where the control stays flat and the AI cohort improves. Headcount changes are modeled separately so we don’t double-count reductions as AI wins. We plan to publish cohort-level results and methodology summaries so customers and investors can see the impact with the same clarity we do internally.
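The attribution rule, credit AI only when the control stays flat and the AI cohort improves, is essentially a difference-in-differences check with a flatness guard. A minimal sketch, with invented numbers and a hypothetical flatness band:

```python
# Sketch of matched-control attribution for AI productivity gains.
# Inputs are output-per-labor-hour before/after for each cohort.

def pct_change(before: float, after: float) -> float:
    return (after - before) / before


def ai_attributed_gain(ai_before: float, ai_after: float,
                       ctrl_before: float, ctrl_after: float,
                       flat_band: float = 0.02) -> float:
    """Difference-in-differences with a flatness check on the control.
    Returns 0.0 when the control moved beyond the band, i.e. the gain
    cannot be cleanly attributed to AI."""
    ctrl_delta = pct_change(ctrl_before, ctrl_after)
    if abs(ctrl_delta) > flat_band:
        return 0.0  # control not flat: don't claim this as an AI win
    return pct_change(ai_before, ai_after) - ctrl_delta


# AI cohort improved 15%, control moved only 1%: attribute ~14%.
print(ai_attributed_gain(10.0, 11.5, 10.0, 10.1))
# Control moved 6%: attribute nothing.
print(ai_attributed_gain(10.0, 11.5, 10.0, 10.6))
```

Keeping headcount out of this calculation, as the answer notes, is what prevents double-counting a reduction as an AI gain.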
What risks could derail this plan—supplier slippage, talent flight, or AI model performance? Rank the top three, describe your early-warning indicators, and give examples of mitigations that already changed an outcome.
First, supplier slippage on memory is the biggest variable; early warnings are fill rates and lead-time spreads breaking pattern. Second, talent flight during reorgs; we watch time-to-fill on critical roles and engagement signals. Third, AI model performance drifting under new data; we monitor confidence distributions and escalation rates. We’ve already qualified additional suppliers to blunt a spike, deployed retention measures in key pods, and rolled out model monitoring that flagged a triage error pattern we fixed before it hit SLA thresholds.
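"Fill rates and lead-time spreads breaking pattern" can be operationalized with something as simple as a trailing z-score flag. A sketch with invented data, window, and threshold:

```python
# Sketch: flag a supplier lead time that breaks its recent pattern by
# comparing the latest reading to a trailing-window mean and stdev.
from statistics import mean, stdev


def breaks_pattern(lead_times: list[float], window: int = 8, z: float = 2.0) -> bool:
    """True when the newest lead time is more than `z` standard
    deviations from the trailing `window` baseline."""
    if len(lead_times) <= window:
        return False  # not enough history to call a pattern break
    baseline = lead_times[-window - 1:-1]
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and abs(lead_times[-1] - mu) > z * sigma


# Eight stable weeks around 15 days, then a jump to 24: flagged.
print(breaks_pattern([14, 15, 14, 16, 15, 14, 15, 16, 24]))
```

A detector this crude is deliberately cheap: its job is to open a human conversation with the supplier early, not to make the call itself.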
Do you have any advice for our readers?
Anchor your plans in data you can see and touch—inventory coverage, SLA trends, and component sensitivity—then negotiate support and pricing with those facts at the table. Ask vendors to show you their transition playbooks and the KPIs they are willing to be measured on in the next two quarters. Build optionality into your configurations, especially around memory, and stage refreshes to avoid cost cliffs in the back half of fiscal 2026. Most of all, treat AI as a process redesign tool, not a slogan—if the workflow hasn’t changed, the outcome won’t either.
