In part one of this series, How AI Broke the Memory Market, we looked at how AI data center demand turned memory into a bottleneck and why DRAM and NAND prices are unlikely to normalize quickly. Now we’ll explore how to operate in this environment. If you’re designing or sourcing hardware in 2026, you still need to make choices: which parts to spec, how to structure your designs for flexibility, and how to manage supply chain risk.
We'll cover “next-wave” memory components that are in the pipeline, then move on to some workhorse DRAM and flash components. From there, we'll lay out practical playbooks for both engineering and procurement.
For a broad exploration of memory components, Octopart's category pages for memory ICs and flash memory are good starting points for searching across manufacturers, packages, and availability.
Designed for on-device AI, automotive, and next-gen mobile and PC platforms, Samsung’s LPDDR6 delivers meaningful efficiency gains over LPDDR5X, an expanded I/O architecture, and initial speeds of up to 10.7 Gbps, with the LPDDR6 standard designed to scale further as the ecosystem matures. You won’t see LPDDR6 on distributors’ shelves yet, but if you design around leading SoCs or flagship devices, you should expect to encounter it.
At the top of the stack, SK Hynix's 16-high, 48 GB HBM4 stacks promise more than 2 TB/s of bandwidth, with mass production targeted around Q3 2026. Samsung is taking a different approach, using 4 nm logic and 1c DRAM to improve thermal performance. Engineers working on AI hardware won't typically source these from catalog distributors, but HBM4 matters to everyone because it's absorbing a large share of advanced DRAM capacity, which is one reason conventional DRAM remains tight.
With over 400 layers and a 5.6 GT/s interface, Samsung’s 10th-generation V-NAND targets PCIe 5.0 and future PCIe 6.0 SSDs for data-center and AI-class workloads. Expect high-density TLC based on this silicon to underpin many enterprise and high-end client drives over the next several years.
Kioxia's 332-layer BiCS10 NAND, with its Toggle DDR 6.0 interface, delivers 4.8 Gb/s per pin, targeting AI and hyperscale storage. According to EE Times, Kioxia has said its entire 2026 NAND output is already sold into AI-related applications, and it pulled its BiCS10 ramp forward from 2H 2027 into 2026 to meet demand.
These parts were available to order from major distributors in early March 2026. Availability is shifting quickly, so verify stock and lifecycle status on Octopart before you lock a BOM.
Against this backdrop, there are still plenty of actions hardware engineers can take to make designs more resilient.
The situation demands your attention. In late February 2026, Lenovo warned channel partners to place orders before the end of the month to beat March price hikes, while TrendForce projected blended PC DRAM (DDR4/DDR5) would rise 105–110% quarter-over-quarter in Q1 alone. The playbook below reflects this new reality.
In the first part of this series, we covered the why behind the memory crunch. Here, we’ve explored the what now. The answer is the same whether you're an engineer or on the procurement side: flexibility is the best hedge. Design for substitution, qualify broadly, and use tools like Octopart to keep your options visible and up to date. The teams that come through this cycle in the best shape will be those that build optionality into their designs and supply chains early and keep adapting as supply and pricing evolve.
The current shortage is driven by wafer allocation, not technology limits. Memory vendors are prioritizing high-margin AI demand, especially HBM and data-center DRAM, under multi-year contracts. Because HBM consumes significantly more wafer capacity per bit than conventional DRAM, less capacity remains for DDR5, LPDDR, and NAND, keeping availability tight.
LPDDR6 and HBM4 signal where platforms are headed, but most 2026 products will ship on DDR5, LPDDR5X, and mature NAND that’s available now. Engineers should design with forward compatibility in mind while selecting parts that can be sourced reliably during production, rather than betting on parts that aren’t yet in distribution.
Resilient designs focus on flexibility and substitution. This includes standardizing on mainstream interfaces, qualifying multiple densities and vendors, avoiding hard-coded memory assumptions in firmware, and using sockets or modules where possible. Supporting down-binned memory options ensures products can still ship when higher-capacity parts are constrained.
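To make the "avoid hard-coded memory assumptions" point concrete, here is a minimal sketch of the idea in Python pseudologic (real firmware would do this in C against the memory controller, and all densities and ratios here are hypothetical): the image derives buffer sizes from whatever DRAM it detects at boot, and down-bins to a smaller qualified profile instead of refusing to run.

```python
# Illustrative sketch only: one firmware image that adapts to the DRAM
# density actually fitted, instead of a compile-time constant.
# Densities and buffer ratios below are hypothetical examples.

SUPPORTED_DENSITIES_GB = {4, 8, 16}  # densities qualified on the AVL

def plan_buffers(detected_gb: int) -> dict:
    """Derive working-buffer sizes from the detected DRAM capacity."""
    if detected_gb not in SUPPORTED_DENSITIES_GB:
        # Down-bin gracefully: fall back to the largest qualified
        # density at or below what was detected.
        usable = max((d for d in SUPPORTED_DENSITIES_GB if d <= detected_gb),
                     default=None)
        if usable is None:
            raise RuntimeError(f"{detected_gb} GB is below the minimum qualified density")
        detected_gb = usable
    return {
        "dma_pool_mb": detected_gb * 1024 // 8,  # 1/8 of DRAM for DMA
        "cache_mb": detected_gb * 1024 // 4,     # 1/4 of DRAM for caching
    }

print(plan_buffers(8))  # 8 GB variant uses the full 8 GB profile
print(plan_buffers(6))  # odd density down-bins to the 4 GB profile
```

The same approach means a mid-cycle substitution from, say, a 16 GB part to an 8 GB part is a BOM change rather than a firmware respin.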
Procurement should treat memory as a strategic resource, not a commodity. Best practices include locking long-term allocations for critical SKUs, building AVLs around families rather than single parts, monitoring lifecycle and alternates with tools like Octopart, and selectively holding inventory for long-lifecycle products to avoid forced redesigns.
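As a sketch of what "AVLs built around families rather than single parts" can look like in practice, the snippet below models each BOM slot as a set of interchangeable approved alternates and reports which ones are currently sourceable. The part numbers and stock figures are invented placeholders; in a real workflow you would refresh the stock snapshot from a distributor-data source such as Octopart rather than hand-maintain it.

```python
# Illustrative sketch: an AVL keyed by BOM slot, each holding a family
# of approved alternates. All MPNs and stock numbers are hypothetical.

AVL = {
    "boot_flash": ["VendorA-F128", "VendorB-F128", "VendorA-F256"],
    "main_dram":  ["VendorC-D8", "VendorD-D8", "VendorC-D16"],
}

# Hypothetical distributor stock snapshot (refresh from live data in practice).
stock = {"VendorA-F128": 0, "VendorB-F128": 12000, "VendorA-F256": 3500,
         "VendorC-D8": 0, "VendorD-D8": 0, "VendorC-D16": 800}

def sourcing_report(avl: dict, stock: dict) -> dict:
    """For each slot, list the approved alternates currently in stock."""
    report = {}
    for slot, approved in avl.items():
        in_stock = [mpn for mpn in approved if stock.get(mpn, 0) > 0]
        report[slot] = in_stock or ["NO APPROVED SOURCE IN STOCK"]
    return report

for slot, options in sourcing_report(AVL, stock).items():
    print(f"{slot}: {options}")
```

A report like this, run against fresh availability data, turns a shortage from a line-down surprise into an early signal that a slot needs a new alternate qualified.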