Last Thursday, Korean chipmaker SK Hynix made history by developing a 10-nanometer (nm)-class process for 16 Gb Double Data Rate 5 (DDR5) chips, the latest generation of dynamic random-access memory (DRAM). By enabling greater memory capacity and faster processing speeds for the central processing units (CPUs) used in cloud computation, the development holds significant implications for computing power.
In 1965, Gordon Moore, the co-founder of Intel and a pioneer of computer development, made the observation now known as "Moore's Law": that the number of transistors on an integrated circuit would double every year for at least a decade, a prognostication he revised to every two years in 1975. Today, new innovations in artificial intelligence (AI) and cloud computing are causing demand for computational power to far outpace the capabilities of current hardware. Notably, the computational power used for AI models grew by 1 million percent between 2012 and 2022.
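Moore's Law describes simple exponential growth, which a short sketch can make concrete. The baseline figures below are illustrative assumptions, not numbers from the article:

```python
def projected_transistors(base_count: int, base_year: int, year: int,
                          doubling_period: float = 2.0) -> float:
    """Project a transistor count assuming it doubles every
    `doubling_period` years (2.0 per Moore's 1975 revision)."""
    return base_count * 2 ** ((year - base_year) / doubling_period)

# Hypothetical example: a chip with 10,000 transistors in 1975,
# projected 20 years forward. 20 years at a 2-year doubling period
# is 10 doublings, i.e. a 1,024x increase.
print(projected_transistors(10_000, 1975, 1995))  # 10240000.0
```

The same function shows why demand growing by orders of magnitude in a decade strains hardware that merely doubles every two years.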
For this reason, compute has often been called the "new oil," or even the "currency of the future." To meet exponentially growing demand for processing power from AI and other applications, a higher density of transistors is one of the most critical improvements as memory and logic chips scale to handle larger and more complex computations. Reaching new levels of computational power requires semiconductor designs either to shrink process nodes, fitting more transistors on each chip, or to redesign the overall chip architecture around different materials and hardware integration.
DRAM, which temporarily stores data and program instructions for a computer's CPU, allows near-instant access to data held on capacitors controlled by transistors. The more transistors that can fit on a chip, and the more material innovations enhance its capacitors, the greater a DRAM chip's memory capacity. This, in turn, supports higher processing speeds for the CPU, and thus greater computational power. Because smaller transistors require less voltage to switch on and off, a relationship known as Dennard scaling, the chip also becomes more energy efficient. This is critically important as the power consumption of data centers grows exponentially.
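The energy-efficiency argument can be made concrete with the standard dynamic-power approximation P = C * V^2 * f (a textbook relation, not a figure from the article; the capacitance, voltage, and frequency values below are illustrative assumptions):

```python
def dynamic_power(capacitance_f: float, voltage_v: float,
                  frequency_hz: float) -> float:
    """Approximate dynamic switching power: P = C * V^2 * f."""
    return capacitance_f * voltage_v ** 2 * frequency_hz

# Hypothetical chip: 1 nF switched capacitance at 2 GHz.
# Halving the supply voltage (1.2 V -> 0.6 V) at the same
# capacitance and frequency cuts dynamic power to one quarter,
# since power scales with the square of the voltage.
p_full = dynamic_power(1e-9, 1.2, 2e9)
p_half = dynamic_power(1e-9, 0.6, 2e9)
print(p_half / p_full)  # 0.25
```

This quadratic dependence on voltage is why even modest reductions in operating voltage translate into significant savings at data-center scale.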
Some industry analysts believed the transistor density of DRAM had hit a wall. Suppliers worked to push it beyond the 18-15nm processes, but NAND and DRAM memory built on leading-edge nodes ran into scaling problems just as AI and cloud computing created demands the technology struggled to meet.
With the latest innovation scaling DRAM to a 10nm-class process, SK Hynix gets beyond that wall by adopting new materials in extreme ultraviolet (EUV) lithography, the cutting-edge technique for semiconductor production, reportedly enhancing chip productivity by more than 30 percent. Set to be deployed in data centers beginning in 2025, the chips are also expected to reduce electricity costs by 30 percent, a milestone for data center power needs and for innovation in AI. This development can take global computing power to a whole new level.
Tom Ramage is an Economic Policy Analyst at the Korea Economic Institute of America. The views expressed here are the author’s alone.
Photo from Shutterstock.
KEI is registered under the FARA as an agent of the Korea Institute for International Economic Policy, a public corporation established by the government of the Republic of Korea. Additional information is available at the Department of Justice, Washington, D.C.