Editorial staff of the technological innovation website – 11.10.2022
The hardware in the new design is built from the ground up to run artificial intelligence applications.
[Image: Xiwen Liu et al. – 10.1021/acs.nanolett.2c03169]
Computing in memory
A team from the University of Pennsylvania and the Sandia and Brookhaven National Laboratories in the US has designed a computer architecture tailor-made for artificial intelligence.
In today’s computers, memory storage and actual computing take place in different parts of the machine, and data must be moved from memory to the CPU or GPU for processing. And that takes time, which is a problem when you consider the sheer amount of data needed to train machine learning algorithms.
Xiwen Liu and his colleagues then turned to an architecture known as “in-memory computing,” where processing and storage take place in the same location, eliminating transfer time and reducing power consumption.
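To make the idea concrete, here is a minimal Python sketch of the in-memory computing principle (illustrative only, not the authors' design): in an analog crossbar array, the same cells that store the weights also perform the multiply-accumulate, so a matrix-vector product is computed "in place" with no data shuttled between a separate memory and processor. The function name and values are hypothetical.

```python
# Illustrative sketch of in-memory computing (not the paper's device):
# each crossbar cell stores a conductance g[i][j]; applying input
# voltages v[i] to the rows makes column j output the current
# I_j = sum_i v[i] * g[i][j] -- storage and computation in one place.

def crossbar_mac(conductances, voltages):
    """Column currents of a crossbar: I_j = sum_i V_i * G_ij."""
    n_rows = len(conductances)
    n_cols = len(conductances[0])
    return [sum(voltages[i] * conductances[i][j] for i in range(n_rows))
            for j in range(n_cols)]

# Example: a 2x3 weight matrix stored directly in the memory cells.
G = [[0.1, 0.2, 0.3],
     [0.4, 0.5, 0.6]]
V = [1.0, 2.0]
print(crossbar_mac(G, V))  # approximately [0.9, 1.2, 1.5]
```

In a conventional machine, the weights `G` would first be fetched from memory into the processor; here the "fetch" step disappears, which is exactly the transfer time and energy the architecture eliminates.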
There are already several experimental implementations of this type, but what stands out about the new design is that it is completely transistor-free.
“Even when used in in-memory computer architectures, transistors compromise data access times,” explained Professor Deep Jariwala. “They require a lot of wiring in the overall chip circuitry and therefore use more time, space and power than we would like for AI applications. The beauty of our transistorless design is that it’s simple, small and fast, and requires very little power.”
The structure of the component that replaces the transistor functions both as memory and as a computing unit.
[Image: Xiwen Liu et al. – 10.1021/acs.nanolett.0c05051]
Processor without transistors
To get rid of transistors, the team used a new semiconductor, aluminum scandium nitride (AlScN), which enables ferroelectric switching, whose physics is faster and more energy-efficient than that of other memory elements.
Ferroelectricity can be considered an analogue of ferromagnetism. A ferromagnetic material has permanent magnetization; simply put, it is a magnet with a north and a south pole, and this allows it to store data. A ferroelectric material likewise holds a permanent state, and since it also stores electrical charges, the component itself is sufficient for calculations.
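The storage side of this analogy can be sketched as a toy model (illustrative only, not the actual AlScN device physics; the class, threshold, and values are hypothetical): a ferroelectric cell has two stable polarization states, flipped only by a write voltage that exceeds a coercive threshold, so the bit is retained without power.

```python
# Toy model of a ferroelectric memory cell (not the real device):
# two stable polarization states act as the stored bit; sub-threshold
# voltages leave the state untouched (non-volatility).

class FerroelectricCell:
    COERCIVE_V = 1.0  # hypothetical switching threshold (arbitrary units)

    def __init__(self):
        self.polarization = -1  # start in the "down" state (bit 0)

    def write(self, voltage):
        # Polarization flips only if the pulse exceeds the threshold.
        if voltage >= self.COERCIVE_V:
            self.polarization = +1
        elif voltage <= -self.COERCIVE_V:
            self.polarization = -1

    def read_bit(self):
        return 1 if self.polarization > 0 else 0

cell = FerroelectricCell()
cell.write(1.5)    # above threshold: flips to "up"
print(cell.read_bit())   # 1
cell.write(0.4)    # sub-threshold pulse: stored state retained
print(cell.read_bit())   # 1
cell.write(-1.5)   # opposite polarity: flips back
print(cell.read_bit())   # 0
```

The same stored state can then serve as a weight in a crossbar-style computation, which is what lets a single component double as memory and compute element.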
The semiconductor was used to make a component called a ferrodiode or ferroelectric diode, which can switch up to 100 times faster than conventional transistors.
“One of the key attributes of this material is that it can be deposited at temperatures low enough to be compatible with silicon foundries. Most ferroelectric materials require much higher temperatures. The special properties of AlScN mean that the memory components we demonstrate here can go on top of a silicon layer in a vertically heterointegrated array.
“Think about the difference between a multi-story parking lot with capacity for a hundred cars and a hundred individual parking spaces spread over a large area. Which is more space-efficient? The same is true for information and components on a highly miniaturized chip like this one. This efficiency is just as important for resource-constrained applications, like mobile or wearable devices, as it is for energy-intensive applications, such as data centers,” explained Professor Roy Olsson.
“It’s important to realize that all AI computing currently being done is software-enabled on silicon hardware architectures designed decades ago,” Jariwala said. “This is why artificial intelligence as a field has been dominated by computer and software engineers. The fundamental redesign of hardware for AI will be the next big change in semiconductors and microelectronics. The direction we are going now is the collaborative design of hardware and software.”
Article: Reconfigurable in-memory computing on field-programmable ferroelectric diodes
Authors: Xiwen Liu, John Ting, Yunfei He, Merrilyn Mercy Adzo Fiagbenu, Jeffrey Zheng, Dixiong Wang, Jonathan Frost, Pariasadat Musavigharavi, Giovanni Esteves, Kim Kisslinger, Surendra B. Anantharaman, Eric A. Stach, Roy H. Olsson III, Deep Jariwala
Journal: Nano Letters
Vol.: 22, Issue 18, pp. 7690-7698
DOI: 10.1021/acs.nanolett.2c03169