Chinese scientists have developed a new Tensor Processing Unit (TPU) – a special type of computer chip that utilizes carbon nanotubes instead of traditional silicon semiconductors. They claim that this new chip could pave the way for more energy-efficient artificial intelligence (AI).
AI models require extensive data and a significant amount of computing power to operate. This poses a considerable challenge for training and scaling machine learning models, especially as the demand for AI applications continues to rise. This is why scientists are researching new components, from processors to computer memory, designed to consume less energy while performing necessary calculations.
Unlike conventional TPUs, this computer chip is the first to use carbon nanotubes – tiny cylindrical structures of carbon atoms arranged in a hexagonal pattern – in place of traditional semiconductor materials such as silicon. (Photo: Sankai).
Google scientists created the TPU in 2015 to address this challenge. These specialized chips function as dedicated hardware accelerators for tensor operations, complex mathematical calculations used to train and run AI models. By offloading these tasks from the central processing unit (CPU) and graphics processing unit (GPU), TPUs enable AI models to be trained more quickly and efficiently.
However, unlike conventional TPUs, this new chip is the first to use carbon nanotubes – small cylindrical structures formed from carbon atoms arranged in a hexagonal pattern – in place of traditional semiconductor materials such as silicon. This structure allows electrons (charged particles) to flow through the nanotubes with minimal resistance, making them excellent conductors.
According to the Chinese scientists, their TPU consumes only 295 microwatts (μW) of power (1 W equals 1,000,000 μW) and can deliver one trillion operations per second per watt (1 TOPS/W), a standard measure of energy efficiency. By this measure, they say, China's carbon-based TPU is nearly 1,700 times more energy-efficient than Google's chip.
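The reported figures imply a fixed energy cost per operation. A minimal back-of-the-envelope check, using only the two numbers quoted above (295 μW and 1 TOPS/W; the derived throughput is an inference, not a figure from the paper):

```python
# Figures reported in the article
power_w = 295e-6          # 295 microwatts, expressed in watts
ops_per_sec_per_watt = 1e12  # 1 TOPS/W: one trillion operations per second per watt

# Energy efficiency can be restated as energy per operation:
energy_per_op_joules = 1 / ops_per_sec_per_watt
print(energy_per_op_joules)  # 1e-12 J, i.e. about 1 picojoule per operation

# At that efficiency, the quoted power budget implies a throughput of:
throughput_ops_per_sec = power_w * ops_per_sec_per_watt
print(throughput_ops_per_sec)  # 2.95e8, roughly 295 million operations per second
```

This is why the power figure and the efficiency figure are reported together: neither alone determines how much work the chip can do.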
“From ChatGPT to Sora, artificial intelligence is ushering in a new revolution, but traditional silicon-based semiconductor technology is increasingly unable to meet the demand for processing massive amounts of data. We have found a solution to this global challenge,” said Zhiyong Zhang, co-author of the paper and an electronics professor at Peking University.
The new TPU consists of 3,000 carbon nanotube transistors and is built on a systolic array architecture – a network of processing elements arranged in a grid. This allows the TPU to perform many calculations simultaneously: data flows through the grid in coordinated waves, with each processing element executing a small part of the overall task at the same time.
This parallel processing enables much faster calculations, which is crucial for AI models that handle large volumes of data. It also reduces how often the chip must read from and write to memory – specifically, a type called static random-access memory (SRAM) – Zhang noted. By minimizing these memory operations, the new TPU can perform calculations much faster while using significantly less energy.
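The systolic-array idea can be illustrated with a toy simulation. The sketch below is not the chip's actual design; it models an output-stationary grid where each cell accumulates one result locally (avoiding repeated memory writes) while operands stream past, one pair per cycle:

```python
import numpy as np

def systolic_matmul(A, B):
    """Simulate an output-stationary systolic array computing A @ B.

    Each grid cell (i, j) holds a running sum for one output element.
    On cycle t, operands A[i, t] and B[t, j] 'arrive' at the cell,
    which multiplies and accumulates them locally - no intermediate
    results are written back to shared memory until the end.
    """
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    for t in range(k):          # one wavefront of data per cycle
        for i in range(n):      # in hardware, all cells update
            for j in range(m):  # in parallel within a cycle
                C[i, j] += A[i, t] * B[t, j]
    return C

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])
print(systolic_matmul(A, B))  # same result as A @ B
```

In software the two inner loops run sequentially, but on a systolic array every cell updates at once each cycle, which is where the speed and the reduced memory traffic come from.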
The researchers stated that similar carbon nanotube-based technology could eventually provide a more energy-efficient alternative to silicon-based chips. They plan to continue refining the chip to improve its performance and scalability, including exploring ways to integrate the TPU alongside silicon CPUs.