
Tensor Processing Units

Machine learning is becoming more important and relevant every day. Traditional microprocessors cannot handle these workloads effectively, whether for training or for neural network inference. The parallel architecture of GPUs, designed for rapid graphics processing, proved more efficient than CPUs but was still somewhat limited. To address this problem, Google created an AI accelerator integrated circuit for use with its TensorFlow framework: the TPU (Tensor Processing Unit).

What is the TensorFlow Framework?

TensorFlow is an open-source library created by Google, originally for internal use. It is used primarily for dataflow programming and machine learning. TensorFlow computations are expressed as stateful dataflow graphs. The framework is named after the multidimensional arrays of data that these neural networks operate on, which are called "tensors." TensorFlow runs on Linux distributions, Windows, and macOS.
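A minimal sketch of this idea, assuming TensorFlow 2.x is installed as `tensorflow`: operations on tensors are traced into a dataflow graph.

```python
import tensorflow as tf

# Tensors are multidimensional arrays; here, two 2x2 matrices.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

# @tf.function traces the Python code into a dataflow graph,
# which TensorFlow can then optimize and execute.
@tf.function
def multiply_and_add(x, y):
    return tf.matmul(x, y) + y

print(multiply_and_add(a, b))  # prints a 2x2 result tensor
```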

TPU Architecture

The following diagram shows the physical architecture of units in a TPU.


A TPU includes the following computational resources:

  • Matrix Multiplier Unit (MXU): contains 65,536 8-bit multiply-and-add units for matrix operations (see the conceptual sketch after this list).
  • Unified Buffer (UB): 24 MB of SRAM that works as registers.
  • Activation Unit (AU): hardwired activation functions.
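As a rough illustration of what the MXU does in hardware, here is a NumPy sketch of the multiply-and-accumulate pattern behind a matrix product. The real MXU is a systolic array of 8-bit units; the loop below is only a conceptual model, not how the hardware is actually wired.

```python
import numpy as np

def mxu_matmul_sketch(data, weights):
    """Conceptual model of the MXU: each output cell is built up
    from repeated multiply-and-add (accumulate) operations."""
    n, k = data.shape
    k2, m = weights.shape
    assert k == k2
    acc = np.zeros((n, m), dtype=np.int32)  # wide accumulators hold partial sums
    for i in range(n):
        for j in range(m):
            for p in range(k):
                # 8-bit multiply, accumulated into a wider register
                acc[i, j] += int(data[i, p]) * int(weights[p, j])
    return acc

# 8-bit integer inputs, as on the original TPU
x = np.random.randint(0, 127, size=(4, 4), dtype=np.int8)
w = np.random.randint(0, 127, size=(4, 4), dtype=np.int8)
assert np.array_equal(mxu_matmul_sketch(x, w),
                      x.astype(np.int32) @ w.astype(np.int32))
```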

Five major high-level instructions manage the operation of these resources. They are summarized in the table below, followed by a schematic sketch of how they fit together.

TPU Instruction             Function
Read_Host_Memory            Reads data from host memory into the Unified Buffer
Read_Weights                Reads weights from memory into the MXU
MatrixMultiply/Convolve     Multiplies or convolves the data with the weights and accumulates the results
Activate                    Applies activation functions
Write_Host_Memory           Writes results back to host memory
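The flow of these instructions can be pictured with a schematic Python sketch. The function name and structure below mirror the instruction names for readability; they are purely illustrative, not a real TPU API.

```python
import numpy as np

def run_tpu_program(host_memory, weight_memory):
    """Illustrative model of one pass through the five TPU instructions."""
    # Read_Host_Memory: load input data into the Unified Buffer.
    unified_buffer = np.array(host_memory, dtype=np.float32)

    # Read_Weights: load weights into the MXU.
    weights = np.array(weight_memory, dtype=np.float32)

    # MatrixMultiply: multiply the data with the weights, accumulating results.
    accumulators = unified_buffer @ weights

    # Activate: apply a hardwired activation function (ReLU here).
    activations = np.maximum(accumulators, 0.0)

    # Write_Host_Memory: return the result to host memory.
    return activations

result = run_tpu_program([[1.0, -2.0]], [[0.5, -1.0], [2.0, 3.0]])
print(result)  # matmul gives [[-3.5, -7.0]]; after ReLU: [[0., 0.]]
```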

Here is the application stack that Google uses to run TensorFlow on TPUs.


The Advantages of TPUs

These are just a few of the many benefits that TPUs offer:

  1. TPUs accelerate linear algebra computations, which are used extensively in machine learning applications.
  2. They reduce the time-to-accuracy when training large, complex neural network models.
  3. Models that previously took weeks to train on other platforms can converge in hours on TPUs.
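On Cloud TPU, for example, a TensorFlow 2.x program typically detects and initializes the accelerator with the pattern below. The `tpu=""` argument is the convention in some hosted environments; elsewhere you would pass the TPU's name or address, so treat it as a placeholder.

```python
import tensorflow as tf

# Locate the TPU cluster; the argument depends on your environment.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# TPUStrategy replicates the model across the TPU cores.
strategy = tf.distribute.TPUStrategy(resolver)
print("Number of TPU cores:", strategy.num_replicas_in_sync)
```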

When should we use a TPU?

TPUs are best suited for the following cases in machine learning (a training sketch follows the list):

  • Models dominated by matrix computations.
  • Models that do not use custom TensorFlow operations within the main training loop.
  • Models that train for weeks or even months.
  • Very large models with very large effective batch sizes.
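For a matrix-computation-heavy model of this kind, training might look like the following minimal Keras example. The layer sizes and random data are placeholders, and the `get_strategy()` fallback lets the sketch also run without a TPU; on real hardware you would use the TPUStrategy from the earlier initialization sketch.

```python
import tensorflow as tf

# Replace with the TPUStrategy from the initialization sketch above.
strategy = tf.distribute.get_strategy()

with strategy.scope():
    # Dense layers are dominated by matrix multiplications,
    # exactly the workload the MXU is built for.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

# Large batch sizes keep the TPU cores busy.
x = tf.random.normal((1024, 784))
y = tf.random.uniform((1024,), maxval=10, dtype=tf.int32)
model.fit(x, y, batch_size=256, epochs=2)
```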
