IndiaAI Mission: Are GPUs the Backbone of Native LLM Development?

GPUs are specialised processors originally built to render high-resolution visuals in video games and now central to AI development

Explained: Why GPUs Are Critical to India's AI Mission

The Indian technology ecosystem was recently caught up in a debate over whether India should invest in building its own foundational large language models (LLMs) or instead build applications on top of existing AI (artificial intelligence) models. While many founders and tech executives are against the idea of a homegrown LLM, others want one built in India to address security concerns, reduce dependence on foreign technology and serve vernacular-language needs.

Building an LLM from scratch requires a combination of large datasets, compute power (GPUs), capital, and infrastructure.

High compute power is an important aspect of LLM development, since any strong LLM is trained on large datasets and must learn a very large number of parameters. Processing such vast amounts of data effectively requires the kind of compute power that GPUs (Graphics Processing Units) provide.

GPUs, which were originally developed to render visuals, have become fundamental to building and running these models. They are well suited to the complex calculations and fast data processing that LLMs require.

What Are GPUs?

GPUs are specialised processors originally used to render high-resolution visuals in video games and now widely used in AI development. Unlike traditional CPUs (central processing units), which work through tasks sequentially with a few powerful cores, GPUs are built with thousands of smaller cores that work concurrently. This parallel architecture makes them extremely effective at the complex mathematical computations needed in a variety of applications, including AI.
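
To illustrate that parallelism, here is a minimal sketch (using the open-source PyTorch library, an assumed choice rather than anything specified in this article) that times the same large matrix multiplication on a CPU and on a GPU; the exact speed-up varies with the hardware, and the GPU path only runs if a CUDA-capable card is present.

```python
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    """Multiply two random n x n matrices on the given device and return the seconds taken."""
    a = torch.rand(n, n, device=device)
    b = torch.rand(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()      # make sure the data has reached the GPU before timing
    start = time.perf_counter()
    _ = a @ b                         # one large matrix multiplication
    if device == "cuda":
        torch.cuda.synchronize()      # wait for the GPU to finish its queued work
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():         # the GPU comparison assumes a CUDA-capable card
    print(f"GPU: {time_matmul('cuda'):.3f} s")
```

On most machines the GPU figure comes out far lower, simply because thousands of cores share work that the CPU's few cores must take in turn.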

GPUs are vital in AI development because they accelerate processes such as neural network training and data processing. Training an AI model involves performing massive amounts of matrix multiplication and other mathematical operations on vast datasets. GPUs can handle many of these computations at the same time, which significantly reduces the time required to train complex models.
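
As a concrete, hedged example, the sketch below runs a single training step of a tiny toy network in PyTorch; the layer sizes and random data are invented purely for illustration, but the forward pass, loss and backward pass it performs boil down to exactly the matrix operations described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Pick the GPU if one is available; otherwise the same code runs (more slowly) on the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

# A toy two-layer network: each nn.Linear layer is essentially a matrix multiplication.
model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# An invented batch of 256 random "examples" with integer labels, just for illustration.
inputs = torch.rand(256, 1024, device=device)
labels = torch.randint(0, 10, (256,), device=device)

# One training step: forward pass, loss, backward pass (gradients), parameter update.
outputs = model(inputs)               # matrix multiplications on the chosen device
loss = F.cross_entropy(outputs, labels)
loss.backward()                       # more matrix maths, this time to compute gradients
optimizer.step()
optimizer.zero_grad()
print(f"one training step completed on {device}, loss = {loss.item():.3f}")
```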

GPUs also have high memory bandwidth, allowing them to process large amounts of data quickly. This capability is essential for AI models, which rely on massive datasets to improve their accuracy and performance.
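
A rough way to see that bandwidth at work is to time how long a sizeable block of data takes to move into GPU memory. The sketch below does this with roughly 1 GB of random numbers in PyTorch; the figure it prints is only an approximate host-to-GPU transfer rate, not the far higher bandwidth of the GPU's own onboard memory.

```python
import time
import torch

if torch.cuda.is_available():                 # assumes a CUDA-capable GPU is installed
    data = torch.rand(256_000_000)            # ~1 GB of 32-bit floats in system RAM
    start = time.perf_counter()
    data = data.to("cuda")                    # copy the tensor from system RAM to GPU memory
    torch.cuda.synchronize()                  # wait until the copy has actually finished
    elapsed = time.perf_counter() - start
    gigabytes = data.numel() * data.element_size() / 1e9
    print(f"moved {gigabytes:.2f} GB in {elapsed:.3f} s "
          f"(~{gigabytes / elapsed:.1f} GB/s host-to-GPU)")
```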

IndiaAI Mission and GPUs

The Indian government is expanding its AI infrastructure through the IndiaAI Mission. Union Minister Ashwini Vaishnaw recently announced that around 19,000 GPUs have been sourced from various data centres, including high-performance units. He added that the government aims to support the development of six to eight large language models (LLMs) to meet India’s linguistic and economic needs.

The government is also working to develop a domestic GPU by adapting open-source or licensed chip technologies. Vaishnaw said this approach could enable India to produce its own GPU within three to five years.

NVIDIA & AMD: GPU Giants

NVIDIA and AMD (Advanced Micro Devices) are two leading producers of GPUs globally.

NVIDIA is the market leader in AI-focused GPUs, with its data-centre line (formerly branded Tesla, now represented by the A100 and H100) and RTX series optimised for machine learning and deep learning tasks. The company's CUDA platform and related technologies, such as cuDNN and TensorRT, set industry standards, and its A100 and H100 GPUs are built specifically for AI training and inference.
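
Frameworks such as PyTorch and TensorFlow sit on top of this CUDA stack, so most developers never program the GPU directly. As a small, hedged illustration, the snippet below uses PyTorch to report which NVIDIA GPU and which CUDA and cuDNN versions a given machine exposes; the exact output naturally differs from system to system.

```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)          # details of the first visible GPU
    print("GPU:", props.name)
    print("Memory:", round(props.total_memory / 1e9, 1), "GB")
    print("Streaming multiprocessors:", props.multi_processor_count)
    print("CUDA version used by PyTorch:", torch.version.cuda)
    print("cuDNN version:", torch.backends.cudnn.version())
else:
    print("No CUDA-capable GPU detected")
```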

AMD is a formidable rival with its Instinct series, which includes the MI200 and MI300 accelerators; it is gaining ground but still trails NVIDIA in market share and ecosystem support. Intel, a relative newcomer with its Arc series and AI accelerators such as the Habana Gaudi line and the Ponte Vecchio data-centre GPU, is still trying to catch up with NVIDIA's established position in the AI GPU industry.
