
India should build its own AI models and data centres, says Perplexity’s Aravind Srinivas

He noted that companies capable of delivering high-performance AI training GPUs at competitive prices could still carve out a strong market presence in India.

Aravind Srinivas, CEO of Perplexity AI

India should establish its own AI models and data centers to maintain data sovereignty and accelerate technological growth, according to Aravind Srinivas, CEO of Perplexity AI. Speaking on Nikhil Kamath's WTF Podcast, Srinivas emphasized the critical need for India to develop its own computing infrastructure rather than relying on external hyperscalers.


“India should have its own data centers because there’s no reason not to,” said Srinivas, adding that the country generates a significant share of global data thanks to its massive smartphone user base. The conversation touched upon the booming data center business in India, which has become a focal point for real estate developers looking to capitalize on the growing demand for AI training infrastructure.


He also noted that while the current data center ecosystem is expanding, the business remains largely commoditized. “At the end of the day, it's almost like a real estate company starting a warehouse,” he said, pointing out that only firms with strong vertical integration or unique technological advantages could stand out.


One of the concerns raised during the discussion was the possibility of major cloud providers, or hyperscalers, choosing to build their own data centers instead of relying on third-party vendors. Srinivas acknowledged that hyperscalers often construct their own infrastructure unless faced with constraints such as tight timelines or regulatory requirements.

However, he noted that companies capable of delivering high-performance AI training GPUs at competitive prices could still carve out a strong market presence in India.


The conversation also delved into Nvidia’s dominance in the AI chip market. Srinivas explained that Nvidia’s success stems from its general-purpose chip architecture, interconnect technology, and proprietary CUDA software stack, which together have created a significant competitive moat.


“CUDA is a big advantage which developers are already trained to use, and Nvidia keeps a lot of it closed-source, making it difficult to replicate,” he said.


Although competitors like Cerebras and Groq are making strides in inference processing, Nvidia continues to hold its market leadership by consistently advancing its chip technology. The upcoming Blackwell chips, set to replace the H100 series, are expected to further strengthen Nvidia’s position.


While the discussion centered on the role of data centers and chipmakers, Srinivas also highlighted Google as the only major player to have successfully built an independent AI computing ecosystem. “They have their own chips, their own software stack, and their own data centers completely independent of Nvidia,” he noted.


Looking ahead, Srinivas stressed that India’s role in AI infrastructure will be crucial. If regulations mandate that data be stored within the country, even global AI companies may have to use Indian data centers.
