So, I think that is where it has required us to explain the whole story and the whole stack. I must have spent countless meetings doing exactly that. See, most people are watching this industry while sitting on the fence.
Most of the time you are only getting exposed to the use cases of AI. But then you start explaining the raw, brass-tacks stack that sits behind this whole thing. And see, I correlate that to the dot-com time.
You know, when the dot-com boom of the late 90s and early 2000s went bust, everybody thought everything was doomed, everything was down, right? But what people didn't realize was that only the use case had failed: you created a website with some HTML and thought that was the internet economy. It was not. People came to realize there had to be a strong brick-and-mortar economy behind it. Those particular use cases may have gone, but the infrastructure that got built to support the internet at that time, the data centers and the networks, is the same infrastructure that got used to support the internet of today.
The undersea fiber-optic cables and the data centers. I say the same thing about AI in today's environment. The use cases of AI may come and go. People may say, oh, it's hype, will AI actually do this, will people adopt it, and all that. Some use cases will become very successful. Some use cases will die.
But whatever the case, use cases will keep evolving. Fundamentally, it is a fact that if you have to go through the training life cycle of models, whether small, medium, or large, and later put those models into inference for billions of people, you will require GPUs in both cycles. So the infrastructure you build, the GPU stack, the software platform stack on top, and the underlying data center stack, is here to stay.