Localisation is another crucial factor. Many AI models are trained mainly on English or Chinese data and perform poorly in regions with different languages or cultural contexts. This is especially true in Southeast Asia, and it applies to parts of India as well. In response, we are seeing the rise of "sovereign AI" models designed to reflect local realities. One example is SEA-LION, a large language model created for Southeast Asia in a partnership between AI Singapore and Thoughtworks. It is trained on the region's 11 official languages and better understands local use cases. Such models reduce bias and make AI more useful in the markets they serve.
Building trust in AI systems is critical. According to the 2025 Edelman Trust Barometer, an annual global trust survey conducted by the Edelman Trust Institute, a significant trust gap remains. When AI agents are allowed to take action within systems, we need assurance that they are safe, fair and reliable. This means testing them with real-world data, using diverse training inputs, applying fairness checks and following clear ethical practices. Transparency builds trust, and without trust, adoption will always lag.