In India, Caste, Not Race, Is the Primary Social Reality, Yet Western AI Remains Blind to It: Microsoft’s Kalika Bali

Western AI models are built around race, but in India, caste shapes lived reality, and ignoring that risks embedding invisible bias, Microsoft’s Kalika Bali said at India AI Summit 2026

Microsoft’s Kalika Bali

At India AI Summit 2026, Microsoft researcher Kalika Bali argued that Western AI largely fails to recognise India’s caste dynamics because the models are trained on datasets shaped by different social realities.


She said that in India, caste remains the dominant axis of inequality, and this blind spot can lead to biased or irrelevant AI outcomes.

Highlighting how inclusivity and safety standards vary across regions, Kalika Bali, Principal Researcher at Microsoft, cited India as an example where caste remains a central social reality, and one where many Western-developed models still fall short.

“We stopped treating ‘inclusion’ as something that can just be translated word-for-word, because we realized it doesn’t mean the same thing everywhere. The same applies to safety. For example, as an Indian, race is not my primary social concern. Caste is. Yet most Western-developed models have little understanding of caste dynamics,” she said.

She emphasized that such sensitive social structures must be carefully understood and responsibly integrated into AI systems.


She also highlighted how the ‘Global South versus Global North’ framing often creates artificial distinctions, implying that societies in the Global South possess ‘special’ or exceptional characteristics that those in the Global North do not, an assumption that can further distort how technology is designed and deployed.

“So the mindset I would like to challenge is the idea that this is primarily a ‘Global South problem.’ It’s everybody’s problem. There is nothing that needs to be done only for the Global South. Whatever we build has to work for the entire world,” Bali added.

Bali said the world needs to think in terms of universal protocols, robust evaluation methods, and scalable benchmarks that can work across cultures, and that doing so requires paying attention to diversity of perspective.

“Most of the people building these models come from relatively similar, homogeneous backgrounds. I am making an assumption here, but often their norms are treated as universal. They are not. These systems are tuned to specific cultural values, and they need to be more respectful, inclusive, and adaptive to the full range of global norms,” she said.

Bali warned that the dominance of AI models developed by powerful countries and imposed on weaker ones could replicate patterns of digital colonialism, reinforcing global inequalities and creating a new form of technological dependence.

“The danger is that if we design AI primarily for powerful regions and then impose it on the rest of the world, we are creating a new form of colonialism. If you build something in one context and export it everywhere else without adaptation, that is, in effect, colonialism,” she said.

Bali was speaking at a session titled ‘Empowering Communities in the Age of Advanced AI: Inclusion and Safety for Sustainable Development,’ which framed AI safety as a cornerstone of sustainable growth in the Global South, emphasizing the need for technologies that respect local contexts and actively work to reduce inequality.
