Exclusive | Meta Whistleblower Kelly Stonelake: Big Tech Treats India as a Scale Market, Safety is Secondary

Kelly Stonelake, a former Meta executive turned whistleblower, talks to Nabodita Ganguly about how Big Tech has engineered systems that make social-media platforms addictive and why they should take more accountability

Kelly Stonelake, a former Meta executive turned whistleblower
Q

One argument that Big Tech firms often give is that they are not the ones posting content, so it is unfair to hold them accountable.

A

For years, social-media companies have hidden behind Section 230 of the Communications Decency Act in the US, claiming immunity for whatever users post on their platforms. ‘We don’t create the content,’ they argue, ‘we just host it’. A Bill was introduced this year in the US Senate, proposing to sunset Section 230.

The current bellwether case at trial in Los Angeles is important because the legal strategy does not focus on any single piece of harmful content; it targets the harmful product design itself. For once, Big Tech can’t claim Section 230 immunity.

The issue is not whether a self-harm video gets uploaded; it is that these companies have engineered systems that actively deliver it to vulnerable kids because they know what keeps them hooked. These platforms connect children with adults selling counterfeit fentanyl, rewarding them for accepting friend requests from unknown adults and then showing their location to these predators on a map. They fail to address sextortion schemes that have driven teens to suicide. They amplify dangerous viral challenges.

Social-media apps use an array of design features—from infinite scroll feeds to video autoplay—to create a habit-forming experience. One unsealed internal document even shows an Instagram employee calling the app a ‘drug’, with a colleague joking ‘lol, we’re basically pushers’.

Q

Should we hold Big Tech platforms accountable for the content that they are publishing?

A

There are arguments that repealing Section 230, making platforms responsible for the content posted on them, could lead to a lack of moderation [due to liability] or over-moderation [that looks more like censorship].

While I support sunsetting Section 230 due to the way it’s been abused by tech companies, I’m much more interested in laws with a duty of care standard that require platforms to build with the wellbeing of their users in mind, with consequences for non-compliance that include monetary fines, open courtrooms for those impacted by harmful product design and personal criminal liability for executives.

Q

Many claim developing countries like India are often sidelined by Big Tech firms, even though we have the largest user base.

A

One of the clearest patterns inside large tech companies is that advertising revenue, regulatory pressure and media scrutiny drive prioritisation. The US and Western Europe generate disproportionate revenue per user and disproportionate political risk. So, they receive disproportionate investment in policy, safety, language support and executive attention.

Markets like India are often treated as ‘scale markets’. The focus becomes user acquisition, product penetration and engagement velocity. Safety becomes reactive rather than foundational. If something explodes into a headline or threatens regulatory backlash, resources surge temporarily. Otherwise, the baseline investment may not match the scale of impact. And that has real consequences.

Misinformation, harassment, political manipulation and communal-violence amplification often move fastest in markets with massive scale and uneven safeguards. This is why representation matters at technology firms, why the people building the products should reflect the diverse audiences they serve.

Q

Big Tech firms focus extensively on ads. What about media reports alleging that they encourage fake ads?

A

Illicit advertising, including fraudulent ads and [those] for illicit drugs, persists at scale because the systems that optimise for revenue and engagement do not treat illegality or harm to users as disqualifying in practice. It’s the same pattern we see in Meta’s approach to keeping kids safe online: doing the right thing would cost too much profit.

A system can be highly competent at detection and still produce outcomes that are socially unacceptable if the institution sets decision thresholds and enforcement constraints primarily to protect revenue and engagement. If a company internally expects material revenue from prohibited categories, the question becomes less about ‘can it catch everything?’ and more about ‘what level of illicit activity is deemed tolerable, and why?’

When enforcement is bounded by financial guardrails and when risk is managed through tactics like charging suspicious advertisers more, illicit activity begins to function as a priced-in segment rather than a violation to be eliminated. Drug ads do not need to be ubiquitous to be dangerous; they only need to reach the right people repeatedly.

The mechanics of ad personalisation matter too. Even if a platform removes ads once flagged, the underlying targeting and optimisation systems can keep reintroducing similar content if the system’s dominant objective is performance and if there is a lag in detection and enforcement.

Meta’s recent hiring choices are discomforting in this context. CNBC reports that Meta appointed Dina Powell McCormick as president and vice-chair, and hired Curtis Joseph Mahoney as chief legal officer, both of whom held senior roles in the [Donald] Trump administration.

Large firms often hire leaders with government experience for legitimate reasons: policy navigation, global operations, diplomacy and regulatory compliance. Still, context and timing matter.  

When a company faces intensifying scrutiny over fraud, illegal advertising and child safety, bringing in figures with deep political networks and institutional knowledge can reasonably be read as capacity-building for contested regulation. 

Accountability would need to take these incentives and motivations into consideration. A few measures would help:

Advertiser verification and traceability: Robust identity verification, meaningful limits on shell entities and preserved audit trails for ad purchasers. 

Liability aligned with scale: Penalties that exceed the profit generated by illicit advertising, rather than predictable fines that can be absorbed. 

Independent auditing of enforcement: Third-party evaluation of detection thresholds, false negative rates and enforcement ‘guardrails’, with regulator access where warranted.

Constraints on optimisation: Limits on ad personalisation dynamics that amplify exposure to risky categories after a user clicks once. 

Victim-centred remedies: Mechanisms for restitution in cases of platform-mediated harm. 

The aim of these suggestions is to shift the platform’s dominant incentive from ‘manage legal risk at acceptable cost’ to ‘prevent illegal monetisation as a default condition of operating’.

The question policymakers, researchers and the public should keep returning to is whether Meta’s governance choices make illegal and exploitative monetisation a recurring, manageable feature of its business.  

If the answer is yes, then the policy response should be designed accordingly: not as advice to improve moderation, not as a discussion about the burden on companies of complying with the law, but as an immediate shift of incentives and penalties around the only thing Meta cares about: its bottom line.

Q

What made you sue Meta?

A

I sued because the internal systems that are supposed to protect employees who raise concerns did not work. Because retaliation is a powerful silencing mechanism, and if it works, it teaches everyone watching to stay quiet. Because I believe the culture that tolerates misogyny and retaliation internally is inseparable from the harms we see externally in products.  

I sued because I had the privilege to tell the truth. When you benefit from proximity to power, you inherit a duty to confront it when it causes harm. 

I sued because I almost lost my life as a direct result of what I experienced at Meta. Autistic people like me are vulnerable to burnout and its catastrophic impact on life and functioning when we encounter environments that are both demanding and invalidating, that cause moral injury and internal chasming. It is unjust that vulnerable people are expected to absorb those impacts while trillion-dollar corporations wipe their hands clean.

Q

You have spoken about how Big Tech culture regards safety concerns as a threat to profits. Can you elaborate?

A

Meta is using words like ‘free expression’, ‘innovation’ or ‘user empowerment’ as smokescreens for profit and greed. And now, as momentum grows for legislation that could keep kids safe, they’re spending millions to stop it.

Meta has mobilised well-funded lobbying teams and front groups to paint regulation as censorship, when in truth, it’s accountability they fear.

When Meta rolled out Teen Accounts in September last year, they said that ‘Teen Accounts are bringing parents more peace of mind’. They failed to mention that these products don’t actually work.

A report called ‘Teen Accounts, Broken Promises’ by researchers from NYU [New York University] and Northeastern, groups like Fairplay and ParentsSOS [which advocate for children’s online safety] and former Meta executive Arturo Béjar says these tools don’t work. After testing 47 of the safety tools bundled into Instagram’s Teen Accounts, they found that just 17% worked as described. Nearly two-thirds were either broken, ineffective or quietly discontinued.

Given this contrast between Meta’s marketing promise and independent findings, Teen Accounts seem less about protecting teens and more about protecting Meta: less cure and more sugar pill, meant to make parents and lawmakers feel better without adequately addressing the issue.

Let’s not forget that when the tobacco industry faced evidence that cigarettes caused cancer, it responded with light cigarettes and cartoon mascots. Meta’s Teen Accounts are the modern equivalent: a sop to worried parents and regulators, designed to preserve profit while avoiding real accountability.

If parents knew Instagram was unsafe, many would keep their teens off it. But Teen Accounts give the impression that guardrails are firmly in place. That false sense of security is exactly what Meta is selling: peace of mind for parents and plausible deniability for regulators, not safety for kids.

I recognise this pattern from my own time inside Meta. I spent nearly 15 years at the company, most recently as director of product marketing for Horizon Worlds, its virtual-reality platform. When I raised an alarm about product stability and harms to kids, the leadership’s focus was on decreasing risk to the company, not making the product safer. At one point, there was a discussion about whether or not it was appropriate to imply parental controls existed where they didn’t.

Meta has the resources and technical capacity to innovate more effectively, and it chooses not to. Instead, it provides ineffective solutions for kids while pouring billions into projects like circumnavigating the globe with subsea fibre to reach more users and make more money.

Q

Would it be safe to say that the focus is on content popularity over safety?

A

Exploitative social-media companies focus on engagement without a backstop, even when their own internal research, as well as external research, demonstrates the harms that come from it.