Apple, Google Under Fire for Hosting AI Apps That Create Non-Consensual Nude Images

Tech Transparency Project flags hundreds of millions of downloads of AI apps that digitally “undress” images despite platform safety policies.

Summary
  • A report by the Tech Transparency Project found 55 AI “nudify” apps on Google Play and 47 on Apple’s App Store that can generate non-consensual sexualised images.

  • The apps have collectively crossed 705 million downloads and generated an estimated $117 million in revenue through subscriptions, in-app purchases, and advertising.

  • The findings come amid growing scrutiny of generative AI tools following controversy around xAI’s Grok, prompting calls for tighter regulation in markets including the UK.

Apple Inc. and Google continue to host mobile applications that enable the creation of non-consensual sexualised images, despite policies prohibiting such content, Bloomberg reported, citing a study released Wednesday by the Tech Transparency Project (TTP), the research arm of nonprofit Campaign for Accountability.

Searches for terms such as “nudify” and “undress” across the Apple App Store and Google Play Store surface apps that use artificial intelligence to digitally alter photographs of celebrities and private individuals, making them appear nude or partially unclothed. According to the report, the platforms also display advertisements promoting similar AI-powered “nudification” tools within search results, raising concerns about enforcement of content moderation policies.

AI “Nudify” Apps

In a January review, TTP identified 55 such applications on Google Play and 47 on Apple’s App Store. Combined, these apps have been downloaded more than 705 million times globally and have generated an estimated $117 million in revenue through subscriptions, in-app purchases, and advertising, highlighting the scale and commercial viability of such tools.

The findings come amid intensifying scrutiny of generative AI systems capable of producing sexualised or manipulated imagery without consent. Recent controversy centred on Grok, the chatbot from Elon Musk's xAI, after users demonstrated prompts that could generate altered images depicting women and minors in sexually explicit ways.

The incident sparked widespread backlash online and prompted renewed calls from policymakers and child safety advocates for stricter oversight of AI systems that enable image manipulation.

Regulators in the United Kingdom and elsewhere have signalled the possibility of tighter safeguards or restrictions targeting platforms that host or distribute such tools.

Advocacy groups argue that these applications can facilitate harassment, exploitation, and reputational harm, particularly for women and minors whose images may be misused without their knowledge or consent.

“Both companies say they are dedicated to the safety and security of users, but they host a collection of apps that can turn an innocuous photo of a woman into an abusive, sexualised image,” TTP said in its report.
