DeepSeek's content moderation architecture consists of two distinct layers. The first is the model layer, where the base model incorporates content policies directly into its training, shaping its default responses and behavior. The second is the system layer, where keyword-based filtering acts as a secondary safeguard, blocking output that contains specific blacklisted terms if the model's built-in restrictions fail. Because DeepSeek R1 is open-source, third parties can use and customize it however they like, and they are under no obligation to include the additional filtering system that DeepSeek runs on its own platform. "For example, Perplexity AI has deployed R1 on US servers through their Pro tier's 'reasoning' mode, allowing users to interact with the base model without the additional filtering layers present on DeepSeek's Chinese platform," said Abhivardhan, Chairperson & Managing Trustee, Indian Society of Artificial Intelligence and Law.
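To make the two-layer distinction concrete, the system layer described above can be sketched as a simple post-processing step around the model's output. This is an illustrative sketch only: the blacklist terms, the matching logic, and the function names here are hypothetical assumptions, not DeepSeek's actual (non-public) implementation.

```python
# Hypothetical system-layer keyword filter. The terms below are
# placeholders; DeepSeek's real blacklist and matching rules are not public.
BLACKLIST = {"blocked_term_a", "blocked_term_b"}

def system_layer_filter(model_output: str) -> str:
    """Pass the model-layer output through unless it contains a blacklisted term."""
    lowered = model_output.lower()
    if any(term in lowered for term in BLACKLIST):
        return "[response withheld by content filter]"
    return model_output

# A third-party deployment that omits this layer would return the model's
# output directly, leaving only the training-time (model-layer) restrictions.
print(system_layer_filter("A normal answer."))
print(system_layer_filter("Text mentioning blocked_term_a."))
```

The point of the sketch is that the system layer is separable from the model itself: it sits outside the weights, so anyone hosting the open-source model can simply not apply it.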