December 8, 2025
Quantum physicists have shrunk and “de-censored” DeepSeek R1
They managed to cut the size of the AI reasoning model by more than half—and claim it can now answer politically sensitive questions once off limits in Chinese AI systems.

TL;DR
- Multiverse Computing created DeepSeek R1 Slim, a smaller and reportedly uncensored version of the DeepSeek R1 AI model.
- The new model relies on quantum-inspired tensor networks, which enable a significant size reduction and efficient manipulation of the model (a simplified compression sketch follows this list).
- Researchers claim to have precisely identified and removed Chinese censorship layers from the AI.
- Testing involved politically sensitive questions, with the modified model providing factual responses.
- The development is part of a broader effort to create smaller, more efficient AI models, saving energy and money.
- Other methods for model compression include distillation, quantization, and pruning (a minimal pruning-and-quantization sketch also appears after this list).
- A key capability is selectively removing bias or adding behaviors at a granular level.
- Chinese authorities mandate censorship in AI models, influencing the global information ecosystem.
- Academics are studying government-imposed censorship in large language models, noting higher rates of censored responses in Chinese models.
- Perplexity previously released an uncensored variant of DeepSeek R1 using traditional fine-tuning methods.
- Fully reversing censorship remains challenging because it is embedded dynamically and in complex ways during AI training.
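
The article does not describe Multiverse's tensor-network procedure in detail, so the snippet below is only a rough illustration of the general idea of factoring a large weight matrix into smaller components, here with a plain truncated SVD rather than the company's actual quantum-inspired decomposition.

```python
import torch

def low_rank_compress(weight: torch.Tensor, rank: int):
    """Factor a weight matrix into two smaller matrices via truncated SVD.

    Toy stand-in for the quantum-inspired tensor-network decompositions
    described in the article; the real method is not public.
    """
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (out_features, rank), singular values folded in
    B = Vh[:rank, :]             # (rank, in_features)
    return A, B

# A 4096x4096 layer (~16.8M weights) kept at rank 256 becomes two
# factors totalling ~2.1M weights, roughly an 8x reduction.
W = torch.randn(4096, 4096)
A, B = low_rank_compress(W, rank=256)
print(A.numel() + B.numel(), "vs", W.numel())
```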
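For comparison, the more conventional compression techniques the article mentions can be sketched in a few lines; the magnitude-pruning and int8 quantization helpers below are generic toy implementations, not code from DeepSeek, Perplexity, or Multiverse.

```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the smallest-magnitude fraction of weights (unstructured pruning)."""
    k = max(1, int(weight.numel() * sparsity))
    threshold = weight.abs().flatten().kthvalue(k).values
    return torch.where(weight.abs() > threshold, weight, torch.zeros_like(weight))

def quantize_int8(weight: torch.Tensor):
    """Symmetric per-tensor int8 quantization: int8 values plus one float scale."""
    scale = weight.abs().max() / 127.0
    q = torch.clamp((weight / scale).round(), -127, 127).to(torch.int8)
    return q, scale

W = torch.randn(1024, 1024)
W_sparse = magnitude_prune(W, sparsity=0.5)   # half the weights set to zero
q, scale = quantize_int8(W_sparse)            # 4 bytes per weight -> 1 byte
W_approx = q.float() * scale                  # dequantized approximation
```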