With this release we are pushing the boundaries of SOTA model editing:
- No architectural changes
- No new parameters or training data
- Identical behavior on non-censored prompts
The goal is to give researchers and developers access to powerful reasoning models without hard-coded political refusals, while keeping standard safety constraints intact.
We are also releasing the full code and datasets used for evaluating censorship and safety.
Model: https://huggingface.co/MultiverseComputingCAI/Qwen3-Next-80B-A3B-Thinking-Uncensored
Dataset: https://huggingface.co/datasets/MultiverseComputingCAI/llm-refusal-evaluation
Evaluation library: https://github.com/CompactifAI/LLM-Refusal-Evaluation
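For reference, the released model and dataset can be pulled directly from the Hugging Face Hub with the standard `transformers` and `datasets` APIs. This is a minimal sketch, not an official usage guide; generation settings, hardware requirements (the model is an 80B MoE), and the dataset's split names are assumptions, not confirmed by this release.

```python
# Minimal sketch: loading the released model and evaluation dataset
# via the standard Hugging Face APIs. Repo IDs are taken from the
# links above; everything else (device_map, dtype) is illustrative.
MODEL_ID = "MultiverseComputingCAI/Qwen3-Next-80B-A3B-Thinking-Uncensored"
DATASET_ID = "MultiverseComputingCAI/llm-refusal-evaluation"


def load_release(model_id: str = MODEL_ID, dataset_id: str = DATASET_ID):
    """Download the model, tokenizer, and refusal-evaluation dataset.

    Requires `transformers`, `datasets`, and enough GPU memory for an
    80B-parameter MoE model (or an offloading setup).
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from datasets import load_dataset

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",   # spread layers across available devices
        torch_dtype="auto",  # use the checkpoint's native precision
    )
    dataset = load_dataset(dataset_id)
    return model, tokenizer, dataset


if __name__ == "__main__":
    model, tokenizer, dataset = load_release()
    print(dataset)
```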
