Multiverse Computing is releasing HyperNova 60B 2605, an updated version of our signature open-source 60B model. This release improves coding performance and lifts accuracy across several benchmarks compared with the previous HyperNova 60B 2602, while preserving everything teams already rely on: native tool calling, OpenAI-style function-calling schemas, structured outputs, agent-style workflows, and configurable reasoning effort (low / medium / high).
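As a quick sketch of what configurable reasoning effort looks like in practice, the snippet below builds a request payload for an OpenAI-compatible chat endpoint. The `reasoning_effort` field name follows the common OpenAI-style convention and is an assumption here; check the model card for the exact parameter name your serving stack expects.

```python
import json

# Hypothetical request payload for an OpenAI-compatible endpoint.
# The "reasoning_effort" field name is an assumption, not confirmed
# by this release post -- consult the model card before relying on it.
payload = {
    "model": "HyperNova-60B-2605",
    "messages": [
        {"role": "user", "content": "Refactor this function and explain the change."}
    ],
    "reasoning_effort": "high",  # one of "low" | "medium" | "high"
}

print(json.dumps(payload, indent=2))
```

Dropping the effort to `"low"` trades some benchmark-level accuracy for latency, which is often the right call for short interactive completions.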
A clear step up on coding
On LiveCodeBench, HyperNova 60B 2605 lands at 68.68, a clear step up from 51.53 in HyperNova 60B 2602, and ahead of gpt-oss-120B at 62.75. For developer workloads, code generation, and tool-augmented agentic tasks, this is the most visible upgrade in this release.
Higher accuracy across several benchmarks
Beyond coding, HyperNova 60B 2605 lifts results across general reasoning, agentic, and instruction-following benchmarks evaluated under high reasoning effort, including AIME25, GPQA-D, MMLU-Pro, IFBench, HLE, AA-LCR, τ²-Bench, Terminal-Bench Hard, and SciCode. The full comparison against HyperNova 60B 2602 and gpt-oss-120B is below.
Same agentic foundation, retained and improved
HyperNova 60B 2605 keeps the same developer experience as the 2602 release. It supports native tool use and is well suited for function calling with defined schemas, structured outputs, and agentic operations such as browser tasks and code execution. It detects when to invoke tools, emits structured JSON tool calls, and consumes tool outputs to continue generation. Agentic benchmarks (τ²-Bench, Terminal-Bench Hard, IFBench, AA-LCR) all move up under the high-reasoning setup compared with the previous release.
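To make the tool-calling flow above concrete, here is a minimal sketch of a function-calling schema and the kind of structured JSON tool call the model emits when it decides a tool is needed. The tool name and parameters are illustrative, not part of the release; the shapes follow the OpenAI function-calling convention the post references.

```python
import json

# Hypothetical tool definition in the OpenAI-style function-calling
# schema; "get_weather" and its parameters are illustrative only.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# A structured JSON tool call of the kind the model emits. Per the
# OpenAI convention, "arguments" is itself a JSON-encoded string.
raw_tool_call = '{"name": "get_weather", "arguments": "{\\"city\\": \\"Paris\\"}"}'

call = json.loads(raw_tool_call)          # outer tool-call object
args = json.loads(call["arguments"])      # inner argument payload
print(call["name"], args["city"])         # get_weather Paris
```

In a real loop, the host executes the named tool with the decoded arguments and appends the result as a tool message, which the model then consumes to continue generation.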
Available on Hugging Face
HyperNova 60B 2605 is available now from Multiverse Computing on Hugging Face, under the same Apache 2.0 license as previous HyperNova 60B releases.