
11.9.25

Parallel-R1: Teaching LLMs to reason from multiple angles—permanently

Modern large language models (LLMs) typically reason sequentially, one thought chain at a time. Parallel thinking, in contrast, spawns multiple reasoning paths (or perspectives) and then merges their insights. Prompting tricks can induce this behavior at inference time, but they come with heavy overhead and brittle generalization. Parallel-R1, a new paper from Tencent AI Lab Seattle and collaborators, pioneers a training-time RL framework for instilling parallel thinking as a native reasoning strategy.


What is Parallel-R1?

The key idea: don’t just prompt models to use parallel paths—train them to do so. Parallel-R1 has a progressive curriculum:

  1. Cold start (format learning via SFT) — teach the model the syntax/tags of parallel blocks (e.g., <Parallel>, <Path>...</Path>, <Summary>), using easier math problems (GSM8K) where high-quality parallel traces are easy to generate; a minimal format sketch follows this list.

  2. Reinforcement learning (RL) on easy tasks to explore the use of parallel thinking, with a reward that combines correctness with use of the parallel structure.

  3. RL on more difficult problems (e.g., DAPO, AMC, AIME), so that both performance and the parallel-thinking style generalize.
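
To make the cold-start format concrete, below is a minimal Python sketch of a parallel trace and a lightweight format check. The tag names (<Parallel>, <Path>, <Summary>) come from the paper; the problem text, the trace wording, and the exact placement of <Summary> are illustrative assumptions.

```python
# Minimal sketch of a cold-start (SFT) example using the paper's tag vocabulary.
# Everything except the tag names is an illustrative assumption.

example_trace = (
    "Problem: A train travels 60 km in 1.5 hours. What is its average speed?\n"
    "<Parallel>\n"
    "  <Path>Speed = distance / time = 60 / 1.5 = 40 km/h.</Path>\n"
    "  <Path>In 3 hours it would cover 120 km, so 120 / 3 = 40 km per hour.</Path>\n"
    "</Parallel>\n"
    "<Summary>Both paths agree: the average speed is 40 km/h.</Summary>\n"
    "Answer: 40 km/h"
)

def has_parallel_format(trace: str, min_paths: int = 2) -> bool:
    """Cheap sanity check that a trace uses the parallel-thinking tags."""
    return (trace.count("<Path>") >= min_paths
            and trace.count("<Path>") == trace.count("</Path>")
            and "<Parallel>" in trace and "</Parallel>" in trace
            and "<Summary>" in trace and "</Summary>" in trace)

assert has_parallel_format(example_trace)
```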

The architecture has two variants: a causal (structure-agnostic) version and a structured version. The structured version modifies the attention mechanism (via path-window masking, separate position encodings) so paths are more isolated during reasoning. But structured variants show trade-offs—good for generalization in some settings, but less robust under distribution shift.
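
To make the structured variant's attention change concrete, here is a small sketch of a path-window mask, under the assumption that a token may attend to the shared prefix and to earlier tokens in its own path, while summary tokens attend to everything. The paper's actual masking and position-encoding scheme may differ in detail.

```python
import torch

# Hedged sketch of a path-window attention mask (segment layout is an assumption).
def path_window_mask(segment_ids: torch.Tensor) -> torch.Tensor:
    """
    segment_ids: (seq_len,) ints; 0 = shared prefix/summary, 1..K = path index.
    Returns a (seq_len, seq_len) boolean mask, True where attention is allowed.
    """
    seq_len = segment_ids.shape[0]
    causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    same_path = segment_ids.unsqueeze(1) == segment_ids.unsqueeze(0)
    to_shared = (segment_ids == 0).unsqueeze(0).expand(seq_len, -1)    # anyone may read the prefix
    from_shared = (segment_ids == 0).unsqueeze(1).expand(-1, seq_len)  # summary reads all paths
    return causal & (same_path | to_shared | from_shared)

# Example: 3 prefix tokens, two 2-token paths, then 2 summary tokens.
mask = path_window_mask(torch.tensor([0, 0, 0, 1, 1, 2, 2, 0, 0]))
```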


Results & gains

On a battery of math benchmarks (MATH, AMC23, AIME24, AIME25), Parallel-R1 shows consistent improvements:

  • The “Seen” variant (causal) achieves ~48.9% average across benchmarks (Mean@16 / Pass@16, etc.), beating baseline GRPO RL on general math tasks. 

  • In particular, on AIME’25, Parallel-R1 raises accuracy by ~8.4% over a purely sequential RL model trained on the harder tasks directly. 

  • The structured (Unseen) variant also performs well under certain reward schedules; the “alternating ACC/PAR” reward schedule (switching between rewarding correctness and parallel structure periodically) helps balance parallel usage and performance. 
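
A minimal sketch of what an alternating ACC/PAR schedule could look like; the period length and bonus weight below are illustrative assumptions, not the paper's values.

```python
# Alternate between an accuracy-only phase (ACC) and a phase that also rewards
# use of the parallel format (PAR). Period and bonus weight are assumptions.
def scheduled_reward(step: int, correct: bool, used_parallel: bool,
                     period: int = 50, par_bonus: float = 0.2) -> float:
    acc = 1.0 if correct else 0.0
    if (step // period) % 2 == 0:                        # ACC phase: correctness only
        return acc
    return acc + (par_bonus if used_parallel else 0.0)   # PAR phase: add format bonus
```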

Beyond numerical gains, the authors observe a behavioral shift: early in training, the model heavily uses parallel paths as an exploration tool, branching in many places; as the model becomes stronger, it shifts to using parallel paths more conservatively, mostly for verification near the end of reasoning. This shift correlates with stronger final performance. 


Why this matters

  • Performance & efficiency trade-off: Parallel-R1 shows that training models for parallel thinking can yield higher reasoning ability without ballooning inference cost, since parallel paths are triggered only when needed.

  • Better than imitation: Many earlier works used supervised fine-tuning on synthetic parallel reasoning traces under teacher forcing, but those often overfit to particular patterns. RL in Parallel-R1 helps models learn to decide when parallel paths help, not just how to mimic them.

  • Scaffolding exploration: The cold-start + easy tasks + alternating reward strategy functions as a scaffold, enabling RL to find a stronger policy space than direct RL on hard tasks.

  • Architecture designs matter: The structured variant shows that attention masking and position encodings can help or hurt depending on how well training data matches deployment tasks.


Limitations & future directions

  • The gains, though significant, still leave a clear gap to human-level performance on very hard math tasks.

  • The structured variants can struggle under domain shift; care is needed with architectural changes that assume particular path structures.

  • Triggering parallel thinking (using <Parallel> blocks) costs some token and compute overhead, though the model learns to use it more sparsely over time.

  • There is a tension between pushing for parallel structure (which encourages exploration) and maximizing accuracy (which sometimes favors fewer divergences); reward engineering is delicate.


Bottom line: Parallel-R1 is a breakthrough toward training LLMs that think in parallel, not just deeper. By combining curriculum learning, structured or causal variants, and reinforcement learning with rewards for both correctness and reasoning style, it unlocks better performance on challenging math tasks. As reasoning benchmarks and applications demand both correctness and robustness, methods like this will likely become a standard part of the toolkit.

Paper link: arXiv 2509.07980 (PDF)

ParaThinker: parallel minds beat longer monologues

 LLMs have ridden test-time compute—“think longer” chains of thought—but returns taper as early tokens lock models into bad trajectories. Tsinghua’s ParaThinker calls this Tunnel Vision and proposes native thought parallelism: generate several independent reasoning paths simultaneously, then fuse them into one answer. 

Instead of external voting, ParaThinker trains the model itself to branch and merge: specialized control tokens (<think i>) trigger distinct trajectories, path-specific positional embeddings keep streams separate, and a two-phase attention mask enforces independence during thinking and controlled integration during summarization. The KV cache from the thinking stage is reused, avoiding re-prefill costs. 
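
To make the path-specific positioning idea concrete, here is one way positions could be assigned so that parallel streams share the same positional window instead of stacking end to end; the restart-after-prompt scheme and function below are illustrative assumptions, not ParaThinker's exact design.

```python
from typing import List

# Hedged sketch: each thinking stream restarts its position counter right after
# the shared prompt, so K streams occupy the same positional range.
def path_relative_positions(segment_ids: List[int]) -> List[int]:
    """segment_ids: 0 = shared prompt token, 1..K = thinking-stream index."""
    positions, next_pos, shared_seen = [], {}, 0
    prompt_len = sum(1 for s in segment_ids if s == 0)
    for seg in segment_ids:
        if seg == 0:                       # shared prompt token
            positions.append(shared_seen)
            shared_seen += 1
        else:                              # stream token: count from prompt_len
            next_pos.setdefault(seg, prompt_len)
            positions.append(next_pos[seg])
            next_pos[seg] += 1
    return positions

# A 3-token prompt followed by two 2-token streams:
print(path_relative_positions([0, 0, 0, 1, 1, 2, 2]))  # [0, 1, 2, 3, 4, 3, 4]
```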

On AIME-24/25, AMC-23 and MATH-500, ParaThinker with 8 parallel paths boosts accuracy by +12.3 pts (1.5B) and +7.5 pts (7B) over sequential baselines under the same token budget, and still beats majority voting by +4.3/+2.0 pts—with only ~7.1% latency overhead. Generating up to 16 paths costs <2× single-path latency, thanks to better arithmetic intensity on GPUs. 

The takeaway: scale width, not just depth. ParaThinker shows that orchestrating compute across diverse, parallel thoughts unlocks latent reasoning ability and makes smaller models out-punch larger sequential ones. Code is available on GitHub. 

Paper link: arXiv 2509.04475 (PDF)

2.8.25

MetaStone-S1 makes “how long to think” a first-class dial—and it pays off

 Frontier models are learning to trade more inference compute for better answers. MetaStone-S1 turns that trend into a clean architecture: a Reflective Generative Form where the policy and a process reward model live in the same network, adding a light 53M-parameter scoring head instead of a separate, heavyweight judge. The scoring head is trained self-supervised from outcome rewards—no step-by-step human labels—so the system can generate multiple chains of thought and select the best one efficiently. 
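
To illustrate the shape of such a shared-backbone verifier, here is a small sketch of a scoring head trained only from outcome rewards; the head size, pooling, and loss below are illustrative assumptions, not the paper's exact SPRM recipe.

```python
import torch
import torch.nn as nn

class ScoringHead(nn.Module):
    """Lightweight head that scores reasoning steps from backbone hidden states."""
    def __init__(self, hidden_size: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(hidden_size, 512), nn.GELU(), nn.Linear(512, 1))

    def forward(self, step_hidden: torch.Tensor) -> torch.Tensor:
        # step_hidden: (num_steps, hidden_size), one pooled vector per reasoning step
        return self.proj(step_hidden).squeeze(-1)          # (num_steps,) step scores

def outcome_supervised_loss(head: ScoringHead,
                            step_hidden: torch.Tensor,
                            outcome: float) -> torch.Tensor:
    """BCE between the aggregated step score and the trajectory outcome (1.0 = correct)."""
    traj_score = head(step_hidden).mean()                  # aggregate steps into one score
    target = torch.tensor(outcome)
    return nn.functional.binary_cross_entropy_with_logits(traj_score, target)
```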

Three “reasoning effort” modes, one model

Because the verifier is built-in, MetaStone-S1 exposes controllable thinking lengths: low, medium, and high, implemented via different candidate counts (k = 2/8/32) at inference. That makes test-time scaling a product feature rather than a research trick.
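
To show how the k dial could be wired up, here is a minimal best-of-k selection loop; generate and score are stand-in callables (assumptions), not MetaStone-S1's released interface.

```python
import random
from typing import Callable, Tuple

def best_of_k(prompt: str,
              generate: Callable[[str], str],
              score: Callable[[str, str], float],
              k: int = 8) -> Tuple[str, float]:
    """Sample k candidate chains of thought and return the top-scoring one."""
    candidates = [generate(prompt) for _ in range(k)]
    scored = [(c, score(prompt, c)) for c in candidates]
    return max(scored, key=lambda pair: pair[1])

# Toy usage with stub callables, just to show the control flow (k = 2/8/32 maps to
# the low/medium/high effort modes described above):
best, s = best_of_k("What is 7 * 8?",
                    generate=lambda p: f"reasoning... answer {random.choice([54, 56, 58])}",
                    score=lambda p, c: 1.0 if "56" in c else 0.0,
                    k=8)
```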

Benchmarks: o3-mini territory at 32B

Across AIME’24/’25 (math), LiveCodeBench (code), and C-Eval (Chinese reasoning), the 32B MetaStone-S1 variants lift accuracy over a strong 32B baseline and land comparable to OpenAI o3-mini (medium)—with the high mode leading math by a sizable margin. Example (Pass@1): MetaStone-S1-32B-high scores 85.2 on AIME’24, 73.6 on AIME’25, 64.2 on LiveCodeBench, and 89.7 on C-Eval, vs. 79.6 / 74.8 / 67.4 / 75.9 for o3-mini-medium.

At smaller scales, the 1.5B and 7B versions also beat peer open models (e.g., R1-Distill 7B/8B) on AIME and LiveCodeBench, showing the approach is not just a big-model hack. 

Why this matters

  • Unified policy+PRM = cheaper selection. Sharing the backbone removes a second giant model from the loop and still delivers strong external test-time scaling (TTS) gains.

  • Label-free verifier training. The SPRM head learns step scoring from outcome signals, sidestepping costly, noisy process annotations. 

  • Production-ready knob. Teams can ship speed/quality dials (k=2/8/32) instead of maintaining separate models for different latency tiers. 

  • Open release. Code and checkpoints are public, inviting replication and adaptation. 

MetaStone-S1’s take-home: reasoning power isn’t only about bigger weights or longer chains—it’s about selecting the right trajectory at inference, with a verifier you can actually afford to run.

Paper link: arXiv 2507.01951 (PDF)

30.5.25

DeepSeek R1‑0528: The Open‑Source Challenger That Rivals OpenAI o3 and Gemini 2.5 Pro

 Chinese startup DeepSeek has just released R1‑0528, a major update to its flagship reasoning model, positioning it as an affordable yet powerful open‑source alternative to OpenAI’s o3 and Google’s Gemini 2.5 Pro.

The new release, published on Hugging Face under the permissive MIT License, brings a host of enhancements to math, science, business, and coding reasoning—all while reinforcing its competitive edge.



🚀 What’s New in R1‑0528

  • Stronger Reasoning:
    On the AIME 2025 benchmark, accuracy surged from 70% to an impressive 87.5%, thanks to longer reasoning chains (averaging 23k tokens vs. 12k before). Code generation also jumped, with LiveCodeBench scores rising from 63.5% to 73.3%, and performance on the challenging “Humanity’s Last Exam” roughly doubled.

  • Developer-Friendly Features:
    R1‑0528 now supports JSON output and function calling, streamlining integration into developer pipelines and automation workflows; a usage sketch follows this list.

  • New Model Variant:
    A distilled version—R1‑0528‑Qwen3‑8B—brings lightweight performance that's still on par with larger models in open benchmarks like AIME 2024.
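
As a usage sketch for the developer-facing features above, the snippet below calls an OpenAI-compatible endpoint with JSON output and a tool definition. The base URL, model id, and whether both parameters are enabled for R1‑0528 are assumptions to verify against DeepSeek's current documentation.

```python
from openai import OpenAI

# Assumptions: OpenAI-compatible endpoint at api.deepseek.com, model id
# "deepseek-reasoner", and JSON mode / tools enabled for R1-0528.
client = OpenAI(api_key="YOUR_DEEPSEEK_KEY", base_url="https://api.deepseek.com")

tools = [{
    "type": "function",
    "function": {
        "name": "evaluate_expression",   # hypothetical tool, for illustration only
        "description": "Evaluate a math expression and return the numeric result.",
        "parameters": {
            "type": "object",
            "properties": {"expression": {"type": "string"}},
            "required": ["expression"],
        },
    },
}]

resp = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "Compute 37 * 49 and reply as JSON."}],
    response_format={"type": "json_object"},   # JSON output
    tools=tools,                               # function calling
)
print(resp.choices[0].message)
```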

🏆 Why This Matters

DeepSeek continues to challenge the perception that high performance requires closed-source models and massive budgets. R1‑0528 delivers competitive strength on par with expensive proprietary systems, but under an MIT license and at significantly lower cost—R1’s API runs as low as $0.14 per 1M tokens during peak hours, with local runtime options detailed on GitHub.

This open-access approach puts serious pressure on dominant U.S. models and fosters global collaboration—developers worldwide can use, modify, and deploy R1‑0528 freely.


🌍 Open-Source Renaissance in AI

Since its initial R1 model launch in January, DeepSeek has quickly become a key player in the global AI landscape. R1‑0528 maintains the open-source ethos and stakes its claim as a champion of community-driven innovation in areas where cost and licensing are bottlenecks.


🗣️ Community Buzz

Feedback from enthusiasts is bullish: voices from Reddit’s LocalLLaMA community noted that “DeepSeek is now almost on par with OpenAI’s o3 High model on LiveCodeBench! Huge win for opensource!”

Analysts also see this release as a strategic “Sputnik moment” that could disrupt U.S. dominance in AI—similar to earlier 2025 reports on DeepSeek’s initial release.


✅ Final Verdict

DeepSeek R1‑0528 marks a significant milestone in open-source AI: powerful reasoning, developer utility, and community support—all while costing a fraction of proprietary counterparts. As a truly accessible yet competitive model, it nudges the AI ecosystem toward openness and transparency—without sacrificing performance.

27.5.25

NVIDIA Introduces AceReason-Nemotron: Enhancing Math and Code Reasoning through Reinforcement Learning

 NVIDIA has unveiled AceReason-Nemotron, a 14-billion-parameter open-source model designed to enhance mathematical and coding reasoning through large-scale reinforcement learning (RL). This model demonstrates that RL can significantly improve reasoning capabilities in small to mid-sized models, surpassing traditional distillation-based approaches.

Key Features and Innovations

  • Sequential RL Training Strategy: The model undergoes a two-phase RL training process—initially on math-only prompts, followed by code-only prompts. This approach not only boosts performance in the respective domains but also ensures minimal degradation across tasks (a training-loop sketch follows this list).

  • Enhanced Benchmark Performance: AceReason-Nemotron-14B achieves notable improvements on various benchmarks:

    • AIME 2025: 67.4% (+17.4%)

    • LiveCodeBench v5: 61.1% (+8%)

    • LiveCodeBench v6: 54.9% (+7%) 

  • Robust Data Curation Pipeline: NVIDIA developed a comprehensive data curation system to collect challenging prompts with verifiable answers, facilitating effective verification-based RL across both math and code domains. 

  • Curriculum Learning and Stability: The training incorporates curriculum learning with progressively increasing response lengths and utilizes on-policy parameter updates to stabilize the RL process. 
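
A minimal sketch of the staged recipe described above: RL on math-only prompts first, then code-only prompts, with a response-length curriculum that grows across stages. Stage boundaries, length caps, and the rl_update callable are illustrative assumptions, not NVIDIA's exact configuration.

```python
import random
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Stage:
    name: str
    prompts: List[str]
    max_response_tokens: int          # curriculum: later stages allow longer responses

def train_sequential_rl(policy,
                        rl_update: Callable,          # one on-policy RL step on a batch
                        math_prompts: List[str],
                        code_prompts: List[str],
                        steps_per_stage: int = 1000,
                        batch_size: int = 16):
    stages = [
        Stage("math-8k",  math_prompts, 8_192),       # math-only first...
        Stage("math-16k", math_prompts, 16_384),
        Stage("math-24k", math_prompts, 24_576),
        Stage("code-32k", code_prompts, 32_768),      # ...then code-only
    ]
    for stage in stages:
        for _ in range(steps_per_stage):
            batch = random.choices(stage.prompts, k=batch_size)
            rl_update(policy, batch, max_len=stage.max_response_tokens)
    return policy
```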

Implications for AI Development

AceReason-Nemotron's success illustrates the potential of reinforcement learning in enhancing the reasoning abilities of AI models, particularly in mathematical and coding tasks. By releasing this model under the NVIDIA Open Model License, NVIDIA encourages further research and development in the AI community.
