
11.9.25

The Majority Isn’t Always Right: AggLM Learns to Aggregate Better Than Voting

When logic is tricky, the most common answer isn't always the correct one. A new Meta FAIR & CMU paper titled "The Majority is not always right: RL training for solution aggregation" challenges the standard practice of combining LLM outputs via voting or reward-scored selection. Their method, AggLM, trains a dedicated aggregator model to review, correct, and synthesize multiple LLM-generated candidate solutions, using reinforcement learning from verifiable rewards (RLVR), and yields large gains over majority voting and reward-model baselines.


Solving it: learned reconciliation vs. counting

Standard aggregation in LLM reasoning often works like this: sample many candidate solutions, then pick the answer that's most frequent (majority voting) or highest scored by some reward model. While effective in many settings, these methods have a blind spot: when correct answers exist only among minority solutions. In contrast, AggLM treats aggregation itself as a reasoning task. It takes a set of candidate solutions, analyzes them, spots mistakes or partial correctness, then combines ideas or fills in missing steps to produce a final solution. Importantly, it's trained with verifiable rewards: the aggregator is rewarded only when its final output matches the known correct answer.
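The counting baseline the paper argues against fits in a few lines. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
from collections import Counter

def majority_vote(answers: list[str]) -> str:
    """Pick the most frequent final answer among candidate solutions."""
    counts = Counter(answers)
    winner, _ = counts.most_common(1)[0]
    return winner

# The blind spot: the correct answer ("12") is in the minority, so voting
# returns the popular-but-wrong "10". A learned aggregator can instead
# inspect the reasoning inside each candidate and recover the minority answer.
candidates = ["10", "10", "12", "10", "12"]
print(majority_vote(candidates))  # -> "10"
```

This is exactly the failure mode AggLM is trained to fix: it reads the candidates' reasoning rather than just tallying their final answers.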


Key ingredients & experiments

  • Dataset & training: Using Qwen3-1.7B as the solution generator, AggLM-1.7B is trained on ~446,000 examples drawn from a mixture of “easy” and “hard” sets. Hard sets are those where the majority answer among candidates is actually incorrect; the mix helps the model learn both to follow the majority and to rescue correctness from minority solutions. 

  • Aggregation via RLVR: The model uses Group-Relative Policy Optimization (GRPO), with a binary reward (1 for matching the ground truth, 0 otherwise). The aggregator is initialized from the Qwen3-1.7B model but is tuned via this RL signal. 

  • Benchmarks: Evaluated on four math contest datasets: AIME24, AIME25, HMMT24, HMMT25. AggLM was tested aggregating candidate solutions from both the same generator model (Qwen3-1.7B) and stronger ones (Qwen3-8B), in both thinking and non-thinking modes. 
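The GRPO signal described above is simple to sketch: each aggregation rollout gets a binary verifiable reward, and its advantage is computed relative to its own group of rollouts. A minimal sketch, assuming the standard GRPO normalization (reward minus group mean, divided by group standard deviation); the function name is illustrative:

```python
import statistics

def group_relative_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """GRPO-style advantage: normalize each rollout's reward against the
    mean and standard deviation of its own group of rollouts."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]

# Binary verifiable reward: 1 if the aggregated answer matched ground truth.
rewards = [1.0, 0.0, 1.0, 0.0]  # 2 of 4 aggregation rollouts were correct
print(group_relative_advantages(rewards))  # correct rollouts get ~ +1, wrong ones ~ -1
```

Note that if every rollout in a group succeeds (or every one fails), the advantages are all zero, which is one reason the training mix of "easy" and "hard" groups matters.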


Results & token-efficiency

  • On solutions from Qwen3-1.7B in thinking mode, AggLM-1.7B lifts accuracy well above majority voting at the same number of candidates, with AIME25 among the clearest wins. More striking, when aggregating solutions from the stronger Qwen3-8B model, AggLM-1.7B still outperforms majority voting, weighted voting, and reward-model selection baselines, despite being far smaller than the generator. 

  • In non-thinking modes (i.e. when the candidate-generating model is weaker or does not use chain-of-thought reasoning), AggLM retains its lead, showing that the gains are not tied to strong or specifically formatted inputs. 

  • Regarding cost, AggLM is more token efficient: instead of needing large numbers of candidate solutions (i.e. very large k) for majority voting to reach high accuracy, AggLM achieves similar or better accuracy with fewer candidate solutions, saving both inference time and compute. 
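The token-efficiency claim is easy to see with back-of-envelope arithmetic. All numbers below are illustrative, not figures from the paper:

```python
def total_tokens(k: int, avg_solution_tokens: int, agg_tokens: int = 0) -> int:
    """Inference-token budget: k candidate solutions plus an optional
    aggregation pass. All figures used here are illustrative."""
    return k * avg_solution_tokens + agg_tokens

# Majority voting may need many samples (large k) to reach a target accuracy...
voting_budget = total_tokens(k=32, avg_solution_tokens=4000)          # 128,000 tokens
# ...while a learned aggregator can match it with far fewer candidates,
# even after paying for the aggregation pass itself.
agglm_budget = total_tokens(k=8, avg_solution_tokens=4000, agg_tokens=6000)  # 38,000 tokens
print(voting_budget, agglm_budget)
```

The point is not the exact numbers but the shape of the trade: one extra aggregation pass replaces many extra candidate generations.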


Implications & what’s next

AggLM shifts thinking in several ways:

  1. Aggregation as reasoning. Aggregation isn’t just picking among options—it’s an opportunity to correct, synthesize, and integrate partial truths. Models that can do that perform better, especially in instances where majority answers mislead.

  2. Balancing examples is key. Training on a mix of easy and hard cases was essential. If you train only on “easy” majority-correct groups, or only on “hard” ones, performance suffers. 

  3. Generalization beyond training generators. AggLM works well even when aggregating from stronger models than those used during training—implying aggregation skills are transferable, not just overfitted to particular output distributions. 

  4. Efficiency trade-off. Instead of scaling k (the number of sampled solutions) to very high values, using a learned aggregator yields larger gains per additional candidate, which means a higher accuracy ceiling for a given token and latency budget. 


Bottom line: AggLM demonstrates that “the majority vote” should not be the default in reasoning aggregation. Models that are trained to look across candidate solutions—identify hidden truth, correct errors, and combine the best ideas—do better than simple heuristics. Especially in math and logic tasks where minority correct answers exist, learned aggregation via RL with verifiable reward is a strong lever. If you’re designing agents or reasoning pipelines, integrating an aggregator like AggLM can be a powerful performance boost with reasonable cost.

Paper link: arXiv 2509.06870 (PDF)

19.5.25

AI Agents vs. Agentic AI: A Conceptual Taxonomy, Applications, and Challenges

 A recent study by researchers Ranjan Sapkota, Konstantinos I. Roumeliotis, and Manoj Karkee delves into the nuanced differences between AI Agents and Agentic AI, providing a structured taxonomy, application mapping, and an analysis of the challenges inherent to each paradigm. 

Defining AI Agents and Agentic AI

  • AI Agents: These are modular systems primarily driven by Large Language Models (LLMs) and Large Image Models (LIMs), designed for narrow, task-specific automation. They often rely on prompt engineering and tool integration to perform specific functions.

  • Agentic AI: Representing a paradigmatic shift, Agentic AI systems are characterized by multi-agent collaboration, dynamic task decomposition, persistent memory, and orchestrated autonomy. They move beyond isolated tasks to coordinated systems capable of complex decision-making processes.

Architectural Evolution

The transition from AI Agents to Agentic AI involves significant architectural enhancements:

  • AI Agents: Utilize core reasoning components like LLMs, augmented with tools to enhance functionality.

  • Agentic AI: Incorporate advanced architectural components that allow for higher levels of autonomy and coordination among multiple agents, enabling more sophisticated and context-aware operations.

Applications

  • AI Agents: Commonly applied in areas such as customer support, scheduling, and data summarization, where tasks are well-defined and require specific responses.

  • Agentic AI: Find applications in more complex domains like research automation, robotic coordination, and medical decision support, where tasks are dynamic and require adaptive, collaborative problem-solving.

Challenges and Proposed Solutions

Both paradigms face unique challenges:

  • AI Agents: Issues like hallucination and brittleness, where the system may produce inaccurate or nonsensical outputs.

  • Agentic AI: Challenges include emergent behavior and coordination failures among agents.

To address these, the study suggests solutions such as ReAct loops, Retrieval-Augmented Generation (RAG), orchestration layers, and causal modeling to enhance system robustness and explainability.
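Of the mitigations listed above, the ReAct loop is the most mechanical: the model alternates Thought/Action steps, and a harness executes tools and feeds observations back into the context. A minimal sketch with the model call stubbed out (in practice it would be an LLM API; tool names and the transcript format are illustrative):

```python
def react_loop(question: str, model, tools: dict, max_steps: int = 5) -> str:
    """Run a minimal ReAct-style loop until the model emits a final answer."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = model(transcript)              # e.g. "Action: lookup[Paris population]"
        transcript += step + "\n"
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        if step.startswith("Action:"):
            name, _, arg = step.removeprefix("Action: ").partition("[")
            observation = tools[name](arg.rstrip("]"))
            transcript += f"Observation: {observation}\n"  # ground the next step
    return "no answer"

# Stub model: looks something up, then answers from the observation.
def stub_model(transcript: str) -> str:
    if "Observation:" in transcript:
        return "Final Answer: 2.2M"
    return "Action: lookup[Paris population]"

tools = {"lookup": lambda q: "2.2M" if "Paris" in q else "unknown"}
print(react_loop("What is the population of Paris?", stub_model, tools))  # -> 2.2M
```

Grounding each step in a tool observation is what makes ReAct a hallucination mitigation: the model's claims are checked against external results rather than generated free-form.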


References

  1. Sapkota, R., Roumeliotis, K. I., & Karkee, M. (2025). AI Agents vs. Agentic AI: A Conceptual Taxonomy, Applications and Challenges. arXiv preprint arXiv:2505.10468.

4.5.25

Microsoft Launches Phi-4-Reasoning-Plus: Small Model, Big Reasoning Power

Microsoft has unveiled Phi-4-Reasoning-Plus, a compact yet highly capable open-weight language model built for deep, structured reasoning. With just 14 billion parameters, it punches far above its weight—outperforming much larger models on key benchmarks in logic, math, and science.

Phi-4-Reasoning-Plus is a refinement of Microsoft's earlier Phi-4 model. It uses advanced supervised fine-tuning and reinforcement learning to deliver high reasoning accuracy in a lightweight format. The model was fine-tuned on 16 billion tokens, about half of them unique, drawn from synthetic prompts and carefully filtered web content, followed by a dedicated reinforcement learning phase focused on roughly 6,400 math problems.

What makes this model especially valuable to developers and businesses is its MIT open-source license, allowing free use, modification, and commercial deployment. It's also designed to run efficiently on common AI frameworks like Hugging Face Transformers, vLLM, llama.cpp, and Ollama—making it easy to integrate across platforms.

Key Features of Phi-4-Reasoning-Plus:

  • ✅ 14B parameters with performance rivaling 70B+ models in reasoning tasks

  • ✅ Outperforms larger LLMs in math, coding, and logical reasoning

  • ✅ Uses special tokens to improve transparency in reasoning steps

  • ✅ Trained with outcome-based reinforcement learning for better accuracy and brevity

  • ✅ Released under the MIT license for open commercial use

  • ✅ Compatible with lightweight inference frameworks
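The "special tokens" bullet above can be illustrated with a small parsing helper. This assumes the model wraps its chain of thought in `<think>…</think>` markers, as the Phi-4-reasoning family does; the function name and sample output are illustrative:

```python
import re

def split_reasoning(output: str) -> tuple[str, str]:
    """Separate the model's reasoning trace from its final answer, assuming
    the chain of thought is delimited by <think>...</think> markers."""
    match = re.search(r"<think>(.*?)</think>", output, flags=re.DOTALL)
    if match is None:
        return "", output.strip()          # no trace emitted; treat all as answer
    reasoning = match.group(1).strip()
    answer = output[match.end():].strip()
    return reasoning, answer

raw = "<think>AIME answers are integers; the factorization gives n = 12.</think>The answer is 12."
reasoning, answer = split_reasoning(raw)
print(answer)  # -> The answer is 12.
```

Exposing the trace this way is what the transparency claim amounts to in practice: callers can log, display, or discard the reasoning independently of the answer.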

One of the standout results? Phi-4-Reasoning-Plus achieved a higher first-pass score on the AIME 2025 math exam than a 70B model—an impressive feat that showcases its reasoning efficiency despite a smaller model size.

Takeaway

Microsoft’s Phi-4-Reasoning-Plus marks a turning point in AI development: high performance no longer depends on massive scale. This small but mighty model proves that with smarter training and tuning, compact LLMs can rival giants in performance—while being easier to deploy, more cost-effective, and openly available. It’s a big leap forward for accessible AI, especially for startups, educators, researchers, and businesses that need powerful reasoning without the heavy compute demands.
