
13.7.25

Moonshot AI’s Kimi K2: A Free, Open-Source Model that Tops GPT-4 on Coding & Agentic Benchmarks

 Moonshot AI, a Beijing-based startup backed by Alibaba, has thrown down the gauntlet to proprietary giants with the public release of Kimi K2—an open-source large language model that outperforms OpenAI’s GPT-4 in several high-stakes coding and reasoning benchmarks. 

What Makes Kimi K2 Different?

  • Massive—but Efficient—MoE Design
    Kimi K2 uses a mixture-of-experts (MoE) architecture: 1 trillion total parameters with only 32 B active per token. That means GPT-4-level capability without GPT-4-level hardware (a minimal routing sketch follows this list).

  • Agentic Skill Set
    The model is optimized for tool use: autonomously writing, executing and debugging code, then chaining those steps to solve end-to-end tasks—no external agent wrapper required. 

  • Benchmark Dominance

    • SWE-bench Verified: 65.8 % (previous open-source best ≈ 59 %)

    • Tau2 & AceBench (multi-step reasoning): tops all open models, matches some closed ones.

  • Totally Free & Open
    Weights, training scripts and eval harnesses are published on GitHub under a permissive, MIT-style license—a sharp contrast to the closed policies of OpenAI, Anthropic and Google.
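
To make the MoE bullet concrete, here is a minimal top-k routing sketch in PyTorch. The dimensions, expert count and k below are illustrative toy values, not Kimi K2's published configuration; production routers also add load-balancing losses and shared experts.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy mixture-of-experts layer: each token runs through only k of
    n_experts feed-forward blocks, so the active parameters per token are
    a small fraction of the total parameter count."""

    def __init__(self, d_model=512, d_ff=2048, n_experts=16, k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x):                                  # x: (tokens, d_model)
        gate = F.softmax(self.router(x), dim=-1)           # routing probabilities
        weights, idx = gate.topk(self.k, dim=-1)           # top-k experts per token
        weights = weights / weights.sum(-1, keepdim=True)  # renormalize the k gates
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            rows, slot = (idx == e).nonzero(as_tuple=True) # tokens routed to expert e
            if rows.numel():
                out[rows] += weights[rows, slot].unsqueeze(-1) * expert(x[rows])
        return out

print(TopKMoE()(torch.randn(4, 512)).shape)  # torch.Size([4, 512])
```

The payoff is the compute profile: total capacity scales with n_experts, while per-token FLOPs scale only with k.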

Why Moonshot Is Giving It Away

Moonshot’s strategy mirrors Meta’s Llama playbook: open weights become a developer-acquisition flywheel. Every engineer who fine-tunes or embeds Kimi K2 is a prospect for Moonshot’s paid enterprise support and customized cloud instances.

Early Use Cases

  • Software Engineering – generates minimal bug-fix diffs that pass repo test suites.

  • Data-Ops Automation – uses built-in function calling to orchestrate pipelines without bespoke agents.

  • AI Research – serves as an open baseline for tool-augmented reasoning experiments.
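
Under the hood, the Data-Ops bullet above is the now-standard tool-calling loop. Below is a minimal sketch against an OpenAI-compatible endpoint; the base URL, API key, model name, and the run_pipeline tool are placeholders of mine, not Moonshot's published interface.

```python
import json
from openai import OpenAI  # any OpenAI-compatible endpoint works

# Placeholder endpoint and model id; substitute the provider you actually use.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

def run_pipeline(name: str) -> str:
    return f"pipeline {name} finished: 0 errors"  # stand-in for real orchestration

tools = [{
    "type": "function",
    "function": {
        "name": "run_pipeline",
        "description": "Run a named data pipeline and report its status.",
        "parameters": {
            "type": "object",
            "properties": {"name": {"type": "string"}},
            "required": ["name"],
        },
    },
}]

messages = [{"role": "user",
             "content": "Run the nightly ETL pipeline and summarize the result."}]
resp = client.chat.completions.create(model="kimi-k2", messages=messages, tools=tools)
call = resp.choices[0].message.tool_calls[0]            # model decides to call the tool
result = run_pipeline(**json.loads(call.function.arguments))
messages += [resp.choices[0].message,
             {"role": "tool", "tool_call_id": call.id, "content": result}]
final = client.chat.completions.create(model="kimi-k2", messages=messages, tools=tools)
print(final.choices[0].message.content)                 # natural-language summary
```

The "agentic" claim is that Kimi K2 handles the decide-call-observe-continue cycle reliably enough that no heavier agent framework is needed around this loop.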

Limitations & Roadmap

Kimi K2 is text-only (for now) and lacks the multimodal chops of Gemini 2.5 or GPT-4o. Moonshot says an image-and-code variant and a quantized 8 B edge model are slated for Q4 2025. 


Takeaway
Kimi K2 signals a tipping point: open models can now match—or beat—top proprietary LLMs in complex, real-world coding tasks. For developers and enterprises evaluating AI stacks, the question is no longer if open source can compete, but how quickly they can deploy it.

3.7.25

Baidu Open-Sources ERNIE 4.5: A Full LLM Family from 0.3 B to 424 B Parameters

 

A Flagship Release for the Open-Source Community

On July 1, 2025, Baidu announced the open-source launch of ERNIE 4.5, a complete large-language-model family scaling from 0.3 billion to 424 billion parameters. The weights, training code, and evaluation suites are now freely available to researchers and enterprises under the Apache 2.0 license.

Six Sizes, One Architecture

Model | Dense / MoE | Context Window | Target Hardware* | Intended Use
ERNIE-Tiny 0.3B | Dense | 16 K | Mobile/Edge | Lightweight chat & IoT
ERNIE-Base 7B | Dense | 32 K | 1× A10 24 GB | Mainstream apps
ERNIE-Large 34B | Dense | 128 K | 2× A100 80 GB | RAG & agents
ERNIE-XL 124B | MoE (8 experts) | 256 K | 4× H100 80 GB | Multimodal research
ERNIE-Mega 276B | MoE (16 experts) | 256 K | 8× H100 80 GB | Enterprise AI
ERNIE-Ultra 424B | MoE (24 experts) | 1 M | TPU v5p / 16× H100 | Frontier-level reasoning

*at int8 + FlashAttention-2 settings
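
As a sanity check on the hardware column, a rough rule of thumb is about one byte per weight at int8, before KV cache and activation overhead. The helper below is my own back-of-the-envelope estimate, not Baidu's sizing methodology.

```python
def int8_weight_gb(params_billion: float) -> float:
    """~1 byte per parameter at int8, ignoring KV cache and activations."""
    return params_billion * 1e9 / 2**30

for name, p in [("ERNIE-Base 7B", 7), ("ERNIE-Large 34B", 34),
                ("ERNIE-Ultra 424B", 424)]:
    print(f"{name}: ~{int8_weight_gb(p):.0f} GB of int8 weights")
# ERNIE-Large 34B -> ~32 GB, so 2x A100 80 GB leaves generous KV-cache headroom
```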

Technology Highlights

  • FlashMask Dynamic Attention – a masking scheme that activates only the most relevant key-value blocks per token, cutting memory by 40 % while retaining context depth (a toy sketch follows this list).

  • Heterogeneous Multimodal MoE – vision-audio experts share early layers with text, enabling cross-modal reasoning without separate encoders.

  • Knowledge-Centric Corpus – Baidu’s in-house “Wenxin KG-2” injects 4 T tokens of curated facts and regulations, boosting compliance answers.

  • Self-Feedback Post-Training – iterative reflection steps reduce hallucination rate by 28 % vs. ERNIE 4.0.
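
To illustrate the FlashMask bullet, here is a toy NumPy version of block-level top-k attention as I read the description: each query scores key-value blocks and attends only within the best ones. The mean-pooled block summaries, block size, and keep count are my own simplifications, not Baidu's kernel.

```python
import numpy as np

def block_sparse_attention(q, k, v, block=4, keep=2):
    """Toy FlashMask-style attention: each query attends only to the
    `keep` key/value blocks with the highest relevance score."""
    n, d = q.shape
    nblocks = n // block
    kb = k.reshape(nblocks, block, d).mean(axis=1)      # (nblocks, d) block summaries
    block_scores = q @ kb.T                             # (n, nblocks) query-block scores
    top = np.argsort(block_scores, axis=-1)[:, -keep:]  # top blocks per query

    out = np.zeros_like(q)
    for i in range(n):
        idx = np.concatenate([np.arange(b * block, (b + 1) * block) for b in top[i]])
        scores = q[i] @ k[idx].T / np.sqrt(d)           # attend inside kept blocks only
        w = np.exp(scores - scores.max())
        out[i] = (w / w.sum()) @ v[idx]
    return out

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(16, 8)) for _ in range(3))
print(block_sparse_attention(q, k, v).shape)  # (16, 8)
```

A real kernel would fuse the block selection into the attention computation itself; the point here is only the block-level top-k idea behind the claimed memory savings.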

Benchmark Performance

Benchmark (June 2025) | GPT-4.5* | ERNIE 4.5-Ultra 424B | ERNIE 4.5-Large 34B
MMLU (5-shot) | 88.7 % | 89.3 % | 82.1 %
MathGLUE | 55.4 % | 57.2 % | 48.0 %
VQA-v2 (zero-shot) | 83.0 % | 84.6 % | 78.9 %
HumanEval+ (code) | 93.5 % | 94.1 % | 87.3 %

*closed model; public leaderboard values. ERNIE 4.5 data from Baidu release notes.

Why It Matters

  1. End-to-End Transparency – full training configs (FlashMask, MoE routing, safety filters) are published, enabling reproducible research.

  2. Scalable Deployment – identical API across sizes lets startups choose Tiny/7B locally and swap to 424B in the cloud without prompt changes (a load-and-swap sketch follows this list).

  3. Multilingual & Multimodal – supports 34 languages and native image, audio, and short-video tokens out of the box.

  4. Cost Innovation – FlashMask and MoE shrink inference FLOPs by up to 55 % versus dense GPT-4-class models, lowering GPU bills for enterprise users.
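
Point 2 above is essentially a one-line change in practice. The sketch below uses the standard Hugging Face transformers loading path; the checkpoint ids follow this post's table naming and are placeholders, so confirm the actual repo names on Baidu's Hugging Face org before use.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder id; swap to e.g. "baidu/ERNIE-Ultra-424B" with no other changes.
CHECKPOINT = "baidu/ERNIE-Base-7B"

tok = AutoTokenizer.from_pretrained(CHECKPOINT, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    CHECKPOINT, trust_remote_code=True, device_map="auto"
)

prompt = "Summarize the key ideas behind mixture-of-experts models."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```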

Access & Tooling

  • Hugging Face Hub – weights and safetensors for all six checkpoints.

  • Docker & vLLM Images – ready-to-serve stacks with Triton / TensorRT-LLM (a minimal serving sketch follows this list).

  • Agent Starter Kits – sample Model-Context-Protocol (MCP) tools for retrieval, calculators, and code execution.

  • Chinese & English Docs – prompt templates, fine-tuning scripts, and safety policy examples.
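
For the vLLM bullet, a minimal offline-generation sketch with vLLM's standard Python API looks like the following; the checkpoint id is again a placeholder rather than a confirmed repo name.

```python
from vllm import LLM, SamplingParams

# Placeholder checkpoint id; substitute the published ERNIE 4.5 repo.
llm = LLM(model="baidu/ERNIE-Base-7B", trust_remote_code=True)
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(
    ["Explain FlashMask-style sparse attention in one paragraph."], params
)
print(outputs[0].outputs[0].text)
```

The same checkpoint can also be exposed as an OpenAI-compatible HTTP server via vLLM's serve mode, which is what the ready-to-serve Docker images package up.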

Roadmap

Baidu’s research blog notes upcoming “ERNIE 4.6” experiments with FlashMask-2 and sparse Mixture-of-Experts vision heads, plus a policy-aligned Turbo variant targeting 80 % cheaper inference for chat applications.


Takeaway
With ERNIE 4.5, Baidu throws open the doors to a fully transparent, parameter-scalable, multimodal LLM family—giving practitioners a home-grown alternative to closed giants and pushing the frontier of what open-source models can achieve.
