Showing posts with label long-context. Show all posts

2.9.25

Jet-Nemotron: NVIDIA’s post-training NAS makes small LLMs fast and smart

 For years, efficient-attention models traded speed for smarts. Jet-Nemotron, from NVIDIA researchers, tries to end that bargain with a pragmatic recipe: don’t pretrain a new architecture—start from a strong full-attention model, keep its MLPs, and search only the attention stack. They call it Post Neural Architecture Search (PostNAS), and the result is a 2–4B-parameter family that rivals or beats same-size full-attention baselines while massively upping tokens-per-second. 

What PostNAS actually does

PostNAS is a four-step, hardware-aware exploration loop layered on a pre-trained LLM: (1) learn where to keep or drop full-attention layers; (2) select the best linear-attention block; (3) optionally design a new block (“JetBlock”); and (4) tune hyperparameters for real GPUs. Freezing MLP weights keeps search cheap while letting attention do the heavy lifting. 
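The placement step is easy to caricature. Below is a toy, brute-force version of step (1) under a made-up proxy score; the real PostNAS trains a once-for-all supernet and learns the placement rather than enumerating it:

```python
from itertools import combinations

def search_full_attention_placement(n_layers, budget, proxy_score):
    """Toy version of PostNAS step 1: with MLPs frozen, try every way to
    keep `budget` full-attention layers (the rest become linear attention)
    and keep the placement that maximizes a cheap proxy score."""
    best_mask, best = None, float("-inf")
    for keep in combinations(range(n_layers), budget):
        mask = [i in keep for i in range(n_layers)]
        s = proxy_score(mask)
        if s > best:
            best, best_mask = s, mask
    return best_mask, best

# Hypothetical proxy: pretend later layers matter most for retrieval.
score = lambda mask: sum(i * m for i, m in enumerate(mask))
mask, s = search_full_attention_placement(n_layers=6, budget=2, proxy_score=score)
print(mask)  # keeps the two highest-scoring positions
```

Because the MLP weights stay frozen, each candidate only retrains the attention stack, which is what keeps this kind of search affordable.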

JetBlock in a sentence

JetBlock mixes linear attention with dynamic, input-conditioned causal convolutions on values (and trims redundant static convs on Q/K), yielding accuracy gains with little runtime overhead. 

The headline numbers

  • Throughput: On H100s, Jet-Nemotron-2B logs up to 53.6× decoding and 6.14× prefilling speedups at 256K context vs Qwen3-1.7B-Base—and still shows gains at shorter contexts. 

  • Accuracy: Despite being hybrid (mostly linear attention), Jet-Nemotron-2B/4B match or beat leading full-attention peers (Qwen2.5/3, Gemma3, Llama3.2) across MMLU/Pro, math, retrieval, coding, and long-context suites at similar scales. 

  • Coding & long-context: In the paper’s tables, Jet-Nemotron-4B leads average coding accuracy and outpaces Qwen3-1.7B-Base on long-context tasks while running ~21× faster.

Why it’s fast (and why that matters)

A core finding is blunt but useful: KV-cache size, not parameter count, is the dominant limiter of long-context throughput. Keep KV small and you can batch more sequences; decoding is typically memory-bandwidth-bound. PostNAS bakes that into a hardware-aware search that tweaks heads/keys/values to hold speed while buying back accuracy. 
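The arithmetic behind that claim is easy to check. A back-of-envelope sketch (the layer and head counts below are invented for illustration, not Jet-Nemotron's actual config):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_el=2):
    # K and V each store seq_len * n_kv_heads * head_dim elements per layer.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_el

# Hypothetical 2B-class model: 28 layers, 8 KV heads of dim 128, bf16, 256K ctx.
full = kv_cache_bytes(28, 8, 128, 256_000)
# Hybrid keeping only 2 full-attention layers (linear layers hold O(1) state).
hybrid = kv_cache_bytes(2, 8, 128, 256_000)
print(full / 2**30, hybrid / 2**30, full / hybrid)
```

Shrinking per-sequence KV from tens of GiB to a couple of GiB is what lets a memory-bandwidth-bound decoder batch many more sequences at once.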

Why it’s interesting for builders

  • Upgrade path, not a moonshot. You can retrofit an existing model: freeze MLPs, swap/search attention, and ship meaningful speedups without full pretraining. 

  • Hybrid done right. Strategically retain a few full-attention layers (learned placement beats uniform) to keep retrieval and tricky benchmarks strong. 

  • Long-context economics. If you serve 128K–256K prompts, the 53.6× decoding and 6.14× prefilling gains translate directly into lower latency or higher concurrency. 

Bottom line

Jet-Nemotron reframes efficient LMs as an architecture-search problem on top of pre-trained backbones. With JetBlock and a KV-aware, GPU-realistic search, it shows you don’t have to choose between accuracy and speed—especially at long context lengths that crush classic Transformers. 

Paper link: arXiv 2508.15884 (PDF)

16.8.25

“Speed Always Wins” is the field guide to building faster, cheaper LLMs

 Transformers scaled LLMs to jaw-dropping capabilities—but quadratic attention and ballooning KV caches are throttling real-world deployment. A new survey from Shanghai AI Lab, HKUST(GZ) and collaborators takes stock of what’s next, categorizing the ecosystem of efficient LLM architectures and where each shines. Think of it as a build sheet for teams trying to cut latency and cost without giving up quality. 

The efficiency playbook, in seven parts

  • Linear sequence modeling: from linearized attention to linear RNNs and state-space models that drop the KV cache and push complexity toward O(N).

  • Sparse sequence modeling: static, dynamic, and training-free sparsity to compute only the most useful token-token interactions. 

  • Efficient full attention: keep softmax attention but make it practical with IO-aware, grouped, mixture, and quantized attention variants. 

  • Sparse Mixture-of-Experts: routing, expert designs and MoE conversion to grow capacity without proportional FLOPs.

  • Hybrid architectures: inter-layer and intra-layer mixes that blend linear blocks with full attention for a better speed/quality trade-off. 

  • Diffusion LLMs: non-autoregressive generation, bridges back to AR, and early steps to extend diffusion approaches to multimodality. 

  • Beyond text: how these efficiency ideas transfer to vision, audio, and multimodal stacks. 
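To make the first bullet concrete, here is a minimal pure-Python sketch of causal linear attention: key-value outer products are accumulated into a fixed-size state, so each step costs a constant amount regardless of sequence length and nothing grows with N. (Identity feature map on non-negative inputs for brevity; real designs use kernels like elu+1.)

```python
def linear_attention(qs, ks, vs):
    """O(N) causal linear attention via a running outer-product state:
    out_t = phi(q_t)^T S_t / (phi(q_t) . z_t), where S_t sums k_s v_s^T
    and z_t sums k_s over all s <= t."""
    d, dv = len(qs[0]), len(vs[0])
    S = [[0.0] * dv for _ in range(d)]   # state: sum of k outer v
    z = [0.0] * d                        # normalizer: sum of k
    outs = []
    for q, k, v in zip(qs, ks, vs):
        for i in range(d):
            z[i] += k[i]
            for j in range(dv):
                S[i][j] += k[i] * v[j]
        denom = sum(q[i] * z[i] for i in range(d)) or 1.0
        outs.append([sum(q[i] * S[i][j] for i in range(d)) / denom
                     for j in range(dv)])
    return outs

out = linear_attention([[1.0, 0.0]] * 2,          # queries
                       [[1.0, 0.0], [0.0, 1.0]],  # keys
                       [[1.0], [5.0]])            # values
print(out)
```

The trade-off the survey catalogues is exactly this: the fixed-size state buys O(N) cost but compresses history, which is why hybrids keep a few softmax-attention layers.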

Why this matters now

Long-context patterns—RAG, agentic tool use, deliberate reasoning, and multimodal inputs—are pushing sequence lengths and memory pressure through the roof. The survey frames these usage patterns and argues that architectural efficiency, not just better prompts or hardware, is the lever that scales the next wave of applications. 

A roadmap, not just a reading list

Beyond taxonomy, the paper stitches trends into a blueprint: pick linear/sparse methods to kill KV bloat, use efficient-full-attention where fidelity matters, layer in MoE for capacity, and consider hybrids or diffusion LLMs where generation style allows. There’s also a companion GitHub “Awesome-Efficient-Arch” list to track the space as it moves. 

If you’re building agents that browse, reason and call tools all day—or multimodal systems juggling video and audio—this survey is a timely map of the fastest lanes through today’s LLM bottlenecks.

Paper link: arXiv 2508.09834 (PDF)

31.7.25

LangExtract: Google’s Gemini-Powered Library That Turns Raw Text into Reliable Data

 

A new way to mine insight from messy text

On July 30, 2025 the Google Developers Blog unveiled LangExtract, an open-source Python package that promises to “unlock the data within” any text-heavy corpus, from clinical notes to customer feedback threads. Built around Gemini models but compatible with any LLM, the project aims to replace brittle regex pipelines with a single declarative interface for extraction, visualization and traceability. 

Why LangExtract stands out

LangExtract combines seven features that rarely appear together in one tool:

  1. Precise source grounding – every entity you pull out is linked back to its exact character span in the original document, so auditors can see where a value came from.

  2. Schema-enforced outputs – you describe the JSON you want, add a few examples, and the library leverages Gemini’s controlled generation to keep responses on-spec.

  3. Long-context optimisation – chunking, parallel passes and multi-stage recall tame “needle-in-a-haystack” searches across million-token inputs.

  4. Interactive HTML visualisation – one command turns results into a self-contained page where extractions glow inside the source text.

  5. Flexible back-ends – swap Gemini for on-device Ollama models or any OpenAI-compatible endpoint.

  6. Domain agnosticism – the same prompt-plus-examples recipe works for finance, law, medicine or literature.

  7. Apache-2.0 licence – no gating, just pip install langextract.
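The schema-enforcement idea (feature 2) can be sketched without the library: declare the JSON shape you expect and validate every model reply against it. This is not the LangExtract API, just a stdlib stand-in for the concept:

```python
import json

# Minimal stand-in for schema-enforced extraction: declare the shape you
# want, then reject any LLM reply that drifts from it before it reaches
# downstream pipelines. Field names here are invented for illustration.
SCHEMA = {"character": str, "emotion": str, "quote": str}

def validate(raw_reply):
    obj = json.loads(raw_reply)
    for field, typ in SCHEMA.items():
        if not isinstance(obj.get(field), typ):
            raise ValueError(f"field {field!r} missing or not {typ.__name__}")
    return obj

reply = '{"character": "Romeo", "emotion": "longing", "quote": "It is the east"}'
print(validate(reply)["character"])
```

LangExtract pushes this check upstream: by passing the schema and examples to Gemini's controlled generation, most malformed replies never get produced in the first place.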

How it works in practice

A “quick-start” script pulls Shakespeare characters, emotions and relationships in about a dozen lines of code, then writes an interactive HTML overlay showing each extraction highlighted inside the play. The same pattern scales: push the full Romeo and Juliet text through three extraction passes and LangExtract surfaces hundreds of grounded entities while keeping recall high.

The GitHub repository already counts 200+ stars less than a week after launch, and ships with examples for medication extraction and structured radiology reporting—fields where provenance and accuracy are critical. A live Hugging Face demo called RadExtract shows the library converting free-text X-ray reports into structured findings, then color-coding the original sentences that justify each data point. 

Under the hood: Gemini plus controlled generation

When you pass model_id="gemini-2.5-flash" (or -pro for harder tasks), LangExtract automatically applies Google’s controlled generation API to lock output into the schema you defined. That means fewer JSON-parse errors and cleaner downstream pipelines—something traditional LLM calls often fumble. For massive workloads, Google recommends a Tier-2 Gemini quota to avoid rate limits. 

Why developers should pay attention

Information extraction has long oscillated between hand-tuned rules (fast but brittle) and heavyweight ML pipelines (accurate but slow to build). LangExtract offers a third path: prompt-programming simplicity with enterprise-grade traceability. Because it’s open-source, teams can audit the chain of custody and fine-tune prompts to their own compliance rules instead of black-box vendor filters.

Whether you’re structuring earnings calls, tagging sentiment in product reviews, or mapping drug-dosage relationships in EMRs, LangExtract turns unreadable text into queryable data—without sacrificing transparency. For AI enthusiasts, it’s also a practical showcase of what Gemini’s long-context and schema-control features can do today.

Bottom line: install the package, craft a clear prompt, add a few gold examples, and LangExtract will handle the rest—from parallel chunking to an HTML dashboard—so you can move straight from raw documents to actionable datasets.

22.7.25

Qwen3-235B-A22B-Instruct-2507: Alibaba’s New Open-Weight Flagship Redefines Efficient Megamodels

 When the Qwen team hit “post” on X announcing Qwen3-235B-A22B-Instruct-2507—plus a lightweight FP8 variant—the tweet felt less like routine release notes and more like a thunderclap across AI Twitter. The thread promised “better across the board” performance and immediate open-weights access, positioning Qwen as the most aggressive big-model vendor in the open ecosystem. 



Inside the Model

Under the hood, the new model keeps the mixture-of-experts (MoE) recipe that made earlier Qwen3 builds special: 128 experts, but only 8 fire on each forward pass, so just 22 B parameters are active even though the full network tops out at 235 B. That efficiency allows 256 K tokens of native context and enables consumer-grade deployments that once demanded datacenter GPUs. 
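The active-parameter count falls out of simple routing arithmetic. In this sketch the split between shared weights (attention, embeddings) and expert weights is assumed for illustration, not taken from the model card:

```python
def moe_active_params(total_expert_params, n_experts, top_k, shared_params):
    """Rough MoE accounting: only top_k of n_experts run per token, so the
    active count is shared weights plus the routed fraction of expert weights."""
    return shared_params + total_expert_params * top_k / n_experts

total, shared = 235e9, 8e9   # shared-weight share is an assumption
active = moe_active_params(total - shared, n_experts=128, top_k=8,
                           shared_params=shared)
print(active / 1e9)  # roughly the advertised ~22B active parameters
```

Per-token compute scales with the ~22 B active parameters, not the 235 B total, which is what makes the long native context economically viable.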

Benchmark Shockwaves

Numbers published with the release show why the community’s jaw dropped. On the notoriously tricky ARC-AGI benchmark, Qwen3-235B-A22B-Instruct-2507 scores 41.8 %, eclipsing Moonshot’s freshly minted Kimi K2 by nearly 29 points and edging ahead of Claude Opus 4 in non-thinking mode. Coding (LiveCodeBench v6) jumps to 51.8 %, and reasoning tasks like AIME25 leap to 70.3 %. In most rows of the evaluation table, the new Qwen flagships sit comfortably ahead of DeepSeek-V3, o3-mini, and OpenAI’s o1 reference. 

Why an FP8 Build Matters

Alongside the bf16 release, Alibaba published a fully FP8-quantised version. Dropping to eight-bit floats slashes VRAM by roughly 40 % while preserving accuracy, paving the way for single-GPU inference on high-memory accelerators. Apache-2.0 licensing means startups can bake the FP8 weights directly into commercial products without costly negotiations. 
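Weight memory alone halves when you drop from 16-bit to 8-bit floats; the cited ~40 % overall saving is lower because KV cache, activations and runtime buffers typically stay in higher precision. Quick arithmetic:

```python
def weight_gib(n_params, bytes_per_param):
    # Memory for the weights alone, ignoring KV cache and activations.
    return n_params * bytes_per_param / 2**30

params = 235e9
bf16 = weight_gib(params, 2)  # 16-bit floats: 2 bytes per parameter
fp8 = weight_gib(params, 1)   # 8-bit floats: 1 byte per parameter
print(bf16, fp8)  # FP8 halves the weight footprint
```

Even so, ~219 GiB of FP8 weights still needs a multi-GPU node for full-precision-free serving; the win is fitting in half the cards, not in one.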

Community Reception: K2 Who?

Reddit’s r/singularity lit up within minutes: “Kimi K2 is already irrelevant,” read the top-voted post, linking to the Qwen tweet and highlighting the model’s 4.2× smaller total size yet broader win-rate. Analysts on Interconnects echoed the sentiment, framing the drop as part of a summer in which Chinese labs “continue to dominate” the open-weight leaderboard and openly court Western builders. 

Beyond Benchmarks: Agentic DNA

Qwen3’s team stresses that the instruct model is tuned for tool-calling and agent workflows. The official model card shows code snippets for integrating with Qwen-Agent and MCP config files, underscoring Alibaba’s push toward practical automation at 262 K-token scale—think mega-docs, legal contracts or multi-day chat histories without windowing hacks. 

Why It Matters

Qwen3-235B-A22B-Instruct-2507 sets a new bar for “open yet frontier-grade.” By decoupling “thinking” and “non-thinking” modes into separate models, Alibaba embraced community feedback while sidestepping latency complaints. The result is a release that:

  • outperforms larger proprietary models on knowledge, reasoning, and multilingual tests;

  • ships under a permissive license;

  • arrives in both bf16 and FP8 flavors for hobbyists and enterprises alike;

  • proves that giant MoEs can be resource-friendly—and, crucially, available today.

For AI enthusiasts and builders, the message is clear: grab the weights, spin up your agent stack, and see how far 22 B active parameters can take you. The open-source race just found a new pacesetter.

13.7.25

Microsoft’s Phi-4-mini-Flash-Reasoning: A 3.8 B “Pocket” LLM that Delivers 10× Faster Long-Context Logic at the Edge

 

🚀 Why This Release Matters

Microsoft’s Azure AI team has pushed its Phi small-model family forward with Phi-4-mini-Flash-Reasoning, a compact LLM purpose-built for latency-sensitive maths, logic and coding tasks. Despite running on as little as a single smartphone-class GPU or 4 GB of VRAM, the model matches—or beats—larger 6–8 B baselines in reasoning accuracy while generating tokens up to 10 times faster.


🧩 Inside the Compact “Flash” Architecture

| Innovation | Function | Impact |
| --- | --- | --- |
| SambaY Self-Decoder | Fuses Mamba state-space layers with Sliding-Window Attention plus a single global-attention layer | Linear-time pre-fill, local context capture, long-range memory without quadratic cost |
| Gated Memory Unit (GMU) | Lightweight gating layer that shares hidden states across decoder blocks | Up to 40 % fewer FLOPs per token with no quality loss |
| Decoder–Hybrid–Decoder Layout | Alternates full attention with fast Mamba/SWA blocks | Retains a 64 K-token context window on edge devices |
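The sliding-window half of the recipe is simple to picture: each token attends only to a fixed trailing window, so per-layer attention cost and KV storage stop growing with context length. A minimal sketch:

```python
def sliding_window_allowed(t, window):
    """Causal sliding-window attention: token t sees only the last `window`
    positions, itself included; a single global layer would instead see
    every position in range(t + 1)."""
    return list(range(max(0, t - window + 1), t + 1))

print(sliding_window_allowed(10, window=4))  # [7, 8, 9, 10]
print(sliding_window_allowed(2, window=4))   # [0, 1, 2]
```

The Mamba layers then carry long-range information in a recurrent state, which is why one global-attention layer is enough to stitch the two together.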

📊 Benchmark Snapshot

| Test (single A100-80 GB) | Phi-4-mini-Flash | Phi-4-mini | Llama-3-8B-Instruct |
| --- | --- | --- | --- |
| Latency (256 tok) | ≈ 40 ms | 95 ms | 120 ms |
| Throughput (tok/s) | > 1 000 | 110 | 240 |
| Math500 Accuracy | 81 % | 78 % | 73 % |
| AIME-24/25 | 72 % | 70 % | 68 % |

The near-linear latency curve means generation remains snappy even as prompt length approaches tens of thousands of tokens—ideal for analytical workloads that feed entire textbooks or codebases into the model. 

🛠️ Developer Access & Tooling

  • Open Weights (MIT-style licence) on Hugging Face with sample notebooks and Docker images. 

  • Azure AI Foundry offers managed GPU endpoints, safety filters and function-calling out of the box. 

  • vLLM & TensorRT-LLM configs deliver the advertised speed on a single A100, H100, Jetson Orin or Apple M-series chip.


⚡ Real-World Use Cases

| Domain | Benefit |
| --- | --- |
| On-Device STEM Tutors | Instant step-by-step maths explanations on tablets—no cloud round-trips. |
| Industrial IoT Logic | Low-latency symbolic reasoning for quality checks and robotics arms. |
| AR/VR & Gaming | Local puzzle-solving or NPC logic with < 50 ms response time. |
| Customer-Service Bots | Fast rule-based reasoning without expensive server farms. |

🗺️ Roadmap

The Azure team hints that the SambaY + GMU blueprint will flow into a Phi-4-multimodal-flash edition later this year, bringing image and audio reasoning to the same edge-friendly footprint. 


🔑 Takeaway

Phi-4-mini-Flash-Reasoning proves that thoughtful architecture can outpace sheer parameter count. By marrying state-space efficiency with selective attention, Microsoft delivers GPT-class logic in a form factor small enough for phones and micro-servers—putting high-quality reasoning literally in your pocket.

For teams chasing ultra-low latency, privacy-preserving, or cost-sensitive deployments, this “flash” Phi is ready to plug in today.

8.7.25

Context Engineering in AI: Designing the Right Inputs for Smarter, Safer Large-Language Models

 

What Is Context Engineering?

In classic software, developers write deterministic code; in today’s AI systems, we compose contexts. Context engineering is the systematic craft of designing, organizing and manipulating every token fed into a large-language model (LLM) at inference time—instructions, examples, retrieved documents, API results, user profiles, safety policies, even intermediate chain-of-thought. Well-engineered context turns a general model into a domain expert; poor context produces hallucinations, leakage or policy violations. 


Core Techniques

| Technique | Goal | Typical Tools / Patterns |
| --- | --- | --- |
| Prompt Design & Templates | Give the model clear role, task, format and constraints | System + user role prompts; XML / JSON schemas; function-calling specs |
| Retrieval-Augmented Generation (RAG) | Supply fresh, external knowledge just-in-time | Vector search, hybrid BM25+embedding, GraphRAG |
| Context Compression | Fit more signal into limited tokens | Summarisation, saliency ranking, LLM-powered “short-former” rewriters |
| Chunking & Windowing | Preserve locality in extra-long inputs | Hierarchical windows, sliding attention, FlashMask / Ring Attention |
| Scratchpads & CoT Scaffolds | Expose model reasoning for better accuracy and debuggability | Self-consistency, tree-of-thought, DST (Directed Self-Testing) |
| Memory & Profiles | Personalise without retraining | Vector memories, episodic caches, preference embeddings |
| Tool / API Context | Let models call and interpret external systems | Model Context Protocol (MCP), JSON-schema function calls, structured tool output |
| Policy & Guardrails | Enforce safety and brand style | Content filters, regex validators, policy adapters, YAML instruction blocks |
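As a tiny illustration of the compression and ranking rows, here is a greedy packer that fills a token budget in saliency order; the snippets, scores and whitespace tokenizer are all invented for the example:

```python
def pack_context(snippets, budget_tokens):
    """Greedy context packing: take snippets in saliency order until the
    token budget is spent, a toy stand-in for compression/ranking stages."""
    chosen, used = [], 0
    for text, score in sorted(snippets, key=lambda s: -s[1]):
        n = len(text.split())            # crude whitespace token count
        if used + n <= budget_tokens:
            chosen.append(text)
            used += n
    return chosen

snippets = [("refund policy: 30 days", 0.9),
            ("company history since 1902", 0.2),
            ("ticket #4521 summary", 0.7)]
print(pack_context(snippets, budget_tokens=9))
```

Production systems replace the whitespace count with the model's real tokenizer and the static scores with a reranker, but the budget-driven shape of the pipeline is the same.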

Why It Matters

  1. Accuracy & Trust – Fact-filled, well-structured context slashes hallucination rates and citation errors.

  2. Privacy & Governance – Explicit control over what leaves the organisation or reaches the model helps meet GDPR, HIPAA and the EU AI Act.

  3. Cost Efficiency – Compressing or caching context can cut token bills by 50-80 %.

  4. Scalability – Multi-step agent systems live or die by fast, machine-readable context routing; good design tames complexity.


High-Impact Use Cases

| Sector | How Context Engineering Delivers Value |
| --- | --- |
| Customer Support | RAG surfaces the exact policy paragraph and recent ticket history, enabling a single prompt to draft compliant replies. |
| Coding Agents | Function-calling + repository retrieval feed IDE paths, diffs and test logs, letting models patch bugs autonomously. |
| Healthcare Q&A | Context filters strip PHI before retrieval; clinically-approved guidelines injected to guide safe advice. |
| Legal Analysis | Long-context models read entire case bundles; chunk ranking highlights precedent sections for argument drafting. |
| Manufacturing IoT | Streaming sensor data is summarised every minute and appended to a rolling window for predictive-maintenance agents. |

Designing a Context Pipeline: Four Practical Steps

  1. Map the Task Surface
    • What knowledge is static vs. dynamic?
    • Which external tools or databases are authoritative?

  2. Define Context Layers
    Base prompt: role, format, policy
    Ephemeral layer: user query, tool results
    Memory layer: user or session history
    Safety layer: filters, refusal templates

  3. Choose Retrieval & Compression Strategies
    • Exact text (BM25) for short policies; dense vectors for semantic match
    • Summaries or selective quoting for large PDFs

  4. Instrument & Iterate
    • Log token mixes, latency, cost
    • A/B test different ordering, chunking, or reasoning scaffolds
    • Use self-reflection or eval suites (e.g., TruthfulQA-Context) to measure gains
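The four layers from step 2 can be assembled mechanically; ordering is a design choice (safety material often goes last so it is hardest to override). A minimal sketch with invented content:

```python
def assemble_context(base, ephemeral, memory, safety):
    """Join the context layers into one prompt string. Order here is
    base -> memory -> ephemeral -> safety; empty layers are skipped."""
    parts = [base] + memory + ephemeral + [safety]
    return "\n\n".join(p for p in parts if p)

prompt = assemble_context(
    base="You are a support agent. Answer in JSON.",
    ephemeral=["User: where is my refund?",
               "Tool result: refund issued 3 days ago"],
    memory=["Session note: customer prefers short answers"],
    safety="Never reveal internal ticket IDs.")
print(prompt)
```

Making assembly a pure function like this is also what enables the instrumentation in step 4: you can log, diff and A/B test layer orderings deterministically.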


Emerging Tools & Standards

  • MCP (Model Context Protocol) – open JSON schema for passing tool output and trace metadata to any LLM, adopted by Claude Code, Gemini CLI and IBM MCP Gateway.

  • Context-Aware Runtimes – vLLM, Flash-Infer and Infinity Lite stream 128 K-1 M tokens with optimized KV caches.

  • Context Observability Dashboards – Startups like ContextHub show token-level diff, attribution and cost per layer.


The Road Ahead

As context windows expand to a million tokens and multi-agent systems proliferate, context engineering will sit alongside model training and fine-tuning as a first-class AI discipline. Teams that master it will ship assistants that feel domain-expert-smart, honest and cost-efficient—while everyone else will chase unpredictable black boxes.

Whether you’re building a retrieval chatbot, a self-healing codebase or an autonomous research agent, remember: the model is only as good as the context you feed it.

28.6.25

Google AI’s Gemma 3n Brings Full Multimodal Intelligence to Low-Power Edge Devices

 

A Mobile-First Milestone

Google has released Gemma 3n, a compact multimodal language model engineered to run entirely offline on resource-constrained hardware. Unlike its larger Gemma-3 cousins, the 3n variant was rebuilt from the ground up for edge deployment, performing vision, audio, video and text reasoning on devices with as little as 2 GB of RAM.

Two Ultra-Efficient Flavors

| Variant | Activated Params* | Typical RAM | Claimed Throughput | Target Hardware |
| --- | --- | --- | --- | --- |
| E2B | ≈ 2 B (per token) | 2 GB | 30 tokens / s | Entry-level phones, micro-PCs |
| E4B | ≈ 4 B | 4 GB | 50 tokens / s | Laptops, Jetson-class boards |

*Mixture-of-Experts routing keeps only a subset of the full network active, giving E2B speeds comparable to 5 B dense models and E4B performance near 8 B models.

Key Technical Highlights

  • Native Multimodality – Single checkpoint accepts combined image, audio, video and text inputs and produces grounded text output.

  • Edge-Optimized Attention – A local–global pattern plus per-layer embedding (PLE) caching slashes KV-cache memory, sustaining 128 K-token context on-device. 

  • Low-Precision Friendly – Ships with Q4_K_M quantization recipes and TensorFlow Lite / MediaPipe build targets for Android, iOS, and Linux SBCs.

  • Privacy & Latency – All computation stays on the device, eliminating round-trip delays and cloud-data exposure—critical for regulated or offline scenarios.

Early Benchmarks

| Task | 3n-E2B | 3n-E4B | Gemma 3-4B-IT | Llama-3-8B-Instruct |
| --- | --- | --- | --- | --- |
| MMLU (few-shot) | 60.1 | 66.7 | 65.4 | 68.9 |
| VQAv2 (zero-shot) | 57.8 | 61.2 | 60.7 | 58.3 |
| AudioQS (ASR) | 14.3 WER | 11.6 WER | 12.9 WER | 17.4 WER |

Despite the tiny footprint, Gemma 3n matches or outperforms many 4-8 B dense models across language, vision and audio tasks. 

Developer Experience

  • Open Weights (Apache 2.0) – Available on Hugging Face, Google AI Studio and Android AICore.

  • Gemma CLI & Vertex AI – Same tooling as larger Gemma 3 models; drop-in replacement for cloud calls when bandwidth or privacy is a concern.

  • Reference Apps – Google has published demos for offline voice assistants, real-time captioning, and hybrid AR experiences that blend live camera frames with text-based reasoning. 

Why It Matters

  1. Unlocks Edge-First Use Cases – Wearables, drones, smart-home hubs and industrial sensors can now run frontier-level AI without the cloud.

  2. Reduces Cost & Carbon – Fewer server cycles and no data egress fees make deployments cheaper and greener.

  3. Strengthens Privacy – Keeping raw sensor data on-device helps meet GDPR, HIPAA and other compliance regimes.

Looking Ahead

Google hints that Gemma 3n is just the first in a “nano-stack” of forthcoming sub-5 B multimodal releases built to scale from Raspberry Pi boards to flagship smartphones. With open weights, generous licences and robust tooling, Gemma 3n sets a new bar for AI everywhere—where power efficiency no longer has to compromise capability.

21.6.25

Mistral Elevates Its 24B Open‑Source Model: Small 3.2 Enhances Instruction Fidelity & Reliability

 Mistral AI has released Mistral Small 3.2, an optimized version of its open-source 24B-parameter multimodal model. This update refines rather than reinvents: it strengthens instruction adherence, improves output consistency, and bolsters function-calling behavior—all while keeping the lightweight, efficient foundations of its predecessor intact.


🎯 Key Refinements in Small 3.2

  • Accuracy Gains: Instruction-following performance rose from 82.75% to 84.78%—a solid boost in model reliability.

  • Repetition Reduction: Instances of infinite or repetitive responses dropped nearly twofold (from 2.11% to 1.29%)—ensuring cleaner outputs for real-world prompts.

  • Enhanced Tool Integration: The function-calling interface has been fine-tuned for frameworks like vLLM, improving tool-use scenarios.
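For context, a tool call in the common OpenAI-style wire format (generic, not Mistral-specific) is just JSON that the serving framework must parse and dispatch; "improved function calling" mostly means the model emits this shape reliably:

```python
import json

def parse_tool_call(raw):
    """Parse one OpenAI-style tool call. Note the nested encoding: the
    `arguments` field is itself a JSON string, a frequent failure point
    for models with weak function-calling behavior."""
    call = json.loads(raw)
    assert call["type"] == "function"
    name = call["function"]["name"]
    args = json.loads(call["function"]["arguments"])
    return name, args

raw = ('{"type": "function", "function": {"name": "get_weather", '
       '"arguments": "{\\"city\\": \\"Paris\\"}"}}')
print(parse_tool_call(raw))
```

A single stray token inside that nested string breaks the whole dispatch, which is why repetition and format drift reductions matter so much for agentic use.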


🔬 Benchmark Comparisons

  • Wildbench v2: Nearly 10-point improvement in performance.

  • Arena Hard v2: Scores jumped from 19.56% to 43.10%, showcasing substantial gains on challenging tasks.

  • Coding & Reasoning: Gains on HumanEval Plus (88.99→92.90%) and MBPP Pass@5 (74.63→78.33%), with slight improvements in MMLU Pro and MATH.

  • Vision benchmarks: Small trade-offs: overall vision score dipped from 81.39 to 81.00, with mixed results across tasks.

  • MMLU Slight Dip: A minor regression from 80.62% to 80.50%, reflecting nuanced trade-offs.


💡 Why These Updates Matter

Although no architectural changes were made, these improvements focus on polishing the model’s behavior—making it more predictable, compliant, and production-ready. Notably, Small 3.2 still runs smoothly on a single A100 or H100 80GB GPU, with roughly 55 GB of VRAM needed for full-precision inference—ideal for cost-sensitive deployments.


🚀 Enterprise-Ready Benefits

  • Stability: Developers targeting real-world applications will appreciate fewer unexpected loops or halts.

  • Precision: Enhanced prompt fidelity means fewer edge-case failures and cleaner behavioral consistency.

  • Compatibility: Improved function-calling makes Small 3.2 a dependable choice for agentic workflows and tool-based LLM work.

  • Accessible: Remains open-source under Apache 2.0, hosted on Hugging Face with support in frameworks like Transformers & vLLM.

  • EU-Friendly: Backed by Mistral’s Parisian roots and compliance with GDPR/EU AI Act—a plus for European enterprises.


🧭 Final Takeaway

Small 3.2 isn’t about flashy new features—it’s about foundational refinement. Mistral is doubling down on its “efficient excellence” strategy: deliver high performance, open-source flexibility, and reliability on mainstream infrastructure. For developers and businesses looking to harness powerful LLMs without GPU farms or proprietary lock-in, Small 3.2 offers a compelling, polished upgrade.
