
1.8.25

Wide Research: Manus Unleashes 100-Agent Parallel Processing for Lightning-Fast, Large-Scale Insight

 Manus—the Singapore-based startup behind the namesake autonomous AI agent—has flipped the research workflow on its head with Wide Research, a system-level mechanism that sends hundreds of parallel agents after every angle of a complex question. Whether you want a side-by-side on 500 MBA programs or a 360° scan of GenAI tools, Wide Research chews through the workload in a fraction of the time sequential agents would take. 


From Deep to Wide

Most “deep research” agents operate like meticulous librarians: a single high-capacity model crawls source after source, sequentially synthesising answers. It’s thorough—but agonisingly slow at scale. Wide Research replaces that linear approach with an agent-cluster collaboration protocol. Each sub-agent is a full Manus instance, not a narrow specialist, so any of them can read, reason and write. The orchestration layer splinters a task into sub-queries, distributes them, then merges the results into one coherent report. 

Why general-purpose sub-agents matter

Traditional multi-agent designs hard-code roles—“planner,” “coder,” “critic.” Those rigid templates break when a project veers off script. Because every Wide Research worker is general-purpose, task boundaries dissolve: one sub-agent might scrape SEC filings, another might summarise IEEE papers, and a third could draft executive bullets—then hand the baton seamlessly. 


Inside the Architecture

Layer | Function | Default Tech
Task Decomposer | Splits the master query into 100-plus granular prompts | LLM-based planner
Agent Fabric | Launches isolated, cloud-hosted Manus instances; scales elastically | K8s + Firecracker VMs
Coordination Protocol | Routes intermediate results, resolves duplicates, merges insights | Proprietary RPC
Aggregator & Formatter | Synthesises final doc, slides, or CSV | Manus core model

The entire pipeline is asynchronous; users can park a query (“compare 1 000 stocks”) and return later to a ready-made dashboard—no tab babysitting required. 
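
Manus has not published its orchestration code, but the fan-out / fan-in pattern described above is straightforward to sketch. Below is a minimal illustration in Python asyncio; the run_subagent coroutine and decompose helper are stand-ins for a worker instance and the LLM-based Task Decomposer, not the actual Manus API.

import asyncio

# Hypothetical stand-in for one general-purpose sub-agent (not the Manus API).
async def run_subagent(sub_query: str) -> dict:
    await asyncio.sleep(0.1)  # placeholder for browsing / reasoning work
    return {"query": sub_query, "finding": f"summary for {sub_query!r}"}

def decompose(master_query: str, n: int) -> list[str]:
    # The real Task Decomposer is LLM-based; here we simply shard the query.
    return [f"{master_query} (shard {i + 1}/{n})" for i in range(n)]

async def wide_research(master_query: str, n_agents: int = 100) -> list[dict]:
    sub_queries = decompose(master_query, n_agents)
    # Fan out: every shard gets its own worker; fan in: gather all results.
    results = await asyncio.gather(*(run_subagent(q) for q in sub_queries))
    return results  # an Aggregator step would merge these into one report

if __name__ == "__main__":
    findings = asyncio.run(wide_research("compare 100 sneakers on price, reviews, specs"))
    print(len(findings), "partial findings collected")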

Performance Snapshot

Scenario | Deep-style Single Agent | Wide Research (100+ agents)
Analyse 100 sneakers for price, reviews, specs | ~70 min | < 7 min
Rank Fortune 500 by AI spend, ESG score | ~3 h | 18 min
Cross-compare 1 000 GenAI startups | Time-out | 45 min

(Internal Manus demo data shown during launch.) 

Early Use Cases

  1. Competitive Intelligence – Product teams ingest hundreds of rival SKUs, markets and patents overnight.

  2. Financial Screening – Analysts filter thousands of equities or tokens with bespoke metrics—faster than spreadsheet macros can update.

  3. Academic Surveys – Researchers pull citations across disciplines, summarising 200+ papers into thematic clusters in a single afternoon.

Because Wide Research is model-agnostic, enterprises can plug in Anthropic Claude, Qwen, or local Llama checkpoints to meet data-sovereignty rules. 


Pricing & Roll-Out

  • Today: Wide Research is live for Pro subscribers (US $199/month).

  • Q3 2025: Gradual access for Plus and Basic tiers.

  • Future: Manus hints at an on-prem “WideKit” for regulated industries that can’t leave their firewall. 


Limitations & Trade-Offs

  • Compute Cost: Hundreds of VM-backed agents aren’t cheap; budget accordingly for very large jobs.

  • Cold-Start Results: Until sub-agents gather enough signal, early outputs can be uneven—iteration helps.

  • Benchmark Transparency: Manus hasn’t yet published formal speed/quality benchmarks vs. sequential baselines, though third-party analyses are emerging. 


The Bigger Picture

Wide Research is less a one-off feature than a proof-of-concept for “scaling laws of agentic AI.” Manus argues that throwing more capable agents at a problem—not merely larger context windows—can yield super-linear gains in throughput and idea diversity. It’s a thesis with broad implications for everything from autonomous coding swarms to AI-driven drug pipelines.

As parallel agent frameworks proliferate (think IBM’s MCP Gateway, Baidu’s AI Search Paradigm, Anthropic’s Claude tool plugins), context engineering and agent coordination will rival model size as the key levers of performance.


Key Takeaway

Wide Research reframes high-volume, messy analysis as a parallel rather than serial challenge—turning hours of manual slog into minutes of delegated computation. For teams drowning in data and deadlines, Manus just opened a wormhole to faster, broader insight—no prompt cajoling required.

21.7.25

Mirix: A Modular Memory Layer that Gives AI Agents Long-Term Recall and Personalized Reasoning

 

1 | Why “Memory” Is the Next AI Bottleneck

Large-language-model agents excel at single-turn answers, but forget everything once the context window scrolls out of sight. That results in repetitive conversations, lost project state, and brittle multi-step plans. Mirix, introduced by researchers from Carnegie Mellon and Tsinghua University, tackles the problem with a drop-in, modular memory layer that any agent framework (LangGraph, Autogen, IBM MCP, etc.) can call.


2 | How Mirix Works under the Hood

Layer | Purpose | Default Tech Stack
Ingestors | Capture raw events (chat turns, tool outputs, sensors) | Webhooks, Kafka, Postgres logical decode
Canonicalizer | Convert heterogeneous events to a common MemoryEvent schema with type, timestamp and embeddings | Pydantic, OpenAI embeddings-3-small
Memory Stores | Pluggable persistence engines; ships with a VectorDB (FAISS / Milvus), a Knowledge Graph (Neo4j) and a Document Store (Weaviate hybrid) | Drivers for each
Retrievers | Route agent queries to the right store; merge and de-dupe results; compress into 2-3 k tokens | Hybrid BM25 + vector; rank fusion
Reasoners | Optional small models that label sentiment, importance or user identity to prioritize what is stored or surfaced | DistilRoBERTa sentiment, MiniLM ranker
Key insight: memory need not live in a single DB; Mirix treats it as an orchestrated ensemble of stores, each optimised for a particular signal (facts vs. tasks vs. social cues).
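
As a rough illustration of the Canonicalizer row, here is a minimal Pydantic sketch of a MemoryEvent carrying the fields the table mentions (type, timestamp, embedding). The field names and the canonicalize helper are assumptions for illustration, not Mirix’s published schema.

from datetime import datetime, timezone
from typing import Literal, Optional

from pydantic import BaseModel, Field

class MemoryEvent(BaseModel):
    """Hypothetical canonical schema; field names are illustrative, not Mirix's actual API."""
    event_type: Literal["chat_turn", "tool_output", "sensor"]
    timestamp: datetime = Field(default_factory=lambda: datetime.now(timezone.utc))
    user_id: Optional[str] = None
    text: str
    embedding: Optional[list[float]] = None  # filled in by an embedding model downstream

def canonicalize(raw: dict) -> MemoryEvent:
    # Map a heterogeneous raw event onto the common schema before routing it to the stores.
    return MemoryEvent(
        event_type=raw.get("kind", "chat_turn"),
        user_id=raw.get("user"),
        text=raw["content"],
    )

event = canonicalize({"kind": "tool_output", "user": "u-42", "content": "tests passed on branch main"})
print(event.model_dump_json())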

3 | What It Enables

Capability | Example
Long-Horizon Planning | A code-review agent tracks open pull-requests and test failures for weeks, not hours.
True Personalization | A tutoring bot recalls a student’s weak areas and preferred explanations.
Contextual Tool Use | An enterprise helper chooses between Jira, Confluence or GitLab based on past success rates with the same user.

Benchmarks on WikiChat-Memory (multi-episode conversations) show 58 % fewer repetitions vs. vanilla RAG and 3.4 × higher success on 15-step task chains.

4 | Plugging Mirix into an Existing Agent


from mirix.memory import MemoryClient
from agentic import Agent

# Wire three stores into one memory client: vectors, graph, documents.
mem = MemoryClient(
    stores=[
        "faiss://embeddings",
        "neo4j://graph",
        "weaviate://docs",
    ]
)

# The memory client drops into the agent framework as its memory layer.
agent = Agent(llm="mistral-small-3.2", memory=mem)
response = agent.chat("Where did we leave the migration script last week?")
print(response)

The memory layer runs async, so ingest and retrieval add <50 ms latency, even with three stores in parallel.


5 | Governance & Cost Controls

  • Policy Filters: PII redaction rules determine what is persisted.

  • TTL & Eviction: Events expire after a configurable horizon (default 90 days) or when the embedding budget is hit.

  • Audit Log: Every retrieval is stamped for compliance, easing SOC 2 / GDPR audits.
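
As a rough sketch of how such controls might look in code, here is a small redaction-plus-TTL example; the pattern list, the TTL constant and the redact helper are illustrative assumptions, not Mirix’s actual configuration API.

import re
from datetime import timedelta

# Illustrative governance settings, not Mirix's actual configuration keys.
PII_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US SSN-like numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]
DEFAULT_TTL = timedelta(days=90)                 # matches the 90-day default above

def redact(text: str) -> str:
    """Apply policy filters before an event is persisted."""
    for pattern in PII_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(redact("Reach me at jane.doe@example.com about ticket 123-45-6789"))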


6 | Limitations & Roadmap

  • Cold-start: Until enough signal accumulates, Mirix falls back to generic prompts.

  • Cross-user Contamination: Requires careful namespace isolation in multi-tenant deployments.

  • Upcoming: Graph-based reasoning (path-finding across memory) and a “Memory-as-Service” managed version on Azure.


Final Takeaway

Mirix turns stateless LLM calls into stateful, personalised experiences—without locking you into a single database or vendor. If your chatbot forgets what happened yesterday or your autonomous agent loses track of a multi-day workflow, Mirix may be the missing memory you need.

The rise of Context Engineering: why LLM performance now lives and dies on what you feed it

 Prompt tricks and vector databases used to feel like nice-to-have extras for chatbots. A sprawling new study argues they have matured into a discipline of their own. Titled “A Survey of Context Engineering for Large Language Models,” the 165-page report from the Chinese Academy of Sciences, UC Merced and seven other universities positions context selection, shaping and storage as the primary lever for squeezing more capability out of ever-larger models. The team sifted through 1,400-plus research papers to build the first comprehensive roadmap of the space.

From prompt hacks to a three-pillar stack

The authors split Context Engineering into three foundational components:

  1. Context retrieval & generation – everything from classic prompt templates to dynamic external-knowledge acquisition.

  2. Context processing – long-sequence handling, self-refinement loops and multimodal or structured context fusion.

  3. Context management – memory hierarchies, compression schemes and token-budget optimisation.

These pillars support four dominant system archetypes: Retrieval-Augmented Generation (RAG), long-lived memory agents, tool-integrated reasoning (function calling, code execution) and fully fledged multi-agent frameworks.
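
The context-management pillar is the easiest to make concrete. Below is a minimal sketch of token-budget trimming that keeps the newest turns fitting under a budget; the crude token estimate and the function names are illustrative, not taken from the survey.

def approx_tokens(text: str) -> int:
    # Crude proxy; a real system would use the model's own tokenizer.
    return max(1, len(text) // 4)

def fit_to_budget(system_prompt: str, history: list[str], budget: int = 4096) -> list[str]:
    """Keep the system prompt plus as many of the most recent turns as the budget allows."""
    kept: list[str] = []
    used = approx_tokens(system_prompt)
    for turn in reversed(history):  # newest first
        cost = approx_tokens(turn)
        if used + cost > budget:
            break                   # older turns would be summarised or dropped
        kept.append(turn)
        used += cost
    return [system_prompt] + list(reversed(kept))

context = fit_to_budget("You are a helpful analyst.", [f"turn {i}: ..." for i in range(1000)], budget=512)
print(len(context), "messages kept within budget")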

Why the stakes keep rising

  • Bigger models, harsher limits. Even GPT-class contexts choke on enterprise-scale corpora; smarter pruning and compression decide whether answers stay on-topic or derail.

  • Agents need persistence. As LLM agents stretch across hours or days, hierarchical memory and context-refresh policies become as critical as the policy network itself.

  • Tool use explodes token demand. Function calls and code snippets are powerful but verbose; context engineering keeps them from crowding out the original question.

A looming research gap

Despite dramatic gains in understanding long and complex contexts, models remain weak at generating equally long, logically coherent outputs—a mismatch the survey brands the field’s “defining priority for future research.”

Practical takeaways for builders

  • Treat context like a first-class system resource—budget, cache and monitor it the way you would GPU memory.

  • Mix retrieval styles. Hybrid pipelines (keyword, dense, graph) outperform single-method RAG on complex queries; a rank-fusion sketch follows this list.

  • Plan for multi-layer memory. Short-term windows, episodic buffers and long-term stores each have distinct TTLs and compression trade-offs.
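
Mixing retrieval styles usually comes down to rank fusion. Here is a minimal sketch of reciprocal rank fusion over a keyword ranking and a dense ranking; the document IDs and rankings are made up for illustration.

from collections import defaultdict

def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked lists of document IDs; k dampens the weight of top ranks."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["doc_policy", "doc_faq", "doc_pricing"]
dense_hits = ["doc_pricing", "doc_policy", "doc_roadmap"]
print(reciprocal_rank_fusion([bm25_hits, dense_hits]))  # documents ranked high by both float to the top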

Published July 17 2025 with an accompanying GitHub “awesome list,” the survey is already circulating among infra and agent teams looking to squeeze more mileage out of existing checkpoints before the next trillion-parameter beast lands.

Paper link: arXiv 2507.13334 (PDF)

8.7.25

Context Engineering in AI: Designing the Right Inputs for Smarter, Safer Large-Language Models

 

What Is Context Engineering?

In classic software, developers write deterministic code; in today’s AI systems, we compose contexts. Context engineering is the systematic craft of designing, organizing and manipulating every token fed into a large-language model (LLM) at inference time—instructions, examples, retrieved documents, API results, user profiles, safety policies, even intermediate chain-of-thought. Well-engineered context turns a general model into a domain expert; poor context produces hallucinations, leakage or policy violations. 


Core Techniques

Technique | Goal | Typical Tools / Patterns
Prompt Design & Templates | Give the model a clear role, task, format and constraints | System + user role prompts; XML / JSON schemas; function-calling specs
Retrieval-Augmented Generation (RAG) | Supply fresh, external knowledge just-in-time | Vector search, hybrid BM25 + embedding, GraphRAG
Context Compression | Fit more signal into limited tokens | Summarisation, saliency ranking, LLM-powered “short-former” rewriters
Chunking & Windowing | Preserve locality in extra-long inputs | Hierarchical windows, sliding attention, FlashMask / Ring Attention
Scratchpads & CoT Scaffolds | Expose model reasoning for better accuracy and debuggability | Self-consistency, tree-of-thought, DST (Directed Self-Testing)
Memory & Profiles | Personalise without retraining | Vector memories, episodic caches, preference embeddings
Tool / API Context | Let models call and interpret external systems | Model Context Protocol (MCP), JSON-schema function calls, structured tool output
Policy & Guardrails | Enforce safety and brand style | Content filters, regex validators, policy adapters, YAML instruction blocks
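
To make a couple of these techniques concrete, here is a minimal sketch that combines a prompt template with retrieved snippets; the retriever is a stub and nothing below is tied to a specific framework.

def retrieve(query: str, k: int = 3) -> list[str]:
    # Stub: a real pipeline would hit a vector index or hybrid BM25 + embedding search.
    return [f"[snippet {i + 1} relevant to {query!r}]" for i in range(k)]

PROMPT_TEMPLATE = """You are a support assistant. Answer using ONLY the context below.
If the context is insufficient, say so.

Context:
{context}

Question: {question}
Answer:"""

def build_prompt(question: str) -> str:
    snippets = retrieve(question)
    return PROMPT_TEMPLATE.format(context="\n".join(f"- {s}" for s in snippets), question=question)

print(build_prompt("What is the refund window for annual plans?"))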

Why It Matters

  1. Accuracy & Trust – Fact-filled, well-structured context slashes hallucination rates and citation errors.

  2. Privacy & Governance – Explicit control over what leaves the organisation or reaches the model helps meet GDPR, HIPAA and the EU AI Act.

  3. Cost Efficiency – Compressing or caching context can cut token bills by 50-80 %.

  4. Scalability – Multi-step agent systems live or die by fast, machine-readable context routing; good design tames complexity.


High-Impact Use Cases

Sector | How Context Engineering Delivers Value
Customer Support | RAG surfaces the exact policy paragraph and recent ticket history, enabling a single prompt to draft compliant replies.
Coding Agents | Function-calling + repository retrieval feed IDE paths, diffs and test logs, letting models patch bugs autonomously.
Healthcare Q&A | Context filters strip PHI before retrieval; clinically-approved guidelines injected to guide safe advice.
Legal Analysis | Long-context models read entire case bundles; chunk ranking highlights precedent sections for argument drafting.
Manufacturing IoT | Streaming sensor data is summarised every minute and appended to a rolling window for predictive-maintenance agents.

Designing a Context Pipeline: Four Practical Steps

  1. Map the Task Surface
    • What knowledge is static vs. dynamic?
    • Which external tools or databases are authoritative?

  2. Define Context Layers (assembled in the sketch after this list)
    • Base prompt: role, format, policy
    • Ephemeral layer: user query, tool results
    • Memory layer: user or session history
    • Safety layer: filters, refusal templates

  3. Choose Retrieval & Compression Strategies
    • Exact text (BM25) for short policies; dense vectors for semantic match
    • Summaries or selective quoting for large PDFs

  4. Instrument & Iterate
    • Log token mixes, latency, cost
    • A/B test different ordering, chunking, or reasoning scaffolds
    • Use self-reflection or eval suites (e.g., TruthfulQA-Context) to measure gains
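
Here is a minimal sketch of the layering from step 2, assembling the four layers into one ordered message list; the layer contents and the helper name are illustrative, not a prescribed API.

def assemble_context(user_query: str, tool_results: list[str], session_history: list[str]) -> list[dict]:
    base_layer = {"role": "system", "content": "You are a compliance-aware assistant. Answer in Markdown."}
    safety_layer = {"role": "system", "content": "Refuse requests for personal data; cite sources."}
    memory_layer = [{"role": "assistant", "content": turn} for turn in session_history[-5:]]
    ephemeral_layer = [{"role": "user", "content": user_query}] + [
        {"role": "tool", "content": r} for r in tool_results
    ]
    # Order matters: policies first, then memory, then the freshest material.
    return [base_layer, safety_layer, *memory_layer, *ephemeral_layer]

messages = assemble_context("Summarise open incidents", ["3 open incidents in PagerDuty"], ["Yesterday we closed INC-101"])
for m in messages:
    print(m["role"], "->", m["content"][:60])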


Emerging Tools & Standards

  • MCP (Model Context Protocol) – open JSON schema for passing tool output and trace metadata to any LLM, adopted by Claude Code, Gemini CLI and IBM MCP Gateway.

  • Context-Aware Runtimes – vLLM, Flash-Infer and Infinity Lite stream 128 K-1 M tokens with optimized KV caches.

  • Context Observability Dashboards – Startups like ContextHub show token-level diff, attribution and cost per layer.


The Road Ahead

As context windows expand to a million tokens and multi-agent systems proliferate, context engineering will sit alongside model training and fine-tuning as a first-class AI discipline. Teams that master it will ship assistants that feel domain-expert-smart, honest and cost-efficient—while everyone else will chase unpredictable black boxes.

Whether you’re building a retrieval chatbot, a self-healing codebase or an autonomous research agent, remember: the model is only as good as the context you feed it.

 Most “agent” papers either hard-code reflection workflows or pay the bill to fine-tune the base model. Memento offers a third path: keep t...