Showing posts with label LLM Agents. Show all posts
Showing posts with label LLM Agents. Show all posts

4.8.25

The Agentic Web: when bots become the primary users of the internet

 Search boxes and feeds defined the first two web eras. A new position paper proposes the third: the Agentic Web, where autonomous software agents—often LLM-powered—act on our behalf, coordinate with other agents, and execute long-horizon tasks across services. The authors offer a working definition and argue the shift is already visible in consumer assistants that can plan purchases and book reservations end-to-end. 

A framework in three dimensions

The paper lays out a conceptual stack for this world: intelligence (reasoning, memory, planning), interaction (tools, APIs, multi-agent protocols), and economics (incentives, pricing, marketplaces). These dimensions, taken together, underpin capabilities like retrieval, recommendation, planning and collaboration that move beyond single-turn chat.

From retrieval to planning to coordination

Architecturally, the authors chart algorithmic transitions: user-issued queries give way to agentic retrieval; recommender systems evolve into agent planners; and isolated tools become multi-agent collectives able to decompose and delegate work. A worked example walks through agents co-planning a travel itinerary, highlighting orchestration and memory. 

New pipes: MCP and agent-to-agent messaging

HTTP and RPC weren’t built for autonomous, negotiated workflows. The paper surveys emerging Model Context Protocol (MCP) interfaces and purpose-built agent-to-agent (A2A) messaging layers to support capability discovery, tool brokering and structured negotiations between services—foundational plumbing for an internet of bots. 

The Agent Attention Economy

If algorithms once competed for human attention, services on the Agentic Web will compete to be selected by agents mid-plan. That reframes ranking, pricing and attribution around machine decision-makers—an attention market where tools, APIs and even other agents bid for inclusion in workflows. 

What breaks (and who pays)

The authors predict “agent browsers” will disrupt today’s user-centric browsing model, shifting interfaces from manual clicks to delegated execution. They also flag a looming billing problem for complex, multi-step agent services that span providers and time windows—who gets paid, and how, when dozens of tools contribute to one outcome? 

Risks, red teaming and defense

A full section maps threats across layers (prompt-/tool-injection, data exfiltration, compromised marketplaces), and compares human-in-the-loop versus automated red teaming for agent systems. The authors argue for hybrid approaches, inference-time guardrails, and controllable planning to keep autonomous workflows within safe bounds.

Why it matters

If the Agentic Web arrives, the primary “users” of the internet won’t be humans but agents negotiating with each other—demanding new protocols, marketplaces, governance and safety tooling. For startups, the opportunity is to build the pipes, policies and platforms that let those agents cooperate—and compete—reliably.

Paper link: arXiv 2507.21206 (PDF)

15.5.25

MLE-Dojo: A Gym-Style Framework for Training and Evaluating Autonomous Machine Learning Engineering Agents

 In a significant advancement for AI research, Georgia Tech and Stanford University have introduced MLE-Dojo, a Gym-style framework aimed at training, evaluating, and benchmarking autonomous machine learning engineering (MLE) agents. This innovative platform provides a realistic, interactive environment for agents to develop and refine their skills across a wide array of machine learning tasks.


What is MLE-Dojo?

MLE-Dojo is designed to simulate the iterative workflows of human machine learning engineers. It offers an environment where large language model (LLM) agents can write, execute, and debug code, receiving structured feedback to improve their performance over time. The framework is built upon over 200 real-world Kaggle competitions, encompassing diverse domains such as tabular data analysis, computer vision, natural language processing, and time series forecasting. 


Key Features

  • Interactive Environment: Agents engage in a loop of experimentation, debugging, and refinement, closely mirroring real-world engineering processes.

  • Comprehensive Task Suite: With over 200 curated tasks, MLE-Dojo provides a broad spectrum of challenges to test and improve agent capabilities.

  • Modular Architecture: Each task operates within its own Docker container, ensuring safety, reproducibility, and ease of integration with various tools and datasets.

  • Structured Feedback: Agents receive detailed observations, including datasets, execution results, and error messages, facilitating step-by-step learning and improvement.

  • Training Flexibility: Supports both supervised fine-tuning and reinforcement learning, allowing for diverse training methodologies. 


Benchmarking and Evaluation

MLE-Dojo serves as a benchmark to assess the performance of autonomous MLE agents. In evaluations involving eight frontier LLMs, the framework highlighted both the capabilities and limitations of current models, particularly in handling complex, long-horizon tasks and error resolution. 




Implications for AI Research

By providing a realistic and comprehensive environment, MLE-Dojo enables researchers to systematically train and evaluate autonomous agents in machine learning engineering tasks. This framework paves the way for the development of more robust, generalizable, and scalable AI agents capable of handling real-world engineering challenges


Access and Community Involvement

MLE-Dojo is open-source, encouraging community collaboration and innovation. Researchers and developers can access the framework and contribute to its ongoing development through the official GitHub repository: https://github.com/MLE-Dojo/MLE-Dojo.


Takeaway

MLE-Dojo represents a significant step forward in the training and evaluation of autonomous machine learning engineering agents. By simulating real-world tasks and providing structured feedback, it offers a valuable tool for advancing AI research and developing agents capable of complex problem-solving in dynamic environments.

 Google Research just unveiled MLE-STAR , a machine-learning engineering agent that treats model building like a guided search-and-refine lo...