19.5.25

Ultra-FineWeb: A Trillion-Token Dataset Enhancing LLM Accuracy Across Benchmarks

 Researchers from Tsinghua University and ModelBest have introduced Ultra-FineWeb, a large-scale, high-quality dataset comprising approximately 1 trillion English tokens and 120 billion Chinese tokens. This dataset aims to enhance the performance of large language models (LLMs) by providing cleaner and more efficient training data.

Efficient Data Filtering Pipeline

The creation of Ultra-FineWeb involved an efficient data filtering pipeline that addresses two main challenges in data preparation for LLMs:

  1. Lack of Efficient Data Verification Strategy:
    Traditional methods struggle to provide timely feedback on data quality. To overcome this, the researchers introduced a computationally efficient verification strategy that enables rapid evaluation of data impact on LLM training with minimal computational cost.

  2. Selection of Seed Data for Classifier Training:
    Selecting appropriate seed data often relies heavily on human expertise, introducing subjectivity. The team optimized the selection process by integrating the verification strategy, improving filtering efficiency and classifier robustness.

A lightweight classifier based on fastText was employed to efficiently filter high-quality data, significantly reducing inference costs compared to LLM-based classifiers.

Benchmark Performance

Empirical results demonstrate that LLMs trained on Ultra-FineWeb exhibit significant performance improvements across multiple benchmark tasks, including MMLU, ARC, CommonSenseQA, and others. The dataset's quality contributes to enhanced training efficiency and model accuracy.

Availability

Ultra-FineWeb is available on Hugging Face, providing researchers and developers with access to this extensive dataset for training and evaluating LLMs.


References

  1. Researchers from Tsinghua and ModelBest Release Ultra-FineWeb: A Trillion-Token Dataset Enhancing LLM Accuracy Across Benchmarks – MarkTechPost. 

  2. Ultra-FineWeb Dataset on Hugging Face. 

  3. Ultra-FineWeb: Efficient Data Filtering and Verification for High-Quality LLM Training Data

17.5.25

How FutureHouse’s AI Agents Are Reshaping Scientific Discovery

In a major leap for scientific research, FutureHouse—a nonprofit backed by former Google CEO Eric Schmidt—has introduced a powerful lineup of AI research agents aimed at accelerating the pace of scientific discovery. Built to support scientists across disciplines, these agents automate key parts of the research workflow—from literature search to chemical synthesis planning—reducing bottlenecks and enhancing productivity.

This suite includes four primary agents: Crow, Falcon, Owl, and Phoenix, each specialized in a unique aspect of the research pipeline. Together, they form a comprehensive AI-powered infrastructure for modern science.


Meet the AI Agents Changing Science

1. Crow – The Concise Search Specialist

Crow acts as a rapid-response research assistant. It provides short, precise answers to technical queries by intelligently retrieving evidence from full-text scientific papers. Designed for speed and accuracy, it’s especially useful for API-based interactions, where precision and performance matter most. Crow is built on top of FutureHouse’s custom PaperQA2 architecture.

2. Falcon – Deep Research Assistant

Falcon takes things further by conducting expansive literature reviews. It produces full-length research reports in response to broader or more open-ended scientific questions. By analyzing papers, data sources, and context-rich materials, Falcon allows researchers to dive deep into topics without manually sorting through endless PDFs.

3. Owl – Precedent Investigator

Owl helps scientists find out whether an experiment or research idea has already been executed. This is crucial for grant applications, patent filings, and ensuring that researchers don’t waste time reinventing the wheel. By surfacing related studies and experiments, Owl enables more informed, original work.

4. Phoenix – The Chemistry Innovator

Phoenix is built for early-stage chemistry research. Leveraging cheminformatics tools, it assists in designing molecules, suggesting synthetic routes, and evaluating chemical feasibility. It builds upon an earlier FutureHouse prototype called ChemCrow and remains in active development as a sandbox tool for chemists to explore and provide feedback.


Performance and Potential

In benchmark tests, Crow, Falcon, and Owl outperformed PhD-level biologists on scientific retrieval and reasoning tasks. Unlike many AI tools that only read paper abstracts or summaries, these agents consume and analyze full-text documents, allowing them to detect nuanced issues like methodological flaws or statistical limitations.

Although Phoenix is still in its experimental phase and may sometimes produce errors, it represents an important step toward automating complex tasks in synthetic chemistry.


Why This Matters

The bottlenecks of modern science often lie not in experimentation, but in navigating the overwhelming volume of prior work. By offloading repetitive and time-consuming research tasks to AI, FutureHouse's agents free up scientists to focus on creativity, innovation, and critical thinking.

These tools are also being made openly available for scientists and research institutions, fostering a collaborative environment for AI-augmented science.


Final Takeaway

FutureHouse’s AI agents aren’t just productivity boosters—they’re a vision of a new research paradigm. By augmenting human researchers with scalable, intelligent assistants, we’re witnessing the early stages of a revolution in how science is done. As these tools evolve, they hold the potential to dramatically accelerate scientific discovery across disciplines.


References

  1. Automate Your Research Workflows Using AI Agents for Scientific Discovery by FutureHouse – MarkTechPost

  2. FutureHouse Official Website

  3. FutureHouse Research Agent Platform

 

OpenAI Codex: A Cloud-Based AI Agent Transforming Software Development

 OpenAI has unveiled Codex, a groundbreaking cloud-based AI software engineering agent designed to revolutionize the way developers approach coding tasks. By handling multiple assignments simultaneously, Codex aims to enhance productivity and streamline the software development process.


What is OpenAI Codex?

Codex is an AI-powered agent integrated into ChatGPT, capable of performing various software engineering tasks such as:

  • Writing new features

  • Answering codebase-related questions

  • Running tests

  • Proposing pull requests for review

Each task operates within its own secure, isolated cloud environment, ensuring safety and context-specific operations. Codex leverages the codex-1 model, a specialized version of OpenAI's o3 model fine-tuned for software development tasks.


Key Features

  • Concurrent Task Management: Codex can handle multiple coding tasks in parallel, significantly reducing development time.

  • Secure Sandboxed Operations: Each task runs in an isolated environment preloaded with the user's code repository, enhancing security and context-awareness.

  • Transparent Action Logs: Developers receive detailed logs, test outputs, and citations for each action Codex performs, facilitating easy verification and review.

  • AGENTS.MD Integration: By creating AGENTS.MD files in the repository, users can instruct Codex on project-specific commands, testing procedures, and coding standards.

  • Codex CLI Updates: OpenAI has updated the Codex Command Line Interface (CLI), introducing a faster model (codex-mini-latest) and simplified authentication through ChatGPT accounts.


How to Use Codex

Accessing Codex is straightforward for ChatGPT Pro, Team, and Enterprise users:

  1. Navigate to the ChatGPT sidebar and select Codex.

  2. Assign coding tasks by typing prompts or asking questions related to your codebase.

  3. Codex processes each request independently, reading and editing files and running commands such as test suites, linters, and type checkers.

  4. Upon task completion (typically within one to thirty minutes), review the changes, request further modifications, open a GitHub pull request, or integrate the changes into your local setup.


Security and Compliance

Security is a paramount concern for OpenAI. Codex operates in isolated containers without internet access during task execution, interacting only with the provided code and dependencies. It's trained to identify and refuse malicious software development requests, ensuring responsible AI usage in software engineering.


Final Takeaway

OpenAI Codex stands out as a secure, intelligent, and efficient AI coding companion. By enabling simultaneous software development tasks in isolated environments, Codex helps developers move faster and more confidently while maintaining full transparency and control over their codebase. It’s a glimpse into the future of software development, where AI agents work alongside humans to build better systems—faster.


References

  1. OpenAI Releases Codex: A Software Agent that Operates in the Cloud and Can Do Many Tasks in Parallel – MarkTechPost

  2. OpenAI: Introducing Codex

  3. OpenAI launches Codex research preview – VentureBeat

  4. OpenAI Launches New AI Coding Agent – WSJ

  5. OpenAI's New Codex Can Help You Code or Order Takeout – Business Insider

  6. OpenAI Launches an Agentic, Web-Based Coding Tool – Wired

  7. Codex – OpenAI API Documentation

  8. OpenAI Codex – Wikipedia

16.5.25

Top 6 Agentic AI Design Patterns: Building Smarter, Autonomous AI Systems

As artificial intelligence continues to evolve, the shift from simple chatbot interfaces to truly autonomous, intelligent systems is becoming a reality. At the core of this transformation are agentic design patterns—reusable frameworks that help structure how AI agents plan, act, reflect, and collaborate.

These six design patterns are the backbone of today’s most advanced AI agent architectures, enabling smarter, more resilient systems.


1. ReAct Agent (Reasoning + Acting)

The ReAct pattern enables agents to alternate between reasoning through language and taking action via tools. Instead of passively responding to prompts, the agent breaks down tasks, reasons through steps, and uses external resources to achieve goals.

  • Key feature: Thinks aloud and takes actions iteratively.

  • Why it matters: Mimics human problem-solving and makes AI more interpretable and efficient.
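
To make the loop concrete, here is a minimal ReAct-style sketch in Python. The `llm()` stub and the tiny tool registry are placeholders of my own, not an API from the referenced paper; a real agent would drive the same Thought/Action/Observation cycle with genuine model calls.

```python
# Minimal ReAct-style loop: the model alternates reasoning ("Thought") and
# tool calls ("Action") until it emits a final answer. llm() is a stub.

def llm(prompt: str) -> str:
    # Placeholder for a real model call; answers once it has seen an observation.
    if "Observation:" in prompt:
        return "Thought: I have the result. Final Answer: 42"
    return "Thought: I should compute this. Action: calculator[6 * 7]"

TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # toy tool for illustration only
}

def react_agent(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        reply = llm(transcript)
        transcript += reply + "\n"
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[-1].strip()
        if "Action:" in reply:
            # Expected format: "Action: tool_name[input]"
            name, arg = reply.split("Action:")[-1].strip().split("[", 1)
            observation = TOOLS[name.strip()](arg.rstrip("]"))
            transcript += f"Observation: {observation}\n"
    return "No answer within step budget."

print(react_agent("What is 6 * 7?"))
```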


2. CodeAct Agent

The CodeAct pattern focuses on enabling agents to write, execute, and debug code. This is especially useful for solving complex, technical problems or automating workflows that require logic and precision.

  • Key feature: Dynamically generates and runs code in a live coding environment.

  • Why it matters: Automates developer tasks and enables technical reasoning.
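
A bare-bones illustration of the CodeAct idea, assuming a stub `llm()` that emits code and using `exec` as a stand-in for the live coding environment (production agents isolate this step in a sandbox):

```python
# CodeAct in miniature: the agent writes code, runs it, and reads the result
# (or the error) as feedback for the next step. llm() is a stub.
import io, contextlib

def llm(task: str) -> str:
    # Stand-in for a code-generating model call.
    return "total = sum(range(1, 101))\nprint(total)"

def codeact_step(task: str) -> str:
    code = llm(task)
    buffer = io.StringIO()
    try:
        with contextlib.redirect_stdout(buffer):
            exec(code, {})  # never run untrusted generated code outside a sandbox
        return buffer.getvalue().strip()
    except Exception as err:
        return f"Execution error fed back to the agent: {err}"

print(codeact_step("Sum the integers from 1 to 100"))  # -> 5050
```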


3. Modern Tool Use

This pattern teaches agents how to smartly select and utilize third-party tools (like APIs or internal services). The agent becomes a manager of digital resources, deciding when and how to delegate tasks to tools.

  • Key feature: Picks the right tools based on task needs.

  • Why it matters: Gives agents real-world utility without overcomplicating internal logic.
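
A minimal sketch of tool routing, with a keyword heuristic standing in for the LLM's tool-selection step; the tool names and descriptions are invented for illustration:

```python
# Tool-use routing in miniature: describe each tool, pick one per query,
# then delegate the work to it.

TOOLS = {
    "weather_api": {
        "description": "Current weather for a city",
        "call": lambda city: f"Sunny in {city} (stub response)",
    },
    "unit_converter": {
        "description": "Convert kilometres to miles",
        "call": lambda km: f"{float(km) * 0.621371:.2f} miles",
    },
}

def pick_tool(query: str) -> str:
    # A real agent would ask the LLM to choose based on the tool descriptions.
    return "weather_api" if "weather" in query.lower() else "unit_converter"

def run(query: str, argument: str) -> str:
    tool = TOOLS[pick_tool(query)]
    return tool["call"](argument)

print(run("What's the weather like?", "Lisbon"))
print(run("How many miles is this?", "42"))
```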


4. Self-Reflection

Self-reflection equips agents with a feedback loop. After completing a task or generating an answer, the agent evaluates the quality of its response, identifies potential errors, and revises accordingly.

  • Key feature: Checks and improves its own output.

  • Why it matters: Boosts reliability and encourages iterative learning.
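
A small sketch of the draft-critique-revise loop; all three calls are stubs standing in for separate LLM prompts:

```python
# Self-reflection loop: draft an answer, critique it, revise if needed.

def draft(task: str) -> str:
    return "The capital of Australia is Sydney."  # deliberately flawed first draft

def critique(task: str, answer: str) -> str:
    # A real critic prompt would ask the model to list factual errors.
    return "Incorrect: the capital of Australia is Canberra, not Sydney."

def revise(task: str, answer: str, feedback: str) -> str:
    return "The capital of Australia is Canberra."

def self_reflective_agent(task: str, rounds: int = 1) -> str:
    answer = draft(task)
    for _ in range(rounds):
        feedback = critique(task, answer)
        if feedback.lower().startswith("incorrect"):
            answer = revise(task, answer, feedback)
    return answer

print(self_reflective_agent("Name the capital of Australia."))
```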


5. Multi-Agent Workflow

Rather than a single monolithic agent, this pattern involves multiple specialized agents working together. Each one has a defined role (e.g., planner, coder, checker), and they communicate to solve problems collaboratively.

  • Key feature: Division of labor between expert agents.

  • Why it matters: Scales well for complex workflows and enhances performance.
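
A toy planner/coder/checker pipeline showing the division of labor; each "agent" here is a stub function rather than a real model call:

```python
# Multi-agent workflow in miniature: a planner decomposes the goal, a coder
# handles each step, and a checker accepts or rejects the combined output.

def planner(goal: str) -> list[str]:
    return [f"Write a function for: {goal}", "Add a doctest"]

def coder(step: str) -> str:
    return f"# code produced for: {step}"

def checker(artifacts: list[str]) -> bool:
    return all(a.startswith("#") for a in artifacts)  # trivial acceptance check

def run_team(goal: str) -> list[str]:
    plan = planner(goal)
    artifacts = [coder(step) for step in plan]
    if not checker(artifacts):
        raise RuntimeError("checker rejected the work")
    return artifacts

for line in run_team("parse a CSV file"):
    print(line)
```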


6. Agentic RAG (Retrieval-Augmented Generation)

Agentic RAG combines external information retrieval with generative reasoning, memory, and tool use. It allows agents to pull in up-to-date or task-specific data to guide their decision-making and output.

  • Key feature: Combines context-retrieval with deep reasoning.

  • Why it matters: Provides grounded, accurate, and context-aware outputs.
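
A compact sketch of the retrieve-then-generate flow, using an in-memory dictionary as the document store and a stub in place of the generative model:

```python
# Agentic RAG in miniature: decide whether to retrieve, pull context from a
# toy store, then answer grounded in that context.

DOCS = {
    "ultra-fineweb": "Ultra-FineWeb pairs ~1T English and 120B Chinese tokens.",
    "react": "ReAct interleaves reasoning traces with tool actions.",
}

def needs_retrieval(question: str) -> bool:
    # A real agent would let the LLM make this call; here we always retrieve.
    return True

def retrieve(question: str) -> str:
    hits = [text for key, text in DOCS.items() if key in question.lower()]
    return "\n".join(hits) or "no matching documents"

def answer(question: str) -> str:
    context = retrieve(question) if needs_retrieval(question) else ""
    # Stand-in for an LLM call that conditions on the retrieved context.
    return f"Answer based on context: {context}"

print(answer("What does the ReAct pattern do?"))
```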


Key Takeaway

These six agentic AI design patterns provide a strong foundation for building autonomous, context-aware systems that can reason, act, collaborate, and self-improve. As AI agents move deeper into industries from software development to customer service and beyond, these patterns will guide developers in designing robust, intelligent solutions that scale.

Whether you're building internal tools or next-generation AI applications, mastering these frameworks is essential for developing truly capable and autonomous agents.


References

  1. Marktechpost – “Top 6 Agentic AI Design Patterns”: https://aiagent.marktechpost.com/post/top-6-agentic-ai-design-patterns

  2. ReAct (Reasoning and Acting): https://arxiv.org/abs/2210.03629

  3. CodeAct examples (various GitHub and research projects; see pattern 2 details on link above)

  4. Agentic RAG concept: https://www.marktechpost.com/2024/02/15/openai-introduces-rag-chain-and-memory-management-using-gpt/

  5. Self-Reflection agent idea: https://arxiv.org/abs/2302.03432

  6. Multi-Agent Collaboration: https://arxiv.org/abs/2303.12712

Ultra-FineWeb: A Trillion-Token Dataset Elevating LLM Performance Across Benchmarks

 In a groundbreaking development for artificial intelligence, researchers from Tsinghua University and ModelBest have unveiled Ultra-FineWeb, a massive, high-quality dataset designed to bolster the training of large language models (LLMs). Comprising approximately 1 trillion English tokens and 120 billion Chinese tokens, Ultra-FineWeb sets a new standard in dataset curation, emphasizing both scale and quality to enhance LLM performance across a spectrum of benchmarks.


Innovative Filtering Methodology

The creation of Ultra-FineWeb addresses two critical challenges in dataset preparation for LLMs: the need for efficient data verification and the selection of high-quality seed data for classifier training.

  1. Efficient Verification Strategy: To rapidly assess data quality, the researchers implemented a verification approach that evaluates the impact of data on LLM training with minimal computational overhead. This strategy enables timely feedback, facilitating the swift refinement of the dataset.

  2. Optimized Seed Selection: Recognizing the subjectivity in manual seed selection, the team developed a method to systematically choose positive and negative samples. By integrating the verification strategy, they enhanced the robustness and quality of the classifier used for data filtering.

A lightweight classifier based on fastText was employed to efficiently filter the dataset. This choice significantly reduced inference costs while maintaining high filtering precision, ensuring that only the most relevant and high-quality data were included in Ultra-FineWeb.
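
As a rough illustration of such a filter (not the authors' released code), a fastText classifier can be trained on labeled seed data and then applied as a keep/drop gate over candidate documents; the file name, label names, and threshold below are assumptions:

```python
# Sketch of a fastText-based quality filter in the spirit of the paper's
# pipeline. Training file, labels, and the 0.5 threshold are illustrative.
import fasttext

# train.txt holds one example per line, e.g.:
#   __label__hq  <text drawn from verified high-quality seed data>
#   __label__lq  <text drawn from rejected or low-quality samples>
model = fasttext.train_supervised(input="train.txt", epoch=5, wordNgrams=2)

def keep(document: str, threshold: float = 0.5) -> bool:
    labels, probs = model.predict(document.replace("\n", " "))
    return labels[0] == "__label__hq" and probs[0] >= threshold

corpus = ["A clear explanation of gradient descent ...", "buy cheap meds now!!!"]
filtered = [doc for doc in corpus if keep(doc)]
print(len(filtered), "documents kept")
```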


Benchmark Performance

LLMs trained on Ultra-FineWeb demonstrated remarkable improvements across various benchmarks:

  • English Benchmarks: Models exhibited substantial gains in tasks such as MMLU, ARC-C, ARC-E, and OpenbookQA, with average score increases of over 3% compared to those trained on previous datasets like FineWeb and FineWeb-Edu.

  • Chinese Benchmarks: On evaluations like C-Eval and CMMLU, models trained with Ultra-FineWeb-zh outperformed counterparts, indicating enhanced comprehension and reasoning in Chinese language tasks.

These improvements underscore the dataset's effectiveness in enhancing LLM capabilities across multiple languages and domains.


Implications for AI Development

Ultra-FineWeb's introduction marks a significant advancement in the field of AI, particularly in the training of LLMs. By addressing key challenges in data verification and seed selection, and by employing efficient filtering techniques, the dataset provides a robust foundation for developing more accurate and versatile language models.

The methodologies applied in creating Ultra-FineWeb offer a blueprint for future dataset curation efforts, emphasizing the importance of quality and efficiency in data preparation.


Access and Availability

Ultra-FineWeb is available for the research community through Hugging Face, promoting transparency and collaboration in AI development. Researchers and developers are encouraged to utilize this resource to further advance the capabilities of LLMs.


Takeaway

Ultra-FineWeb represents a pivotal resource in the evolution of large language models, combining extensive scale with meticulous quality control. Its innovative filtering methodologies and demonstrable performance enhancements across benchmarks position it as an essential tool for researchers and developers aiming to push the boundaries of AI language understanding.

ByteDance Launches Seed1.5-VL: A Compact Yet Powerful Vision-Language Model for Multimodal AI

 In a significant stride towards advancing multimodal artificial intelligence, ByteDance has unveiled Seed1.5-VL, a vision-language foundation model designed to excel in general-purpose understanding and reasoning tasks across various modalities. Despite its relatively compact architecture, Seed1.5-VL delivers state-of-the-art performance on a wide array of benchmarks, positioning itself as a formidable contender in the AI landscape.


Model Architecture and Design

Seed1.5-VL is composed of a 532 million-parameter vision encoder coupled with a 20 billion-parameter Mixture-of-Experts (MoE) large language model. This design enables the model to process and integrate information from both visual and textual inputs efficiently. The MoE architecture allows for activating only a subset of the model's parameters during inference, optimizing computational resources without compromising performance. 
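
As a toy illustration of why MoE inference stays cheap relative to total parameter count (this is not Seed1.5-VL's implementation), a router can score the experts for each token and evaluate only the top-k of them:

```python
# Toy top-k Mixture-of-Experts routing: only the selected experts run for a
# given token, so active compute is a fraction of total parameters.
import numpy as np

rng = np.random.default_rng(0)
num_experts, d_model, top_k = 8, 16, 2

experts = [rng.standard_normal((d_model, d_model)) for _ in range(num_experts)]
router = rng.standard_normal((d_model, num_experts))

def moe_layer(token: np.ndarray) -> np.ndarray:
    logits = token @ router                    # router score per expert
    chosen = np.argsort(logits)[-top_k:]       # indices of the top-k experts
    weights = np.exp(logits[chosen])
    weights /= weights.sum()                   # softmax over the chosen experts
    # Only the chosen experts are evaluated for this token.
    return sum(w * (token @ experts[i]) for w, i in zip(weights, chosen))

out = moe_layer(rng.standard_normal(d_model))
print(out.shape)  # (16,)
```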


Benchmark Performance

The model has demonstrated exceptional capabilities, achieving state-of-the-art results on 38 out of 60 public vision-language benchmarks. Notably, Seed1.5-VL excels in tasks such as:

  • Visual Question Answering (VQA): Providing accurate answers to questions based on visual content.

  • Optical Character Recognition (OCR): Accurately reading and interpreting text within images.

  • Diagram and Chart Understanding: Interpreting complex visual data representations.

  • Visual Grounding: Associating textual descriptions with corresponding regions in images.

  • 3D Spatial Understanding: Comprehending three-dimensional spatial relationships in visual inputs.

  • Video Comprehension: Analyzing and understanding temporal sequences in video data.

These capabilities underscore the model's versatility and robustness across diverse multimodal tasks.


Agent-Centric Abilities

Beyond traditional vision-language tasks, Seed1.5-VL exhibits advanced agent-centric abilities. It demonstrates strong performance in interactive tasks such as GUI control and gameplay, showcasing its potential in applications requiring real-time decision-making and interaction. 


Efficiency and Practical Applications

One of the standout features of Seed1.5-VL is its efficiency. By leveraging the MoE architecture, the model maintains high performance while reducing computational overhead. This efficiency makes it suitable for deployment in real-world applications, including:

  • Surveillance Analysis: Interpreting and analyzing video feeds for security purposes.

  • User Interface Automation: Controlling and interacting with graphical user interfaces.

  • Educational Tools: Assisting in learning environments through multimodal content understanding.

The model's ability to handle complex reasoning and diverse input types positions it as a valuable asset across various industries.


Accessibility and Open-Source Commitment

ByteDance has made Seed1.5-VL accessible to the broader AI community. The model is available for testing via the Volcano Engine API and has been open-sourced on platforms like GitHub and Hugging Face. This commitment to openness fosters collaboration and accelerates advancements in multimodal AI research.


Conclusion

Seed1.5-VL represents a significant advancement in the field of multimodal AI, combining efficiency with high performance across a range of complex tasks. Its compact architecture, coupled with state-of-the-art results, makes it a compelling choice for researchers and practitioners seeking versatile and powerful AI solutions.

For more information and to explore the model further, visit the official GitHub repository and the technical report on arXiv.

15.5.25

MLE-Dojo: A Gym-Style Framework for Training and Evaluating Autonomous Machine Learning Engineering Agents

 In a significant advancement for AI research, Georgia Tech and Stanford University have introduced MLE-Dojo, a Gym-style framework aimed at training, evaluating, and benchmarking autonomous machine learning engineering (MLE) agents. This innovative platform provides a realistic, interactive environment for agents to develop and refine their skills across a wide array of machine learning tasks.


What is MLE-Dojo?

MLE-Dojo is designed to simulate the iterative workflows of human machine learning engineers. It offers an environment where large language model (LLM) agents can write, execute, and debug code, receiving structured feedback to improve their performance over time. The framework is built upon over 200 real-world Kaggle competitions, encompassing diverse domains such as tabular data analysis, computer vision, natural language processing, and time series forecasting. 
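
The interaction loop can be pictured with a gym-style reset/step interface. The class and method names below are hypothetical placeholders to illustrate the write/execute/debug cycle described above, not MLE-Dojo's actual API (see the repository for that):

```python
# Hypothetical gym-style loop illustrating the experiment -> feedback ->
# refine cycle. FakeMLEEnv and agent_policy are placeholders, not the
# framework's real interface.

class FakeMLEEnv:
    """Stand-in environment that scores a submitted 'solution' string."""
    def reset(self):
        return {"task": "predict house prices", "files": ["train.csv"]}

    def step(self, code: str):
        ok = "fit(" in code
        observation = {"stdout": "ran 3 tests", "errors": [] if ok else ["NameError"]}
        reward = 1.0 if ok else 0.0
        return observation, reward, ok

def agent_policy(observation) -> str:
    # A real agent would call an LLM here, conditioning on the feedback.
    return "model.fit(X, y)"

env = FakeMLEEnv()
obs = env.reset()
for _ in range(3):
    obs, reward, done = env.step(agent_policy(obs))
    if done:
        break
print("final reward:", reward)
```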


Key Features

  • Interactive Environment: Agents engage in a loop of experimentation, debugging, and refinement, closely mirroring real-world engineering processes.

  • Comprehensive Task Suite: With over 200 curated tasks, MLE-Dojo provides a broad spectrum of challenges to test and improve agent capabilities.

  • Modular Architecture: Each task operates within its own Docker container, ensuring safety, reproducibility, and ease of integration with various tools and datasets.

  • Structured Feedback: Agents receive detailed observations, including datasets, execution results, and error messages, facilitating step-by-step learning and improvement.

  • Training Flexibility: Supports both supervised fine-tuning and reinforcement learning, allowing for diverse training methodologies. 


Benchmarking and Evaluation

MLE-Dojo serves as a benchmark to assess the performance of autonomous MLE agents. In evaluations involving eight frontier LLMs, the framework highlighted both the capabilities and limitations of current models, particularly in handling complex, long-horizon tasks and error resolution. 


Implications for AI Research

By providing a realistic and comprehensive environment, MLE-Dojo enables researchers to systematically train and evaluate autonomous agents in machine learning engineering tasks. This framework paves the way for the development of more robust, generalizable, and scalable AI agents capable of handling real-world engineering challenges.


Access and Community Involvement

MLE-Dojo is open-source, encouraging community collaboration and innovation. Researchers and developers can access the framework and contribute to its ongoing development through the official GitHub repository: https://github.com/MLE-Dojo/MLE-Dojo.


Takeaway

MLE-Dojo represents a significant step forward in the training and evaluation of autonomous machine learning engineering agents. By simulating real-world tasks and providing structured feedback, it offers a valuable tool for advancing AI research and developing agents capable of complex problem-solving in dynamic environments.

OpenAI Integrates GPT-4.1 and 4.1 Mini into ChatGPT: Key Insights for Enterprises

 OpenAI has recently expanded its ChatGPT offerings by integrating two new models: GPT-4.1 and GPT-4.1 Mini. These models, initially designed for API access, are now accessible to ChatGPT users, marking a significant step in making advanced AI tools more available to a broader audience, including enterprises.


Understanding GPT-4.1 and GPT-4.1 Mini

GPT-4.1 is a large language model optimized for enterprise applications, particularly in coding and instruction-following tasks. It demonstrates a 21.4-point improvement over GPT-4o on the SWE-bench Verified software engineering benchmark and a 10.5-point gain on instruction-following tasks in Scale’s MultiChallenge benchmark. Additionally, it reduces verbosity by 50% compared to other models, enhancing clarity and efficiency in responses. 

GPT-4.1 Mini, on the other hand, is a scaled-down version that replaces GPT-4o Mini as the default model for all ChatGPT users, including those on the free tier. While less powerful, it maintains similar safety standards, providing a balance between performance and accessibility.


Enterprise-Focused Features

GPT-4.1 was developed with enterprise needs in mind, offering:

  • Enhanced Coding Capabilities: Superior performance in software engineering tasks, making it a valuable tool for development teams.

  • Improved Instruction Adherence: Better understanding and execution of complex instructions, streamlining workflows.

  • Reduced Verbosity: More concise responses, aiding in clearer communication and documentation.

These features make GPT-4.1 a compelling choice for enterprises seeking efficient and reliable AI solutions.


Contextual Understanding and Speed

GPT-4.1 supports varying context windows to accommodate different user needs:

  • 8,000 tokens for free users

  • 32,000 tokens for Plus users

  • 128,000 tokens for Pro users

While the API versions can process up to one million tokens, this capacity is not yet available in ChatGPT but may be introduced in the future. 


Safety and Compliance

OpenAI has emphasized safety in GPT-4.1's development. The model scores 0.99 on OpenAI’s “not unsafe” measure in standard refusal tests and 0.86 on more challenging prompts. However, in the StrongReject jailbreak test, it scored 0.23, indicating room for improvement under adversarial conditions. Nonetheless, it achieved a strong 0.96 on human-sourced jailbreak prompts, showcasing robustness in real-world scenarios. 


Implications for Enterprises

The integration of GPT-4.1 into ChatGPT offers several benefits for enterprises:

  • AI Engineers: Enhanced tools for coding and instruction-following tasks.

  • AI Orchestration Leads: Improved model consistency and reliability for scalable pipeline design.

  • Data Engineers: Reduced hallucination rates and higher factual accuracy, aiding in dependable data workflows.

  • IT Security Professionals: Increased resistance to common jailbreaks and controlled output behavior, supporting safe integration into internal tools. 


Conclusion

OpenAI's GPT-4.1 and GPT-4.1 Mini models represent a significant advancement in AI capabilities, particularly for enterprise applications. With improved performance in coding, instruction adherence, and safety, these models offer valuable tools for organizations aiming to integrate AI into their operations effectively.

Building a 100% Local, Private, and Secure MCP Client with Lightning AI

In an era where data privacy is paramount, the ability to operate AI applications entirely offline is a significant advantage. Akshay Pachaar's recent guide on Lightning AI's platform offers a comprehensive walkthrough for building a 100% local, private, and secure MCP (Model Context Protocol) client. This approach ensures that sensitive data remains within your infrastructure, eliminating dependencies on external cloud services.


Why Go Local?

Operating AI models locally offers several benefits:

  • Enhanced Privacy: Data never leaves your premises, reducing exposure to potential breaches.

  • Compliance: Easier adherence to data protection regulations like GDPR.

  • Reduced Latency: Faster processing as data doesn't need to travel to and from the cloud.

  • Cost Efficiency: Eliminates recurring cloud service fees.


Step-by-Step Guide to Building Your Local MCP Client

Akshay's guide provides a detailed roadmap for setting up your local MCP client; a minimal connection sketch follows the steps below:

  1. Environment Setup:

    • Prepare your local machine with necessary dependencies.

    • Ensure compatibility with Lightning AI's framework.

  2. Offline Installation:

    • Download all required packages and models in advance.

    • Install them without any internet connection to guarantee isolation.

  3. Implementing Encryption:

    • Utilize encryption protocols to secure data at rest and in transit.

    • Configure SSL certificates for any local web interfaces.

  4. User Authentication:

    • Set up robust authentication mechanisms to control access.

    • Implement role-based permissions to manage user privileges.

  5. Testing and Validation:

    • Run comprehensive tests to ensure the system operates as intended.

    • Validate that no external connections are made during operation.
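
As a sketch of what the finished setup might look like, the official `mcp` Python SDK's stdio transport keeps all client-server traffic on the local machine; the server command and file name below are assumptions standing in for whatever local MCP server you run:

```python
# Minimal sketch of a fully local MCP client: the server runs as a local
# subprocess over stdio, so no traffic ever leaves the machine.
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # "local_server.py" is a placeholder for your own local MCP server.
    server = StdioServerParameters(command="python", args=["local_server.py"])
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("locally available tools:", [t.name for t in tools.tools])

asyncio.run(main())
```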


Best Practices for Maintaining Security

  • Regular Updates: Even in an offline environment, periodically update your system with the latest security patches.

  • Audit Logs: Maintain detailed logs of all operations for accountability.

  • Access Controls: Limit physical and digital access to the system to authorized personnel only.

  • Backup Strategies: Implement regular backups to prevent data loss.


Conclusion

Building a local, private, and secure MCP client is not only feasible but also advantageous for organizations prioritizing data privacy and control. By following Akshay Pachaar's guide on Lightning AI, you can establish a robust AI infrastructure that operates entirely within your secure environment.
