
3.6.25

OpenAI's Sora Now Free on Bing Mobile: Create AI Videos Without a Subscription

In a significant move to democratize AI video creation, Microsoft has integrated OpenAI's Sora into its Bing mobile app, enabling users to generate AI-powered videos from text prompts without any subscription fees. This brings a capability that was previously reserved for ChatGPT Plus and Pro subscribers to a much broader audience.

Sora's Integration into Bing Mobile

Sora, OpenAI's text-to-video model, can now be accessed through the Bing Video Creator feature within the Bing mobile app, available on both iOS and Android platforms. Users can input descriptive prompts, such as "a hummingbird flapping its wings in ultra slow motion" or "a tiny astronaut exploring a giant mushroom planet," and receive five-second AI-generated video clips in response. 

How to Use Bing Video Creator

To utilize this feature:

  1. Open the Bing mobile app.

  2. Tap the menu icon in the bottom right corner.

  3. Select "Video Creator."

  4. Enter a text prompt describing the desired video.

Alternatively, users can type a prompt directly into the Bing search bar, beginning with "Create a video of..." 

Global Availability and Future Developments

The Bing Video Creator feature is now available worldwide, excluding China and Russia. It is currently limited to five-second vertical videos, but Microsoft has announced plans to support horizontal videos and to bring the feature to desktop and Copilot Search in the near future.

Conclusion

By offering Sora's capabilities through the Bing mobile app at no cost, Microsoft and OpenAI are making AI-driven video creation more accessible to a global audience. This initiative not only enhances user engagement with AI technologies but also sets a precedent for future integrations of advanced AI tools into everyday applications.

27.5.25

Microsoft's Aurora AI Revolutionizes Environmental Forecasting with High-Speed, Accurate Predictions

 Microsoft has introduced Aurora, an advanced AI foundation model designed to enhance environmental forecasting capabilities. Trained on over a million hours of diverse atmospheric data—including satellite imagery, radar readings, and weather station reports—Aurora delivers rapid and accurate predictions for various environmental phenomena. 

Key Features and Achievements

  • High-Speed Forecasting: Aurora generates forecasts in seconds, a significant improvement over the hours required by traditional supercomputer-based systems.  

  • Enhanced Accuracy: In tests, Aurora outperformed the National Hurricane Center in forecasting five-day tropical cyclone tracks for the 2022–2023 season and accurately predicted the landfall of Typhoon Doksuri in the Philippines four days in advance. 

  • Versatile Environmental Predictions: Beyond weather forecasting, Aurora has been fine-tuned to predict air quality, ocean wave heights, and other atmospheric events, demonstrating its adaptability to various environmental forecasting tasks. 

  • Public Accessibility: Microsoft has made Aurora's source code and model weights publicly available, promoting transparency and collaboration within the scientific community. 
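
Because the code and weights are public, Aurora can be run locally. The sketch below is a minimal example, assuming the open-source microsoft-aurora Python package and following its documented quick-start pattern; the package, class, and checkpoint names should be verified against the current release.

```python
# Minimal sketch, assuming the open-source "microsoft-aurora" package
# (pip install microsoft-aurora); class names, checkpoint names, and tensor
# shapes follow its documented quick-start and may differ in newer releases.
from datetime import datetime

import torch
from aurora import AuroraSmall, Batch, Metadata

model = AuroraSmall()
model.load_checkpoint("microsoft/aurora", "aurora-0.25-small-pretrained.ckpt")
model.eval()

# Toy inputs on a coarse 17 x 32 lat/lon grid: two time steps of surface,
# static, and upper-air variables, as the model expects.
batch = Batch(
    surf_vars={k: torch.randn(1, 2, 17, 32) for k in ("2t", "10u", "10v", "msl")},
    static_vars={k: torch.randn(17, 32) for k in ("lsm", "z", "slt")},
    atmos_vars={k: torch.randn(1, 2, 4, 17, 32) for k in ("z", "u", "v", "t", "q")},
    metadata=Metadata(
        lat=torch.linspace(90, -90, 17),
        lon=torch.linspace(0, 360, 32 + 1)[:-1],
        time=(datetime(2020, 6, 1, 12, 0),),
        atmos_levels=(100, 250, 500, 850),
    ),
)

with torch.inference_mode():
    prediction = model.forward(batch)  # forecast for the next time step

print(prediction.surf_vars["2t"].shape)  # predicted 2 m temperature field
```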

Implications for the Future

Aurora represents a significant advancement in the field of meteorology and environmental science. Its ability to provide rapid, accurate forecasts can aid in disaster preparedness, environmental monitoring, and climate research. By making the model publicly accessible, Microsoft encourages further innovation and application of AI in understanding and responding to environmental challenges.

24.5.25

Microsoft's NLWeb: Empowering Enterprises to AI-Enable Their Websites

 Microsoft has introduced NLWeb, an open-source protocol designed to transform traditional websites into AI-powered platforms. Announced at the Build 2025 conference, NLWeb enables enterprises to embed conversational AI interfaces directly into their websites, facilitating natural language interactions and improving content discoverability.

Understanding NLWeb

NLWeb, short for Natural Language Web, is the brainchild of Ramanathan V. Guha, a pioneer known for co-creating RSS and Schema.org. The protocol builds upon existing web standards, allowing developers to integrate AI functionalities without overhauling their current infrastructure. By leveraging structured data formats like RSS and Schema.org, NLWeb facilitates seamless AI interactions with web content. 

Microsoft CTO Kevin Scott likens NLWeb to "HTML for the agentic web," emphasizing its role in enabling websites and APIs to function as agentic applications. Each NLWeb instance operates as a Model Context Protocol (MCP) server, providing a standardized method for AI systems to access and interpret web data. 
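
In practice, that means a visitor (or another agent) can query an NLWeb-enabled site in plain language over HTTP. The sketch below is a hedged illustration only: the site URL is a placeholder, and the /ask endpoint, query parameter, and response fields are assumptions based on NLWeb's public description of returning Schema.org-style results, so check the project's repository for the exact interface.

```python
# Hypothetical sketch of querying an NLWeb-enabled site in natural language.
# The base URL is a placeholder; the /ask endpoint, "query" parameter, and
# response shape are assumptions, not a confirmed API.
import requests

SITE = "https://example-shop.com"  # placeholder for an NLWeb-enabled site

resp = requests.get(
    f"{SITE}/ask",
    params={"query": "waterproof hiking boots under $150"},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json().get("results", []):
    # NLWeb leans on Schema.org structured data, so each result is expected
    # to carry familiar fields such as name, url, and description.
    print(item.get("name"), "-", item.get("url"))
```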

Key Features and Advantages

  • Enhanced AI Interaction: NLWeb allows AI systems to better understand and navigate website content, reducing errors and improving user experience. 

  • Leveraging Existing Infrastructure: Enterprises can utilize their current structured data, minimizing the need for extensive redevelopment. 

  • Open-Source and Model-Agnostic: NLWeb is designed to be compatible with various AI models, promoting flexibility and broad adoption. 

  • Integration with MCP: Serving as the transport layer, MCP works in tandem with NLWeb to facilitate efficient AI-data interactions. 

Enterprise Adoption and Use Cases

Several organizations have already begun implementing NLWeb to enhance their digital platforms:

  • O’Reilly Media: CTO Andrew Odewahn highlights NLWeb's ability to utilize existing metadata for internal AI applications, streamlining information retrieval and decision-making processes. 

  • Tripadvisor and Shopify: These companies are exploring NLWeb to improve user engagement through AI-driven conversational interfaces. 

By adopting NLWeb, enterprises can offer users a more interactive experience, allowing for natural language queries and personalized content delivery.

Considerations for Implementation

While NLWeb presents numerous benefits, enterprises should consider the following:

  • Maturity of the Protocol: As NLWeb is still in its early stages, widespread adoption may take 2-3 years. Early adopters can influence its development and integration standards. 

  • Regulatory Compliance: Industries with strict regulations, such as healthcare and finance, should proceed cautiously, ensuring that AI integrations meet compliance requirements. 

  • Ecosystem Development: Successful implementation depends on the growth of supporting tools and community engagement to refine best practices. 

Conclusion

NLWeb represents a significant step toward democratizing AI capabilities across the web. By enabling enterprises to integrate conversational AI into their websites efficiently, NLWeb enhances user interaction and positions businesses at the forefront of digital innovation. As the protocol evolves, it holds the promise of reshaping how users interact with online content, making AI-driven experiences a standard component of web navigation.

8.5.25

Microsoft Embraces Google’s Standard for Linking AI Agents: Why It Matters

In a landmark move for AI interoperability, Microsoft has adopted Google's Agent2Agent (A2A) protocol, a rapidly emerging open standard designed to unify how AI agents interact across platforms and applications. The announcement reflects a growing industry consensus: the future of artificial intelligence lies not in isolated models, but in connected multi-agent ecosystems.


What Is A2A?

Developed by Google, Agent2Agent (A2A) is a lightweight, open protocol that allows AI agents, tools, and APIs to communicate using a shared format. It provides a standardized method for passing context, status updates, and task progress between different AI systems, regardless of who built them.

A2A's primary goals include (a request-level sketch follows this list):

  • 🧠 Agent-to-agent collaboration

  • 🔁 Stateful context sharing

  • 🧩 Cross-vendor model integration

  • 🔒 Secure agent execution pipelines
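
To make the "shared format" concrete, here is a hedged sketch of what an agent-to-agent exchange can look like. It follows the early public A2A draft, in which agents advertise an Agent Card at /.well-known/agent.json and accept JSON-RPC calls such as tasks/send; the endpoint URL and task content below are illustrative placeholders, and method names may differ in the current version of the spec.

```python
# Hedged sketch of an A2A-style exchange based on the early draft spec:
# discover the remote agent's card, then send it a task over JSON-RPC.
# The URL, task ID, and message content are illustrative placeholders.
import uuid
import requests

AGENT_BASE = "https://agents.example.com/invoice-agent"  # placeholder agent

# 1. Discovery: the Agent Card describes the agent's skills and endpoint.
card = requests.get(f"{AGENT_BASE}/.well-known/agent.json", timeout=30).json()
print("Remote agent:", card.get("name"))

# 2. Delegation: send a task as a JSON-RPC request (method name per the draft).
task_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tasks/send",
    "params": {
        "id": str(uuid.uuid4()),  # client-generated task ID
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Summarize unpaid invoices for May."}],
        },
    },
}
resp = requests.post(card.get("url", AGENT_BASE), json=task_request, timeout=60)
print(resp.json())  # task status and any artifacts produced by the remote agent
```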


Why Microsoft’s Adoption Matters

By integrating A2A, Microsoft joins Google and a growing group of technology companies that are collectively shaping a more open and interoperable AI future.

This means that agentic systems built in Azure AI Foundry or Copilot Studio can now communicate more easily with tools and agents powered by Gemini, Claude, or open-source platforms.

"The real power of AI isn’t just what one model can do — it’s what many can do together."
— Anonymous industry analyst


Agentic AI Is Going Cross-Platform

As companies shift from isolated LLM tools to more autonomous AI agents, standardizing how these agents coordinate is becoming mission-critical. With the rise of agent frameworks like CrewAI, LangChain, and AutoGen, A2A provides the "glue" that connects diverse agents across different domains, such as finance, operations, customer service, and software development.


A Step Toward an Open AI Stack

Microsoft’s alignment with Google on A2A suggests a broader industry pivot away from closed, siloed systems. It reflects growing recognition that no single company can dominate the agent economy, and that cooperation on protocol-level standards will unlock scale, efficiency, and innovation.


Final Thoughts

The adoption of A2A by Microsoft is more than just a technical choice; it is a strategic endorsement of open AI ecosystems. As AI agents become more integrated into enterprise workflows and consumer apps, having a universal language for coordination could make or break the usability of next-gen tools.

With both Microsoft and Google now on board, A2A is well positioned to become a default standard for coordinating agentic AI at scale.

4.5.25

Microsoft Launches Phi-4-Reasoning-Plus: Small Model, Big Reasoning Power

Microsoft has unveiled Phi-4-Reasoning-Plus, a compact yet highly capable open-weight language model built for deep, structured reasoning. With just 14 billion parameters, it punches far above its weight—outperforming much larger models on key benchmarks in logic, math, and science.

Phi-4-Reasoning-Plus is a refinement of Microsoft’s earlier Phi-4 model. It uses advanced supervised fine-tuning and reinforcement learning to deliver high reasoning accuracy in a lightweight format. Its supervised training data spans roughly 16 billion tokens (about half of them unique), drawn from synthetic prompts and carefully filtered web content, and is followed by a dedicated reinforcement learning phase focused on solving about 6,400 math problems.

What makes this model especially valuable to developers and businesses is its MIT open-source license, allowing free use, modification, and commercial deployment. It's also designed to run efficiently on common AI frameworks like Hugging Face Transformers, vLLM, llama.cpp, and Ollama—making it easy to integrate across platforms.
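
As a quick illustration of that portability, the sketch below loads the checkpoint with Hugging Face Transformers and asks it a small math question. The model ID matches Microsoft's published Hugging Face repository, but verify it, along with your available GPU memory, before running: a 14B-parameter model in bfloat16 needs roughly 28 GB.

```python
# Minimal sketch: running Phi-4-Reasoning-Plus via Hugging Face Transformers.
# Assumes the "microsoft/Phi-4-reasoning-plus" checkpoint and enough GPU memory
# for a 14B-parameter model; adjust dtype/device_map for your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-reasoning-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "user", "content": "If 3x + 7 = 22, what is x? Explain briefly."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.inference_mode():
    outputs = model.generate(inputs, max_new_tokens=512)

# The model emits its chain of thought between special reasoning tokens
# before the final answer, which is part of its transparency design.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=False))
```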

Key Features of Phi-4-Reasoning-Plus:

  • ✅ 14B parameters with performance rivaling 70B+ models in reasoning tasks

  • ✅ Outperforms larger LLMs in math, coding, and logical reasoning

  • ✅ Uses special tokens to improve transparency in reasoning steps

  • ✅ Trained with outcome-based reinforcement learning for better accuracy and brevity

  • ✅ Released under the MIT license for open commercial use

  • ✅ Compatible with lightweight inference frameworks

One of the standout results? Phi-4-Reasoning-Plus achieved a higher first-pass score on the AIME 2025 math exam than a 70B model—an impressive feat that showcases its reasoning efficiency despite a smaller model size.

Takeaway

Microsoft’s Phi-4-Reasoning-Plus marks a turning point in AI development: high performance no longer depends on massive scale. This small but mighty model proves that with smarter training and tuning, compact LLMs can rival giants in performance—while being easier to deploy, more cost-effective, and openly available. It’s a big leap forward for accessible AI, especially for startups, educators, researchers, and businesses that need powerful reasoning without the heavy compute demands.
