
19.6.25

Andrej Karpathy Declares the Era of Software 3.0: Programming in English, Building for Agents, and Rewriting the Stack

 Andrej Karpathy on the Future of Software: The Rise of Software 3.0 and the Agent Era

At a packed AI event, Andrej Karpathy—former Director of AI at Tesla and founding member of OpenAI—delivered a compelling address outlining a tectonic shift in how we write, interact with, and deploy software. “Software is changing again,” Karpathy declared, positioning today’s shift as more radical than anything the industry has seen in 70 years.

From Software 1.0 to 3.0

Karpathy breaks down the evolution of software into three stages:

  • Software 1.0: Traditional code written explicitly by developers in programming languages like Python or C++.

  • Software 2.0: Neural networks trained via data and optimized using backpropagation—no explicit code, just learned weights.

  • Software 3.0: Large Language Models (LLMs) like GPT-4 and Claude, where natural language prompts become the new form of programming.

“We are now programming computers in English,” Karpathy said, highlighting how the interface between humans and machines is becoming increasingly intuitive and accessible.
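To make "programming in English" concrete, here is a minimal sketch in which an English instruction plays the role of the program and the model plays the role of the interpreter. It assumes the OpenAI Python SDK with an API key in the environment; the model name, prompt, and email text are illustrative placeholders, and any chat-style LLM API would work the same way.

```python
# A minimal sketch of "programming in English": the prompt is the program.
# Assumes the OpenAI Python SDK (openai>=1.0) and an API key in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The "source code" of a Software 3.0 program is a natural-language spec.
prompt = (
    "You are a function that extracts the sender, date, and a one-sentence "
    "summary from an email. Return the result as JSON."
)

email = "From: ana@example.com\nDate: 2025-06-19\n\nHi team, the launch moves to Friday."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model works here
    messages=[
        {"role": "system", "content": prompt},
        {"role": "user", "content": email},
    ],
)

print(response.choices[0].message.content)
```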

GitHub, Hugging Face, and the Rise of LLM Ecosystems

Karpathy draws powerful parallels between historical shifts in tooling: GitHub was the hub for Software 1.0; Hugging Face and similar platforms are now becoming the repositories for Software 2.0 and 3.0. Prompting an LLM is no longer just a trick—it’s a paradigm. And increasingly, tools like Cursor and Perplexity represent what he calls partial autonomy apps, with sliding scales of control for the user.

In these apps, humans perform verification while AIs handle generation, and GUIs become crucial for maintaining speed and safety.
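A toy sketch of that generation-plus-verification loop is shown below. The llm_propose function is a stand-in for a real model call; the point is that nothing leaves the loop until a human has accepted it.

```python
# Sketch of the "AI generates, human verifies" loop behind partial-autonomy apps.
# llm_propose() is a placeholder; a real app would return a diff, a paragraph,
# or a query for the user to review in a GUI.

def llm_propose(task: str) -> str:
    # Placeholder: a real implementation would call an LLM here.
    return f"PROPOSED CHANGE for: {task}\n- rename `usr` to `user` in handlers.py"

def human_in_the_loop(task: str) -> str | None:
    while True:
        proposal = llm_propose(task)
        print(proposal)
        verdict = input("accept / retry / abort? ").strip().lower()
        if verdict == "accept":
            return proposal          # only verified output leaves the loop
        if verdict == "abort":
            return None
        # any other answer: ask the model to try again

if __name__ == "__main__":
    human_in_the_loop("clean up variable names")
```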

AI as Utilities, Fabs, and Operating Systems

Karpathy introduces a powerful metaphor: LLMs as a new form of operating system. Just as Windows or Linux manage memory and processes, LLMs orchestrate knowledge and tasks. He explains that while LLMs are consumed with the ubiquity and metered access of utilities (like electricity), they also require massive capital expenditure and infrastructure, akin to semiconductor fabs.

But the most apt analogy, he claims, is that LLMs are emerging operating systems: they have multimodal abilities, memory management in the form of context windows, and apps that run across multiple providers, much like the early days of Linux versus Windows.
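The memory-management analogy maps naturally onto how LLM apps juggle a finite context window. The sketch below, with token counts crudely approximated by word counts, shows one common pattern: keep the system prompt resident and page in only the most recent messages that fit the budget. It is an illustrative pattern, not any particular product's implementation.

```python
# Sketch of the "context window as memory" idea: like an OS paging RAM,
# an LLM app must decide which messages stay resident in the context window.
# Token counts are approximated by word counts to keep the example dependency-free.

def approx_tokens(text: str) -> int:
    return len(text.split())

def fit_to_context(messages: list[dict], budget: int) -> list[dict]:
    """Keep the system prompt, then the most recent messages that fit the budget."""
    system, rest = messages[0], messages[1:]
    kept, used = [], approx_tokens(system["content"])
    for msg in reversed(rest):                 # newest messages are most relevant
        cost = approx_tokens(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))

history = [
    {"role": "system", "content": "You are a coding assistant."},
    {"role": "user", "content": "Explain Python decorators."},
    {"role": "assistant", "content": "A decorator wraps a function..."},
    {"role": "user", "content": "Now show an example with arguments."},
]
print(fit_to_context(history, budget=20))
```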

Vibe Coding and Natural Language Development

Vibe coding—the concept of programming through intuition and natural language—has exploded, thanks in part to Karpathy’s now-famous tweet. “I can’t program in Swift,” he said, “but I built an iOS app with an LLM in a day.”

The viral idea is about empowerment: anyone who speaks English can now create software. And this unlocks massive creative and economic potential, especially for young developers and non-programmers.

The Next Frontier: Building for AI Agents

Karpathy argues that today's digital infrastructure was designed for humans and GUIs, not for autonomous agents. He proposes conventions like llms.txt (analogous to robots.txt) to make content agent-readable, and praises platforms like Vercel and Stripe that are transitioning their documentation and tooling to be LLM-native.

“You can’t just say ‘click this’ anymore,” he explains. Agents need precise, machine-readable instructions—not vague human UX metaphors.
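As an illustration, a hypothetical llms.txt for a documentation site might look like the sketch below: a short, markdown-style index that tells an agent where the machine-readable content lives, instead of leaving it to scrape HTML meant for human browsers. The site name and paths are invented for this example.

```
# Example Docs (hypothetical llms.txt sketch)

> Concise, agent-readable index of this site: the content an LLM needs,
> without the navigation, scripts, and styling meant for human browsers.

## API reference
- /docs/api.md: Endpoints, parameters, and error codes in plain markdown

## Guides
- /docs/quickstart.md: Install, authenticate, and make a first request
```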

He also showcases tools like DeepWiki and Gitingest that convert GitHub repos into digestible formats for LLMs. In short, we must rethink developer experience not just for humans, but for machine collaborators.

Iron Man Suits, Not Iron Man Robots

Karpathy closes with a compelling analogy: most AI applications today should act more like Iron Man suits (human-augmented intelligence) rather than fully autonomous Iron Man robots. We need GUIs for oversight, autonomy sliders to control risk, and workflows that let humans verify, adjust, and approve AI suggestions in tight loops.
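One way to picture an autonomy slider is as a policy that decides how much the assistant may do before a human has to sign off. The sketch below is illustrative only, loosely inspired by editor assistants such as Cursor; the levels and thresholds are invented for the example.

```python
# Sketch of an "autonomy slider": the same assistant, but the level controls
# how much it may do before a human verifies the result.
from enum import IntEnum

class Autonomy(IntEnum):
    SUGGEST = 1      # propose a completion; human accepts each one
    EDIT_FILE = 2    # apply edits to one file; human reviews the diff
    EDIT_REPO = 3    # change many files; human reviews before commit

def needs_human_approval(level: Autonomy, files_touched: int) -> bool:
    if level == Autonomy.SUGGEST:
        return True                      # every keystroke-level change is reviewed
    if level == Autonomy.EDIT_FILE:
        return files_touched > 1         # anything beyond one file escalates
    return files_touched > 10            # even high autonomy keeps a tripwire

print(needs_human_approval(Autonomy.EDIT_FILE, files_touched=3))  # True
```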

“It’s not about replacing developers,” he emphasizes. “It’s about rewriting the stack, building intelligent tools, and creating software that collaborates with us.”


Takeaway:
The future of software isn't just about writing better code. It's about redefining what code is, who gets to write it, and how machines will interact with the web. Whether you're a developer, founder, or student, learning to work with and build for LLMs isn't optional: they are shaping up to be the operating system of the next era of computing.




10.6.25

OpenAI Surpasses $10 Billion in Annual Recurring Revenue as ChatGPT Adoption Skyrockets

 OpenAI has crossed a significant financial milestone, achieving an annual recurring revenue (ARR) run rate of $10 billion as of mid-2025. This growth marks a nearly twofold increase from the $5.5 billion ARR reported at the end of 2024, underscoring the explosive rise in demand for generative AI tools across industries and user demographics.

According to insiders familiar with the company’s operations, this growth is largely fueled by the surging popularity of ChatGPT and a steady uptick in the use of OpenAI’s APIs and enterprise services. ChatGPT alone now boasts between 800 million and 1 billion users globally, with approximately 500 million active users each week. Of these, 3 million are paid business subscribers, reflecting robust interest from corporate clients.


A Revenue Surge Driven by Strategic Products and Partnerships

OpenAI’s flagship products—ChatGPT and its developer-facing APIs—are at the heart of this momentum. The company has successfully positioned itself as a leader in generative AI, building tools that range from conversational agents and writing assistants to enterprise-level automation and data analysis platforms.

Its revenue model is primarily subscription-based. Businesses pay to access advanced features, integration capabilities, and support, while developers continue to rely on OpenAI’s APIs for building AI-powered products. With both individual and corporate users increasing rapidly, OpenAI’s ARR has climbed steadily.


Strategic Acquisitions Fuel Growth and Innovation

To further bolster its capabilities, OpenAI has made key acquisitions in 2025. Among the most significant are:

  • Windsurf (formerly Codeium): Acquired for $3 billion, Windsurf enhances OpenAI’s position in the AI coding assistant space, providing advanced code completion and debugging features that rival GitHub Copilot.

  • io Products: A startup led by Jony Ive, the legendary former Apple designer, was acquired for $6.5 billion. This move signals OpenAI’s intent to enter the consumer hardware market with devices optimized for AI interaction.

These acquisitions not only broaden OpenAI’s product ecosystem but also deepen its influence in software development and design-forward consumer technology.


Setting Sights on $12.7 Billion ARR and Long-Term Profitability

OpenAI’s trajectory shows no signs of slowing. Company forecasts project ARR reaching $12.7 billion by the end of 2025, a figure that aligns with investor expectations. The firm recently closed a major funding round led by SoftBank, bringing its valuation to an estimated $300 billion.

Despite a substantial operating loss of $5 billion in 2024 due to high infrastructure and R&D investments, OpenAI is reportedly aiming to become cash-flow positive by 2029. The company is investing heavily in building proprietary data centers, increasing compute capacity, and launching major infrastructure projects like “Project Stargate.”


Navigating a Competitive AI Landscape

OpenAI’s aggressive growth strategy places it ahead of many competitors in the generative AI space. Rival company Anthropic, which developed Claude, has also made strides, recently surpassing $3 billion in ARR. However, OpenAI remains the market leader, not only in revenue but also in market share and influence.

As the company scales, challenges around compute costs, user retention, and ethical deployment remain. However, with solid financial backing and an increasingly integrated suite of products, OpenAI is positioned to maintain its leadership in the AI arms race.


Conclusion

Reaching $10 billion in ARR is a landmark achievement that cements OpenAI’s status as a dominant force in the AI industry. With a growing user base, major acquisitions, and a clear roadmap toward long-term profitability, the company continues to set the pace for innovation and commercialization in generative AI. As it expands into hardware and deepens its enterprise offerings, OpenAI’s influence will likely continue shaping the next decade of technology.
