In an unremarkable, glass-faced building in San Francisco, a small group of people whom most of the world will never meet makes some of the most important decisions about the future of human intelligence. You wouldn't notice anything out of the ordinary if you strolled past it on a Tuesday afternoon. A messenger. A food delivery bike. The typical noise of the city.
Inside, however, models are being trained on data sets so vast that the training runs draw more electricity than some mid-sized cities. And no one outside that building can verify what those models actually learned, or whose interests they were subtly shaped to serve.
| Category | Details |
|---|---|
| Topic | Open-Source AI vs. Proprietary (Walled Garden) AI — The Global Power Struggle |
| Key Players (Closed AI) | OpenAI (GPT-4, o3), Google DeepMind (Gemini), Microsoft (Copilot, Azure AI), Anthropic (Claude) |
| Key Players (Open-Source AI) | Meta AI (LLaMA), Mistral AI, Stability AI (Stable Diffusion), Falcon (TII), EleutherAI |
| Market Valuation Context | OpenAI valued at ~$300B (2025); SoftBank committed $40B investment; Nvidia acquired Groq licensing for ~$20B |
| Regulatory Landscape | EU AI Act (2024–2026 enforcement rollout); U.S. CDAO established; semiconductor export controls on advanced chips |
| Upcoming IPOs (2026) | OpenAI, Anthropic, Cohere, Databricks, Cerebras Systems, MiniMax, Zhipu AI among expected listings |
| Military AI Dimension | U.S. DoD “Replicator” project; China’s PLA “intelligentized warfare” doctrine; dual-use AI data centers |
| Reference & Further Reading | Andrej Karpathy’s 2025 LLM Year in Review — published December 19, 2025 |
| Core Ethical Debate | Transparency, algorithmic bias, data lock-in, censorship risk, and the “intelligence gap” between rich and poor users |
| Outlook | Hybrid AI landscape likely; open-source momentum growing; corporate consolidation accelerating simultaneously |
This is the quiet reality of the walled garden. It is also one half of the most consequential technological dispute of our time.
The other half plays out very differently: in GitHub repositories, on university servers, inside French startups and Chinese research labs, and on bedroom computers running quantized models at midnight. The open-source AI movement is quiet. It has no glass-faced headquarters.
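To make that last image concrete, here is a minimal sketch of what running a quantized open model on ordinary hardware can look like. It assumes the llama-cpp-python package is installed and that a GGUF quantization has already been downloaded; the Mistral file name below is hypothetical.

```python
# Minimal sketch: running a quantized open-weights model locally.
# Assumes: pip install llama-cpp-python, and a GGUF file already on disk.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct-q4_k_m.gguf",  # hypothetical local file
    n_ctx=4096,   # context window
    n_threads=8,  # ordinary CPU cores; no data center required
)

out = llm(
    "Explain the difference between open and closed model weights in one paragraph.",
    max_tokens=256,
)
print(out["choices"][0]["text"])
```

Nothing about this requires a lab badge or an enterprise contract, which is precisely the point.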
The tension between these two worlds, one decentralized and stubbornly public, the other proprietary and lavishly funded, has been building for years and has now reached the kind of intensity that usually makes history.

Most people probably haven't noticed yet, because the battle is being fought in a language few of us speak fluently: parameter counts, training-data licensing agreements, chip export controls. The stakes, though, translate easily enough.
In a society that is rapidly outsourcing its cognition to machines, whoever controls the most powerful AI models ultimately shapes what information people see, which businesses prosper, which decisions are automated, and how power is distributed.
OpenAI, famously, began as a nonprofit. The origin story is almost idealistic: researchers pooling their talents, pledging to benefit humanity as a whole, publishing their findings freely. It didn't last. The Microsoft partnership, the shift to a capped-profit structure, and the tiered API pricing all made financial sense, and in some cases the models got better for it. But they also became less accessible. GPT-4 sits behind a paywall.
The weights are not public. Companies that build on OpenAI's infrastructure are, whether they realize it or not, tenants in someone else's building. That isn't exactly a moral failing; it's a business model. But it is worth being honest about what it means.
Meanwhile, something changed when Meta published the LLaMA weights, first by accident and later by policy. Mistral AI, a French startup that seemed to appear out of nowhere, began releasing models that matched closed competitors on the metrics that matter to working developers.
Stable Diffusion had already made the same point in image generation: open weights, freely distributed, iterated on by thousands of unpaid contributors. The open-source community appears to be finding its voice in AI much as it did in operating systems twenty years ago. Nobody expected Linux to beat Unix, either.
What makes this conflict genuinely complicated is that the ethical lines don't run where you might expect. Open-source advocates emphasize transparency, and they are right that it matters: you cannot fully trust a model you cannot inspect.
But open weights also put capabilities that could be dangerous in the wrong hands within anyone's reach. From a safety standpoint, it remains unclear whether open-sourcing powerful models is a net positive or a net negative, and anyone claiming certainty in either direction probably hasn't thought about it hard enough.
The military dimension adds another layer of unease. China's People's Liberation Army has spent years developing what it calls "intelligentized warfare": AI systems woven into logistics, propaganda production, and battlefield decision-making. The U.S. Department of Defense has responded by establishing the Chief Digital and Artificial Intelligence Office and, through the Replicator project, deploying thousands of inexpensive autonomous systems designed specifically to redefine deterrence.
Semiconductor export controls have tried to limit China's access to the hardware needed to train frontier models. They are partly working. Unable to count on raw hardware, Chinese companies are building more efficient algorithms instead. Constraint, it turns out, fosters creativity, which is an unsettling lesson for anyone betting on the export-control strategy.
Some kind of reckoning appears to be coming in 2026. OpenAI, Anthropic, Cohere, and Databricks are all expected to go public in what looks like the IPO cycle of the decade. When that happens, whatever broader mission language survives from the founding documents will sit in plain view next to the financial interests of shareholders. That tension will be fascinating to watch. It always is.
Watching all of this, it is hard to escape the sense that the open-source movement needs to move faster than it has. The gap between frontier closed models and the best open alternatives has narrowed significantly, though it has not closed.
Meanwhile, businesses building on closed infrastructure are quietly locking in dependencies, in training workflows, fine-tuning pipelines, and institutional memory, that will be hard to unwind later if the terms or the pricing change.
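One common way engineering teams hedge against that kind of lock-in is to keep the model provider behind a thin interface, so that prompts and application logic don't hard-code a single vendor. The sketch below is illustrative rather than prescriptive: the class names are hypothetical, the model names are placeholders, and it assumes the openai and llama-cpp-python packages are available.

```python
# Sketch: a provider-agnostic interface so application code doesn't depend on one vendor.
from typing import Protocol


class TextModel(Protocol):
    def complete(self, prompt: str, max_tokens: int = 256) -> str: ...


class ClosedAPIBackend:
    """Wraps a hosted, closed-weights API (sketched with the openai client)."""

    def __init__(self, model: str = "gpt-4o"):
        from openai import OpenAI  # assumes the openai package is installed
        self._client = OpenAI()    # reads OPENAI_API_KEY from the environment
        self._model = model

    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        resp = self._client.chat.completions.create(
            model=self._model,
            messages=[{"role": "user", "content": prompt}],
            max_tokens=max_tokens,
        )
        return resp.choices[0].message.content


class LocalOpenBackend:
    """Wraps a locally run open-weights model (sketched with llama-cpp-python)."""

    def __init__(self, model_path: str):
        from llama_cpp import Llama  # assumes llama-cpp-python is installed
        self._llm = Llama(model_path=model_path, n_ctx=4096)

    def complete(self, prompt: str, max_tokens: int = 256) -> str:
        out = self._llm(prompt, max_tokens=max_tokens)
        return out["choices"][0]["text"]


def summarize(model: TextModel, document: str) -> str:
    # Application code depends only on the interface, so swapping vendors
    # later is a configuration change rather than a rewrite.
    return model.complete(f"Summarize in three sentences:\n\n{document}")
```

Whether the swap is ever exercised matters less than the fact that it stays possible.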
The most likely near-term outcome is the hybrid landscape many researchers now envision, in which some powerful models are open, others are proprietary, and users switch between them according to need and budget. But that assumes a level of informed decision-making that most companies and individuals do not yet have. The intelligence gap is real.
Not everyone can run their own fine-tuned model locally, evaluate a model's training data, or audit its outputs for bias. That asymmetry is why the walled garden remains commercially durable, and why the open-source movement's significance is so hard to perceive from outside the technical community.
The trillion-dollar figure in the headline isn't symbolic. Add up the infrastructure pouring into data centers, the government contracts, the defense spending, and the combined market capitalizations of the firms vying for AI supremacy, and you get there quickly. What that money ultimately buys is influence over the systems that will increasingly mediate human experience. That is worth knowing, even from the sidewalk outside a glass-faced building on a Tuesday afternoon.
