We’ve been here before.
At so many pivotal moments in our adoption of digital technology, people and businesses mistake a company’s walled garden for the broader, more powerful network underneath. In the 1990s, many people genuinely believed AOL was the internet. When I left Facebook in 2013, hundreds of people asked how I would function “without the web.” Over and over, packaged products—operating systems, app stores, streaming services—eclipse quieter, less expensive, bottom-up alternatives like Linux or torrents. We forget they exist.
Today, we’re making the same mistake with large language models.
To many of us, “AI” now means choosing among a handful of commercial LLMs such as ChatGPT, Claude, Gemini, or Grok—and perhaps even choosing the one that matches our cultural or political sensibilities. But these systems share important structural limitations: they are centralised, expensive, energy-intensive operations that depend on massive data centers, rare chips, and proprietary data stores. Because they’re trained on roughly the same public internet, they also tend to generate the same generalised, flattened results. Companies using them wholesale often end up replacing their own expertise with recombinations of whatever is already out there.
This is how AI will do to businesses what social media did to publications, and what the early web did to retailers who went online without a strategy. Using the same generic tools as everyone else produces the same generic results. Worse, outsourcing core knowledge processes to a black-box service replaces the long-term development of internal capacity—especially junior employees learning through real practice—with cheaper but future-eroding automation.
Commercial language models are optimised for generality and scale. That scale is impressive, but it creates real constraints for organisations. Centralised LLMs require:

- massive data centers and energy-intensive infrastructure
- rare, specialised chips
- proprietary data stores and billion-dollar training runs
For many companies, these models become another outsourced dependency. Every time a commercial LLM updates itself—which can happen weekly—your workflows change underneath you. Your proprietary data may be exposed to third-party APIs. And your differentiation erodes, because the model’s knowledge is drawn from the same public corpus available to your competitors.
Meanwhile, the narrative surrounding AI has encouraged businesses to believe that this centralised path is the only viable one—that achieving meaningful AI capability requires enormous data centers, billion-dollar training runs, and participation in a global race toward Artificial General Intelligence.
But none of this is a requirement for using AI productively.
You do not need frontier-scale models to benefit from AI. A growing ecosystem of open-source, locally deployable language models provides organisations with far more autonomy, privacy, and control.
A $100 Raspberry Pi—or any modest home or office server—can run a compact open-source model using tools like Ollama or GPT4All. These models don’t “learn” on the fly the way people do, but they can produce high-quality responses while remaining completely contained within your own environment. More importantly, they can be paired with a private knowledge base using retrieval systems. That means the model can reference your own research library, internal documentation, or curated public resources like Wikipedia—without training on the entire internet, and without sending your data to an external provider.
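To make this concrete, here is a minimal sketch of that retrieval pattern, assuming the ollama Python package and two locally pulled models, one for embeddings and one for chat. The model names and sample documents are illustrative, not prescriptive:

```python
# A minimal local retrieval sketch. Assumes Ollama is running locally
# with an embedding model and a chat model already pulled; the model
# names and sample documents below are illustrative.
import ollama

# Your private knowledge base: internal docs, research notes, etc.
documents = [
    "Q3 field study: rural branches prefer tools that work offline.",
    "Style guide: every report opens with a plain-language summary.",
]

def embed(text):
    # Embeddings are computed locally; nothing leaves your machine.
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

# Index the knowledge base once.
index = [(doc, embed(doc)) for doc in documents]

def ask(question):
    # Retrieve the most relevant document, then answer grounded in it.
    q = embed(question)
    context = max(index, key=lambda pair: cosine(q, pair[1]))[0]
    reply = ollama.chat(
        model="llama3.2",
        messages=[{
            "role": "user",
            "content": f"Using this internal note:\n{context}\n\n{question}",
        }],
    )
    return reply["message"]["content"]

print(ask("What did the field study say about rural branches?"))
```

The point is not the thirty-odd lines of Python; it is that the entire loop, from indexing to answering, runs on hardware you own.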
These systems build on your own data instead of extracting it, strengthen your institutional memory instead of commoditising it, and run at a fraction of the cost.
This approach allows an organisation to create an AI system aligned with its actual priorities, values, and domain expertise. It becomes a private assistant rather than a generalised product shaped by the incentives of a trillion-dollar platform. And the alternative doesn’t have to be a solitary effort.
Neighborhoods, campuses, or company departments can form a “mesh network”—a set of devices connected directly through Wi-Fi or cables rather than through the public internet. One node can host a local model; others can contribute or withhold their own data stores. Instead of a single company owning the infrastructure and the knowledge, you get something closer to a community data commons or a digital library system.
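As a sketch of how a node on such a network might use a shared model, the snippet below assumes one machine on the local network runs Ollama’s HTTP API on its default port; the address is hypothetical, and the traffic never touches the public internet:

```python
# Query a neighbour's model node over the local network. Assumes that
# node runs Ollama's HTTP API (default port 11434); the address below
# is hypothetical, and all traffic stays on the LAN.
import json
import urllib.request

NODE = "http://192.168.1.50:11434"  # the machine hosting the shared model

def ask_node(prompt):
    payload = json.dumps({
        "model": "llama3.2",  # whichever model the host node serves
        "prompt": prompt,
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        f"{NODE}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

print(ask_node("Summarise this week's community garden notes."))
```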
Projects like the High Desert Institute’s LoreKeeper’s Guild are already experimenting with this approach. Their “Librarian” initiative envisions local libraries acting as the data hubs for mesh-networked AI systems—resilient enough to function even during connectivity disruptions. But their deeper innovation is architectural. These systems give organisations access to powerful language capabilities without subscription costs, lock-in, data extraction, or exposure of proprietary information.
Local or community models enable organisations to:

- keep proprietary data entirely in-house
- align AI with their own priorities, values, and domain expertise
- avoid subscription costs, lock-in, and data extraction
- keep functioning even during connectivity disruptions
And they do so using energy and computing resources that are orders of magnitude lower than those required by frontier-scale models.
The more institutions adopt localised or mesh-based AI, the less they are compelled to fund the centralised companies racing toward AGI. Those companies have made an effective argument: that sophisticated AI is only possible through their services. But much of what organisations pay for is not their own productivity—it is the construction of massive server farms, procurement of rare chips, and long-term bets on energy-intensive infrastructure.
By contrast, in-house or community-run systems can be deployed once and maintained indefinitely. A week of setup can eliminate a decade of subscription payments. A small rural library has already demonstrated the feasibility of operating a self-hosted LLM node; a Fortune 500 company should have no trouble doing the same.
Still, history suggests that most organisations will choose the convenient option rather than the autonomous one. Few people accessed the early internet directly; they chose AOL. Today, many will continue to choose centralised AI services, even when they offer the least control. But what social media companies did to businesses that mistook them for “the internet” will be mild compared to what comes when companies mistake these proprietary interfaces for “AI” itself.
Decentralised AI already exists. The question now is whether we’ll choose to use it.
ABOUT THE AUTHOR
Douglas Rushkoff is Scholar-in-Residence at ANDUS Labs, the host of the Team Human podcast, and the author of more than twenty books about technology and society.