Watercolor of a solitary figure standing on a shore looking out across open water toward a distant landmass

The Island

AI without the internet doesn't get stale. It gets stranded. It learned everything from us, and we haven't stopped talking.


The Brief

This article argues that AI models have converged to equivalent capability, making the model itself irrelevant as a differentiator. Drawing on personal experience running an open-source model with web search, and on research into model collapse, it makes the case that AI without internet access is not merely limited but useless.


Why are AI models no longer a competitive advantage?
Stanford's 2025 AI Index shows that models which once needed 540 billion parameters now need 3.8 billion to hit the same scores. Inference costs dropped 280-fold. The gap between top models collapsed to measurement error. When every model performs the same, the model is not the differentiator.
What makes AI useful if the model doesn't matter?
Internet access. Connection to the living, constantly updating output of human beings, including research, code, news, and forum posts. A free open-source model with web access outperforms a premium model without it. The connection is the capability.
What is model collapse and why does it matter?
Model collapse occurs when AI trains on its own synthetic output instead of human-generated content. Researchers call it the computer-science version of inbreeding. Without fresh human contribution, models degrade rather than improve, proving AI's fundamental dependency on ongoing human output.
What should businesses focus on instead of choosing an AI model?
Focus on what the AI can connect to. The question is not which model to use but what live information the model can access. Internet connectivity and real-time human-generated content separate useful AI from useless AI.

Two years ago I downloaded Llama 3.0 from HuggingFace and ran it on my own machine. Meta's open-source model. Free. At the time, ChatGPT was the product everyone was paying for. Billions in funding. Massive infrastructure. The most famous AI on the planet.

Llama beat it.

Not because it was smarter. Llama 3.0 was a smaller model by every measure. But my setup gave it something ChatGPT didn't have yet. Access to the internet.

That's it. A free model on a home computer, plugged into the web, outperformed the most funded AI product in history. The difference wasn't intelligence. It was a cable.
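For anyone who wants to see how little "plugged into the web" actually means, here is a minimal sketch of the idea. It is an illustration, not my exact setup: it assumes a local Ollama-style server on localhost:11434 with a llama3 model pulled, Python's requests library, and a placeholder URL, and it simply pastes a freshly fetched page into the prompt before asking the local model to answer.

    # Minimal sketch: ground a local open model in a live web page.
    # Assumptions: an Ollama-style server at localhost:11434, a "llama3" model
    # pulled locally, and the requests library. The URL below is a placeholder.
    import requests

    def ask_with_web_context(question: str, url: str) -> str:
        # Fetch fresh, human-written content from the live web.
        page_text = requests.get(url, timeout=10).text[:4000]

        # Wrap the page around the question so the model answers from current
        # information instead of whatever was frozen into its weights.
        prompt = (
            "Use the page below if it helps answer the question.\n\n"
            f"--- page ---\n{page_text}\n--- end page ---\n\n"
            f"Question: {question}"
        )

        # Send the grounded prompt to the locally running model.
        reply = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": "llama3", "prompt": prompt, "stream": False},
            timeout=120,
        )
        return reply.json()["response"]

    if __name__ == "__main__":
        print(ask_with_web_context(
            "What changed this week?",
            "https://example.com/news",
        ))

Everything the model "knows" about this week arrives through that one HTTP request. The model itself is interchangeable.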

Watercolor of a modest home desk with a computer and a single glowing cable stretching out through a window toward a vast open landscape. Billions in funding. One Ethernet cable.

I figured OpenAI would catch up, and they did. When they finally added web search, they described the plugin as giving ChatGPT "eyes and ears."1 Think about that. The most talked-about AI product in the world shipped without the ability to look anything up. The fix was described, by the people who built it, as giving their creation the ability to see and hear.

Then Google added AI to search. Then Bing. And suddenly everyone had the same realization I'd had on my home machine a year earlier.

The model was never the point.

I started paying attention to the benchmarks after that. Stanford's 2025 AI Index put numbers to what I'd already seen.2 Models that once needed 540 billion parameters to hit a score now needed 3.8 billion to match it. A 142-fold reduction. Inference costs dropped 280-fold in eighteen months. The gap between every top model collapsed to what researchers called "measurement error territory." MIT Sloan asked the obvious question. "How can AI be the centerpiece of a sustained competitive advantage when everyone has it?"3

It can't. The models have plateaued. Every major model converged to roughly the same capability, and the race to be the smartest in the room ended in a tie.

So what actually makes the difference? The same thing that made Llama beat ChatGPT on my kitchen table. Connection to what human beings are actually producing. The arguments, the corrections, the code commits, the forum posts, the news.

Watercolor of countless handwritten letters, typed pages, and glowing screens flowing together like tributaries into a wide river. Seven billion people still typing.

Cut that connection and something worse than stale data happens. Researchers found that when models train on their own synthetic output instead of human content, they degrade.4 They called it "model collapse." One researcher described it as "the computer-science version of inbreeding." We built this intelligence from everything humanity ever wrote, and it turns out it can't survive without us continuing to write.
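You can watch the statistical version of that inbreeding in a toy experiment. The sketch below is my own illustration, not the cited researchers' setup: a simple Gaussian stands in for the model, and each "generation" is fit only to a small batch of samples from the one before it.

    # Toy illustration of model collapse: each generation "trains" only on the
    # previous generation's synthetic output. A Gaussian stands in for the model;
    # the sample size and generation count are arbitrary choices for illustration.
    import random
    import statistics

    random.seed(0)

    # Generation 0: the "human" data distribution, with real spread.
    mean, stdev = 0.0, 1.0

    for generation in range(1, 101):
        # Draw a small batch of synthetic data from the current model...
        synthetic = [random.gauss(mean, stdev) for _ in range(10)]
        # ...and refit the next model on that synthetic batch alone.
        mean = statistics.fmean(synthetic)
        stdev = statistics.stdev(synthetic)
        if generation % 20 == 0:
            print(f"generation {generation:3d}: spread = {stdev:.4f}")

Run it and the spread tends to drain away within a few dozen generations, a copy of a copy of a copy. The only way to keep it alive is to keep injecting fresh samples from the original, human distribution, which is the whole point.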

If the internet went dark tomorrow, every AI on the planet would be stranded. Not limited. Not outdated. Useless.

John Donne wrote it four hundred years ago. No man is an island entire of itself. He meant that isolation doesn't just limit a person. It diminishes them. I keep coming back to that line. Turns out the same is true for the intelligence we built from a million human voices.

People keep asking me which model to use. Wrong question. They're all the same engine now. The only question that matters is what you connect it to.

No model is an island. The ones that work are a piece of the continent.


References

  1. CMSWire. (2023). "OpenAI Incorporates Web Search Into ChatGPT With Web Browser Plugin."

  2. Stanford HAI. (2025). "AI Index 2025: State of AI in 10 Charts."

  3. Wingate, D., Burns, B.L., & Barney, J.B. (2025). "Why AI Will Not Provide Sustainable Competitive Advantage." MIT Sloan Management Review

  4. The Week. (2024). "All-Powerful, Ever-Pervasive AI Is Running Out of Internet."
