[Illustration: translucent watercolor layers revealing hidden patterns and code beneath]
groundwork · 3 min read

From Algorithm Opacity to AI Opacity

Explore how the questions we asked about search engines and social feeds are resurfacing with AI, and why familiarity might be our greatest advantage.


The Brief

This article traces the progression of algorithmic opacity from Google search rankings to Facebook's News Feed to modern AI language models. It argues that the critical thinking skills people developed while questioning search results and social media feeds transfer directly to evaluating AI outputs, and it identifies what makes AI opacity distinctly more challenging.


What is algorithmic opacity?
Algorithmic opacity is the inability to see why a system produced a particular output. It has been present since the early days of Google search, deepened with Facebook's News Feed ranking, and reached a new level with AI language models whose billions of parameters make outputs difficult for even their own engineers to fully explain.
How does AI opacity differ from search engine or social media opacity?
AI adds sharper edges to familiar opacity problems. AI models can generate plausible falsehoods with perfect confidence, default to perspectives that may carry hidden bias, and cannot identify their own sources. Unlike a page of search results, an AI answer offers no list of links to verify against.
What skills from social media help with evaluating AI?
The same critical instincts people developed questioning search rankings and viral content transfer to AI evaluation. Asking why certain results appeared, checking sources, and wondering who benefits from a particular output are the same mental muscles needed to assess AI-generated answers.
Why does the article say we already have a playbook for AI opacity?
People have spent two decades navigating opaque systems, from Google search to social media feeds. The core questions remain the same: Why did I see this? Who benefits from this output? Can I trust it? The article argues we already have these antibodies and need to remember to apply them to AI.

A friend called me last week, unsettled. "I asked ChatGPT a question and got a confident answer. But I have no idea why it said what it said. How am I supposed to trust that?"

I laughed. Not at her, but at the familiar shape of the question.

[Illustration: a timeline showing the evolution from search to social to AI. Caption: The black box got bigger, but it was never transparent.]

Twenty years ago, I had the same feeling about Google. Type a query, get results. Why those results? Why in that order? Nobody outside Google really knew.[1] We trusted the black box because the outputs seemed good. Good enough, anyway.

Then came Facebook's News Feed. What determines what you see first thing in the morning? Engagement metrics, social signals, advertiser interests, and a cocktail of factors the company itself couldn't fully explain. The opacity deepened, but so did our dependence. By the time we thought to question it, the feed had become our window to the world.

Now we have AI models with billions of parameters. Why did it say that? The honest answer, even from the engineers who built it: we're not entirely sure.

The Muscle We Already Have

Here's what I told my friend: she's been training for this moment her whole digital life.

She learned to question why certain search results appeared at the top. She learned to doubt viral content, to check sources, to wonder who benefits when a particular story shows up in her feed. These instincts didn't vanish. They transferred.

[Illustration: a person looking at their reflection in a dark screen. Caption: We've always been looking at ourselves through systems we don't fully understand.]

The person who questions why an AI said something is using the same mental muscle as the person who questions why a headline appeared in their timeline. We've developed antibodies, however imperfect.

What's Actually New

The old questions still apply: Why did I see this? Who benefits from this output? Can I trust it? But AI adds sharper edges.

AI can generate plausible falsehoods with perfect confidence.[2] When AI writes, whose perspective does it default to? And perhaps most important: what's the source, when the model itself can't tell you?

We've navigated opacity before. The playbook isn't new. We just need to remember we have one.


References

[1] Brin, S. & Page, L. (1998). "The Anatomy of a Large-Scale Hypertextual Web Search Engine." Stanford InfoLab.

[2] Weidinger, L., et al. (2021). "Ethical and social risks of harm from Language Models." DeepMind.
