Exclusive: From Search Engine to Answer Engine, Who Decides What You Know?

By: Kasun Illankoon

Something quiet and enormous happened to the internet over the last two years, and most people missed it because it looked like a convenience upgrade.


You type a question. Instead of a list of ten blue links pointing to ten different websites — each with a different author, a different perspective, a different set of facts — you get a single, confident paragraph. An answer. The search engine has become an answer engine. And that shift, as mundane as it looks on your screen, is one of the most profound changes to how humanity accesses knowledge since the printing press.

The numbers tell the story. As of mid-2025, Google's AI Overviews were appearing on nearly 20% of all searches — AI-generated summaries that sit above every other result, often making further clicking unnecessary. A landmark study by SparkToro found that 58.5% of all Google searches now end with zero clicks — the user gets their answer from Google's interface itself and never visits a single website. Meanwhile, ChatGPT has 800 million weekly active users as of early 2026, many of whom use it as their primary means of finding information. Perplexity AI — which explicitly brands itself an "answer engine" — processed 780 million queries in May 2025 alone.

We are no longer navigating the web. We are being told what it says.

The Death of the Blue Link

To understand why this matters, you need to understand what the old system actually was — and why it was more fragile, and more important, than it looked.

The original search engine bargain was simple and bilateral. Publishers — newspapers, universities, bloggers, independent researchers, niche experts — created content and allowed Google to index it. In return, Google sent them readers. Traffic. The lifeblood of the digital economy. It was an ecosystem, not unlike a coral reef: imperfect, often chaotic, but extraordinarily biodiverse.

Google shattered that bargain. In August 2024, a U.S. federal judge ruled that Google had unlawfully monopolized online search — a ruling that affirmed what anyone paying attention already knew. Then, even as the legal proceedings continued, Google quietly accelerated the very behavior at the heart of the complaint. It rolled out AI Overviews: summaries that sit at the top of search results, synthesize content from publishers' websites, and deliver the answer without ever sending the user to the source.

In June 2025, Penske Media Corporation — publisher of Rolling Stone, Variety, and dozens of other major titles — filed a federal antitrust suit against Google. The language in the filing was blunt: "Google's search monopoly leaves publishers with no choice: acquiesce — even as Google cannibalizes the traffic publishers rely on — or perish."

The Digital Content Next member survey, covering 19 major media companies from national newsrooms to global entertainment brands, found the same pattern across eight weeks in May and June 2025: median Google Search referral traffic was down nearly every single week. News brands fell 16% in one week alone. The publishers whose content trains the AI models that now replace them are watching their audiences disappear into a walled garden that benefits only Google.

The Independent Publishers Alliance put it starkly in a formal antitrust complaint to the European Commission in July 2025: "Google's core search engine service is misusing web content for Google's AI Overviews, which have caused, and continue to cause, significant harm to publishers, including news publishers in the form of traffic, readership and revenue loss."

Who Gets to Be the Answer?

The publisher question is urgent. But it is not the deepest question. Beneath the economics lies something more unsettling: a philosophical problem about who controls the flow of knowledge.

When ten blue links appeared on your screen, you made a choice. You decided which source to trust. You might read three of them and notice they disagreed. You encountered friction — the productive kind, the kind that forces independent thought. The answer engine removes that friction entirely. It has already decided. One answer. Delivered with authority. Presented in fluent, confident prose.

Researchers at Google DeepMind, publishing in 2024, warned that because AI systems can generate realistic content indistinguishable from human-produced material, they could — in their own words — "accelerate the spread of misinformation and prevent truth discernment."

A landmark 2025 scoping review published in AI & Society, analyzing 24 empirical studies on AI and misinformation, reached a damning conclusion: large language models can generate convincing misinformation by exploiting cognitive biases, and users over-trust AI outputs — especially when the text is fluent and linguistically sophisticated. The researchers introduced a concept they called "epistemic ambivalence" — the capacity of AI to simultaneously construct and erode public knowledge.

Mark Coeckelbergh, a philosopher of technology at the University of Vienna, published research in 2025 arguing that AI recommendation and search systems are actively diminishing what he called epistemic agency — the human capacity to form and revise our own beliefs. "The use of artificial intelligence and data science, while offering more information, risks influencing the formation and revision of our beliefs in ways that diminish our epistemic agency," he wrote in the journal Social Epistemology.

Put plainly: when an AI gives you an answer, it is not just saving you time. It is making decisions about what counts as knowledge, which sources are reliable, which perspectives are worth including, and which are not. Those decisions are made by engineers, product managers, training datasets, and the companies that wrote the code. They are not made by you.

Bias Baked In

This is not a hypothetical. Researchers have already documented what happens when you let AI models decide what counts as credible.

A peer-reviewed study in Philosophy & Technology in 2025 traced the political biases embedded in large language models back to the process of Reinforcement Learning from Human Feedback (RLHF) — the training step in which human labelers rate AI outputs for quality. The research found that AI models tend to reflect the views of people who are, in the words of one cited study, "liberal, high income, well-educated, and not religious." Those demographics align closely with the people OpenAI and other major AI companies actually employ as annotators.

The paper concluded bluntly: "Because of inherent biases we cannot see language models as distinct or as autonomous agents free from those who have created and trained them."

This matters enormously on contested political, scientific, and social questions. If you ask a search engine whether the minimum wage should be raised, it shows you ten articles from ten different viewpoints. If you ask an answer engine the same question, it synthesizes a response shaped by the values and training choices of a small team of engineers in San Francisco. That is not neutrality. It is invisible editorializing at planetary scale.

The Pew Research Center's 2025 survey on AI's long-term impact found that 56% of AI experts believe AI will negatively affect the news people receive over the next 20 years. Just 18% believe it will be positive. For elections specifically — where what voters know shapes the fate of democracies — 61% of AI experts expect harm, and only 11% expect benefit.

The Web That Feeds Itself — Then Starves

There is a slow-moving catastrophe embedded in this system that most people have not yet noticed: AI answer engines are consuming the very ecosystem that makes their answers possible.

The models powering today's answer engines were trained on the open web — decades of journalism, academic writing, expert blogs, forum discussions, investigative reporting. That knowledge base exists because, for a generation, search engines sent traffic to publishers and publishers reinvested that traffic revenue into producing more knowledge. Now, AI Overviews and answer engines siphon that traffic back to the platforms, cutting off the funding that produces the next generation of content.

Digital Content Next described the dynamic precisely: "If we allow the search monopoly to wall off the web behind AI-generated summaries, we'll end up with fewer sources, weaker journalism, and a less informed public."

Research published in Wiley's journal on data mining in 2025 identified what it called "token recycling syndrome" — the phenomenon where large language models increasingly encounter their own outputs recycled back into training data, amplifying stylistic quirks and factual errors. In other words: if the web dies because no one can afford to produce original journalism, AI models will increasingly train on AI-generated content, producing an epistemological feedback loop of compounding error and homogenized perspective.

The Brookings Institution, analyzing the Google antitrust case in October 2025, identified this fracture clearly. The open web has effectively split into three distinct information markets: traditional search, AI search that summarizes, and conversational AI that replaces search altogether. Each transition concentrates power further. Each reduces the diversity of what gets seen.

Who Decides — And Who Should?

The question at the center of all of this is deceptively simple: in a world where a single AI interface mediates between billions of people and all of human knowledge, who gets to control that interface — and who is accountable when it gets something wrong?

Right now, the answer is: a handful of corporations, operating under the loosest possible regulatory frameworks, guided primarily by the incentive to keep users inside their platforms. Google's revenue from traditional search advertising is estimated at $175–200 billion annually. Every answer delivered by AI Overviews that prevents a user from clicking through to a publisher is a user who stays on Google's platform, inside Google's advertising ecosystem, generating revenue for Google.

Perplexity — valued at $20 billion by 2026 — was sued by major publishers for reproducing paywalled content without credit. In mid-2024, Forbes publicly accused the platform of essentially plagiarizing a proprietary investigative article. The company shrugged off those complaints, kept growing, and went on to raise more money.

The U.S. federal courts moved against Google's search monopoly — slowly, with remedies that stopped short of the structural breakup the DOJ sought. The EU opened a probe into Google's use of online content for AI training in December 2025. But enforcement lags innovation by years, and the market consolidates faster than any regulator can keep pace with.

Judge Amit Mehta, writing in the Google antitrust case, captured the problem with rare judicial candor: "Unlike the typical case where the court's job is to resolve a dispute based on historic facts, here the court is asked to gaze into a crystal ball and look to the future."

That is the honest situation we are all in. We are navigating a transformation of epistemic infrastructure — the systems by which societies learn, debate, and decide — in real time, without adequate maps.

The Real Question

For decades, the internet was legitimately revolutionary because it democratized access. Anyone could publish. Anyone could search. Anyone could, in theory, find the dissenting view, the obscure expert, the original source. That was not a perfect system. It was noisy, manipulable, and prone to misinformation. But it was plural. It contained multitudes.

The answer engine collapses that plurality into a single voice. Confident. Fluent. Omnipresent. And answerable to no one but its shareholders.

The question of who decides what you know is not a technology question. It is a democracy question. It is a question about whether we are comfortable outsourcing the curation of reality to a small number of companies whose core interest is your continued engagement — not your epistemic autonomy, not your critical thinking, not your exposure to views that challenge and complicate your existing beliefs.

The printing press created a centuries-long crisis about who controlled the means of information — one that eventually resolved, messily and imperfectly, in favor of wider access and greater diversity of thought. We are at a similar inflection point now, moving at a dramatically faster pace, with far more concentrated ownership, and with almost no public conversation about what we are giving up.

The answer engine will tell you what you need to know. The question is whether you'll notice it's deciding — and whether you'll think to ask who told it.
