SEO in the Agentic Age: Is There Still Room for Humans?

Do we want a web that rewards individuality and creativity? Or do we want a web that expunges messy, genuine experiences while relentlessly optimizing for results?

Photo by Juan Ordonez from Unsplash

Mike King, one of the top SEO experts in the industry today, wrote a sprawling piece on the state of search engine optimization in the agentic age, an era accelerating by the second.

In the article, he says that traditional SEO strategies are being rendered obsolete as Google actively tries to replace the 10 blue links with AI Mode as the default experience for searchers. AI Mode uses generative, reasoning-driven information retrieval with fan-out queries, personalized embeddings, and passage-level analysis, none of which are directly addressed by current SEO practices.

Some SEOs disagree with that view to varying degrees. They think following traditional ranking techniques is still your best shot at winning in search engine results pages (SERPs) today. This was corroborated by Google when it updated its documentation to stress the same old, goody-two-shoes advice for site owners and publishers: the best way to rank in AI search is simply to focus on creating unique, valuable content for people, not just search engines.

Of course, most serious SEO professionals have known for a long time that's largely disingenuous (yes, buying links still works).

But while conventional techniques are still at least partially effective at influencing search results, King is absolutely right to point out that most SEO professionals are unprepared for the agentic age.

Many are probably still trying to make their clients understand why their websites' organic search traffic crashed after the infamous Helpful Content Update and never recovered.

I just don't think anybody's in the proper mindset to even think about LLM-based reasoning chains right now.

I wanted to write about King's excellent analysis, not so much to distill any practical SEO strategies from it (since he already explicitly listed them in the article), but to reflect on what it all means for content generation and the web in general.

And as I'll clarify in this piece, what worries me is that this new kind of SEO, a response to the microscopic granularity of AI agents, might take search engine optimization and content creation itself out of human hands entirely and abandon them to bots.

The Need for New SEO Tools

I won't go exhaustively through King's technical explanation of how generative information retrieval works, and how that complicates SEO by a factor of a thousand. In fact, he says that adopting new strategies to address the complexities of AI search does not even qualify as "SEO" anymore in the strictest sense, but is "relevance engineering."

The primary difficulty is that the accelerating transformation of the search experience could lead to users not visiting websites at all (note: that is happening right now, with click-through rates dropping like a rock across all industries). Instead of people interfacing with websites, the new age essentially designates bots as the primary consumers of content, since they will interpret information on people's behalf.

So all the tried-and-tested methods of traditional SEO, such as lexical matching (keywords), page-level optimization, logged-out tracking (as opposed to tracking personalized, logged-in customer data), and click-based attribution, are almost like a crate of stone tools and wooden axes in the middle of a laser blaster shootout.

But if King's revelations are hard to grapple with, the new tactics he recommends are even more difficult to imagine.

These new tactics will require SEO tools that do not yet exist.

For just one example, describing how Google's AI search works, King says:

Every query, subquery, document, and passage is converted into a vector embedding. Google... calculates similarity between these vectors to determine what gets selected for synthesis. What matters is no longer just “ranking for the query,” but how well your document, or even an individual passage within it, aligns semantically with the hidden constellation of queries.

Due to this passage-level relevance calculation, your content might be cited not because your page ranks high, but because a single sentence or paragraph outperforms a competing one in LLM comparisons.

So in a near-futuristic SEO environment, how does he propose to address such a challenge?

Well, he advocates for an extremely granular scoring system wherein search engine optimizers would have to break content into individual passages, then use an embedding model to convert each passage into a vector embedding (a numerical representation of a passage's meaning that can be compared against query embeddings).

For each query, a cosine similarity score between the query vector and the passage vector must then be computed. Once you've got the scores, you may then optimize low-scoring passages to increase their semantic clarity and specificity, improve their citation-worthiness, and match reasoning formats.
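To make that scoring step concrete, here is a minimal sketch of the cosine-similarity calculation, assuming an off-the-shelf model from the sentence-transformers library; the model name, queries, and passages are my own illustrative choices, not anything King or Google prescribes.

```python
# A minimal sketch of passage-vs-query scoring, assuming the
# sentence-transformers library; the model and example text are
# illustrative stand-ins, not a prescribed toolchain.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

queries = [
    "best lightweight tents for backpacking",
    "how much does an ultralight tent weigh",  # a hypothetical fan-out subquery
]
passages = [
    "Our two-person tent weighs 1.1 kg and packs down to the size of a water bottle.",
    "We have been making outdoor gear since 1987.",
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query_vecs = model.encode(queries)
passage_vecs = model.encode(passages)

for query, qv in zip(queries, query_vecs):
    for passage, pv in zip(passages, passage_vecs):
        print(f"{cosine(qv, pv):.3f}  {query!r} <-> {passage!r}")
```

In this toy example, the spec-heavy first passage should score visibly higher against both queries than the generic brand blurb, and the low scorer is the one you would rewrite for semantic clarity and specificity. That is only the measurement half of the workflow, of course; doing it across thousands of passages and re-scoring after every rewrite is the part that demands tooling we don't have yet.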

If all that sounds like a convoluted maze of complexity, it's because it is. And it will only be achievable through a tool that does not yet exist and will have to be built by somebody.

Otherwise, as I said, we're essentially walking onto a battlefield of criss-crossing laser beams armed with caveman weapons.

Content as a Massive Spreadsheet of Experiences

But think about what King is proposing here for a second. Optimizing scores on the passage level seems so granular, so microscopic, I wonder if it can even conceivably be done by human search engine optimizers, site owners, and content creators.

Sure, this near-futuristic SEO/relevance-engineering tool may help professionals optimize passages the way, well, Yoast SEO and SurferSEO handle semantic scoring today (titles, descriptions, and whatnot). But in practice, the whole plan seems far more elaborate than that.

To illustrate, he says:

Search is no longer a one-shot decision. It’s a session-driven sequence of related questions, many of which are generated by the system itself. Query fan-out, DeepSearch, and reasoning chains all reflect this evolution. - Mike King

So not only will you have to optimize content at the atomized passage level, you will also have to account for the highly specific, personalized logged-in session experience of each user, and the query fan-out (the "constellation of related subqueries" that the AI agent prepares or predicts for the user), among other things.

Look: I think it's brilliant that King encountered a problem and proposed a concrete solution to it. But going back to my main thesis: this level of granularity and complexity will transform the entire practice of SEO and content creation to such an extent that only AI itself will be able to satisfy these innumerable optimization parameters.

For sure, in this futuristic environment, it does not sound like we're talking about traditional web pages in the sense that a page has content readable by people. Like a blog article. Or a customer review. Or a personal travel vlog.

To account for as many micro optimizations as possible, we are possibly describing more of a database of content. A database of experiences.

I'm imagining something more like a massive spreadsheet of experiences that delineates and optimizes as many semantically relevant content passages as possible to conform to every subquery, user journey, and profile. If you could build such a database and develop a way to score and optimize all passages automatically (setting aside for now whether that can be done "authentically," without being tagged by Google and other search engines as "spammy"), and then feed that dynamic, ever-evolving data to AI platforms, wouldn't you have built the perfect SEO machine, one with a high probability of winning every head-to-head comparison game?
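To be clear, no such machine exists, but the shape of it is easy enough to picture: a score matrix of passages against fan-out subqueries, recomputed as the content changes. Here is a rough sketch under the same assumptions as before (the embedding model, the subqueries, and the passages are all invented for illustration):

```python
# A rough sketch of the "spreadsheet of experiences": a passages-by-subqueries
# similarity matrix. All data and the model choice are illustrative assumptions.
import pandas as pd
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

subqueries = ["ultralight tent weight", "tent setup time", "tent price comparison"]
passages = {
    "specs": "The tent weighs 1.1 kg including stakes and guylines.",
    "setup": "Pitching takes about four minutes, even solo.",
    "brand": "We have been making outdoor gear since 1987.",
}

# Normalized embeddings let cosine similarity reduce to a dot product.
q = model.encode(subqueries, normalize_embeddings=True)
p = model.encode(list(passages.values()), normalize_embeddings=True)

scores = pd.DataFrame(p @ q.T, index=list(passages.keys()), columns=subqueries)
print(scores.round(3))

# For each subquery, the weakest passage is the rewrite (or creation) candidate.
print(scores.idxmin())
```

Now multiply those three subqueries and three passages by every user profile, session history, and reasoning chain the system might generate, and keep the whole thing fresh as content and queries evolve. That is the scale of the machine I'm describing.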

But if you do that, you're essentially shutting the door to individual content creators and small businesses who do not have the technical expertise and budget to access and use such machinery.

Sure, they can still publish their content. But if they are almost guaranteed to lose every LLM passage-level comparison because their content is not optimized, what's there to incentivize them to build their content library on the web at all?

The only ones who could potentially develop such enormous content processing systems are huge businesses with plenty of capital for hiring agencies or developing in-house marketing capabilities powered by such tools.

If incentivizing organic content creation among individual creators is already a problem for Google (as evidenced by it losing users to social media platforms like TikTok, where content creation is so much easier), just imagine how these "small players" will look at this new barrier to entry for publishing on the web, one that is a hundred times higher than today's requirements.

Ad revenue is dwindling to begin with. Setting up Google Ads is still a pain for most entry-level users. And while, as I mentioned, it's easier to create content on social media, for example, short-form videos, not everyone prefers that format for content consumption. And, of course, that format is not ideal for all business cases.

Not to mention, AI-generated content is swiftly proliferating across all forms of media: text, videos, images.

It seems that everywhere you look, content creation and consumption are running into daunting technical barriers because of AI. Far from making the web more accessible, it's making it more restrictive.

Deserts of Human Experience Roamed by Colossal Content Machines

It's true: SEO professionals need much better tools to understand how AI platforms are processing content, and how they can succeed at delivering the best services to their clients in this new setting.

It's also true that we need to take a critical stance toward our current strategies because, admittedly, they're losing their relevance by the second.

But there's another thing we need to admit: for a long time, SEO has made publishing a lopsided game in favor of players with the capital for technical expertise and high-volume content production. Through their industry tools and tactics (whether "white-hat" or "black-hat"), they have, for the most part inadvertently, limited the visibility of small players on the web. Google, obviously, has everything to do with that, too.

And I'm afraid the agentic age is not going to correct this flaw in the system. By layering yet another complex, requirement-heavy, technical stage of optimization onto the publishing process, it can only make things worse.

What we need are more individual creators creating content about their lived experiences, optimized or not. Not fewer.

There should be a rich diversity of human content to draw from. You may remember that was the original vision for the web. Think as far back as when we were publishing pages about ourselves on GeoCities. We wanted other people to get to know us better, to discover our presence, our unique individuality online. Gaudy fonts and all.

That drive to compose our presence online through our own content, however clunky and inefficient when measured by modern engagement metrics, has been the web's fuel for growth all this time.

That is how the web grew: we kept finding ways to publish about ourselves, and other people kept discovering what we had to offer.

But again, once upon a time, it was a fairer and more inclusive web.

I realize that trying to turn back after all this time will be extremely hard, but as we enter the agentic age and interrogate the practices that have sustained websites for the past decade, it is more crucial than ever to revisit our original goal as connected people. Do we want a web that rewards individuality and creativity? Or do we want a web that expunges messy, genuine experiences while relentlessly optimizing for results?

On the horizon are barren deserts of human experiences roamed by colossal content machines, never dreaming, never allowing for faults, but forever optimizing. Let's not go there.