The truth about the future of SEO and why SEO is more alive than ever (and it is ChatGPT’s fault)

This article is a bit different: in it, I analyze the future of SEO and the impact that AI is having, and will have, on it.

One of the big unknowns in the world of SEO today is what happens as AI gets adopted by the masses. There’s plenty of data, but no one has a fully formed or clear opinion. Neither do I.

That’s why this article serves as an exercise for me to bring data and thoughts together in one place. We’ll do it together, so here we go! 💪

Introduction

Even though ChatGPT already has 200 million weekly users, it’s still a far cry from the more than 3.2 billion people who use Google every month.

And tools like Claude (70 million) or Perplexity (72 million) accumulate a comparatively small number of users. For Google’s Gemini, SimilarWeb reports fewer than 300 million visits in September.

As of today, it’s no longer possible to get an accurate picture of traffic on these platforms just by looking at SimilarWeb. It was possible early on, but they all now know they need to be multi-device and as close to the user as possible, which is why they all offer apps for iOS, Android, Mac, and Windows.

One of the first conclusions that data supports 100% is that AI, for now, is not a widespread phenomenon. Obviously, 200 million weekly users is a lot, but we’re talking about less than 10% of Google’s user base, and even less if we consider the 5.5 billion internet users worldwide.

As a good investor, I know that phenomena aren’t judged by their current numbers, but by their expected future growth. And AI-powered products have clearly shown enough validation to expect significant growth over the next 5-10 years.

Just consider that ChatGPT grew from 0 to 200 million weekly users in less than 24 months; the growth rate is absurd.

What interests us the most as SEOs is whether we’ll still have jobs in a few years, which I think is the big question raised by the current situation.

SEO has “died” many times, and this time is no different. Although some elements are clearly still to be defined and others related to AI are potentially negative, I personally remain quite optimistic and believe that SEO is, today, more important than it was two years ago.

I’m going to divide my thoughts into two parts: the ‘reasons to believe’ that support this idea, and the ones that go against it. Signals on one side and the other. Then you, as a reader, can weigh both sides and form your own opinion.

But before that, I want to lay down some basics about the types of solutions in the AI world, especially those that we’ve been able to categorize in the last few weeks:

  1. Conversational chats with an LLM (foundational models): This includes models used in products like ChatGPT, Meta AI, Gemini, Claude, and many others. They are very versatile tools that can be used for a multitude of use cases, some of which involve generating information based on a question. Although I would say that generating information is not their strongest suit—due to reasons such as not being updated in real-time and being prone to hallucinate—they are still used for it quite a lot. But they are used even more for all kinds of tasks like code generation, content creation, tone checking for an email, and more.
  2. Search engines enhanced with AI: This includes products like Perplexity, which could be considered the main AI-based search engine, but also the new ‘Search’ functionality that ChatGPT introduced last week. Bing has also been doing this quite a bit since last year, and so has Google itself with its AI Overviews. In this case, RAG (retrieval-augmented generation) systems use an LLM to write the response, but the data comes from websites that rank for the keywords needed to answer the user’s question; those pages provide the unstructured content from which the summary/answer is formulated (see the short sketch after this list).
  3. Others: There could be an endless number of categorizations, tools, platforms, etc. I won’t focus on these since the first two are already large enough to cover 10 editions of Seopatía.
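
To make the second category more concrete, here is a minimal, hypothetical sketch of that retrieval-plus-generation flow in Python. The helpers search_index and llm_complete are placeholders for whatever index and model a product actually uses, not any vendor’s real API:

    # Minimal, hypothetical sketch of an AI-augmented search engine (RAG).
    # search_index and llm_complete are placeholder callables, not a real API.
    from typing import Callable, List, Tuple

    def answer_with_rag(query: str,
                        search_index: Callable[[str], List[Tuple[str, str]]],  # query -> [(url, page_text), ...]
                        llm_complete: Callable[[str], str],                    # prompt -> generated text
                        top_k: int = 5) -> str:
        # 1. Retrieve: a classic search index returns the pages that rank for the query.
        ranked_pages = search_index(query)[:top_k]

        # 2. Ground: the unstructured content of those pages becomes the context.
        context = "\n\n".join(f"[{url}]\n{text[:2000]}" for url, text in ranked_pages)

        # 3. Generate: the LLM writes the summary/answer from that context, ideally
        #    citing the source URLs so clicks can still flow back to the websites.
        prompt = ("Answer the question using only the sources below and cite their URLs.\n\n"
                  f"Question: {query}\n\nSources:\n{context}")
        return llm_complete(prompt)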

The first two are the ones I’ll refer to, and to simplify them, I’ll call them ‘AI-augmented search engines‘ and ‘conversational LLMs‘. It’s worth noting that I believe in the future, the distinction between these two products will become increasingly blurred (as we see with ChatGPT’s Search functionality).

Reasons to believe

Reason 1: The Internet isn’t usually a ‘zero-sum game’

Game theory studies how different actors make decisions that affect each other. In economics, it helps us understand whether the arrival of a new technology or service creates a “zero-sum game” (where one wins what another loses), a “positive-sum game” (where the market grows and everyone can benefit), or a “negative-sum game” (where some lose more than others gain).

For example, when Uber entered the transport market, it didn’t take users from traditional taxis but expanded the total market—a clear example of a positive-sum game. In contrast, the arrival of the automobile was a negative-sum game for the horse transport industry, which virtually disappeared despite the exponential growth of the transport market.

This perspective is crucial for understanding how AI might impact SEO.

In this case, there are reasons to believe that the arrival of LLMs is a positive-sum game for SEO. To see why, we need to break it down into two parts:

  1. More ‘searches’: For me, this part is 100% clear. The arrival of AI means more searches are being made. Because it improves usability and usefulness, people search more times a day; I find myself consuming much more information because of it. AI is also dramatically expanding the space of voice conversations, so not all searches will be typed.
  2. More ‘visits’: This part is less clear, but if we take the data published by SparkToro, for every 100 searches on Google, about 40 generate a click, and of those 40, approximately 75% go to organic results (40 × 0.75 = 30), so about 30% of Google searches result in an organic click.

Understanding what will happen with SEO and AI essentially comes down to seeing how this formula evolves:

  • What percentage increase in ‘unique queries’ will there be in the long term due to AI?
  • What percentage of these ‘unique queries’ will result in a click to a website in the context of AI?
  • What percentage of these ‘unique queries’ will result in an organic click vs. a paid one? (for now, all of them, as no AI-based search engine has ads).

If we wanted to represent it mathematically, the formula would look like this:

C_{m} = B_{d} \times 30 \times (1 + \alpha_{AI}) \times (CTR_{base} \times (1 - \beta_{decay}))

Where:

  • C_{m} = monthly organic clicks
  • B_{d} = base daily searches (8.5 billion)
  • \alpha_{AI} = growth factor in searches due to AI
  • CTR_{base} = current organic CTR (30%)
  • \beta_{decay} = CTR degradation factor due to AI

My intuition tells me that there will be more ‘unique queries’ and for now, all of them lead to organic results, so it’s very likely that SEO today is more relevant than it was two years ago (before the release of ChatGPT). Many of the current queries are incremental, meaning they wouldn’t have existed if this technological revolution hadn’t happened.

In the long term, it remains to be seen, but there will surely be more queries, and the average CTR of the new AI-mediated queries is likely to be lower. Google claims that CTR stays the same or increases. So far, it’s unclear: some independent studies say CTR increases by 4%, others that it decreases by 9%. Personally, I believe it decreases, based on my own experience, but I also know I’m not the average user.

To understand this model, it’s best to walk through a small example.

From there, it would be very easy to model the future if we had real data to play with:

  • If in 5 years Google sees a 35% increase in total searches due to the AI phenomenon (reaching 344.25 billion per month), and CTR decreases by 15% (down to 25.5%), we’d have about 87.78 billion organic clicks, significantly more than today.

For example, applying the formula and using the numbers mentioned:

  • B_{d} = 8.5 billion
  • \alpha_{AI} = 35% = 0.35
  • CTR_{base} = 30% = 0.30
  • \beta_{decay} = 15% = 0.15

Which would result in:

C_{m} = 8.5B \times 30 \times (1 + 0.35) \times (0.30 \times (1 - 0.15)) \approx 87.78B
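
As a quick sanity check, here is the same toy formula in a few lines of Python. The scenario numbers are the illustrative assumptions above (8.5 billion daily searches, +35% queries, 30% CTR, 15% CTR decay), not measured data:

    # Toy model of monthly organic clicks (C_m), expressed in billions.
    # All default inputs are illustrative assumptions, not measured data.
    def monthly_organic_clicks(base_daily_searches_b: float = 8.5,  # B_d, in billions
                               ai_growth: float = 0.35,             # alpha_AI
                               base_ctr: float = 0.30,              # CTR_base
                               ctr_decay: float = 0.15) -> float:   # beta_decay
        # C_m = B_d * 30 * (1 + alpha_AI) * CTR_base * (1 - beta_decay)
        return base_daily_searches_b * 30 * (1 + ai_growth) * base_ctr * (1 - ctr_decay)

    print(monthly_organic_clicks())                              # ~87.78, the example above
    print(monthly_organic_clicks(ai_growth=0.0, ctr_decay=0.0))  # 76.5, roughly today's baseline
    print(monthly_organic_clicks(ctr_decay=0.50))                # ~51.6, a much more pessimistic CTR drop

Playing with the inputs also shows where the break-even sits: with a 35% increase in queries, organic clicks stay above today’s roughly 76.5 billion as long as CTR drops by less than about 26%.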

Obviously, these numbers might be wrong. Perhaps CTR decreases by 50%, and queries don’t increase as much, but as I said, I believe there are reasons to think that in the short and medium term, SEO is already more relevant than before, and in the future, it’s likely to remain so.

Reason 2: Without a clear monetization via ads, SEO is more relevant than ever

The initial data shows that AI is already driving significant traffic volumes, though many of these platforms don’t send a referrer, which makes those visits unidentifiable for now (probably to avoid being compared and scrutinized for their impact).

And all those clicks are organic. There isn’t a single paid click. It’s clear that sooner or later, search results will be monetized in both AI-augmented search engines and conversational LLMs, but for now, everything is “organic”. And moreover, it’s largely based on search results, meaning it’s very aligned with the work of an SEO.

Although official data doesn’t exist yet, according to Semrush’s estimates, Google still leads with more than 23 times the volume of ChatGPT.

Semrush Data – Source

Note: These numbers only cover Google.com, but Google operates separate domains for every country/language, so they are very incomplete; the real figure is likely two or three times higher.

Reasons not to believe

Reason 3: There’s no revenue share system that benefits informational content creation

All the data mentioned earlier in reasons 1 and 2 encompass all types of searches: informational, navigational, and transactional. The ones most affected by AI, by far, are informational. And there, the data could potentially be much harder to digest.

For example, Stack Overflow, once an internet giant, has seen its traffic drop dramatically since ChatGPT’s release, though that case is more about the reinvention of the concept itself thanks to AI: the cannibalization wasn’t so much of the traffic it received from Google, which was collateral, but of interest in the product itself.

But again, this is clearly a positive-sum case. Its traffic has dropped by more than 40%, but the number of people programming with ChatGPT or similar tools is probably more than 10 times the number who used to visit Stack Overflow.

It’s a similar case to encyclopedias and the creation of Wikipedia. Even though the encyclopedia companies went under (I imagine), the benefits to humanity from having Wikipedia are exponentially greater.

Without going too far off track, I think one of the big unknowns in the AI world is what revenue-share system, if any, the companies building foundational models will offer, especially for conversational LLMs, whose training relies primarily on public data while the websites behind that data may receive nothing in return (at least not directly or obviously). Paying a handful of publishers, as is being done now, doesn’t seem scalable.

To opt out of training, there’s robots.txt, though obviously not everyone uses it. Cloudflare has even launched a solution to block AI training crawlers with one click.
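
For illustration, a minimal robots.txt opt-out could look like the snippet below. The user-agent tokens are the ones the vendors document for their crawlers (GPTBot for OpenAI, Google-Extended for Gemini training, CCBot for Common Crawl); which bots you block is up to you, and these tokens can change over time:

    User-agent: GPTBot
    Disallow: /

    User-agent: Google-Extended
    Disallow: /

    User-agent: CCBot
    Disallow: /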

For the model of AI-augmented search engines, I think it’s very similar to what we have now, where the visit is the reward, although that reward may be a smaller percentage in the long term. I don’t see any changes needed there.

I understand that in the future, some kind of reward for using data to train a model will be created. But on the other hand, I think AI models themselves will tend to become commodities, and the value will lie more in the application and use cases.

That’s why, for foundational models, I believe open-source models like Meta’s Llama are the best approach: even if public data is used, the benefit is common and public.

That’s why I find closed models like those from Google, Anthropic, or OpenAI far less commendable, and although they will continue to exist, I think it’s quite likely that in the long term there will be significant homogenization toward open source, even if the best models remain private.

It would be something similar to Android vs. iOS, where Android has far more usage thanks to its open model, but there’s still an audience interested in closed alternatives.

Reason 4: AI tends toward omnipotence, and with it, the homogenization of content

This topic isn’t as related to SEO itself, but I find it relevant to mention. If you open five AI models and ask them the same question, they will give you very similar answers.

One of the greatest virtues of the internet is that there are a multitude of opinions, and I think AI tends to homogenize answers, all arriving at the same “unique” conclusion. In part also because you only get one answer, not 10 results from 10 different websites.

I find it hard to understand what implications this has for the future, but if AIs are increasingly trained on content created by AIs, which are themselves very similar, it’s likely that at some point viewpoints will diminish and everything will tend to become very homogeneous.

Just as index funds and ETFs now represent a large part of the stock market while the shrinking but still significant share of actively picked stocks is what moves prices day to day, I can see something similar happening here.

The most differentiated content will be the most valued. This is already evident in the rise of forums like Reddit or Quora in English-language results: sources that almost certainly have humans behind them, with real opinions, are highly valued.

As SEOs, we are responsible for creating content for many websites, and I think continuing to advocate for differentiated content that adds value and is better than the competition will be key, whether it’s written by a human or by AI with human supervision.

Final conclusions

This has perhaps been one of the most interesting articles I’ve ever written, and I’m sure I’ve missed a thousand angles, details, and nuances. If something is wrong, I apologize in advance. I’m sure I’ll revisit this article in the future and see that things turned out very differently.

But, after all this analysis, I think there are some pretty clear conclusions for me:

  1. SEO isn’t dying; it’s evolving. As it always has. Although there are significant challenges ahead, especially in informational content, current numbers suggest there are more opportunities than threats in the long term.
  2. It’s probably a positive-sum game: more total searches = more opportunities. Even if CTR may decrease, the total volume of interactions (including search engines and chatbots) seems to be rising. And for now, all that traffic is organic.
  3. We need to be attentive to the evolution of two parallel worlds. Each will have its own rules and its own opportunities:
    • AI-augmented search engines (like Perplexity)
    • Conversational LLMs (like Claude)
  4. The big question remains monetization and revenue-sharing. Especially for informational content that feeds these systems. We’ll likely see hybrid models evolve, similar to what happened with social media content.
  5. The homogenization of content is a real risk, but also an opportunity. Just as in the stock market, there will always be room (and need) for different voices and specialized (and private) content.

What will happen in the long run? No one knows for sure.

For now, the only certainty is that SEO remains relevant, perhaps more than ever, as it’s the second key piece in the AI revolution (after the foundational models themselves). And as always, those who adapt best to change will be the ones who benefit the most from this new technological revolution.

And I want to close by leaving you with something to think about…

What do you think? 😀

How do you see the future of SEO: are you positive, or more of a skeptic?