Dan says: “One thing I want more SEOs to focus on throughout the next year and even years to come is the older concept of Time to Result (TTR).
Google has spoken previously about using TTR as an internal metric for how well they are doing, which means how well they are potentially satisfying user queries with the results they serve, working from the assumption that the top-ranking page best matches the searcher's intent.
But it also leads into the optimization opportunities from there: how well can we improve the time to resolve that query, and how can we make it easier or faster for users to find the answer? I think Google started to focus more on this about two or three years ago, when it introduced jump links straight into content using hash anchors.
While much of the SEO industry saw that as something to optimize for, to win those jump links, headers, and so on, I saw it as Google having to build another mechanism itself, because publishers weren't getting users to those elements on their own.
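The jump links Dan mentions work by pairing a URL hash fragment with an element id on the page, often generated by slugifying the heading text. A minimal sketch of that slug pattern (the exact rules here are illustrative and vary by CMS):

```python
import re

def heading_anchor(heading: str) -> str:
    """Turn a heading into a URL-fragment-friendly id.

    A common slug pattern (hypothetical; exact rules vary by CMS):
    lowercase, strip punctuation, collapse whitespace to hyphens.
    """
    slug = heading.lower().strip()
    slug = re.sub(r"[^a-z0-9\s-]", "", slug)   # drop punctuation
    return re.sub(r"[\s-]+", "-", slug)        # runs of spaces/hyphens -> one hyphen
```

A URL such as /article#what-channel-is-arsenal-vs-molde-on then scrolls straight to the heading carrying that id, which is the kind of target a jump link can point at.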
An example I've always used internally goes back a couple of years to when Arsenal played Molde in one of the European competitions. I literally just Googled the query, 'what channel will Arsenal vs Molde be on'. My query obviously has a very short-term focus, because after the game has happened, no one is really going to be searching that query; I just want to find the channel. That is the value proposition, and when the content gets refreshed, 'query deserves freshness' also comes into play.
The top result was Radio Times in the UK, which is a relatively respected publication; most people buy the Christmas edition as a hard copy. But that first organic ranking sat behind Featured Snippets, YouTube, and so on. When I went to the page, the answer was about 180 words deep and about 16 or 17 inches of scroll behind five or six ad blocks. From a Google perspective, the answer to my query is nestled deep, and that's why Google makes those jump links. What makes it even worse is that the article's title leads with 'what channel is this on?'. Given how Google has looked at content quality and ranking volatility in order to improve user satisfaction, I think the TTR side of it will become more important.
I know some publishers and publications that do this. They do not necessarily want to bury the lede intentionally, but they want people to scroll past the ads, and they want engagement time. That may be working in some way, but it's not valuable to me as a user, because I've had to really dig into the article to find the answer; you almost have a bait and switch with the ads.”
It's probably worthwhile diving a little deeper into what TTR means, because some people may be thinking of Largest Contentful Paint as something similar, but it isn't.
“No. Time to Result refers to the result that satisfies the user's objective. That could be an equation, a diagram, or a series of bullet points, but it looks explicitly at resolving the query, regardless of query volume. So, that's what the result is.
However, 'time' is probably the wrong way of saying it, because it's really more about distance down a page. Some studies have looked at the updates at the end of last year and correlated them with some of what happened in March, even though that only officially rolled out a few days ago; it seems they were waiting until after the quarterly earnings call. The helpful content updates appear to have rewarded pages that pulled their value propositions forward, and that has correlated with slightly better performance. It's about taking the value proposition and the beneficial purpose of the content and accelerating it to the top, rather than putting it 800 words down.
I think this also potentially ties in with some of the SGE testing we've seen, which has been commented on in the SEO world. For those who haven't seen it, with SGE enabled you can open the Chrome browser's side panel and have SGE create a top-level summary of the page you're on and the other questions the content is working to answer. It is trained to take that value proposition and beneficial purpose and front-load it as much as possible for users.”
I guess you're talking about both search engines and users here: make sure that search engines don't have to hunt for content or find it difficult to locate a precise answer to a particular query. Is it primarily search engines that you're talking about here, or users, or a mixture of both?
“I think it's a mixture of both, because I believe search engines have tried to be the user in this regard for a number of years. That's also why we've had things like CLS as a Core Web Vitals metric, looking at the 'jankiness' of shifting pages.
I mean, we can probably go back to 2016 and the Quality Rater Guidelines, when they first started talking about things like main content and supporting content on a page, and having identifiable main content versus supporting content. That comes down to how we position things on the page, how we use certain headers, and how we draw attention to the main content.
It also ties in with another concept from the same Quality Rater Guidelines: beneficial purpose. About two or three months ago, John Mueller called out a really good example of something that happens in the SaaS space all the time. A company will have a CDN, and it'll write a blog post on the '10 Best CDNs of 2024', with a couple of nicely nestled CTAs and a comparison table putting its own product at the top. However, the beneficial purpose there isn't to provide a valuable answer on the best CDNs in 2024. Every piece of copy has a beneficial purpose, and here the template structure gives it away. That kind of piecemeal arbitrage has worked historically, but now Google and other search engines are looking at it from more of a user perspective: is it providing use for a user, or is it a thinly veiled advertorial?”
What does this mean for future business models and content styles? In the example you used earlier about football, the publisher's business model is to drive people past as many ads or distractions as possible before delivering the actual answer to the user's query. Then, in the example you gave more recently, it was the table, but obviously the table was skewed towards driving people to the particular business whose site the result is on. So, do both of those business models and content styles not have a place in the future?
“It depends on how quickly your competitors, and your space as a whole, shift. In the publishing space, where publishers might have built themselves around that business model, obviously for ad impressions, clicks, and whatnot, unless the entire industry starts to shift away from it, Google still has to provide some sort of results to serve, and this is just one factor among a number of factors. Once a good percentage of an industry starts to move away from that model, that's where it will become more impactful.
In a more e-commerce, SaaS-driven space where people are burying information, you've only got to look across the competitive cohort. If there are very few sites that are comparable in how they surface the information, Google can lean on the other metrics. It creates a situation where nobody else is doing this, so Google has more options in which results to provide.
In contrast, Google is not in the business of cutting off its nose to spite its face. It's not just going to say, 'I'm not going to use any of these publishers and publications that people recognize,' because the answers are barely 200 words and take a minute to reach. The moment good content comes along to change that model, that's going to drive more of a shift, in my opinion, than in other sectors where it's more open for brief, adequate content to rank.”
Essentially, it depends on which country in the world you happen to be in, how people operate within that country, what target market your business operates in, and what your competitors are doing. In the larger landscape, though, Google is moving towards SGE, and there will be many more AI-driven, singular results that websites have to optimize for. Does that mean a complete change in how we optimize in the future?
“I think one of the key things is that AI, and just the ease of content generation, is probably accelerating this. I'm hesitant to use EEAT as the example here, because I think EEAT is greatly misunderstood: 'Okay, we've got an author box,' and there's some checklist not far away, because there's always a checklist for this kind of thing. Well, great. You named who your author is, but what makes them an author, what ties them to certain entities, and what makes that carry weight?
I think having a validatable value proposition and perspective in your content has been needed for a number of years, and it's a differentiator, because AI, right now, can't create perspectives. If it appears to, you're asking it to hallucinate, and it comes up with garbage, because it's lying when you ask it to provide a perspective.
Overlaying that with expertise, and then validating the expertise, will always hold true. For years we've had well-known people and subject matter experts writing on different blogs, because we're not just leveraging their expertise; we're leveraging their audience and other elements too.
With how AI has accelerated content production, Google has had to take one step back and one step forward, but it's nothing new. If you think about Panda, when it first came out we had Unique Article Wizard, and all Google has to say now is: ‘Well, Unique Article Wizard has just been turbocharged. There's now a nicer interface for every piece of content that gets written, so let's change things up and make that more challenging.’”
Another important aspect of understanding user experience, and delivering that result as quickly as possible, is understanding the difference in UX between a desktop and a mobile device. What are your thoughts on best practices there, and do they still make that much difference nowadays in giving you a better opportunity to rank highly in mobile results? Do you do things like reducing content at the top of your page, or perhaps simplifying your menu functionality at the top?
“I think it depends on the actual website itself. I still have websites in our portfolio that are 60, 70% crawled by the desktop user agent over mobile, and given who their users are, it still makes sense that desktop is the primary crawler, because it's doctors and people like that accessing the sites from locked-down Lenovos in the medical sector. It also depends on the content, because if we go back to my example, I want to know what channel a football match is on and what time kickoff is.
Having that content at the top of the page as bullet points, making the page top-heavy, might provide more value to me, especially on mobile, than having to do 15 or 16 inches of scroll trying to find a header that's nestled between different ads. Your UX very much has to go back to who your actual audience is and what space they're in, because if I want a diagram, I'm probably expecting text explaining what the diagram is, and then the diagram itself. Menus still sit as headers on mobile; I think the web has generally plateaued in that regard.
If you load a blog post on mobile and the entire first viewport is just a header image and a title, I'm never a fan of that. But it's what people are used to at the end of the day, which means we have to look at more than just what your competitors are doing. I sometimes think we forget where people go outside of search: the people we're trying to target also go and do Netflix things, YouTube things, or research things outside of their jobs.
So the web experience they have there semi-translates into whatever they're looking at next, because it's what they're used to and they're already conditioned to it. That's why everybody likes big banners and search bars at the top of the website now, especially in e-commerce: people know they can go to Amazon, and they know what a search bar is, because it's a conditioning mechanism.”
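Dan's earlier point about sites being 60-70% crawled by the desktop user agent can be checked against your own access logs. A minimal sketch (the user-agent substrings follow Google's published crawler strings; everything else is illustrative):

```python
from collections import Counter

def googlebot_split(user_agents):
    """Count Googlebot hits by crawler type (desktop vs. smartphone).

    Heuristic based on Google's published crawler user-agent strings:
    the smartphone Googlebot UA contains "Android"; the desktop one does not.
    """
    counts = Counter()
    for ua in user_agents:
        if "Googlebot" in ua:
            counts["mobile" if "Android" in ua else "desktop"] += 1
    return counts
```

Feed it the user-agent column of your logs to see which crawler dominates, as in the desktop-heavy medical sites Dan describes.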
People are used to websites like Amazon. So, from a design perspective, I guess it's not wise to design your website in a way that's completely the opposite of how currently successful websites are designed.
“It ties back to everything in marketing, in my opinion: what people are exposed to the most, what they become used to the most, and how they go about things. I remember Ask Jeeves, and how heavily people tried to coin 'Ask Jeeves' as a verb versus 'Google it', with varying degrees of success. I can remember the old ITV adverts growing up, which were weird, with the animated butler, but that feeds into what people get conditioned to.
Everybody has Amazon as an app on their phones, most people have Facebook, and some still have X, Gmail, and so on. It makes sense to translate the UX patterns they're used to across other parts of the Internet so that it's a more seamless experience. That builds the web savviness of people and users as well, as everything is familiar, which is important because that familiarity builds an element of trust.”
You've shared what SEOs should be doing in 2024. Now, let's talk about what SEOs shouldn't be doing. What's something that's seductive in terms of saving time but ultimately counterproductive? What shouldn't SEOs be doing in 2024?
“One thing I hate at the moment is the immediate reliance on AI for something, or finding an AI workaround for something. My main bugbear is this: let's say you're working on log file analysis. If you've never done log file analysis before, you open up ChatGPT and ask, 'How do I do this? What's the script? What are the steps?' Taking those shortcuts without actually learning what you're doing has two effects.
You might deliver the desired output and save time, and that's great. But what if there's a bug in the output? What if it doesn't work? You don't really know what accurate looks like. You might not fully understand what you're doing it for, and you might not be able to extrapolate that additional value.
I love AI and I use it for a lot of stuff, but it's that old argument of learning how to do something first. Before AI, it was going on the Internet and immediately trying to find a Google Sheets checklist for something versus actually learning how to do it. Without that background knowledge, you're not actually bettering yourself, and you can't debug or really understand when something goes wrong, and that creates a cascade effect.”
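To make the log file example concrete, here is the kind of from-scratch parse Dan is describing: a minimal sketch assuming the common 'combined' access-log format (the field layout is an assumption; real server configs vary, which is exactly the sort of thing you only catch if you understand the format rather than pasting a script you can't debug):

```python
import re
from collections import Counter

# Minimal matcher for the common "combined" access-log format.
# Named groups capture the fields we care about; the rest are skipped.
LOG_LINE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

def crawl_counts(lines):
    """Count requests per (path, status) for lines that match the format."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            counts[(m.group("path"), m.group("status"))] += 1
    return counts
```

Knowing what each field means is what lets you spot when the output is wrong, for instance when a custom log format silently fails to match and everything quietly counts as zero.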
Dan Taylor is the partner and Head of Technical SEO at SALT.agency, and you can find him over at SALT.agency.