Use a test and learn approach to define your business objectives
Si says: “My number one tip is to use a test and learn approach to define business objectives. This will help you gauge the true value of SEO activities against business outcomes.
The reason you use a test and learn approach is that you can forecast into the future and understand what works and what doesn’t. Forecasting is something many SEOs detest, so there is a love-hate relationship between SEOs and forecasting.
Nowadays, SEO is part of a marketing remit within large organisations, and you have to treat it professionally. Conducting test and learn, and increasing conversion rates or customer satisfaction as a result of your tests, can have a massive impact on the bottom line.”
Where do you recommend that SEOs get started with test and learn?
“Before you even get to the test and learn approach, the main thing that you need to do is control your data. This applies to in-house SEOs and agencies alike. You need to get as much information as possible and warehouse it. As SEOs, we love to use Google Search Console, but you’ve only got 16 months’ worth of data there unless you pull it out via the API and warehouse it in Snowflake or BigQuery - which I highly recommend that you do.
If your agency is not doing it already, please get them to do it. If you’re not doing it in-house it would also be worthwhile, although I do understand that it is expensive to use large warehousing platforms.
What that will mean is that you’ve got a lot more control over your data, including your keyword rankings. Some large platforms only keep your rank position data for up to about three years, at most. Warehouse as much of that as possible because, when you run your models and you test and learn, you want to look at historic data on any changes that you made on the website, and how that impacted performance.
There are obviously a lot of factors at play within SEO, and algorithm updates that happen. However, if you’ve got historic data, you can layer that in. Then, when you use these machine learning algorithms that I will recommend in a minute, you can normalize against either the changes or the algorithm updates. You can see whether a change that you made on a particular section of your website actually had an impact - negative or positive.
Control your information and control your data first. That will allow you to set the scene for any sort of testing and learning approaches that you put in play.”
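As a rough illustration of the warehousing step Si describes, the sketch below builds the date windows and request bodies you would send to the Search Console Search Analytics API when backfilling data into a warehouse like BigQuery. The function names are illustrative, and the actual API call and load step are omitted; this only shows the chunking and paging shape of such an export.

```python
from datetime import date, timedelta

def backfill_windows(start: date, end: date, days: int = 30):
    """Yield (start, end) date pairs covering the full range in chunks,
    so each Search Analytics API call stays small and repeatable."""
    cursor = start
    while cursor <= end:
        chunk_end = min(cursor + timedelta(days=days - 1), end)
        yield cursor, chunk_end
        cursor = chunk_end + timedelta(days=1)

def query_body(start: date, end: date, start_row: int = 0) -> dict:
    """Request body for the Search Analytics 'query' endpoint.
    The API caps each response at 25,000 rows, so page with startRow."""
    return {
        "startDate": start.isoformat(),
        "endDate": end.isoformat(),
        "dimensions": ["date", "page", "query"],
        "rowLimit": 25000,
        "startRow": start_row,
    }
```

Running this daily (or as a scheduled backfill) and appending each response to a warehouse table is what lets you keep history beyond Search Console’s 16-month window.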
What’s your go-to piece of software that you use to number crunch that data once you have stored it?
“Forecast Forge. I’m not affiliated with them in any way, shape, or form - and they don’t pay me to say this - but Forecast Forge is really good in terms of actually helping you crunch all the data and information. You can input your variables, in terms of what you want to forecast or have a look at, and analyse it using their machine learning tools or platforms. It’s also relatively cheap to purchase, so it’s accessible across SEO team sizes. They also support Google Sheets, which makes it easy to share things online.”
What is it about machine learning that makes it so useful for a test and learn approach?
“People make machine learning sound sexier than it actually is. Basically, it’s an algorithm that can predict certain patterns, and that’s all it is. When you’re plugging in 2, 3, 4, or even 5 years’ worth of GSC data, the model can take that span of information and it can look at seasonality, trends, monthly data, etc., and it can create something that’s a lot more robust.
You can look at the things that will help you create a better forecast, for instance. If you create summer campaigns, you can put them into the model, and it can show you what that would look like if you ran the same kind of campaign next year. You can also get different bounds - so you can get upper, median, or lower bounds. This is very surface level, in terms of the capabilities of what you can do with the information.
If you are a brand, it can give you product launch outcomes. If you’ve launched a product in the past and you want to launch a new one, you can use its modelling information. Machine learning is just learning what happened in the past and using that to predict what might happen in the future. It can give you modelling information about what a product launch might look like, for instance.
This can help you align your SEO activities and give you bigger picture thinking across your marketing planning, and with other activities that you might be running. You could be running a social campaign or an out-of-home campaign, and you want to know how SEO will pick that up. You can layer all the information from the past into a model, and you can start to see the outcome.
Also, when you’re creating new pages, you can see how different sections of the website have impacted overall traffic. This is really great if you are a product manager or somebody that’s working in-house that wants to raise a business case.
With the models that I have worked with, we normally get about an 87% confidence ratio, in terms of how accurate a model is compared to what it is trying to predict. The more information you have, and the more you teach the model or normalize anomalies for certain information or data, the better you can understand how the business might perform this year and the year after that.”
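To make the forecasting idea concrete, here is a minimal sketch of a seasonal forecast with upper and lower bounds, written in plain Python. This is not Forecast Forge’s algorithm - just a simple seasonal-naive model with drift, which assumes a monthly series (such as GSC clicks) covering at least two full seasons.

```python
from statistics import mean, stdev

def seasonal_forecast(history, season=12, horizon=12, z=1.28):
    """Seasonal-naive forecast with rough bounds.

    Projects each future month from the same month one season earlier
    plus the average year-over-year change, then widens that into
    upper/lower bounds using the spread of past year-over-year
    differences (z=1.28 is roughly an 80% band).
    """
    assert len(history) >= 2 * season and horizon <= season
    yoy = [history[i] - history[i - season] for i in range(season, len(history))]
    drift = mean(yoy)
    spread = stdev(yoy) if len(yoy) > 1 else 0.0
    forecast = []
    for h in range(horizon):
        mid = history[-season + h] + drift
        forecast.append({"low": mid - z * spread,
                         "mid": mid,
                         "high": mid + z * spread})
    return forecast
```

The upper, median, and lower bounds Si mentions fall out naturally here: the more volatile the historic year-over-year changes, the wider the band.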
Should you start by taking historical events that have happened in your business, seeing what people do with different areas of your website, and trying to improve for when that happens again in the future?
“Yes, that is a good place to start. With test and learn, you collect all the information that you have from a previous activity or anything that’s happened in the past, and you’re layering that into any campaigns that were running during that period. That could be Black Friday, Valentine’s Day, or Halloween, for instance. You’re then able to identify how that event impacted your business.
When you run your data through something like Forecast Forge, you can use the models to essentially see the outcome for different scenarios. What would have happened if we didn’t run any campaigns? What did happen when we did run the campaigns? That gives you benchmarks for the future - or starting points. For example, if there was lower search volume for a particular term in 2020, and there is a 20% increase in demand by 2023, then you can play that into your model as well. You can say that you will still see an increase of X%.
If you also gather a lot of information about your clients or competitors, you can use other third-party tools, such as Similarweb. You can plug that data in and have a look at what their traffic results have been historically - and they have an API. It is more of an enterprise tool, for larger brands, organisations, and agencies. You can also have a look at how they would have performed if they didn’t run any campaigns during important events such as Black Friday, for instance.
When you are comparing yourself against your competitors, or against yourself and what you did or didn’t do in the past, you can test out different scenarios for how things played out. Then, you can essentially see what would happen in 2023 for any campaigns that you want to run from an SEO point of view, that’s integrated with the rest of your marketing plans.”
How did you know that you had achieved an 87% confidence ratio with your forecasting model and what is the importance of that confidence percentage?
“That’s a great question. We ran this particular model consistently with a couple of my clients at a previous agency. We would create a forecast, and run that forecast alongside the actual results. What the data showed us was that the forecast was about 87% accurate. Essentially, there was about 10% give or take - either positive or negative. If you predicted that you would have 100 clicks per month, the actual traffic for that specific landing page or keyword might have been 110 clicks or 90 clicks, for instance. When I tested it across multiple clients, that’s how I knew that the confidence of that model was about 87%.
If you’re an agency and you are doing something similar, with Forecast Forge perhaps, this is where you actually have scale. If you run an agency, you can implement something like Forecast Forge across all your clients, and across different verticals. Using the seasonality data and the machine learning algorithm, you will know what your specific confidence levels are within a specific industry - whether that’s 70%, 80%, etc. Then, when you create those models and you pitch it to a client within that specific industry, you can say that you know (after testing the model with several different clients) that it is going to be about 70% or 80% correct. It gives you the confidence to say that you either want to take a project on, or you might give it less weight.
If you are a product manager or you work in in-house SEO, it might be a bit more costly for you. With whatever SEO tool you are using to look at your competitors’ monthly traffic results (such as Semrush, Ahrefs, or Similarweb), you can take that information, take that data forward, and look at what happened in the past. You can put that into your model, put your own brand into the model as well, and you can see what the confidence is in terms of what actually happened.
Then, when you are doing a business case, you can say that you’ve looked at the industry, you’ve analysed it through this test and learn approach, and your modelling is going to be about 70% accurate, or whatever that might be. You might have a lower confidence percentage because a particular sector or industry might not have the seasonality data, or it could be an industry that was hit by the Medic update, for instance. There are a lot of black swan events that need to be factored into the model as well.
That’s why it’s so important to test what happened in the industry, test what’s happening to your competitors, and collect all the information together. Then, when you are making those decisions on the accuracy of your model, you have that historic information.”
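One simple way to formalise the confidence ratio Si describes - comparing forecasts against actuals over time - is 100 minus the mean absolute percentage error (MAPE). That framing is my interpretation rather than a stated Forecast Forge metric, but it matches his clicks example:

```python
def forecast_confidence(predicted, actual):
    """Confidence ratio read as 100 minus the mean absolute
    percentage error (MAPE) between forecast and actuals.
    E.g. predicting 100 clicks when actuals land at 90-110
    scores roughly 90% accurate."""
    errors = [abs(p - a) / a for p, a in zip(predicted, actual)]
    return 100 * (1 - sum(errors) / len(errors))
```

Tracked per client and per vertical, a score like this is what lets an agency say a model in a given industry tends to come out around 70%, 80%, or 87% correct.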
What shouldn’t SEOs be doing in 2023? What’s seductive in terms of time, but ultimately counterproductive?
“The main thing to stop doing in 2023, from an SEO point of view, is overcomplicating things. I’ll give you an example. When we create technical audits, there are a lot of things that we want to fix. However, some things that we want to fix within a technical audit might not actually give us the right results.
The guys at SearchPilot do this quite well because they run SEO A/B tests. If you do make a change on your website, A/B testing gives you an idea of how that change actually impacted the performance of the site, whether that’s positive or negative. For certain websites, for instance, fussing about with the alt tag doesn’t make any sense because the guys at SearchPilot found that it actually has a negative impact. Therefore, optimising that for an exact match keyword could hurt the site’s performance.
Keep things simple. Focus on the most important things for the business; focus on good content that’s specific to your market or your niche. Make sure you’re using things like the Flesch-Kincaid score to understand the readability of the content for your particular topic or your niche.
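The Flesch-Kincaid grade formula mentioned above is public, and a rough version is easy to compute yourself. The syllable counter below is a crude heuristic (real tools use dictionaries and better rules), so treat the scores as approximate:

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count groups of consecutive vowels,
    # dropping a common silent trailing 'e'.
    word = word.lower()
    if word.endswith("e") and len(word) > 2:
        word = word[:-1]
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Lower grades mean easier reading; what counts as a good score depends on your market and niche, which is exactly Si’s point.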
Also, people think that links are dead, but speak to the guys at Majestic and they’ll tell you otherwise. Focusing on good content marketing is still a really good thing to do, and I see a lot of SEOs are not really focusing on this as much. They’re leaving a lot of opportunities on the table for their competitors. Your website does exist out there in the ether, but you still need to promote it, tell people about it, and create engaging content that can do that for you.”
Si Shangase is an independent Digital Consultant specialising in OneSearch, measurement frameworks, and digital transformation. You can find him over at si-shangase.com.