For a long time, keyword rankings were a staple part of any SEO campaign. In a lot of cases they were a primary metric used to judge performance.
Go back five or six years and we had so much more information on the keywords that users were searching for to reach our web content. All of this information was available transparently within Google Analytics, and you could get relatively accurate search volume estimates from within Google’s Keyword Tool.
The first major update that changed this was Google’s move to encrypted search and the dreaded appearance of “not provided” within Google Analytics.
This created a ripple effect across many SEO software providers, making a lot of their tools less effective and making it tougher to measure the impact of organic search on a granular level.
Next up, and more recently, was Google’s decision to move search volume estimates within their Keyword Planner tool to broad ranges. Instead of learning that a keyword was being searched for 1,400 times each month, we’re told that it’s searched between 1K and 10K times per month. This isn’t overly helpful.
These changes have forced marketers to adapt their search strategy to focus less on individual keywords and shift to a topic-centric content strategy, especially for content sitting at the top of the funnel.
One of the major criticisms of keyword ranking data is the fact that it is largely inaccurate. Many industry leaders and even software providers of rank tracking data have admitted that this is the case.
The reasons behind this can be broken down into three broad buckets:
Around the time of the launch of Google+, the SEO industry was talking a lot about personalization within search. Even after the death of Google+, personalization has remained a big consideration.
Bonus points if you remember Authorship snippets (circa 2012).
Ultimately, Google will deliver results that are personalized to a user based on their search history. This means that if I were to search for a query like “electric cars” and I’d previously been browsing the Tesla website, it’s a possibility that Google would tailor the rankings of the search results to show Tesla near the top.
This wouldn’t be the case for someone that hasn’t previously visited Tesla’s website, which makes it very tough to determine which website actually ranks #1 (because it can be different from one person to the next).
Whilst personalization plays a part in the ambiguity of keyword rankings, it’s nothing compared to the role of implicit query factors like device and location.
One of Google’s major advancements in search over the past five years has been its ability to take into account aspects of a search query that aren’t explicitly stated. To make sense of what I’ve just said, let’s take a query like “Boston restaurants”.
Go back to 2010 and a search for “Boston restaurants” would yield a list of relatively generic websites that either talked about Boston restaurants or were themselves restaurants.
Fast-forward to 2018 and a simple search for “Boston restaurants” will arm Google with a whole lot more information than before. They’re able to see which device you’ve searched from, where you’re located whilst you’re searching, and even whether you’re currently on the move.
Let’s say that you searched on an iPhone and you’re walking around in the center of Boston at 11:30 am. Here’s what this query would actually look like to Google:
“Which restaurants are currently open for lunch within walking distance of my current location in the center of Boston, MA?”
They’ve gathered all of this information without the individual even having to type it. As a result, they’re able to completely tailor the search results to this individual searcher’s current situation.
So … answering the question of who ranks #1 for “Boston restaurants” becomes an even more challenging task.
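To make the idea of implicit query factors concrete, here’s a minimal, purely illustrative sketch (this is in no way Google’s actual logic; the function, its rules, and its thresholds are all hypothetical) of how explicit query text can be combined with context signals like device, location, time of day, and movement:

```python
from datetime import time

def enrich_query(query, device, location, local_time, moving):
    """Hypothetical sketch: approximate the 'real' question behind
    a search by folding in implicit context signals."""
    parts = [query]
    # Around midday, assume the searcher is looking for lunch options.
    if time(11, 0) <= local_time <= time(14, 0):
        parts.insert(0, "open for lunch")
    # A mobile searcher on the move likely wants somewhere nearby.
    if moving and device == "mobile":
        parts.append(f"within walking distance of {location}")
    return " ".join(parts)

# The walking-around-Boston-at-11:30am scenario from above:
print(enrich_query("Boston restaurants", "mobile",
                   "the center of Boston, MA", time(11, 30), True))
```

The point of the sketch is simply that two people typing the identical string can effectively be asking very different questions, which is why a single “#1 ranking” is so hard to pin down.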
Strong keyword rankings don’t always equate to high volumes of organic traffic, let alone improvements in revenue. As I mentioned at the beginning, we’ve lost a lot of visibility on search volume metrics, which makes it very difficult to accurately estimate the amount of traffic you can gain from an individual keyword. Factor in the changing appearance of the search engine results page (e.g. the widespread increase in featured snippets) and it becomes an even more daunting task.
If keyword rankings are your North Star, you may be traveling in the completely wrong direction.
When all you’re obsessing over is where each page is tracking against a ranking goal, you’ll likely be missing a ton of other value that your content is bringing in. For example, what if you’ve built out some content with the primary goal of driving backlinks or social traffic, but it isn’t necessarily designed to rank for much itself (e.g. a research report)? Using keyword rankings as a determining factor of success could evaluate content in a completely inaccurate way.
To combat a lot of the issues I raised above, we shifted the way that we measured content at HubSpot. For the past couple of years we’ve taken a step back from analyzing the performance of content on a page-by-page level and looked at the performance of content at the topic cluster level.
Organic search traffic and conversions are our primary search goals, so when we group our content into clusters to try and gain visibility for any searches related to a given topic, we look at the collective performance of these groups of webpages versus just the performance of individual pages.
This model of analysis helps us account for the varying goals of each individual piece of content. Also, running this analysis at scale tells us which topics tend to drive more traffic growth compared to others, and which topics tend to convert traffic at a higher rate.
This information tends to provide much clearer insights for the team as to what they should focus on next without obsessing over individual keyword rankings.
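The cluster-level analysis described above can be sketched in a few lines. This is a minimal illustration with made-up numbers, not HubSpot’s actual tooling: each row is a page tagged with a (hypothetical) topic cluster, and we roll sessions and conversions up to the cluster level to compare traffic growth and conversion rates across topics rather than across individual pages:

```python
from collections import defaultdict

# Hypothetical per-page analytics rows: (topic cluster, organic sessions, conversions).
pages = [
    ("email marketing", 1200, 36),
    ("email marketing", 800, 40),
    ("seo", 5000, 50),
    ("seo", 2500, 25),
]

def cluster_performance(rows):
    """Aggregate sessions and conversions per topic cluster,
    then compute each cluster's conversion rate."""
    totals = defaultdict(lambda: [0, 0])
    for cluster, sessions, conversions in rows:
        totals[cluster][0] += sessions
        totals[cluster][1] += conversions
    return {
        cluster: {"sessions": s, "conversions": c, "conv_rate": c / s}
        for cluster, (s, c) in totals.items()
    }

for cluster, stats in cluster_performance(pages).items():
    print(cluster, stats)
```

In this toy data, the “seo” cluster drives far more traffic, while “email marketing” converts at a much higher rate, which is exactly the kind of topic-level trade-off that page-by-page ranking reports hide.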
Despite everything I’ve said above, I’m not actually saying that keyword rankings are dead (I can already see the tweets ready to be fired at me!). Keyword data can be useful for digging into any SEO problems that happen to your site, and also to look into the intent behind certain types of searches.
That said, the new version of Google Search Console that has just recently been rolled out should give you pretty much everything you need here.
More than anything, as a marketer you need to be aware that the data that you’re looking at related to keywords is not 100% accurate. As a result, this should never be your primary performance metric.