For my inaugural post on this blog, I decided to experiment with a brand new SEO service – crowdsearch.me – one of the first platforms attempting to improve website rankings by replicating user engagement signals.

Crowdsearch.me positions itself as the future of SEO, boldly claiming within the sales video that CTR is now the number one factor that Google uses to determine rankings. It supports this claim by citing this year's Searchmetrics 2014 Ranking Factors Study, which does, in all fairness, list click-through rate as having the highest correlation with rankings within positions 1-5.

Yet, as most SEOs will tell you, correlation does not imply causation. In fact, social signals consistently rank highly within these types of studies, despite Google stating multiple times that they are not currently a direct factor within its ranking algorithms – something that has been covered well in previous posts and studies.

So if they don't directly impact results, why is there such a strong correlation between social signals and rankings in these types of studies? Primarily, it's because high-quality content that attracts social shares also tends to be the type of content that people link to.

As an example, imagine a particularly good piece of content that attracts a large number of social shares. Those shares drive a surge of traffic, which results in several websites linking to the content, improving its ranking positions and, therefore, the probability that others will discover it through organic search. This process can continue ad infinitum, with more and more people finding the content through organic search, social, or third-party referrals and, in turn, sharing it through one of those channels.
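To make that dynamic concrete, here is a toy model of the reinforcing loop. Every conversion rate in it is an invented assumption, purely for illustration:

```python
# Toy model of the share -> traffic -> links -> rankings feedback loop.
# All conversion rates below are invented assumptions, not measured values.
shares, links, rank = 100, 0, 50

for week in range(1, 9):
    traffic = shares * 20            # assume each share drives ~20 visits
    links += traffic // 500          # assume ~1 new link per 500 visits
    rank = max(1, rank - links)      # assume links steadily lift rankings
    shares += traffic // 100         # better visibility earns more shares
    print(f"Week {week}: rank {rank}, links {links}, shares {shares}")
```

Even with modest conversion rates, each channel feeds the others, which is exactly why correlation studies struggle to isolate any single factor.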

User Engagement and Rankings

Industry-standard over-egging in the sales video aside, is there any hard evidence that user engagement metrics have a causal impact on rankings? The answer, I believe, is yes, following the introduction of the Panda algorithm.

Prior to the implementation of Panda, a number of public criticisms were made about the declining quality of Google's search results. Specifically, people were becoming frustrated by the increase in the number of low-quality websites that were able to rank for large numbers of mid/long-tail phrases, but whose content provided very little value to users.

In an attempt to address these issues, Google Panda was introduced as a method of algorithmically assessing the quality of pages ranking for specific search queries. It accomplished this by shifting emphasis away from individual pages optimising for specific keywords and towards groups of related documents, which are assessed together to determine an overall quality score.

This score is determined by a large number of different signals, derived from the data gathered by Google's manual quality raters and used to continually refine the overall algorithm. The criteria used for this can – in my mind – be broken down into five distinct categories: content quality, user engagement, usability, trustworthiness, and over-optimisation.
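Purely as a speculative sketch (the signals, weights, and grouping here are my assumptions, not anything Google has published), an aggregate score along those lines might look like this:

```python
# Speculative sketch: aggregating page-level signals into a site-level
# quality score across the five categories above. All values are invented.
CATEGORIES = ["content_quality", "user_engagement", "usability",
              "trustworthiness", "over_optimisation"]

pages = [
    {"content_quality": 0.8, "user_engagement": 0.7, "usability": 0.9,
     "trustworthiness": 0.6, "over_optimisation": 0.9},
    {"content_quality": 0.3, "user_engagement": 0.2, "usability": 0.5,
     "trustworthiness": 0.4, "over_optimisation": 0.6},
]

# Score each page as the mean of its category scores, then average
# across the document group to get an overall quality score.
page_scores = [sum(p[c] for c in CATEGORIES) / len(CATEGORIES) for p in pages]
site_score = sum(page_scores) / len(page_scores)
print(f"Page scores: {[round(s, 2) for s in page_scores]}, "
      f"site score: {site_score:.2f}")
```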

Specifically in terms of user engagement, factors that Google is thought to look at include: the proportion of clicks a website gets for the queries it ranks for, whether users have a reasonable dwell time on a website or quickly bounce back to the results (known as pogo-sticking), and the number of pages visited per session.
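To make those three metrics concrete, here is a minimal sketch of how they could be computed from session logs. The record format and the 10-second pogo-sticking threshold are my own assumptions for illustration; Google's actual signals and thresholds are not public:

```python
from statistics import mean

# Hypothetical session records: (query, clicked, dwell_seconds, pages_viewed)
sessions = [
    ("blue widgets", True, 180, 4),
    ("blue widgets", True, 8, 1),    # quick bounce back to the results
    ("blue widgets", False, 0, 0),   # impression with no click
    ("cheap widgets", True, 95, 2),
]

impressions = len(sessions)
clicks = [s for s in sessions if s[1]]

ctr = len(clicks) / impressions                 # clicks per impression
avg_dwell = mean(s[2] for s in clicks)          # time spent before returning
pogo_rate = sum(1 for s in clicks if s[2] < 10) / len(clicks)
pages_per_session = mean(s[3] for s in clicks)

print(f"CTR: {ctr:.0%}, dwell: {avg_dwell:.0f}s, "
      f"pogo rate: {pogo_rate:.0%}, pages/session: {pages_per_session:.1f}")
```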

So Why Isn’t Everyone Doing This?

Although many within the industry realise the importance of user engagement metrics in a website's overall rankings – particularly since Panda 4.1 – Google has been relatively quiet about the role they now play. This is nothing new, though, as it has also remained relatively ambiguous on how sites may overcome Panda issues; typically only offering vague generalities, such as 'produce great content', as well as the odd list of questions that webmasters can use to evaluate their own sites.

This ambiguity is probably because Google fears that acknowledging user engagement metrics as a ranking factor will lead to abuse and a subsequent drop in search quality. After spending almost two decades attempting to combat link spam, it's fairly easy to see their point.

When Google has acknowledged ranking factors over the past few years, it has primarily been in an attempt to encourage implementations that help improve the quality of its results. For example, HTTPS and site speed were both officially announced on the Google webmaster blog, resulting in a large number of websites moving to faster servers and installing SSL certificates.

In addition to the lack of public acknowledgement, another reason for the scarcity of similar services could be that this type of system is relatively difficult to set up. To work effectively, the software would likely need to include the following (a rough sketch of this kind of automation follows the list):

  • Functionality which enables it to perform a search and select the right result from pages of listings (or users who are willing to do so).
  • The ability to emulate real user interactions (or a large number of real users).
  • A significant range of IP addresses within different countries (or a large range of users that reside in different countries).
  • A feature which enables the exact duration of visits and/or pages clicked to be randomised (not an issue with real users).
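As a rough sketch of the first and last requirements, the snippet below uses Selenium to perform a search, click the target result, and dwell for a randomised 2-5 minutes. The domain and keyword are placeholders, and in practice Google actively detects this kind of automation, so treat it as an outline of the moving parts rather than a working tool:

```python
import random
import time
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

TARGET_DOMAIN = "example.co.uk"   # placeholder target site
KEYWORD = "blue widgets"          # placeholder keyword

driver = webdriver.Chrome()       # a real service would also rotate IPs
driver.get("https://www.google.co.uk")

box = driver.find_element(By.NAME, "q")
box.send_keys(KEYWORD + Keys.RETURN)
time.sleep(random.uniform(2, 4))  # pause like a human scanning the results

# Click the result that points at the target domain
for link in driver.find_elements(By.CSS_SELECTOR, "a"):
    href = link.get_attribute("href") or ""
    if TARGET_DOMAIN in href:
        link.click()
        break

time.sleep(random.uniform(120, 300))  # randomised dwell of 2-5 minutes
driver.quit()
```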

In fact, the only other public study I know of that attempted something similar was Rand’s imec labs test, which cannot be called conclusive, but did show positive results.

The Case Study

To test the software, I selected an affiliate site within the homeware sector. The chosen website has been on the first page for most of its keywords for the last 12 months, but has not moved past position four for its main term. As well as being relatively stable, the website has not had any new link-building activity for a significant period of time – 6+ months – making it a good site to test, as this minimises the risk of other causal factors skewing the results.

The keywords that will be targeted by the platform are as follows:

Keyword   Search Volume (UK)   Current Rank
#1        2,900                4
#2        70                   2
#3        50                   2
#4        10                   2
#5        10                   4
#6        10                   4
#7        10                   4

Additionally, as per the platform's tutorial, I also included several brand and URL variations to ensure that the visits did not appear artificial.

The Platform

The platform itself is very simple to use. After logging in, you are presented with both a video tutorial and a link to a best-practices article. You can then click to add a campaign – essentially one keyword variation – and are presented with a campaign information form.

When adding a keyword, users can select their domain (or exact URL), keyword, Google TLD, searches per day, and the average duration of visits. Additionally, the software comes with several more advanced features, including:

  • Bounce Back – A feature which causes some searchers to select a competitor's result and quickly bounce off it, before selecting yours.
  • Internal Browsing – A component which ensures searchers click on multiple pages on your website, rather than just the URL you select.
  • Random Browsing – Where users select a random combination of your website pages during their session.
  • Manual Browsing – Where users select a manually designated combination of pages during their session.
  • Social Sharing – Twitter shares and/or favourites, depending on what you select.
  • Rank checking – An inbuilt rank checker to track any increases.
  • Smart Rank – Technology designed to adjust your daily volume of searches, depending on your rank and the keyword's search volume.

For my test, I set all of my searches at between 2 and 5 minutes in duration and varied the number of daily visits between 2 and 10, based on the search volume and competitiveness of the target keyword. I also enabled Internal Browsing, Random Browsing, and Bounce Back; a summary of these settings is sketched below.
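For reference, my settings map to something like the campaign definition below. The notation is entirely my own (the platform exposes these options through a web form), and the domain is a placeholder:

```python
# Illustrative campaign settings, mirroring the options described above.
# This is my own notation, not crowdsearch.me's actual configuration format.
campaign = {
    "domain": "example.co.uk",        # placeholder target site
    "keyword": "keyword #1",
    "google_tld": "google.co.uk",
    "searches_per_day": (2, 10),      # varied with volume/competitiveness
    "visit_duration_minutes": (2, 5),
    "internal_browsing": True,
    "random_browsing": True,
    "bounce_back": True,
    "social_sharing": False,          # left off, as discussed above
    "smart_rank": False,              # unavailable for these keywords
}
```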

Although I was intrigued by Smart Rank, I was unfortunately not able to select it for any of my target keywords, as it's only available if you are ranking lower than position 120 for the selected phrase and the keyword has more than 1,000 searches per month.

Social shares were also not selected for the reasons discussed above.

Results – 7/1/2015 – 21/1/2015

After two weeks, the results recorded using this platform weren't particularly impressive. Keyword 7 did increase by two places, but this could be the result of normal ranking fluctuations.

Keyword   Search Volume (UK)   Current Rank
#1        2,900                4
#2        70                   2
#3        50                   2
#4        10                   2
#5        10                   4
#6        10                   4
#7        10                   2

Whilst this test was not successful, it is difficult to draw any real conclusions from a sample of one.

Some factors that may have made a difference include:

  • Test Duration – Although we can postulate that user engagement metrics can impact a website's rankings, we don't know the total quantity of data Google examines before the algorithm decides to promote or demote a search result. Additionally, although Panda supposedly updates on a continual basis, observable algorithmic fluctuations on tools like Algoroo seem to imply that it refreshes towards the end of the month – collecting data and then rolling out over a 7-10 day period.

  • Volume of visits – Smaller keywords only had 3-5 daily visits, whilst the largest keyword had around 10. For keyword #1, with roughly 2,900 monthly searches (around 95 per day), 10 simulated visits represent only about 10% of daily search volume, so on larger keywords an increased proportion of visits may be needed to produce ranking increases.

  • Starting position of keywords – Google is known to use logarithmic scales, making it far harder to move up from position 2 to 1 than from 20 to 10. Presuming they might use something similar within the Panda algo, it is possible that other studies saw more positive movement because their keywords were ranking in lower positions; the toy model below illustrates the intuition.
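This is a toy model with made-up numbers rather than Google's actual scale, but suppose the score required for each position grows exponentially as you approach the top:

```python
# Toy model: assume the score needed for position p grows exponentially
# towards the top of the results. The base of 1.5 is an arbitrary choice.
def required_score(position: int) -> float:
    return 1.5 ** (30 - position)

gap_2_to_1 = required_score(1) - required_score(2)
gap_20_to_10 = required_score(10) - required_score(20)

print(f"Extra score needed, 2 -> 1:   {gap_2_to_1:,.0f}")
print(f"Extra score needed, 20 -> 10: {gap_20_to_10:,.0f}")
```

Under this assumption, closing the gap from position 2 to 1 takes roughly thirteen times the improvement needed to move from 20 to 10, which would explain why keywords already sitting in positions 2-4 barely moved.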