Influence: The Psychology of Persuasion – Outreach Takeaways

Recently, I’ve been trying to better understand what constitutes ‘good outreach’ in an attempt to increase response rates and overall placements. Ignoring the quality of the actual content being pitched — which, I believe, will always be the number one factor — there appear to be three main components. These are:

  1. Outreach Targets: The quality of the contact list, e.g. relevance to the writer/site.
  2. The Pitch: The language used in the initial email — subject line, copy, length of copy, link inclusion, attachment inclusion, etc.
  3. Analysis and Tracking: Using software to monitor response rates and test different approaches.

Components one and three are both fairly easy to improve upon; you simply need to buy the right tools, and implement the right training and processes. Writing good pitches, however, is a far more difficult challenge, with success largely being determined by the personal preferences of your contacts.

What Constitutes a Good Pitch?

Anyone who has performed a lot of outreach will know that it’s hard to predict exactly what will happen on a given day. Sometimes you can send out half a dozen emails and secure a couple of good placements; other days you can send 30 and get no replies at all. Yet, whilst there’s certainly an element of luck involved, some people are undoubtedly more successful than others. This may be hard to measure in the short term, but if you compare people over a range of different projects then you’ll start seeing trends emerge.

Partially, this will be down to experience: the more you perform a task, the better you normally get at it. Interestingly, though, this isn’t always the case, and the fact that most people improve over time doesn’t tell us why they improve. Thus, I decided to read Influence: The Psychology of Persuasion by Robert Cialdini to see if I could identify any psychological principles underpinning success rates. The main points of interest are listed below.

Outreach Takeaways

Favours — A well-known principle of human behaviour is that when we ask someone for a favour, we are more successful if we provide a reason. People simply like to have reasons for what they do.

Takeaway — Always provide a ‘because’ in your outreach emails. What is the reason someone should run your infographic, or publish your survey?

The Rule of Reciprocation — We feel obligated to repay, in kind, what another person has provided or done for us. This feeling of indebtedness can be triggered by doing an uninvited favour, and we may be willing to agree to perform a larger favour than we received to relieve ourselves of the psychological burden.

Takeaway — Consider sending useful and relevant information over to a writer before you actually contact them. When you do contact them, they may be more likely to comply with your actual request.

Reciprocal Concessions — We feel an obligation to make a concession to someone who has made one to us. Mutual concessions are an important part of socially desirable arrangements, as they ensure that neither party is exploited.

Takeaway — If your initial request is refused, a second, smaller favour is more likely to be accepted. As an example, if a site doesn’t want to cover a piece of your content, ask if they’d be willing to share it socially.

Consistency — Once we have made a decision, we will encounter personal and interpersonal pressures to behave consistently. One technique used in sales is the ‘foot in the door’ technique, where the salesperson starts out with a small request in order to gain eventual compliance with a larger one.

Takeaway — Asking someone to view an earlier draft of a project, or perform a smaller task such as social sharing, can increase the overall chance of compliance.

Social Proof — We determine what the correct behaviour is by observing others. When people are uncertain, they are more likely to use others’ actions to decide how to act. This is more powerful when we are watching people similar to ourselves.

Takeaway — If you’ve secured other placements then don’t be afraid of using them to provide credibility that a piece is worth covering.

Liking — We all feel pressure to say yes to someone we know and like. Often the mention of a name is enough.

Takeaway — If you receive a reply (whether positive or negative) consider asking the contact if they know anyone else who might be interested.

Similarity — We like people who are similar to us. This holds true whether the similarity is in opinions, personality traits, background, or lifestyle. People can claim to have backgrounds and interests similar to ours to boost compliance.

Takeaway — If you make an effort to show you know someone’s ‘beat’, you’ll have more success.

Compliments — Although there are limits, we tend to believe praise and to like those who provide it, even if it seems false.

Takeaway — Don’t be afraid to compliment a contact on their existing content or site. You may worry they’ll see it as fake, but that’s not necessarily true.

Co-operation — Establishing co-operation and mutual benefit can boost compliance.

Takeaway — Ensure that your emails are focused around the benefit you’ll be providing to the contact, rather than the benefit to yourself.

Association — There is a natural human tendency to dislike a person who brings us unpleasant information, even when that person did not cause the bad news. The simple association is enough to stimulate dislike.

Takeaway — Avoid referencing SEO or anything else that may have a negative impact by association.

Scarcity — Opportunities seem more valuable to us when their availability is limited. Because we know that the things that are difficult to possess are typically better than those that are easy to possess, we can often use an item’s availability to help us quickly and correctly decide on its quality. Not only do we want the same item more when it is scarce, we want it most when we are in competition for it.

Takeaway — Don’t be afraid to pitch exclusives to larger publications, preferably for a few weeks only. Also don’t be afraid to casually mention you’re talking to other publications.

Parsing Logs for SEO Analysis Using Windows CMD Line

Using log files for SEO analysis is a great way to uncover issues that you may have otherwise missed. This is because, unlike third party spiders, they allow you to see exactly how Googlebot is crawling a site.

If you’re an SEO professional looking to carry out your own log file analysis, then the chances are you’ll have to request the files through your own, or your client’s, dev team. However, what you get back can vary widely depending on the server’s configuration, and whether or not the developers were willing to extract Googlebot from the raw logs.

Whenever I go through this process at work, my standard request is for around 100k Googlebot requests. On smaller sites, I recommend pulling 2-4 weeks’ worth of logs; on larger sites, 5-7 days is usually sufficient.

Occasionally, the developers will be nice enough to perform a GREP to extract Googlebot from the raw files. Most of the time, however, they’ll just send them over as is:

[Image: Server Logs - SEO]

If you end up in this scenario and you’re a Mac user, then you’re in luck — Macs run a UNIX-based operating system, so you can perform a grep in the Terminal. If you’re on Windows like me, then you’re either going to need to use an emulator like Cygwin, third-party software like Gamut, or the Windows CMD line.

Luckily, the Windows CMD line does have the functionality to extract text, through a built-in command called findstr.

As an initial step, head over to the start menu and perform a search for ‘CMD’. Click it to load up the command line (effectively a text based interface for your operating system). You’ll see something like this:

[Image: CMD Line FINDSTR]

For ease of use, create a folder in the location that is indicated within the interface (in this instance C:\Users\Will) and place the log files within it. Use the ‘CD’ command to access the folder you created. E.g.

cd c:\Users\Will\Logs

Next, you can run the following command:

findstr "Googlebot" *.log > export.csv

This will search all of the .log files in the folder and extract every instance of Googlebot into one file that you can open in Excel. When it has finished processing, you’ll be returned to a fresh line in the command prompt; on larger batches this may take some time.
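A couple of optional refinements are worth knowing about: findstr is case-sensitive by default, and you can pipe its output through a second findstr to narrow the export down further. The lines below are a minimal sketch, assuming your logs use the common combined log format (where the HTTP status code sits as a space-delimited field); the output file names are just examples:

rem /i makes the match case-insensitive, guarding against user-agent casing variations
findstr /i "googlebot" *.log > export.csv

rem a second findstr isolates Googlebot requests that returned a 404
findstr /i "googlebot" *.log | findstr " 404 " > 404s.csv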

FAQ

  1. Why would you use this over something like Gamut with a graphical user interface?

Mainly because it’s free and usually ends up being quicker. Additionally, any graphical interface requires some processing power, which can cause it to hang on very large files.

  2. What if I’m not using logs?

Although this is primarily aimed at logs, you can use it for any form of text extraction. Ever wanted to merge a load of CSV files when doing link analysis/prospecting? Put them in the same folder and run the following command:

copy *.csv export.csv
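One caveat: copy simply concatenates the files, so if your CSVs contain header rows, each header will be repeated in the merged output. A minimal workaround, assuming every file has a single header line, is to skip the first line of each file with more +1:

rem more +1 skips the first line (the header) of each file
rem use %%f instead of %f if running this from a .bat file
for %f in (*.csv) do more +1 "%f" >> merged.txt

Note that this strips the header from the first file too, so paste one back in afterwards; writing to merged.txt (rather than a .csv) also stops the loop picking up its own output.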

What SEOs Can Learn From #TheDress

Unless you’ve been living under a rock for the past week, you’ve probably read or heard something about #TheDress. The debate went viral last Thursday, resulting in a massive influx of publicity for the retailer Roman Originals and an overnight 347% increase in sales for the garment in question.

Yet, beyond an initial spike in product interest, what will the legacy of #TheDress be for the brand? Additionally, based on previous academic research, what can we learn from the story’s success?

Using these questions as a starting point, this post will attempt to explain:

  • Why this story went viral
  • The potential long-term SEO benefit to Roman Originals

What Makes a Story Go Viral?

Like most SEOs, I’ve faced the disappointment of a well-planned piece of content falling flat — and have had to attempt to explain to a client why it didn’t perform as anticipated.

The harsh reality is that the sheer quantity of quality content being produced has made it extremely difficult to make a significant impact, and often the difference between successful and unsuccessful campaigns can feel like pure luck.

Yet, whilst luck certainly plays a part, certain types of content do seem to perform consistently better than others. This means it should be theoretically possible to anticipate which types of content will be successful by comparing them against specific criteria.

In his book, ‘Contagious’, Dr Jonah Berger lists six factors which can serve as a framework for successful viral content. These are:

Social Currency: People like to share things that benefit them in some way. For example, reinforcing their views or values, or helping them to be perceived in a certain way.

Triggers: Effective content often incorporates external triggers, stimuli that remind people of the original piece and help to keep it on people’s minds.

Emotion: If something evokes a strong emotive reaction, we are more inclined to share it — although low-arousal emotions, such as sadness, tend to be less effective.

Public: We have a tendency to imitate the behaviour of others, so content that is observable and inclusive is more likely to spread.

Practical Value: We all crave information that is valuable and will make a difference in our lives, or the lives of those we care about. Hence the popularity of practical content with clear utility.

Stories: As human beings, we have always disseminated information through storytelling. Wherever possible, content should attempt to tap into this love for stories by creating a compelling narrative.

In my mind, the #TheDress debate tapped into at least three of these factors:

Emotion

The dress appeared to be completely different colours to different people, eliciting strong emotions from those on either side of the debate. These included awe, surprise, confusion and frustration.

Public

The division in opinion created by the garment caused a very public, large-scale debate that everyone could freely participate in.

Practical Value

The story was both fairly novel and highly inclusive, presenting clear value in sharing it with others.

So, as SEOs, what can we learn from this? Primarily that content success isn’t always determined by budget. Sometimes — as in this instance — you can get lucky, regardless of the effort or planning you put in. Nevertheless, if you use the above framework as a litmus test for your ideas, you stand a much better chance of achieving success.

SEO Benefits

Now we’ve covered the factors that helped the story go viral, it’s time to analyse the SEO benefits Roman Originals may see from #TheDress, due to the large quantity of high-quality links the story has accrued.

On 27/2/15, the website only had 508 linking root domains; a fairly small number considering the competitiveness of the fashion vertical. As of 4/3/15, this had already increased to 1213 domains, a rise of 138.8%.

[Image: Roman Originals Majestic Graph]

The vast majority of these links were also of a very high quality, causing the website’s Majestic TF to increase from 17 on 27/2/15 to 27 on 4/3/15. Some of the best links included:

URL | Domain PR | Domain TF | Domain CF
http://huffingtonpost.com/2015/02/27/the-dress_n_6766774.html | 8 | 84 | 89
http://independent.co.uk/life-style/fashion/news/the-dress-actual-colour-brand-and-price-details-revealed-10074686.html | 8 | 78 | 76
https://yahoo.com/health/is-this-dress-blue-and-black-or-white-and-gold-112194158507.html | 9 | 92 | 97
http://metro.co.uk/2015/02/27/this-is-why-your-facebook-and-twitter-feeds-are-full-of-chat-about-that-dress-this-morning-5081515/ | 6 | 36 | 51
http://money.cnn.com/2015/02/27/smallbusiness/the-dress-blue-black-gold-white/index.html?sr=twmoney0227dressmaker853aVODtop | 8 | 79 | 76
http://myfoxdc.com/story/28219686/real-color-of-the-dress | 6 | 45 | 46
http://cbsnews.com/news/blue-and-black-dress-is-not-white-and-gold-uk-retailer-says/ | 9 | 73 | 80
http://telegraph.co.uk/technology/social-media/11439084/So-what-colour-is-this-dress-The-science-and-theories-behind-dressgate.html | 8 | 79 | 81
http://mashable.com/2015/02/26/buy-blue-black-white-gold-dress/ | 8 | 77 | 84
http://cbc.ca/news/community/thedress-colour-question-divides-an-internet-1.2974932 | 8 | 84 | 70

Penalty Prevention

As well as the direct ranking benefits the links will likely bring, another perk is a reduced probability of the Penguin algorithm negatively impacting the website. Alternatively, if the domain is already affected, there is a higher chance of escaping the algorithm without submitting a disavow file.

Whether it is possible to escape Penguin by building or attracting high-quality links is a topic that has been discussed in some detail before. The general consensus is yes: Penguin is thought to look at the ratio of good to bad links and penalise websites on a gradient of impact, so it is theoretically possible for an affected domain to recover if that ratio changes.

The influx of high-quality links from news sites covering the story has resulted in a massive increase in Trust Flow for the domain, which is a good indication that this ratio has shifted somewhat. More interesting, however, is the change in the proportion of commercial versus branded and generic anchor texts.

Roman Originals’ anchor text profile previously showed relatively high levels of links using commercial anchors, including target keywords like ‘evening wear’, ‘evening blouses’, ‘evening wear dresses’ and ‘womens clothing’.

[Image: Majestic Anchor Text Ratio, Roman Originals, February]
The recent influx of links has massively improved this ratio, as most of the websites linked with branded or generic anchor texts. Consequently, by 4/3/15, the profile looked like this:

[Image: Majestic Anchor Text Ratio, Roman Originals, March]

As anchor text ratios are one of the primary factors Penguin looks at, it stands to reason that this increase in branded and generic terms may either prevent future penalties from occurring or help the brand escape any current penalisation. In this instance, the former scenario seems more likely, as SEMrush shows no large-scale drops in organic visibility.

[Image: SEMrush organic visibility, Roman Originals]

Note: Without seeing Webmaster Tools, there is no way of knowing whether or not these links have already been disavowed, in which case this would all be a non-issue. Nevertheless, the above is still an interesting example of how a viral piece of content could potentially help a website escape an algorithmic penalty.

Conclusion

Although it was largely accidental, the success of #TheDress is still a good case study of the ways in which viral content can benefit a business. The challenge for SEOs and marketers alike is how to approach content creation in a way that presents a higher probability of success. One way to do this is to study previous academic research and incorporate the frameworks it has developed. If content ideas are then compared against these, it should be possible to weed out the ideas that don’t fit the criteria.

Do you consider these frameworks when coming up with your own ideas?

Experimenting with Crowdsearch.me

For my inaugural post on this blog I decided to experiment with a brand new SEO service – crowdsearch.me – which is one of the first platforms attempting to improve website rankings by replicating user engagement signals.

Crowdsearch.me positions itself as the future of SEO, boldly claiming within its sales video that CTR is now the number one factor Google uses to determine rankings. It supports this claim by citing the Searchmetrics 2014 Ranking Factors study, which does, in all fairness, list click-through rate as having the highest correlation with rankings within positions 1-5.

[Image: Searchmetrics 2014 Ranking Factors]

Yet, as most SEOs will tell you, correlation does not imply causation. In fact, social signals consistently rank highly within these types of studies, despite Google stating multiple times that they are not currently a direct factor within its ranking algorithms – something that has been covered well in previous posts and studies.

So if they don’t directly impact results, why is there such a strong correlation between social signals and rankings in these types of studies? Primarily, it’s because high-quality content that attracts social shares also tends to be the type of content that people link to.

As an example, imagine a particularly good piece of content attracts a large quantity of social shares. As a result of all of these shares, it also receives a high proportion of traffic, which results in several websites linking to the content, increasing its ranking positions and, therefore, the probability that others will discover it through organic search. This process can continue ad infinitum, with more and more individuals finding the piece through organic search, social, or third-party referrals and, in turn, sharing it through one of these channels.

User Engagement and Rankings

Industry-standard over-egging in the sales video aside, is there any hard evidence that user engagement metrics have a causal impact on rankings? The answer, I believe, is yes, following the introduction of the Panda algorithm.

Prior to the implementation of Panda, a number of public criticisms were made about the declining quality of Google’s search results. Specifically, people were becoming frustrated by the increase in the number of low-quality websites which were able to rank for large quantities of mid/long-tail phrases, but whose content provided very little value to users.

In an attempt to address these issues, Google Panda was introduced as a method of algorithmically assessing the quality of pages ranking for specific search queries. It accomplished this by placing greater emphasis on groups of resources as a whole (rather than individual pages optimised for specific keywords), grouping documents together and determining an overall quality score.

This score is determined by a large number of different signals that are derived from the data gathered by Google’s manual quality raters and used to continually refine the overall algorithm. The criteria used for this can – in my mind – be broken down into five distinct categories: content quality, user engagement, usability, trustworthiness, and over-optimisation.

Specifically in terms of user engagement, factors that Google is thought to look at include: the proportion of clicks a website gets for the queries it ranks for, whether users dwell on a website for a reasonable time or quickly return to the results page (known as pogo-sticking), and the number of pages visited per session.

So Why Isn’t Everyone Doing This?

Although many within the industry realise the importance of user engagement metrics in a website’s overall rankings – particularly since Panda 4.1 – Google has been relatively quiet about the role they now play. This is nothing new, though, as it has also remained relatively ambiguous about how sites may overcome Panda issues, typically only offering vague generalities such as ‘produce great content’, as well as the odd list of questions that webmasters can use to evaluate their own sites.

This ambiguity is probably because Google fears that acknowledging user engagement metrics as a ranking factor would lead to abuse and result in a subsequent drop in search quality. After spending almost two decades attempting to combat link spam, it’s fairly easy to see their point.

When Google has acknowledged ranking factors over the past few years, it has primarily been in an attempt to encourage implementations that will help improve the quality of its results. For example, HTTPS and site speed were both officially announced on the Google webmaster blog, resulting in a large number of websites switching to faster servers and installing SSL certificates.

In addition to the lack of public acknowledgement, another reason for the scarcity of similar services could be that this type of system is relatively difficult to set up. To work effectively, the software would likely need to include:

  • Functionality which enables it to perform a search and select the right link from pages of results (or users who are willing to do so).
  • The ability to emulate real user interactions (or a large number of real users).
  • A significant range of IP addresses within different countries (or a large range of users residing in different countries).
  • A feature which enables the exact duration of visits and/or pages clicked to be randomised (not an issue with real users).

In fact, the only other public study I know of that attempted something similar was Rand’s IMEC Lab test, which cannot be called conclusive but did show positive results.

The Case Study

To test the software, I selected an affiliate site within the homeware sector. The chosen website has been on the first page for most of its keywords for the last 12 months, but has not moved past position four for its main term. As well as being relatively stable, the website has also not had any new link building activity for a significant period of time – 6+ months – making it a good website to test, as it minimises the risk of other causal factors skewing the results.

The keywords that will be targeted by the platform are as follows:

Keyword | Search Volume (UK) | Current Rank
#1 | 2900 | 4
#2 | 70 | 2
#3 | 50 | 2
#4 | 10 | 2
#5 | 10 | 4
#6 | 5 | 4
#7 | 5 | 4

Additionally, as per the platform’s tutorial, I also included several brand and URL variations to ensure that the visits did not appear artificial.

The Platform

The platform itself is very simple to use. After logging in you are presented with both a video tutorial and a link to an article of best practices. You can then click to add a campaign – essentially one keyword variation – and are presented with a campaign information form.

When adding a keyword, users can select their domain (or exact URL), keyword, Google TLD, searches per day, and the average duration of visits. Additionally, the software comes with several more advanced features, including:

  • Bounce Back – A feature which causes some searchers to select a competitor’s result and quickly bounce off it, before selecting yours.
  • Internal Browsing – A component which ensures searchers click on multiple pages on your website, rather than just the URL you select.
  • Random Browsing – Where users select a random combination of your website’s pages during their session.
  • Manual Browsing – Where users select a manually designated combination of pages during their session.
  • Social Sharing – Twitter shares and/or favourites, depending on what you select.
  • Rank Checking – An inbuilt rank checker to track any increases.
  • Smart Rank – Technology designed to adjust your daily volume of searches, depending on your rank and the keyword’s search volume.

For my test, I set all of my searches at between 2-5 minutes in duration and varied the number of daily visits between 2-10, based on the search volume and competitiveness of the target keyword. I also enabled Internal Browsing, Random Browsing, and Bounce Back.

Although I was intrigued by Smart Rank, I was unfortunately unable to select it for any of my target keywords, as it’s only available if you are ranking below position 120 for the selected phrase and the volume is above 1,000 searches per month.

Social shares were also not selected, for the reasons discussed above.

[Image: Crowdsearch.me Dashboard]

Results: 7/1/2015 – 21/1/2015

After two weeks, the results recorded using this platform weren’t particularly impressive. Keyword 7 did increase by two places, but this could be the result of normal ranking fluctuations.

Keyword | Search Volume (UK) | Current Rank
#1 | 2900 | 4
#2 | 70 | 2
#3 | 50 | 2
#4 | 10 | 2
#5 | 10 | 4
#6 | 5 | 4
#7 | 5 | 2

However, despite this lack of success, I am reluctant to draw any negative conclusions. This is partially because others within the industry – e.g. Terry Kyle – have reported positive improvements using the software, and also because this is only a study of one.

Some factors that may have made a difference include:

    • Test Duration – Although we can postulate that user engagement metrics can impact a website’s rankings, we don’t know the total quantity of data Google examines before the algo decides to promote or demote a search result. Additionally, although Panda supposedly updates on a continual basis, observable algorithmic fluctuations on software like Algoroo seem to imply that it refreshes towards the end of the month – collecting data and then rolling out over a 7-10 day period.
    • Volume of Visits – Smaller keywords only had 3-5 daily visits, whilst the largest keyword had around 10. On larger keywords, it is possible that a greater volume of visits may be needed to produce ranking increases.

    • Starting Position of Keywords – Google is known to use logarithmic scales, making it far harder to move up from position 2 to 1 than from 20 to 10. Presuming they might use something similar within the Panda algo, it is possible that other studies saw more positive movement because their keywords were ranking in lower positions.

As all of the above factors are purely conjecture, I will keep the test running for at least one month to see if things improve. I will also increase the number of visits on the keyword with the largest search volume from 10 to 15. Please check back in a couple of weeks for another update.