We recently had a client recover from a manual action placed by Google (sometimes also called a ‘manual penalty’) for “unnatural links from your site”. This is one of the many manual actions a website can receive for infringing Google’s webmaster quality guidelines.
Google’s documentation states that this manual action occurs when “Google has detected a pattern of unnatural artificial, deceptive, or manipulative outbound links”. This intimidating offence seems like something only black hat SEOs would be accused of, so I was somewhat surprised when we received the manual action message through Search Console. But it turns out we were innocently infringing Google’s Webmaster Guidelines…
In this article, I provide a detailed outline of the events that led to our recovery from this manual action. You’ll come to understand why our recovery timeline was so long and how you can shorten yours through thorough investigative work and a little less patience.
Timeline of recovery
To understand exactly what was wrong, I needed to investigate where the issue was on our site. With this type of manual action, either a section of your site or the full site will be affected, respectively known as a “partial match” or a “site-wide match”. For example, if Google has detected the violation on your blog, which sits within a subfolder of the same name, they would state that only pages with the URL pattern example.com/blog/ are affected.
With the issue narrowed down to a specific section of the website, I took a deeper dive into what a violating link looked like. The best place to start is Google’s Webmaster Guidelines on linking, or what they also call “Link schemes”. On this page, Google provides several examples of link schemes – the TL;DR version of this page is that you’re likely violating Google’s Webmaster Guidelines if you have any links which are intentionally placed in an attempt to pass PageRank to outbound sources. It was at this point that I had my first “eureka!” moment.
One of the examples of violations that Google provides is “Text advertisements that pass PageRank”. In this instance, the client’s website generates revenue through paid advertorials. While this is a completely acceptable advertising method used by even the biggest publishers, you need to keep tight control over the placement of links, while also ensuring they either have a rel=”nofollow” attribute or are redirected through a page which is blocked by robots.txt.
Some publishers even take this a step further and place a rel=”nofollow” attribute on every outbound link, which I think is completely unnecessary (and makes it difficult for search engines to crawl and index the web). It was at this point that we decided to methodically assess our issue at scale.
The client’s website is huge, so manually assessing every webpage in the /blog/ subfolder would take a huge amount of resource. We decided that it made sense to use Screaming Frog to crawl the entire blog subfolder and extract all outbound links.
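The extraction step can be sketched with nothing but the Python standard library. This is a hypothetical illustration of the idea, not what Screaming Frog does internally; the domain and sample HTML below are placeholder values:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class OutboundLinkParser(HTMLParser):
    """Collects href targets that point outside the given site domain."""
    def __init__(self, site_domain):
        super().__init__()
        self.site_domain = site_domain
        self.outbound = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        host = urlparse(href).netloc
        # Relative links and same-host links are internal; anything else is outbound.
        if host and host != self.site_domain:
            self.outbound.append(href)

parser = OutboundLinkParser("example.com")
parser.feed('<p><a href="/about">About</a> '
            '<a href="https://sponsor.example.net/deal">Great deal</a></p>')
print(parser.outbound)  # → ['https://sponsor.example.net/deal']
```

Running a parser like this over every crawled page and tallying the results by domain gives you the same per-domain link counts we built our spreadsheet from.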
Once we had this data, we created a spreadsheet showing the link count for each outbound domain. Using this data, we sat down with the client and assessed which links were definitely in place due to paid partnerships. With this refined list, we programmatically set a rel=”nofollow” directive on those links. Our client also placed strict instructions within their content writer guidelines and made small tweaks to the copy promoting their paid advertorial opportunities, so that it was very clear they’re not in the business of exchanging PageRank (for example, we changed ‘purchase our content’ to ‘license our brand’).
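How you apply the rel=”nofollow” directive programmatically depends entirely on the CMS, but the idea can be sketched as a simple text transform. A regex is fragile on real-world HTML, and the paid-partner domain list here is a made-up placeholder, so treat this purely as an illustration:

```python
import re

# Hypothetical list of domains the client has paid partnerships with.
PAID_DOMAINS = {"sponsor.example.net"}

def add_nofollow(html):
    """Add rel="nofollow" to anchor tags that link to paid-partner domains."""
    def fix(tag_match):
        tag = tag_match.group(0)
        if not any(domain in tag for domain in PAID_DOMAINS):
            return tag  # not a paid link, leave untouched
        if "rel=" in tag:
            return tag  # already has a rel attribute, don't duplicate it
        return tag[:-1] + ' rel="nofollow">'
    return re.sub(r"<a\b[^>]*>", fix, html)

html = ('<a href="https://sponsor.example.net/deal">Win tickets</a> '
        'and <a href="/about">about us</a>')
print(add_nofollow(html))
```

In practice you would run this sort of transform inside the CMS templating layer, so the nofollow is applied consistently to future advertorials as well as existing ones.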
We then submitted a reconsideration request to Google, which read as follows:
“Google Webmaster Team, example.com received a manual penalty for “Unnatural links from your site” affecting pages within our example.com/blogs section.
This manual penalty has been taken very seriously by example.com.
After heavy investigative work, we believe that this manual penalty is due to an innocent mistake on our behalf. We do not believe that our Development, Content or Advertising team members are intentionally or knowingly participating in any link schemes or attempting to violate Google’s Webmaster Guidelines.
To remedy this issue, we have performed a crawl of the offending sections to extract instances of “nofollow” links both within editorial content and any on-page advertising. The resulting list has been reviewed in great depth and we have updated code where a “nofollow” directive should be in place.
We have also updated our internal training documents to ensure our Development, Content or Advertising teams are educated on this manual penalty and reasons why this penalty can be placed on a website.
It is in example.com’s best interest to provide high quality, relevant content and be perceived as a trustworthy source of music industry journalism. In doing so we also seek to align with Google’s Webmaster Guidelines.
We kindly ask that Google reviews this reconsideration request to remove the manual penalty. Further evidence of our changes can be provided on request.”
In hindsight, we submitted this too early; we should have spent more time investigating the violating links. On day 16, we realised several links had been missed in our initial “fix”.
We received a response from the Google Webmaster Team notifying us that our reconsideration request had been processed but was not fully accepted. They were, however, helpful in providing an example of an offending link. When we checked it, this was definitely a link from a paid advertorial that didn’t have a rel=”nofollow” directive placed on it.
It was clear that our initial solution was not thorough enough, so we needed to reassess our methodology. This is where I had my second “eureka!” moment. While it may seem obvious now, by assessing common elements across paid advertorials we realised that each one contained a phrase such as:
“In partnership with…”
“This content is sponsored by…”
Once we knew exactly which phrases we were looking for, we ran a Screaming Frog crawl to extract all pages within our blog which contained this text.
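If you already have the crawled page text to hand, the same filter is a few lines of Python. The phrases and URLs below are placeholders standing in for whatever your own advertorials use:

```python
# Hypothetical sponsor phrases identified on the client's advertorials.
SPONSOR_PHRASES = ("in partnership with", "this content is sponsored by")

def flag_advertorials(pages):
    """pages maps URL -> extracted page text; returns URLs that need review."""
    return [url for url, text in pages.items()
            if any(phrase in text.lower() for phrase in SPONSOR_PHRASES)]

pages = {
    "example.com/blog/gear-review": "Our honest take on this year's studio gear.",
    "example.com/blog/festival-guide": "In partnership with TicketCo, here's our guide...",
}
print(flag_advertorials(pages))  # → ['example.com/blog/festival-guide']
```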
I’m not going to say the next step is easy, but it was a necessary task that you cannot cut corners on! We took the pages from the crawl and manually assessed each one for violating links. Pages containing violating links then took one of the following pathways:
- Set a rel=”nofollow” on violating links if they’re still relevant
- Remove violating links that are no longer relevant
- Delete the content
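To keep the manual review consistent across reviewers, the three pathways above can be encoded as a tiny decision function. The two flags are hypothetical judgments a reviewer would record for each page:

```python
def triage(page):
    """Pick a remediation pathway for a page that contains violating links.

    `page` carries two hypothetical reviewer judgments:
      content_relevant - is the page itself still worth keeping?
      links_relevant   - are the violating links themselves still useful?
    """
    if not page["content_relevant"]:
        return "delete content"
    if page["links_relevant"]:
        return "set rel=nofollow on violating links"
    return "remove violating links"

print(triage({"content_relevant": False, "links_relevant": False}))  # → delete content
print(triage({"content_relevant": True, "links_relevant": True}))
```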
Deleting content may seem like an extreme measure, but we saw this as a good opportunity to remove content that was completely irrelevant and received no visibility – for example, a paid advertorial that was published for Christmas 2014.
After spending two weeks manually reviewing content, we were finally prepared to submit another reconsideration request. We tweaked our initial reconsideration request message and confidently clicked “Request Review”.
Now, we knew that Google can take anywhere from a few hours in the best case to several weeks in the worst case. However, we weren’t prepared for just how long the wait ahead of us would be…
Four weeks had passed and we were yet to receive any correspondence from the Google Webmaster Team. We attributed this to it being Christmas time, assuming we were sitting at the end of a very long backlog of reconsideration requests.
One week later, we still hadn’t received any response. Out of patience, we resubmitted the same reconsideration request from day 40. In hindsight, we should have done this sooner to speed up the process, and according to Google this is completely fine to do:
In response to someone asking whether it’s normal to wait two months to hear back regarding a reconsideration request, Matt Cutts stated that “..no that’s not normal. What I would do is I would actually do another reconsideration request and I would mention hey I didn’t hear back..”
80 full days after we received the initial manual action, it was finally removed. As shown on the graph below, the organic traffic for the Blog subfolder also quickly started to recover.
While the timeline stretched over 80 days, we learnt a lot about how we would deal with this manual action if we were to manage the recovery process again:
- Don’t submit your reconsideration request too early. In hindsight, we should have double-checked, then checked once more to ensure we completely remediated the situation. It’s better to spend another week triple-checking potential violating links, than it is to wait several weeks for a response from the Google Webmaster Team, only to be told that you need to conduct some further investigation.
- Be thorough in your investigation. I recommend using a crawling tool like Screaming Frog to produce an outbound link count report for the affected sections, but you should also look for phrases or tags which you know are used on your promotional content. Furthermore, show diligence by manually reviewing offending pages and subsequently placing a rel=”nofollow” directive on violating links, or removing links which are no longer relevant. If you’re not sure, it may also help to run a site search for your domain (i.e. type site:example.com/blog/ “promotional” into Google search).
- Don’t exercise too much patience. While it’s very unlikely that Google will respond to your reconsideration request within a few hours, it’s also rare that they require two months to review your request. As Matt Cutts stated: “what I would do is I would actually do another reconsideration request and I would mention hey I didn’t hear back..”. As I mentioned, we waited five weeks before resubmitting the same reconsideration request, and the manual action was lifted a few days later.