A Grumpy Panda (sourced from guzer.com)
Ever since it was released way back in February 2011, Panda has been the bane of my life, and let me tell you why… It just so happened that I had launched a site aimed at the loans market a few months prior, and it was performing pretty well in the SERPs; by this I mean it was on page 1 for the keywords I had hoped for. As soon as early March rolled round, however: boom, it disappeared from all results overnight. So, let me tell you the story of how I got it back in the rankings the first time round and, when it suffered the same fate again last month, how I got it back within a month.
I won’t mention the site in question, but suffice to say it’s a site aimed at a niche financial market in the UK offering a particular kind of loan product, so although its target keywords aren’t extremely competitive, they are still reasonably competitive. When I launched it in October 2010, it soon got to page one even though the site wasn’t of the highest quality and the SEO was relatively poor. The problem was that, at the time, I didn’t know the best way to improve the site and its SEO, and to make things worse, I didn’t realise that some of the things I was doing would actually come back to haunt me in the coming months. Some of the practices I employed were building links using article sites (I’m not proud of this), leaving comments on blogs with a link back and creating a few thin pages which were poorly inter-linked. Before Panda, from a rankings perspective this didn’t seem to matter (and I didn’t really know any better); in fact it seemed to help. But when the update was rolled out (to an unsuspecting me), it gave me a bit of a wake-up call to say the least!
What I did the First Time Around
When Panda launched, it was all a bit new and advice was a little thin on the ground about how to “break free” (BTW, check out the Ultimate Guide To Panda by Michael Cropper for a pretty comprehensive guide). Using resources such as the SEOmoz forums helped me to figure out a way forward, but unfortunately the first time round it took a little while for me to escape the clutches of the bear (this I blame mostly on my lack of knowledge at the time).
Using the best of my brain’s recall capabilities, here’s what I did.
- Removed some thin content. There were essentially a lot of pages that said things very similar to other pages. Although the pages were there for different reasons, they weren’t providing much value for visitors, and from an algorithm perspective I figured they could only be a bad thing.
- Re-arranged the site structure. The site’s structure was a bit terrible to begin with; for example, I had a section of ‘customer guides’ which could be navigated to using a footer link (this was bad enough). The problem was that the index for this ‘guides’ section was just an HTML file in the site’s root directory (/helpful_guides.html), while each guide was held in a subdirectory (/guides/) and linked to from /helpful_guides.html. This made no sense to me, so I simply moved /helpful_guides.html to be the index page of /guides/. That made much more logical sense and flowed better.
- Created a Blog. As blogs seem fairly significant from an SEO point of view (as Mr. Cutts keeps telling us), I created one in a /blog/ subdirectory. To begin with, the blog had no real direction or (from a visitor perspective) reason to be there, but nevertheless I filled it with loan-related content on a regular basis, with the aim of grabbing the attention of at least some passers-by. Resources such as this great post on creating content by Ian Lurie and Neil Patel’s post on what not to do helped me improve this over time.
- Changed the Layout of Similar Pages. There were a lot of pages which, although they had different content, were laid out EXACTLY the same, just with different text. I didn’t like that, so for certain pages, such as news pages, I removed the standard side boxes and replaced them with Twitter and Facebook widgets, for example. This seemed to fit the news part of the site better than boxes trying to get visitors to ‘complete an application online’.
- Obtained Links on Relevant Sites. A lot of the site’s existing links didn’t carry much weight, so I focused on building new high-quality links from relevant financial sites whilst getting rubbish links removed where possible. Using tools such as BuzzStream, I found a wealth of personal finance blogs, submitted guest posts and forged relationships. Take a look at James Agate’s post on earning guest posts.
- Established the Brand. Other than the site and a few business listings, there was little on the web to verify the company as an established brand. I went about setting up Facebook pages, Twitter accounts and everything a brand should have in this day and age. I also obtained some links that focused purely on the brand for anchor text. Let’s be honest: whenever you link to a company, how many times do you use the brand or site name rather than keyword-specific anchor text? This helped to naturalise the link profile.
- Removed Excessive Amounts of Keyword Links. The home page used to be littered with keyword-rich links which pointed to… (face palm) the home page! What good is that to a user? I removed these and created deep links to more relevant content in the site. I also linked this new content to other relevant content to create a much easier-to-navigate and more user-friendly site.
- Tidied up the Code. There were numerous problems with page coding, ranging from validity issues to missing image attributes (e.g. alt and width/height) and a lack of linking consistency (e.g. some homepage links were pointing to /index.html rather than /, causing duplication issues).
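To give you an idea of how the last two checks in that list could be automated (this is just a rough sketch I’ve put together for illustration, not the tooling I actually used at the time, and the sample markup is made up), a few lines of Python with the standard library’s HTML parser can flag both homepage links pointing at /index.html and images missing alt text:

```python
from html.parser import HTMLParser

class AuditParser(HTMLParser):
    """Flags two issues: internal links pointing at /index.html
    instead of /, and images with no alt attribute."""

    def __init__(self):
        super().__init__()
        self.duplicate_home_links = []
        self.images_missing_alt = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # A link to /index.html duplicates the homepage URL
        if tag == "a" and attrs.get("href", "").endswith("/index.html"):
            self.duplicate_home_links.append(attrs["href"])
        # Images should always carry an alt attribute
        if tag == "img" and "alt" not in attrs:
            self.images_missing_alt.append(attrs.get("src", "(no src)"))

# Hypothetical page fragment with one of each problem
page = """
<a href="/index.html">Home</a>
<a href="/">Home</a>
<img src="/img/logo.png">
<img src="/img/panda.png" alt="A grumpy panda">
"""

parser = AuditParser()
parser.feed(page)
print(parser.duplicate_home_links)   # links that should point at /
print(parser.images_missing_alt)     # images needing alt text
```

In practice you’d run something like this over every page of the site (or just use a crawler such as Xenu or Screaming Frog), but the principle is the same: make the inconsistencies visible so you can fix them one by one.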
Now, I’m sure I made many other changes as well, but these were the main ones, and although doing all of this took a while (especially as I was reasonably new to a lot of this), the site had popped back to page one by late 2011. I had appeased the Panda as well as improving the site, which made it better for users and attracted more natural links!… or so I thought.
The Panda Strikes Back
I know, pretty seamless, right?
By now the site was (from a user and crawl perspective) far better than when it began. The blog had a purpose and a direction: it was now a “Money Saving Blog” aimed at helping visitors understand financial matters better and save money. The site had some infographics on financial matters. It was laid out in a much more logical manner and the content was more useful and customer-orientated. So I relaxed a little… Bad idea!
That time of year came around again… February. One day, whilst checking the site rankings, I noticed it had disappeared once again. It turns out this coincided with a Panda update (and several other algorithmic updates). I almost cried. But rather than getting down about it, I set out to sort the problems once again. Here’s what I did this time:
- Sorted The Navigation Out. I was still unhappy with the navigation of the site. I felt that although there was good content on there, it was hard for people to access; a lot of it could only be navigated to using footer links (not ideal). To get round this, I removed the footer links completely and replaced the navigation bar of single links with one with drop-downs. This made it much easier for visitors to navigate and put the content right at the top of the page as opposed to stuck at the bottom. I also created more links (where they were relevant) in page content, purposely creating them with a view to improving the user experience rather than using keyword-rich links. Finally, I consolidated content spread over several pages; an example being the legal sections, which were previously split between disclaimer, policy and legal pages but are now condensed into one page. Richard Baxter talks about site navigation here.
Old nav bar with single page links
New nav bar with menus
- Cleared out The Blog. Although the blog now had a direction and purpose, it was getting messy and hard to navigate. On deeper examination, I realised it was still filled with lots of frankly crap posts, which I removed immediately. The tagging and categories of the blog were also all over the place, so I cut the categories down to 4 main categories with 12 sub-categories (possibly still more than I’d like) and reduced the number of tags in use from several hundred to well under a hundred. There’s a great video featuring Joost de Valk on improving your blog’s SEO at http://seobraintrust.com/wordpress-seo-with-joost-de-valk-2/ which helped to point me in the right direction.
- Created Some Better Content. Although I thought the blog content I was creating at the time was good, I had a second look and realised that it just wasn’t. I therefore decided to write some good posts that were more personal and detailed. I also tidied up the old posts by changing the odd bit of text, writing improved meta descriptions, adding ALT attributes to any images which needed them and setting more appropriate featured images.
- Removed Even More Thin Content. I had removed a lot of thin content the first time round, but looking back, I decided I hadn’t been thorough enough. This time I went through the content from a visitor’s perspective and purged anything I didn’t find useful or interesting.
- Installed HTTPS Functionality. I haven’t spoken to many SEOs who think that having a secure server helps to boost rankings, but personally, for some sites, I think otherwise. Matt Cutts has always said Google is looking for sites that “you would be happy to give your credit card details to”. I don’t know about you, but I certainly wouldn’t give my credit card details to a page that didn’t have HTTPS. Although my site doesn’t capture any details that sensitive, I think that having the functionality there (especially as it’s a finance site) can only be a positive trust indicator.
- Put the Company Address on Every Page Footer. Again, I think this is another trust indicator. A company that doesn’t display its address, or makes it hard to find, is, I think, more likely to be up to something. Having an address on every page helps the user to trust the site, and it can help you in local search too.
- Created Internal Links to the Blog. Although there were links from blog articles back to the commercial site, there weren’t many going from the commercial site to the blog, so I inserted some relevant links from the commercial site back to the blog. After all, what’s the point in having a blog if no one visits it?
- Removed Duplicate Pages. There were a couple of pages that dealt with the same subject. Instead of having several pages spread out, I consolidated them into one comprehensive page and 301’ed the old URLs.
- Created More Branded Links. I’ve heard from numerous well-established SEOs that branded links are a good way to get out of a penalty (Patrick Altoft spoke about this at MozCon 2010, although I’m not sure if you can still get the video). So I went about building relationships with more finance-related sites so that they would link back to me (usually with the company name or site name). I also promoted infographics and content from the site, which were again linked back to with the company or site name.
- Added a Custom 404 Page. I know it’s not a big thing, but how often do you see a quality website that doesn’t have a custom 404 page?
- Removed Article Directory Links. Unfortunately, there were still lots of inbound links from places like article directories, and it felt wrong to hold on to them for the sake of it. I swiftly removed as many articles as I could find from various sites, leaving only the ones which had been viewed a significant number of times (for me, that was 50+). After reading this rather useful post by Kristi Hines on the effect Panda has had on article site traffic, it seemed silly not to.
An upsetting sight in Webmaster Tools showing the number of links from each article site and how many pages they point to (tut tut)
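If you fancy automating the blog tidy-up described above, here’s a rough Python sketch of how you might count tag usage across posts and flag rarely used tags for the chop. To be clear, this is my own illustration rather than anything I actually ran; the post data and the threshold are made up, and in a real WordPress install you’d pull the tag counts from the database or the tags admin screen instead:

```python
from collections import Counter

def prune_tags(posts, min_uses=3):
    """Split tags into keepers (used on at least `min_uses` posts)
    and candidates for removal or merging into a broader tag."""
    counts = Counter(tag for post in posts for tag in post["tags"])
    keep = {tag for tag, n in counts.items() if n >= min_uses}
    drop = {tag: n for tag, n in counts.items() if n < min_uses}
    return keep, drop

# Hypothetical sample of blog posts and their tags
posts = [
    {"title": "Cut your heating bill", "tags": ["saving", "energy"]},
    {"title": "Loan fees explained",   "tags": ["loans", "fees"]},
    {"title": "Budgeting basics",      "tags": ["saving", "budgeting"]},
    {"title": "APR vs flat rate",      "tags": ["loans", "saving"]},
]

keep, drop = prune_tags(posts, min_uses=2)
print(sorted(keep))   # tags worth keeping
print(sorted(drop))   # one-off tags to remove or merge
```

The same counting trick works for categories; the point is simply to make the long tail of one-off tags visible so you can merge or delete them rather than guessing.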
Although this isn’t a comprehensive list of every change I made, it does cover what I consider to be the main reasons why the site managed to escape from Panda again, this time, thankfully, in less than a month.
Keeping The Bear at Bay
Having learnt from this experience, I realise that the site now needs to improve constantly and offer the highest-quality content and user experience. Rather than seeing Panda as a thorn in my side, I now appreciate what it is trying to do and how it is helping to clean up the web. I know that because of it, my site has improved dramatically (and will continue to in a big way), which not only gives me improved rankings but provides a much better user experience and adds a lot more value to the web. Suffice to say, I am now completely in the white-hat camp.
It’s clear that Google has many more tricks to come in the near future, and it seems we’re in an ever-accelerating period where ranking in search results is becoming increasingly difficult unless your site is up to scratch. I hope that by sharing this post I might help some of you to improve things you may not have thought about, with the hope of evading or escaping a Panda problem (if not, sorry! There’s also a good post on TechCrunch by Matt Moog about the changes Viewpoints.com made to appease the Panda); for others, I hope you simply found it an interesting read.
If you have any questions or comments (or any other Panda-beating suggestions), please leave them below and I’ll get back to you as soon as I can (after all, I’ve got content to produce!).