When Google launched its search engine, the methods for determining the relevance of a website's content were crude at best. Content analysis was in its infancy, and the cornerstone of deciding whether a site had any pertinent value was the sheer number of links pointing to it. Enter the SEO. Through various tools and techniques, an SEO could bring a website from obscurity to the forefront by linking it to seemingly similar and relevant sites. This was not always legitimate, however, and a simpler method for building links was born: the link farm, a collection of seemingly relevant pages that lend artificial importance to a website. Alongside the farms, keyword stuffing became popular. This technique offered no real information, only circular text that explained nothing while promising everything. These two practices caught Google's attention, and thus the new direction.
Panda was Google's method of ensuring that the content a website offered was indeed in line with its proclaimed topic. Years of analyzing billions of lines of text had honed Panda's algorithms to a degree almost on par with eyes-on analysis; similar algorithms are used in colleges to expose plagiarism among students. In the months that followed, websites that had been using black-hat techniques to raise their PageRank were found wanting and were either penalized or dropped from the index completely. Google had made its point: users who query its database are going to find the results relevant and useful. That is the reason Google exists. If users cannot find pertinent answers to their questions, they will go elsewhere.
Next enter Penguin: a technology designed to ferret out irrelevant links and links that came from link farms. The method had a two-pronged approach. First, inbound links were analyzed for relevant content and weighted accordingly with respect to the linked-to site. Second, a database of known link farms was built and checked against the links tied to a website. Google gave ample warning that anyone using links from these farms would be penalized. Those who listened stayed in the index; those who did not dropped off the planet.
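The two-pronged approach above can be sketched in a few lines of Python. This is purely illustrative: the farm blacklist, the relevance scores, and the function names are hypothetical stand-ins, not Google's actual data or signals.

```python
# Toy sketch of a Penguin-style two-pronged backlink check.
# The blacklist and relevance scores are invented for illustration;
# they are not Google's real signals.

KNOWN_LINK_FARMS = {"cheaplinks.example", "farm.example"}  # hypothetical farm database

def host(url):
    """Crude host extraction, good enough for this sketch."""
    return url.split("//")[-1].split("/")[0]

def evaluate_backlinks(backlinks):
    """backlinks: list of (url, relevance) pairs, relevance in [0, 1].

    Returns (total_weight, penalized) for the linked-to site."""
    total = 0.0
    penalized = False
    for url, relevance in backlinks:
        if host(url) in KNOWN_LINK_FARMS:  # prong 2: check the farm database
            penalized = True               # a farm link triggers a penalty
            continue
        total += relevance                 # prong 1: weight links by relevance
    return total, penalized

weight, flagged = evaluate_backlinks([
    ("https://news.example/article", 0.9),
    ("https://cheaplinks.example/buy", 0.1),
])
# weight counts only the relevant link; flagged is True because of the farm link
```

The design choice mirrors the article's description: relevant links add weight, while a single match against the farm database is enough to flag the site for a penalty.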
To understand each iteration of Penguin and Panda, one must put on a developer's hat. Test data, by definition, is just that: test data. It can simulate real-world situations only to a small degree and is useful only for finding the most glaring design flaws. In programming and analysis, the weakest link is not the logic but the data set used to test the algorithm. Panda and Penguin are no different. Upon initial release, Panda was coarse at best and had some problems. Google worked with webmasters and SEOs to alleviate side effects the new algorithms had never intended. As time passed and more data was analyzed, Panda became much sharper, and it can now be depended upon to give a website's content a thorough examination.
Penguin, by contrast, is still in its initial phase of release. It remains a beta to some degree and is being refined constantly. As Matt Cutts has related:
“(Likewise), we’re still in the early stages of Penguin where the engineers are incorporating new signals and iterating to improve the algorithm. Because of that, expect that the next few Penguin updates will take longer, incorporate additional signals, and as a result will have more noticeable impact. It’s not the case that people should just expect data refreshes for Penguin quite yet.”
In other words, there may still be some kinks in Penguin, and yes, they may affect your site to some degree. Ultimately, we all want a better product for the user, and Google is delivering just that. To make a cake, you have to break a few eggs.