May 29 2013

Google Warns: You Better Adequately Disclose Paid Content

Google released a new Webmaster Help video today featuring Matt Cutts discussing Google’s policies on advertorials and native advertising. “Well, it’s advertising, but it’s often the sort of advertising that looks a little closer to editorial, but it basically means that someone gave you some money, rather than you writing about this naturally because you thought it was interesting or because you wanted to,” says Cutts. “So why do I care about this? Why are we making a video about this at all? Well, the reason is, certainly within the webspam team, we’ve seen a little bit of problems where there’s been advertorial or native advertising content or paid content, that hasn’t really been disclosed adequately, so that people realize that what they’re looking at was paid. So that’s a problem. We’ve had longstanding guidance since at least 2005 I think that says, ‘Look, if you pay for links, those links should not pass PageRank,’ and the reason is that Google, for a very long time, in fact, everywhere on the web, people have mostly treated links as editorial votes.” “Well, there’s two-fold things that you should think about,” says Cutts. “The first is on the search engine side of things, and search engine wise, you should make sure that if links are paid – that is if money changed hands in order for a link to be placed on a website – that it should not flow PageRank. In essence, it shouldn’t affect search engines’ rankings. That’s no different than the guidance we’ve had for years, and years, and years.” The video suggests using rel=”nofollow”. “Likewise, if you are doing disclosure, you need to make sure that it’s clear to people,” adds Cutts. “A good rule of thumb is that there should be clear and conspicuous disclosure. 
It shouldn’t be the case that people have to dig around, buried in small print or have to click and look around a long time to find out, ‘Oh, this content that I’m reading was actually paid.’” The video suggests using text like “Advertisement” or “Sponsored”. “So why are we talking about this now?” Cutts continues. “This isn’t a change in our search engine policy. Certainly not in the webspam team. Well, the reason is that we’ve seen some people who have not been doing it correctly. So we’ve seen, for example, in the United Kingdom, a few sites that have been taking money, and writing articles that were paid, and including keyword-rich anchor text in those articles that flowed PageRank, and then not telling anybody that those were paid articles. And that’s the sort of thing where if a regular user happened to be reading your website, and didn’t know that it was paid, they’d really be pretty frustrated and pretty angry when they found out that it was paid.” “So, we’ve taken action on this sort of thing for years and years, and we’re going to keep taking strong action,” says Cutts. “We do think it’s important to be able to figure out whether something is paid or not on the web, and it’s not just the webspam team. It’s not just search quality and web search results. The Google News team recently published on their blog, and said that if you don’t provide adequate disclosure of paid content – whether it be native advertising, advertorials – whenever there’s money changing hands, if users don’t realize that sufficiently because there’s not adequate disclosure, the Google News team mentioned that they might not only remove the paid content, but we’re willing to go up to and including removing the publication from Google News.” We covered what the Google News team had to say about it here.
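Google’s advice in the video boils down to two mechanical steps: label the content visibly (“Advertisement” or “Sponsored”) and add rel=”nofollow” to paid links so they don’t pass PageRank. As a rough illustration of how a publisher might audit a page for the second step, here is a small Python sketch using only the standard library. The function name, markup, and URLs are invented for this example; it is not a Google tool, just one way to flag paid links that are missing the attribute.

```python
# Illustrative sketch (not Google's implementation): scan an HTML snippet
# for <a> tags that lack rel="nofollow" and would therefore pass PageRank.
from html.parser import HTMLParser

class PaidLinkAuditor(HTMLParser):
    """Collects hrefs of <a> tags that do not carry rel="nofollow"."""
    def __init__(self):
        super().__init__()
        self.missing_nofollow = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attrs = dict(attrs)
        rel_tokens = (attrs.get("rel") or "").lower().split()
        if "nofollow" not in rel_tokens:
            self.missing_nofollow.append(attrs.get("href"))

def audit_paid_links(html):
    """Return hrefs of links in `html` that are missing rel="nofollow"."""
    auditor = PaidLinkAuditor()
    auditor.feed(html)
    return auditor.missing_nofollow

# Hypothetical advertorial markup: one correctly tagged link, one not.
snippet = (
    '<p>Advertisement</p>'
    '<a rel="nofollow" href="https://example.com/offer">paid link</a>'
    '<a href="https://example.com/organic">another paid link</a>'
)
print(audit_paid_links(snippet))  # flags only the link without rel="nofollow"
```

Running something like this over an advertorial’s HTML would surface paid links that still flow PageRank; whether the “Advertisement” label is clear and conspicuous enough still takes a human judgment call.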
In that big video Cutts put out a while back talking about all of the changes coming over the next several months (which included the most recent Penguin update), he also said Google would be “looking at some efforts to be a little bit stronger on our enforcement” on advertorials. A couple weeks ago, Cutts tweeted that Google had just taken action on thousands of linksellers.

May 24 2013

How Big Is The Latest Google Penguin Update?

Webmasters have been expecting a BIG Penguin update from Google for quite some time, and a couple weeks ago, Google’s Matt Cutts promised that one was on the way. Finally, on Wednesday, he announced that Google had not only started the roll-out, but completed it. While it was said to be a big one, it remains to be seen just how big it has been in terms of impacting webmasters. Have you been impacted by the latest Penguin update? Let us know in the comments. Just what did Cutts mean by “big” anyway? When discussing the update a couple weeks ago, he said it would be “larger”. When it rolled out, he announced that “about 2.3% of English-US queries are affected to the degree that a regular user might notice,” and that “the scope of Penguin varies by language, e.g. languages with more webspam will see more impact.” As far as English queries go, it would appear that the update is actually smaller. The original Penguin (first called the “Webspam” update) was said to impact about 3.1% of queries in English. So, perhaps this one is significantly larger in terms of other languages. Cutts has also been tossing around the word “deeper”. In the big “What should we expect in the next few months” video released earlier this month, Cutts said this about Penguin 2.0: “So this one is a little more comprehensive than Penguin 1.0, and we expect it to go a little bit deeper, and have a little bit more of an impact than the original version of Penguin.” Cutts talked about the update a little more in an interview with Leo Laporte on the day it rolled out, and said, “It is a leap. It’s a brand new generation of algorithms. The previous iteration of Penguin would essentially only look at the homepage of a site. The newer generation of Penguin goes much deeper. It has a really big impact in certain small areas.” We asked Cutts if he could elaborate on that part about going deeper. He said he didn’t have anything to add: @ccrum237 not much to add for the time being. 
— Matt Cutts (@mattcutts) May 23, 2013 The whole thing has caused some confusion in the SEO community. In fact, it’s driving Search Engine Roundtable’s Barry Schwartz “absolutely crazy.” Schwartz wrote a post ranting about this “misconception,” saying: The SEO community is translating “goes deeper” to mean that Penguin 1.0 only impacted the home page of a web site. That is absolutely false. Deeper has nothing to do with that. Those who were hit by Penguin 1.0 know all too well that their whole site suffered, not just their home page. What Matt meant by “deeper” is that Google is going deeper into their index, link graph and more sites will be impacted by this than the previous Penguin 1.0 update. By deeper, Matt does not mean how it impacts a specific web site architecture but rather how it impacts the web in general. He later updated the piece after realizing that Cutts said what he said in the video, adding, “Matt must mean Penguin only analyzed the links to the home page. But anyone who had a site impacted by Penguin noticed not just their home page ranking suffer. So I think that is the distinction.” Anyhow, there have still been plenty of people complaining that they were hit by the update, though we’re also hearing from a bunch of people that they saw their rankings increase. One reader says this particular update impacted his site negatively, but was not as harsh as the original Penguin. Paul T. writes: Well, in a way I like this update better than any of the others. It is true I lost about 50% of my traffic on my main site, but the keywords only dropped a spot or two, so far anyway. The reason I like it is because it is more discriminating. It doesn’t just wipe out your whole site, but it goes page by page. Some of my smaller sites were untouched. Most of my loss came from hiring people to do automated back-linking. I thought I would be safe doing this because I was really careful with anchor text diversity, but it was not to be. 
I am going to try to use social signals more to try to bring back my traffic. Another reader, Nick Stamoulis, suggests that Google could have taken data from the Link Disavow tool into consideration when putting together Penguin 2.0: I would guess that the Disavow tool was factored into Penguin 2.0. If thousands of link owners disavowed a particular domain I can’t imagine that is something Google didn’t pick up on. It’s interesting that they are offering site owners the chance to “tell” on spammy sites that Penguin 2.0 might have overlooked. Cutts has tweeted about the Penguin spam form several times. With regards to the Link Disavow tool, Google did not rule out the possibility of using it as a ranking signal when quizzed about it in the past. Back in the fall, Search Engine Land’s Danny Sullivan shared a Q&A with Matt Cutts in which he did not rule out the possibility. Sullivan asked him if “someone decides to disavow links from good sites as perhaps an attempt to send signals to Google these are bad,” is Google mining this data to better understand what bad sites are? “Right now, we’re using this data in the normal straightforward way, e.g. for reconsideration requests,” Cutts responded. “We haven’t decided whether we’ll look at this data more broadly. Even if we did, we have plenty of other ways of determining bad sites, and we have plenty of other ways of assessing that sites are actually good.” Searchmetrics released its list of the top losers from the latest Penguin update, which you can see here. It includes some porn, travel, and game sites, as well as a few big brands like Dish and Salvation Army. What is your opinion of Google’s latest Penguin update? Is it doing its job? Let us know in the comments.

May 23 2013

The New Google Penguin Update Goes Much Deeper Into Your Site

Google has been warning of a big and scary new version of the Penguin update for quite some time. When Google’s Matt Cutts released a video discussing the upcoming SEO menu earlier this month, he mentioned that Penguin 2.0 was getting closer. Now it’s here. Have you been affected by the new Penguin update? Is this update good or bad for Google results? Let us know what you think in the comments. In the aforementioned video (below), Cutts said this about the update: “We’re relatively close to deploying the next generation of Penguin. Internally we call it ‘Penguin 2.0,’ and again, Penguin is a webspam change that’s dedicated to try to find black hat webspam, and try to target and address that. So this one is a little more comprehensive than Penguin 1.0, and we expect it to go a little bit deeper, and have a little bit more of an impact than the original version of Penguin.” Even before that video, Cutts was discussing the update on Twitter. He pretty much said the same thing: it’s called Penguin 2.0, and it would be larger. Late on Wednesday, Cutts revealed that the update rolled out. He took to his personal blog to say, “We started rolling out the next generation of the Penguin webspam algorithm this afternoon (May 22, 2013), and the rollout is now complete. About 2.3% of English-US queries are affected to the degree that a regular user might notice. The change has also finished rolling out for other languages world-wide. The scope of Penguin varies by language, e.g. languages with more webspam will see more impact.” “This is the fourth Penguin-related launch Google has done, but because this is an updated algorithm (not just a data refresh), we’ve been referring to this change as Penguin 2.0 internally,” he noted. “For more information on what SEOs should expect in the coming months, see the video that we recently released.” This does not mean that this is the last we’ll see of Penguin, by any means. 
When a reader of Cutts’ blog noted that he still sees a lot of spam in results, Cutts responded, “We can adjust the impact but we wanted to start at one level and then we can modify things appropriately.” Side note: Cutts tweeted out a link to a “special spam report form” for spam that Penguin missed: Here’s a special spam report form:… Please tell us about the spammy sites that Penguin missed. — Matt Cutts (@mattcutts) May 23, 2013 So, it sounds like they’ll still be working on Penguin-ifying results more beyond the update that has already rolled out. I presume this will come in the form of data refreshes, much like the last two versions of Penguin we’ve seen. Penguin is all about webspam, and Cutts discussed other webspam initiatives in that video. Specifically, he talked about denying value upstream for link spammers. This is not part of the Penguin update that just rolled out, so expect more there too. “That comes later,” said Cutts. Another reader suggested in the comments of Cutts’ blog post that people are finding it riskier to spend the time building authoritative sites that Google supposedly likes, because there’s still a chance that an algo update will (even if unintentionally) knock it down for one reason or another. He makes the case that it’s easier to build a bunch of “throwaway affiliate spam sites” that could easily be replaced if Google shuts them down. Cutts’ response to that was, “We have some things coming later this summer that should help with the type of sites you mention, so I think you made the right choice to work on building authority.” Cutts briefly discussed the new Penguin update in a conversation with Leo Laporte on Wednesday right before it was getting ready to roll out. In that, he said, “It is a leap. It’s a brand new generation of algorithms. The previous iteration of Penguin would essentially only look at the homepage of a site. The newer generation of Penguin goes much deeper. 
It has a really big impact in certain small areas.” It will be interesting to see how long Google waits for a data refresh on Penguin again. Unlike Panda, which saw many refreshes before ultimately transforming into a rolling update, Penguin, since originally launching in April 2012, only saw two refreshes before this new update (May and October 2012). If this one is even bigger, should we expect refreshes even less often? The less often they happen, the harder it is to recover, some webmasters have discovered. I’m guessing a lot of those impacted negatively by this new update will be looking at starting over with new sites. It remains to be seen just how big the impact of this update really is on webmasters. If you’ve been affected (either positively or negatively) let us know in the comments.

May 20 2013

Google: No Search Engine Is Completely Objective

Today’s Google Webmaster Help video gets a little philosophical. Matt Cutts takes on the question: How can Google be confident with their SERPs, when relying on inherently subjective signals that influence which sites display (i.e. using human ‘quality raters’ to evaluate entire domains without the context of the search query itself)? Cutts notes that the quality raters do in fact see the search itself, so they’re not seeing the results out of context. On the philosophy that there are subjective signals, Cutts says, “I would agree with that. I think people who think that search engines are completely objective ignore the fact that every search engine has its own philosophy. Every search engine has its own set of algorithms, and those algorithms encode the ranking philosophy of that search engine, and some algorithms will veer more towards diversity. Some might show Wikipedia more. Every search engine is going to have different ideas about what the ideal set of search results is. And there is no scientifically provable best way to rank websites, so it’s always going to be a little bit subjective.” “I think on the bright side, what we do is we try to listen to outside feedback,” he continues. “We have people like Amit Singhal who have been ranking and dealing with information retrieval for longer than a lot of SEOs have been alive (if you’re a young SEO, you know). He got his PhD in information retrieval, and a lot of us have been working on it for a long time, and so I think we have a relatively fine-tuned sense of when people will get angry, [or] of when they’ll be unhappy.” “For example, with Panda, we were actually working on trying to spot low-quality content – the sort of thing that’s in between the quality team and the webspam team, and the sort of low quality that’s not quite spam, but almost spam,” he says. “We were working on that for months, and thinking about that for months before we started to see the larger public get a little bit angry about that. 
So I think we do have to say to ourselves, like any engineering organization, it’s possible for people to be wrong. It’s possible for us to show not enough domain diversity or too much domain diversity. That’s why it’s important that we listen to what people say from outside Google, and hear that feedback as well.” On the Panda front, Cutts did reveal recently that the algorithm might be a little more forgiving, going forward, than it has been in the past. So there’s that.

May 15 2013

Google Just Took Out Thousands Of Linksellers

Earlier this week, Google’s Matt Cutts ran down a bunch of new stuff Google’s web spam team is working on. Cutts tweeted an extension of that today, noting that Google will continue to tackle link networks, and that in fact, they just took action on “several thousand linksellers” today. “In addition, it’s safe to assume webspam will continue to tackle link networks that violate our guidelines as well,” Cutts tweeted, adding, “In fact, we took action on several thousand linksellers in a paid-link-that-passes-PageRank network earlier today.” Let the good times roll. Webmasters continue to anxiously await an upcoming, bigger version of the Penguin update, and Cutts also indicated that Panda would be easing up a bit. As part of Cutts’ big video, he said Google would continue to be vigilant when it comes to all types of link spam. Already, the webspam team is making good on its word.

May 13 2013

Matt Cutts Talks About Penguin, Panda And A Bunch Of Changes Google Has In The Works

Sporting a Mozilla Firefox shirt, Google’s Matt Cutts provided what might be his most informative Webmaster Help video to date. It’s essentially a rundown of what Google’s webspam team has planned for the coming months, and what it means for webmasters. It involves the Penguin update, the Panda update, advertorials, hacked sites, link spam, and a lot more. Cutts is careful to note that any of this information is subject to change, and should be taken with a grain of salt, but this is pretty much the kind of stuff they have planned at the moment. We already knew the Penguin update was on the way, and he touches on that, but also delves into a ton of other stuff. Following are some key quotes from the video. Penguin “We’re relatively close to deploying the next generation of Penguin. Internally we call it ‘Penguin 2.0,’ and again, Penguin is a webspam change that’s dedicated to try to find black hat webspam, and try to target and address that. So this one is a little more comprehensive than Penguin 1.0, and we expect it to go a little bit deeper, and have a little bit more of an impact than the original version of Penguin.” Advertorials “We’ve also been looking at advertorials – that is sort of native advertising – and those sorts of things that violate our quality guidelines. So, again, if someone pays for coverage, or pays for an ad or something like that, those ads should not flow PageRank. We’ve seen a few sites in the U.S. 
and around the world that take money and do link to websites, and pass PageRank, so we’ll be looking at some efforts to be a little bit stronger on our enforcement on advertorials that violate our quality guidelines.” “There’s nothing wrong inherently with advertorials or native advertising, but they should not flow PageRank, and there should be clear and conspicuous disclosure, so that users realize that something is paid – not organic or editorial.” Payday Loans and Porn “We get a lot of great feedback from outside of Google, so, for example, there were some people complaining about searches like ‘payday loans.’ So we have two different changes that try to tackle those kinds of queries in a couple different ways. We can’t get into too much detail about exactly how they work, but I’m kind of excited that we’re going from having just general queries be a little more clean to going to some of these areas that have traditionally been a little more spammy, including for example, some more pornographic queries, and some of these changes might have a little bit more of an impact on those kinds of areas that are a little more contested by various spammers and that sort of thing.” Denying Value To Link Spam “We’re also looking at some ways to go upstream to deny the value to link spammers – some people who spam links in various ways. We’ve got some nice ideas on ways that that becomes less effective, and so we expect that that will roll out over the next few months as well.” “In fact, we’re working on a completely different system that does more sophisticated link analysis. We’re still in the early days for that, but it’s pretty exciting. We’ve got some data now that we’re ready to start munching, and see how good it looks. We’ll see whether that bears fruit or not.” Hacked Sites “We also continue to work on hacked sites in a couple different ways. Number one: trying to detect them better. 
We hope in the next few months to roll out a next-generation site detection that is even more comprehensive, and also trying to communicate better to webmasters, because sometimes they see confusion between hacked sites and sites that serve up malware, and ideally, you’d have a one-stop shop where once someone realizes that they’ve been hacked, they can go to Webmaster Tools, and have some single spot where they could go and have a lot more info to sort of point them in the right way to hopefully clean up those hacked sites.” Sites And Their Authority “We have also been working on a lot of ways to help regular webmasters. We’re doing a better job of detecting when someone is more of an authority on a specific space. You know, it could be medical. It could be travel. Whatever. And try to make sure that those rank a little more highly if you’re some sort of authority or a site, according to the algorithms, we think might be a little more appropriate for users.” Updates To Panda “We’ve also been looking at Panda, and seeing if we can find some additional signals (and we think we’ve got some) to help refine things for the sites that are kind of in the border zone – in the gray area a little bit. And so if we can soften the effect a little bit for those sites that we believe have some additional signals of quality, then that will help sites that have previously been affected (to some degree) by Panda.” Clusters Of Results From The Same Site “We’ve also heard a lot of feedback from people about – if I go down three pages deep, I’ll see a cluster of several results all from one domain, and we’ve actually made things better in terms of – you would be less likely to see that on the first page, but more likely to see that on the following pages. 
And we’re looking at a change, which might deploy, which would basically say that once you’ve seen a cluster of results from one site, then you’d be less likely to see more results from that site as you go deeper into the next pages of Google search results.” “We’re going to keep trying to figure out how we can give more information to webmasters…we’re also going to be looking for ways that we can provide more concrete details, [and] more example URLs that webmasters can use to figure out where to go to diagnose their site.” I guess this all makes up for the lack of “Search Quality Highlights” from Google in recent months. Kind of.
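To make the “cluster of results” change concrete, here is a toy Python sketch of host clustering: after a fixed number of results from one host, further results from that host are demoted toward the end of the ranking. This is purely illustrative; Google has not published how its clustering actually works, and the function name, threshold, and URLs below are invented for the example.

```python
# Toy illustration (not Google's actual algorithm) of host clustering:
# each host keeps at most `max_per_host` results in their original
# positions; the overflow is appended afterward, order preserved.
from collections import defaultdict
from urllib.parse import urlparse

def demote_host_clusters(results, max_per_host=2):
    """results: a ranked list of URLs. Returns a re-ranked list."""
    seen = defaultdict(int)   # how many results we've kept per host
    kept, overflow = [], []
    for url in results:
        host = urlparse(url).netloc
        seen[host] += 1
        (kept if seen[host] <= max_per_host else overflow).append(url)
    return kept + overflow

# Hypothetical ranking dominated by one host.
ranked = [
    "https://a.com/1", "https://a.com/2", "https://a.com/3",
    "https://b.com/1", "https://a.com/4", "https://c.com/1",
]
print(demote_host_clusters(ranked))
# a.com keeps its first two slots; its other pages drop below b.com and c.com
```

The design choice mirrored here is that clustering is applied per host rather than per page, which is why a site hit by this kind of change would see its third and later results slide down rather than disappear.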

May 1 2013

Guess Which SEO ‘Misconception’ Matt Cutts Puts To Rest

In Google’s latest Webmaster Help video, Matt Cutts is asked about a common SEO misconception that he wishes to put to rest. The answer: Google is not doing everything you read about in patents. Cutts says, “There’s a sort of persistent misconception that people often have, which is that just because a patent issues…that has somebody’s name on it, or someone who works at search quality, or someone who works at Google, that doesn’t necessarily mean that we are using that patent at that moment.” He continues, “Sometimes you’ll see speculation, ‘Oh, Google had a patent where they mentioned using the length of time that the domain was registered.’ That doesn’t mean that we’re necessarily doing that. It just means that, you know, that mechanism is patented.” Cutts recalls, “Somebody else at Google had gotten a patent on the idea (or the mechanism, not just the idea, the actual implementation) by which you could look at how people had changed their webpage after an update, and basically say, ‘Oh, these are people who are responding to Google, or they are dynamically SEOing their stuff,’ and so there were a lot of publishers who were like, ‘Ugh, I’m just gonna throw up my hands. Why bother at all if Google’s just gonna keep an eye?’ and you know, ‘If we change, and Google’s just using that and monitoring that, and changing their ranking in response,’ and it’s the sort of thing where just because that patent comes out, doesn’t mean that Google’s currently using that technology.” “So, patents are a lot of interesting ideas,” he adds. “You can see a lot of stuff mentioned in them, but don’t take it as an automatic golden truth that we’re doing any particular thing that is mentioned in a patent.” It is true that patents provide a lot of insight into the kinds of ideas that Google is thinking about, and often we can only really speculate about certain things that it is actually implementing.