Sep 30 2013

Google: Don’t Use Nofollow On Internal Links

In the latest Webmaster Help video, Google’s Matt Cutts discusses the use of rel=”nofollow” on internal links, addressing the following user-submitted question: Does it make sense to use rel=”nofollow” for internal links? Like, for example, to link to your login page? Does it really make a difference?

“Okay, so let me give you the rules of thumb,” he begins. “I’ve talked about this a little bit in the past, but it’s worth mentioning again. rel=’nofollow’ means the PageRank won’t flow through that link as far as discovering the link, PageRank computation [and] all that sort of stuff. So, for internal links – links within your site – I would try to leave the nofollow off, so if it’s a link from one page on your site to another page on your site, you want that PageRank to flow. You want Googlebot to be able to find that page. So almost every link within your site – that is a link going from one page on your site to another page on your site – I would make sure that the PageRank does flow, which means leaving off the nofollow link.”

“Now, this question goes a little bit deeper, and it’s a little more nuanced,” Cutts continues. “It’s talking about login pages. It doesn’t hurt if you want to put a nofollow pointing to a login page or to a page that you think is really useless like a terms and conditions page or something like that, but in general, it doesn’t hurt for Googlebot to crawl that page because it’s not like we’re gonna submit a credit card to make an order or try to log in or something like that.”

He goes on to note that you would probably want a nofollow on links pointing to other sites, like in cases where people abuse comment systems. The general rule for internal linking, however, is to go ahead and let the PageRank flow, and let Googlebot learn all about your site. Even in cases where you don’t want Google to crawl the page, you might as well just use noindex, he says. He also suggests that login pages can still be useful for some searchers.

Image: Google
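To make the advice concrete, here is a minimal HTML sketch of what Cutts describes, assuming a hypothetical /login.html page: internal links are left as plain links so PageRank can flow, and the login page opts out of the index with a robots meta tag rather than relying on nofollow.

    <!-- On an ordinary page: plain internal links, no nofollow, so PageRank can flow -->
    <a href="/products.html">Our products</a>
    <a href="/login.html">Log in</a>

    <!-- In the <head> of /login.html: the page stays crawlable but is kept out of the index -->
    <meta name="robots" content="noindex">

Whether you bother with the noindex at all is optional; per the video, simply letting Googlebot crawl the login page does no harm.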

Sep 25 2013

Don’t Worry About Google Penalties From Invalid HTML (At Least for Right Now)

Ever wonder how the quality of your HTML is affecting your rankings in Google? Well, at least for the time being, it’s not having any effect at all, regardless of how clean it is. Google’s Matt Cutts said as much in a new Webmaster Help video. Cutts was answering the following submitted question: Does the crawler really care about valid HTML? Validating google.com gives me 23 errors and 4 warnings.

“There are plenty of reasons to write valid HTML, and to pay attention to your HTML, and to make sure that it’s really clean and that it validates,” says Cutts. “It makes it more maintainable. It makes it easier whenever you want to upgrade. It makes it much better if you want to hand that code off to somebody else. There’s just a lot of good reasons to do it. At the same time, Google has to work with the web we have, not the web that we want to have. And the web that we have has a lot of syntax errors – a lot of invalid HTML, and so we have to build the crawler to compensate for that and to deal with all the errors and weird syntax that people sometimes mistakenly write in a broken way onto the web.”

“So Google does not penalize you if you have invalid HTML because there would be a huge number of webpages like that,” he says. “And some people know the rules and then decide to make things a little bit faster or to tweak things here or there, and so their pages don’t validate, and there are enough pages that don’t validate that we said, ‘Okay, this would actually hurt search quality,’ if we said, ‘Only the pages that validate are allowed to rank or rank those a little bit higher.’ First and foremost, we have to look at the quality of the information, and whether users are getting the most relevant information they need, rather than whether someone has done a very good job of making the cleanest website they can.”

“Now, I wouldn’t be surprised if they correlate relatively well,” he adds. “You know, maybe it’s a signal we’ll consider in the future, but at least for right now, do it because it’s good for maintenance. It’s easier for you if you want to change the site in the future. Don’t just do it because you think it will give you higher search rankings.”

Or maybe you should do it anyway because Google might decide to use it as a signal in the future, and then you’ll have your bases covered.

Image: Google
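For context, the “invalid HTML” at issue is usually small stuff the crawler simply tolerates. A hypothetical example of markup the W3C validator would flag, alongside a version that validates (the content itself is made up):

    <!-- Flagged by the validator: missing alt text, duplicate id, stray closing tag -->
    <img src="chart.png" id="hero">
    <p id="hero">Quarterly traffic is up.</p>
    </section>

    <!-- A valid equivalent -->
    <img src="chart.png" id="hero" alt="Quarterly traffic chart">
    <p id="intro">Quarterly traffic is up.</p>

Browsers and Googlebot handle both versions the same way; the second is just easier to maintain and hand off, which is exactly the case Cutts makes.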

Sep 24 2013

Google Reportedly Kills Link Network Ghost Rank 2.0

Google has reportedly penalized link network Ghost Rank 2.0, and sites with links from it. If you were using it, you probably should have expected this to happen sooner or later.

Google’s Matt Cutts appeared to have hinted at this last month, when he tweeted: “Thinking of ghost-related puns for a spam network. ‘They try to look super natural, but using them will dampen your spirits.’” — Matt Cutts (@mattcutts) August 29, 2013

Now, forum watcher Barry Schwartz writes, “I am pretty confident, 99% confident, based on the data I see in the forums and some sources I have that want to remain anonymous, that Ghost Rank 2.0 was hit hard by Google. It seems that at least one of the underground and under the radar networks was severely hurt by Google and many of the sites using them to rank well in Google are now penalized.”

Google shutting down link networks with the sole purpose of gaming Google rankings is nothing new. Google’s web spam team is constantly working to combat this type of thing. It should come as no surprise when they take such action. This is not a sustainable way to build links for SEO value.

The Ghost Rank 2.0 site does its best to convince you that it is above Google penalties, and claims to sell high-PR links. “We have signed up to 35 different Russian exchange networks,” the site explains. “Put all these domains available into one pool and ran them through our custom made algo and filters to find the strongest, most beneficial links. We don’t just look at PR. It’s a lot more complex than that.”

On how safe you would be using this system, it says, “Well, let’s just put it this way. It’s about as diversified as you are going to get. We aren’t relying on one network for links. One gets hit and you still have 30+ other networks to keep you going strong.”

You’re taking your site’s destiny into your own hands when you use this approach. Who do you believe more: a link network that promises to get you higher rankings, or Google, which vows to penalize link networks that promise to get you higher rankings?

Image: GhostRank.net

Sep 23 2013

Matt Cutts Talks Duplicate Content Once Again

Google’s Matt Cutts has a new video out about duplicate content, a subject he has discussed many times in the past. If you have a site that you use to sell a product that other sites also sell, and are concerned that pages listing the “ingredients” of said product will be seen as duplicate content, this one’s for you.

Cutts takes on the following submitted question: What can e-commerce sites do that sell products which have an ingredients list exactly like other e-commerce sites selling the same product to avoid Google seeing it as duplicate content?

Cutts begins, “Let’s consider an ingredients list, which is like food, and you’re listing the ingredients in that food and ingredients like, okay, it’s a product that a lot of affiliates have an affiliate feed for, and you’re just going to display that. If you’re listing something that’s vital, so you’ve got ingredients in food or something like that – not specifications that are 18 pages long, but short specifications – that probably wouldn’t get you into too much of an issue. However, if you just have an affiliate feed, and you have the exact same paragraph or two or three of text that everybody else on the web has, that probably would be more problematic.”

He continues, “So what’s the difference between them? Well, hopefully an ingredients list, as you’re describing it as far as the number of components or something, is probably relatively small – hopefully you’ve got a different page from all the other affiliates in the world, and hopefully you have some original content – something that distinguishes you from the fly-by-night sites that just say, ‘Okay, here’s a product. I got the feed and I’m gonna put these two paragraphs of text that everybody else has.’ If that’s the only value add you have, then you should ask yourself, ‘Why should my site rank higher than all these hundreds of other sites when they have the exact same content as well?’”

“So if some small sub-component of your pages has some essential information that then appears in multiple places, that’s not nearly so bad,” Cutts adds. “If the vast majority or all of your content is the same content that appears everywhere else, and there’s nothing else to really distinguish it or to add value, that’s something I would try to avoid if you can.”

So, pretty much the same thing you’ve heard before. Got it yet? Find other things Cutts has said about duplicate content in the past here.

Sep 16 2013

Google: No Duplicate Content Issues With IPv4, IPv6

Google released a new Webmaster Help video today discussing IPv4 and IPv6 with regards to possible duplicate content issues. To make a long story short, there are none.

Google’s Matt Cutts responded to the following user-submitted question: As we are now closer than ever to switching to IPv6, could you please share some info on how Google will evaluate websites. One website being in IPv4, exactly the same one in IPv6 – isn’t it considered duplicate content?

“No, it won’t be considered duplicate content, so IPv4 is an IP address that’s specified with four octets,” says Cutts. “IPv6 is specified with six identifiers like that, and you’re basically just serving up the same content on IPv4 and IPv6. Don’t worry about being tagged with duplicate content.”

“It’s the similar sort of question to having content something something dot PL or something something dot com,” he continues. “You know, spammers are very rarely the sorts of people who actually buy multiple domains on different country level domains, and try to have that sort of experience. Normally when you have a site on multiple country domains, we don’t consider that duplicate content. That’s never an issue – very rarely an issue for our rankings, and having the same thing on IPv4 and IPv6 should be totally fine as well.”

More on IPv6 here.

Image: Google
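For context, the scenario in the question usually just means dual-stack DNS: the same hostname resolves to both an IPv4 and an IPv6 address and serves identical content over both. A hypothetical zone-file sketch (the domain and addresses are placeholders from the documentation ranges):

    ; Same site, reachable over both protocols
    example.com.   IN  A      203.0.113.10
    example.com.   IN  AAAA   2001:db8::10

Per Cutts, Google treats this as one site rather than two copies, so no special duplicate content handling is needed.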

Sep 11 2013

Here’s A New Panda Video From Matt Cutts

Google has released a new Webmaster Help video about the Panda update. Matt Cutts responds to a user-submitted question asking how she will know whether her site is hit by Panda now that Google has integrated it into its normal indexing process, and, if her site was already hit, how she will know if she has recovered.

Cutts begins, “I think it’s a fair question because if the site was already hit, how will she know if she has recovered from Panda? So, Panda is a change that we rolled out, at this point, a couple years ago, targeted towards lower quality content. It used to be that roughly every month or so we would have a new update, where you’d say, okay, there’s something new – there’s a launch. We’ve got new data. Let’s refresh the data. It had gotten to the point where Panda – the changes were getting smaller, they were more incremental, we had pretty good signals, we had pretty much gotten the low-hanging wins, so there weren’t a lot of really big changes going on with the latest Panda changes. And we said, rather than have it be a discrete data push that is something that happens every month or so at its own time, where we refresh the data, let’s just go ahead and integrate it into indexing.”

“So at this point, we think that Panda is affecting a small enough number of webmasters on the edge that we said, ‘Let’s go ahead and integrate it into our main process for indexing,’” he continues. “We did put out a blog post, which I would recommend, penned by Amit Singhal, that talks about the sorts of signals that we look at whenever we’re trying to assess quality within Panda, and I think we’ve done some videos about that in the past, so I won’t rehash it, but basically we’re looking for high-quality content. And so if you think you might be affected by Panda, the overriding kind of goal is to try to make sure that you have high-quality content – the sort of content that people really enjoy, that’s compelling – the sort of thing that they’ll love to read that you might see in a magazine or in a book, and that people would refer back to or send friends to – those sorts of things.”

You can read more about that Singhal blog post here.

“That would be the overriding goal, and since Panda is now integrated with indexing, that remains the goal of the entire indexing system,” says Cutts. “So, if you’re not ranking as highly as you were in the past, overall, it’s always a good idea to think about, ‘Okay, can I look at the quality of the content on my site? Is there stuff that’s derivative or scraped or duplicate or just not as useful, or can I come up with something original that people will really enjoy?’ Those kinds of things tend to be a little more likely to rank higher in our rankings.”

See all of our past Panda coverage here to learn more.

Image: Google (YouTube)

Sep 9 2013

Matt Cutts On When Nofollow Links Can Still Get You A Manual Penalty

Today, we get an interesting Webmaster Help video from Google and Matt Cutts discussing nofollow links, and whether or not using them can impact your site’s rankings. The question Cutts responds to comes from somebody going by the name Tubby Timmy: I’m building links, not for SEO but to try and generate direct traffic. If these links are no-follow, am I safe from getting any Google penalties? Asked another way, can no-follow links hurt my site?

Cutts begins, “No, typically nofollow links cannot hurt your site, so upfront, very quick answer on that point. That said, let me just mention one weird corner case, which is if you are, like, leaving comments on every blog in the world, even if those links might be nofollow, if you are doing it so much that people notice you, and they’re really annoyed by you, and people spam report about you, we might take some manual spam action on you, for example.”

“I remember for a long time on TechCrunch, anytime that people showed up, there was this guy, anon.tc, who would show up and make some nonsensical comment, and it was clear that he was just trying to piggyback on the traffic from people reading the article to whatever he was promoting,” he continues. “So even if those links were nofollow, if we see enough mass-scale action that we consider deceptive or manipulative, we do reserve the right to take action, so you know, we carve out a little bit of an exception if we see truly huge scale abuse, but for the most part, nofollow links are dropped out of our link graph as we’re crawling the web, and so those links that are nofollowed should not affect you from an algorithmic point of view.”

“I always give myself just the smallest out just in case we find somebody who’s doing a really creative attack or mass abuse or something like that, but in general, as long as you’re doing regular direct traffic building, and you’re not annoying the entire web or something like that, you should be in good shape,” he concludes.

This is perhaps a more interesting discussion than it seems on the surface in light of other recent advice from Cutts, like that to nofollow links on infographics, which can arguably provide legitimate content and come naturally via editorial decision. It also comes at a time when there are a lot of questions about the value of links and which links Google is going to be okay with, and which it is not. Things are complicated even further in instances when Google is making mistakes on apparently legitimate links, and telling webmasters that they’re bad.

Image: Google
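For reference, the kind of link Cutts is describing – one meant for direct traffic rather than PageRank – simply carries the nofollow attribute, which most comment systems already add automatically. A hypothetical example (the URL is a placeholder):

    <!-- A comment link marked nofollow: visitors can click it, but Google drops it from the link graph -->
    <a href="http://example.com/my-tool" rel="nofollow">Check out my tool</a>

As Cutts notes, this keeps the link out of ranking computations, though mass-posting such links can still draw a manual spam action.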

Sep 9 2013

Google Admits To Getting Another Link Wrong, Which Will Probably Not Help The Link Hysteria

Google is apparently getting links wrong from time to time. By wrong, we mean giving webmasters example links (in unnatural link warning messaging) that are actually legitimate, natural links.

A couple weeks ago, a forum thread received some attention when a webmaster claimed that this happened to him. Eventually Google responded, not quite admitting a mistake, but not denying it either. A Googler told him:

Thanks for your feedback on the example links sent to you in your reconsideration request. We’ll use your comments to improve the messaging and example links that we send. If you believe that your site no longer violates Google Webmaster Guidelines, you can file a new reconsideration request, and we’ll re-evaluate your site for reconsideration.

Like I said, not exactly an admission of guilt, but it pretty much sounds like they’re acknowledging the merit of the guy’s claims, and keeping these findings in mind to avoid making similar mistakes in the future. That’s just one interpretation, so do with that what you will.

Now, however, we see a Googler clearly admitting a mistake in a case where Google provided a webmaster with one of those example URLs for a DMOZ link. Barry Schwartz at Search Engine Roundtable, who pointed out the other thread initially, managed to find this Google+ discussion from even earlier. Dave Cain shared the message he got from Google, which included the DMOZ link, and tagged Google’s Matt Cutts and John Mueller in the post.

Mueller responded, saying, “That particular DMOZ/ODP link-example sounds like a mistake on our side.”

“Keep in mind that these are just examples — fixing (or knowing that you can ignore) one of them, doesn’t mean that there’s nothing else to fix,” he added. “With that in mind, I’d still double-check to see if there are other issues before submitting a reconsideration request, so that you’re a bit more certain that things are really resolved (otherwise it’s just a bit of time wasted with back & forth).”

Cain asked, “Because of the types of links that were flagged in the RR response (which appear to be false negatives, i.e. DMOZ/ODP), would it be safe to assume that the disavow file wasn’t processed with the RR?”

Mueller said that “usually” submitting both at the same time is no problem, adding, “So I imagine it’s more a matter of the webspam team expecting more.” Cutts, of the webspam team, did not weigh in on the conversation (which took place on August 20th).

Mistakes happen, and Google is not above that. However, seeing one case where Google is openly admitting a mistake so close to another case where it looks like they probably also made a mistake is somewhat troubling, considering all the hysteria we’ve seen over linking over the past year and a half. It does make you wonder how often it’s happening.

Image: Google
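For readers unfamiliar with the disavow file mentioned above, it is a plain text file uploaded through the Disavow Links tool in Google Webmaster Tools, listing links you want Google to ignore. A hypothetical sketch (the domains and URL are placeholders):

    # Links from this network were paid for; removal requests went unanswered
    domain:spammy-link-network.example
    # A single bad URL can also be listed on its own line
    http://some-directory.example/page-linking-to-us.html

Lines beginning with # are comments, domain: entries disavow every link from that domain, and bare URLs disavow individual pages. A disavow file is often submitted alongside a reconsideration request, which, per Mueller, is usually not a problem.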

Sep 4 2013

Google’s Matt Cutts Talks Auto-Generated Pages

Google takes action on pages that are auto-generated and add no value. You probably know that. Google talks about this kind of content in its Quality Guidelines. But that hasn’t stopped Matt Cutts from discussing it further in a new Webmaster Help video, in response to the user-submitted question: What does Google do against sites that have a script that automatically picks up search query and makes a page about it? Ex: you Google [risks of drinking caffeine], end up at a page: “we have no articles for DRINKING CAFFEINE” with lots of ads.

“Okay, so it’s a bad user experience,” says Cutts. “And the way that you have asked the question, we are absolutely willing to take action against those sites.”

He notes that in the past, he has put out specific calls for sites where you search for a product, and you think you’re going to get a review, and you find a page with “zero reviews found” for that product. If you have a site that would have something like this, Cutts says to just block the pages that have zero results found.

Image: Google (YouTube)
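Cutts doesn’t spell out a mechanism, but one common way to “block” those pages is to have the search results template emit a robots noindex tag whenever the result count is zero. A minimal sketch, assuming a hypothetical internal search results page:

    <!-- Emitted into the <head> by the server only when the query returns zero results -->
    <meta name="robots" content="noindex">

Pages that do return results are left alone, so they can still be crawled and indexed normally; returning a 404 status for empty result pages is another option to the same end.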