May 21 2014

Google Launches Two Algorithm Updates Including New Panda

Google makes changes to its algorithm every day (sometimes multiple changes in one day). When the company actually announces them, you know they’re bigger than the average update, and when one of them is named Panda, it’s going to get a lot of attention. Have you been affected either positively or negatively by the new Google updates? Let us know in the comments.

Google’s head of webspam, Matt Cutts, tweeted about the updates on Tuesday night:

Google is rolling out our Panda 4.0 update starting today. — Matt Cutts (@mattcutts) May 20, 2014

This past weekend we started rolling out a ranking update for very spammy queries: — Matt Cutts (@mattcutts) May 21, 2014

Panda has been refreshed on a regular basis for quite some time now, and Google has indicated in the past that it no longer requires announcements because of that. At one point, it was actually softened. But now we have a clear announcement and a new version number (4.0), so it must be significant. For one, this indicates that the algorithm was actually updated, as opposed to just refreshed, opening up the possibility of some big shuffling of rankings. The company told Search Engine Land that the new Panda affects different languages to different degrees, and impacts roughly 7.5% of English queries to a degree regular users might notice.

The other update is a new version of what is sometimes referred to as the “payday loans” update. The first one was launched just a little more than a year ago. Cutts discussed it in this video before launching it: “We get a lot of great feedback from outside of Google, so, for example, there were some people complaining about searches like ‘payday loans’ on,” he said. “So we have two different changes that try to tackle those kinds of queries in a couple different ways.
We can’t get into too much detail about exactly how they work, but I’m kind of excited that we’re going from having just general queries be a little more clean to going to some of these areas that have traditionally been a little more spammy, including, for example, some more pornographic queries, and some of these changes might have a little bit more of an impact on those kinds of areas that are a little more contested by various spammers and that sort of thing.”

He also discussed it at SMX Advanced last year. As Barry Schwartz reported at the time: Matt Cutts explained this goes after unique link schemes, many of which are illegal. He also added that this is a worldwide update: it is not just being rolled out in the U.S., but globally. This update impacted roughly 0.3% of U.S. queries, but Matt said it went as high as 4% for Turkish queries, where web spam is typically higher.

That was then. This time, according to Schwartz, who has spoken with Cutts, it impacts English queries by about 0.2% to a noticeable degree.

Sites are definitely feeling the impact of Google’s new updates. Here are a few comments from various webmasters on the WebmasterWorld forum:

We’ve seen a nice jump in Google referrals and traffic over the past couple of days, with the biggest increase on Monday (the announced date of the Panda 4.0 rollout). Our Google referrals on Monday were up by 130 percent…

…I am pulling out my hair. I’ve worked hard the past few months to overcome the Panda from March and was hoping to come out of it with the changes I made. Absolutely no change at all in the SERPs. I guess I’ll have to start looking for work once again.

…While I don’t know how updates are rolled out, my site that has had Panda problems since April 2011 first showed evidence of a traffic increase at 5 p.m. (Central, US) on Monday (5/19/2014).
…This is the first time I have seen a couple sites I deal with actually get a nice jump in rankings after a Panda…

It appears that eBay has taken a hit. Dr. Peter J. Meyers at Moz found that eBay lost rankings on a variety of keywords, and that the main eBay subdomain fell out of Moz’s “Big 10,” its metric of the ten domains with the most real estate in the top ten results.

“Over the course of about three days, eBay fell from #6 in our Big 10 to #25,” he writes. “Change is the norm for Google’s SERPs, but this particular change is clearly out of place, historically speaking. eBay has been #6 in our Big 10 since March 1st, and prior to that primarily competed for either the #6 or #7 place. The drop to #25 is very large. Overall, eBay has gone from right at 1% of the URLs in our data set down to 0.28%, dropping more than two-thirds of the ranking real-estate they previously held.”

He goes on to highlight specific key phrases where eBay lost rankings. It lost two top-ten rankings for three separate phrases: “fiber optic christmas tree,” “tongue rings,” and “vermont castings.” Each of these, according to Meyers, was a category page on eBay. eBay also fell out of the top ten, according to this report, for queries like “beats by dr dre,” “honeywell thermostat,” “hooked on phonics,” “batman costume,” “lenovo tablet,” “george foreman grill,” and many others. It’s worth noting that eBay tended to be on the lower end of the top-ten rankings for these queries; it isn’t dropping out of the number one spot, apparently. Either way, this isn’t exactly good news for eBay sellers.

Of course, it’s unlikely that Google was specifically targeting eBay with either update, and the site could certainly bounce back. Have you noticed any specific types of sites (or specific sites) that have taken a noticeable hit? Do Google’s results look better in general? Let us know in the comments.

Image via Thinkstock

May 19 2014

Google Responds To Link Removal Overreaction

People continue to needlessly ask sites that have legitimately linked to theirs to remove those links, either because they’re afraid Google won’t like them or because they simply want to be cautious about what Google may find questionable at any given time. With Google’s algorithms and manual penalty focuses changing on an ongoing basis, it’s hard to say what will get you in trouble with the search engine down the road. Guest blogging, for example, used to be of little concern, but in recent months Google has people freaking out about it. Have you ever felt compelled to have a natural link removed? Let us know in the comments.

People take different views on specific types of links, whether they’re from guest blog posts, directories, or something else entirely, but things have become so bass ackwards that people seek to have completely legitimate links to their sites removed. Natural links.

The topic is getting some attention once again thanks to a blog post from Jeremy Palmer called “Google is Breaking the Internet.” He talks about getting an email from a site his site linked to. “In short, the email was a request to remove links from our site to their site,” he says. “We linked to this company on our own accord, with no prior solicitation, because we felt it would be useful to our site visitors, which is generally why people link to things on the Internet.”

“For the last 10 years, Google has been instilling and spreading irrational fear into webmasters,” he writes. “They’ve convinced site owners that any link, outside of a purely editorial link from an ‘authority site’, could be flagged as a bad link, and subject the site to ranking and/or index penalties. This fear, uncertainty and doubt (FUD) campaign has webmasters everywhere doing unnatural things, which is what Google claims they’re trying to stop.”

It’s true. We’ve seen similar emails, and perhaps you have too. A lot of sites have.
Barry Schwartz at Search Engine Roundtable says he gets quite a few of them, and has just stopped responding. It’s gotten so bad that people even ask StumbleUpon to remove links. You know, StumbleUpon – one of the biggest drivers of traffic on the web.

“We typically receive a few of these requests a week,” a spokesperson for the company told WebProNews last year. “We evaluate the links based on quality and if they don’t meet our user experience criteria we take them down. Since we drive a lot of traffic to sites all over the Web, we encourage all publishers to keep and add quality links to StumbleUpon. Our community votes on the content they like and don’t like so the best content is stumbled and shared more often while the less popular content is naturally seen less frequently.”

Palmer’s post made its way to Hacker News, and got the attention of a couple of Googlers, including Matt Cutts himself. It actually turned into quite a lengthy conversation. Cutts wrote:

Note that there are two different things to keep in mind when someone writes in and says “Hey, can you remove this link from your site?” Situation #1 is by far the most common. If a site gets dinged for linkspam and works to clean up their links, a lot of them send out a bunch of link removal requests on their own prerogative. Situation #2 is when Google actually sends a notice to a site for spamming links and gives a concrete link that we believe is part of the problem. For example, we might say “we believe has a problem with spam or inorganic links. An example link is” The vast majority of the link removal requests that a typical site gets are for the first type, where a site got tagged for spamming links and now it’s trying hard to clean up any links that could be considered spammy.

He also shared this video discussion he recently had with Leo Laporte and Gina Trapani.
Cutts later said in the Hacker News thread, “It’s not a huge surprise that some sites which went way too far spamming for links will sometimes go overboard when it’s necessary to clean the spammy links up. The main thing I’d recommend for a site owner who gets a fairly large number of link removal requests is to ask ‘Do these requests indicate a larger issue with my site?’ For example, if you run a forum and it’s trivially easy for blackhat SEOs to register for your forum and drop a link on the user profile page, then that’s a loophole that you probably want to close. But if the links actually look organic to you or you’re confident that your site is high-quality or doesn’t have those sorts of loopholes, you can safely ignore these requests unless you’re feeling helpful.”

Side note: Cutts mentioned in the thread that Google hasn’t been using the disavow links tool as a reason not to trust a source site.

Googler Ryan Moulton weighed in on the link removal discussion in the thread, saying, “The most likely situation is that the company who sent the letter hired a shady SEO. That SEO did spammy things that got them penalized. They brought in a new SEO to clean up the mess, and that SEO is trying to undo all the damage the previous one caused. They are trying to remove every link they can find since they didn’t do the spamming in the first place and don’t know which are causing the problem.”

That’s a fair point that has gone largely overlooked. Either way, it is indeed clear that sites are overreacting in getting links removed. Natural links. Likewise, some sites are afraid to link out naturally for similar reasons. After the big guest blogging bust of 2014, Econsultancy, a reasonably reputable digital marketing and ecommerce resource site, announced that it was adding nofollow to links in the bios of guest authors as part of a “safety first approach”. Keep in mind, they only accept high-quality posts in the first place, and have strict guidelines.
Econsultancy’s Chris Lake wrote at the time, “Google is worried about links in signatures. I guess that can be gamed, on less scrupulous blogs. It’s just that our editorial bar is very high, and all outbound links have to be there on merit, and justified. From a user experience perspective, links in signatures are entirely justifiable. I frequently check out writers in more detail, and wind up following people on the various social networks. But should these links pass on any linkjuice? It seems not, if you want to play it safe (and we do).” Of course, Google is always talking about how important the user experience is.

Are people overreacting with link removals? Should the sites doing the linking respond to irrational removal requests? Share your thoughts in the comments.

Apr 23 2014

Google: Small Sites Can Outrank Big Sites

The latest Webmaster Help video from Google takes on a timeless subject: small sites being able to outrank big sites. This time, Matt Cutts specifically tackles the following question:

How can smaller sites with superior content ever rank over sites with superior traffic? It’s a vicious circle: A regional or national brick-and-mortar brand has higher traffic, which leads to a higher rank, which leads to higher traffic, ad infinitum.

Google rephrased the question for the YouTube title as “How can small sites become popular?”

Cutts says, “Let me disagree a little bit with the premise of your question, which is just because you have some national brand, that automatically leads to higher traffic or higher rank. Over and over again, we see the sites that are smart enough to be agile, and be dynamic, and respond quickly, and roll out new ideas much faster than these sort of lumbering, larger sites, can often rank higher in Google search results. And it’s not the case that the smaller site with superior content can’t outdo the larger sites. That’s how the smaller sites often become the larger sites, right? You think about something like MySpace, and then Facebook; or Facebook, and then Instagram. And all these small sites have often become very big. Even AltaVista and Google, because they do a better job of focusing on the user experience. They return something that adds more value.”

“If it’s a research report organization, the reports are higher quality or they’re more insightful, or they look deeper into the issues,” he continues. “If it’s somebody that does analysis, their analysis is just more robust.”

Of course, sometimes they like the dumbed down version. But don’t worry, you don’t have to dumb down your content that much.

“Whatever area you’re in, if you’re doing it better than the other incumbents, then over time, you can expect to perform better, and better, and better,” Cutts says.
“But you do have to also bear in mind, if you have a one-person website, taking on a 200-person website is going to be hard at first. So think about concentrating on a smaller topic area – one niche – and sort of say, on this subject area – on this particular area, make sure you cover it really, really well, and then you can sort of build out from that smaller area until you become larger, and larger, and larger.”

“If you look at the history of the web, over and over again, you see people competing on a level playing field, and because there’s very little friction in changing where you go, and which apps you use, and which websites you visit, the small guys absolutely can outperform the larger guys as long as they do a really good job at it,” he adds. “So good luck with that. I hope it works well for you. And don’t stop trying to produce superior content, because over time, that’s one of the best ways to rank higher on the web.”

Image via YouTube

Apr 21 2014

Google’s ‘Rules Of Thumb’ For When You Buy A Domain

Google has a new Webmaster Help video out, in which Matt Cutts talks about buying domains that have had trouble with Google in the past, and what to do about it. Here’s the specific question he addresses:

How can we check to see if a domain (bought from a registrar) was previously in trouble with Google? I recently bought, and unbeknownst to me the domain isn’t being indexed and I’ve had to do a reconsideration request. How could I have prevented?

“A few rules of thumb,” he says. “First off, do a search for the domain, and do it in a couple ways. Do a ‘site:’ search, so, ‘site:’ for whatever it is that you want to buy. If there’s no results at all from that domain, even if there’s content on that domain, that’s a pretty bad sign. If the domain is parked, we try to take parked domains out of our results anyway, so that might not indicate anything, but if you try to do ‘site:’ and you see zero results, that’s often a bad sign. Also, just search for the domain name, or the name of the domain minus the .com or whatever the extension is on the end, because you can often find out a little of the reputation of the domain. So were people spamming with that domain name? Were they talking about it? Were they talking about it in a bad way? Like, this guy was sending me unsolicited email, and leaving spam comments on my blog. That’s a really good way to sort of figure out what’s going on with that site or what it was like in the past.”

“Another good rule of thumb is to use the Internet Archive, so if you go to, and you put in a domain name, the archive will show you what the previous versions of that site look like.
And if the site looked like it was spamming, then that’s definitely a reason to be a lot more cautious, and maybe steer clear of buying that domain name, because that probably means you might have – the previous owner might have dug the domain into a hole, and you just have to do a lot of work even to get back to level ground.”

Don’t count on Google figuring it out or giving you an easy way to get things done. Cutts continues, “If you’re talking about buying the domain from someone who currently owns it, you might ask, can you either let me see the analytics or the Webmaster Tools console to check for any messages, or screenshots – something that would let me see the traffic over time, because if the traffic is going okay, and then dropped a lot or has gone really far down, then that might be a reason why you would want to avoid the domain as well. If, despite all that, you buy the domain, and you find out there was some really scuzzy stuff going on, and it’s got some issues with search engines, you can do a reconsideration request. Before you do that, I would consider – ask yourself: are you trying to buy the domain just because you like the domain name, or are you buying it because of all the previous content or the links that were coming to it, or something like that? If you’re counting on those links carrying over, you might be disappointed, because the links might not carry over. Especially if the previous owner was spamming, you might consider just doing a disavow of all the links that you can find on that domain, and try to get a completely fresh start whenever you are ready to move forward with it.”

Cutts did a video about a year ago about buying spammy domains, advising buyers not to be “the guy who gets caught holding the bag.” Watch that one here.

Image via YouTube
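Cutts’ Internet Archive tip is easy to automate before you commit to a purchase. Here’s a minimal sketch (our own illustration, not an official tool) that asks the Wayback Machine’s public availability API whether a domain has archived snapshots worth reviewing; the helper that interprets the JSON response is kept separate so it can be checked without network access.

```python
import json
import urllib.parse
import urllib.request

# Public Wayback Machine availability endpoint: returns JSON describing the
# archived snapshot closest to now, or an empty "archived_snapshots" object
# when the Internet Archive has nothing for the URL.
WAYBACK_API = "https://archive.org/wayback/available?url="

def closest_snapshot(payload):
    """Return the closest available snapshot URL from an API response, or None."""
    closest = payload.get("archived_snapshots", {}).get("closest")
    if closest and closest.get("available"):
        return closest.get("url")
    return None

def check_domain(domain):
    """Ask the Wayback Machine whether `domain` has archived history."""
    with urllib.request.urlopen(WAYBACK_API + urllib.parse.quote(domain)) as resp:
        return closest_snapshot(json.load(resp))

# Example (requires network access):
#   snap = check_domain("example.com")
#   # Open a returned URL to eyeball what the previous owner was publishing.
```

If a snapshot URL comes back, load it in a browser and judge the old content for yourself, exactly as Cutts suggests; no snapshots at all for a domain that supposedly had content is its own warning sign.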

Mar 26 2014

PSA: The Topics You Include On Your Blog Must Please Google

It’s no secret by now that Google has launched an attack on guest blogging. Since penalizing MyBlogGuest earlier this month, Google has predictably reignited the link removal hysteria. More people are getting manual penalties related to guest posts.

SEO Doc Sheldon got one specifically for running one post that Google deemed not to be on-topic enough for his site, even though it was about marketing. Maybe there were more, but that’s the one Google pointed out. The message he received (via Search Engine Roundtable) was:

Google detected a pattern of unnatural, artificial, deceptive or manipulative outbound links on pages on this site. This may be the result of selling links that pass PageRank or participating in link schemes.

He shared this in an open letter to Matt Cutts, Eric Schmidt, Larry Page, Sergey Brin, et al. Cutts responded to that letter with this:

@DocSheldon what "Best Practices For Hispanic Social Networking" has to do with an SEO copywriting blog? Manual webspam notice was on point. — Matt Cutts (@mattcutts) March 24, 2014

To which Sheldon responded:

@mattcutts My blog is about SEO, marketing, social media, web dev…. I'd say it has everything to do – or I wouldn't have run it — DocSheldon (@DocSheldon) March 25, 2014

Perhaps that link removal craze isn’t so irrational after all. Irrational on Google’s part, perhaps, but who can really blame webmasters for succumbing to Google’s pressure over what content they run on their sites when they rely on Google for traffic, and ultimately business?

@mattcutts So we can take this to mean that just that one link was the justification for a sitewide penalty? THAT sure sends a message! — DocSheldon (@DocSheldon) March 25, 2014

Here’s the article in question. Sheldon admitted it wasn’t the highest quality post in the world, but also added that it wasn’t totally without value, and noted that it wasn’t affected by the Panda update (which is supposed to handle the quality part algorithmically).
I have a feeling the link removal craze is going to ramp up a lot more. Ann Smarty, who runs MyBlogGuest, weighed in on the conversation:

I don't have the words RT @DocSheldon @mattcutts one link was the justification for a sitewide penalty? THAT sure sends a message! — Ann Smarty (@seosmarty) March 25, 2014

Image via YouTube

Feb 24 2014

Google’s Cutts Talks EXIF Data As A Ranking Factor

Google may use EXIF data attached to images as a ranking factor in search results. This isn’t exactly a new revelation, but it is the topic of a new “Webmaster Help” video from the company. Matt Cutts responds to the submitted question, “Does Google use EXIF data from pictures as a ranking factor?”

“The short answer is: We did a blog post, in I think April of 2012, where we talked about it, and we did say that we reserve the right to use EXIF or other sort of metadata that we find about an image in order to help people find information,” Cutts says. “And at least in the version of image search as it existed back then, when you clicked on an image, we would sometimes show the information from EXIF data in the righthand sidebar, so it is something that Google is able to parse out, and I think we do reserve the right to use it in ranking.”

“So if you’re taking pictures, I would go ahead and embed that sort of information if it’s available within your camera, because, you know, if someone eventually wants to search for camera types or focal lengths or dates or something like that, it can possibly be a useful source of information,” he continues. “So I’d go ahead and include it if it’s already there. I wouldn’t worry about adding it if it’s not there. But we do reserve the right to use it as potentially a ranking factor.”

The blog post he was talking about was called “1000 Words About Images,” which gives some tips on helping Google index your images and includes a Q&A section. In that part, one of the questions is: What happens to the EXIF, XMP and other metadata my images contain? The answer was: “We may use any information we find to help our users find what they’re looking for more easily.
Additionally, information like EXIF data may be displayed in the right-hand sidebar of the interstitial page that appears when you click on an image.”

Google has made significant changes to image search since that post was written, causing a lot of sites to get a great deal less traffic from it.

Image via YouTube
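To make the “parse it out” idea concrete, here is a toy sketch of our own (not anything Google has published) showing how the TIFF-structured EXIF payload inside an image can be read. It walks only the first image file directory (IFD0) and extracts ASCII-typed tags such as Make (0x010F) and Model (0x0110), i.e. exactly the kind of camera metadata Cutts mentions.

```python
import struct

# Toy reader for the TIFF-structured payload that follows the "Exif\x00\x00"
# marker in a JPEG's APP1 segment. It walks only IFD0 and collects
# ASCII-typed tags, e.g. Make (0x010F) and Model (0x0110).
ASCII_TYPE = 2  # EXIF/TIFF field type 2 = NUL-terminated ASCII

def read_ifd0_ascii_tags(tiff):
    """Return {tag_id: string} for ASCII tags in the first IFD."""
    endian = "<" if tiff[:2] == b"II" else ">"  # II = little-endian, MM = big
    (ifd_offset,) = struct.unpack_from(endian + "I", tiff, 4)
    (count,) = struct.unpack_from(endian + "H", tiff, ifd_offset)
    tags = {}
    for i in range(count):
        entry = ifd_offset + 2 + 12 * i  # each IFD entry is 12 bytes
        tag, typ, n = struct.unpack_from(endian + "HHI", tiff, entry)
        if typ != ASCII_TYPE:
            continue
        if n <= 4:  # short values are packed into the entry itself
            raw = tiff[entry + 8 : entry + 8 + n]
        else:       # longer values live at an offset elsewhere in the payload
            (off,) = struct.unpack_from(endian + "I", tiff, entry + 8)
            raw = tiff[off : off + n]
        tags[tag] = raw.rstrip(b"\x00").decode("ascii", "replace")
    return tags
```

A real crawler would handle far more (sub-IFDs, rational types, GPS data), but even this much is enough to surface camera make and model from a photo, which is why leaving EXIF intact in your images costs nothing and might help.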

Feb 13 2014

Matt Cutts Talks About A Typical Day In Spam-Fighting

The latest “Webmaster Help” video from Google is an interesting (and long) one. Google webspam king Matt Cutts talks about a day in the life of someone on the webspam team. Here’s the set of questions he answers, verbatim:

What is a day in the life of a search spam team member like? What is the evolution of decisions in terms of how they decide which aspects of the search algorithm to update? Will certain things within the algorithm never be considered for removal?

He begins by noting that the team is made up of both engineers and manual spam fighters, both of whom he addresses separately. First, he gives a rough idea of a manual spam fighter’s day.

“Typically it’s a mix of reactive spam-fighting and proactive spam-fighting,” he says. “So reactive would mean we get a spam report or somehow we detect that someone is spamming Google. Well, we have to react to that. We have to figure out how do we make things better, and so a certain amount of every day is just making sure that the spammers don’t infest the search results, and make the search experience horrible for everyone. So that’s sort of like not hand-to-hand combat, but it is saying ‘yes’ or ‘no’ this is spam, or trying to find the spam that is currently ranking relatively well. And then in the process of doing that, the best spam-fighters I know are fantastic at seeing the trends, seeing the patterns in that spam, and then moving into a proactive mode.”

This would involve trying to figure out how the spammers are ranking so highly, the loophole they’re exploiting, and how to fix it at the root of the problem. This could involve interacting with engineers or just identifying specific spammers.

“Engineers,” he says. “They absolutely look at the data. They absolutely look at examples of spam, but your average day is usually spent coding and doing testing of ideas. So you’ll write up an algorithm that you think will be able to stop a particular type of spam.
There’s no one algorithm that will stop every single type of spam. You know, Penguin, for example, is really good at several types of spam, but it doesn’t tackle hacked sites, for example. So if you are an engineer, you might be working on, ‘How do I detect hacked sites more accurately?’”

He says they would come up with the best techniques and signals they can use, and write an algorithm that tries to catch as many hacked sites as possible while safely preserving the sites that are innocent. Then they test it, and either run it across the index or run an experiment with ratings from URLs, and see if things look better. Live traffic experiments, seeing what people click on, he says, help them identify what the false positives are.

On the “evolution of decisions” part of the question, Cutts says, “We’re always going back and revisiting, and saying, ‘Okay, is this algorithm still effective? Is this algorithm still necessary given this new algorithm?’ And one thing that the quality team (the knowledge team) does very well is trying to go back and ask ourselves, ‘Okay, let’s revisit our assumptions. Let’s say if we were starting from scratch, would we do it this way? What is broken, or stale, or outdated, or defunct compared to some other new way of coming up with this?’ And so we don’t just try to have a lot of different tripwires that would catch a lot of different types of spam, you try to come up with elegant ways that will always catch spam, and try to highlight new types of spam as they occur.”

He goes on for about another three minutes after that.

Image via YouTube

Feb 11 2014

Google Updated The Page Layout Algorithm Last Week

Google’s Matt Cutts announced on Twitter that the search engine launched a data refresh for its “page layout” algorithm last week. If you’ll recall, this is the Google update that specifically looks at how much content a page has “above the fold”. The idea is that you don’t want your site’s content to be pushed down or dwarfed by ads and other non-content material. You want it to be simple for users to find your content without having to scroll.

SEO folks: we recently launched a refresh of this algorithm: Visible to outside world on ~Feb. 6th. — Matt Cutts (@mattcutts) February 10, 2014

Cutts first announced the update in January 2012. He said this at the time:

Rather than scrolling down the page past a slew of ads, users want to see content right away. So sites that don’t have much content “above-the-fold” can be affected by this change. If you click on a website and the part of the website you see first either doesn’t have a lot of visible content above-the-fold or dedicates a large fraction of the site’s initial screen real estate to ads, that’s not a very good user experience. Such sites may not rank as highly going forward.

We understand that placing ads above-the-fold is quite common for many websites; these ads often perform well and help publishers monetize online content. This algorithmic change does not affect sites who place ads above-the-fold to a normal degree, but affects sites that go much further to load the top of the page with ads to an excessive degree or that make it hard to find the actual original content on the page. This new algorithmic improvement tends to impact sites where there is only a small amount of visible content above-the-fold or relevant content is persistently pushed down by large blocks of ads.

The initial update affected less than 1% of searches globally, Google said. It’s unclear how far-reaching this data refresh is. Either way, if you’ve suddenly lost Google traffic, you may want to check out your site’s design.
Unlike some of its other updates, this one shouldn’t be too hard to recover from if you were hit. You should check out Google’s browser size tool, which lets you get an idea of how much of your page different users are seeing.

Image via Google

Jan 10 2014

Google Tweaks Guidance On Link Schemes

Google has made a subtle but noteworthy change to its help center article on link schemes, which is part of the quality guidelines dissuading webmasters from engaging in spammy SEO tactics.

Google put out a video last summer about adding rel=”nofollow” to links that are included in widgets. In that, Matt Cutts, Google’s head of webspam, said, “I would not rely on widgets and infographics as your primary way to gather links, and I would recommend putting a nofollow, especially on widgets, because most people, when they just copy and paste a segment of code, they don’t realize what all is going on with that, and it’s usually not as much of an editorial choice, because they might not see the links that are embedded in that widget.”

“Depending on the scale of the stuff that you’re doing with infographics, you might consider putting a rel nofollow on infographic links as well,” he continued. “The value of those things might be branding. They might be to drive traffic. They might be to sort of let people know that your site or your service exists, but I wouldn’t expect a link from a widget to necessarily carry the same weight as an editorial link freely given where someone is recommending something and talking about it in a blog post. That sort of thing.”

In Google’s guidance on link schemes, it gives “common examples of unnatural links that may violate our guidelines.” It used to include: “Links embedded in widgets that are distributed across various sites.” As Search Engine Land brings to our attention, that part now reads: “Keyword-rich, hidden or low-quality links embedded in widgets that are distributed across various sites.” That’s a little more specific, and seems to indicate that the previous guidance cast a broader net over such links than what Google really frowns upon. That’s worth noting.
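In practice, Cutts’ advice amounts to shipping your widget’s embed code with the attribution link already nofollowed. A hypothetical embed snippet (the domain and file names here are made up for illustration) might look like this:

```html
<!-- Hypothetical widget embed code: the script renders the widget, and the
     credit link carries rel="nofollow" so it passes no PageRank. -->
<div class="weather-widget">
  <script src="https://widgets.example.com/weather.js" async></script>
  <a href="https://www.example.com/" rel="nofollow">Widget by Example.com</a>
</div>
```

Under the updated wording of the guidelines, the problem case is a keyword-stuffed or hidden version of that credit link, not the mere existence of a link in a widget.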
You’d do well to pay attention to what Google thinks about link schemes, as the search engine has made a big point of cracking down on them lately (even if some have gotten off lightly).

Sep 9 2013

Matt Cutts On When Nofollow Links Can Still Get You A Manual Penalty

Today, we get an interesting Webmaster Help video from Google, with Matt Cutts discussing nofollow links and whether or not using them can impact your site’s rankings. The question Cutts responds to comes from somebody going by the name Tubby Timmy:

I’m building links, not for SEO but to try and generate direct traffic. If these links are no-follow, am I safe from getting any Google penalties? Asked another way, can no-follow links hurt my site?

Cutts begins, “No, typically nofollow links cannot hurt your site, so upfront, very quick answer on that point. That said, let me just mention one weird corner case, which is if you are, like, leaving comments on every blog in the world, even if those links might be nofollow, if you are doing it so much that people notice you, and they’re really annoyed by you, and people spam report about you, we might take some manual spam action on you, for example.”

“I remember for a long time on TechCrunch, anytime that people showed up, there was this guy who would show up and make some nonsensical comment, and it was clear that he was just trying to piggyback on the traffic from people reading the article to whatever he was promoting,” he continues. “So even if those links were nofollow, if we see enough mass-scale action that we consider deceptive or manipulative, we do reserve the right to take action, so you know, we carve out a little bit of an exception if we see truly huge-scale abuse, but for the most part, nofollow links are dropped out of our link graph as we’re crawling the web, and so those links that are nofollowed should not affect you from an algorithmic point of view.”

“I always give myself just the smallest out, just in case we find somebody who’s doing a really creative attack or mass abuse or something like that, but in general, as long as you’re doing regular direct traffic building, and you’re not annoying the entire web or something like that, you should be in good shape,” he concludes.
This is perhaps a more interesting discussion than it seems on the surface, in light of other recent advice from Cutts, like that to nofollow links on infographics, which can arguably provide legitimate content and come naturally via editorial decision. It also comes at a time when there are a lot of questions about the value of links and which links Google is going to be okay with, and which it is not. Things are complicated even further in instances when Google is making mistakes on apparently legitimate links and telling webmasters that they’re bad.

Image: Google