Apr 23 2014

Google: Small Sites Can Outrank Big Sites

The latest Webmaster Help video from Google takes on a timeless subject: small sites being able to outrank big sites. This time, Matt Cutts specifically tackles the following question:

How can smaller sites with superior content ever rank over sites with superior traffic? It’s a vicious circle: a regional or national brick-and-mortar brand has higher traffic, which leads to a higher rank, which leads to higher traffic, ad infinitum.

Google rephrased the question for the YouTube title as “How can small sites become popular?”

Cutts says, “Let me disagree a little bit with the premise of your question, which is just because you have some national brand, that automatically leads to higher traffic or higher rank. Over and over again, we see the sites that are smart enough to be agile, and be dynamic, and respond quickly, and roll out new ideas much faster than these sort of lumbering, larger sites, can often rank higher in Google search results. And it’s not the case that the smaller site with superior content can’t outdo the larger sites. That’s how the smaller sites often become the larger sites, right? You think about something like MySpace and then Facebook, or Facebook and then Instagram. And all these small sites have often become very big, even AltaVista and Google, because they do a better job of focusing on the user experience. They return something that adds more value.”

“If it’s a research report organization, the reports are higher quality or they’re more insightful, or they look deeper into the issues,” he continues. “If it’s somebody that does analysis, their analysis is just more robust.”

Of course, sometimes readers like the dumbed-down version. But don’t worry, you don’t have to dumb down your content that much.

“Whatever area you’re in, if you’re doing it better than the other incumbents, then over time, you can expect to perform better, and better, and better,” Cutts says. “But you do have to also bear in mind, if you have a one-person website, taking on a 200-person website is going to be hard at first. So think about concentrating on a smaller topic area – one niche – and sort of say, on this subject area – on this particular area – make sure you cover it really, really well, and then you can sort of build out from that smaller area until you become larger, and larger, and larger.”

“If you look at the history of the web, over and over again, you see people competing on a level playing field, and because there’s very little friction in changing where you go, and which apps you use, and which websites you visit, the small guys absolutely can outperform the larger guys as long as they do a really good job at it,” he adds. “So good luck with that. I hope it works well for you. And don’t stop trying to produce superior content, because over time, that’s one of the best ways to rank higher on the web.”

Image via YouTube

Apr 21 2014

Google’s ‘Rules Of Thumb’ For When You Buy A Domain

Google has a new Webmaster Help video out, in which Matt Cutts talks about buying domains that have had trouble with Google in the past, and what to do about it. Here’s the specific question he addresses:

How can we check to see if a domain (bought from a registrar) was previously in trouble with Google? I recently bought one, and unbeknownst to me, the domain isn’t being indexed and I’ve had to do a reconsideration request. How could I have prevented this?

“A few rules of thumb,” he says. “First off, do a search for the domain, and do it in a couple ways. Do a ‘site:’ search, so ‘site:domain.com’ for whatever it is that you want to buy. If there are no results at all from that domain, even if there’s content on that domain, that’s a pretty bad sign. If the domain is parked, we try to take parked domains out of our results anyway, so that might not indicate anything, but if you try to do ‘site:’ and you see zero results, that’s often a bad sign. Also, just search for the domain name, or the name of the domain minus the .com or whatever the extension is on the end, because you can often find out a little of the reputation of the domain. So were people spamming with that domain name? Were they talking about it? Were they talking about it in a bad way? Like, this guy was sending me unsolicited email and leaving spam comments on my blog. That’s a really good way to sort of figure out what’s going on with that site, or what it was like in the past.”

“Another good rule of thumb is to use the Internet Archive, so if you go to archive.org and you put in a domain name, the archive will show you what the previous versions of that site looked like. And if the site looked like it was spamming, then that’s definitely a reason to be a lot more cautious, and maybe steer clear of buying that domain name, because that probably means the previous owner might have dug the domain into a hole, and you’d just have to do a lot of work even to get back to level ground.”

Don’t count on Google figuring it out for you or giving you an easy way to get things done.

Cutts continues, “If you’re talking about buying the domain from someone who currently owns it, you might ask, can you either let me see the analytics or the Webmaster Tools console to check for any messages, or screenshots – something that would let me see the traffic over time – because if the traffic is going okay, and then dropped a lot or has gone really far down, then that might be a reason why you would want to avoid the domain as well. If, despite all that, you buy the domain, and you find out there was some really scuzzy stuff going on, and it’s got some issues with search engines, you can do a reconsideration request. Before you do that, I would consider – ask yourself, are you trying to buy the domain just because you like the domain name, or are you buying it because of all the previous content or the links that were coming to it, or something like that? If you’re counting on those links carrying over, you might be disappointed, because the links might not carry over. Especially if the previous owner was spamming, you might consider just doing a disavow of all the links that you can find on that domain, and try to get a completely fresh start whenever you’re ready to move forward with it.”

Cutts did a video about a year ago about buying spammy domains, advising buyers not to be “the guy who gets caught holding the bag.” Watch that one here.

Image via YouTube
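Cutts’ archive.org suggestion is easy to automate if you’re vetting more than one candidate domain. Below is a minimal sketch, assuming Python and the Internet Archive’s public Wayback “availability” endpoint; the function name and example domain are placeholders, and this is illustrative rather than a definitive vetting tool.

```python
# Minimal sketch: check the Internet Archive for prior snapshots of a domain
# before buying it, along the lines Cutts suggests. Standard library only.
# The Wayback "availability" endpoint is public; the response fields used
# below follow its documented JSON shape.
import json
import urllib.parse
import urllib.request

def closest_snapshot(domain, timestamp=None):
    """Return the closest archived snapshot of `domain`, or None if none exists."""
    params = {"url": domain}
    if timestamp:                      # e.g. "20120101" to look around a past date
        params["timestamp"] = timestamp
    query = urllib.parse.urlencode(params)
    with urllib.request.urlopen(f"https://archive.org/wayback/available?{query}") as resp:
        data = json.load(resp)
    return data.get("archived_snapshots", {}).get("closest")

if __name__ == "__main__":
    snap = closest_snapshot("example.com")
    if snap:
        print("Archived copy to review:", snap["url"], "from", snap["timestamp"])
    else:
        print("No snapshots found; the archive can't tell you much about this domain.")
```

The ‘site:’ query and the search for the bare domain name that Cutts describes still have to be done by eyeball, since those are about judging reputation, not just presence.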

Mar 26 2014

PSA: The Topics You Include On Your Blog Must Please Google

It’s no secret now that Google has launched an attack against guest blogging. Since penalizing MyBlogGuest earlier this month, Google has predictably reignited the link removal hysteria. More people are getting manual penalties related to guest posts.

SEO Doc Sheldon got one specifically for running a single post that Google deemed not to be on-topic enough for his site, even though it was about marketing. Maybe there were more, but that’s the one Google pointed out. The message he received (via Search Engine Roundtable) was:

Google detected a pattern of unnatural, artificial, deceptive or manipulative outbound links on pages on this site. This may be the result of selling links that pass PageRank or participating in link schemes.

He shared this in an open letter to Matt Cutts, Eric Schmidt, Larry Page, Sergey Brin, et al. Cutts responded to that letter with this:

@DocSheldon what "Best Practices For Hispanic Social Networking" has to do with an SEO copywriting blog? Manual webspam notice was on point.
— Matt Cutts (@mattcutts) March 24, 2014

To which Sheldon responded:

@mattcutts My blog is about SEO, marketing, social media, web dev…. I'd say it has everything to do – or I wouldn't have run it
— DocSheldon (@DocSheldon) March 25, 2014

Perhaps that link removal craze isn’t so irrational. Irrational on Google’s part, perhaps, but who can really blame webmasters for succumbing to Google’s pressure to dictate what content they run on their sites, when they rely on Google for traffic and, ultimately, business?

@mattcutts So we can take this to mean that just that one link was the justification for a sitewide penalty? THAT sure sends a message!
— DocSheldon (@DocSheldon) March 25, 2014

Here’s the article in question. Sheldon admitted it wasn’t the highest quality post in the world, but also added that it wasn’t totally without value, and noted that it wasn’t affected by the Panda update (which is supposed to handle the quality part algorithmically).

I have a feeling that the link removal craze is going to be ramping up a lot more. Ann Smarty, who runs MyBlogGuest, weighed in on the conversation:

I don't have the words RT @DocSheldon @mattcutts one link was the justification for a sitewide penalty? THAT sure sends a message!
— Ann Smarty (@seosmarty) March 25, 2014

Image via YouTube

Sep 9 2013

Matt Cutts On When Nofollow Links Can Still Get You A Manual Penalty

Today, we get an interesting Webmaster Help video from Google and Matt Cutts discussing nofollow links, and whether or not using them can impact your site’s rankings. The question Cutts responds to comes from somebody going by the name Tubby Timmy:

I’m building links, not for SEO but to try and generate direct traffic. If these links are no-follow, am I safe from getting any Google penalties? Asked another way, can no-follow links hurt my site?

Cutts begins, “No, typically nofollow links cannot hurt your site, so upfront, very quick answer on that point. That said, let me just mention one weird corner case, which is if you are, like, leaving comments on every blog in the world, even if those links might be nofollow, if you are doing it so much that people notice you, and they’re really annoyed by you, and people file spam reports about you, we might take some manual spam action on you, for example.”

“I remember for a long time on TechCrunch, anytime that people showed up, there was this guy, anon.tc, who would show up and make some nonsensical comment, and it was clear that he was just trying to piggyback on the traffic from people reading the article to whatever he was promoting,” he continues. “So even if those links were nofollow, if we see enough mass-scale action that we consider deceptive or manipulative, we do reserve the right to take action, so you know, we carve out a little bit of an exception if we see truly huge-scale abuse, but for the most part, nofollow links are dropped out of our link graph as we’re crawling the web, and so those links that are nofollowed should not affect you from an algorithmic point of view.”

“I always give myself just the smallest out, just in case we find somebody who’s doing a really creative attack or mass abuse or something like that, but in general, as long as you’re doing regular direct traffic building, and you’re not annoying the entire web or something like that, you should be in good shape,” he concludes.

This is perhaps a more interesting discussion than it seems on the surface, in light of other recent advice from Cutts, like that to nofollow links on infographics, which can arguably provide legitimate content and come naturally via editorial decision. It also comes at a time when there are a lot of questions about the value of links, and about which links Google is going to be okay with and which it is not. Things are complicated even further in instances when Google is making mistakes on apparently legitimate links, and telling webmasters that they’re bad.

Image: Google

Aug 12 2013

Google: You Should Probably Include Nofollow On Widgets, Infographics

If you’re putting out widgets or infographics, you might want to be including nofollows in the embed code. That is according to Google’s Matt Cutts, who addresses the subject in a new Webmaster Help video. Cutts takes on the following submitted question:

What should we do with embeddable codes in things like widgets and infographics? Should we include the rel=”nofollow” by default? Alert the user that the code includes a link and give them the option of not including it?

“My answer to this is colored by the fact that we have seen a ton of people trying to abuse widgets and abuse infographics. We’ve seen people who get a web counter, and they don’t realize that there are mesothelioma links in there,” Cutts says. He notes that he did a previous video about the criteria for widgets.

“Does it point back to you or a third party?” he continues. “Is the keyword text sort of keyword-rich, and something where the anchor text is really rich, or is it just the name of your site? That sort of stuff.”

“I would not rely on widgets and infographics as your primary way to gather links, and I would recommend putting a nofollow, especially on widgets, because most people, when they just copy and paste a segment of code, don’t realize what all is going on with that, and it’s usually not as much of an editorial choice, because they might not see the links that are embedded in that widget,” Cutts says.

“Depending on the scale of the stuff that you’re doing with infographics, you might consider putting a rel nofollow on infographic links as well,” he adds. “The value of those things might be branding. They might be to drive traffic. They might be to sort of let people know that your site or your service exists, but I wouldn’t expect a link from a widget to necessarily carry the same weight as an editorial link freely given, where someone is recommending something and talking about it in a blog post. That sort of thing.”
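If your embed code is generated by a template, adding the nofollow Cutts recommends is a one-line change. Here is a minimal sketch in Python that assembles embed HTML with a rel="nofollow" attribution link; the helper and its parameters are hypothetical, and the markup is just one way an embed snippet might look.

```python
# Minimal sketch of generating widget/infographic embed code whose credit
# link carries rel="nofollow", per Cutts' recommendation. Function and
# parameter names are hypothetical.
from html import escape

def embed_snippet(image_url, attribution_url, attribution_text):
    """Build embed HTML in which the attribution link is explicitly nofollowed."""
    return (
        f'<img src="{escape(image_url, quote=True)}" alt="{escape(attribution_text, quote=True)}">\n'
        f'<a href="{escape(attribution_url, quote=True)}" rel="nofollow">'
        f'{escape(attribution_text)}</a>'
    )

print(embed_snippet(
    "https://example.com/infographic.png",
    "https://example.com/",
    "Infographic by Example.com",
))
```

The point is simply that the credit link ships with rel="nofollow" by default, so whoever pastes the snippet isn’t passing PageRank without realizing it.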

Jul 10 2013

Google On How Not To Do Guest Posts

Google’s view of guest blog posts has come up in industry conversation several times this week. As far as I can tell, this started with an article at HisWebMarketing.com by Marie Haynes, and now Google’s Matt Cutts has been talking about it in a new interview with Eric Enge.

Haynes’ post, titled “Yes, high quality guest posts CAN get you penalized!”, shares several videos of Googlers talking about the subject. The first is an old Matt Cutts Webmaster Help video that we’ve shared in the past. In that, Cutts basically said that it can be good to have a reputable, high quality writer do guest posts on your site, and that it can be a good way for some lesser-known writers to generate exposure, but…

“Sometimes it gets taken to extremes. You’ll see people writing…offering the same blog post multiple times or spinning the blog posts, offering them to multiple outlets. It almost becomes like low-quality article banks.”

“When you’re just doing it as a way to sort of turn the crank and get a massive number of links, that’s something where we’re less likely to want to count those links,” he said.

The next video Haynes points to is a Webmaster Central hangout from February. When someone in the video says they submit articles to the Huffington Post, and asks if they should nofollow the links to their site, Google’s John Mueller says, “Generally speaking, if you’re submitting articles for your website, or your clients’ websites, and you’re including links to those websites there, then that’s probably something I’d nofollow, because those aren’t essentially natural links from that website.”

Finally, Haynes points to another February Webmaster Central hangout. In that one, when a webmaster asks if it’s okay to get links to his site through guest postings, Mueller says, “Think about whether or not this is a link that would be on that site if it weren’t for your actions there. Especially when it comes to guest blogging, that’s something where you are essentially placing links on other people’s sites together with this content, so that’s something I kind of shy away from purely from a linkbuilding point of view. I think sometimes it can make sense to guest blog on other peoples’ sites and drive some traffic to your site, because people really liked what you are writing and they are interested in the topic and they click through that link to come to your website, but those are probably the cases where you’d want to use something like a rel=nofollow on those links.”

Barry Schwartz at Search Engine Land wrote about Haynes’ post, and now Enge has an interview out with Cutts, who elaborates more on Google’s philosophy when it comes to guest posts (among other things). Enge suggests that when doing guest posts, you create high-quality articles and get them published on “truly authoritative” sites that have a lot of editorial judgment, and Cutts agrees.

He says, “The problem is that if we look at the overall volume of guest posting, we see a large number of people who are offering guest blogs or guest blog articles where they are writing the same article and producing multiple copies of it and emailing out of the blue, and they will create the same low quality types of articles that people used to put on article directory or article bank sites.”

“If people just move away from doing article banks or article directories or article marketing to guest blogging and they don’t raise their quality thresholds for the content, then that can cause problems,” he adds. “On one hand, it’s an opportunity. On the other hand, we don’t want people to think guest blogging is the panacea that will solve all their problems.”

Enge makes an interesting point about accepting guest posts too, suggesting that if you have to ask the author to share the post with their own social accounts, you shouldn’t accept the article. Again, Cutts agrees, saying, “That’s a good way to look at it. There might be other criteria too, but certainly if someone is proud to share it, that’s a big difference than if you’re pushing them to share it.” Both agree that interviews are good ways to build links and authority.

In a separate post on his Search Engine Roundtable blog, Schwartz adds:

You can argue otherwise, but if Google sees a guest blog post with a dofollow link and that person at Google feels the guest blog post is only done with the intent of a link, then they may serve your site a penalty. Or they may not – it depends on who is reviewing it. That being said, Google is not to blame. While guest blogging and writing is and can be a great way to get exposure for your name and your company name, it has gotten to the point of being heavily abused.

He points to one SEO’s story in a Cre8asite forum thread about a site wanting to charge him nearly five grand for one post. Obviously, this is the kind of thing Google would frown upon when it comes to link building and links that flow PageRank. Essentially, these are just paid links, and even if more subtle than the average advertorial (which Google has been cracking down on in recent months), in the end it’s still link buying.

But there is plenty of guest blogging going on out there in which no money changes hands. Regardless of your intentions, it’s probably a good idea to just stick the nofollows on if you want to avoid getting penalized by Google. If it’s still something you want to do without the SEO value as a consideration, there’s a fair chance it’s the kind of content Google would want anyway.

May 24 2013

How Big Is The Latest Google Penguin Update?

Webmasters have been expecting a BIG Penguin update from Google for quite some time, and a couple of weeks ago, Google’s Matt Cutts promised that one was on the way. Finally, on Wednesday, he announced that Google had not only started the roll-out, but completed it. While it was said to be a big one, it remains to be seen just how big it has been in terms of impacting webmasters.

Have you been impacted by the latest Penguin update? Let us know in the comments.

Just what did Cutts mean by “big” anyway? When discussing the update a couple of weeks ago, he said it would be “larger”. When it rolled out, he announced that “about 2.3% of English-US queries are affected to the degree that a regular user might notice,” and that “the scope of Penguin varies by language, e.g. languages with more webspam will see more impact.”

As far as English queries go, it would appear that the update is actually smaller. The original Penguin (first called the “Webspam” update) was said to impact about 3.1% of queries in English. So perhaps this one is significantly larger in terms of other languages.

Cutts has also been tossing around the word “deeper”. In the big “What should we expect in the next few months” video released earlier this month, Cutts said this about Penguin 2.0: “So this one is a little more comprehensive than Penguin 1.0, and we expect it to go a little bit deeper, and have a little bit more of an impact than the original version of Penguin.”

Cutts talked about the update a little more in an interview with Leo Laporte on the day it rolled out, and said, “It is a leap. It’s a brand new generation of algorithms. The previous iteration of Penguin would essentially only look at the homepage of a site. The newer generation of Penguin goes much deeper. It has a really big impact in certain small areas.”

We asked Cutts if he could elaborate on that part about going deeper. He said he didn’t have anything to add:

@ccrum237 not much to add for the time being.
— Matt Cutts (@mattcutts) May 23, 2013

The whole thing has caused some confusion in the SEO community. In fact, it’s driving Search Engine Roundtable’s Barry Schwartz “absolutely crazy.” Schwartz wrote a post ranting about this “misconception,” saying:

The SEO community is translating “goes deeper” to mean that Penguin 1.0 only impacted the home page of a web site. That is absolutely false. Deeper has nothing to do with that. Those who were hit by Penguin 1.0 know all too well that their whole site suffered, not just their home page. What Matt meant by “deeper” is that Google is going deeper into their index and link graph, and more sites will be impacted by this than by the previous Penguin 1.0 update. By deeper, Matt does not mean how it impacts a specific web site architecture, but rather how it impacts the web in general.

He later updated the piece after realizing what Cutts had said in the video, adding, “Matt must mean Penguin only analyzed the links to the home page. But anyone who had a site impacted by Penguin noticed not just their home page ranking suffer. So I think that is the distinction.”

Anyhow, there have still been plenty of people complaining that they were hit by the update, though we’re also hearing from a bunch of people who saw their rankings increase. One reader says this particular update impacted his site negatively, but was not as harsh as the original Penguin. Paul T. writes:

Well, in a way I like this update better than any of the others. It is true I lost about 50% of my traffic on my main site, but the keywords only dropped a spot or two, so far anyway. The reason I like it is because it is more discriminating. It doesn’t just wipe out your whole site, but it goes page by page. Some of my smaller sites were untouched. Most of my loss came from hiring people to do automated back-linking. I thought I would be safe doing this because I was really careful with anchor text diversity, but it was not to be. I am going to try to use social signals more to try to bring back my traffic.

Another reader, Nick Stamoulis, suggests that Google could have taken data from the Link Disavow tool into consideration when putting together Penguin 2.0:

I would guess that the Disavow tool was factored into Penguin 2.0. If thousands of link owners disavowed a particular domain, I can’t imagine that is something Google didn’t pick up on. It’s interesting that they are offering site owners the chance to “tell” on spammy sites that Penguin 2.0 might have overlooked.

Cutts has tweeted about the Penguin spam form several times.

With regards to the Link Disavow tool, Google did not rule out the possibility of using it as a ranking signal when quizzed about it in the past. Back in the fall, Search Engine Land’s Danny Sullivan shared a Q&A with Matt Cutts in which he did not rule out the possibility. Sullivan asked him whether, if “someone decides to disavow links from good sites as perhaps an attempt to send signals to Google that these are bad,” Google is mining this data to better understand what bad sites are. “Right now, we’re using this data in the normal straightforward way, e.g. for reconsideration requests,” Cutts responded. “We haven’t decided whether we’ll look at this data more broadly. Even if we did, we have plenty of other ways of determining bad sites, and we have plenty of other ways of assessing that sites are actually good.”

Searchmetrics released its list of the top losers from the latest Penguin update, which you can see here. It includes some porn, travel, and game sites, as well as a few big brands like Dish and Salvation Army.

What is your opinion of Google’s latest Penguin update? Is it doing its job? Let us know in the comments.

Feb 11 2013

Google On How To Figure Out Which Links To Remove

For the past year or so, webmasters have been receiving a great deal of messages from Google about unnatural links pointing to their sites. You may know exactly which links Google doesn’t like, but there’s also a good chance you may not. As we’ve seen, a lot of people have gone on link removal request rampages, greatly overreacting, and seeking the takedown of legitimate links out of fear that Google might not like them.

In the latest Webmaster Help video, Google’s Matt Cutts discusses how to figure out which links to get removed. The video is a response to this user-submitted question:

Google Webmaster Tools says I have “unnatural links,” but gives little help as to which specific links are bad. Since I have never purchased links, I don’t know which ones to have removed, and I’m scared of removing good ones, which will hurt my traffic. Suggestions?

“We’ve tried to become more transparent, and when we were saying, ‘Links were affecting the reputation of an entire site,’ we would tell people about that,” says Cutts. “And more recently we’ve been telling people, and opening up and saying, ‘Hey, we still like your site. Your site, overall, might be good, but maybe there are some individual links to your site that we don’t trust.’ Now, the problem is that we weren’t, at that time, giving specific examples. So one feature that we rolled out is the ability to sort by recent discovery of links, so you can actually get the date of when we discovered a link. So if you sort that way, you can look for the recent links. But a feature that we are working on – we are in the process of rolling out – is that we will actually – we will basically give you examples.”

“So it’s a…you know, as we’re building the incident, whenever a webmaster analyst or something like that is saying, ‘Okay, these are links not to trust,’ they’ll include an example link,” continues Cutts. “You might get one, you might get two, you might get three, depending, but basically it will give you an idea of the sorts of links that we are no longer trusting. Now, it’s not exhaustive. It’s not comprehensive, but it should give you a flavor, you know. Is it a bunch of widget links? Were you doing a bunch of keyword-rich anchor text in article bank or article marketing type stuff? Maybe you weren’t trying to do paid links, but maybe you hired an agency, and it turns out they were doing paid links, and you didn’t realize it.”

“I would look in the text of the messages,” he concludes. “Over time, we’re working really hard on trying to include an example link or two, so that when you get that message, you have an idea of exactly where to look.”
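If you export that “links to your site” data, the sort Cutts describes is easy to reproduce offline. Here is a minimal sketch, assuming Python and a CSV export with “link” and “first_discovered” columns; those column names are placeholders for whatever headers the actual export file uses.

```python
# Minimal sketch: sort an exported link report by discovery date so the most
# recently discovered links (the ones Cutts suggests checking first) float to
# the top. Column names are hypothetical; adjust them to match your export.
import csv
from datetime import datetime

def recent_links(csv_path, limit=50):
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    rows.sort(
        key=lambda r: datetime.strptime(r["first_discovered"], "%Y-%m-%d"),
        reverse=True,
    )
    return [(r["first_discovered"], r["link"]) for r in rows[:limit]]

for discovered, link in recent_links("links_export.csv"):
    print(discovered, link)
```

The most recently discovered links are the likeliest place to find whatever triggered the message, which is exactly why Cutts points to that sort order.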

Feb 5 2013

Matt Cutts Talks Referer Spam In Latest Video

Google’s Matt Cutts is back online, and cranking out the Webmaster Help videos. He tweeted a link to the second of the latest series today, and this one is about referer spam coming from a YouTube video. The user-submitted question is:

Why does a certain YouTube video appear to be visiting my blogspot blog? Take this video for example, it keeps appearing in my Blogger Dashboard as a referral.

Cutts says they looked at the video, and found in the comments that there were multiple people complaining about the same problem: that the video spammed their blog.

“This is an instance of what we call referer spam,” he says. “A referer is just a simple HTTP header that is passed along when a browser goes from one page to another page, and it normally is used to indicate where the user’s coming from. Now, people can use that, and change the referer to be anything that they want. They can make it empty, or there are some people who will set the referer to a page they want to promote, and then they will just visit tons of pages around the web. All the people that look at the referers see that, and say, ‘Oh, maybe I should go and check that out.’ And the link – whenever there’s a referer – it doesn’t mean that there was necessarily a link, because you can make that referer anything you want, so there are some people who try to drive traffic by visiting a ton of websites, even with an automated script, and setting the referer to be the URL that they want to promote.”

He notes that some of the other comments on the YouTube video say that its creator is well known, and has no reason to spam people. Cutts notes that it doesn’t necessarily have to be coming from the actual creator. “The thing to know is that there’s no authentication with referer. Anybody can make a browser, and set the referer,” he says. “You can’t automatically assume it was the owner of that URL if you see something showing up in your dashboard.”

Basically, you should just ignore it, he says.
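Cutts’ point about there being no authentication is easy to see in practice: the Referer header is just text the client chooses to send. Here is a minimal sketch, assuming Python with the third-party requests library and placeholder URLs, of how a script can claim to come from any page it wants to promote.

```python
# Minimal sketch: the Referer header is plain client-supplied text, so a
# script can "visit" pages while claiming to come from any URL it likes.
# That is all referer spam is. URLs below are placeholders.
import requests

PROMOTED_URL = "http://video-i-want-you-to-see.example/"

def visit_with_fake_referer(target_url):
    # Nothing verifies this header; the target's analytics will simply log it.
    headers = {"Referer": PROMOTED_URL}
    return requests.get(target_url, headers=headers, timeout=10)

response = visit_with_fake_referer("http://someones-blog.example/")
print(response.status_code)
```

Which is why, as Cutts says, a referrer showing up in your dashboard proves nothing about who actually sent the visit, and the sane response is to ignore it.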

Aug 17 2012

Matt Cutts Clarifies What He Said About Twitter (On Twitter)

Matt Cutts appeared at Search Engine Strategies this week. In addition to talking up the Knowledge Graph and scaring people about the Penguin update, he talked briefly about Google’s relationship with Twitter.

First, we linked to a liveblogged account of Cutts’ session from State Of Search, which paraphrased him as saying:

Danny [Sullivan] asks, ‘Can’t you see how many times a page is tweeted? I can see it, I could call you.’ Cutts: We can do it relatively well, but if we could crawl Twitter in the full way we can, their infrastructure wouldn’t be able to handle it.

In a later article on what SEOmoz CEO Rand Fishkin had to say about Twitter’s impact on SEO, we also referenced Brafton’s version, which paraphrased Cutts as saying:

People were upset when Realtime results went away! But that platform is a private service. If Twitter wants to suspend someone’s service, they can. Google was able to crawl Twitter until its deal ended, and Google was no longer able to crawl those pages. As such, Google is cautious about using that as a signal – Twitter can shut it off at any time. We’re always going to be looking for ways to identify who is valuable in the real world. We want to return quality results that have real world reputability, and quality factors are key – Google indexes 20 billion pages per day.

The Brafton piece also indicated that Cutts said that Google can’t crawl Facebook pages or Twitter accounts. It was later updated, but this led to Fishkin asking Cutts about that on Twitter, which led to some more from Cutts on the matter:

Rand Fishkin (@randfish): “(Google) can’t crawl Facebook pages or Twitter accounts” – @mattcutts via http://t.co/WZOhINjA Really? What’s http://t.co/pDL1RXdQ then?

Matt Cutts (@mattcutts): @randfish what I actually said was: when Twitter cut off our firehose, they also blocked Googlebot from crawling. For 1.5 months, in fact.

Rand Fishkin (@randfish): @mattcutts Ah. Gotcha. So it was misquoted then. Thank you for the clarification!

Matt Cutts (@mattcutts): @randfish also: post-firehose cutoff & post-crawl cutoff, there are >400M tweets/day. Unclear Twitter could/would stand us webcrawling that.