Feb 28 2014

Googler Rachel Searles Writes Sci-Fi Novel

Googler Rachel Searles has a new sci-fi book out called The Lost Planet. Matt Cutts points us to it, and has only good things to say about it. Amazon has a pretty substantial preview. Searles is from the search quality team. Here’s a video of her talking about reconsideration request tips, which we covered a couple of years ago. You can check out her blog here, where she talks about the book a great deal.

Image via Rachel Searles’ blog

Feb 27 2014

Google Gives You A Form To Report Scrapers Ranking Higher Than Your Original Content

Google has a form called the Scraper Report for people to report when they see a scraper site ranking ahead of the original content that it’s scraping. Head of webspam Matt Cutts tweeted:

If you see a scraper URL outranking the original source of content in Google, please tell us about it: http://t.co/WohXQmI45X

— Matt Cutts (@mattcutts) February 27, 2014

The form asks for the URL on your site where the content was taken from, the exact URL on the scraper site, and the Google search result URL that demonstrates the problem. It then asks you to confirm, with a checkbox, that your site is following Google’s Webmaster Guidelines and is not affected by any manual actions.

Danny Sullivan asks a good question:

@mattcutts will you actually do removals with this form, or are you harvesting signals to try and prevent the problem algorithmically?

— Danny Sullivan (@dannysullivan) February 27, 2014

No answer on that so far, though Sullivan suggests in an article that Google will “potentially” use it as a way to improve its ranking system. Hat tip to Larry Kim for spotting this one:

.@mattcutts I think I have spotted one, Matt. Note the similarities in the content text: pic.twitter.com/uHux3rK57f

— dan barker (@danbarker) February 27, 2014
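Nothing in Google’s form requires this, but before filing a report it can help to confirm how much of your text a suspected scraper actually reuses, in the spirit of Barker’s “note the similarities” tweet. Here is a rough sketch of that check using only the Python standard library; the URLs and the 80% threshold are placeholders, not anything Google specifies.

```python
# Rough, unofficial check of how much of your page's text a suspected scraper
# reuses, before filing Google's Scraper Report. The URLs below are placeholders.
import urllib.request
from difflib import SequenceMatcher
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collects visible text, skipping script/style blocks."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())


def page_text(url):
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="ignore")
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)


original = page_text("https://example.com/my-original-article")  # placeholder URL
suspected = page_text("https://example.net/scraped-copy")        # placeholder URL

ratio = SequenceMatcher(None, original, suspected).ratio()
print(f"Text similarity: {ratio:.0%}")
if ratio > 0.8:  # arbitrary threshold, tune to taste
    print("Looks heavily scraped; worth reporting via the Scraper Report form.")
```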

Feb 26 2014

Google: You Don’t Have To Dumb Your Content Down ‘That Much’

Google’s Matt Cutts answers an interesting question in a new “Webmaster Help” video: “Should I write content that is easier to read or more scientific? Will I rank better if I write for 6th graders?”

Do you think Google should give higher rankings to content that is as well-researched as possible, or to content that is easier for the layman to understand? Share your thoughts in the comments.

This is a great question as we begin year three of the post-Panda Google.

“This is a really interesting question,” says Cutts. “I spent a lot more time thinking about it than I did a lot of other questions today. I really feel like the clarity of what you write matters a lot.”

He says, “I don’t know if you guys have ever had this happen, but you land on Wikipedia, and you’re searching for information – background information – about a topic, and it’s way too technical. It uses all the scientific terms or it’s talking about a muscle or whatever, and it’s really hyper-scientific, but it’s not all that understandable, and so you see this sort of revival of people who are interested in things like ‘Explain it to me like I’m a five-year-old,’ right? And you don’t have to dumb it down that much, but if you are erring on the side of clarity, and on the side of something that’s going to be understandable, you’ll be in much better shape because regular people can get it, and then if you want to, feel free to include the scientific terms or the industry jargon, the lingo…whatever it is, but if somebody lands on your page, and it’s just an opaque wall of scientific stuff, you need to find some way to pull people in to get them interested, to get them enticed in trying to pick up whatever concept it is you want to explain.”

Okay, it doesn’t sound so bad the way Cutts describes it, and perhaps I’m coming off a little sensational here, but it’s interesting that Cutts used the phrase, “You don’t have to dumb it down that much.” This is a topic we discussed last fall, when Googler Ryan Moulton said in a conversation on Hacker News, “There’s a balance between popularity and quality that we try to be very careful with. Ranking isn’t entirely one or the other. It doesn’t help to give people a better page if they aren’t going to click on it anyways.”

He then elaborated:

Suppose you search for something like [pinched nerve ibuprofen]. The top two results currently are http://www.mayoclinic.com/health/pinched-nerve/DS00879/DSECT… and http://answers.yahoo.com/question/index?qid=20071010035254AA…

Almost anyone would agree that the mayoclinic result is higher quality. It’s written by professional physicians at a world renowned institution. However, getting the answer to your question requires reading a lot of text. You have to be comfortable with words like “Nonsteroidal anti-inflammatory drugs,” which a lot of people aren’t. Half of people aren’t literate enough to read their prescription drug labels: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1831578/

The answer on yahoo answers is provided by “auntcookie84.” I have no idea who she is, whether she’s qualified to provide this information, or whether the information is correct. However, I have no trouble whatsoever reading what she wrote, regardless of how literate I am.

That’s the balance we have to strike. You could imagine that the most accurate and up to date information would be in the midst of a recent academic paper, but ranking that at 1 wouldn’t actually help many people.

This makes for a pretty interesting debate.
Should Google bury the most well-researched and accurate information just to help people find something they can read more easily, even if it’s not as high quality? Doesn’t this kind of go against the guidelines Google set forth after the Panda update? You know, like these specific questions Google suggested you ask about your content:

  • “Would you trust the information presented in this article?” (What’s more trustworthy, the scientific explanation from a reputable site or auntcookie’s take on Yahoo Answers?)
  • “Is this article written by an expert or enthusiast who knows the topic well, or is it more shallow in nature?” (Uh…)
  • “Does the article provide original content or information, original reporting, original research, or original analysis?” (Original research and analysis, to me, suggests that someone is going to know and use the lingo.)
  • “Does the page provide substantial value when compared to other pages in search results?” (Couldn’t value include educating me about the terminology I might not otherwise understand?)
  • “Is the site a recognized authority on its topic?” (You mean the type of authority that would use the terminology associated with the topic?)
  • “For a health related query, would you trust information from this site?” (Again, are you really trusting auntcookie on Yahoo Answers over Mayo Clinic?)
  • “Does this article provide a complete or comprehensive description of the topic?” (Hmm. Complete and comprehensive. You mean as opposed to dumbed down for the layman?)
  • “Does this article contain insightful analysis or interesting information that is beyond obvious?” (I’m not making this up. Here’s Google’s blog post listing these right here.)
  • “Are the pages produced with great care and attention to detail vs. less attention to detail?” (You get the idea.)

Maybe I’m missing something, but it seems like Google has been encouraging people to make their content as thorough, detailed, and authoritative as possible. I don’t see “Is your content dumbed down for clarity’s sake?” on the list. Of course, that was nearly three years ago.

If quality is really the goal (as Google has said over and over again in the past), doesn’t the responsibility for additional research and additional clicking of links rest with the searcher? If I don’t understand what the most accurate and relevant result is saying, isn’t it my responsibility to continue to educate myself, perhaps by looking at other sources of information and looking up the things I don’t understand? But that would go against Google trying to get users answers as quickly as possible. That must be why Google is trying to give you the answers itself rather than having to send you to third-party sites. Too bad those answers aren’t always reliable.

Cutts continues in the video, “So I would argue first and foremost, you need to explain it well, and then if you can manage to do that while talking about the science or being scientific, that’s great, but the clarity of what you do, and how you explain it often matters almost as much as what you’re actually saying because if you’re saying something important, but you can’t get it across, then sometimes you never got it across in the first place, and it ends up falling on deaf ears.”

Okay, sure, but isn’t this just going to encourage site owners to dumb down content at the risk of educating users less? I don’t think that’s what Cutts is trying to say here, but people are going to do anything they can to get their sites ranked better.
At least he suggests trying to use both layman’s terms and the more scientific stuff.

“It varies,” he says. “If you’re talking only to industry professionals – exterminators who are talking about the scientific names of bugs, and your audience is only, you know, exterminator experts – sure, then that might make sense, but in general, I would try to make things as natural sounding as possible – even to the degree that when I’m writing a blog post, I’ll sometimes read it out loud to try to catch what the snags are, where things are gonna be unclear. Anything you do like that, you’ll end up with more polished writing, and that’s more likely to stand the test of time than something that’s just a few, scientific mumbo jumbo stuff that you just spit out really quickly.”

I’m not sure where the spitting stuff out really quickly thing comes into play here. The “scientific mumbo jumbo” (otherwise known as facts and the actual terminology of things) tends to appear in lengthy, texty content, like Moulton suggested, no?

Google, of course, is trying to get better at natural language with updates like Hummingbird and various other acquisitions and tweaks. It should only help if you craft your content around that.

“It’s not going to make that much of a difference as far as ranking,” Cutts says. “I would think about the words that a user is going to type, which is typically going to be the layman’s terms – the regular words rather than the super scientific stuff – but you can find ways to include both of them, but I would try to err on the side of clarity if you can.”

So yeah, dumb it down. But not too much. Just enough. But also include the smart stuff. Just don’t make it too smart.

What do you think? Should Google dumb down search results to give users things that are easier to digest, or should it be the searcher’s responsibility to do further research if they don’t understand the accurate and well-researched information they’re consuming? Either way, isn’t this kind of a mixed message compared to the guidance Google has always given regarding “quality” content? Share your thoughts.

For the record, I have nothing against auntcookie. I know nothing about auntcookie, but that’s kind of the point.
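If you’re wondering where your own copy falls on the “write for 6th graders” scale the question alludes to, a Flesch-Kincaid grade estimate is a common, rough proxy. Below is a minimal sketch with a crude syllable heuristic; it illustrates the readability idea only, and is not a signal Google has said it uses.

```python
# Minimal Flesch-Kincaid grade-level estimate: a rough proxy for the
# "write for 6th graders" question, not anything Google has confirmed using.
import re


def count_syllables(word):
    """Crude heuristic: count vowel groups, with a silent-'e' adjustment."""
    word = word.lower()
    count = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)


def flesch_kincaid_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch-Kincaid grade formula.
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59


sample = ("Nonsteroidal anti-inflammatory drugs reduce inflammation by "
          "inhibiting cyclooxygenase enzymes.")
print(f"Approximate grade level: {flesch_kincaid_grade(sample):.1f}")
```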

Feb 25 2014

Google Moves Link Network Focus To Poland

Google continues to wipe out link networks across the Internet, apparently on a country-by-country basis. Earlier this month, Google’s Matt Cutts told Twitter followers about Google cracking down on German link networks. That is still happening, but the focus is also moving over to Poland. Here’s the latest on the situation from Cutts:

Not done with Germany yet, but we just took action on two Polish link networks + a reminder blog post: http://t.co/p4tHWx5vHF

— Matt Cutts (@mattcutts) February 24, 2014

That links to Google’s Poland blog, where the company offers a post on unnatural links similar to the one it put on its German Webmaster blog earlier this month. While Google has called out specific link networks by name on numerous occasions, there are no specifics this time.

Image via YouTube

Feb 24 2014

Google’s Cutts Talks EXIF Data As A Ranking Factor

Google may use EXIF data attached to images as a ranking factor in search results. This isn’t exactly a new revelation, but it is the topic of a new “Webmaster Help” video from the company. Matt Cutts responds to the submitted question, “Does Google use EXIF data from pictures as a ranking factor?”

“The short answer is: We did a blog post, in I think April of 2012, where we talked about it, and we did say that we reserve the right to use EXIF or other sort of metadata that we find about an image in order to help people find information,” Cutts says. “And at least in the version of image search as it existed back then, when you clicked on an image, we would sometimes show the information from EXIF data in the righthand sidebar, so it is something that Google is able to parse out, and I think we do reserve the right to use it in ranking.”

“So if you’re taking pictures, I would go ahead and embed that sort of information if it’s available within your camera because, you know, if someone eventually wants to search for camera types or focal lengths or dates or something like that, it can possibly be a useful source of information,” he continues. “So I’d go ahead and include it if it’s already there. I wouldn’t worry about adding it if it’s not there. But we do reserve the right to use it as potentially a ranking factor.”

The blog post he was talking about was called “1000 Words About Images,” and it gives some tips on helping Google index your images, along with a Q&A section. In that part, one of the questions is: What happens to the EXIF, XMP and other metadata my images contain? The answer was: “We may use any information we find to help our users find what they’re looking for more easily. Additionally, information like EXIF data may be displayed in the right-hand sidebar of the interstitial page that appears when you click on an image.”

Google has made significant changes to image search since that post was written, causing a lot of sites to get a great deal less traffic from it.

Image via YouTube
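Cutts’s advice boils down to: keep whatever EXIF your camera already writes. If you want to see what metadata a given image file actually carries, a short sketch like this one will dump it. It assumes the third-party Pillow library and a placeholder filename, and isn’t a tool Google mentions.

```python
# Small sketch for inspecting the EXIF metadata a photo already carries,
# using the third-party Pillow library (pip install Pillow).
from PIL import Image
from PIL.ExifTags import TAGS


def dump_exif(path):
    with Image.open(path) as img:
        exif = img.getexif()
        if not exif:
            print(f"{path}: no EXIF data found")
            return
        for tag_id, value in exif.items():
            name = TAGS.get(tag_id, tag_id)  # map numeric tag IDs to readable names
            print(f"{name}: {value}")


dump_exif("photo.jpg")  # placeholder filename
```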

Feb 19 2014

Matt Cutts: Backlinks Still Super Important To Search Quality

In case you were wondering, backlinks are still really important to how Google views the quality of your site. Head of webspam Matt Cutts said as much in the latest “Webmaster Help” video, in which he discusses Google testing a version of its search engine that excludes backlinks as a ranking signal. The discussion was prompted when a user asked if Google has a version of the search engine that totally excludes backlink relevance.

“We don’t have a version like that that is exposed to the public, but we have run experiments like that internally, and the quality looks much, much worse,” he says. “It turns out backlinks, even though there’s some noise, and certainly a lot of spam, for the most part are still a really, really big win in terms of quality for search results.”

“We’ve played around with the idea of turning off backlink relevance, and at least for now, backlink relevance still really helps in terms of making sure we return the best, most relevant, most topical set of search results,” Cutts adds.

I wonder how big a role backlinks are playing in these results.

Image via YouTube

Feb 14 2014

Donkey Cutts Is Donkey Kong With Matt Cutts

Somebody has made a Donkey Kong game with Matt Cutts as the ape. It’s called, appropriately, Donkey Cutts. In the game, social signals increase your rank, and you get ten points for jumping over penguins and pandas, or lose the points if they get you. You can also gain points for “awesome content,” and more links and shares. Penalties can be “fixed with a trusty SEO hammer”. You know, like the hammer in Donkey Kong.

The game comes from NetVoucherCodes.co.uk, and it’s a hell of a lot harder to lose than the real Donkey Kong. Cutts stands at the top like Kong himself, shouting things like “Nofollow any paid links!” and “Make your links from blog comments genuine!”

We haven’t seen any comment from Cutts yet.

Via Search Engine Land

Image via NetVoucherCodes

Feb 13 2014

Matt Cutts Talks About A Typical Day In Spam-Fighting

The latest “Webmaster Help” video from Google is an interesting (and long) one. Google webspam king Matt Cutts talks about a day in the life of someone on the webspam team. Here’s the set of questions he answers verbatim:

What is a day in the life of a search spam team member like? What is the evolution of decisions in terms of how they decide which aspects of the search algorithm to update? Will certain things within the algorithm never be considered for removal?

He begins by noting that the team is made up of both engineers and manual spam fighters, and he addresses each group separately. First, he gives a rough idea of a manual spam fighter’s day.

“Typically it’s a mix of reactive spam-fighting and proactive spam-fighting,” he says. “So reactive would mean we get a spam report or somehow we detect that someone is spamming Google. Well, we have to react to that. We have to figure out how do we make things better, and so a certain amount of every day is just making sure that the spammers don’t infest the search results, and make the search experience horrible for everyone. So that’s sort of like not hand to hand combat, but it is saying ‘yes’ or ‘no’ this is spam, or trying to find the spam that is currently ranking relatively well. And then in the process of doing that, the best spam-fighters I know are fantastic at seeing the trends, seeing the patterns in that spam, and then moving into a proactive mode.”

This would involve trying to figure out how the spammers are ranking so highly and what loophole they’re exploiting, then trying to fix it at the root of the problem. This could involve interacting with engineers or just identifying specific spammers.

“Engineers,” he says. “They absolutely look at the data. They absolutely look at examples of spam, but your average day is usually spent coding and doing testing of ideas. So you’ll write up an algorithm that you think will be able to stop a particular type of spam. There’s no one algorithm that will stop every single type of spam. You know, Penguin, for example, is really good at several types of spam, but it doesn’t tackle hacked sites, for example. So if you are an engineer, you might be working on, ‘How do I detect hacked sites more accurately?’”

He says they would come up with the best techniques and signals they can use, and write an algorithm that tries to catch as many hacked sites as possible while safely preserving the sites that are innocent. Then they test it, and run it across the index or run an experiment with ratings from URLs to see if things look better. Live traffic experiments, seeing what people click on, he says, help them identify the false positives.

On the “evolution of decisions” part of the question, Cutts says, “We’re always going back and revisiting, and saying, ‘Okay, is this algorithm still effective? Is this algorithm still necessary given this new algorithm?’ And one thing that the quality team (the knowledge team) does very well is trying to go back and ask ourselves, ‘Okay, let’s revisit our assumptions. Let’s say if we were starting from scratch, would we do it this way? What is broken, or stale, or outdated, or defunct compared to some other new way of coming up with this?’ And so we don’t just try to have a lot of different tripwires that would catch a lot of different types of spam, you try to come up with elegant ways that will always catch spam, and try to highlight new types of spam as they occur.”

He goes on for about another three minutes after that.

Image via YouTube
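The engineering loop Cutts sketches (write a detector, run it over labeled examples, watch the false positives) is, at its core, precision and recall bookkeeping. Here is a toy illustration of that offline step; the “detector” and the labeled pages are invented for the example and bear no relation to Google’s actual systems.

```python
# Toy illustration of the offline evaluation loop Cutts describes: run a
# candidate spam detector over labeled pages and track precision (false
# positives hurt innocent sites) and recall (missed spam). All data is made up.

def looks_hacked(page):
    """Stand-in 'algorithm': flag pages stuffed with spammy terms."""
    spam_terms = ("cheap viagra", "casino bonus", "payday loan")
    return sum(page["text"].lower().count(t) for t in spam_terms) >= 2


labeled_pages = [  # (page, is_actually_spam) -- fabricated examples
    ({"url": "site-a.example/post", "text": "cheap viagra casino bonus cheap viagra"}, True),
    ({"url": "site-b.example/recipe", "text": "how to bake sourdough bread at home"}, False),
    ({"url": "site-c.example/blog", "text": "payday loan payday loan casino bonus"}, True),
    ({"url": "site-d.example/review", "text": "an honest review of a budget camera"}, False),
]

tp = fp = fn = 0
for page, is_spam in labeled_pages:
    flagged = looks_hacked(page)
    if flagged and is_spam:
        tp += 1
    elif flagged and not is_spam:
        fp += 1   # an innocent site caught: the case Cutts says must be avoided
    elif not flagged and is_spam:
        fn += 1

precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
print(f"precision={precision:.2f} recall={recall:.2f}")
```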

Feb 11 2014

Cutts: Don’t Worry About Grammatical Errors In Your Blog Comments

In his latest “Webmaster Help” video, Google’s Matt Cutts answers a question that a lot of people have probably wondered about, particularly since Google launched the Panda update in 2011: how do the comments on your blog affect how Google sees the quality of your pages?

The exact wording of the question was: Should I correct the grammar on comments to my WordPress blog? Should I not approve comments with poor grammar? Will approving comments with grammar issues affect my page’s quality rating?

Long story short: don’t worry about it.

“I wouldn’t worry about the grammar in your comments. As long as the grammar on your own page is fine, you know, there are people on the Internet, and they write things, and it doesn’t always make sense. You can see nonsense comments, you know, on YouTube and other large properties, and that doesn’t mean a YouTube video won’t be able to rank. Just make sure that your own content is high quality, and you might want to make sure that people aren’t leaving spam comments. You know, if you’ve got a bot, then they might leave bad grammar, but if it’s a real person, and they’re leaving a comment, and the grammar is not quite perfect, that usually reflects more on them than it does on your site, so I wouldn’t stress out about that.”

You would think the spam would reflect more on them too, but go ahead and continue stressing out about that.

Images via YouTube

Feb 11 2014

Google Updated The Page Layout Algorithm Last Week

Google’s Matt Cutts announced on Twitter that the search engine launched a data refresh for its “page layout” algorithm last week. If you’ll recall, this is the Google update that specifically looks at how much content a page has “above the fold”. The idea is that you don’t want your site’s content to be pushed down or dwarfed by ads and other non-content material. You want it to be simple for users to find your content without having to scroll.

SEO folks: we recently launched a refresh of this algorithm: http://t.co/KKSXm8FqZW Visible to outside world on ~Feb. 6th.

— Matt Cutts (@mattcutts) February 10, 2014

Cutts first announced the update in January 2012. He said this at the time:

Rather than scrolling down the page past a slew of ads, users want to see content right away. So sites that don’t have much content “above-the-fold” can be affected by this change. If you click on a website and the part of the website you see first either doesn’t have a lot of visible content above-the-fold or dedicates a large fraction of the site’s initial screen real estate to ads, that’s not a very good user experience. Such sites may not rank as highly going forward.

We understand that placing ads above-the-fold is quite common for many websites; these ads often perform well and help publishers monetize online content. This algorithmic change does not affect sites who place ads above-the-fold to a normal degree, but affects sites that go much further to load the top of the page with ads to an excessive degree or that make it hard to find the actual original content on the page. This new algorithmic improvement tends to impact sites where there is only a small amount of visible content above-the-fold or relevant content is persistently pushed down by large blocks of ads.

The initial update affected less than 1% of searches globally, Google said. It’s unclear how far-reaching this data refresh is. Either way, if you’ve suddenly lost Google traffic, you may want to check out your site’s design. Unlike some of its other updates, this one shouldn’t be too hard to recover from if you were hit.

You should check out Google’s browser size tool, which lets you get an idea of how much of your page different users are seeing.

Image via Google
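If you want a rough sense of what the browser size tool is getting at (how much of the first viewport your ads occupy), a headless-browser check along these lines can help. The viewport size, ad selectors, and URL below are assumptions for illustration; this is a sanity check, not a reconstruction of Google’s page layout algorithm.

```python
# Rough sanity check in the spirit of Google's browser size tool: how much of
# the first viewport does ad-like markup occupy? Uses Selenium with headless
# Chrome; the viewport size and ad selectors are assumptions for illustration.
from selenium import webdriver
from selenium.webdriver.common.by import By

VIEWPORT_W, VIEWPORT_H = 1366, 768  # a common desktop "fold"
AD_SELECTORS = ["iframe[id*='google_ads']", ".adsbygoogle", "[class*='ad-slot']"]

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")
driver = webdriver.Chrome(options=options)
driver.set_window_size(VIEWPORT_W, VIEWPORT_H)
driver.get("https://example.com/")  # placeholder URL

ad_area = 0
for selector in AD_SELECTORS:
    for el in driver.find_elements(By.CSS_SELECTOR, selector):
        rect = el.rect  # {'x', 'y', 'width', 'height'} in CSS pixels
        # Count only the part of the element that falls above the fold.
        visible_h = max(0, min(rect["y"] + rect["height"], VIEWPORT_H) - max(rect["y"], 0))
        visible_w = max(0, min(rect["x"] + rect["width"], VIEWPORT_W) - max(rect["x"], 0))
        ad_area += visible_h * visible_w

fraction = ad_area / (VIEWPORT_W * VIEWPORT_H)
print(f"Ads cover roughly {fraction:.0%} of the first viewport")
driver.quit()
```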