Apr 30 2014

Matt Cutts’ Floating Head Reminds You About Your Pages’ Body Content

Google has released a new public service announcement about putting content in the body of webpages. Naturally, this features Matt Cutts’ head floating in the air.

“It’s important to pay attention to the head of a document, but you should also pay attention to the body of a document,” he said. “Head might have meta description, meta tags, all that sort of stuff. If you want to put stuff in the head, that’s great. Make sure it’s unique. Don’t just do duplicate content. But stuff in the body makes a really big difference as well. If you don’t have the text – the words that will really match on a page – then it’s going to be hard for us to return that page to users. A lot of people get caught up in descriptions, meta keywords, thinking about all those kinds of things. Don’t just think about the head. Also think about the body, because the body matters as well.”

I find it odd that Google would feel the need to make an announcement about this, but apparently people are forgetting about the body so much that it needed to be done. Come on.

Image via YouTube
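As a rough illustration of the point (my sketch, not anything from Google), here’s a minimal Python script that fetches a page and estimates how many words of plain text its body actually contains; the URL and the word-count threshold are illustrative assumptions.

```python
# A minimal sketch: fetch a page and estimate how much indexable text is in the
# <body>, as opposed to metadata in the <head>. URL and threshold are assumptions.
import re
import urllib.request

def body_word_count(url):
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    match = re.search(r"<body.*?>(.*)</body>", html, re.S | re.I)
    body = match.group(1) if match else ""
    # Drop scripts/styles, then strip remaining tags to approximate visible text.
    body = re.sub(r"<script.*?</script>|<style.*?</style>", " ", body, flags=re.S | re.I)
    text = re.sub(r"<[^>]+>", " ", body)
    return len(text.split())

url = "http://example.com/"  # hypothetical page to check
words = body_word_count(url)
print(f"{url}: ~{words} words of body text")
if words < 100:  # arbitrary illustrative threshold
    print("Thin body content; there may be little for a search engine to match on.")
```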

Apr 28 2014

Google On Criteria For Titles In Search Results

Google has talked about titles in search results in multiple videos in the past, but once again takes on the topic in the latest Webmaster Help video. They keep getting questions about it, so why not? In fact, Cutts shares two different questions related to titles in this particular video.

“Basically, whenever we try to choose the title or decide which title to show in a search result, we’re looking for a concise description of the page that’s also relevant to the query,” Cutts says. “So there’s a few criteria that we look at. Number one, we try to find something that’s relatively short. Number two, we want to have a good description of the page, and ideally the site that the page is on. Number three, we also want to know that it’s relevant to the query somehow. So if your existing HTML title fits those criteria, then often times the default will be to just use your title. So in an ideal world it would accurately describe the page and the site, it would be relevant to the query, and it would also be somewhat short.”

He continues, “Now, if your current title, as best as we can tell, doesn’t match that, then a user who types in something, and doesn’t see something related to their query, or doesn’t have a good idea about what exactly this page is going to be, is less likely to click on it. So in those kinds of cases, we might dig a little bit deeper. We might use content on your page. We might look at the links that point to your page, and incorporate some text from those links. We might even use the Open Directory Project to try to help figure out what a good title would be. But the thing to bear in mind is that in each of these cases, we’re looking for the best title that will help a user assess whether that’s what they’re looking for. So if you want to control the title that’s being shown, you can’t completely control it, but you can try to anticipate what a user is going to type, and then make sure that your title reflects not only something about that query or the page that you’re on, but also includes sort of the site that you’re on, or tries to give some context so that the user knows what they’re going to get whenever they’re clicking on it.”

Google offers tips for creating descriptive page titles in its help center here. It suggests making sure each page on your site has a title specified in the title tag, for starters. It says to keep them descriptive and concise, to avoid keyword stuffing, to avoid repeated or boilerplate titles, to brand your titles, and to be careful about disallowing search engines. It gets into significantly more detail about each of these things, as well as about how it generates titles when the site fails to meet the criteria.

The page also includes this old video of Cutts talking about snippets in general:

Here’s a video from five years ago in which Matt talks about changing titles as well:

Image via YouTube
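Those tips lend themselves to a simple automated audit. Below is a minimal sketch (my illustration, not a Google tool) that checks a handful of hypothetical URLs for missing, overly long, and duplicated titles; the 60-character limit is an assumption for the example, not a published rule.

```python
# A minimal sketch for auditing <title> tags against the criteria above:
# every page has a title, titles stay concise, and none are repeated.
import urllib.request
from collections import Counter
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Collects the text inside the first <title> element."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data

def get_title(url):
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
    parser = TitleParser()
    parser.feed(html)
    return parser.title.strip()

pages = ["http://example.com/", "http://example.com/about"]  # hypothetical URLs
titles = {url: get_title(url) for url in pages}

for url, title in titles.items():
    if not title:
        print(f"{url}: missing <title>")
    elif len(title) > 60:  # illustrative length threshold, not a Google rule
        print(f"{url}: title may be too long ({len(title)} chars)")

for title, count in Counter(titles.values()).items():
    if count > 1 and title:
        print(f"duplicate/boilerplate title used on {count} pages: {title!r}")
```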

Apr 23 2014

Google: Small Sites Can Outrank Big Sites

The latest Webmaster Help video from Google takes on a timeless subject: small sites being able to outrank big sites. This time, Matt Cutts specifically tackles the following question:

How can smaller sites with superior content ever rank over sites with superior traffic? It’s a vicious circle: A regional or national brick-and-mortar brand has higher traffic, which leads to a higher rank, which leads to higher traffic, ad infinitum.

Google rephrased the question for the YouTube title as “How can small sites become popular?”

Cutts says, “Let me disagree a little bit with the premise of your question, which is just because you have some national brand, that automatically leads to higher traffic or higher rank. Over and over again, we see the sites that are smart enough to be agile, and be dynamic, and respond quickly, and roll out new ideas much faster than these sort of lumbering, larger sites, can often rank higher in Google search results. And it’s not the case that the smaller site with superior content can’t outdo the larger sites. That’s how the smaller sites often become the larger sites, right? You think about something like MySpace, and then Facebook, or Facebook, and then Instagram. And all these small sites have often become very big. Even Alta Vista and Google, because they do a better job of focusing on the user experience. They return something that adds more value.”

“If it’s a research report organization, the reports are higher quality or they’re more insightful, or they look deeper into the issues,” he continues. “If it’s somebody that does analysis, their analysis is just more robust.”

Of course, sometimes they like the dumbed down version. But don’t worry, you don’t have to dumb down your content that much.

“Whatever area you’re in, if you’re doing it better than the other incumbents, then over time, you can expect to perform better, and better, and better,” Cutts says. “But you do have to also bear in mind, if you have a one-person website, taking on a 200-person website is going to be hard at first. So think about concentrating on a smaller topic area – one niche – and sort of say, on this subject area – on this particular area, make sure you cover it really, really well, and then you can sort of build out from that smaller area until you become larger, and larger, and larger.”

“If you look at the history of the web, over and over again, you see people competing on a level playing field, and because there’s very little friction in changing where you go, and which apps you use, and which websites you visit, the small guys absolutely can outperform the larger guys as long as they do a really good job at it,” he adds. “So good luck with that. I hope it works well for you. And don’t stop trying to produce superior content, because over time, that’s one of the best ways to rank higher on the web.”

Image via YouTube

Apr 21 2014

Google’s ‘Rules Of Thumb’ For When You Buy A Domain

Google has a new Webmaster Help video out, in which Matt Cutts talks about buying domains that have had trouble with Google in the past, and what to do about it. Here’s the specific question he addresses:

How can we check to see if a domain (bought from a registrar) was previously in trouble with Google? I recently bought, and unbeknownst to me the domain isn’t being indexed and I’ve had to do a reconsideration request. How could I have prevented?

“A few rules of thumb,” he says. “First off, do a search for the domain, and do it in a couple ways. Do a ‘site:’ search, so, ‘site:domain.com’ for whatever it is that you want to buy. If there’s no results at all from that domain, even if there’s content on that domain, that’s a pretty bad sign. If the domain is parked, we try to take parked domains out of our results anyway, so that might not indicate anything, but if you try to do ‘site:’ and you see zero results, that’s often a bad sign. Also just search for the domain name, or the name of the domain minus the .com or whatever the extension is on the end, because you can often find out a little of the reputation of the domain. So were people spamming with that domain name? Were they talking about it? Were they talking about it in a bad way? Like this guy was sending me unsolicited email, and leaving spam comments on my blog. That’s a really good way to sort of figure out what’s going on with that site or what it was like in the past.”

“Another good rule of thumb is to use the Internet Archive, so if you go to archive.org, and you put in a domain name, the archive will show you what the previous versions of that site looked like. And if the site looked like it was spamming, then that’s definitely a reason to be a lot more cautious, and maybe steer clear of buying that domain name, because that probably means the previous owner might have dug the domain into a hole, and you’d just have to do a lot of work even to get back to level ground.”

Don’t count on Google figuring it out or giving you an easy way to get things done.

Cutts continues, “If you’re talking about buying the domain from someone who currently owns it, you might ask, can you either let me see the analytics or the Webmaster Tools console to check for any messages, or screenshots – something that would let me see the traffic over time – because if the traffic was going okay, and then dropped a lot or has gone really far down, then that might be a reason why you would want to avoid the domain as well. If despite all that, you buy the domain, and you find out there was some really scuzzy stuff going on, and it’s got some issues with search engines, you can do a reconsideration request. Before you do that, I would consider – ask yourself, are you trying to buy the domain just because you like the domain name, or are you buying it because of all the previous content or the links that were coming to it, or something like that? If you’re counting on those links carrying over, you might be disappointed, because the links might not carry over. Especially if the previous owner was spamming, you might consider just doing a disavow of all the links that you can find on that domain, and try to get a completely fresh start whenever you are ready to move forward with it.”

Cutts did a video about a year ago about buying spammy domains, advising buyers not to be “the guy who gets caught holding the bag.” Watch that one here.

Image via YouTube
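The archive.org check Cutts describes can be scripted. Here’s a minimal sketch that queries the Wayback Machine’s availability API for a prospective domain; the domain name is a hypothetical placeholder, and the response handling assumes the API’s documented JSON shape.

```python
# A minimal sketch of the archive.org check: ask the Wayback Machine whether it
# has an archived copy of a domain you're considering buying, then review the
# snapshot by hand for spammy past versions of the site.
import json
import urllib.request

def wayback_snapshot(domain):
    url = "https://archive.org/wayback/available?url=" + domain
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    # Returns None if the Wayback Machine has no archived copy.
    return data.get("archived_snapshots", {}).get("closest")

snapshot = wayback_snapshot("example-domain-for-sale.com")  # hypothetical domain
if snapshot and snapshot.get("available"):
    print("Archived copy to review:", snapshot["url"], "from", snapshot["timestamp"])
else:
    print("No archived copy found; rely on site:/name searches and seller data instead.")
```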

Apr 18 2014

Google Penalizes PostJoint, Another Guest Blog Network

Google has taken out another guest blog network. This time it’s PostJoint.

Techtada tweeted about it to Matt Cutts (via Search Engine Land), who responded:

@techtada any link or guest blog network that claims to have "zero footprints" is waving a giant red flag. — Matt Cutts (@mattcutts) April 18, 2014

Luana Spinetti says it can “thrive outside of Google”:

@mattcutts @techtada PostJoint can still thrive outside of Google and I'm all for that. I really love that service. — Luana Spinetti (@luanatf) April 18, 2014

PostJoint is no longer ranking for a search for its own name. What a great user experience and relevant results!

The penalty comes after Google had already penalized MyBlogGuest. When that happened, PostJoint put up a blog post about how it was different, in which Saleem Yaqub wrote:

We’ve always put quality first even if this means a smaller user base and lower revenues. We are selective about who we work with, and we moderate everything from user accounts, to links, content and participating sites (on average we decline 70% of sites that apply). We’ve always been concerned about footprints, so from day one we’ve had a unique no browsing approach, where nobody can browse or crawl through our site list or user base. Our technology is built from the ground up with a zero footprints principle in mind. Compare this to MBG which is essentially a modified forum that anyone could join and you’ll start to understand the fundamental differences. We work hard to filter out spam and sites made for SEO. Sometimes activity on PostJoint does include follow links but these are mostly surrounded by good content, good blogs, and good marketers. PostJoint is an independent intermediary, we facilitate the connections and streamline the process, but what the users ultimately do is their own choice.

Apparently Google doesn’t care about all that. Saleem confirms the penalty in the comments of that post (via Search Engine Watch).

As we’ve seen, Google has legitimate sites afraid of accepting guest blog posts, and some that do accept them afraid to link naturally.

Image via PostJoint

Apr 17 2014

Cutts Talks SEO ‘Myths,’ Says To Avoid ‘Group Think’

In the latest “Webmaster Help” video, Matt Cutts talks about SEO “myths”. He responds to this question:

What are some of the biggest SEO Myths you see still being repeated (either at conferences, or in blogs, etc.)?

There are a lot of them, he says.

“One of the biggest, that we always hear,” he says, “is if you buy ads, you’ll rank higher on Google, and then there’s an opposing conspiracy theory, which is, if you don’t buy ads, you’ll rank better on Google, and we sort of feel like we should get those two conspiracy camps together, and let them fight it all out, and then whoever emerges from one room, we can just debunk that one conspiracy theory. There’s a related conspiracy theory or myth, which is that Google makes its changes to try to drive people to buy ads, and having worked in the search quality group, and working at Google for over thirteen years, I can say, here’s the mental model you need to understand why Google does what it does in the search results. We want to return really good search results to users so that they’re happy, so that they’ll keep coming back. That’s basically it. Happy users are loyal users, and so if we give them a good experience on one search, they’ll think about using us the next time they have an information need, and then along the way, if somebody clicks on ads, that’s great, but we’re not gonna make an algorithmic change to try to drive people to buy ads. If you buy ads, it’s not going to algorithmically help your ranking in any way, and likewise it’s not going to hurt your ranking if you buy ads.”

Google reported its quarterly earnings yesterday with a 21% revenue increase on the company’s own sites (like its search engine) year-over-year. Paid clicks were up 26% during that time.

Cutts continues with another “myth”: “I would say, just in general, thinking about the various black hat forums and webmaster discussion boards, never be afraid to think for yourself. It’s often the case that I’ll see people get into kind of a ‘group think,’ and they decide, ‘Ah ha! Now we know that submitting our articles to these article directories is going to be the best way to rank number one.’ And then six months later, they’ll be like, ‘OK, guest blogging! This is totally it. If you’re guest blogging, you’re gonna go up to number one,’ and a few months before that, ‘Oh, link wheels. You gotta have link wheels if you’re gonna rank number one,’ and it’s almost like a fad.”

To be fair, some of this “group think” stuff has worked for some sites in the past, until Google changed its algorithm to stop it from working.

He suggests that if somebody really had a “foolproof” way to make money online, they’d probably use it to make money rather than putting it in an e-book or tool and selling it to people. “The idea that you’re going to be able to buy some software package, and solve every single problem you’ve ever had is probably a little bit of a bad idea,” he says.

“It’s kind of interesting how a lot of people just assume Google’s thinking about nothing but the money as far as our search quality, and truthfully, we’re just thinking about how do we make our search results better,” he says.

Google’s total revenue for the quarter was up 19% year-over-year, which still wasn’t enough to meet investors’ expectations.

Image via YouTube

Apr 15 2014

Cutts On 404s Vs. 410s: Webmasters Often Shoot Themselves In The Foot

Google’s latest Webmaster Help video, unlike the one before it, is very webmaster oriented. In it, Matt Cutts discusses how Google handles 404s versus how it handles 410s.

“Whenever a browser or Googlebot asks for a page, the web server sends back a status code,” he says. “200 might mean everything went totally fine. 404 means the page was not found. 410 typically means ‘gone,’ as in the page is not found, and we do not expect it to come back. So 410 has a little more of a connotation that this page is permanently gone. So the short answer is that we do sometimes treat 404s and 410s a little bit differently, but for the most part, you shouldn’t worry about it. If a page is gone, and you think it’s temporary, go ahead and use a 404. If a page is gone, and you know no other page that should substitute for it…you don’t have anywhere else that you should point to, and you know that that page is gone and never coming back, then go ahead and serve a 410.”

“It turns out, webmasters shoot themselves in the foot pretty often,” he continues. “Pages go missing, people misconfigure sites, sites go down, people block Googlebot by accident, people block regular users by accident…so if you look at the entire web, the crawl team has to design to be robust against that. So 404, along with, I think, 401s and maybe 403s, if we see a page, and we get a 404, we are gonna protect that page for 24 hours in the crawling system. So we sort of wait, and we say, ‘Well, maybe that was a transient 404. Maybe it wasn’t really intended to be a page not found.’ And so in the crawling system, it will be protected for 24 hours. If we see a 410, then the crawling system says, ‘OK, we assume the webmaster knows what they’re doing, because they went off the beaten path to deliberately say that this page is gone.’ So they immediately convert that 410 to an error, rather than protecting it for 24 hours.”

“Don’t take this too much the wrong way,” Cutts adds. “We’ll still go back and recheck, and make sure, are those pages really gone, or maybe the pages have come back alive again, and I wouldn’t rely on the assumption that that behavior will always be exactly the same. In general, sometimes webmasters get a little too caught up in tiny little details, and so if a page is gone, it’s fine to serve a 404. If you know it’s gone for real, it’s fine to serve a 410, but we’ll design our crawling system to try to be robust, so that if your site goes down, or if you get hacked or whatever, we try to make sure that we can still find the good content whenever it’s available.”

He also notes that these details can change. Long story short, don’t worry about it that much.

Image via YouTube
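For webmasters on the serving side, the distinction is easy to act on. Here’s a minimal sketch (my illustration, using Python’s standard library) of a server that returns 410 for pages known to be permanently gone and 404 for everything else that’s missing; the paths are hypothetical.

```python
# A minimal sketch of the 404-vs-410 distinction Cutts describes: 404 for pages
# that might come back, 410 for pages deliberately and permanently removed.
from http.server import BaseHTTPRequestHandler, HTTPServer

GONE_FOREVER = {"/old-promo", "/retired-product"}  # hypothetical removed pages

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/":
            body = b"Home page"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(body)
        elif self.path in GONE_FOREVER:
            self.send_error(410, "Gone")       # page is gone and not coming back
        else:
            self.send_error(404, "Not Found")  # missing, but possibly only temporarily

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```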

Apr 14 2014

Google Considers Making SSL A Ranking Signal

About a month ago, Google’s head of webspam Matt Cutts said at the Search Marketing Expo that he’d like to see Google make SSL site encryption a signal in Google’s ranking algorithm. Barry Schwartz at SMX sister site Search Engine Land wrote at the time, “Let me be clear, Matt Cutts, Google’s head of search spam, did not say it is or it will be part of the ranking algorithm. But he did say that he personally would like to see it happen in 2014. Matt Cutts is a senior Google search engineer that has opinions that matter, so I wouldn’t be surprised if Google does announce in 2014 that this is a ranking factor – but it is far off and may never happen.”

It doesn’t look like anything new has really happened with this yet, but the Wall Street Journal has a new report out reaffirming Cutts’ desire for such a signal:

Cutts also has spoken in private conversations of Google’s interest in making the change, according to a person familiar with the matter. The person says Google’s internal discussions about encryption are still at an early stage and any change wouldn’t happen soon. A Google spokesman said the company has nothing to announce at this time.

Search Engine Land’s Danny Sullivan is quoted in the article, and makes a pretty valid point that Google adopting such a signal could “cause an immediate change by all the wrong sites” – those specifically trying to game Google. Of course, as head of webspam, something tells me Cutts has considered this. If it is to become a signal, it’s likely not going to carry a huge amount of weight. Google will still always want to provide the best user experience and content to users. At least that’s what their official stance will be.

Even if the motivation is to improve search rankings, sites making themselves more secure can’t be a bad thing (until it is). But then, one has to wonder if Google will launch another algorithm update to penalize sites that are making themselves more secure just to influence search rankings, just as it penalizes those who try to get links to get better search rankings. I wonder how that would work.

Image via YouTube
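For sites that do decide to move to SSL, whatever Google ends up doing, a common first step is to permanently redirect plain-HTTP requests to the HTTPS version. Here’s a minimal sketch of that redirect in Python; the hostname and port are hypothetical, and a real deployment would also need a valid certificate on the HTTPS side.

```python
# A minimal sketch (illustrative only, not from the article) of one common step
# in an SSL migration: permanently redirecting plain-HTTP requests to HTTPS so
# the encrypted version of the site becomes the canonical one.
from http.server import BaseHTTPRequestHandler, HTTPServer

CANONICAL_HOST = "www.example.com"  # hypothetical site moving to HTTPS

class RedirectToHTTPS(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)  # permanent redirect, so crawlers update their index
        self.send_header("Location", f"https://{CANONICAL_HOST}{self.path}")
        self.end_headers()

if __name__ == "__main__":
    # Listens on port 8080 for this sketch; in production the redirect would run
    # on port 80, with the HTTPS site served separately under a valid certificate.
    HTTPServer(("", 8080), RedirectToHTTPS).serve_forever()
```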

Apr 10 2014

Matt Cutts Gets New ‘Melody’ Treatment

Okay, this exists. This comes from HighPosition.com. I randomly came across it on StumbleUpon, and it hardly has any views yet, so let’s change that. If you watch Matt Cutts’ videos regularly, you owe this one to yourself. You can watch it in a more theatrical setting here. And don’t forget to check out the Matt Cutts Donkey Kong and Whack-a-Mole games. Oh, and of course this classic:

Wow, it just dawned on me that we’ve been covering these Matt Cutts videos for an absurdly long time.

Image via YouTube

Apr 10 2014

How To Make Videos Like Matt Cutts’

The latest “Webmaster Help” video from Google isn’t so much a webmaster help video as a discussion of how they actually make these videos. It’s meant to give some advice to businesses that want to make more use of online video. There are some good, practical tips here for getting started easily and cheaply. If you’ve ever wanted to make videos like Matt Cutts’ videos, you should give this one a watch.

There’s also the possibility of doing live video, of course. A recent report from Ustream finds that business use of live online video will double by 2016.

Image via YouTube