Jul 25 2012

Google Just Got A Lot Better At Math

If you ever use Google to solve math problems, you may start using it even more now. If you didn’t, you may want to consider starting. Google has launched a new scientific calculator feature, which appears when you enter such a problem in the search box. As you can see from the image above, Google will display this entire calculator interface, enabling users to conduct additional calculations right from the results page. It’s pretty cool. It also works beautifully from mobile devices. We tried it on Android and iPhone. If you’re holding your phone vertically, a smaller calculator appears, but if you hold it horizontally, the full scientific calculator takes over the screen. Google’s Matt Cutts notes that the feature is currently available at Google.com only because the company wants to make sure it works before rolling it out globally.

Matt Cutts (@mattcutts): “Search for 2+2, get a full scientific calculator: http://t.co/fnU5MQXF Try it for yourself: http://t.co/ePvovueJ”

Ralph Binkert (@ralphbin): “@mattcutts the calculator feature seems to be working on http://t.co/fQC675jY only?”

Matt Cutts (@mattcutts): “@ralphbin often we start features like this on http://t.co/ybzOP7u3 to make sure everything works..”

More discussion in our forum.

Jul 23 2012

Google Gives Webmasters Just What They Need: More Confusion

Last week, Google began sending out messages to webmasters, warning them of bad links, much like the ones that many webmasters got prior to the infamous Penguin update. Google said, however, that these messages were different. Whereas the company’s advice in the past was to pay attention to these warnings, Google said this time that they’re not necessarily something you need to worry about. Google’s head of webspam, Matt Cutts, wrote on Google+: “If you received a message yesterday about unnatural links to your site, don’t panic. In the past, these messages were sent when we took action on a site as a whole. Yesterday, we took another step towards more transparency and began sending messages when we distrust some individual links to a site. While it’s possible for this to indicate potential spammy activity by the site, it can also have innocent reasons. For example, we may take this kind of targeted action to distrust hacked links pointing to an innocent site. The innocent site will get the message as we move towards more transparency, but it’s not necessarily something that you automatically need to worry about.” “If we’ve taken more severe action on your site, you’ll likely notice a drop in search traffic, which you can see in the ‘Search queries’ feature in Webmaster Tools, for example,” Cutts added. “As always, if you believe you have been affected by a manual spam action and your site no longer violates the Webmaster Guidelines, go ahead and file a reconsideration request. It’ll take some time for us to process the request, but you will receive a followup message confirming when we’ve processed it.” Obviously, this all caused a great deal of confusion and panic among webmasters and the SEO community.
Barry Schwartz, who spends a lot of time monitoring forum discussions, wrote, “It caused a major scare amongst SEOs, webmasters and those who owned web sites, never bought a link in their life, didn’t even know what link buying was and got this severe notification that read, ‘our opinion of your entire site is affected.’” Even SEOmoz was getting these warnings. The company’s lead SEO, Ruth Burr, wrote, “We’ve got the best kind of links: the kind that build themselves. Imagine the sinking feeling I got in the pit of my stomach, then, when a Google Webmaster Tools check on Thursday revealed that we’d incurred an unnatural link warning.” Cutts eventually updated his post to indicate that Google has changed the wording of the messages it is sending, in direct response to webmaster feedback.

Matt Cutts (@mattcutts): “We changed our messages over the weekend so people can tell what sort of situation they’re in (see update at bottom): https://t.co/VscpXeIj”

Justin Briggs (@justinrbriggs): “@mattcutts Change only moving forward, or updating those messages sent last week?”

Matt Cutts (@mattcutts): “@justinrbriggs I’ll talk to the team about whether we can resend or update the messages we sent starting on Thursday.”

Google has also removed the yellow caution sign that accompanied the messages in the webmaster console. According to Cutts, this illustrates that action by the site owner isn’t necessarily required.

Jul 9 2012

Google Is Considering Discounting Infographic Links

Matt Cutts spoke with Eric Enge at SMX Advanced, and Enge has now published the entire interview. In that interview, Cutts reveals that Google may start looking at discounting infographic links. That doesn’t mean Google is doing this right now, or that they definitely will, but…come on. “In principle, there’s nothing wrong with the concept of an infographic,” Cutts says in the interview. “What concerns me is the types of things that people are doing with them. They get far off topic, or the fact checking is really poor. The infographic may be neat, but if the information it’s based on is simply wrong, then it’s misleading people.” “The other thing that happens is that people don’t always realize what they are linking to when they reprint these infographics,” he adds. “Often the link goes to a completely unrelated site, and one that they don’t mean to endorse. Conceptually, what happens is they really buy into publishing the infographic, and agree to include the link, but they don’t actually care about what it links to. From our perspective this is not what a link is meant to be.” I don’t think it’s much of a surprise to a lot of people that Google would consider not counting these kinds of links. In fact, last month, we ran an article from David Leonhardt, who talked about this very thing. There are certainly legitimate infographics, just as there are legitimate directories, but there is always that room for abuse, and it could represent something like what Google considers to be a linking scheme (which is against its quality guidelines). “I would not be surprised if at some point in the future we did not start to discount these infographic-type links to a degree,” Cutts told Enge. “The link is often embedded in the infographic in a way that people don’t realize, vs. a true endorsement of your site.” I think that says it all. If you have a major infographic strategy that’s built for SEO purposes, I wouldn’t put too much stock into it moving forward.
That doesn’t mean, however, that infographics can’t still provide value, and certainly spark some quality social traffic. That’s only a small part of Enge’s interview with Cutts. Read the whole thing here. Hat tip: Barry Schwartz

Jul 9 2012

Google Panda Update: Matt Cutts Talks About Recovery (And A Bunch Of Other Stuff)

It’s pretty common for Google’s Matt Cutts to appear in Webmaster Help videos, but they’re usually only a few minutes long. This time, he’s treated webmasters to an hour-long Google+ Hangout (from India), with some other members of Google’s search quality team. In the video, Cutts responds to a user question, asking if it’s possible to make a one hundred percent recovery from the Panda update. “And the answer is yes,” says Cutts. “It is possible to recover a hundred percent from Panda….So, it is possible to recover from Panda in the following ways. Remember, Panda is a hundred percent algorithmic. There’s nothing manual involved in that. And we haven’t made any manual exceptions. And the Panda algorithm, we tend to run it every so often. It’s not like it runs every day. It’s one of these things where you might run it once a month or something like that. Typically, you’re gonna refresh the data for it. And at the same time, you might pull in new signals. And those signals might be able to say, ‘Ah, this is a higher-quality site.’” “So, there’s a solid group of engineers that I had the chance to work with who have been looking for signals that differentiate the higher quality sites from the sites that might be slightly lower quality,” he continues. “And if you look at–even in the last, say, two or three months–we’ve done a better job about differentiating all those. And so, when we rerun the pipeline to recompute the data and then we push that back out–I think the most recent one was earlier this month–there was one that was probably about two weeks ago.” Google pushed two Panda refreshes in June. More on that here. “And so, when that happens, if, according to our signals, it looks like the site is high-quality or there’s new data or there’s new signals, then you would pop out of being affected and you wouldn’t have to worry about it at all,” Cutts says.
“So, in order to recover from Panda, take a fresh look and basically ask yourself, ‘How compelling is my site?’ We’re looking for high quality. We’re looking for something where you land on it, you’re really happy, the sort of thing where you wanna tell your friends about it and come back to it, bookmark it. It’s just incredibly useful. That’s the sort of thing that we don’t want to get affected. So, yes. It is possible to recover.” More on Google’s Panda update here.

Jul 5 2012

Matt Cutts: Nofollow Links Are Small, Single Digit Percentage Of Links On The Web

Google’s Matt Cutts recently downplayed the significance of social signals in search, compared to links. Search Marketing Expo uploaded a new video to YouTube, featuring a discussion between Cutts and moderator Danny Sullivan, in which he talks about the notion that social signals have replaced links. In short, while social signals may gain power in time, links are still pretty important. “If you look at the library of congress, they say they have 151.4 million items,” says Cutts. “That’s roughly 34 million books, and if you convert that to just pure text like OCR, that’s something like ten terabytes. The web capture team at the library of congress says they have 235 terabytes. Now everybody in this room probably ought to be saying to themselves: 235 terabytes for the largest library in the world is not that much data. YouTube gets 72 hours of video uploaded every minute. So the web is the largest source of data we’ve ever seen before.” “There’s more data being generated on the web, compared to any other source of data around the web, and I think, the fact is, a lot of people think, ‘Links are dying,’ or ‘Links are not a democracy,’ or ‘It’s impossible to get links that aren’t nofollow,’ or whatever,” says Cutts. “And the fact is, that’s a little bit of a bubble in my opinion, in the SEO industry, because if you look at the actual percentage of Nofollow links on the web, it’s a single digit percentage. In fact, it’s a pretty small single digit percentage. So there’s this perception that, ‘Yes, everything will go social,’ or ‘Links are completely obsolete,’ and I think it’s premature to reach that conclusion.” “I don’t doubt that in ten years, things will be more social, and those will be more powerful signals, but I wouldn’t write the epitaph for links quite yet,” he adds. 
You would think that social signals are pretty damn important, looking at Google’s results on any given day, if you’re using Search Plus Your World (and there’s a good chance you are, as it’s the default experience for signed-in users). How often have you seen results appear simply because someone you’re connected to through Google+ has +1’d something? I don’t necessarily think social signals are the best indicator of relevance, as I’ve discussed in the past, but I do believe they can carry a lot of weight, and, perhaps more importantly, will help you diversify your traffic sources so you don’t have to depend on the search giant for so much of your traffic.

Jul 3 2012

PageRank: Is There Anything It Can’t Be Applied To?

We’ve seen Google’s PageRank algorithm applied to cancer outcome prediction and used to determine molecular shapes and chemical reactions. Now, PageRank is being used to reveal soccer teams’ strategies. MIT’s Technology Review points to a paper from Javier Lopez Pena at University College London and Hugo Touchette at Queen Mary, University of London, analyzing soccer strategy using PageRank in the process. The abstract for the study says: “We showcase in this paper the use of some tools from network theory to describe the strategy of football teams. Using passing data made available by FIFA during the 2010 World Cup, we construct for each team a weighted and directed network in which nodes correspond to players and arrows to passes. The resulting network or graph provides a direct visual inspection of a team’s strategy, from which we can identify play pattern, determine hot-spots on the play and localize potential weaknesses. Using different centrality measures, we can also determine the relative importance of each player in the game, the ‘popularity’ of a player, and the effect of removing players from the game.” PageRank is used to measure player popularity: it predicts who is most likely to get the ball. The paper looks at the Netherlands and Spain, diagramming the passing network for each team and devoting a section to PageRank. The paper has caught the attention of Google’s Matt Cutts, who tweeted a link to the MIT article: “Someone applied PageRank-like analysis to European soccer teams’ passing: http://t.co/U7yfFJVn” You can read the entire paper here (pdf).

Jul 3 2012

Matt Cutts: Google Can’t Tell If It’s Crawling Databases, But Has Policies To Remove Private Info

Ryan Satterfield at Planet Zuda posted an article about Google exposing private info by indexing other sites’ databases, which he says includes social security numbers and credit card numbers. “If you’ve given a site your credit card number or social security number, then there is a very high chance it is in Google search,” he writes. “This information is very easy for anyone to find, especially for cyber-criminals, because Google has made it so anyone can do a Google search with the words filetype: and then the extension for ‘virtual notebooks’.” Satterfield adds, “I contacted Google immediately when I discovered this problem believing that they would want to fix it. I was wrong. They were fully aware that people can find your info, but they feel that they can’t stop it, nor is it their job to ‘censor or curate’ their results unless they are required to do so by law. They said that it is the webmaster’s job to hide any information that shouldn’t be seen.” He shares what he says is an email response he received from Google, which says:

“Hi Ryan,

Thank you for your report, I apologize it was not answered sooner. We do not consider these searches (commonly called ‘google dorks’) to be a security risk that we can control. The amount and variety of information that is indexed on the internet precludes any sort of blacklisting system where certain information is removed. Additionally it is Google’s long standing policy to not censor or curate our results except where required by law (such requests can be viewed at http://www.chillingeffects.com). The best way to remove these results is for the affected website owners to remove the content from their website (or restrict access via robots.txt or another mechanism) and then submit a request for the content to be removed from the Google Cache.

Regards,
Kevin
The Google Security Team”

Planet Zuda brought the subject up with Google’s Matt Cutts on Twitter.
Here is the exchange they had:

Planet Zuda (@planetzuda): “@mattcutts please stop indexing other sites databases. You’re exposing social security numbers, credit cards, etc. http://t.co/1TDSFTVD”

Matt Cutts (@mattcutts): “@planetzuda we just crawl urls. It’s near-impossible to see if a url is really a database. We have policies to remove private info like this”

Planet Zuda (@planetzuda): “@mattcutts the real problem is that Google allows people to search for filetype: then the filetype extension of databases. To be continued.”

Planet Zuda (@planetzuda): “@mattcutts Kevin in security said you guys aren’t going to fix this. Bing & Yahoo made it so people can’t do that certain filetype: search.”

Planet Zuda (@planetzuda): “@mattcutts I wrote some C++ to test if something’s a DB. Just use rfind, get last three characters. If .sql then don’t index.”

So far, that one tweet seems to be all Cutts has had to say on the matter, at least publicly.
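The C++ check described in that last tweet can be sketched in a few lines. This is purely illustrative, and the extension list is an assumption; note that literally taking the last three characters would miss the dot in “.sql”, so std::string::rfind is used to find the extension boundary instead:

```cpp
#include <string>

// Hypothetical crawler-side filter along the lines of the tweet: look at
// the end of a URL and skip indexing when it appears to point at a raw
// database dump. rfind locates the last '.' so the full extension
// (".sql" is four characters, not three) can be compared.
bool looks_like_database_dump(const std::string& url) {
  const std::size_t dot = url.rfind('.');
  if (dot == std::string::npos) return false;  // no extension at all
  const std::string ext = url.substr(dot);
  return ext == ".sql" || ext == ".db" || ext == ".mdb";
}
```

As Cutts’s reply suggests, though, a check like this is easy to defeat: a URL’s extension says little about what the server actually returns, and query strings or rewritten URLs would slip right past it.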