2014-06-14

Every year at SMX Advanced Danny does a You&A with Matt Cutts where audience members submit questions, Danny asks them (the ones he feels are the most valid to a large audience anyway) and Matt answers them. I live tweeted this year’s session and this is the roundup of those tweets along with anything I personally wanted to add. ;-)

Danny and Matt pre-chat with an awesome photobomb courtesy of Michelle Robbins. Ha.

The chat began with Matt throwing stuffed hummingbirds and other swag from the stage. Danny asked if Matt had any announcements (he said he had a lot of them) and said we’d go in a “question” then an “announcement” format for the chat.

I’d used my own made up hashtag for my live tweeting (#SMXYA for SMX You&A), so you can see the full list here (I’ve pulled selected tweets regarding the topics covered for this post and added elaboration below each).

PAYDAY LOAN 2.0 – “PART B”

Matt says second part of the latest payday loan iteration will be launching prob later this week, maybe as soon as tomorrow. #smx #smxya

— Rae Hoffman (@sugarrae) June 12, 2014

The first announcement Matt made was that the latest Payday Loan update (referred to as Payday Loan 2.0) was actually a two-part update and that only the first part had launched earlier this month. He said he expected “Part B” to launch sometime soon – “probably later this week” but possibly “as soon as tomorrow”.

Danny then asked how Part B of Payday Loan 2.0 would differ from Part A. Matt said that part B would focus more on “spammy queries” vs. “spammy sites”.

Now, supposedly the Payday Loan algorithm has always focused on “spammy queries”, but it’s possible Google was previously handing down punishment at the “site level” whereas now it will hand down punishment at the “query level” – that’s all speculation on my part, though. Matt didn’t go any further in depth than the above on the topic.

UPDATED ON 6/16/14 TO ADD: Apparently Matt tweeted out on 6/12 that Payday Loan 2.0 “Part B” began rolling out:

@BtotheMcG it's rolling out now!

— Matt Cutts (@mattcutts) June 12, 2014

However, if it is, Barry Schwartz says the webmaster community doesn’t appear to be noticing. Barry’s coverage on this is here.

ON METAFILTER

Matt says metafilter was not affected by Panda #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

Danny then asked Matt what happened with MetaFilter. Matt unequivocally stated that MetaFilter was not hit by Panda. Matt said that MetaFilter is a typical high quality site, though he did note that it was a typical high quality site with an outdated design/UI.

He then reiterated that not only was MetaFilter not affected by Panda, but that it was also not affected by Penguin. He added “there’s a lot of different algorithms we launch”. He mentioned that when MetaFilter did their post about their traffic loss, one of the things they suspected was that Google may have viewed them as spam, based on an email a webmaster had received in which Google had supposedly cited a link from MetaFilter as an example of a “bad link”.

Matt said they “checked their records” and that in fact, they’d never cited MetaFilter as a spam link to anyone – someone had taken the Google template and inserted the MetaFilter link on their own.

Matt seemed to imply that MetaFilter was not getting any manual help with their traffic hit; instead, Google was looking at what went wrong that caused them to hit a quality site in the first place and planned to fix that algorithmically.

I took away two things from this discussion. The first was that they were able to “check their records” on whether or not they’d ever cited MetaFilter as a bad link.

The second was that, according to the post made by MetaFilter, their traffic losses coincided with Panda updates (the graph MetaFilter shared makes it too hard to see an exact date of their mega hit and they never gave the date in the post to confirm Panda from the outside looking in – but there were two Panda refreshes launched in November 2012). Yet Matt stated they were in fact never hit by Panda, but instead were hit by a different algorithm.

So, is Google launching covert algorithms or updates at the same time it does Panda refreshes? If so, does this mean some sites that think they’re affected by Panda actually may not be? We know Panda is one of the hardest algo hits to recover from – if there are other updates being tossed in at the same time, it could mean we’re barking up the wrong tree with “fixing” some sites. Sigh.

BETTER COMMUNICATION ON RECONSIDERATION REQUESTS

Matt says w new system any time they reject a reconsid request reviewer now has option to add a note (on first reconsid request) #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

Matt then went into how Google is trying to handle reconsideration requests for manual penalties a bit better. Previously, on a first time reconsideration request, the process appeared to be that a site either received a denial or a removal of the manual action – the reviewer apparently couldn’t add any notes. It wasn’t until multiple reconsideration requests had been made for the same site that reviewers had the ability to start communicating with the webmaster.

Matt said they’ve now added the ability for the reviewer to add a note on every reconsideration request – even the first one – should they choose to do so.

Later in the session, Matt also admitted they know they need to do a better job of reaching out to and communicating with the small business owner community the way they have with the webmaster community.

ON OVERLAPPING UPDATES

Danny asks about rolling out Panda 4.0 and payday 2.0 at the same time (matt says part A of payday 2.0) and laughs #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

Danny asked about what we all get pissed about – why does Google overlap updates? Are they trying to confuse us? Matt made it seem like Google tried not to overlap updates. In the specific case Danny was asking about, Matt said that Payday Loan 2.0 (Part A) was originally slated to go out early in the weekend, while Panda 4.0 was scheduled to go later in the week. Matt implied a weird series of events occurred (he was *very* vague on what those were exactly) that caused them to launch closer together than Google originally planned. He said their goal wasn’t to confuse webmasters.

WHY DON’T WE GET NOTIFICATIONS IN GWT FOR ALGORITHMIC HITS?

Danny is asking why webmasters don't get "you've been hit by Panda" type messages #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

Danny asked why we don’t receive “you’ve been hit by Panda” type notices. I took Danny’s question to mean why didn’t we get notices for being hit algorithmically by an update the same way we do manual actions in GWT. Matt answered that they do try to let webmasters know about large (emphasis on large) algorithm updates when they make them by making a public announcement – which didn’t answer the question. Either I mistook what Danny was asking or Matt mistook what Danny was asking or Matt understood what Danny was asking and evaded answering by giving that response. ;-)

GWT IMPROVEMENTS

re GWT, matt says you should check it out – added fetch and render as googlebot – he says they can fetch JS and Ajax now #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

Matt made a couple of announcements about new features in GWT as well as “coming soon” features in GWT.

The first was that GWT recently added a “fetch and render as Googlebot” feature. He said they can fetch Ajax and JavaScript now. He said now that Googlebot can understand more code, we should stop blocking JS and CSS files from being crawled. He said that “more help” was coming to GWT in regards to robots.txt files and that more help was also coming for hreflang. Also on the “upcoming” list was more help in regards to errors from app indexing.
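Since that “stop blocking JS and CSS” advice trips a lot of people up, here’s a minimal robots.txt sketch (the paths are hypothetical, nothing Matt cited) showing the kind of rules you’d want to remove so Googlebot can render your pages fully:

    User-agent: Googlebot
    Disallow: /admin/
    # Remove (or don't add) rules like these so Googlebot can fetch your scripts and styles:
    # Disallow: /js/
    # Disallow: /css/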

Matt said they’ve made improvements in GWT as far as site move documentation and planned to continue improving that process. He made it sound like the improvements for site moves would be in the form of both documentation and features, but that was my take and not explicitly stated. No exact (or vague) timelines for the “coming soon” features were given.

WHEN WILL THE NEXT PENGUIN UPDATE BE?

danny asks about a penguin update – matt says he doesn't believe they've had a penguin update – they've been focused on panda #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

Danny asked if there had been a Penguin update since the last announced update (which was October 4, 2013 for those keeping track). Matt said he didn’t believe so. Danny got flustered there for a second – asking Matt how he could “not know” if an update had occurred, LOL (thanks Danny, cause we were all thinking that). Matt implied that they’d been focused on the Panda 4.0 release. He then said – no lie – that an engineer came up to him and said it was probably time for a Penguin refresh and Matt had agreed with him… and the topic changed.

Side note: We (I was on the panel) were asked about the upcoming Penguin update in the Ask the SEOs session the next day – and Greg Boser had said that he believed it was coming soon and that it would essentially be the biggest update yet. I’d added that I tended to agree – with all the information we’ve been feeding Google over the last eight months, in the form of disavows, about which sites are shitty – this one should be big.

WHY ARE THERE SO MANY HOOPS TO JUMP THROUGH TO FIX A PENGUIN PENALTY?

Danny asks about G making people do the "link walk of shame" – danny says it's become a punishment for publishers, not spammers #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

This was the best way I could title this part of the discussion, LOL. Danny asked about why Google makes it so hard to recover from a Penguin penalty. Why the need for link removal – the “link walk of shame” – versus simply disavowing things and being done with it? Matt called it a “fair question” but was then very vague on answering it.

He made a comment about how spammers could build tons of spam links today and then disavow tomorrow – I think the implication was that spammers would be able to get a domain penalized and un-penalized too easily, thus the “link walk of shame” – and the length of time it takes to recover from Penguin – but I could be wrong.

Danny then offered an alternate solution. Make it easier to bounce back from the FIRST hit but take the tough stance on any subsequent hits for spammy link activity. Matt didn’t seem to like that idea – but didn’t give any specific reasons why, LOL.

INTERNET EXPLORER 8 REFERRERS ARE BACK

Matt said they got IE8 referrers back #smx #smxYA – Danny says "you mean referrers that don't tell us anything?"

— Rae Hoffman (@sugarrae) June 12, 2014

Matt said IE8 referrers were back and Danny replied, “you mean the referrers that don’t tell us anything?” and Matt laughed. He said they are now showing you IE8 referrers again, though yes, those are simply lumped into the referrers from Google. I think his reason for mentioning this was so that any sites with a significant IE8 user base would know why they might suddenly be seeing a bump in Google referrals. He didn’t give a date as to when this occurred.

ON GWT KEYWORD DATA

Danny asks how matt said a year ago that GWT would store a year in kw data, asks where that's at #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

Matt had said a long time ago that Google Webmaster Tools was working on showing / storing a year’s worth of keyword data (right now they show 90 days’ worth). Danny asked when that promise would be fulfilled. Matt said he’d seen someone tweet during an earlier session to download the data every ninety days (it was a tweet of a comment I’d made in the Keyword Research on ‘Roids session). He said he knew this wasn’t ideal, but gave a kind of “it is what it is” response. No timeline was given on when – or if – the “year’s worth of data” would happen.

IS LINK BUILDING DEAD?

Danny says at this point it seems it would be easier for matt to say what links ARE allowed #smx #smxYA danny asks is link building dead?

— Rae Hoffman (@sugarrae) June 12, 2014

Danny said at this point, Google should start telling us what IS allowed because it seems like that would be a much smaller list to keep track of than what isn’t allowed. Danny asked Matt if link building was dead. Matt said, “No, link building is not dead”.

So Danny clarified that he wasn’t asking if links as a factor in Google’s algorithm were dead – he was asking if actually going out with the goal to “build links” was dead. Matt then referenced a blog post that Duane Forrester had written that stated:

“You want links to surprise you. You should never know in advance a link is coming, or where it’s coming from. If you do, that’s the wrong path.”

Matt agreed with the sentiment of Duane’s remarks on building links in that post – however, he said the part where Duane said you should never know in advance that a link was coming was “going a little too far” as far as Google is concerned. He drove home that it was ok to create amazing content knowing it will help drive you links – provided the content is actually amazing and that people are linking to it because it’s amazing.

.@mattcutts says "it's easier to be real than to fake being real" #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

And that about summed it up. Matt implied that what Google takes issue with isn’t a website developing awesome content that is created with the hopes of attracting links – he implied the issue was with “building links” through all the ways Google had explicitly stated were against their guidelines – and through bare minimum efforts where the content wasn’t spectacular and the purpose was solely to obtain a link (versus a link and users and conversions and publicity).

Danny asks can the really assess a page's value without links? matt says yes, it's possible #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

Danny then asked if Google could really have an indication of a page’s value without links (in reference to the video featured in this article). Matt said yes, it was possible. Danny asked if Google could turn off links cold turkey then. Matt did his famous “uhhhh” and laughed. I assumed that meant the answer was no. ;-)

WHAT RUMORED OUTSIDE FACTORS ARE REALLY FOLDED INTO THE ALGORITHM?

The next few questions focused on things that have been rumored to be a factor in Google’s algorithm.

Danny asks if G is using author rank for anything aside from in depth articles – Matt says "nice try" #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

Danny said it wasn’t a hard question to answer – is it being used in regular web search, yes or no? Matt said Author Rank was being used for In-Depth articles (something Google had already confirmed), but he definitely didn’t want to – and didn’t – give an answer beyond that. He didn’t say yes, but he also didn’t say no. Then Matt mentioned he was a fan of Author Rank – and the topic changed.

Danny asks if G is looking at site engagement re rankings – Matt says in general, they're "open to looking at signals" BUT #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

The BUT was that while they were open to signals, Matt is “extremely skeptical” of using site engagement factors at face value and scale in the algorithm because they are very subject to manipulation.

danny asks if sites will get a boost for SSL, matt says currently no boost #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

Matt said there’s “currently” no boost for a site because it uses SSL. Matt once again mentioned though that he’s a fan of SSL – he said that anything that makes the web more secure is better for all of us. Danny asked if that meant Google would default to showing the https version of a site over the http version if Google knew about and could access both versions. Matt said that at one point there was actually favoritism built in for the http version, but he believed that had since been removed.

Matt says +1 not used in general rankings – makes it clear they may affect personalized rankings if peeps in ur circle #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

Later in the session, Danny asked if Google+ was dead. Matt responded no, with a hurt voice LOL. Matt said G+ data was not used in the general rankings. He was quick to add though that if you’re searching Google logged in, you’ll possibly see effects on your personal rankings as a result of your activity within Google+ (I concur).

danny tries to dig into use of social signals re FB etc – matt says this is why engineers don't want to come to search shows #smx #smxYA ha

— Rae Hoffman (@sugarrae) June 12, 2014

Danny tried to get some admission – positive or negative – on whether Google looked at social signals coming from networks *other* than Google+. Matt responded by joking that this is why search engineers don’t want to come to search shows. In other words, we got no answer to that question.

does speed matter danny asks? matt says sites that are extremely slow (he cites "like 20 seconds") need to worry, not others #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

Later in the session, Danny asked if site speed mattered in regards to your rankings. Matt said that only extremely slow sites needed to worry about site speed (he used the example of “like 20 seconds”). AKA, he seemed to imply that a site that loaded in 2.4 seconds had no advantage over a site that loaded in 4 seconds.

WHAT IS THE DEAL WITH TIMED PENALTIES?

he says manuals have a time attached – smaller infraction, smaller the time #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

The day before the You&A, I’d submitted a question for Matt to Danny via Twitter. My question was:

“Matt has spoken a lot about timed penalties, saying the length often is determined by the infraction. Is there any instance where a site might get a timed penalty that does not show in GWT manual actions tab? And if all penalties, even those with ‘timers’ would show in GWT, how do you know if yours carries a ‘timer’?”

Danny asked my question using his own wording. Matt laughed and said he knew where that question had come from and called me out in the audience. I waved.

Matt said if you get a manual, it will be visible in GWT. He seemed to imply that all manuals had some kind of timer attached to them. The smaller the infraction, the “shorter the timer” so to speak. He definitely didn’t answer the part about how we’d know how long the “timer” was set for whatever manual penalty we’d incurred.

He once again said that all manuals eventually expired (which is not new info) – but I’d like to note here that Matt has said in the past that if you don’t fix the reason for a manual and it expires, he’s confident Google will find out and hit you with another manual fairly quickly.

So, here’s my takeaway – Google will never tell us how long a penalty “timer” will last based on which rules were broken because that would mean we might be willing to take the risk if we know the “timer” would be short if we got caught.

The other thing I took away from this is that it looks like (thinking aloud here) there may be two aspects to removing a manual penalty. One is fixing it and having Google remove the manual action in GWT. That’s Google confirming you’ve fixed it and are no longer violating their guidelines. The second aspect is waiting for the “timer” to expire – and how long that takes is up for debate, based upon the “level” of your infraction on a “bad to really bad” scale we have no insight into.

There have been multiple, multiple reports of people fixing manuals yet seeing no recovery despite Google removing the manual action notice after a successful reconsideration request in GWT. My guess is your timer has yet to run out, even if Google acknowledged you fixed the root cause by removing the manual action notice in GWT.

Matt was clear that manual penalties came with timers attached to them. He was also clear that all manual penalties eventually expire regardless of whether the offense that caused said penalty is fixed. What Matt was NOT CLEAR on was whether the “timer” remained (however many months, based on however bad what you did was) even AFTER you had the manual action notice removed in GWT.

Additionally, Matt was clear on stating all manual penalties show in GWT. I will however say that I don’t necessarily believe that is the case 100% of the time personally, but, according to Matt, that’s how manual penalties roll.

CAN YOU “REAVOW” A LINK?

danny asks if you disavow a domain once, and later decide you want to not disavow it, can you remove it? #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

So, you’ve disavowed a link (or someone you hired did in a crazy bid to save you from Penguin) that wasn’t actually a bad link. Danny jokingly asked, is there a way to “reavow” it? Matt said yes, that you could essentially “reavow” a link by removing it from your current disavow file and then reuploading the file with the link you’d like to “reavow” removed.
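To make the mechanics concrete, here’s a hypothetical before-and-after of a disavow file (the domains are made up) – you “reavow” by deleting the relevant line and uploading the trimmed file again:

    # disavow.txt as originally uploaded
    domain:spammy-directory.example
    domain:scraper-network.example
    domain:good-site.example

    # disavow.txt re-uploaded with good-site.example removed (i.e. "reavowed")
    domain:spammy-directory.example
    domain:scraper-network.example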

However, Matt seemed like he totally didn’t like the “taste” of those words in his mouth. I couldn’t tell if this was more because he pictured people “reavowing” some of their shitty links after getting a manual penalty removed with this knowledge or if it was because a link was somehow “damaged” with a disavow and he didn’t want people accidentally shooting themselves in the foot. Or it could have been neither of those. But, he definitely seemed like something about discussing “reavowing” links made him uneasy.

ON THE IMPORTANCE OF MOBILE

I've seen matt speak for yrs – he's being VERY stern voiced re how important it is for us to be getting our mobile shit together #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

Matt spoke for several minutes on the topic of mobile. He kept saying how important it was for us to be mobile ready. He asked the audience how many people had auto-fill markup on their mobile site forms. Hardly anyone raised their hands. Danny said “that’s not mobile” and Matt said “yes it is”. He said that the mobile dominant Internet is “coming faster than most people in this room realize”.

ON NEGATIVE SEO

matt clarifies they know others are worried re it – he implied payday loan 2.0 (A) coming will help close some neg seo loopholes #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

This is where typing quickly while trying to live tweet will sometimes get you haha. In the tweet I said Part A when I meant Part B.

Danny asked Matt what was going on with negative SEO. Matt said Google was very aware of negative SEO – but then sort of clarified they are very aware about how many people are worried about negative SEO. He implied that Payday Loan 2.0 Part *B* would be closing some of the loopholes people are using for negative SEO.

SO ABOUT HUMMINGBIRD

Danny asked how search would be different for wearables.

matt replies w so about hummingbird LOL #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

Matt replied, “so about Hummingbird” and pulled out his phone, which has Google Now. He asked his phone “where is the space needle?” and his phone responded with the address. He then asked, “I want to see pictures” and his phone showed him pictures of it. Matt asked, “who built it?” and his phone answered. Matt asked, “how tall is it?” and the phone answered. Matt said, “show me restaurants near there” and his phone showed him a map listing of them. Matt said, “how about Italian?” and his phone showed him a listing of Italian restaurants. Matt said, “navigate to the closest one” and his phone pulled up the map with directions. The room clapped.

Matt said he thought this showed how wearable search would be different. He said Hummingbird was about connecting these dots. Matt also admitted this worked better with mobile than with desktop, because people tend to use more natural language with mobile. He said he expected desktop to improve as they learned more from the mobile use of it.

DO JAVASCRIPT LINKS PASS VALUE?

someone asks if JS links are handled like reg links, will they pass credit – matt says "mostly, yes" #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

Matt said “mostly, yes” in response to being asked if JavaScript links are handled like regular links. Matt also pointed out that you could add the nofollow attribute to JavaScript links and Google would see it.
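For anyone wondering what that looks like in practice, here’s a hypothetical sketch (the markup and URL are made up) of a script-generated link carrying a nofollow – the point being that the rel attribute travels with the link even when JavaScript writes it:

    <div id="sponsor"></div>
    <script>
      var link = document.createElement('a');
      link.href = 'https://example.com/';
      link.rel = 'nofollow'; // Google can see the nofollow on a JS-generated link
      link.textContent = 'Example sponsor';
      document.getElementById('sponsor').appendChild(link);
    </script>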

ON CONTENT / QUIZ MILLS

danny asks about buzzfeed's content / quiz mill #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

Danny asked Matt how he felt about Buzzfeed and sites like them that essentially produce shallow content that people eat up on social media. Matt said Buzzfeed has contacted them asking why they don’t rank better. Matt said everyone thinks their own website is above average in quality, even when they’re average or below average. It was obvious he thought Buzzfeed was overestimating their quality in regards to how they should rank.

ABOUT ALL THAT YOUTUBE SPAM

danny asks what actions G takes against blackhat tactics being employed on sites like youtube (spam videos, etc.) #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

Danny asked what action Google was taking against known spam tactics – and then used YouTube spam as an example. Matt said they keep their ears open – he seemed to imply they know about spam tactics well before they implement something to take action on them algorithmically. He said targeting these tactics algorithmically can sometimes take some time.

SHOULD YOU HIRE A LINK BUILDING SERVICE?

danny asks about hiring link building service – matt says "creativity" will trump every tool avail in the industry #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

Matt started to point out the differences between “link building” (of the “PR4, with this anchor text, in content” variety) and building links by “being excellent”.

. @mattcutts says white hat link building is called "being excellent" #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

He didn’t seem to have an issue with hiring someone to help you be excellent, help come up with creative ideas and help with the execution of those ideas. His issue was in hiring a “link building firm” in the 2009 sense so to speak. AKA, hiring a promotions / publicity / true marketing company is ok, hiring a “link building” company is bad. My take based on his comments – to be clear.

IS THERE A GOOGLE “WHITELIST”?

danny asks if there's a whitelist of sites re penalties – matt starts with "well…" #smx #smxYA

— Rae Hoffman (@sugarrae) June 12, 2014

The last question Danny asked was whether or not Google had a “whitelist” of sites immune to penalties. Matt was really dodgy on this one, simply saying it “could happen”. So Danny asked for an example – a specific site that had been whitelisted. Matt said he didn’t know a specific one to give.

Matt then made it clear that a whitelist would only exist for something that was a known false positive. I wish I could remember the exact wording for y’all to dissect to death, but I don’t. :) It essentially amounted to only in extreme circumstances where a site exhibited a false positive they couldn’t fix algorithmically for whatever reason – again, my take. Matt then said, unequivocally that there was no whitelist – at all – for Panda (I can’t remember if he included Penguin here as well or not).

EDITED TO ADD

On 6/20/14 SMX released the video of Matt’s full talk. Check it out below.

There ya go folks. Expanded coverage on the live tweets mixed with a few of my own opinions. Until next time…
