2014-03-16

When I walked into the “Meet the Search Engines” session at SMX, I definitely wasn’t planning to “live tweet” it. Typically these sessions consist of Duane Forrester offering up clear directives to rank well in Bing while Matt Cutts gives very vague answers in relation to ranking in Google.

I have to say though that Matt – while still vague at times – was more forthcoming in this session than I’ve seen him in recent years, and thus my tweets ended up being numerous.



*Photo (used with permission) credit goes to Virginia Nussey from the Bruce Clay blog

While live tweeting, I was merely parroting the answers given rather than adding any commentary of my own, due to time constraints between questions. So, I figured I’d compile the list of tweets below and add my own thoughts where I have them. :)

The first question was from Danny Sullivan and was a general “do you have anything to announce” question. Matt mentioned several things.

A “more soft” version of Panda is coming soon

. @mattcutts says G is working on the next generation of panda – which may make panda appear "more soft" #smx

— Rae Hoffman (@sugarrae) March 13, 2014

The first was that Google would be releasing a new, “more soft” version of Panda. He then clarified that it would appear more soft to “most of the people in this room”. I took that to mean searchers wouldn’t notice, but SEOs would.

You need to get your site “mobile ready” and you’d better do so soon

.@mattcutts says he thinks mobile may overtake desktop for Google searches within a year #smx

— Rae Hoffman (@sugarrae) March 13, 2014

He then offered up that serving a great mobile experience was quickly increasing in importance – noting that he personally believed mobile searches would overtake desktop searches on Google within a year. More on why that matters for “you” further below.

A well-known guest blog network will be “hit” next week

. @mattcutts says a "well known" guest blog network will take a "hit " next week #smx

— Rae Hoffman (@sugarrae) March 13, 2014

Pretty self-explanatory. Tons of chatter in the SEO world right now guessing who that might be.

EDITED TO ADD:

Matt released a subsequent tweet on 3/19/2014:

Today we took action on a large guest blog network. A reminder about the spam risks of guest blogging: http://t.co/rc9O82fjfn

— Matt Cutts (@mattcutts) March 19, 2014

The tweet confirmed that the network he mentioned at SMX had been officially hit. Because he gave us some “advance warning”, I’d been watching the branded terms for several networks that connect blogs with guest post authors. MyBlogGuest is no longer ranking for its branded terms since the above tweet by Matt, so the suspicion is they were the unlucky “winner”.

EDITED ONCE AGAIN TO ADD:

On 3/19/14 the owner of MBG confirmed her site had received a manual action from Google. Barry Schwartz published a post on 3/20/14 citing a tweet from Matt Cutts:

@n2tech when we take action on a spammy link network, it can include blogs hosting guest posts, sites benefiting from the links, etc.

— Matt Cutts (@mattcutts) March 20, 2014

But, it shouldn’t come as any surprise. Finding the users of the service is as easy as can be. Add in some accompanying IFTTT variables and I’m expecting service users will definitely see some sort of fallout. The real question is how much.

Are we getting organic keyword data “back”?

While I was heading to SMX, Larry Kim posted on Google+ that Google was working on a solution to the “not provided” issue. I took that to mean that Google was looking to bring back organic keyword data in some fashion.

During the opening keynote, Amit Singhal had made the following statement:

“Over time, we have moved to secure searches. Referrers are not passed to webmasters, but they are passed to advertisers. But webmasters get a lot of information in Webmaster Central.

But over a period of time, we’ve been looking at this issue. We’ve heard from our users that they do want their searches secure — this is really important to users. We like how things have gone on with the organic side of search.

So, in the coming weeks and months, we’re looking at better solutions for this. We have nothing to announce, but we have discussed with the ads side about how we should handle this in the future.”

After I was able to see the quote myself, I think that statement actually reflected that paid advertisers may have cause to worry about their future keyword data. But, to be clear, that was my interpretation based on the quote above and a passing comment Matt made in the MTSE session (I don’t remember the exact wording, and Matt doesn’t run the paid side, so he was clear he wasn’t “in the know” in regards to what was going on over there).

.@mattcutts says his interpretation re kws comment from Amit was not to expect to get back kw queries for organic anytime soon #smx

— Rae Hoffman (@sugarrae) March 13, 2014

What Matt did say was that he interpreted Amit’s comments to mean that Google was happy with the results of taking away organic keyword data in regards to creating a better experience for their users [insert slight eye roll from me here] and he didn’t see that decision being reversed.

Could a Penguin penalty follow you with a URL change even without a redirect?

The first audience question asked during the session was actually one I’d submitted. After seeing that John Mueller had commented that a penalty could follow you if you changed domains, even without using a 301 (if not much changed about the site other than the URL), there was some debate amongst several SEO-themed private groups about whether that was in reference to Panda or if it also applied to Penguin.

Now, if you’re being hit for duplicate content (Panda) and change domains but not the duplicate content, it makes sense it would “follow you” without a redirect. The part I wanted clarification on was whether or not that was also true in regards to Penguin.

Since I was of the belief that the comment also indicated a Penguin penalty could follow you, I asked my question fairly deliberately. Rather than ask if it applied to Penguin, I instead asked why a Penguin penalty would follow you – because if you don’t use a redirect, you’re essentially disavowing all of your links – so why the hell would Google then “hunt you down”, so to speak, when you’ve already said “Uncle” and resigned yourself to starting over?

My question was purposely framed to present the potential for a Penguin penalty to follow you – even without a 301 – as a “fact”, so to speak. The reason for that was to either have Matt say that “fact” was wrong, or to have him elaborate on why Google would do so (vs. just asking if it was a fact and getting a yes or no).

.@mattcutts says a penguin penalty may follow you even if you change domains with no 301 #smx

— Rae Hoffman (@sugarrae) March 13, 2014

Matt’s response was that if you have a Penguin penalty, Google doesn’t want you to “change domains” as some sort of mass disavow. They want you to actually disavow the links for the current domain. And he definitely implied through his (longish) answer that Penguin could indeed follow you and gave a few reasons as to “why” Google would do so (sorry, don’t remember the exact wording, but it was essentially they wanted you to fix your issues and not “run away” from them so to speak). But the sentiment was clear – a Penguin penalty potentially following you without a 301 was an actuality.
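For context, the 301 being discussed is the standard server-side redirect that passes link equity (and, per Matt’s answer, potentially penalties) from the old domain to the new one. Here’s a minimal sketch of a site-wide domain move, using hypothetical domains, in Apache .htaccess form:

```
# Hypothetical example: permanently redirect every URL on oldsite.com
# to the same path on newsite.com (Apache .htaccess / mod_rewrite)
RewriteEngine On
RewriteCond %{HTTP_HOST} ^(www\.)?oldsite\.com$ [NC]
RewriteRule ^(.*)$ https://www.newsite.com/$1 [R=301,L]
```

Skipping that redirect is what people hoped would leave the penalty behind – and Matt’s answer indicates it may not.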

Keeping your parameters clean is a best practice in both engines

.@DuaneForrester says keep URLs as clean as possible in regards to parameters for Bing

— Rae Hoffman (@sugarrae) March 13, 2014

.@mattcutts agrees with Duane in parameter cleanliness – not because of penalty worry but because of split link pop issues

— Rae Hoffman (@sugarrae) March 13, 2014

Duane commented here that webmasters shouldn’t simply rely on canonical tags but rather try to fix the problem at the source and clean up the URLs. He said canonical was meant for instances where you couldn’t do so, but that you shouldn’t be using tons of them.

Matt chimed in saying that you could use the canonical tag as much as you wanted without issue. But he was also quick to mention that it was indeed best to fix the problem at the source, if possible, to prevent “split link popularity” issues.

The mention of potential split link popularity left me wondering whether the canonical tag fully transfers link popularity to the canonical URL. I’ve always felt like Google has given the impression that a canonical tag essentially merges all aspects of a duplicate page into the correct source page. But the statement above might imply otherwise.

That said, I hate when people dissect every word Matt says as if they have a beeline to his brain. His comment could have meant multiple things or nothing. But that was the question mark that entered my head after hearing his comments on the issue. Either way, I’ve always been a fan of only using the canonical tag when fixing the issue at the source isn’t possible, so it doesn’t change much for me.
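For anyone unfamiliar with the tag in question, here’s a minimal sketch (hypothetical URLs) of how a parameter-laden duplicate points search engines back at its clean source:

```html
<!-- Placed in the <head> of a parameterized duplicate such as
     https://example.com/widgets?sessionid=abc123&sort=price -->
<link rel="canonical" href="https://example.com/widgets" />
```

Per Duane and Matt, the cleaner fix is to not generate those parameterized URLs in the first place; the tag is the fallback.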

Widgets are okay for some companies, but probably not yours

.@mattcutts says all widgets not created equal – some are legit, some aren't – cites "intent" < from me: damn shame you can't scale intent

— Rae Hoffman (@sugarrae) March 13, 2014

Ok, so my heading on this question is purposely me being a smartass. This topic was spurred by Danny asking if Getty was breaking Google guidelines by linking back to their site in their image embeds. Matt said no, because Getty’s “intent” wasn’t to manipulate Google. He cited several more examples of large brands doing this without the “intent” of manipulating Google. He also said that keyword-based links from widgets were really bad, regardless of intent.

The problem here for me was that you can’t scale determining intent. So if you’re not big enough for Google to look at and weigh in on your “intent”, then you’d probably be at risk of getting smacked for using widgets, even if your “intent” was legitimate.
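To illustrate the distinction Matt drew – with hypothetical markup, since no specific widget was cited – the difference is roughly between an embed that credits its source plainly and one that smuggles in keyword-rich anchor text:

```html
<!-- Hypothetical embed code for a "legit" widget: a plain, branded credit link -->
<script src="https://example.com/weather-widget.js"></script>
<a href="https://example.com/">Example Weather Widget</a>

<!-- The keyword-based anchor Matt flagged as "really bad", regardless of intent -->
<script src="https://example.com/weather-widget.js"></script>
<a href="https://example.com/">best cheap car insurance quotes</a>
```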

Seriously, get “mobile ready” ASAP

.@mattcutts says they want mobile friendly sites to show in search results on mobile devices and he expects that trend to increase

— Rae Hoffman (@sugarrae) March 13, 2014

Matt had already mentioned that mobile may soon overtake desktop in regards to people searching on Google. He also mentioned that sites not delivering a good mobile experience might not rank as well for users searching from a mobile device.

To be very clear – he did not say penalty or anything even remotely close to it.

Example cited was flash won't render on iPhones and is a bad experience on an iPhone searcher

— Rae Hoffman (@sugarrae) March 13, 2014

He used Flash not rendering for an iPhone user as an example. If your site is Flash-based, then Google serving it up as a result to iPhone users wouldn’t be a “good experience” for the user, and they may choose to adjust their results for that searcher accordingly.

What this means for “us” is that getting sites mobile-ready can no longer sit on the back burner (if you’re reading my non-mobile site from a mobile device, then yes, I am a pot calling a kettle black). If being mobile responsive has been an item on your to-do list that you haven’t yet found time to address (like I haven’t here on Sugarrae), then you’d better make the time – and soon.
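As a starting point for going responsive – a sketch of the basics, not a full solution – the standard viewport declaration plus a simple media query looks like this:

```html
<!-- Tell mobile browsers to render at device width instead of a zoomed-out desktop view -->
<meta name="viewport" content="width=device-width, initial-scale=1">

<style>
  /* Hypothetical breakpoint: stack the layout on narrow screens */
  @media (max-width: 600px) {
    .sidebar { display: none; }
    .content { width: 100%; }
  }
</style>
```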

How high is the “risk factor” for a site being penalized?

.@DuaneForrester says Bing doesn't give out a lot of penalties – says you'd have to seriously, seriously spam to get one #smx

— Rae Hoffman (@sugarrae) March 13, 2014

Duane was very clear that Bing doesn’t levy a lot of penalties. He said you’d have to do something really, really bad to get one. He implied Bing was smart enough to discount spam vs. needing to penalize it.

.@mattcutts says re penalties that Google has manual spam detection teams looking at 40 different languages

— Rae Hoffman (@sugarrae) March 13, 2014

Matt, on the other hand, let us know that Google is policing spam – in 40 different languages when it comes to manual review (I didn’t even realize there WERE 40 different languages). But Matt also was quick to note that Google mainly relies on its algorithms to detect spam, implying the large majority of penalties are algorithmic and not handed down manually.

How long do penalties last?

.@mattcutts says severity of infraction can affect the length of time a site will be penalized #smx

— Rae Hoffman (@sugarrae) March 13, 2014

Matt then noted that the time frame of your penalty can be affected by the severity of your infraction. According to John Mueller, the average site is probably looking at 6-12 months to recover from a penalty even after cleaning it up. (*cough unless you’re a big brand?)

.@mattcutts if you get hit by panda or penguin you HAVE to wait to for data refresh of that filter #smx

— Rae Hoffman (@sugarrae) March 13, 2014

I’ve often noted when discussing Penguin recovery that if your penalty is algorithmic, then after cleaning everything up you’ll have to wait for the next refresh of the Penguin filter to “be released” from said filter (same goes for Panda). Matt has confirmed this to be true before, but did so (blatantly) once again during the session.

Since Google says they’re no longer “announcing” the filter refreshes, Danny asked Matt to give some time frames for both Panda and Penguin data refreshes. Matt was reluctant to give an answer, so Danny started “suggesting some”.

.@mattcutts says panda is somewhat monthly and that data refreshes are rolled out over several days #smx

— Rae Hoffman (@sugarrae) March 13, 2014

After some back and forth, Matt said that saying the Panda filter updated somewhat monthly was a fair statement. He also pointed out that Panda refreshes are now done over a stretch of days vs. being a stark “hit” in one day as they’d been in the past.

When it came to Penguin, Matt was much more “dodgy” on giving a time frame than he was on Panda. Danny asked if there had been a Penguin data refresh since the last announced one (October 4th, 2013 for those not keeping track).

.@mattcutts says re Penguin he doesn't believe there's been another penguin refresh since last announced one (oct 4th) #smx

— Rae Hoffman (@sugarrae) March 13, 2014

Matt said that to his knowledge, no, there had not been a Penguin data refresh since that date. Danny pressed on, asking whether six months was a fair estimate for the Penguin refresh time frame.

.@mattcutts says 6 months is somewhat a fair timeframe to expect penguin data refreshes but hem haws it could be longer as well #smx

— Rae Hoffman (@sugarrae) March 13, 2014

Matt really “ahhhh, uhhhh”-ed this one, but then told Danny six months was a “somewhat fair” timeframe to expect between Penguin data refreshes – and pointed out that Penguin data refreshes are more complicated to implement, so to speak, than the Panda ones.

As the local carousel becomes more prevalent, how do we better rank in it?

.@mattcutts says re carousel he can't highlight how to better appear in it – cites spammers as why they have to be hush, hush #smx

— Rae Hoffman (@sugarrae) March 13, 2014

Matt completely clammed up here and offered no advice at all. He cited spammers exploiting it as the reason they have to stay completely mum on the topic.

What should you be focusing on to rank in today’s algorithms?

.@DuaneForrester says priorities to rank in bing – content then usability then social then link building #smx

— Rae Hoffman (@sugarrae) March 13, 2014

Duane cited focusing on content, usability, social signals and link building – and he made it clear he stated them in that order.

.@mattcutts says he agree with @DuaneForrester completely – adds google wants to rank popular websites #smx

— Rae Hoffman (@sugarrae) March 13, 2014

.@mattcutts says he knows you're tired of hearing "write good content" chooses to emphasize with write unique content, amazing content #smx

— Rae Hoffman (@sugarrae) March 13, 2014

Matt said he agreed with most of what Duane had said. He then tried to emphasize you needed “great content” while telling us he knows we’re tired of hearing him say that. He started to give a few examples of sites he felt were producing great content and “doing it right”. The one he seemed to mention most was Android Police.

Matt joked that as a Googler, he hated that they were somehow able to get leaks on Android, but was quick to point out that despite that, they were doing it right, so to speak, as an example of “great content” under his definition of it.

Does Google have and use “Author Rank”?

.@mattcutts "author rank" (he didn't call it that, Danny did) is used when it comes to in depth articles

— Rae Hoffman (@sugarrae) March 13, 2014

Danny asked Matt if Google was using “Author Rank” to influence rankings. To be clear, Matt did not call it “Author Rank”, but he also didn’t correct Danny on using the term. He simply said that yes, those signals were being used in regards to the In Depth articles appearing in Google. He did not elaborate on whether they were or weren’t being used to determine rankings outside of In Depth articles. He was clear that he was only commenting on their use in regards to In Depth articles.

Google and JavaScript / iFrames

I didn’t live tweet about this one, but it’s worth noting that Matt mentioned Google is now much more able to read and execute JavaScript. When Danny asked about Getty image embeds, he specifically asked if the fact that they were in an iFrame meant they had no impact in regards to Google. Matt was coy, saying only that Google was getting better in regards to iFrames.
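For reference, an embed of that style – hypothetical markup here, not Getty’s actual code – places the image (and typically a credit link) inside an iframe’s own document, which crawlers historically ignored:

```html
<!-- The host page only contains the iframe; the photo and its credit
     link live inside the framed document, so whether Google "sees" that
     link depends on how well it reads iframe content -->
<iframe src="https://embed.example.com/photo/12345"
        width="594" height="465" frameborder="0" scrolling="no"></iframe>
```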

What do we need to know about Hummingbird?

.@mattcutts says lot of hummingbird is targeted to better understand queries and weight words to understand meaning #smx

— Rae Hoffman (@sugarrae) March 13, 2014

Matt answered this pretty quickly and succinctly as if there wasn’t much that needed to be said or done on the topic. He definitely implied via his statements that it was more about how Google treated search queries and less to do with “our websites” – which is something Ammon Johns (in my opinion, correctly) declared a long time ago.

How concerned do I need to be about Negative SEO?

.@mattcutts re negative SEO – he says a lot of people claim negative SEO who are not experiencing neg SEO but former bad practices #smx

— Rae Hoffman (@sugarrae) March 13, 2014

Matt seemed to feel that most people claiming negative SEO weren’t actually “victims” of negative SEO, but rather people who once employed shady practices (or once employed firms that used shady practices).

This is a part I tend to nod my head at, after seeing multiple clients come to us for Penguin recovery telling us “their old SEO firm said they got hit by negative SEO” when, in fact, everything I’m looking at says that isn’t the case – negative SEO is simply the scapegoat for old work coming back to bite them in the ass.

He, as usual, implied negative SEO isn’t something the average webmaster needs to worry about. There, I don’t necessarily agree. But I also believe there’s not much you can do about it if it hasn’t happened to you except worry – and that’s not productive for us as marketers.

I have some bad links, but haven’t been penalized – what do I do?

.@mattcutts says if you're aware of bad links to your site you should probably disavow them even if you're not penalized #smx

— Rae Hoffman (@sugarrae) March 13, 2014

Matt said that if you’re aware you have bad links, he’d probably disavow them. Someone tweeted at me that Google has always recommended removal over disavowal and was a little confused by Matt’s immediate jump to disavowing. To be fair, the discussion heavily centered around when to disavow or not disavow, and I think that was the reasoning behind Matt saying “disavow” vs. “remove then disavow” – AKA, don’t read too much into that.

After the session, Matt apparently read my live tweets of it.

@sugarrae only thing I'd add is if it's 1-2 links, may not be a big deal. The more it gets close to "lots," the more worthwhile it may be.

— Matt Cutts (@mattcutts) March 13, 2014

He tweeted back essentially clarifying that he wasn’t trying to incite paranoia. I took his response above to mean that if you have a few bad links you didn’t obtain yourself, you probably don’t need to worry. But if you have a shitload of comment spam you created during your 2006 link building campaign and haven’t been “hit” for it yet, you may want to be proactive in getting those links removed or disavowed to avoid being hit in future Penguin updates.
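For anyone who does end up going the disavow route, the file Google’s disavow tool accepts is plain text – one URL or domain: entry per line, with # for comments. A minimal sketch using hypothetical domains:

```
# Asked the site owner to remove these on 2/1/2014 - no response
domain:spammydirectory.example.com

# Individual comment-spam URLs left over from the old campaign
http://someblog.example.com/some-post/#comment-992
http://anotherblog.example.com/widgets-rock/#comment-13
```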
