Search Engine Optimization Track: So You Want To Test SEO?
Editor’s note: We have issued an apology to speaker John Andrews. We chose to keep this original post intact so that readers had a reference of the situation. Read about the lessons we took away from this experience here: Risks and Rewards of Handing Over Your Blog. Bruce Clay, Inc. has nothing but respect and gratitude for the guest bloggers who pitched in to our liveblogging efforts and to the speakers who share their knowledge at conferences like SMX. For another view of this session from one of our SMX guest livebloggers, see Alan Bleiweiss’s coverage of So You Want To Test SEO?
This is a guest liveblog post by Gil Reich. Gil is the vice president of product management at Answers.com. He blogs at Managing Greatness: Strategy in the Age of Search & Social, where his favorite topics include User Generated Content, Search Engine Optimization, monetization, and leadership. Follow him on Twitter at @GilR.
OK, Vanessa Fox says it’s going to be an awesome session. So there you go.
She’s also going to accept questions by Tweets to her personal account. And if you just want to ask a question without technology, we’ll mock you, but we’ll take your question.
Moderator: Vanessa Fox, Contributing Editor, Search Engine Land
Speakers:
John Andrews, Seattle SEO Consultant
Jordan LeBaron, Senior Consultant, Omniture, An Adobe Company
Branko Rihtman, R&D SEO Specialist, Whiteweb
Conrad Saam, Marketing, Avvo
Ooh, the sponsored statement. It’s from Optify. Do they sponsor this post? Where’s my cut? Oh, should I cover them? They give you really advanced reporting that would be really interesting for this audience. New dashboard to really customize your view. Integrates with SalesForce, ExactTarget. International. They want to get to know you.
OK. “How many of you do some testing? Lots of you? How many think you have all the right processes in place?” OK, that guy.
Conrad steps up.
How is statistical testing done? You hear terms like statistically significant. What does that mean? He’s kind of visual, so he’s going to go visual on us. Which may be hard to blog, but we’ll manage.
If we had 5 men and 5 women and the men were 4 feet taller, you’d be confident that men are much taller than women. But if it were half an inch, you wouldn’t be so sure.
Shows that the average person has one breast and one testicle. Which is why you shouldn’t use averages. [Because the clothing you’d have to get would be really weird.]
Why don’t you need to test? Because we made a change and then our graph went way up, so we know this caused that. He’ll show later why even this isn’t really true.
He’s going through the fundamentals of statistical testing. Bell curve.
Types of tests: continuous & binary.
With sampling you take some set of data and interpret what the curve looks like.
When you have high variability your bell curve is flatter.
So variability is very important in determining what your data looks like.
[Yeah, this isn’t working for me, sorry. He’s trying to do Stats 101.]
OK, a real world example. We’re going to test a change. Check your confidence interval before talking to your boss.
He makes a change and shows the average rise and drop in a spreadsheet. Then he sees that it’s not all that different from his control group. This is bad.
Here’s the good way. Calculate a T-test. Excel has a function for this. [Wow, he spends 2 minutes explaining a bell curve then just dives into a 2-tail T-test and thinks we’re all following. Sorry dude.]
He took the outlier out and his confidence went up. Now, this is cheating. But it shows what you should really do: increase your sample size.
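Conrad’s Excel walkthrough translates directly to code. Here’s a minimal sketch of the same two-tailed, two-sample T-test in Python (scipy standing in for Excel’s TTEST function); the visit counts are made up for illustration, since the session’s real data wasn’t shared.

```python
# A minimal sketch of the spreadsheet T-test Conrad describes.
# The daily visit counts are invented for illustration.
from scipy import stats

# Hypothetical daily organic visits: control pages vs. changed pages.
control = [412, 398, 425, 401, 390, 418, 407]
changed = [430, 445, 421, 452, 438, 429, 447]

# Two-tailed, two-sample t-test -- the equivalent of Excel's
# TTEST(array1, array2, 2, 2).
t_stat, p_value = stats.ttest_ind(changed, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A p-value under 0.05 is the conventional bar for calling the
# difference statistically significant rather than noise.
```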
OK, binary tests. You either do or you don’t. You can’t be half pregnant. [Unless you have one breast and one testicle.]
Abtester.com has a Confidence Calculator. [Cool. I feel more confident just looking at it.]
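No idea what’s actually under the hood of that calculator, but for a binary did-or-didn’t metric the standard tool is a two-proportion z-test. A hand-rolled sketch, with invented conversion numbers:

```python
# A two-proportion z-test: the standard significance test for binary
# outcomes (converted / didn't convert). All numbers are invented.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return the z statistic and two-tailed p-value for two rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# 200 of 5,000 visitors converted on version A; 250 of 5,000 on B.
z, p = two_proportion_z(200, 5000, 250, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # here p is about 0.016, under 0.05
```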
Uh oh, 30 seconds left. Run through things not to do:
- Bad sample set
- Seasonality
- Non-bell-curve distribution
- Not isolating variables
Goes back to the example he didn’t test. Says maybe it was something else, like getting out of the Google Sandbox.
OK, moving on to John Andrews. Vanessa says she thinks all the presenters have 1 slide that scares her.
John says don’t worry if you don’t know stats; his presentation is all calculus and linear algebra.
He wants actionable info: how to rank better, insulate against sudden changes, and avoid penalties.
“Scientific reports” of SEO testing are usually supporting claims. Title tag length. PageRank sculpting does or doesn’t work. There is or isn’t this or that penalty…
The value has shifted from the data to the claim. This is marketing, not science. [Huh? Isn’t science about having a hypothesis and testing it?]
Marketers make claims and tell stories. Scientists use evidence.
Scientists have a peer review process. If you pretend you’re doing science but you’re doing junk science you don’t get a second chance. [Man, this guy’s whole presentation is about blasting Rand Fishkin and others as junk scientists. And he’s not interesting about it, he’s just bitter and superior. Dudes, I came here for help learning how to test SEO and we’re in the middle of the second presentation and I have NOTHING! Stats 101 followed by SEOs Suck at Science.]
“If you really need to learn statistics,” he recommends a comic-book version used in Japan, with a girl who learns statistics to impress a guy (apparently The Manga Guide to Statistics).
“Instead of listening to somebody say what’s true” share your data and let us scientists analyze it. [Is there a Gong? Can I make this guy go away?]
How to contribute to the Science of SEO?
- Science is dull & expensive, so don’t do it yourself
- Most scientific experiments don’t produce significant results. That’s why you don’t do science. You like immediate results.
- Scientists learn by making mistakes and proving themselves wrong
Here’s what we need to do:
- Publish your data w/o making a claim
- Let others analyze your data
- They will cite your original data, and hopefully discuss it
- Elsewhere, publish a discussion about your data and what it might suggest. Cite your published data and others’ published data.
- Make soft claims. “I think …”
- Offer analyses. OK to make mistakes
- Participate in discussions
Similar to what we do now, with an emphasis on opening it up and making softer claims.
“You are acting like a scientist (congrats!)” [This guy is so condescending. He’s unbelievable.]
Hypothesis: You’ll get more links by publishing your data and opening it to interpretation than by making claims. [Maybe. Would be interesting if he actually tested that instead of just asserting it and expecting us to accept it. So much for being a scientist and not a marketer.]
Next up: Jordan from Omniture. OK, dude, you’ve got a low bar to clear. Do it.
So you want to test SEO? [Yes, that’s why I’m at this session.]
How do you kill a vampire? What are the 2 primary ways? Stake through the heart. Direct sunlight. [Finally, I’ve learned something at this session. Oh wait, I already knew that. Damn!]
Wait, this isn’t true! Apparently people who read Twilight are misinformed about how to kill the undead. Or are they? What’s the truth?
So we’re going to test it. He claims to have never killed a vampire himself. I’ll believe him. [Wait, are those fangs?! Aaaahhh!!!!]
So whenever we have something this important, we turn to Craigslist.
Insecure High School loner seeking vampire who sparkles. [Haven’t learned anything yet, but he’s entertaining. Good start.]
He says don’t trust this guy. Shows a nice picture of Matt Cutts. Now a scary picture of him.
Why do we test? You need evidence to push changes through your organization.
He says he got a B- in statistics, so don’t worry about his presentation.
Methodology:
- Plan
- Execute
- Monitor
- Share
Maintain consistency so you can easily replicate and monitor on an ongoing basis.
You can test the impact of SEO changes on your conversions.
When you make a change, see if there’s a negative impact on conversions. If there isn’t, you’re safe to move ahead.
He really advocates measuring the impact on conversions.
Baseline Metrics:
- Visits or Searches
- Average SERP position. You can actually pull that from the incoming referrer URLs (see the sketch after this list)
- KPIs (key performance indicators)
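For the average-SERP-position trick: at the time of this session, Google’s referrer strings carried a cd= parameter with the clicked result’s rank, so you could average it out of your logs. A rough sketch, assuming those Google-style referrers (adjust for whatever your analytics actually records; the example URLs are made up):

```python
# A rough sketch of pulling average SERP position from referrer logs.
# Assumes Google-style referrers carrying a cd= rank parameter;
# the example URLs are made up.
from urllib.parse import urlparse, parse_qs

referrers = [
    "http://www.google.com/url?sa=t&q=seo+testing&cd=3",
    "http://www.google.com/url?sa=t&q=seo+testing&cd=7",
    "http://www.google.com/url?sa=t&q=test+seo&cd=1",
]

positions = []
for ref in referrers:
    params = parse_qs(urlparse(ref).query)
    if "cd" in params:
        positions.append(int(params["cd"][0]))

if positions:
    print(f"Average SERP position: {sum(positions) / len(positions):.1f}")
```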
Reports:
- Trended reports
- Granularity
Data points to consider:
- Channel
- Keyword Groups
- Keyword
- Entry Page
- Create a Dashboard
Key Takeaways:
- Don’t trust anecdotal recommendations
- Don’t forget conversions
- Consistent testing methodology
Last up is Branko Rihtman (@neyne). Hey, just remembered Vanessa (who is moderating) is a Buffy fan. Should have just asked her how to kill vampires.
He’s a scientist, one of those boring people John was talking about.
Born in Bosnia, lives in Israel. But being on Vanessa’s panel is the scariest thing he’s ever done. Favorite beer is Beck’s. OK, so we’re good to go.
The scientific method is a way to think about problems. The big points are in analyzing & interpreting the data and publishing the results. Things get published that didn’t really get peer reviewed.
Asimov: “The most exciting phrase in science isn’t ‘Eureka’ it’s ‘That’s funny.’” Keep your eyes open to anomalies. You can’t test everything.
Gather info and resources before you test.
There’s a great variety in terminology, which makes it hard to gather info.
Perform experiment and collect data.
Testing keyword choice: Don’t use either extreme: nonsense terms (sflkejf, 0 results) or highly competitive terms (payday loans, 5.5M results). Use slightly competitive phrases like “translational remedy.”
Multi-directional experiments:
State A → change → State B. Then undo the change and see if you go back to the way it was.
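As a procedure, Branko’s reversal test looks something like the sketch below. The rank-checking and page-editing functions are hypothetical stubs, not any real API; wire them up to your own tooling.

```python
# A sketch of the multi-directional test: State A -> change -> State B,
# then undo and check whether you return to State A. The two helper
# functions are hypothetical stubs -- replace with your own tooling.

def get_ranking(page: str) -> int:
    """Stub: ask your rank tracker where this page sits for the test term."""
    return 0  # placeholder

def set_title(page: str, title: str) -> None:
    """Stub: apply (or revert) the on-page change under test."""

def reversal_test(page: str, old_title: str, new_title: str):
    state_a = get_ranking(page)      # baseline
    set_title(page, new_title)       # make the change, wait for re-crawl
    state_b = get_ranking(page)
    set_title(page, old_title)       # undo the change, wait again
    state_a_again = get_ranking(page)
    # If rank moves A -> B and then returns to A (and this repeats
    # across several pages), the change -- not an algorithm update --
    # is the likely cause.
    return state_a, state_b, state_a_again
```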
We don’t want our expectations to influence the conclusions.
Bounce the findings off someone.
Data analysis: Statistical analysis is very hard to do. Get help. Talk to a real statistician. Not to someone who has a nephew who has a cousin …
Avoid personal bias: “We observe what we expect to observe, until shown otherwise” —Ludwik Fleck
Go Social!
Ooh, a bonus. An SEO testing secret ingredient. Ethanol. Commonly found in beer. Find people who perform experiments and buy them beer. [Because that’s what the world needs, more drunk SEOs. Remember, friends don’t let friends SEO drunk.] Scientists can’t hold their alcohol, so go wild.
OK, questions.
Q: When you’re testing on a moving target (the algos), what are the challenges, and what do you suggest?
Branko: The multiple direction testing method I discussed deals with this. Make the change, see the result, undo the change, see if you went back to the beginning.
Vanessa: I really liked that idea. And doing that a few times.
Branko: And on a few pages.
John: I do consulting [no kidding #sarcasm] and when I see sites that aren’t really good but are ranking, and then when there are algo changes I look at those sites and see if they were affected.
Vanessa: You can’t really tell by looking at the forums; two guys complaining doesn’t tell you anything about everyone else.
Q: What tools did you use to generate graphs?
Branko: There was a site called SearchArchives, which doesn’t exist anymore. We built our own tools. Use proxies.
Q: Do you use external control groups? Compare to others’ sites?
Conrad: We watch our competitors, but we’re entirely focused on our own site. If you watch your competitors you may stumble on a test that they’re running, and find yourself walking down some bad alleys toward some bad black-hat ideas.
Vanessa: You need to know whether what they’re doing is helping them or not.
Branko: It’s a great idea, and I use websites that I have no control over as a control group, to see whether a link pointing at my website has the same effect on another site.
Branko: Recent study of ranking factors in local search. They published Excel spreadsheets with data that anybody could go over. We joined forces and did stuff together.
Q: Can you give us some ideas if somebody wants to set up an SEO testing environment?
John: Just go into Google Webmaster Tools, Analytics… just use it. You’ll learn why it may not be suitable for you in the long term, and that knowledge is priceless.
Branko: The problem with testing is that it’s usually isolated from the places we want our websites to be. It creates an artificial environment, like using nonsense keywords. Isolate certain sections of your site that aren’t so profitable and experiment there.
Jordan: Like Branko said, set aside some pages on your site that aren’t doing so well.
Conrad: Make sure that the people creating and listening to the reports understand the basics of statistics.
This was a disappointing session to me. I thought it was going to be about testing your SEO efforts. Instead it was on reverse engineering the algorithms. I should have read the session description, and not just the title “So you want to test SEO?” which I guess I misinterpreted. Jordan and Branko were good. I wish Rand and Christine Churchill were on the panel. The PPC track usually has great stuff on analyzing and testing. That’s what I thought this session would be. But it wasn’t.