Google recently changed how the Quality Score tips are displayed inside of AdWords. The biggest change is that some of the factors are now shown relative to your competition. While this is a huge, and very welcome, change, it also opens a Pandora's box of cases where quality score doesn't make much sense.

How Much Does Landing Page Matter?

First off, landing page speed is no longer displayed. That's not a huge loss, as I've never seen a problem flagged for landing page speed. Instead, the landing page issues have been rolled into a single 'landing page experience' score.

For years, if your landing page was dinged by quality score, you were in trouble. Rarely would you ever see a quality score above 3 or 4 if you had landing page issues. That no longer seems to be the case.

Now, when I see a 10, this is what I was expecting:

[Screenshot: a quality score 10 keyword with above average factors]

But this is what I was also seeing as a 10 across several accounts:

[Screenshot: a quality score 10 keyword with a below average landing page experience]

I was floored when I saw this keyword. A below average landing page experience, and yet the quality score was a 10. It didn’t take me long to find dozens of examples where this was occurring.

The problem with relative data is that Google doesn't tell you where the cutoffs for above or below average sit. For example, if average is 1, is below average 0.99, or 0.9, or 0.8? If average is a 0.8 to 1.2 range, then 'below average' is meaningful. If below average starts at 0.99, then it might not mean much at all.
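To make that concrete, here's a minimal sketch (with made-up numbers; Google publishes no thresholds) of how the width of the 'average' band changes what a label like 'below average' actually tells you:

```python
def bucket(score, band=0.2):
    """Label a relative score (1.0 = the competitive average) given a
    symmetric 'average' band around 1.0. Band widths are hypothetical."""
    if score < 1.0 - band:
        return "below average"
    if score > 1.0 + band:
        return "above average"
    return "average"

# With a wide band (0.8 to 1.2), a 0.99 reads as plain "average".
print(bucket(0.99, band=0.2))    # average

# With a razor-thin band, the same 0.99 becomes "below average",
# even though it's a rounding error away from the mean.
print(bucket(0.99, band=0.005))  # below average
```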

It is Mostly CTR

For a long time, landing page was more of a negative than a positive for your quality score, and the rest was all about CTRs. That still seems to be the case. In this example, everything is average or above average, yet the QS is still a 4:

[Screenshot: a quality score 4 keyword with all factors average or above average]

So, in example 1, the landing page was below average, but the CTR was good enough to get a 10. In this case, having an above average expected CTR, with everything else average, meant the QS was a 4. That seems quite counterintuitive, especially when you see this keyword, which is also a 4:

[Screenshot: another quality score 4 keyword with below average factors]

There's no way that both of these keywords should be a 4. The first one is all average or above average; this one is below average. This one should probably be a 4, but not the first.
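One way to see how this could happen is a toy scoring model where expected CTR carries most of the weight. To be clear, the weights below are invented for illustration (Google has never published its formula), but they show how a keyword with a stellar CTR can absorb a weak landing page while an all-average keyword sits in the middle:

```python
# Hypothetical component scores on a 0-1 scale. The weights are
# invented for illustration and are NOT Google's actual formula.
WEIGHTS = {"expected_ctr": 0.6, "ad_relevance": 0.25, "landing_page": 0.15}

def toy_qs(expected_ctr, ad_relevance, landing_page):
    blend = (WEIGHTS["expected_ctr"] * expected_ctr
             + WEIGHTS["ad_relevance"] * ad_relevance
             + WEIGHTS["landing_page"] * landing_page)
    return round(1 + blend * 9)  # map the 0-1 blend onto the 1-10 scale

# Stellar CTR, weak landing page: still lands near the top.
print(toy_qs(expected_ctr=0.95, ad_relevance=0.8, landing_page=0.3))  # 8

# Everything merely average: stuck in the middle.
print(toy_qs(expected_ctr=0.5, ad_relevance=0.5, landing_page=0.5))   # 6
```

This doesn't reproduce the exact numbers above, but it shows the directional pattern: if CTR dominates the blend, a below average landing page barely moves the score.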

‘Relevance’ Still Matters

While relevance is technically another set of CTRs, it's usually best to think of it as semantics. And it matters:

[Screenshot: a quality score 2 keyword flagged for relevance]

The majority of quality scores I saw at a 2 had issues with relevance, not CTR.

'Average Ads' Can Be 3s to 10s

Google has made enough claims over the years about 7s being good and 6s needing a bit of help, but what is average?

[Screenshots: keywords where every factor is 'average', with quality scores ranging from 3 to 8]

I'm seeing words from quality score 3 to 8 where everything is 'average'.

I'm seeing quality score 7 words that are all average, or where only one or two items are 'above average'.

I'm seeing quality score 9 words that have fewer above average items than quality score 5 and 6 words:

[Screenshot: a quality score 9 keyword with fewer above average items than the 5s and 6s above]

You Can’t Diagnose Paused Words

[Screenshot: a paused keyword with every factor showing 'below average']

If a word is paused, the metrics will all show as below average. This lends credence to the theory (one that I subscribe to) that a paused word can't hurt you: Google isn't collecting metrics, so everything reads as below average because there's no data coming in. A better message here, instead of just 'below average', would be a good idea.
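In the meantime, it's worth filtering paused keywords out of any quality score analysis so their empty 'below average' flags don't skew your read. A minimal sketch with pandas, assuming a keyword report export with hypothetical Status and Quality score columns (your column names may differ):

```python
import pandas as pd

# Hypothetical keyword report export; real column names depend on
# how you pull the report.
report = pd.DataFrame({
    "Keyword": ["blue widgets", "red widgets", "green widgets"],
    "Status": ["enabled", "paused", "enabled"],
    "Quality score": [10, 1, 4],
})

# Paused words show every factor as "below average" simply because no
# data is coming in, so drop them before drawing any conclusions.
active = report[report["Status"].str.lower() != "paused"]
print(active[["Keyword", "Quality score"]])
```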

So, What Can You Take Away?

A good landing page is necessary for conversions. A bad landing page (in Google's eyes, not the searcher's) could have a negative quality score effect, or it could not.

An ad with ‘below average’ expected CTR can have a quality score of 1 to 6. I didn’t see any quality score 7 or higher words with below average expected CTR.

Ad relevance matters. You can have a 10 with high relevance but everything else average. I didn't see any high quality score keywords with below average relevance.

The 'average' benchmarks seem to be different for each of the displayed data sets. The fact that landing page can be below average on a 10, yet relevance and expected CTR can be average anywhere from a 4 to a 10, is either an error in the quality score algo or a sign that the 'average' range is quite large.

In the end, while these numbers are relative, I think we need a better scale than below average, average, and above average. A 1-5 range (if you want to make it semantic with bad, poor, average, good, excellent, that will also work, Google) would be much more insightful.
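Something like the sketch below would already be a big step up. The quintile cutoffs are made up for illustration; Google would obviously pick its own boundaries:

```python
# Map a keyword's percentile standing against competitors onto the
# five-point scale suggested above. Cutoffs are invented for
# illustration; Google would choose its own.
LABELS = ["bad", "poor", "average", "good", "excellent"]

def five_point(percentile):
    """percentile: 0-100 standing against competing advertisers."""
    for i, cutoff in enumerate([20, 40, 60, 80]):  # quintile boundaries
        if percentile < cutoff:
            return i + 1, LABELS[i]
    return 5, LABELS[4]

print(five_point(35))  # (2, 'poor') -- says far more than 'below average'
print(five_point(59))  # (3, 'average')
```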

The quality scores I'm seeing don't make sense in many cases (and I only spent 10 minutes looking for these examples; they aren't the strange ones, they are the norm), and I think it's a range issue.

So, kudos to Google for showing some relative data; however, my hope is that Google goes much further with the ranges. As it stands, the new quality score transparency is not that useful and will raise more questions than it answers.
