I’ve blogged before about how to interpret CTR and what conditions are necessary for a valid comparison of two ads. In one recent case we created two identical ads that went to different landing pages so that we could test the effectiveness of each landing page. The assumption is that each ad — being identical — will deliver virtually identical traffic to each landing page. Well, here’s what we found in AdWords:
Note that these were identical ads running for the same time period (% Served) and at the same average ad position. Under those circumstances we would expect the number of clicks and the CTR to be roughly the same, yet here they differ by a factor of 3.
Is this a statistically significant difference? Random variation alone would produce a standard deviation of about 3 to 5 clicks (roughly the square root of the number of clicks), which puts these two counts several standard deviations apart. That's a lot, and highly unlikely in statistical terms. So what's going on?
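The square-root rule of thumb above can be turned into a quick z-score check. The sketch below uses hypothetical click counts (30 vs. 10, a factor-of-3 gap like the one in the screenshot; the real figures aren't reproduced in the text), and treats each ad's clicks as an approximately Poisson count whose variance equals the count itself:

```python
import math

def poisson_z(clicks_a, clicks_b):
    """Z-score for the difference between two Poisson click counts.

    Each count's standard deviation is roughly sqrt(count), so the
    difference has variance clicks_a + clicks_b.
    """
    return (clicks_a - clicks_b) / math.sqrt(clicks_a + clicks_b)

# Hypothetical counts standing in for the screenshot's figures:
# ad A with 30 clicks, ad B with 10.
z = poisson_z(30, 10)
print(round(z, 2))  # 3.16 -- more than three standard deviations apart
```

A gap of three or more standard deviations corresponds to well under a 1% probability of arising by chance, which is what makes the observed difference so surprising for identical ads.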
We often discuss with clients what the statistics mean, and we often point out that differences in performance are well within a standard deviation, so we can't draw firm conclusions without more data. In this case the difference exceeds what "normal" statistical variance could plausibly explain, so something else must be at play. Possible explanations include the landing pages' quality scores or algorithmic anomalies that favor one ad over the other. Either of those would produce discrepancies in the number of impressions and/or the average ad position, and we see neither. The only remaining explanation is that something about each landing page leads Google to display the ads slightly differently, perhaps showing more Sitelinks for the first ad than for the second.