You are browsing the archive for Testing.


by brad

Step-by-Step Instructions For Testing Low Volume Ad Copy

8:30 am in PPC Marketing Blog, PPC News, Testing by brad

My latest Search Engine Land column is out. It walks through how to test low-volume ad copy, so that accounts with little traffic can still run tests and receive benefits similar to accounts with lots of traffic.

Ad copy testing is essential for anyone running a paid search account. Testing ad copy in high-traffic accounts is fairly easy: you can add new ad copies to existing ad groups, wait, and then examine the metrics.

However, for low-volume accounts testing is not nearly as easy, as it could take years to collect enough data to make statistically significant choices. Therefore, you need to employ specific testing methodologies that examine how aspects of ad copy behave across multiple ad groups at once.

In today’s column, I’ll use video to show how to create, test, and measure ad copy tests for low volume accounts.

Cross Ad Group Testing & Measurement

Please note: if you are seeing this in an RSS feed or in email, you may need to click through to the site to view the video.

 

 

Conclusion

Testing ad copy often leads to higher CTRs, lower CPAs, and ultimately, more profit from your paid search spend. However, when you have mixed signals, such as a higher click-through rate but a lower conversion rate, I prefer a single number for testing: Profit per Impression.
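Profit per Impression is simply profit (revenue minus cost) divided by impressions, which lets you compare ads with mixed CTR and conversion-rate signals on one scale. A quick Python sketch, with entirely made-up ad names and figures:

```python
# Hypothetical per-ad totals; ad names and all numbers are illustrative.
ads = {
    "Ad A": {"impressions": 1200, "clicks": 60, "revenue": 450.0, "cost": 90.0},
    "Ad B": {"impressions": 1150, "clicks": 40, "revenue": 520.0, "cost": 70.0},
}

for name, m in ads.items():
    profit = m["revenue"] - m["cost"]   # profit attributable to the ad
    ppi = profit / m["impressions"]     # profit per impression
    print(f"{name}: profit per impression = {ppi:.4f}")
```

Here Ad B wins on Profit per Impression despite its lower CTR, which is exactly the kind of conflict this metric resolves.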

Just as difficult as creating a test is interpreting the data. Utilizing pivot tables can save hours of analysis when combining data sets. If you are still learning to create pivot tables, please refer to Josh Dreller's column on creating pivot tables.
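As a rough illustration of what a pivot table does when combining data sets for a cross-ad-group test, here is a pandas sketch; the ad group names, headlines, and numbers are all hypothetical:

```python
import pandas as pd

# Illustrative rows: the same two headlines tested across two ad groups.
df = pd.DataFrame({
    "ad_group": ["Shoes", "Shoes", "Boots", "Boots"],
    "headline": ["Free Shipping", "10% Off", "Free Shipping", "10% Off"],
    "impressions": [500, 480, 300, 310],
    "clicks": [25, 12, 18, 9],
})

# Aggregate each headline across all ad groups, like an Excel pivot table.
pivot = pd.pivot_table(df, index="headline",
                       values=["impressions", "clicks"], aggfunc="sum")
pivot["ctr"] = pivot["clicks"] / pivot["impressions"]
print(pivot)
```

The pivot collapses each headline's rows into one line per headline, so you judge the element being tested rather than any single ad group.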

Testing low-volume accounts can be accomplished by utilizing these simple testing techniques. Testing is easy. The hard part is finding the initial momentum to sit down, write a few ads, and then analyze the data. Just remember: it doesn't matter whether you have a high or low number of clicks each month – you can do ad copy testing. When every test you run has the ability to increase your paid search account's profits, ad copy testing should be an ongoing activity for every search marketer.


by brad

Automating The Ad Test Visualization Tool Download File

9:00 am in PPC Marketing Blog, Testing by brad

Yesterday, Chris wrote a fantastic post on Automating The Ad Test Visualization Tool.

However, I forgot to include the link where you can download the file. The post has been updated to include the Excel file, so that you can download it and perform the same analysis that Chris walked through in the article.

You can download the Excel file right here (.XLSX file).

Automating The Ad Test Visualization Tool

9:00 am in PPC Marketing Blog, Testing by chriskos

I came across a great post earlier this week written by Chad Summerhill of PPC Prospector, detailing an Excel process to visually report ad copy tests. It allowed a user to quickly sort through a fair amount of data and confidently read results based upon the charts. You can see a winner and ensure the data is normalized. Instead of using tools that read the numbers and report algorithm-produced winners, it is much more efficient to display the results in a simple, direct fashion that allows the marketer to react.

It also referenced another spreadsheet created by the author that determined an ad test's statistical validity based upon confidence & expected performance. This was an important factor, and it allowed us to make sure the data we were looking at was ready to be judged.

While I read through the articles and their sources, I started building the Ad Copy Charts and downloaded the Statistical Validity tool. By following the steps laid out, I had a five-tab workbook: Charts, Ad Report, Data, ChartData, & Validity.

Working largely with retail clients, I had to update the charts to reflect the metrics that matter to them and give me the data I need to drive returns. The data I download from AdWords is slightly modified from the default views. It is important to drill down to a specific ad group. I then replaced View-Through Conversions with the three Many-Per-Conversion metrics (Count, Cost, & Rate) & Total Conversion Value.

AdStats

Once you have navigated to a specific ad group select the metrics shown here.

While the early versions looked at two ads over roughly six-week ranges, I quickly realized there would need to be some flexibility, but also hard caps. 90 days and four variants seemed to be an appropriate mix that encapsulated nearly all the tests I would be conducting.

AdReport1

You can choose a range of data covering up to four ad variants over a 90-day period and paste it into the Ad Report tab.

From the downloaded report, copy all of the data – do not include the headers or totals – into the Ad Report tab.

This data directly produces charts displaying Click-Through Rate, (Unique) Conversions-Per-Impression, Unique Conversion Rate, All Conversion Rate, & Return-On-Ad-Spend.

Charts1

Charts are produced with no additional effort beyond pasting the data, leaving you time to react to the data as opposed to crunching it.

Now, how it got there is where it gets interesting. Instead of pasting values from tab to tab, I used reference formulas that lead back to the original data. I had to make a leap from the downloaded Ad Report's straight list of ads, broken out by day, to unique ads displayed side-by-side in the Data tab. I used formulas that read the dates and trigger when they reset (assuming a reset represents a new ad variant).

By creating an index, I used vertical lookups to retrieve the appropriate data and lay out unique ads side-by-side. This fed the cumulative formulas for each ad, which were passed into the ChartData tab.
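The date-reset and side-by-side logic can be sketched outside Excel as well. This is not the workbook's actual formula set, just a minimal Python sketch assuming, as the workbook does, that each ad's daily rows appear consecutively and a date that does not advance marks a new variant (rows are illustrative):

```python
# Each row is (date, clicks, impressions), in downloaded-report order.
rows = [
    ("2011-06-01", 10, 200), ("2011-06-02", 12, 210),  # first ad variant
    ("2011-06-01", 8, 190),  ("2011-06-02", 9, 205),   # second ad variant
]

ads = []           # one list of daily rows per detected ad variant
prev_date = None
for date, clicks, imps in rows:
    if prev_date is None or date <= prev_date:  # date reset => new variant
        ads.append([])
    ads[-1].append((date, clicks, imps))
    prev_date = date

# Side-by-side layout: one tuple per day holding every ad's row for that day.
side_by_side = list(zip(*ads))
print(len(ads), "ad variants detected")
```

Each element of `side_by_side` is one day with all variants next to each other, which is the shape the cumulative formulas consume.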

AdReport2

The Ad Report tab recognizes when the dates reset and reads this as a new ad before passing the information to the Data tab.

The ChartData tab allows a user to easily manipulate the charts by hiding columns and rows based upon how much data is being tested when it's not maxed out at 90 days & four variants (two ads for six weeks, for example, or three ads for a week). If you have three ads, your chart will be off, since you are reading zeros for the fourth. If the relevant data is squished to the left, with half of the days being zeros (or two-thirds, as shown below), it is very hard to read. By hiding the unused rows and columns, they are not included in the charts and you can better see the trends emerge.
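Hiding the unused rows has a simple programmatic analogue: trim the trailing zero-filled days before charting. A minimal Python sketch with illustrative numbers:

```python
# A chart range of 8 days holding only 3 days of real data; the trailing
# zeros squish the real trend to the left, so drop them before plotting.
daily_impressions = [100, 120, 90, 0, 0, 0, 0, 0]

last_real = max(i for i, v in enumerate(daily_impressions) if v > 0)
trimmed = daily_impressions[:last_real + 1]
print(trimmed)  # only the days that actually have data
```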

  Charts2

You can hide data to normalize charts when you are not taking advantage of the max range.
Here we see 31 days of data just after it's pasted (1); when the remaining 59 days of null values are hidden (2), the charts can be easily read (3).

The last tab, Validity, is also mostly automated. This sheet was directly downloaded and is a stroke of genius – I merely added some window dressing. While the tab now displays four ads, the computations only use the first two ads, as the sheet originally did. It automatically pulls the stats from the Data tab, compares them to what the sheet recommends, and provides visual cues throughout the workbook showing whether the data entered in the Ad Report is valid to test. The Validity sheet also does a quick ROI analysis on the ad variants. You can adjust the confidence level and expected conversion rate, which factor into determining statistical validity.

Validity1

The Validity tab provides visual cues throughout the worksheet so you can have confidence in the data you are reading.

In essence, you can paste your data into the Ad Report tab, then simply click on the Charts tab to see which ads won and whether the results are consistent. In addition, you can fine-tune the statistical validity of the data you are comparing and adjust a data set to reflect its scope. This allows us to quickly download the reports, see what happened, and respond accordingly.

I could not have made such an important tool so quickly without all of the hard work Chad had done. By reading through his articles and his sources, I was able to take away some new ideas about how to sift through the data. After working in PPC for over 6 years, it's always good to get a fresh perspective.

You can download the Excel file right here (.XLSX file).

This article is written by Chris Kostecki, the PPC Program Manager of Exclusive Concepts’ Profitable PPC service.

Opinions expressed in the article are those of the guest author and not necessarily Certified Knowledge. If you would like to write for Certified Knowledge, please let us know.

Visualizing your Ad Test Results to Boost Confidence

7:52 am in Analytics, PPC Info, PPC Marketing Blog, Testing by chadsummerhill

This is a guest post by Chad Summerhill, Author of the blog PPC Prospector, provider of free PPC tools & PPC tutorials, and AdWords Specialist at Moving Solutions, Inc. (UPack.com and MoveBuilder.com).

Over the past few weeks I've been seeing more discussion about the math behind our ad testing efforts. Can we trust our results? Are they valid? We don't want to blindly make the wrong choice, and it's hard to feel confident about math we don't really understand.

I remember searching, with some difficulty, for information on how to calculate statistical significance for an ad test.  Then I came across some good information on marketing experiments.

After finding this resource, I created a spreadsheet method for ad testing with confidence (95% to be exact).  I guess I wasn’t the only one who could use a little confidence boost, because about 100 PPCers download this spreadsheet each month.
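The spreadsheet's internals aren't reproduced here, but a 95%-confidence comparison of two CTRs is commonly done with a two-proportion z-test; this Python sketch of that standard test uses illustrative click and impression counts:

```python
import math

def two_proportion_z(clicks_a, imps_a, clicks_b, imps_b):
    """Z-score for the difference between two CTRs (two-proportion z-test)."""
    p_a, p_b = clicks_a / imps_a, clicks_b / imps_b
    p_pool = (clicks_a + clicks_b) / (imps_a + imps_b)  # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / imps_a + 1 / imps_b))
    return (p_a - p_b) / se

# Illustrative test: champion ad vs. challenger ad.
z = two_proportion_z(120, 4000, 90, 4100)
# |z| > 1.96 corresponds to 95% confidence for a two-tailed test.
print(f"z = {z:.2f}, significant at 95%: {abs(z) > 1.96}")
```

If |z| stays below 1.96, the observed CTR difference could easily be noise, and the test needs to keep running.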

So, I thought I would share one more way you can add a layer of confidence to your ad testing: charting the cumulative totals, by day, of the metric you are trying to improve (CTR, conversion rate (CVR), or conversions-per-impression (I2C)).

Visualizing the cumulative totals allows you to see erratic behavior from your test ads that might have otherwise gone unnoticed.  I first saw cumulative totals being used in testing a few years ago when Offermatica (landing page testing software bought by Omniture) came on the scene.

You might see some wildly differing results at the beginning of a test, but as the test continues to run each ad’s performance levels out and becomes more consistent.  If it doesn’t, then you probably need to let it run longer.  As your sample size increases, variables outside the scope of your test (time, geographies, operating systems, etc.) will become more evenly distributed across your ad variations.

Visualizing cumulative totals is very easy to do in Excel, but you have to get the right data first.  You can’t segment your ads by ‘day’ in the AdWords interface, but you can download a segmented report.

Getting your ad testing data:

  1. Select the ad group you want to analyze from the tree menu in the AdWords interface.
  2. Click on the ‘Ads’ tab.
  3. Choose the appropriate date range for your test.
  4. Select these columns at a minimum (Clicks, Impressions, & Conversions).
  5. Click on the download icon.
  6. Click on ‘Add segment’ & choose ‘Day’.
  7. Click on ‘Create’.
  8. Open your file in Excel.
Download Ad Report by Day

Once you have your ad data in Excel, you need to prepare it for analysis.  You will have to move some data around and add a simple formula.

Preparing your data for visualization:

  1. Highlight your data and ‘Format as Table’.
  2. Filter for your champion ad.
  3. Copy your champion ad’s data (including the header row).
  4. Paste into a new worksheet.
  5. Go back and filter for your challenger ad (I’m assuming a two ad test).
  6. Copy your challenger ad’s data (including the header row).
  7. Paste into the cell directly to the right of your champion ad’s data.
  8. You should now have your challenger ad’s data right next to your champion ad’s data (make sure the dates line up).  You want one row of data with each ad’s performance for each day of the test.

    Preparing your data for visualization


  9. Delete the ‘Date’ column for your challenger ad as long as it lines up with the dates from your champion.
  10. Create your cumulative metrics (Champ_CUM_CTR, Champ_CUM_CVR, Champ_CUM_I2C, Challenger_CUM_CTR, Challenger_CUM_CVR, & Challenger_CUM_I2C) using a formula like: =(SUM($H$2:H2)/SUM($I$2:I2))
    1. For Champ_CUM_CTR use the formula above in the first free cell to the right of your data.  $H$2 = your first Clicks value for your Champion and by using the $ you lock the first value in the formula.  When you copy the formula down it will calculate a cumulative total.
  11. Repeat for the other cumulative metrics above.
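The Excel running-total formula in step 10 translates directly to code. A Python sketch of the same calculation for cumulative CTR, with illustrative daily numbers:

```python
from itertools import accumulate

# Daily clicks and impressions for one ad (numbers illustrative).
clicks = [3, 5, 2, 8]
impressions = [100, 120, 90, 150]

# Equivalent of Excel's =(SUM($H$2:H2)/SUM($I$2:I2)) copied down: at each
# day, divide the running total of clicks by the running total of impressions.
cum_clicks = list(accumulate(clicks))
cum_imps = list(accumulate(impressions))
cum_ctr = [c / i for c, i in zip(cum_clicks, cum_imps)]
print(cum_ctr)
```

The same pattern yields the CVR and I2C series by swapping in conversions over clicks or conversions over impressions.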

Preparing your data for visualization


Your data should look something like the image above.

  1. Copy all of your data & Paste Special > Values in a new worksheet.
  2. Delete all of the columns except the date and your new cumulative totals.
  3. Reformat your data appropriately (i.e. the date column formatted as a date).
  4. ‘Format as Table’ again.
Preparing your data for visualization



Now you’re ready to visualize your cumulative test metrics using multiple Excel line charts.

Visualizing your cumulative metrics:

  1. Select any cell in your new table and click ‘Line Chart’ from the ‘Insert’ menu in Excel.
  2. Copy your new line chart and paste it twice in your worksheet, so you have three copies.
  3. Delete everything but CTR from the first chart.
  4. Delete everything but CVR from the second chart.
  5. Delete everything but I2C from the last chart.
Cumulative Metrics for PPC Ad Testing



As you can see from the charts above, the results at the beginning of the test were quite erratic.  It would have been unwise to pick a winner within the first two weeks of the test.  By the end of the third week, you start to see a stable trend, and your statistical check should confirm that your sample size is big enough to pick a winner (for this particular test; some tests may take longer).

By adding a visualization of your ad tests, you add another layer of reinforcement for your decision making.  When you see your cumulative metrics diverge and stabilize after the first couple of weeks you can be fairly confident that the trend will not reverse.

Opinions expressed in the article are those of the guest author and not necessarily Certified Knowledge. If you would like to write for Certified Knowledge, please let us know.


Certified Knowledge | Terms of Service | Privacy | Affiliate Program

Google AdWords, PPC, & Internet Marketing Training | Chicago, IL USA