Pro Modules
Optimizer
The optimizer plugin offers simple tools to help you track how well your site is performing, along with helpful suggestions for improving your results.
It provides two kinds of tests: conversion tests and comparison tests. Here's how they work.
Conversion Tests
Let's say you have an offer page for a class, and you want to measure how well that offer page is performing. That is, out of 100 people who see your offer, what percentage of them decide to join? This is a conversion test.
To gather this data, simply put this code on your offer enrollment page:
<(optimizer mode=start id='my_class_test')>
Then, on your offer congratulations page, you add this line:
<(optimizer mode=end id='my_class_test')>
Then, give your site some time to attract visitors and get signups, or go to those pages yourself and test things out. You won't see anything on the page, but the optimizer is tracking every view of the offer page and every view of the congratulations page.
To see the results, go to site.optimizer and look for "my_class_test" in the conversions section of the page. You'll see how many views there have been, how many people signed up (your conversions), and the conversion percentage.
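For example (these numbers are just an illustration), if the offer page logged 250 views and 30 of those visitors went on to reach the congratulations page, the report would show 250 views, 30 conversions, and a conversion rate of 30 ÷ 250 = 12%.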
If you do this for several classes, you can see which ones are performing best, and which ones need to be improved. By studying these offer pages you may be able to glean ideas to strengthen your offers.
Hint: to automate this for all your classes, try putting the start function in the enroll page for your class offer (code.offer.class-enroll) with a unique id for each class, like this:
<(optimizer mode=start id='class_test_{p2}')>
Then on the congrats page (code.offer.class-congrats), you would put the counterpart:
<(optimizer mode=end id='class_test_{p2}')>
This will generate a nice comparison over time of how your various classes are doing.
Comparison Tests
Comparison tests, also called A/B tests, allow you to compare two different options and see which option converts better. Let's say you wanted to test a red button versus a blue button to see which performs better. This takes a bit more setup, but it's not hard.
First, you need to create two sister pages, with page names 1 and 2, that contain your button code. For example, you could create code.buttontest.1 and code.buttontest.2.
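What goes in those sister pages depends entirely on your site. As a rough sketch, assuming your buttons are plain HTML links (the markup and link target here are just placeholders), the two pages might look something like this:

<!-- code.buttontest.1: the red option -->
<a href="/class-enroll" class="button" style="background:red">Enroll Now</a>

<!-- code.buttontest.2: the blue option -->
<a href="/class-enroll" class="button" style="background:blue">Enroll Now</a>

Adjust the markup to match whatever button code your pages already use; the only requirement is that the two pages hold the two variations you want to compare.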
Now you insert the optimizer code on your offer page wherever you want to pull in one of those buttons randomly and test the impact. For this, you add a source parameter to the start line:
<(optimizer mode=start id='button_test' source=code.buttontest)>
Then on the success page, you would put its counterpart:
<(optimizer mode=end id='button_test' source=code.buttontest)>
On the first page, the viewer will get either the red or the blue button at random. In fact, if they refresh the page multiple times, they may see a different color each time!
The success page does not display either button (though the source parameter is required to indicate it is a comparison test), but simply tracks the conversions for each option.
When you go to the site.optimizer page, you will see button_test.1 and button_test.2 side by side, with their respective views and conversions.
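For instance (hypothetical numbers), if button_test.1 showed 120 views and 18 conversions (15%) while button_test.2 showed 115 views and 9 conversions (roughly 8%), that would suggest the red button is pulling ahead of the blue one.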
The Optimizer Command
Sometimes it is easier or more reliable to put the start or end code in a form, rather than on a page. For example, you could put the start code on your offer enrollment page and embed the end code as a command in the actual enrollment form, on that same page. To do that, you just add this line to your form:
[command optimizer mode=end id='button_test' source=code.buttontest]
This particular command, for instance, would let you test the button color options across all your classes, to see which performs better.
While that proximity (having both code snippets on the same page) helps improve the accuracy of your results, it's not always convenient or possible, especially if there are several steps between the start and end of a process. Just know that the optimizer command works exactly the same as the optimizer function.
Final Tips
It's a great idea to be constantly testing and tweaking your site. Data can help you make informed decisions that lead to better membership participation and engagement. But designing good tests is not always easy.
First, you want to start with a clear test goal, and then design a specific test to verify it. Let's say you are running conversion tests on all your blog articles, and you notice two or three articles that are converting especially well. You study those articles and notice they all have something in common: short length, monochrome pictures, a specific offer at the end, a blue button, whatever. Let's say it is the monochrome picture. That now becomes your hypothesis.
Next, you create a comparison test to verify that it makes a difference. You create a blog post with two picture options: the same picture, but one grayscale, the other full color. And you let that test run for a while. As you watch the results come in, you notice the color picture performed just as well as or better than the grayscale picture. That means your hypothesis was incorrect, and the success of those high-converting articles must be due to something else. So it's back to step one.
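Concretely, that picture test would follow the same pattern as the button test above. The page and id names here are just examples: you might create code.pictest.1 (grayscale) and code.pictest.2 (color), then put this on the blog article where the picture appears:

<(optimizer mode=start id='picture_test' source=code.pictest)>

and put its counterpart on whatever page counts as the conversion, such as the offer congratulations page:

<(optimizer mode=end id='picture_test' source=code.pictest)>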
When designing your test, you also need to be careful where you place your start and end code to avoid inaccurate results. If I were to put my start code on the offer page instead of the enroll page, I might be counting graduates, or students currently enrolled in other classes, who visit that page but can't sign up, which would skew my results. Similarly, if I placed the end code in the classroom, I would be counting people who visit the classroom multiple times, skewing those results. So careful test design is important.
It's also worth remembering that the more traffic you have, the more accurate your tests will be. Generally speaking, tests with fewer than a few hundred views are probably not going to be conclusive, unless there is a large difference in the results. When your site is small, the longer you run your tests, the more accurate they will become.
And last, don't rely exclusively on tests. Use common sense and good judgment. You know your audience, and you get feedback in subtler ways than the optimizer module can measure. Perhaps one blog article ran during a holiday, which affected its readership, or one class was a more compelling topic and the button color really wasn't the factor. Conversion tracking and A/B tests are just tools that provide extra data. They still require your analysis, and any change they suggest is ultimately your decision.