The results of A/B or MVT testing are calculated on the basis of tangible performance indicators. Depending on the nature of the test, the KPIs will vary, but what is generally measured includes sales revenue, conversion rate, add-to-cart rate, session value, etc. In this manner, the performance of the different test versions is quantified and compared to ultimately declare a winner.
During a test campaign, it’s also tempting to ask the question: “What’s my return with this test?” Because, yes, with this type of experiment, greed is good!
To extrapolate the revenues generated by a test version across the entire audience for the test period, I suggest calculating a complementary indicator: the campaign balance.
Let’s take the case of an A/B test with a control version (VC) and a test version (V1). The campaign balance projects the session value of the control version onto the number of V1 sessions. This makes it possible to calculate the added value generated by the test version at an equivalent session level:

Campaign balance = Sales revenue V1 – (Session value VC × Number of sessions V1)
where VC = control version and V1 = version 1 (or the sum of versions V1/V2/…/VN for a test with multiple variations).
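The formula above can be sketched as a small helper function. The figures used here are purely hypothetical, chosen only to illustrate the calculation:

```python
def campaign_balance(revenue_v1: float, session_value_vc: float, sessions_v1: int) -> float:
    """Added value generated by V1 at an equivalent session level:
    actual V1 revenue minus what VC's session value would have
    produced over the same number of sessions."""
    return revenue_v1 - session_value_vc * sessions_v1

# Hypothetical figures: V1 earned 5000 € over 1200 sessions,
# while the control version averaged 3.50 € per session.
print(campaign_balance(5000.0, 3.50, 1200))  # 800.0
```

A positive result means the test version out-earned what the control version would have generated with the same traffic; a negative result means it underperformed.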
The campaign balance is calculated on a daily basis, and the daily values are cumulated over the test period to obtain the overall campaign balance.
Calculating the campaign balance daily makes it possible to follow the ongoing test performance and help in decision-making as to how to proceed with testing.
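The daily monitoring described above might look like the following sketch, again with hypothetical daily figures:

```python
# Hypothetical daily figures: for each day, V1's revenue and session
# count, plus the control version's session value observed that day.
daily = [
    (620.0, 150, 3.80),  # day 1: revenue_v1, sessions_v1, session_value_vc
    (540.0, 160, 3.60),  # day 2
    (700.0, 170, 3.90),  # day 3
]

# Cumulate the daily balances to follow the ongoing test performance.
running_balance = 0.0
for revenue_v1, sessions_v1, session_value_vc in daily:
    running_balance += revenue_v1 - session_value_vc * sessions_v1

print(round(running_balance, 2))  # 51.0
```

Watching the running total day by day shows whether the test is trending toward a financial win or loss, which informs the decision to continue, stop, or adjust the experiment.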
In the case of an MVT test with 8 variations, if the overall campaign balance is positive, one can afford a little additional risk when deciding whether to maintain or delete a version. For example, knowing that the test is a financial win overall, one can choose to keep an underperforming version for a few extra days if one suspects better results in the days to come.
Further, it is always gratifying to end a test by saying: “if all tested visitors had gone through winning test version 1, we would have earned 1082€ over the test period.” With this information, it’s easy to determine the test’s short-term return on investment against the tool’s license cost, and its mid- to long-term returns by extrapolating the results over a longer period.
Be careful, however, to make this projection over a longer period in a reasonable manner. It may be tempting to extrapolate the version’s gain over the entire year, but this practice could prove dangerous given seasonal fluctuations that can change each version’s performance.