We adopt A/B testing as an indispensable method for measuring the real impact of the features we develop and want to release. Unfortunately, "before and after" analyses are often unusable because of confounding factors such as timing, marketing campaigns, or other external effects. When we want to proceed with A/B testing, we often face scenarios where multiple tests must be run across different pages.
In these situations, there are three basic approaches we can choose from:
- Running tests sequentially
- Separating the test audiences
- Running both tests simultaneously
In this article, I will discuss the approach of running two tests simultaneously across the entire audience, what method we should follow, and what points to consider.
- How much do simultaneous tests influence each other?
- How should we interpret the results?
- How can we verify that randomization is achieved?
- Can we analyze the variants together?
First, I would like to briefly cover the central limit theorem and random distribution.
Central Limit Theorem
The CLT forms the basis of many analyses in both theoretical and practical statistics. It states that the distribution of means from random samples of a population approaches a normal (Gaussian) distribution as the sample size grows, regardless of the population’s initial distribution.
In real life, this can be summarized as follows: imagine you have a coin and flip it five times. If you get heads every time, can you conclude that the coin will always land on heads? Of course not. This happens because the sample size of your coin flips is insufficient. With enough samples, the average of your random results will approach a normal distribution.
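To see the CLT in action, here is a minimal simulation sketch in Python (using NumPy, with illustrative numbers): it compares the spread of sample means for 5 flips versus 500 flips per sample.

```python
# A minimal CLT simulation sketch (illustrative numbers only).
import numpy as np

rng = np.random.default_rng(42)

def mean_of_flips(n_flips: int, n_samples: int = 10_000) -> np.ndarray:
    """Mean of n_flips fair-coin flips, repeated n_samples times."""
    flips = rng.integers(0, 2, size=(n_samples, n_flips))  # 0 = tails, 1 = heads
    return flips.mean(axis=1)

# With only 5 flips per sample, the means are coarse and noisy;
# with 500 flips per sample, they cluster tightly around 0.5 in a bell shape.
for n in (5, 500):
    means = mean_of_flips(n)
    print(f"{n:>3} flips per sample: mean={means.mean():.3f}, std={means.std():.3f}")
```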
Random Distribution
Random distribution is a critical method for ensuring that statistical analyses and scientific studies produce unbiased, reliable, and generalizable results. It refers to allocating individuals in a population or data in a system to groups, points, or categories without any patterns, bias, or order.
Without random distribution, CLT becomes difficult to apply, as biased sampling prevents results from approaching a normal distribution. Therefore, ensuring both random distribution and proper application of CLT is essential for successful analysis.
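In practice, random distribution in A/B testing is usually achieved by randomly assigning each user to a variant. Below is a minimal sketch of one common approach, deterministic hash-based bucketing; the user ID, experiment salts, and variant names are hypothetical, and your experimentation platform may implement this differently.

```python
# A minimal sketch of deterministic random assignment via hashing.
# Assumes users are identified by a user_id string and each experiment
# has its own salt; both values here are hypothetical.
import hashlib

def assign_variant(user_id: str, experiment_salt: str,
                   variants: tuple = ("control", "test")) -> str:
    # The hash spreads users uniformly across buckets, so assignment is
    # effectively random and independent across experiments with different salts.
    digest = hashlib.sha256(f"{experiment_salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user can land in different groups of different experiments.
print(assign_variant("user-123", "homepage-exp"))
print(assign_variant("user-123", "pdp-exp"))
```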
How Can I Interpret the Results of Two Simultaneous A/B Tests?
Let’s assume we are running two different tests on the homepage and the product detail page.
In the first table, we see how the distribution between the two tests interacts. The PDP experiment’s control and test groups are spread evenly across the homepage experiment’s control and test groups. The next step is to ensure that the sample size is sufficient.
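As a rough guide for that sample-size check, here is a small sketch of the standard two-proportion approximation; the baseline conversion rate and minimum detectable effect in the example are purely illustrative.

```python
# A rough per-variant sample-size sketch for a two-proportion test.
# The baseline rate and expected rate below are hypothetical examples.
from scipy.stats import norm

def required_sample_size(p_baseline: float, p_expected: float,
                         alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate sample size per variant for comparing two proportions."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    variance = p_baseline * (1 - p_baseline) + p_expected * (1 - p_expected)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p_baseline - p_expected) ** 2
    return int(n) + 1

# e.g. detecting a lift from a 5.0% to a 5.5% conversion rate
print(required_sample_size(0.05, 0.055))
```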
In the second table, however, the situation is different. The test group in the homepage experiment contains 90% of the PDP experiment’s test group, which indicates a problem with the randomization. If the test group in the PDP experiment performs poorly, this would unfairly favor the control group in the homepage experiment, leading to incorrect conclusions.
To avoid such situations, it would be beneficial to run A/A tests beforehand and validate these distributions across pages.
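One simple way to validate these distributions is a chi-square test of independence on the cross-tabulation of the two experiments’ assignments. The counts below are hypothetical; in practice you would build this table from your assignment logs.

```python
# A minimal sketch of checking that two experiments' assignments are independent.
# The user counts in the cross-tabulation are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: homepage control/test; columns: PDP control/test (user counts).
crosstab = np.array([
    [25_120, 24_980],   # homepage control, split across PDP control/test
    [24_870, 25_030],   # homepage test, split across PDP control/test
])

chi2, p_value, dof, expected = chi2_contingency(crosstab)
# A large p-value means no evidence of dependence between the two
# experiments' assignments, i.e. the randomization looks healthy.
print(f"chi2={chi2:.2f}, p={p_value:.3f}")
```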
Conclusion
Whether you run one test or several, always interpret the results in light of sufficient sample size and random distribution. Running multiple tests simultaneously can bring efficiency to both your planning process and product insights. Tests conducted separately cannot be analyzed for their combined performance, whereas interaction effects between tests can be analyzed easily when they run together.
The most critical point to remember is that each test should have a clear purpose and support each other’s results. A well-executed A/B testing strategy will strengthen your approach and help you achieve long-term business objectives.
Final Note:
If your website or application does not have sufficient traffic or sample size, the approaches discussed may not yield meaningful results. In such cases, adjust your strategy to match your available data volume.
Thank you for your time; sharing is caring! 🌍