Best Practices: Monetization A/B Testing

A/B testing is a crucial part of any monetization strategy: it ensures your approach is maximizing revenue and growing your business. With ironSource’s A/B testing tool, you can experiment with different ad implementations, understand conclusively how your users engage with ads, and choose a winning strategy.

A/B testing can be complex, and if it isn’t done properly there’s a risk of drawing incorrect conclusions. This guide covers best practices for running an effective A/B test so you can make decisions based on accurate results.

  1. Test one variable at a time: As you optimize your ad strategy, you’ll probably find multiple features you want to test. The only way to effectively evaluate the significance of any change is to isolate one variable and measure its impact. This means you can’t set a new network live while also changing the reward amount of a video placement. You can absolutely test more than one variable; just make sure each gets its own test.
  2. Leave your control group alone: Once you’ve identified the variable you want to test, leave your control group (group A) unaltered. Group A should keep your current ad implementation exactly as it is, with no changes. Changes should only be made to group B, which challenges the current implementation so you can compare the results.
  3. Identify your goal: Before starting a test, take some time to think about which KPIs you want to improve and how large an improvement you’re looking for. If multiple KPIs can be affected, decide which ones carry the most weight in case they’re impacted differently (see the weighted-score sketch after this list). This will ensure you examine your results effectively and make an informed decision on the best strategy.
  4. Test in increments: Even a simple change can have a major impact, so start your test on a small percentage of users (e.g. 10-20%) to ensure any negative results don’t significantly affect your business. However, positive results on a small group of users don’t guarantee those results will hold when you roll out to everyone. Once the small-scale results look good, increase the traffic allocation on group B (e.g. to 80-90%) to mimic a full rollout and confirm the results hold true.
  5. Give it enough time: Each test needs sufficient time to run in order to produce useful data; otherwise it’s hard to know whether your results are reliable. Keep in mind that group B goes live with all-new instance IDs, which need time to learn and gather data, so allow a 2-3 day runway for the instances in group B to equalize with group A. We recommend running a test for an absolute minimum of one week, with two weeks or more being ideal.
  6. Give it enough traffic: Insufficient traffic in your test group can also make your data unreliable, since performance fluctuates a lot when exposed to only a small amount of traffic. A good rule of thumb is that group B should receive at least 10K daily impressions, or 1K daily impressions for each active instance. If that threshold isn’t met, consider increasing the test group’s traffic allocation (see the sanity-check sketch after this list).
  7. Always be testing: Incremental changes can quickly add up to drive additional revenue. There’s always room for more optimization, so don’t just stop at one test. Keep testing!
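When several KPIs move in different directions (step 3), a simple weighted score can make it easier to compare group B against group A with your goal in mind. The sketch below is only an illustration of the idea: the KPI names, weights, and numbers are hypothetical, so substitute whichever KPIs and weights reflect your own goal.

```python
# Minimal sketch: comparing test groups with weighted KPIs.
# KPI names, weights, and values are hypothetical examples.

# Relative weight of each KPI, reflecting which matters most for this test.
KPI_WEIGHTS = {
    "arpdau": 0.6,        # average revenue per daily active user
    "retention_d7": 0.3,  # day-7 retention rate
    "session_length": 0.1,
}

def weighted_lift(group_a: dict, group_b: dict, weights: dict) -> float:
    """Return the weighted relative change of group B vs. group A."""
    lift = 0.0
    for kpi, weight in weights.items():
        change = (group_b[kpi] - group_a[kpi]) / group_a[kpi]
        lift += weight * change
    return lift

# Hypothetical results for each group.
group_a = {"arpdau": 0.120, "retention_d7": 0.18, "session_length": 9.5}
group_b = {"arpdau": 0.131, "retention_d7": 0.17, "session_length": 9.6}

print(f"Weighted lift of B over A: {weighted_lift(group_a, group_b, KPI_WEIGHTS):+.1%}")
```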
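Before reading too much into any results, it also helps to confirm that the test meets the time and traffic rules of thumb from steps 5 and 6. The sketch below is a minimal sanity check under two stated assumptions: the 2-3 day runway is added on top of the one-week minimum, and meeting either traffic threshold (10K daily impressions in total, or 1K per active instance) counts as enough traffic. The function name and inputs are illustrative, not part of the ironSource tool.

```python
# Minimal sketch: sanity-checking a test against the rules of thumb above.
# Thresholds come from steps 5 and 6; the inputs are illustrative.

MIN_DAYS = 7                      # absolute minimum test duration (14+ is ideal)
RUNWAY_DAYS = 3                   # time for group B's new instances to equalize
MIN_DAILY_IMPRESSIONS = 10_000    # total daily impressions for group B
MIN_IMPRESSIONS_PER_INSTANCE = 1_000

def test_is_ready_to_read(days_running: int,
                          group_b_daily_impressions: int,
                          active_instances: int) -> bool:
    """Return True if the test meets the duration and traffic rules of thumb."""
    enough_time = days_running >= MIN_DAYS + RUNWAY_DAYS
    enough_traffic = (
        group_b_daily_impressions >= MIN_DAILY_IMPRESSIONS
        or group_b_daily_impressions >= MIN_IMPRESSIONS_PER_INSTANCE * active_instances
    )
    return enough_time and enough_traffic

# Example: 10 days in, 8K daily impressions on group B, 12 active instances.
print(test_is_ready_to_read(10, 8_000, 12))  # False: not enough traffic for 12 instances
```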