Best Practices: Monetization A/B Testing

A/B testing is a crucial component of any business strategy: it ensures your monetization approach is maximizing revenue and growing your business. With ironSource’s A/B testing tool, you’ll be able to experiment with different ad implementations, conclusively understand how your users engage with ads, and choose a winning strategy.

A/B testing can be complex, and it requires a proper strategy to ensure you draw correct conclusions. In this guide you will find best practices for conducting an optimal A/B test, so you can make decisions based on accurate results.

  1. Use rates for the instances in group B: When setting up the test, and for the first few days it runs, set rates for the instances. This allows the instances in group B to “learn” and stabilize their eCPMs. Using rates (combined with the “Sort by CPM” function) at the beginning of the test keeps the waterfall sorted by descending eCPM from the very start. This is especially important when using in-app bidding, as it ensures the bidders are placed properly in the waterfall. Once rates are set, the bidder can be activated immediately.
  2. Test one variable at a time: As you optimize your ad strategy, you’ll probably find that there are multiple features you want to test. The only way to effectively evaluate the significance of any change is to isolate one variable and measure its impact. This means you can’t set a new network live while also changing the reward amount of a video placement. You can still test more than one variable; just make sure each variable gets its own unique test.
  3. Leave your control group alone: Once you’ve identified the variable you want to test, make sure you leave your control group (group A) unaltered. In other words, group A should keep your current ad implementation exactly as it is, with no changes made. Changes should only be made on group B, to challenge the current implementation and compare the results.
  4. Identify your goal: Before starting a test, take some time to think about the KPIs you want to improve, and how significant you want the improvement to be. If multiple KPIs can be affected, decide which carry the most weight in case they’re impacted differently. This will ensure you examine your results effectively and make an informed decision on the best strategy (a simple worked comparison is sketched after this list).
  5. Test in increments: Even a simple change can have a major impact, so it’s important to start your test on a small percentage (10-20%) of users, ensuring that any potential negative results don’t significantly impact your business. Even once you see positive results on a small group of users, it’s not certain those results will hold when rolling out to everyone. Update the traffic allocation on group B to be much larger (80-90%) to mimic a full rollout, and confirm the test results hold true before expanding to 100%.
  6. Give it enough time: Each test needs sufficient time to run and produce useful data; otherwise it will be hard to know whether your results are reliable. In addition, group B goes live with all new instance IDs, which need time to learn and gather experience. Give each test a 2-3 day runway for the instances in group B to equalize with group A. We recommend that a test run for an absolute minimum of one week, with 2 weeks or more being ideal.
  7. Give it enough traffic: If your test group doesn’t have enough traffic, that can also make your data unreliable; performance can fluctuate a lot when exposed to only a small amount of traffic. A good rule of thumb is that group B should receive 10K daily impressions, or 1K daily impressions for each active instance. Consider increasing the test group allocation if this threshold is not met (see the allocation sketch after this list).
  8. Always be testing: Incremental changes can quickly add up to drive additional revenue. There’s always room for more optimization, so don’t just stop at one test. Keep testing!
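To illustrate point 4, here is a minimal sketch, in Python with made-up numbers, of how you might compare a single pre-chosen KPI between the two groups once the test has run. ARPDAU is used as the example KPI, and the revenue and DAU figures are purely hypothetical; this is not part of any ironSource tool, just an illustration of committing to one metric up front and reading the result against the control.

```python
# Hypothetical end-of-test numbers pulled from your reporting; replace with real data.
groups = {
    "A (control)": {"revenue_usd": 1250.0, "dau": 42000},  # current implementation
    "B (variant)": {"revenue_usd": 1410.0, "dau": 43500},  # challenger implementation
}

# Compare the KPI you committed to before the test -- here, ARPDAU
# (ad revenue per daily active user).
arpdau = {name: g["revenue_usd"] / g["dau"] for name, g in groups.items()}

baseline = arpdau["A (control)"]
for name, value in arpdau.items():
    uplift_pct = (value - baseline) / baseline * 100
    print(f"{name}: ARPDAU = ${value:.4f} ({uplift_pct:+.1f}% vs. control)")
```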
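For point 7, the rule of thumb above can be turned into a quick allocation check: given your app’s total daily impressions and the number of active instances in group B’s waterfall, estimate the minimum share of traffic group B needs. The function name and the example figures below are illustrative assumptions, not part of the ironSource platform or SDK.

```python
def min_test_allocation(total_daily_impressions: int, active_instances: int) -> float:
    """Rough minimum % of traffic group B needs to meet the impression rule of thumb."""
    # Rule of thumb from this guide: ~10K daily impressions for group B,
    # or ~1K daily impressions per active instance, whichever is larger.
    required = max(10_000, 1_000 * active_instances)
    return min(100.0, required / total_daily_impressions * 100)

# Example: an app serving 80K impressions per day, with 15 active instances in group B.
print(f"Allocate at least ~{min_test_allocation(80_000, 15):.0f}% of traffic to group B")
# -> "Allocate at least ~19% of traffic to group B" (15,000 of 80,000 impressions)
```

If the result comes out above the 10-20% starting range from point 5, that’s the case where, as noted in point 7, you should consider increasing the test group allocation.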