Monetization A/B testing

What is monetization A/B testing?

An A/B test is a controlled experiment that splits users into different groups and exposes each group to a different app configuration. This allows you to test different monetization strategies and compare the results through data analysis, helping you understand which strategy performs better.

How does ironSource’s monetization A/B testing work?

With ironSource’s A/B testing module, you can define the traffic allocation between the A and B groups and set up the app configurations according to the variables you want to test, all with no development effort required. Once the test is live, each user is assigned to one of the two groups and is exposed to the configurations you defined for that group. A user remains in the same group until either the test is terminated or the traffic allocation between groups is updated. Once you’ve reached a conclusion on which group performs better, you can terminate the test, which applies the chosen group’s configurations to all users.
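
The exact assignment mechanism is internal to ironSource, but conceptually, sticky group assignment is often implemented with deterministic hash-based bucketing. Below is a minimal sketch of that idea, for illustration only; the function and test names are hypothetical and not part of the platform:

```python
import hashlib

def assign_group(user_id: str, test_id: str, group_b_share: float) -> str:
    """Deterministically map a user to group A or B.

    The same (user_id, test_id) pair always hashes to the same bucket,
    so a user stays in their group until the traffic split changes.
    """
    digest = hashlib.sha256(f"{test_id}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "B" if bucket < group_b_share else "A"

# Example: a test that allocates 20% of users to group B
print(assign_group("user-42", "reward-test-1", 0.20))
```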

What can you A/B test?

There’s no limit to the tests you can run with our tool: rolling out new ad units, adjusting placement-level rewards or capping/pacing settings, comparing bidding to a traditional waterfall, trying different waterfall optimization strategies, and more. Reach out to your account manager or contact us to discuss which strategy is right for you.

Test setup

Step 1. Navigate to the A/B testing page and click on Create New Test.

Step 2. Set up your test name and description, and define the traffic split between groups.

Step 3. Set up network configurations on group B.
At this stage you are required to enter a new, unique instance ID for each of your app’s configured instances. This allows us to track each group’s revenue separately and compare performance between the groups.


Keep in mind: 

  • If you do not want to duplicate a specific instance, you can mark it as inactive, and it will not be included in group B’s configurations.
  • ironSource’s instance IDs are automatically generated for group B.

Step 4. Click on ‘Start A/B Test’ and apply the desired app configurations on group B.
The test will start the next calendar day at 00:00 UTC.


Once the test is scheduled to start, ironSource duplicates all existing app configurations from group A (the control group) to group B (the test group), including all placement settings, waterfall positioning, etc.

Note: Networks and ad units that do not support multiple instances on the ironSource platform can only be included in group A.

Important!
When setting up the test, and for the first few days of the test, set rates for the instances. This allows the instances in group B to “learn” and stabilize their eCPMs. Using rates (combined with the “Sort by CPM” function) at the beginning of the test allows the waterfall to be sorted by descending eCPM from the start. This is especially important when using in-app bidding, so that bidders are placed properly in the waterfall. Once rates are set, the bidder can be activated immediately.
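
To illustrate what “Sort by CPM” achieves when manual rates are in place, here is a minimal sketch of sorting a waterfall by descending eCPM. The instance names and rates are hypothetical; the platform performs this ordering for you, so no code is needed on your side:

```python
# Hypothetical group B instances with manually set rates (eCPM, in USD)
instances = [
    {"name": "network1_rv", "rate": 12.0},
    {"name": "network2_rv", "rate": 18.5},
    {"name": "network3_rv", "rate": 9.3},
]

# "Sort by CPM": order the waterfall by descending rate from day one,
# before the duplicated instances have accumulated performance data
waterfall = sorted(instances, key=lambda i: i["rate"], reverse=True)
for position, instance in enumerate(waterfall, start=1):
    print(position, instance["name"], instance["rate"])
```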

Once group B is created, you can adjust the platform settings according to the variables you would like to test. A toggle is added to each relevant platform page, allowing you to switch between groups A and B. Each change applies ONLY to the group you are currently viewing.


Relevant platform pages include:

  • Ad Units & Placements
  • Server Side Networks
  • SDK Networks
  • Mediation Management
  • Segments

Step 5. Terminate Test
Once you’ve analyzed the results and reached a conclusion about which group performs better, you can choose the winning group and terminate the test by clicking the “Continue with A” or “Continue with B” button. Terminating a test means the winning group’s settings will apply to 100% of the traffic on your application.


Note: Once you’ve terminated a test, this action can’t be undone. The test will be terminated the next calendar day at 00:00 UTC.

Once a test is terminated, it is added to the History tab, including its name, start date, and end date.

Monitoring and Reporting

To monitor an active A/B test or review the results of a terminated A/B test, use either the A/B test Overview module or the platform reporting pages (Performance and User Activity reports).

A/B Testing Overview:

On the A/B testing Overview page, you can perform the following actions on an active A/B test:


  • Edit Test: The name, description, and traffic variation are editable during a test. Note that a traffic variation change will take effect the next calendar day at 00:00 UTC.
  • Show Reports: Redirects to the Performance reports page, with the app pre-filtered, the test start/end dates selected, and the data broken down by A/B.
  • Terminate Test: Terminating a test will apply the winning group’s configurations to 100% of the traffic on your application.

The Overview page also displays metric comparison charts between groups A and B, covering the period from the last traffic variation change to the current date.


App-level metrics:

  • Revenue
  • DAU
  • ARPDAU
  • Day 1 Retention
  • Day 7 Retention


Ad-unit-level metrics:

  • Revenue
  • Impressions
  • eCPM
  • Fill Rate
  • Impressions / DAU
  • Impressions / DEU
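
These metrics follow their standard definitions. As a reference, here is a short sketch of how the derived metrics relate to raw daily counts; all numbers below are hypothetical:

```python
def arpdau(revenue: float, dau: int) -> float:
    """Average revenue per daily active user."""
    return revenue / dau

def ecpm(revenue: float, impressions: int) -> float:
    """Effective cost per mille: revenue per 1,000 impressions."""
    return revenue / impressions * 1000

def impressions_per_dau(impressions: int, dau: int) -> float:
    """Average number of impressions served per daily active user."""
    return impressions / dau

# Hypothetical daily totals for one group
print(arpdau(revenue=520.0, dau=40_000))         # 0.013
print(ecpm(revenue=520.0, impressions=65_000))   # 8.0
print(impressions_per_dau(65_000, 40_000))       # 1.625
```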

A/B Testing on reporting pages

A breakdown by A/B is available on the Performance and User Activity reports, with all metrics supported. To view reports broken down by A/B, you must select a single application that has run or is currently running an A/B test.


All ‘break by’ options are still supported during an A/B test. Note that duplicated instances, tags, placements, and segments on group B (those that existed prior to test creation) are connected to their group A counterparts by name. For example, when breaking down by instance, you will see aggregated data from both the A and B groups for each duplicated instance name.
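
For illustration, if you were to export the report rows yourself, rolling up a duplicated instance by name would look like the following minimal sketch. The row structure and values are hypothetical, not the report’s actual export format:

```python
from collections import defaultdict

# Hypothetical exported report rows: one row per instance per group
rows = [
    {"instance": "network1_rv", "group": "A", "revenue": 300.0, "impressions": 40_000},
    {"instance": "network1_rv", "group": "B", "revenue": 80.0, "impressions": 9_500},
]

# Rolling up by instance name mirrors how the report aggregates
# a duplicated instance across groups A and B
totals = defaultdict(lambda: {"revenue": 0.0, "impressions": 0})
for row in rows:
    totals[row["instance"]]["revenue"] += row["revenue"]
    totals[row["instance"]]["impressions"] += row["impressions"]

print(dict(totals))
```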

Best Practices

A/B testing is a crucial component of any business strategy, ensuring your monetization approach maximizes revenue and grows your business. With ironSource’s A/B testing tool, you can experiment with different ad implementations, conclusively understand how your users engage with ads, and choose a winning strategy.

  1. Use rates for the instances in group B: When setting up the test and for the first few days of the test, set rates for the instances. This will allow the instances in group B to “learn” and stabilize their eCPMs. Using rates (combined with the “Sort by CPM” function) at the beginning of the test will allow the waterfall to be sorted according to descending eCPM from the start of the test. This is especially important when using in-app bidding, and will ensure that the bidders are placed properly in the waterfall. Once rates are set, the bidder can be activated immediately.
  2. Test one variable at a time: As you optimize your ad strategy, you’ll probably find that there are multiple features you want to test. The only way to effectively evaluate the significance of any change is to isolate one variable and measure its impact. This means that you can’t set a new network live while also changing the reward amount of a video placement. You can test more than one variable; just make sure each variable gets its own test.
  3. Leave your control group alone: Once you’ve identified the variable you want to test, make sure you leave your control group (group A) unaltered. Group A’s settings are your current ad implementation and should remain unchanged. Changes should only be made on group B, to challenge the current implementation and compare the results.
  4. Identify your goal: Before starting a test, take some time to think about the KPIs you want to improve and how significant you want the improvement to be. If multiple KPIs can be affected, decide which carry the most weight in case they’re impacted differently. This will ensure you examine your results effectively and make an informed decision on the best strategy.
  5. Test in increments: Even a simple change can have a major impact, so it’s important to start your test on a small percentage (10-20%) of users to ensure that any potential negative results don’t significantly impact your business. Positive results on a small group of users are not a guarantee that those results will hold when rolling out to everyone. Update the traffic allocation on group B to be much larger (80-90%) to mimic a full rollout, and confirm the test results hold before expanding to 100%.
  6. Give it enough time: Each test needs sufficient time to run to produce useful data; otherwise it’ll be hard to know if your results are reliable. Also, group B goes live with all-new instance IDs, which need time to learn and gather experience. Each test should be given a 2-3 day runway for the instances on group B to equalize with group A. We recommend that a test run for an absolute minimum of one week, with two weeks or more being ideal.
  7. Give it enough traffic: If your test group doesn’t have enough traffic, that can also make your data unreliable. Performance can fluctuate significantly when only exposed to a small amount of traffic. A good rule of thumb is that group B should receive at least 10K daily impressions, or 1K daily impressions for each active instance (see the sketch after this list). Consider increasing the test group allocation if this threshold is not met.
  8. Always be testing: Incremental changes can quickly add up to drive additional revenue. There’s always room for more optimization, so don’t just stop at one test. Keep testing!
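
As referenced in tip 7, here is a minimal sketch of one way to read that traffic rule of thumb and check whether group B’s allocation is large enough. The function name and numbers are hypothetical:

```python
def group_b_traffic_ok(daily_impressions: int, active_instances: int) -> bool:
    """One reading of the rule of thumb: group B should see at least
    10K daily impressions, or 1K daily impressions per active instance,
    whichever is larger."""
    return daily_impressions >= max(10_000, 1_000 * active_instances)

# Hypothetical: 8K daily impressions across 12 active instances
print(group_b_traffic_ok(8_000, 12))  # False: consider a larger allocation
```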

Now check out our best practices guide for top tips on how to run an effective A/B test.
