A/B testing in ASO is the process of comparing two or more variations of visual or textual elements to determine which one store visitors find most appealing. In Google Play, you can A/B test screenshots, icons, and even textual metadata. The RadASO team will walk you through what A/B testing in ASO is, the key differences between A/B tests in the App Store and Google Play, and how to run them correctly.

Split your users into two groups: A and B. Group A keeps the usual experience and sees the current screenshots, while group B gets the new experience and sees the fresh test screenshots. Run the test until you can tell which group has the higher install conversion rate.
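To make the mechanics concrete, here is a minimal Python sketch of such a split. The bucketing rule, traffic volume, and conversion rates are all illustrative assumptions, not store data:

```python
import random

def assign_group(user_id: int) -> str:
    """Deterministically bucket a store visitor into group A or B."""
    return "A" if user_id % 2 == 0 else "B"

visits = {"A": 0, "B": 0}
installs = {"A": 0, "B": 0}

for user_id in range(10_000):
    group = assign_group(user_id)
    visits[group] += 1
    # Hypothetical behavior: the new screenshots (group B) convert slightly better.
    conversion_rate = 0.30 if group == "A" else 0.32
    if random.random() < conversion_rate:
        installs[group] += 1

for g in ("A", "B"):
    print(f"Group {g}: {installs[g] / visits[g]:.2%} install conversion")
```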

When launching a test, you can select various parameters. However, you cannot choose who the test is shown to or control the demographics of the audience.
The main objective of an A/B test in ASO is to find a variant that improves the conversion rate. Sometimes minor changes, such as a different color for the CTA button, lead to significant differences in how users interact with the application. When forming a hypothesis, specify what you will change and why.
Choose an element to change (test) that, in your opinion, will have a significant impact on users, for example, the background of one of the screenshots. The hypothesis might be that changing it will increase conversion.

Let's look at examples:
Example 1. The change appears only on the sixth screenshot, so it is not immediately apparent. Most users look only at the first few screenshots and never scroll to the end. Such a test is not useful, because its results do not allow you to draw a meaningful conclusion.

Example 2. The change is immediately noticeable on the very first, most conversion-driving screenshot, and only one significant element is being tested rather than several at once. The results of this A/B test will show which option users find more appealing to view and download.

*A build is a new version of the application; updating the icon is only possible when you update the app's version in the store. In other words, "build" refers to a specific version or variant of the application that is ready to be downloaded and installed on users' devices. It contains all the files and data users need to install and use the application.
More about optimizing graphic elements in the App Store and Google Play can be found in the article 'Graphics in Mobile App Promotion in the App Store and Google Play (ASO) – How to Optimize Graphic Elements.'
1. Navigate to the Product Page Optimization tab in App Store Connect.

2. After naming the test, specify the type of test you are launching (A/B, A/B/B, A/B/C, etc.), the countries where the test will be displayed (by default, all 39 are selected), and the approximate test duration.

3. Upload your graphic materials.

For a more detailed description, read the official App Store documentation.
1. On the Store listing experiments tab in the Google Play Console, select where you wish to conduct the test. Unlike the App Store, you can choose only one country per test, or run the experiment on the default store listing (i.e., for all countries without localized graphic or text materials, depending on what you are testing). So decide whether the test will run on the default listing or in a specific country.
More information can be found in the official documentation.

2. Configure the metrics that affect the accuracy of the test; these determine how many downloads the experiment will need. A rough sketch of this calculation follows these steps.

3. Determine what to test. Unlike the App Store, Google Play lets you test not only graphic elements but also text (the short and full descriptions).
Note that for A/B tests you can upload screenshots in only one size; Google will automatically adapt them to other formats.
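To give a feel for how these accuracy settings translate into a required number of installs, here is a rough Python sketch based on the standard two-proportion sample-size formula. The console performs this calculation for you; the function and all input values below are illustrative assumptions:

```python
from statistics import NormalDist

def required_installs(base_cvr: float, relative_lift: float,
                      confidence: float = 0.90, power: float = 0.80) -> int:
    """Estimate installs needed per variant to detect a given relative lift."""
    p1 = base_cvr
    p2 = base_cvr * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return int(n) + 1

# For example, a 30% baseline conversion and a 5% relative lift to detect:
print(required_installs(0.30, 0.05))  # roughly 11,700 installs per variant
```

The takeaway: the smaller the change you want to detect, the more installs the experiment needs, which is why tests on low-traffic listings take so long.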


Example 1:

Most likely, test variants A and B will win. However, if the result in the Performance column does not fall entirely within the "red" or "green" zone, the results should not be considered fully reliable.
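The stores compute confidence for you, but the underlying idea can be illustrated with a simple two-proportion z-test. This is only a sketch with hypothetical install counts, not the consoles' actual algorithm:

```python
from statistics import NormalDist

def z_test_p_value(installs_a: int, visitors_a: int,
                   installs_b: int, visitors_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a = installs_a / visitors_a
    p_b = installs_b / visitors_b
    p_pool = (installs_a + installs_b) / (visitors_a + visitors_b)
    se = (p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical counts: 300/1000 installs for the control, 350/1000 for the test.
p = z_test_p_value(300, 1000, 350, 1000)
print(f"p-value: {p:.3f}")  # well below 0.05, so the lift is unlikely to be noise
```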
Let's calculate the expected conversion change:
Conversion will increase by 4.75%. If the current conversion was 30%, the projected conversion will be: 30 + (30 * 4.75 / 100) = 31.43%*
*Important! Do not add the average Performance percentage to the current conversion; instead, change the current conversion by that percentage.
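A quick sketch of the correct calculation versus the common mistake, using the numbers from the example above:

```python
current_cvr = 30.0   # current conversion, %
performance = 4.75   # average Performance reported for the winning variant, %

projected = current_cvr * (1 + performance / 100)
print(f"Correct (relative change): {projected:.2f}%")                  # about 31.43%
print(f"Wrong (absolute addition): {current_cvr + performance:.2f}%")  # 34.75%
```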
Example 2:

Both variants displayed significantly negative outcomes. Conclusion: the test was unsuccessful.
Example 3:

The same test variant produces different outcomes in the two groups: V1 shows a favorable result, while V2 shows the opposite. In this case, calculations using the formula won't yield results reliable enough to base decisions on; since V1 and V2 display the same variant, they should yield roughly similar results.
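This situation can be sanity-checked with the same two-proportion z-test sketched earlier: V1 and V2 display the identical variant, so a statistically significant gap between them is a red flag. The counts below are, again, hypothetical:

```python
from statistics import NormalDist

# V1 and V2 show the same creative, so any gap between them should be noise.
# Hypothetical counts: 310/1000 installs in V1 versus 255/1000 in V2.
n1, n2 = 1000, 1000
p1, p2 = 310 / n1, 255 / n2
p_pool = (310 + 255) / (n1 + n2)
se = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5
p_value = 2 * (1 - NormalDist().cdf(abs(p2 - p1) / se))

if p_value < 0.05:
    print("V1 and V2 disagree beyond noise: do not trust this experiment")
else:
    print("V1 and V2 agree: the A vs. B comparison can be interpreted")
```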

The Confidence and Conversion rate improvement indicators in the chart below show that this test has a clear winner.

After adopting the winning test variant, measure the conversion once again.

A/B testing is an ongoing process, because user preferences constantly change. Today users might be drawn to a blue background, while later red might attract more attention.
It is also important to evaluate the results accurately. A winning test doesn't always guarantee an improvement in conversion, and vice versa, so drawing conclusions too hastily can lead to unexpected outcomes.