A/A testing is the tactic of using A/B testing to test two identical versions of a page against each other. Typically, this is done to check that the tool being used to run the experiment is statistically fair. In an A/A test, the tool should report no difference in conversions between the control and variation, if the test is implemented correctly.
Why would you want to run a test where the variation and original are identical?
In some cases, you might run an A/A test to monitor conversions on the page and establish its baseline conversion rate before beginning an A/B or multivariate test.
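For instance, here is a minimal sketch in Python (with hypothetical numbers; the function name is illustrative and not part of any Optimizely API) of how you might pool the two identical arms of an A/A test to estimate a baseline conversion rate, along with a simple 95% confidence interval to gauge how precise that baseline is:

```python
# Minimal sketch: estimate a baseline conversion rate from the pooled data of
# an A/A test, with a normal-approximation 95% confidence interval.
import math

def baseline_conversion_rate(conversions_a, visitors_a, conversions_b, visitors_b, z=1.96):
    conversions = conversions_a + conversions_b   # both arms saw the same page, so pool them
    visitors = visitors_a + visitors_b
    rate = conversions / visitors
    margin = z * math.sqrt(rate * (1 - rate) / visitors)
    return rate, (rate - margin, rate + margin)

# Hypothetical A/A results: 412 and 395 conversions out of 8,000 visitors per arm.
rate, (low, high) = baseline_conversion_rate(412, 8000, 395, 8000)
print(f"Baseline conversion rate: {rate:.2%} (95% CI {low:.2%} to {high:.2%})")
```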
In most other cases, an A/A test is a way to double-check the effectiveness and accuracy of the A/B testing software. Check whether the software reports a statistically significant (>95% statistical significance) difference between the control and variation. If it does, that's a problem, and you'll want to verify that the software is correctly implemented on your website or mobile app.
When running an A/A test, keep in mind that finding a difference in conversion rate between an identical test and control page is always a possibility. This isn't necessarily a poor reflection on the A/B testing platform, as there is always an element of randomness when it comes to testing.
When running any A/B test, keep in mind that the statistical significance of your results is a probability, not a certainty. Even a statistical significance level of 95% represents a 1 in 20 chance that the results you’re seeing are due to random chance. In most cases, your A/A test should report that the conversion improvement between the control and variation is statistically inconclusive—because the underlying truth is that there isn’t one to find.
When running an A/A test with Optimizely, in most cases you can expect the results to be inconclusive: the conversion difference between variations will not reach statistical significance. In fact, the proportion of A/A tests showing inconclusive results should be at least as high as the significance threshold set in your Project Settings (90% by default).
In some cases, however, you might see in your results that one variation is outperforming another, or even that a winner is declared for one of your goals. Such a conclusive result occurs purely by chance, and should happen in only about 10% of cases if your significance threshold is set to 90%. If your significance threshold is higher (say, 95%), your chance of encountering a conclusive A/A test is even lower (5%).
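To make these probabilities concrete, here is a minimal simulation sketch in Python. It uses a classical fixed-horizon two-proportion z-test rather than Optimizely's Stats Engine, so it only illustrates the general principle: when both variations share the same true conversion rate, roughly 10% of tests look "conclusive" at a 90% threshold and roughly 5% at a 95% threshold.

```python
# Minimal sketch (not Optimizely's Stats Engine): simulate many A/A tests with a
# fixed-horizon two-proportion z-test and count how often two identical
# variations are declared "significantly" different at a given threshold.
import math
import random

def aa_false_positive_rate(threshold, n_tests=1000, visitors=5000, base_rate=0.05):
    """Run n_tests simulated A/A tests; return the share that falsely reach significance."""
    z_critical = {0.90: 1.645, 0.95: 1.960}[threshold]  # two-sided critical values
    false_positives = 0
    for _ in range(n_tests):
        # Both "variations" draw from the same true conversion rate: an A/A test.
        conv_a = sum(random.random() < base_rate for _ in range(visitors))
        conv_b = sum(random.random() < base_rate for _ in range(visitors))
        p_a, p_b = conv_a / visitors, conv_b / visitors
        pooled = (conv_a + conv_b) / (2 * visitors)
        se = math.sqrt(2 * pooled * (1 - pooled) / visitors)
        if se > 0 and abs(p_a - p_b) / se > z_critical:
            false_positives += 1
    return false_positives / n_tests

for threshold in (0.90, 0.95):
    rate = aa_false_positive_rate(threshold)
    print(f"Threshold {threshold:.0%}: ~{rate:.1%} of A/A tests look 'conclusive' by chance")
```

The simulated false-positive share should hover near 10% and 5% respectively, which is why an occasional "winning" variation in an A/A test is expected rather than alarming.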
For more details on Optimizely’s statistical methods and Stats Engine, take a look at How to Run and Interpret an A/A Test.