A/B tests for eCommerce: Why and how to run A/B tests on your site

Written by Dominic Duffin
Last updated 12 February 2024

A/B testing (also known as split testing) is a popular concept in marketing and user experience.

If you are wondering how it works or why you might want to do it on your eCommerce site, this article has you covered with a high-level introduction to A/B testing. It is intended to help you understand why you should run A/B tests in eCommerce, and how to do it effectively.

What is A/B testing?

In an A/B test, you create two (or more) versions of your site and randomly show each version to a proportion of visitors, so you can evaluate proposed changes and measure their impact on user behavior. By using your analytics software to compare user behavior on each version, you get a quantitative view of the impact on your business.
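
To make this concrete, here is a minimal sketch in TypeScript of how visitors might be assigned to one of two variants. It is purely illustrative and not tied to any particular tool: the hash function and the 50/50 split are assumptions for the example, and in practice your A/B testing tool handles assignment, persistence and reporting for you.

```ts
// Illustrative sketch: deterministically assign a visitor to variant "A" or "B"
// based on a stable visitor ID, so the same visitor always sees the same version.
function hashVisitorId(visitorId: string): number {
  // Simple FNV-1a style string hash; any stable hash works for bucketing.
  let hash = 2166136261;
  for (let i = 0; i < visitorId.length; i++) {
    hash ^= visitorId.charCodeAt(i);
    hash = Math.imul(hash, 16777619);
  }
  return hash >>> 0; // force an unsigned 32-bit result
}

function assignVariant(visitorId: string, trafficToB = 0.5): "A" | "B" {
  const bucket = hashVisitorId(visitorId) % 1000; // bucket in [0, 999]
  return bucket < trafficToB * 1000 ? "B" : "A";
}

// The same visitor ID (for example, a cookie value) always maps to the same variant.
console.log(assignVariant("visitor-123"));
```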

You can track a wide variety of metrics in an A/B test. Commonly used metrics include click-through rate, conversion rate, revenue and user retention, but you can focus on whatever user metrics matter most for your goals and objectives.

Why run an A/B test on an eCommerce site?

For an eCommerce business, a website is not just part of a marketing strategy. It is a critical piece of business infrastructure, the funnel for all existing and future revenue. When making changes to the design or functionality of your site, it is therefore important to get a detailed understanding of the impact these changes will have on user behavior.

In an eCommerce environment, A/B testing can help with:

  • Testing whether proposed new features or changes to the user experience will actually increase conversions

  • Testing different cart and checkout flows to find out which ones maximize conversions and minimize abandoned carts

  • Trying out different product options, pricing strategies or promotions to see which ones maximize customer spend

  • Catching changes that unexpectedly reduce revenue or conversions before you release them to all your customers

  • Experimenting with different colors or positioning of user interface elements to find the ones that maximize click-throughs and conversions

Whatever you test, the important thing is: A/B testing generates real quantitative data based on the behavior of users on your site, to determine the optimizations you should make. No more relying on generalized statistics from marketing gurus that say “when sites feature X, users spend Y more” or the like, without knowing whether the “users” represent your market. Now you can use real data about genuine customers on your site.

An example scenario for A/B testing in eCommerce

Let's say you are considering two different flows for customers to purchase products on your site:

One flow could have a one-click checkout on your product page, creating the most seamless experience by enabling customers to check out immediately. You might expect this to increase conversions by reducing abandoned carts.

The other flow might take customers to a cart page where they can be shown additional products to purchase before they go to checkout. You might decide to accept the risk that a longer flow will lead some customers to abandon their cart because of the compensating opportunity to sell additional products to other customers.

If you make some educated guesses, pick one of the options and move on, you will never be sure if you lost an opportunity to increase revenue or customer growth. By carrying out an A/B test, you can check the metrics and see which option is best for your business. Say, for example, you find that the one-click flow increases conversions from 9% to 10% but the average customer spends 20% less. If you are optimizing for revenue today you might decide to go with the cart page. Or maybe you decide the extra 1% conversion rate, which might translate into a larger long-term customer base, aligns better with your goals.
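
To see how those numbers trade off, here is a quick back-of-the-envelope calculation in TypeScript. The conversion rates and the 20% spend difference come from the scenario above; the $100 baseline average order value is a hypothetical figure chosen only to make the arithmetic concrete.

```ts
// Hypothetical worked example: revenue per visitor under each checkout flow.
// The $100 baseline average order value (AOV) is an assumption for illustration.
const revenuePerVisitor = (conversionRate: number, averageOrderValue: number) =>
  conversionRate * averageOrderValue;

const cartPage = revenuePerVisitor(0.09, 100);        // 0.09 × $100 = $9.00 per visitor
const oneClick = revenuePerVisitor(0.10, 100 * 0.8);  // 0.10 × $80  = $8.00 per visitor

console.log({ cartPage, oneClick }); // cart page earns more today; one-click converts more visitors
```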

Regardless of what you are optimizing for, the important thing is that you can make a decision on which flow to use based on real quantitative data about how your customers will behave with each flow. A/B testing takes the guesswork out of your eCommerce site, so you can focus on deciding what goals and objectives you want to optimize it for.

What makes a good A/B testing tool?

There are a number of things to consider when choosing an A/B testing tool:

  • Does it impact the performance of your site? Look for a tool that runs in the background with no measurable impact.

  • Does it integrate with the tools and services you already use, such as your content management system and site hosting? Do these services include built-in A/B testing? A built-in solution will usually be more cost-effective and better for your site’s performance.

  • Is it easy to connect it with your analytics software and other software you use to monitor your site’s metrics?

  • If you work with developers, does it integrate smoothly with the code management platform (such as GitHub or GitLab) they use?

At YYT, our modern headless commerce stack allows us to use Netlify’s Split Testing feature to run performant A/B tests that integrate seamlessly with our development workflow and popular analytics software including Google Analytics.
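
As a rough illustration of what that analytics integration can look like, the snippet below forwards the assigned variant to Google Analytics so results can be segmented by it. It assumes that the assignment is stored in a cookie named nf_ab (the name Netlify’s Split Testing has used) and that the standard gtag.js snippet is already loaded on the page; treat both as assumptions to verify against your own setup.

```ts
// Sketch only: report the assigned split-testing variant to Google Analytics.
// Assumes the variant assignment is stored in an "nf_ab" cookie (Netlify Split Testing)
// and that gtag.js is already loaded on the page.
declare function gtag(command: "event", eventName: string, params: Record<string, string>): void;

function readCookie(name: string): string | undefined {
  const match = document.cookie
    .split("; ")
    .find((part) => part.startsWith(`${name}=`));
  return match ? decodeURIComponent(match.split("=")[1]) : undefined;
}

const variant = readCookie("nf_ab");
if (variant) {
  // Send the variant as an event parameter so reports can be segmented by it.
  gtag("event", "ab_test_assignment", { experiment_variant: variant });
}
```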

How do you analyze the results of an A/B test?

Once you’ve identified your goals and objectives, chosen the potential changes to your site you want to test and selected an A/B testing tool, the work isn’t finished. You now need to analyze the results of your test and make sure you draw the right conclusions. The simplest analysis is to compare the aggregate data for each version. While this can work, to get the best results from A/B testing you should take a more sophisticated approach.

To get a complete understanding of the impact of proposed changes, you should compare the results not just across your entire audience, but across individual segments of your audience. These might include different devices (mobile, tablet, desktop), different markets (Europe versus the US, for example) or other distinct customer segments that are important for your business. It is also a good idea to check each step in your sales funnel to make sure the change you’re testing doesn’t cause an unexpected drop in conversions.
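
As a sketch of what that segment-level analysis might look like once you have exported your data, the TypeScript example below computes a conversion rate per variant and device type from a list of session records. The record shape is an assumption made for illustration; in practice you would adapt it to whatever your analytics tool exports.

```ts
// Illustrative only: compute conversion rate per variant and device segment.
interface Session {
  variant: "A" | "B";
  device: "mobile" | "tablet" | "desktop";
  converted: boolean;
}

function conversionRateBySegment(sessions: Session[]): Record<string, number> {
  const totals: Record<string, { conversions: number; sessions: number }> = {};
  for (const s of sessions) {
    const key = `${s.variant}/${s.device}`;
    totals[key] ??= { conversions: 0, sessions: 0 };
    totals[key].sessions += 1;
    if (s.converted) totals[key].conversions += 1;
  }
  const rates: Record<string, number> = {};
  for (const [key, t] of Object.entries(totals)) {
    rates[key] = t.conversions / t.sessions; // e.g. { "A/mobile": 0.09, "B/mobile": 0.11, ... }
  }
  return rates;
}
```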

You might get a clear result where the metrics for all the segments you’re analyzing point in the same direction; quite often this will be the case, and you can simply implement the most successful option. But you might instead find that a change performs well with some segments but not others.

Coming back to the example of the one-click checkout page, let’s say that on mobile the conversion rate increased from 9% to 11% and that mobile customers on average only spent 5% less. If you are optimizing for revenue today, you now have a third option: Use the cart page on desktop and the one-click checkout on mobile.
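
Assuming mobile shoppers had the same hypothetical $100 baseline order value used in the earlier calculation, the arithmetic makes that trade-off clear: on mobile the one-click flow yields roughly 0.11 × $95 = $10.45 per visitor versus 0.09 × $100 = $9.00 for the cart page, so it wins on that segment even though the cart page remains the stronger choice overall.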

Whatever the result and whatever objectives you want to optimize for, the important thing is: the more granular your data, the more accurate the conclusions you can draw from the test. By looking beyond the aggregate results and analyzing the metrics separately for each segment, you give yourself more options to further optimize your site to deliver on your business objectives.

How we can help

  • Designing and building a new site

  • New feature development

  • Site speed and conversion rate optimisation

  • Technical optimisation for SEO

  • eCommerce strategy and marketing

  • Ongoing maintenance and support

“YYT built our new Shopify website from scratch in only 2 months. We were impressed by their level of expertise and flexibility.”

Jessica Warch, Co-Founder & CEO at Kimaï