A/B testing has been a longstanding tenet of digital marketing management, as a way of measuring the impact or success of a specific campaign. Indeed, a rudimentary, analog A/B test was run on marketing copy as far back as 1923, when the advertising pioneer Claude Hopkins used the return rate of promotional coupons to measure the impact of different campaigns.
What is A/B testing?
Fundamentally, an A/B test is a method of testing changes to a webpage, app or digital platform against the current design to determine which approach produces the better results.
At its most basic level, users are unknowingly split into separate groups, with one set shown the existing approach while the other is shown a new design. The variations are then compared against each other over time to see which generates the most conversions, clicks or time spent on your website.
You can attribute its popularity to the fact that it gives (mostly) decisive results, even with relatively simple experiments – it’s very straightforward for digital managers to make iterative changes and be in a position where they can report positive progress back to the wider business.
A/B testing = short-term thinking?
However, while it might provide decisive results, are A/B tests really the best way of understanding and serving your audience? One problem with relying on quantitative, binary testing in this way is that it doesn’t lend itself to a longer-term, more sophisticated approach to understanding your audience.
Essentially, upon completion of the testing period, the person running the A/B test has three different options:
• Option B is significantly better than A. Choose B.
• Option B is significantly worse than A. Keep Option A.
• Option B is not statistically different from A. Er, try something else?
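The three outcomes above boil down to a simple significance check. As a minimal, illustrative sketch – the function name and the 95% confidence threshold are our own assumptions, not part of any particular tool – a two-proportion z-test can classify a finished test:

```python
import math

def ab_outcome(conv_a, n_a, conv_b, n_b, z_crit=1.96):
    """Classify an A/B test result with a two-proportion z-test.

    conv_*: conversions in each group; n_*: users in each group.
    z_crit=1.96 corresponds to a 95% confidence level.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis that A and B are equal.
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    if z > z_crit:
        return "choose B"
    if z < -z_crit:
        return "keep A"
    return "no significant difference"
```

For example, 600 conversions from 10,000 users on B versus 500 from 10,000 on A clears the bar ("choose B"), whereas 510 versus 500 lands in the inconclusive third bucket.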
This leads to a ‘conversion first’ mindset that treats users as little more than numbers on a screen. Focusing purely on conversions can result in short-term ‘fixes’ that end up doing more harm than good in the long term.
It’s also very hard to do effectively on platforms with exceptionally large audiences. If a platform serves 10 million users, making any significant change – even for the 5 million in a test group – becomes fraught with risk.
Even if organizations evolve their testing approach to a multivariate model (i.e. introducing multiple options, which increases the testing sophistication), this brings its own challenges, and produces results that aren’t necessarily immediately actionable thanks to their relative complexity.
Scenario A using standard A/B testing:
A streaming company wants to optimize its homepage in order to convert free triallists to annual subscribers. It creates two versions of the homepage featuring different calls to action, splits its entire user base in half (whether they are subscribers, triallists or visitors interested in what content it can offer) and tests the two versions. One version shows a slightly higher conversion rate, so the company decides to implement it for all users.
How does mtribes approach A/B testing differently?
Whereas A/B testing is something of a sledgehammer – making changes to users’ experiences based on the results of a single all-encompassing test – the approach mtribes takes to audience segmentation and subsequent personalization is more akin to a scalpel.
mtribes allows you to configure and apply tracking rules for each user Tribe individually, choosing from a broad range of parameters including user sign-up attributes or ongoing behavior. These parameters can then be used to personalize individual Tribes’ experiences.
This means that mtribes flips the A/B testing approach on its head: by using Tribes to intelligently segment an audience and personalize their experiences, it’s possible to then test the performance of relevant UX and content for each specific audience.
On top of this, mtribes offers users the ability to intelligently target these Tribes, using the Targeting feature. This is designed to unify a range of functions, including feature configuration, drag and drop ‘rail scheduling’, A/B testing and audience insights into a single ‘targeting’ interface.
By doing so, you could, for example, create tailored messaging that is served to different users even when they are consuming the same content: adapting the call-to-action, or scheduling targeted secondary content or features depending on their subscription tier, and monitoring each CTA’s impact. In this way, users can be far more precise in their approach.
Using mtribes in this way also means organizations are in a position to focus on a much longer-term testing approach. Rather than the ‘quick-win’ split test, mtribes functionality provides users with a dashboard that continually measures the performance of different content or UX choices for each Tribe.
It still allows you to carry out a focused A/B test on any of these segments if required, but the tool also presents the user with real-time information that allows for ongoing analysis of the relative performance between Tribes and features.
Scenario B using mtribes:
A streaming company wants to optimize its homepage in order to convert free triallists to annual subscribers. Using mtribes, it already knows what proportion of triallists have used the platform extensively, and it wants to test two different but tailored experiences containing targeted calls to action, content and features for this specific audience.
It creates two new versions of the homepage specifically for triallists, and tests two separate calls to action on a small but statistically significant segment of this audience to learn more about what works. One version shows a slightly higher conversion rate overall, and the test also reveals higher engagement among users who belong to a different Tribe. The company implements this version for a larger proportion of triallists, but also schedules additional content rails targeting that secondary Tribe to further optimize sign-up rates.