The Fresh Egg blog
Over the past few months I’ve been really getting to grips with the world of Conversion Optimisation Testing - the process of testing to improve the performance of a landing page or conversion funnel with a view to increasing leads, sales or completed objectives.
Conversion optimisation can be a tricky subject, with many caveats. The field is littered with recommendations, best-practice guidelines and suggestions for how to maximise the conversion potential of a website. Additionally, there is no shortage of tools and packages designed to empower designers, developers and online marketers to collect and analyse data with a view to making informed changes to key landing pages and conversion funnels. With this plethora of resources, the process can feel daunting and a little overwhelming. Trying to establish where to begin and which methods to use can seem like an impossible task, which is why I’m writing this blog post.
In part this post is a way to share what I learn over the coming months, but it is also a way to formalise my findings and provoke some discussion on best practices and the methods preferred by other people.

Conversion optimisation testing offers the opportunity to test a website in a live environment. This is an advantage over the alternative approach, whereby a new page or section simply replaces an old one and the impact is monitored afterwards. Live testing means all versions are subject to the same market conditions because they run at the same time, enabling us to limit the impact of external factors which might otherwise skew the test.
There are a variety of methods recommended for conversion optimisation testing, and the truth is, the correct one will depend entirely on the fundamentals of your given project. However, two predominant methods are discussed again and again as the most effective: A/B (split) testing and multivariate (MVT) testing.
A/B (Split) Testing
The first and most simple is known as “A/B” testing. In its simplest form, a separate, second variation of the page being tested is created with various alterations from the control page (the original). Traffic to the page is then split between the original version (Version A) and the new version (Version B). The performance of the two pages is monitored for a period of time and the page which performs best is selected. At this point another variation may be created (Version C) and traffic is split between the winner of the initial test and the newest version of the page. This process typically continues until all logical variations have been created and tested, with each new variation taking into account the knowledge learned from the test which preceded it. Once all logical page variations have been tested, the highest-performing page is selected and permanently replaces the original.

Different tools allow you to conduct split testing in slightly different ways. It is common to create more than one variation: if a site receives enough traffic it may be possible to create two or three additional variations of the control page (versions A/B/C/D) and split the traffic between all of them equally. The number of variations which can be tested at once typically depends on how much traffic the website receives day to day, as running multiple variations on a low-traffic website means statistically useful results take a long time to accumulate.

Google Website Optimiser is a common tool for A/B testing and takes a more structured approach. It allows us to create two variations in a single experiment; once a winner is found, a new variation (C) can be tested in a second experiment against the winner from the first, and so on.
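Deciding when a variation has actually “won” is ultimately a statistical question. As a rough sketch of the idea (not how GWO computes it internally, and with illustrative visitor numbers), a simple two-proportion z-test can indicate whether the difference between two conversion rates is likely to be real rather than noise:

```javascript
// Sketch: two-proportion z-test for an A/B test result.
// The visitor and conversion counts below are illustrative, not real data.
function zTest(conversionsA, visitorsA, conversionsB, visitorsB) {
  const pA = conversionsA / visitorsA;
  const pB = conversionsB / visitorsB;
  // Pooled conversion rate under the null hypothesis (no real difference)
  const pPool = (conversionsA + conversionsB) / (visitorsA + visitorsB);
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / visitorsA + 1 / visitorsB));
  return (pB - pA) / se; // |z| > 1.96 corresponds to roughly 95% confidence
}

// Version A: 120 conversions from 4,000 visitors (3%)
// Version B: 165 conversions from 4,000 visitors (4.125%)
console.log(zTest(120, 4000, 165, 4000).toFixed(2)); // ≈ 2.71, above 1.96
```

This is also why low-traffic sites struggle with many simultaneous variations: with small visitor counts the standard error stays large, and the z-score rarely clears the confidence threshold.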
Multivariate (MVT) Testing
Deciding which method of testing to use on your website
A/B testing is the easiest method of conversion optimisation testing, as the implementation of this type of test is so simple. Two variations of a page are created (e.g. mywebsite.com/index1.htm and mywebsite.com/index2.htm) and traffic is literally split 50/50 between the two. There are also fewer technical implications of testing in this way; typically, cookies are used to maintain a consistent user experience – once a user has seen a particular variation of a page, it is the same variation they will see if they leave the site and return at a later date on the same computer and web browser. Google Website Optimiser offers a pain-free method for implementing this type of test, requiring only the addition of a small JavaScript snippet to the pages being tested.
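GWO generates that snippet for you when you set up an experiment, so I won’t reproduce it here. But the underlying idea – a sticky, cookie-based split – can be sketched as follows (a simplified illustration, not GWO’s actual code; the cookie name and variation labels are hypothetical):

```javascript
// Sketch: sticky 50/50 split. A returning visitor (same browser, cookie
// still present) always sees the variation they were first assigned.
function assignVariation(existingCookie, random = Math.random()) {
  if (existingCookie === 'A' || existingCookie === 'B') {
    return existingCookie; // returning visitor keeps their variation
  }
  return random < 0.5 ? 'A' : 'B'; // new visitor: split traffic evenly
}

// In a browser the result would be persisted, e.g.
// document.cookie = 'ab_variation=' + assignVariation(readCookie('ab_variation'));
console.log(assignVariation(null, 0.3)); // new visitor, assigned 'A'
console.log(assignVariation('B'));       // sticky: stays 'B'
```

The cookie is the important part: without it, a visitor could see Version A on one visit and Version B on the next, muddying both the user experience and the test data.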
Another significant benefit of testing in this way is the seamless integration with Google Analytics. Because two separate pages exist, a report can be generated within Google Analytics comparing the two against each other after an appropriate amount of time has passed – allowing us to see not only whether a page’s conversion rate has increased, but also whether other metrics such as Bounce Rate or Time on Page have improved as well.
In terms of flexibility, again A/B testing has great benefits regarding the design and functionality of a page. Because we’re creating a completely new version of the page, we are not limited or restricted by other elements which may need to be considered when testing in combinations, as with an MVT experiment. Because an A/B test will usually involve two (or more) separate and unique variations, the margin of difference between the performances of the variations is typically much greater and a winning variation is usually identified quickly. Faster results mean less time testing and more time for your winning variation to start converting visitors.
The simplicity of this method of testing also means that permanently switching from one variation to another is a simple process too, requiring only the removal of the GWO JavaScript snippets. Because of these benefits, A/B testing is almost always ideal for relatively low-traffic websites or websites where conversion levels are minimal – such as blogs or educational sites.
A/B Testing also makes testing conversion funnels very simple, but potentially a little hard to manage. Traffic can be split between two pages, which then lead down different conversion funnels to a shared conversion goal page. This is simple to implement but has potential issues regarding the resource required to develop these entirely separate funnels. Furthermore, changing elements along the conversion funnel makes it even more difficult to identify exactly what increased or decreased conversion rates. For example, something which serves to increase conversion rates at the beginning of funnel A may be countered by something which decreases conversion rates at the end of funnel A; with a simple A/B test this would be difficult to measure.
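One way to mitigate this measurement problem is to record step-to-step conversion along each funnel, rather than only the shared goal page. A minimal sketch (the funnel step counts below are illustrative):

```javascript
// Sketch: step-by-step drop-off rates for a funnel, so a gain at one
// step isn't masked by a loss at another. Counts are illustrative.
function stepRates(counts) {
  const rates = [];
  for (let i = 1; i < counts.length; i++) {
    rates.push(counts[i] / counts[i - 1]); // conversion from step i-1 to step i
  }
  return rates;
}

// Visitors reaching each step: landing → basket → details → purchase
const funnelA = [1000, 600, 300, 150];
const funnelB = [1000, 700, 280, 150];

console.log(stepRates(funnelA)); // [0.6, 0.5, 0.5]
console.log(stepRates(funnelB)); // stronger first step, weaker second step
```

Here both funnels deliver 150 purchases, so an end-to-end A/B comparison would call them equal – but the per-step rates reveal that funnel B’s improved opening is being cancelled out further down, which is exactly the scenario described above.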
Many different factors have an impact on website conversion rates, and this is where A/B testing is lacking; simply testing one page against another doesn’t offer enough specific information about what is aiding or hindering a visitor’s path to conversion. It might be an unclear CTA, or an off-putting image acting as a barrier to conversion. Creating a new version of the page may well improve conversion, but the downside is that we haven’t learnt which elements of the original variation weren’t working, and more importantly, which elements of the new version are working.

MVT testing allows for a more refined strategy, because we can make much smaller tweaks and changes and then test and analyse their impact. More significantly, we can also test which elements work well in combination and which might be working against each other. For lower-traffic websites, minor changes to colours, headings and images may not have a measurable impact, which is why simple A/B testing is a viable alternative. However, for websites with a high volume of traffic, MVT testing is the preferred method. This factorial process of testing is carried out by defining key areas of a page, within which elements can be alternated independently of the other defined areas. For example, these areas might include (but are not limited to):

- the page heading
- the main image
- the call to action
- button and link colours
Unlike an A/B test, all variations and combinations are displayed on the fly within a page, so thought must be given to the impact of swapping various elements in and out of the defined areas and how (if at all) they will work together. This seems simple enough when we’re considering swapping one image for another, but consider the notion of moving a primary CTA from one defined area to another – in some cases this might produce one combination with two CTAs and another with none. In this case it is important to have a plan in place for ensuring that only the combinations you want to test are displayed to users.
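One way to plan this is to enumerate the full set of combinations up front and filter out the invalid ones before the experiment runs. A rough sketch (the area names and section contents are hypothetical):

```javascript
// Sketch: enumerate MVT combinations for two page areas, then filter
// out invalid pages (two CTAs, or no CTA at all). Contents illustrative.
const areaTop = ['heroImage', 'ctaButton'];
const areaSidebar = ['ctaButton', 'testimonial'];

const combinations = [];
for (const top of areaTop) {
  for (const side of areaSidebar) {
    combinations.push([top, side]);
  }
}

// Keep only combinations with exactly one call to action on the page
const valid = combinations.filter(
  (combo) => combo.filter((el) => el === 'ctaButton').length === 1
);

console.log(combinations.length); // 4 raw combinations
console.log(valid.length);        // 2 survive the "exactly one CTA" rule
```

Even in this tiny example, half of the raw combinations are pages you would never want a visitor to see, which illustrates why larger MVT experiments need this kind of constraint thought through in advance.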
Other complexities of an MVT test include the integration with Google Analytics. Different tools offer different levels of analytical reporting. In the case of Google Website Optimiser, the built-in reporting is minimal, and therefore integration with Google Analytics is required for more comprehensive analysis. The most documented method for integrating a Google Website Optimiser MVT test with Google Analytics is to use custom variables to identify combinations, which are then fed into Google Analytics. This is a technical implementation and opens the floodgates regarding the impact it might have on existing website tracking, and as such it should be implemented with care and consideration.
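In classic (ga.js) Google Analytics, custom variables are set with `_setCustomVar`, which takes a slot index, a name, a value and a scope. A common pattern is to record the MVT combination the visitor was shown as a session-level custom variable – sketched below with the command queue stubbed out, and with a hypothetical slot index, variable name and combination ID:

```javascript
// Sketch: tagging the displayed MVT combination into classic Google
// Analytics via a custom variable. Slot, name and combo ID are illustrative.
var _gaq = _gaq || []; // classic ga.js command queue (stubbed here for Node)

function tagCombination(comboId) {
  // _setCustomVar(slot, name, value, scope) — scope 2 = session-level
  _gaq.push(['_setCustomVar', 1, 'MVT_Combination', String(comboId), 2]);
}

tagCombination(7);
console.log(_gaq[0]); // ['_setCustomVar', 1, 'MVT_Combination', '7', 2]
```

The caution in the paragraph above applies directly here: custom variable slots are shared across the whole site, so writing a combination ID into a slot already used by existing tracking would silently overwrite that data.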
To summarise, the key benefits of each method of testing are outlined below:
| Multivariate Testing | A/B Testing |
|---|---|
| Identifies which individual elements help or hinder conversion | Simple to implement – traffic is split 50/50 between separate pages |
| Tests which elements work well in combination and which work against each other | Seamless reporting through Google Analytics |
| Suited to websites with a high volume of traffic | Larger differences between variations, so a winner is identified quickly |
| Allows small, iterative tweaks to be tested and analysed | Ideal for low-traffic websites or sites with minimal conversions |