Data in this article is taken from our form performance benchmarking report, an open and free resource to explore benchmarking data across millions of form sessions.
When exploring form performance data and metrics, there is a strong temptation to ask questions like ‘is this metric bad?’ and ‘if we lower this other metric, will our conversion rate go up?’
There has always been a gap when looking at user behaviour data - understanding the definition of a given metric is one thing, but asking what it actually means is another. Is it bad? Should we be aiming to reduce that number? Increase it? Would decreasing it make a difference? A difference to what? Is that number important?
Some UX and analytics products have attempted to fill this gap with calculated health scores, digital user experience scores, frustration metrics and the like, offering some form of judgement based on the data itself. By looking at the quantitative data, and only the data, they try to help you prioritise what to do next by giving what is effectively an opinion - all generated automatically by some code.
It sounds like the dream - imagine an analytics product that contained no data at all, but when you logged in there was simply a list of plain-language recommendations like this: “if you change the position of this button and the microcopy of the call to action, your conversion rate will increase by 25%”. What a dream that would be.
Sadly, this is impossible. At least at the moment.
All analytics products are looking through a very small window at a very small part of a very large picture of what it means to interact with your brand online. That doesn’t mean the findings of each product are useless - far from it - you need insights wherever you can get them to build up a large and complex picture of your customers, work to understand them and improve the customer experience based on that data. Take away too many inputs, and your picture will be incomplete, and wrong. But try and conclude too much from too little, and you’ll be just as wrong in the conclusions you make.
In the past, we’ve been guilty of this (though we’ve done our best to avoid it). We’ve had ‘problem scores’ and ‘health metrics’ when looking at form and field data, doing our best to be helpful to our users by going some way to interpret the metrics we collect, to offer help and advice to our customers who are short on time and just want to know what to do. It’s easy to understand the temptation, but by looking at some of our recent benchmarking data, we can also quickly see why it might be misleading.
For example, a common piece of ‘wisdom’ about online forms is that the shorter they are, the better. Removing unnecessary fields reduces the amount of mental and physical energy required to complete a form, and will improve the overall user experience. This may be true. But how would that improvement show itself? By an increase in conversion rate? Could we say that long forms = bad, and shorter forms = good? Is it true that shorter forms will naturally have a higher conversion rate?
Well, in short, no.
You can see from the above that the trendline is a) almost embarrassingly arbitrary, and b) almost flat. There is hardly any correlation at all between the number of fields and the conversion rate of the form page.
Some forms have fewer than 10 fields and a conversion rate of under 10%, others have dozens of fields and a conversion rate of 90%.
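What does ‘hardly any correlation’ look like numerically? Here’s a minimal sketch - using made-up, illustrative points rather than figures from the report - that computes a Pearson correlation coefficient between field count and conversion rate:

```python
# Hypothetical per-form data points: (number of fields, conversion rate %).
# These values are illustrative only, not taken from the benchmarking report.
fields = [4, 6, 9, 12, 15, 20, 28, 35]
conversion = [8, 62, 30, 75, 18, 90, 40, 55]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson_r(fields, conversion)
print(round(r, 2))  # a value near zero means a near-flat trendline
```

On a scatter like the one above, a coefficient close to 0 is exactly what a nearly flat trendline means: knowing the number of fields tells you very little about the conversion rate.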
This may seem obvious to some, as all context about these forms has been stripped in the above - are they registration forms buried at the bottom of a long landing page stacked with other content, or Step 8 of 8 in a long application for a mortgage? Context is key, but that won’t stop you finding instances of people saying ‘short forms are better, so remove fields where possible’. You should be extremely wary of broad stroke advice like this. Context and nuance are needed.
There is another issue when looking at form metric data - confusing correlation with causation.
Take this set of data, which compares the amount of time a form takes to complete with the average number of times users have to return to fields to change information they’ve previously entered (think of this as a correction):
There’s some correlation between these two metrics, but could we say that more corrections mean a user is forced to take more time on the form? Or is it that those that take more time in a form are simply more likely to go back, check over, tweak their inputs to ensure they are correct before submitting? It’s not clear.
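One way to pin down what a ‘correction’ means is to count re-entries into fields a user has already visited. A minimal sketch, assuming a hypothetical per-session ordered list of field focus events (not a real product API):

```python
def field_returns(focus_sequence):
    """Count re-entries into fields the user had already visited.

    A simplistic proxy for 'corrections': each time focus lands on a
    field that was focused before, we count one return. The event shape
    here (an ordered list of field names) is an assumption for
    illustration, not how any particular product records sessions.
    """
    seen = set()
    returns = 0
    for field in focus_sequence:
        if field in seen:
            returns += 1
        seen.add(field)
    return returns

# A session that flows straight through has no returns;
# jumping back to fix an earlier answer adds one return per revisit.
print(field_returns(["name", "email", "phone"]))          # 0
print(field_returns(["name", "email", "name", "phone"]))  # 1
```

Averaging this count across sessions gives the ‘average field returns’ metric discussed here - but as the charts show, the number alone doesn’t tell you why users are going back.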
We also have to consider how messy real world data is. Another common piece of wisdom might be that ‘the more corrections your user makes (because of poor field validation, unclear labels etc), the lower your overall conversion rate will be’.
Let’s look at the data:
Again, there is some correlation there, but the dots are hardly tightly grouped along that trend line. There are forms with higher average field returns (those corrections) and higher conversion rates. There are forms with a low correction rate and low conversion rates. A change in one does not guarantee a change in the other.
How is this possible? Well, for one, you may have a form that customers are simply desperate to complete. No matter how many times their inputs get rejected as invalid, they return to try again and try to submit the form. They are so determined that numerous error messages can’t stop them submitting your form. Other forms may inspire such apathy in users that they simply don’t bother to try to amend anything they get wrong - they just leave. If you became solely focussed on reducing average field returns, there would be no guarantee it would yield any improvement in conversion rate.
Whilst making broad judgemental or prescriptive statements about forms can be difficult and often misleading, that does not make the data useless.
So you have a relatively high correction rate in your form - is that bad? Well, would it be possible to reduce this whilst maintaining your form’s conversion rate? That would be a good thing, right? If your form could be completed perfectly on the first attempt by users, every time, is that not something you should strive for? Or, failing that, could you still improve the overall UX and see time to complete and corrections drop whilst maintaining completion rates? Or, if you could improve the conversion rate of the form, it may not matter that corrections increase slightly, since it may be an indication that people really want to complete it. Or, you may find that by drastically changing your form, your conversion rate increases but as a result you get more form submissions from less qualified leads (in the case of a lead generation form, for example).
Context is key - if your form asks for name and email address and each user has to return to fields 12 times, something is going wrong. That should be relatively simple information for a user to provide. If you’re asking users complicated questions like configuring a loan repayment structure that suits them, changes to inputs may be inevitable. But you can still always try to improve against your own metrics, and importantly with a great understanding of customer sentiment too.
Take this last visualisation:
No trend line here, but if there were it would imply a correlation between a form’s view to start rate (of the people that see the form, how many start interacting with it) and its start to completion rate (of the people that start interacting with it, how many successfully submit it). Forms that invite people to start interacting with them also seem to be more easily completable. Perhaps this is an interesting measure of overall user experience - great looking and user friendly forms invite people to start them, and are easy to complete too. We’d never say this for certain, but as ever it’s interesting to see.
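These two rates multiply together into the overall conversion rate, which is worth keeping in mind when deciding which one to work on. A quick sketch with hypothetical session counts (illustrative numbers, not report data):

```python
# Hypothetical session counts for a single form -- illustrative only.
views = 10_000       # sessions where the form was seen
starts = 4_200       # sessions with at least one field interaction
completions = 2_940  # sessions ending in a successful submission

view_to_start = starts / views              # of those who see it, how many start
start_to_completion = completions / starts  # of those who start, how many finish
overall_conversion = completions / views    # the product of the two rates above

print(view_to_start, start_to_completion, overall_conversion)
```

Splitting conversion into these two stages at least tells you where people drop away - before they touch the form, or after they’ve started - even if, as above, it can’t tell you why.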
So even though that single user experience score sounds attractive, if you focused your time and energy on changing it, you could end up not improving the overall user experience of your website at all. Changing one metric can’t guarantee the improvement of others. It’s frustrating, but true.
None of this is easy, but we find it fascinating. Looking at the data in more detail only serves to show how much expertise is needed to be able to reliably make impactful changes to online form experiences. It’s expertise we have, and that you can tap into as well. It may not be easy, but we’re here to help.