Understanding Customer Experience (CX) is both an art and science.
An experienced industry guru can walk into a store or restaurant and point out multiple improvements to layout or staff behaviour that will improve the customer journey. That’s the art.
The science element is interpreting what customers say, do and think about your business and brand experience. But to get scientific you need data, and in a CX context, that will be a combination of customer activity (such as purchases or footfall) and customer feedback. Collecting data and measuring changes in the former is relatively easy. The same cannot be said about customer feedback. Capturing feedback in stores and restaurants is hard, but there are ways to do it (see our blog post on ways to capture feedback in physical locations here).
When you have figured out how you plan to get the data, you then need to consider what questions to ask. This post covers some of the most common metrics used by chain stores to measure customer experience, as well as how to calculate and apply them. High performing multi-unit businesses use these metrics to gauge how well they’re doing and as levers to drive improvement in retail experience and loyalty.
Net Promoter Score (NPS) is a CX metric that is generated by asking customers a standard question that reads as follows: “How likely is it that you would recommend [insert brand/product/service] to a friend or colleague? Where 0 is not at all likely and 10 is very likely.”
From there, it’s a simple matter of subtracting the % of detractors [those who answer 0-6] from the % of promoters [those who answer 9-10]. Passive customers [those who answer 7 or 8] are omitted from the calculation.
For example, given the following set of results:
68% of customers rank you a 9 or 10 [promoters]
18% of customers rank you at 7 or 8 [passives]
14% of customers give a 6 or less [detractors]
The NPS would be: 68 – 14 = 54
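The calculation above is easy to sketch in code. This is a minimal illustration (not from the original post) that derives NPS from a list of raw 0-10 answers, reproducing the worked example's 68/18/14 split:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
    Passives (7-8) are counted in the total but otherwise ignored."""
    total = len(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * promoters / total) - round(100 * detractors / total)

# 100 hypothetical responses matching the example: 68 promoters,
# 18 passives, 14 detractors.
sample = [10] * 68 + [7] * 18 + [5] * 14
print(nps(sample))  # 68 - 14 = 54
```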
NPS gauges how many of your customers are invested enough in your product or service to act as promoters for it to their friends, family, and colleagues. All else being equal, an improving NPS should be an indicator of future sales growth. In a world where there are digital distractions everywhere we turn, customer advocacy has become a very important element of the marketing mix, and NPS is the best way to measure it. It is the most broadly adopted metric used to measure customer experience, at least among businesses with formal CX strategies.
Many experts would suggest NPS is best used as an overall relationship-based metric, but in my experience, it works just as well in a transactional context. However, asking the question on a relationship basis and a transactional basis in the same survey can cause confusion and should be avoided.
A CSAT question – or Customer Satisfaction question – asks customers how satisfied they were with their recent interaction with your business or product. CSAT can be tracked on a numerical scale (e.g. 1-5) or by giving a range of options from "Very Unsatisfied" through to "Very Satisfied". In the latter case, the percentage of customers choosing "Satisfied" or "Very Satisfied" is your CSAT metric.
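The percentage calculation just described can be sketched as follows. This is an illustrative snippet (the labels and data are hypothetical), assuming the labelled-scale variant where CSAT is the share of "Satisfied" or "Very Satisfied" answers:

```python
def csat(responses):
    """CSAT: % of responses that are "Satisfied" or "Very Satisfied"."""
    satisfied = sum(1 for r in responses if r in ("Satisfied", "Very Satisfied"))
    return round(100 * satisfied / len(responses))

# 100 hypothetical survey answers.
answers = (["Very Satisfied"] * 40 + ["Satisfied"] * 35
           + ["Neutral"] * 15 + ["Unsatisfied"] * 10)
print(csat(answers))  # 75
```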
Unlike NPS, which has very strict application rules, CSAT can be asked in a number of different ways. This means it can be customised to fit your specific needs and is a very flexible way to measure customer experience. It can be useful when you want to get more granular, drilling down into specific touchpoints of your customer’s journey. For example, in a retail context, you might ask consecutive CSAT questions to determine satisfaction with store appearance, service and value for money. This can help you identify what areas you should focus on improving in a specific location.
Customer Effort Score (CES) questions are sometimes referred to as Shopper Effort Score questions by retailers. These questions can be asked in a number of different ways, but the focus is always on measuring customer experience in terms of how much effort the customer has to put in to get their issue resolved or complete the task they set out to complete.
An example in a retail context might be: "How easy was it to find what you were looking for today?", with responses ranging from "Difficult" to "Easy".
Similar to the NPS calculation, CES = % of “Easy” responses – % of “Difficult” responses
“Effortlessness” is key here, so the higher your CES number the better.
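The CES formula above mirrors the NPS one, so the code looks similar. A hypothetical sketch, assuming a three-option "Easy" / "Neutral" / "Difficult" answer scale:

```python
def ces(responses):
    """CES: % of "Easy" responses minus % of "Difficult" responses."""
    total = len(responses)
    easy = sum(1 for r in responses if r == "Easy")
    difficult = sum(1 for r in responses if r == "Difficult")
    return round(100 * easy / total) - round(100 * difficult / total)

# 100 hypothetical responses: 70 Easy, 20 Neutral, 10 Difficult.
answers = ["Easy"] * 70 + ["Neutral"] * 20 + ["Difficult"] * 10
print(ces(answers))  # 70 - 10 = 60
```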
CES can be effective at identifying bottlenecks in the customer journey that cause frustration and are likely costing you sales.
Before NPS, CSAT and other slightly more complicated metrics were developed, the standard measurement methodology was a rating out of 3, 5 or 7. Rating questions are probably the most common way to measure customer experience, partly because they are the preferred metric on many consumer platforms and partly because they can be asked repeatedly without upsetting the customer (you probably answer one every time you take an Uber). They're simple numerical or star ratings, and they shouldn't be discounted just because they're so simple. It's that simplicity, and the ease with which both consumers and employees can understand them, that makes ratings a highly effective means of measuring customer experience.
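Ratings are usually reported as a simple average. A minimal sketch with made-up star ratings, rounded to one decimal place as most platforms display them:

```python
from statistics import mean

def average_rating(ratings):
    """Average star rating, rounded to one decimal place for display."""
    return round(mean(ratings), 1)

# Hypothetical batch of 1-5 star ratings.
stars = [5, 5, 4, 5, 3, 4, 5, 2, 5, 4]
print(average_rating(stars))  # 4.2
```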
It may be a bit of a stretch to describe a Yes/No split as a metric, but this question can be very useful in certain circumstances. When you want to boil down customer experience into the most simple information possible, Yes/No questions are a great option.
We find them particularly useful for determining staff compliance, as in the below example. When the customer's answer is black and white, why confuse them with a more complicated array of possible answers?
Due to the binary simplicity of a Yes/No question, in some cases it may be useful to add a dynamic follow-up question, particularly where the answer is "No". The additional qualitative data can shed light on why the customer responded negatively.
Tracking metrics like the ones listed above is critical in any multi-unit business. They tell you whether you’re improving or getting worse. They indicate where you’re strong and where you’re weak, and whether your customer experience strategy is effective. They indicate which store managers are doing a great job and who needs more training, which can be very useful when it’s difficult to compare different locations on financial performance alone. All very valuable business information.
However, it can be dangerous to put too much emphasis on metrics, and awarding bonuses based on them, although commonplace, is riskier still. A few employees will always attempt to game such a system in order to achieve the results required to get a bonus, rather than focusing on improving the actual experiences of fellow human beings. We've previously written about how "Improving NPS does not (always) equal Improving CX", and this should always be borne in mind when reporting or consuming CX metrics.
Finally, if you’re looking for more detail on how to use these metrics to measure customer experience in your stores or a delivery context, this blog post would also be worth a read: What Customer Feedback Questions Should You Ask in a Retail Store