The ultimate question. It really does boil down to just one question: the long-term health of your business is reflected in a simple fact – whether your customers would recommend you or not.
Our clients are often intrigued by the idea and want further insight into what's actually going on with their customers. Deciding to act on business results and general customer satisfaction is crucial, but where to start?
If you can’t measure it, you can’t manage it
Every now and then, at least some of your customers probably give you feedback on your services one way or another – while talking to a team member, suggesting changes to the way you handle a particular thing, or actively praising or complaining. However, unless you introduce a system for capturing that feedback and making it measurable, you will never truly understand what there is to improve, nor be able to manage your customers' experience.
By introducing a measurable system for handling your customers' opinions about you, you can set a benchmark against which to compare your progress over time. There are several ways to use those metrics for comparison; the two most important ones for starters are:
So which metric should you choose to manage your customer experience? First of all, remember that whatever you opt for, the results need to be clearly understood across your company and the metric itself needs to match your goal. Think about what you want to achieve with it – is analysing and improving the overall experience of present and future customers enough, or do you want the power to actively manage their future intentions and influence the success of your business?
No one likes filling out long, multi-question surveys (you will get fewer responses with those), and long, open-ended answers are hard to analyse on top of that. Your system should therefore be based on a single question that can easily be turned into a metric; only then should you invite customers to add an explanation of their score.
I've noticed many companies either mistake one metric for the other or try to somehow blend the two. Remember, though, that they are quite different and serve different purposes.
To measure your customers' satisfaction (CSAT), use a simple, friendly question with a basic scale. >> On a scale of 1–5, how satisfied are you with us / our service / our product? << is a good option for understanding how well a recent interaction was received. The scale is familiar and intuitive to most people, so it feels a bit like being given a grade. And as with grades, one customer can give you many over their journey, and they can differ. This, in turn, gives you an average of how satisfied your customer base (or a particular customer) is or was over the journey, as well as whether some of your employees consistently receive better scores than others.
CSAT is your metric of choice if you want to measure typically transactional events like support communication, problem solving, etc. and learn HOW SATISFIED your customers are with them (duh). It will tell you how happy your customers are with your service or product, but that's pretty much it – you still don't know anything about their future intent or engagement. For this reason, at Casbeg we use it only with our newest customers, just to be sure we're on the right track to secure the customer's success and a good experience.
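The averaging described above can be sketched in a few lines of code. This is a minimal illustration, not a real survey pipeline – the customer and employee names and the response data are made up for the example:

```python
from statistics import mean
from collections import defaultdict

# Hypothetical CSAT responses: (customer, employee, score on a 1-5 scale)
responses = [
    ("acme", "anna", 4),
    ("acme", "bart", 5),
    ("globex", "anna", 3),
    ("globex", "anna", 2),
]

def csat_averages(responses):
    """Average CSAT score per customer and per employee."""
    by_customer, by_employee = defaultdict(list), defaultdict(list)
    for customer, employee, score in responses:
        by_customer[customer].append(score)
        by_employee[employee].append(score)
    return (
        {c: mean(scores) for c, scores in by_customer.items()},
        {e: mean(scores) for e, scores in by_employee.items()},
    )

per_customer, per_employee = csat_averages(responses)
print(per_customer)   # {'acme': 4.5, 'globex': 2.5}
print(per_employee)   # {'anna': 3, 'bart': 5}
```

The same grouping idea extends to any dimension you track – per product, per support channel, per journey stage.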
With customers that have been with you for a while, however, NPS (Net Promoter Score) is gold if you want some predictive insight. >> How likely are you to recommend us / our service / our product? << with a scale of 0 (not at all likely) to 10 (extremely likely) will place your respondents in one of three segments:

Promoters – those who answer 9 or 10: loyal enthusiasts who will keep buying and recommend you to others.

Passives – those who answer 7 or 8: satisfied but unenthusiastic, and open to competitive offers.

Detractors – those who answer 0 to 6: unhappy customers who can damage your brand through negative word of mouth.
Based on the above, you can set company goals, adjust processes and focus on improving the results – both individual and general. NPS, or, as it's popularly called, the loyalty score, tells you a lot more than a satisfaction score – it's one thing to be happy using a product or service, but a whole different one to put your name on the line and recommend it to someone. Calculating your overall NPS, though, is more complicated than with CSAT. Lots of companies, even some with a well-established NPS process, only take into account the average score they get rather than the properly calculated score.
While this approach may work for internal communication purposes, an accurate calculation is required if you’re thinking of benchmarks and goals.
NPS = % of promoters – % of detractors
In simple terms, if 60% of respondents are 9s and 10s (Promoters), and 10% are 6s and lower (Detractors), your result is 50%, or a net score of 50. The score can range from -100 to 100 (all Detractors vs all Promoters).
If you get a score of 0, you either have all Passives or as many Promoters as Detractors. You should of course always strive for a positive score, and anything above 50 is good, because by simple arithmetic it means that more than half of your customer base are Promoters and less than half are Detractors. Keep in mind, however, that each survey has its purpose – don't stop at just measuring. As Jessica Pfeifer aptly put it once, "a good NPS is one that is better than your last NPS".
Asking the right question at the right time is crucial and can help you improve customer experience. For example, I was recently put in touch with several support consultants of an accommodation booking platform, following a refund request. After exchanging dozens of messages (some more helpful than others), my original request was denied, which, as you might guess, didn't make me a fan. What made things worse, though, was that after each email exchange I would get an automated NPS survey.
Had it been a CSAT survey, I would have probably graded them several times, sometimes with a 1, sometimes with a 3, and yet another time with a 4, which would make my overall score roughly a 2.7. Seems about right. Would I recommend their product? Hell no. Did my answer change with the fourth survey? Absolutely not – it only succeeded at annoying me and reinforcing my dissatisfaction, finally making me an active and determined detractor. Let's hope the lowest scores will at least help them avoid similar situations in the future 😉
So when is the right time to send the survey, and who should get it? First of all, map the basic milestones of your customers' journey. It is far more effective to survey clients once they have had the time and the chance to experience the value you provide. Don't base your planning solely on time, though – it's actions and steps you should focus on. Remember that NPS is about listening and learning, so don't skew the results by skipping chosen individuals.
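The milestone-driven approach above can be sketched as a simple filter. Everything here is hypothetical – the milestone names and customer records are invented for illustration, and a real setup would live in your CRM or survey tool:

```python
# Assumed milestones after which a survey makes sense (hypothetical names)
SURVEY_AFTER = {"onboarded", "first_value_delivered"}

def due_for_survey(customers, already_surveyed):
    """Return customers who completed a survey-triggering milestone and
    haven't been asked yet - triggering on actions, not elapsed time."""
    return [
        c["name"]
        for c in customers
        if c["milestones"] & SURVEY_AFTER and c["name"] not in already_surveyed
    ]

customers = [
    {"name": "acme", "milestones": {"signed_up", "onboarded"}},
    {"name": "globex", "milestones": {"signed_up"}},
]
print(due_for_survey(customers, already_surveyed=set()))  # ['acme']
```

The point of the sketch is the trigger condition: surveys go out when a customer crosses a journey milestone, and everyone who crosses it gets asked.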
This year's report on NPS benchmarks by CustomerGauge shows a remarkable dominance of one channel over all others: email. Even if you don't base your customer communication on it, or could implement an in-app solution instead, email accounts for over 70% of surveys, followed by phone at over 20% – the latter used mostly to take more control over the response rate, which can be pretty low otherwise.
At this point you might be wondering: "OK. But what should such a message look like, and what could I do to get my clients to answer the question?" If you want to gather feedback from more than just a third of your surveyees, try to activate them – send them a direct message, following some of these tips:
Et voila! Ready to send. At Casbeg, a typical NPS survey looks like this:
Simple, easy, effortless. Of course, your work isn't done here. It's not just about sending the survey and knowing your results – you need to act on them. In the next article devoted to this subject, I will share some how-to tips on setting up the right processes and taking the next steps to get the most business value out of your customers' feedback.