Can You Crash Your Customer Service?

Posted by James Dunford Wood 30 Sep 13

The Importance of Customer Service Audits for Online Retailers

Online retailers spend a lot of resources on marketing, product selection, web design, and optimisation of their e-commerce stores. But how many really drill down to optimise their customer service? They may spend time setting up the process in the first place and believe they have a great system in place. But unlike other parts of their business, customer service relies much more heavily on people - which is why it needs testing on a regular basis.

Bricks-and-mortar stores understand this, and many conduct regular mystery shopping exercises. But how many online retailers do this? How many test their returns process, their staff’s reactions to picky customers, their live chat response rates?


Most rely on their customers to test it - after all, customer feedback is one of the most valuable tools we have for understanding, at little cost to us, what is going on in the customer service department. But the problem with feedback is that it is skewed: only a small proportion of customers leave it, and they tend to be the ones with a complaint. So this type of feedback is excellent at picking up big problems, but it does not catch the smaller performance issues that may be putting customers off or deterring them from buying from your online store.

To do this properly, you need to test not just the human element, but the website signposts and processes. How easy is it to find customer service in the first place? How seamless is the checkout process? How easy is it to make changes online?

So here are a few points you need to bear in mind before you conduct a customer service audit (CSA). 

1) Decide Who’s Going to Conduct It

It may be tempting to keep this in-house, but it is essential that whoever carries out your CSA is completely independent and comes with the mindset of your typical first-time customer. The less the operatives know about your business, your process and the aims of the survey, the better. However, whoever is setting up the tests and managing the resulting data DOES need to know about your business and processes - both to design the structure of the audit and to analyse the results. You need to make sure they follow some of the ground rules below. There are a lot of offers out there put together by people who think this is easy. It is not!

2) Make Sure You Follow Best Practice

Before you start, you should make sure that your staff are aware in general terms that they may be subject to a mystery shopping exercise or customer service audit at any time, without giving them an idea of when. Second, you need to make sure that any mystery shoppers employed by you or the outsourcer are aware of and have agreed to the code of conduct. In particular, no recording should take place without consent (though notes can be taken), and if the audit is conducted by a third party, employees should not be identified by name.

3) Set Your Objectives

The worst thing is to create a customer service audit in a vacuum. You need to make sure you come out with conclusions that are ACTIONABLE, and to do this you need to set objectives and targetable metrics to measure. A CSA should never be done 'for the sake of it' - otherwise you will have wasted the opportunity to collect valuable data or, worse still, you could draw the wrong conclusions. Without a clear idea of what you want to do with the results, an audit can also damage morale.

Start with the management:

  • What are their research objectives? Get them to tell you what they want to find out, as this will uncover the areas of their customer service that they want to test, or that they believe are most important to improve.
  • They also need to tell you exactly what customer services they have in place and what is supposed to happen to deliver a good experience - in other words, what is their idea of a perfect customer experience?

  • Understand their market. Who are they aimed at? If it’s a kids’ clothing company, then young mothers may be their largest market, in which case the customer personas you use to conduct the audit will need to reflect this. Do they get a lot of business from foreign markets, and if so, which? You may need to test how well they cater for and respond to non-English speakers in this case.

  • What feedback have they already received? What have they done about poor feedback? This will flag up areas that need to be tested.

4) Set Your Tests

Now you are ready to start setting some areas to test. For each test, you need an expected result (as briefed by management), a benchmark result (which you would encourage them to achieve to get ahead of their competition), and, where applicable, some best-practice 'mitigation' strategies for when targets are missed. How they set their expected result will tell you a lot about the customer service expectations of the company. The more modest they are, the more they need you!

Take a simple example like phone support: management may expect the phone to be answered by an operative within one minute, with music to keep people on hold in the meantime. However, at busy times up to 15% of calls may take as long as four minutes to answer. This may be acceptable because the company simply does not have enough operatives to cover these rare spikes. Your best-practice recommendation would be to institute a queuing system, so that callers on hold know how long they have to wait.
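
By way of illustration, here is a minimal sketch in Python of how such a phone test might be scored against the expected and benchmark results. The thresholds, call times and tolerance are hypothetical placeholders, not figures from any real audit.

```python
# Hypothetical check of phone answer times against an expected result
# (management's target) and a benchmark result (a best-in-class target).
# All figures below are illustrative.

answer_times_secs = [35, 48, 52, 61, 190, 240, 40, 55, 70, 230]  # sampled test calls

expected_limit = 60       # management expects an answer within one minute
benchmark_limit = 30      # benchmark to get ahead of the competition
tolerated_overrun = 0.15  # up to 15% of calls may miss the target at busy times

within_expected = sum(t <= expected_limit for t in answer_times_secs) / len(answer_times_secs)
within_benchmark = sum(t <= benchmark_limit for t in answer_times_secs) / len(answer_times_secs)

print(f"Answered within expected limit: {within_expected:.0%}")
print(f"Answered within benchmark limit: {within_benchmark:.0%}")

if (1 - within_expected) > tolerated_overrun:
    # Mitigation strategy from the example above: a queuing system
    # that tells callers on hold how long they can expect to wait.
    print("Target missed: recommend a hold queue with wait-time announcements")
```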

Typical areas you will want to test will include:

  • How does your customer see your company when they walk through the (website) door? This is an evaluation of the initial impression your online store gives to the first-time customer.

Then onto detail:

  • Check out - how easy or quick was it?
  • Information you needed to know - was it clear and upfront?
  • Were you able to make changes without a problem?
  • How was the process of cancelling the transaction minutes after you confirmed it?
  • Was delivery as advertised?
  • Rate the packaging and messages sent with the product.
  • How was the returns process?
  • Were you able to order variants that were out of stock, and how often were you kept updated?
  • What happened when you abandoned a basket? Were you followed up with emails without permission?
  • How long did it take to answer the phone/email/live chat?
  • Does the company cater for different languages? (See above - this may be more or less relevant depending on the various markets the company operates in).
  • How efficiently were staff able to handle complaints?
  • What systems do the staff have for logging complaints, requests, and feedback?
  • What happened next? How did the company stay in touch with you? (This last test can take some time, so retention processes could be added as an appendix to the main report).

Whatever tests you choose, performance should be scored on a scale of 1-5, with the tests grouped into the four stages of pre-purchase, purchase, post-purchase and retention.
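
As a rough sketch, the results might be recorded in a structure like the one below. The test names and scores are hypothetical, purely to show the 1-5 scale and the four-stage grouping.

```python
# Hypothetical record of audit tests, grouped into the four stages,
# each scored 1-5 by the mystery-shopping operatives.
audit_scores = {
    "pre-purchase":  {"first impression": 4, "clarity of information": 3},
    "purchase":      {"checkout speed": 5, "cancellation process": 2},
    "post-purchase": {"delivery as advertised": 4, "returns process": 3},
    "retention":     {"follow-up contact": 3},
}

# An average score per stage highlights where the experience is weakest.
for stage, tests in audit_scores.items():
    avg = sum(tests.values()) / len(tests)
    print(f"{stage}: {avg:.1f}")
```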

5) Set Your Sample

The next step, once you have set the areas to test, is to agree on a sample size, as well as the frequency of testing - for example, the times of day. As a minimum, we would recommend buying and returning, under different names and using different payment methods, at least 20 items over a two-week period. In addition, you should create at least five customer personas to test their customer service to the limit, using five different operatives - though of course this will depend to some extent on the size of the store, as the volume and frequency of testing should not be so unusual as to raise red flags amongst the staff.

6) Consider a Customer Service Survey

As an extension, you might want to agree to send a specific customer service survey to existing customers. In this case, you need a completed sample size of at least 100 people. However, as mentioned above, this should be used as supporting material only, as the results will be skewed towards those people who are keen to fill out surveys. It is also easier to evaluate results if customers are required to rate individual elements on a scale of 1-3 (or at most 1-5) and then provide comments to clarify their score. Any wider a scale and you will get variations due to shopper bias.

7) Review and Rate Records of Previous Customer Service Interactions

  • Look at a selection of records of feedback and complaints, and how they were handled. How they are recorded is also part of the evaluation.

  • Sample some recorded customer service phone calls - for those companies who record these.

  • Review a sample of customer service email correspondence, following the entire email trail for a sample of about 50 customers.

8) Optional Process Review

To do a complete CSA in depth, you also need to spend time reviewing the processes behind your customer service. For example:

  • Is it cost effective?

  • Do you use the best mix of in-house and third party tools and resources?

  • How effectively do you store information about customers?

  • How effectively do you use this information (via analytics or business intelligence tools for example) to boost business performance?

  • How efficiently do you set goals?

9) Analysing Results

Once you have completed the exercise, you need to analyse the results and itemise action points, such as processes that need changing or areas that need retraining. Individual instances of poor customer service should not be singled out, but should be aggregated with other instances within the same test to produce an overall score.

Each of the tests above should also have an overall ranking factor, agreed with management in advance, denoting how important they think the tested customer service function is to the total customer service experience. That way, you will end up with about 20 scores, one for each test result; each of these is multiplied by its ranking factor and the results combined to give an overall customer service score for the entire exercise. This also gives management a way to prioritise fixing problems, based on how high up the priority list each scored test comes.
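
A minimal sketch of that calculation, assuming hypothetical test scores and ranking factors agreed with management (the test names and weights are placeholders, not prescribed values):

```python
# Hypothetical per-test scores (1-5) and management-agreed ranking
# factors reflecting how important each tested function is.
test_results = {
    # test name: (score 1-5, ranking factor)
    "checkout":           (4, 1.0),
    "returns process":    (2, 0.9),
    "phone response":     (3, 0.8),
    "complaint handling": (3, 1.0),
    "delivery":           (5, 0.7),
}

# Weighted overall score: each test score multiplied by its ranking
# factor, summed, then normalised by the total weight so the result
# stays on the same 1-5 scale.
total_weight = sum(w for _, w in test_results.values())
overall = sum(score * w for score, w in test_results.values()) / total_weight
print(f"Overall customer service score: {overall:.2f} / 5")

# Prioritise fixes: lowest weighted scores come first.
for name, (score, w) in sorted(test_results.items(), key=lambda kv: kv[1][0] * kv[1][1]):
    print(f"{name}: weighted {score * w:.1f}")
```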



