Category: Customer Research

Can You Afford the Hidden Cost of Off-Shoring?

The question has been debated for the better part of the last decade: "Does it make sense to send your customer service call center off-shore?" It certainly made cents to do so. With lower labor and operating costs, the off-shoring craze added a ton of savings to the bottom line.

But, what is the cost in customer satisfaction? Some companies learned that the cost of customer ire was not worth the savings.

Now, there is more evidence that there is a specific, calculated cost in customer satisfaction when U.S. customers perceive that a call center is off-shore.

Does this mean that off-shoring never makes sense (or cents)? No. One answer does not fit all in this debate. Nevertheless, there is more warning than ever that companies should calculate the cost of lost satisfaction while they are calculating the savings in the operating budget.

Praise or Criticism? What Works Best?

It's a classic debate in the world of call center quality assessment (QA). Do you use QA to praise Customer Service Representatives (CSRs) so as to encourage them and build their self-esteem? Do you use QA to be critical and hold CSRs accountable to keep them honest? Is there a happy medium, and if so, where is it?

When giving seminars, I often use the word pictures of the "QA Nazi" (who uses QA as a means of beating CSRs into submission) and the "QA Hippie" (who uses QA to give CSRs smiley faces and make their world a "happier place") to represent the extremes on both sides of the spectrum.

My coworker recently forwarded an article to me from NY Times Magazine about some research that's being done on the power of praise and criticism with children. While the research focuses on parents and their children, I would submit that there are some lessons for us all to learn in the QA, training and coaching arena.

The most recent research is finding that undue praise can actually have a negative effect. Those who are constantly and generally praised tend to become more competitive, less motivated and less willing to put out effort towards improvement.

Does this mean that praise isn't important? Not at all. What the research is discovering is that praise is a powerful force when it is specific and sincere.

I'm sure the debate will never end, and I'm not sure that it should. A professor of mine said, "truth lies at the tension between the two extremes," and I've found it apt in many situations. Finding the right balance between praise and accountability is elusive, but it is one toward which all QA teams should strive.

I continually come back to a few key tenets:

  • Know what drives your customers' satisfaction by asking them
  • Define specific, desirable behaviors that will meet and exceed those expectations
  • Measure those specific behaviors
  • Give consistent, honest, data-led feedback to CSRs, telling them which behaviors they are consistently performing and which they are inconsistently demonstrating
  • Train and coach CSRs toward improvement
  • Praise CSRs for specific, documented achievements and improvements
  • Hold CSRs accountable for specific, documented lapses in performance
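To make that feedback data-led rather than anecdotal, the per-call monitoring results can be tallied per behavior. Here is a minimal Python sketch, using hypothetical behavior names and call data, that computes how consistently each defined behavior is being performed across a sample of analyzed calls:

```python
from collections import defaultdict

# Hypothetical monitoring results: each analyzed call records which
# defined behaviors the CSR performed (True) or missed (False).
calls = [
    {"greeting": True, "used_name": True, "offered_help": False},
    {"greeting": True, "used_name": False, "offered_help": False},
    {"greeting": True, "used_name": True, "offered_help": True},
]

def consistency_rates(calls):
    """Return the share of calls in which each behavior was performed."""
    totals = defaultdict(int)
    for call in calls:
        for behavior, performed in call.items():
            totals[behavior] += performed
    return {b: totals[b] / len(calls) for b in totals}

# List the least consistent behaviors first: these are the coaching targets.
for behavior, rate in sorted(consistency_rates(calls).items(), key=lambda kv: kv[1]):
    print(f"{behavior}: performed on {rate:.0%} of calls")
```

The output sorts the behaviors from least to most consistent, which gives the coach a specific, documented starting point for both praise and accountability.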

The Effect of Metrics on Customer Satisfaction

There's been an interesting conversation happening among the North American Call Center Professionals group on LinkedIn. The question originated with someone asking how you measure the effect that abandon rates and ASA (Average Speed of Answer) have on Customer Satisfaction. In this case, the call center had implemented some internal initiatives to move their metrics, but wondered how it may have affected their customers' satisfaction.

Several have contributed to the discussion:

"This is the ageless question! The answer is like noodle soup. You run out of noodles or broth but not at the same time. My research found the following, 40% of abandons were wrong numbers, 25% solved the problem or did not need services or purchase, 25% called back, 5 percent were not sure why they called, 5 percent were drunk and just wanted to talk to someone! (Smile) We called every number abandoned during a week period to get this data!" – Arnold Talbott

"The correlation between ASA and satisfaction/loyalty can be measured and quantified, as can the correlation between other access-channels/issues and contact-handling attributes by correlated against satisfaction/loyalty and positive/negative WOM. It is an industry and company specific item to be measured. While generalized numbers (TARP's or anyone elses) can serve as a strawman, you really have to measure your customer's experience." – Jeff Maszal

I really liked Jeff's last statement, and completely concur. If you really want to know how your abandon rates or ASA are affecting customer sat, then a small, focused customer survey can easily do the trick. Over a period of time, call customers who abandoned the call and those who did not and ask them a few questions about their overall satisfaction with the experience. Do the same thing with customers who experienced a long wait in queue versus those who had a short wait.

These types of surveys can be relatively simple and do not need to cost an arm and a leg, because you're limiting the scope of your inquiry to one basic question: "How satisfied were you with the experience?" The key is not to rely on industry-wide numbers that may, or may not, reflect your customers' views. As our group regularly conducts custom surveys like these for clients, we find there is no substitute for asking your customers about their experience and satisfaction when they called your call center.
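As a rough illustration of how simple the analysis can be, here is a minimal Python sketch with hypothetical 1-5 satisfaction ratings, comparing callers who abandoned in queue against those who reached a CSR:

```python
from statistics import mean

# Hypothetical 1-5 satisfaction ratings from a small follow-up survey.
abandoned = [2, 3, 2, 4, 3, 2]  # callers who abandoned in queue
completed = [4, 5, 4, 3, 5, 4]  # callers who reached a CSR

# The gap between the two group means is the measured satisfaction
# cost of abandonment for *your* customers, not an industry average.
gap = mean(completed) - mean(abandoned)
print(f"Completed-call mean: {mean(completed):.2f}")
print(f"Abandoned-call mean: {mean(abandoned):.2f}")
print(f"Satisfaction gap:    {gap:.2f} points")
```

The same comparison works for long-wait versus short-wait callers; with real data you would also want a sample large enough for the difference to be meaningful.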

If you would like to join my network on LinkedIn, you may use my email address to send an invitation!

Measure What You Know, Not What You Perceive

QA is not a crystal ball. I am a patient person. Nevertheless, I've learned that I can also be an emotional volcano. I am very slow to anger, and I rarely erupt, but there is a limit to how much frustration I will bear before the explosion is bound to occur. I'm not saying this is a good thing, but it is true about me. Because of this, family, friends, and service providers will often misread me and my responses. I don't look angry. I'm not screaming and yelling. So, they conclude, everything is just fine when it's not. In reality, there's an eruption brewing just below my calm exterior.

You can't always tell what a person is thinking and feeling.

When creating criteria for your Quality Assessment (QA) scale or monitoring form, it's best to clearly define the behaviors you're listening for from the Customer Service Representative (CSR) on the phone. The easiest way to stay objective is to measure that which you can hear and know. Keep your criteria limited to what the CSR says to the person on the other end of the conversation.

It's quite common to find companies or individuals basing their assessment on what they perceive the customer thinks or feels. I've seen QA scales that are based on how well the CSR met or exceeded the customer's expectations. However, unless you interview each customer, you're making specious judgements about what that customer thought of the experience. In addition, some customers will never be satisfied. It would not be appropriate to rate the CSR's effort based on an uncontrollable outcome.

People also like to make arguments based on the perceived response or lack of response from the customer. "I shouldn't be penalized for not saying 'please,'" a CSR might argue, "because the customer clearly didn't care whether I said it or not." But you don't know what that particular customer thought, felt or perceived. It's just like my friends thinking that I'm perfectly calm when there's an eruption brewing beneath the surface. QA is not intended to be a crystal ball that looks into the mind of each customer.

We can know, for a fact, what drives our customers' satisfaction on the whole. A good customer satisfaction survey will provide us with this information and it can be critically important in defining the elements we expect as part of our QA criteria. But to try and judge an individual call based on perception of the customer's response is an exercise in futile subjectivity.

We can't control or accurately read every customer's mind, but we can control what we say to each customer and how we say it.

Creative Commons photo courtesy of Flickr and nancee_art

Minimize Customer Aggravation with Transfers

As our group does customer satisfaction research for different clients, I’ve noticed that "getting answers without transfers" has been growing as a key driver of satisfaction in customers’ minds. Transfers are a necessary evil in today’s business environment, and the larger the company, the more likely it is that customers must be transferred.

Your customers will never be excited about having to be transferred, but you can minimize customer frustration and how badly they penalize you by consistently doing a few key things:

  • Whenever possible, "soft transfer" (a.k.a. "warm transfer") the caller. This means that you place the caller on hold and call the person or department to whom you’re transferring the caller. Provide the caller’s name (and account/order number, if applicable) and briefly state the reason for the call. Now the next agent can pick up the call, personally greet the caller and let the caller know they understand the reason for the call. Another variation is for the initial Customer Service Representative (CSR) to conference the customer in with the next CSR and introduce them: "Mike? Thanks for holding. I’ve got Barry on the line. He’s our specialist. I’ve told him about your problem and he has your account up. You’re all set!"
  • Many businesses can’t "soft transfer," or refuse to do so, because it would mean agents sitting on the line in queue with the customer until another agent picks up, and the lost time and revenue aren’t justified. In that case, it’s imperative that the CSR who initiates the transfer informs the customer where they are being transferred and what they can expect. Are they being transferred to a department or a specific person? Will they go into a queue? Will they have to navigate an IVR menu? Will they possibly go into voice mail? What information should they be ready to provide the next person? Anything you can do to prepare the customer for what they will experience will help soften the blow of the transfer.
  • Apologize. As I’ve written elsewhere on this blog, customers want two things: they want the issue resolved, but they also want to know that you care. Transferring them is not going to immediately resolve their issue, and they are likely not going to be pleased to have to sit on hold and explain themselves all over again. What you can do is let them know you care about their frustration. A simple, "I’m sorry, but that is not an issue I normally handle. Let me get you to a specialist in that area," goes a long way. It may not make them happy, but it communicates that you care, even though you can’t resolve the issue yourself.

What Does it Mean to Be “Customer-Centered”?

Glenn Ross has been asking Customer Service bloggers to chime in on the question "How Do You Define a ‘Customer-Focused’ strategy?". I’m late to the party, but I wanted to get in my two-cents worth – especially since our group’s mission is to "design and implement customer-centered systems to measure and enhance service quality."

To be simple and direct, I believe that customer-focused means that you’ve invested the resources to listen to your customers and discover what they want. Too many businesses follow what their gut tells them customers want. Others will listen, but only to the customers who call or complain. There’s likely a silent majority out there that have never contacted you and since they’ve never landed on your radar screen you’ve never talked to them.

If you truly want to be customer-focused then it starts with getting to know your customers. A small, statistically sound, directed survey of all your customers can be done economically, can assure you that your strategy is truly focused on your customers’ desires, and will increase your odds for success.

If you want to know how, email me and I’ll be happy to put you in touch with our customer research team!

Do You Have Time for Quality?

We’re all pressed for time. The speed of business seems to constantly increase as the windows of opportunity become narrower. As the pressure mounts, time management becomes a larger issue. I’ve found time to be the most heinous enemy of quality programs. When the phones start ringing and fires start blazing, listening to and analyzing calls is the first thing that gets shoved to the back burner.

I sat in on a calibration session with a client recently. This client hires me to come in once a quarter and calibrate with their team of quality call coaches. At one point during the session I asked how things were going in their "regular" calibration sessions between my quarterly visits.

Blank stares were followed by an explanation of how there has been no time for calibration.

There’s no time to listen to calls. There’s no time to listen for anything more strategic than whether the CSR did what we wanted. There’s no time to pull people off the phone to coach them. There’s no time to calibrate and make sure everyone on the Quality Team is analyzing and coaching calls the same way. There’s just no time for quality.

As a business partner, consultant, father of teenagers, and community leader – I find there are things for which I don’t always have time. Lawn care, home repairs, pest control, automotive maintenance to name a few – these are things that sometimes make more sense for me to hire others to do. That way, I know they will get done, I know they will get done better than I would probably do myself, and my time is better spent doing what I do well. I could do these things myself – but I’ve learned from experience that they tend not to get done, or they don’t get done well.

Perhaps you’re finding that you’ve got no time for measuring customer satisfaction or the quality of service your team is providing on the phone. Perhaps you’ve tried but you know you’re not doing it well and your team’s resources would be better spent going about your business. If that description fits you, please give me a call (It’s free! 877.482.0735) and let’s chat.

Measuring customer satisfaction, measuring service quality, training people to serve well – that’s what we spend our time doing so that our clients can spend their time doing what they do well – their business.

Creative Commons photo courtesy of Flickr and Shyald

Starting a QA Program: A Brief Case Study

My team and I are call coaching this week with a client who operates two small call centers with 15-20 Customer Service Representatives (CSRs) in each. Two years ago they had no Quality Assessment (QA) program. They knew they needed to do something to improve their service delivery and customer satisfaction, but didn’t know where to start. They called our group and asked for help.

It’s an interesting case study.

We started with a team-based Service Quality Assessment (SQA). We developed a QA scale utilizing customer satisfaction research to help pinpoint key drivers of customer satisfaction which we then translated into specific, weighted behaviors on the scale. We captured a statistically meaningful sample of calls and analyzed them. The team’s average Overall Service score was 84 out of 100 on their SQA scale. We presented the report to the management team along with group training for the front-line CSRs.
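For illustration, a weighted scale like the one described can be scored with simple arithmetic: the weights sum to 100, and a call's Overall Service score is the sum of the weights for the behaviors the CSR performed. The behavior names and weights below are hypothetical, not our client's actual scale:

```python
# Hypothetical weighted QA scale; weights sum to 100 so a perfect
# call scores 100 on Overall Service.
scale = {
    "personal_greeting": 10,
    "used_customer_name": 15,
    "confirmed_issue": 20,
    "resolved_or_escalated": 35,
    "offered_further_help": 10,
    "courteous_close": 10,
}

def score_call(behaviors_performed):
    """Score one call: sum the weights of the behaviors performed."""
    return sum(scale[b] for b in behaviors_performed)

def team_average(calls):
    """Average Overall Service score across a sample of calls."""
    return sum(score_call(c) for c in calls) / len(calls)

sample = [
    {"personal_greeting", "confirmed_issue",
     "resolved_or_escalated", "courteous_close"},            # scores 75
    {"personal_greeting", "used_customer_name", "confirmed_issue",
     "resolved_or_escalated", "offered_further_help",
     "courteous_close"},                                     # scores 100
]
print(f"Team average: {team_average(sample):.1f}")
```

Because the weights come from customer satisfaction research, a rising average on this scale tracks the behaviors customers have said matter most, not just generic "niceness."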

The plan was to help the client develop their own internal QA team in the first year. Utilizing the data we’d already generated and the momentum we’d already gained from the pilot project, we were to teach a team of internal QA reps to analyze, calibrate and coach calls internally while we continued to provide team-based report cards on a quarterly basis. As the first year wore on, it became clear that the client was not organized and ready to take on the QA function themselves, so we switched gears.

The second half of the year we provided an Individual Service Quality Assessment (ISQA) for each CSR. We sampled five calls a month for three months and gave them a third-quarter interim report along with a call coaching session based on 15 calls. At year-end we provided a full report on the thirty-call sample with a follow-up call coaching session.

After seven months of the on-going assessment and individual coaching, the average Overall Service score had risen from 84 to 89. That may not seem like a huge move of the needle, but it reflected a major cultural shift in the call centers and a solid, positive foundation on which the on-going QA program could be built. The stage was set.

This week we are presenting the third quarter’s worth of Individual Service Quality Assessment data. The CSRs are on-board and motivated to improve. Average Overall Service scores in the last quarter have improved to 93.4. We are witnessing ISQA scores above 95 for the first time. The client is finally ready to start training their people to do some of the QA work internally.

Most important, research reveals a solid improvement in Customer Satisfaction when customers call. Customers have noticed the improvement and are reporting higher levels of satisfaction.

So, what are the implications from this brief case study?

  • A Quality Assessment program can be methodically and positively ramped up in a one to two year period so that it is set for long-term success while having immediate impact on service quality and customer satisfaction.
  • You don’t have to analyze and coach a huge number of calls to see positive improvements in service delivery and customer satisfaction. In this case, we sampled 10-15 calls a quarter and coached each CSR once a quarter. The result was a nearly 10-point improvement in average Overall Service scores, accompanied by an improvement in Customer Satisfaction.
  • You can experience improvement in service quality and customer satisfaction by having a third-party provide QA and call coaching. You don’t have to do it yourself.

The positives of using third-party to help you start a QA program:
1. You learn from those who have experience and expertise
2. You don’t risk as much in your own time, energy and resources
3. You can easily fire the third party if it isn’t working (not always true of hired FTEs)
4. You don’t depend on internal reps to get QA done when their plate is already full
5. You get improved service and customer satisfaction for a reasonable investment
6. It gets done

If you’ve been wanting to start an internal quality monitoring process but haven’t been sure how to do it, please drop me an e-mail. We might be able to help. If we can’t, I’ll try to refer you to someone who can. Either way, you’ve got nothing to lose and the potential for quality gains.

QAQnA Top 10 Posts from the Past Two Years

In celebration of two years of blogging, here from the home office in Des Moines, Iowa, are the All-Time Top Ten Posts from QAQnA:

1. The Geek Squad Posts
2. Ten Things Your Customers Don’t Want to Hear
3. Internal Customers are Still Customers
4. Successful Calibration Basics
5. Upselling Basics
6. World-Class Service: The Greeting
7. Your Calls Can Be Monitored to Ensure Service Quality
8. Zero Tolerance QA Elements
9. World-Class Service: Managing “Dead Air”
10. Pros & Cons of 3rd Party QA

Consistency Lends Comfort to Customers

Customers like to know what they are getting – and they like to know that they are getting something good when they interact with your company.

I was talking to one of my team mates yesterday about a report he was working on. In the spirit of practicing what we preach and of continuous improvement, we were discussing some positive changes we could make to the monthly Service Quality Assessment report for our client. When you report data on a regular basis, it’s easy for the data, the charts, the graphs, and the issues to feel stale after a while. We were discussing ways to bring a fresh approach to the report for the client. At the same time, I don’t want to make a complete visual overhaul, either.

One of the reasons for having a quality program, for monitoring your customer’s experience, for coaching your employees towards a common service delivery is that you create a customer experience that the customer can count on. The customer feels safe interacting with your company. They know what they’re going to get.

Nothing destroys customer loyalty like an inconsistent experience. If I call your company today and get your best CSR on the phone, I’m going to walk away feeling GREAT. I might even give you some positive word-of-mouth to my friends, family and colleagues. But now the bar has been set. I have an EXPECTATION of the service I’m going to receive when I call again.

Now I call your company back and get one of your worst CSRs on the phone. Not only have I had a poor experience, but you have DISAPPOINTED me by falling far below my expectations. That’s a double-ding to my customer satisfaction.

Raising your service delivery to a consistent, World-class level requires: