Apologies (Part 1) – The Issue

Apologies. In the years that I’ve been training and coaching call center Customer Service Representatives (CSRs), no single service element has created more emotional reaction from the front line, more controversy in calibration, or more frustration in training and coaching sessions. In fact, for that very reason, I’ve delayed posting about it and have been mulling it over.

There’s an article in this week’s Harvard Business School newsletter by Barbara Kellerman that prompted me to get off the dime and talk about apologies. Barbara writes:

We have more anecdotal evidence than hard data on what exactly apologies accomplish. Yet academic research conducted so far does suggest that leaders are prone to overestimate the costs of apologies and underestimate the benefits.

My experience would agree with her summation of the anecdotal evidence. People have strong aversions to apologizing. I have been in training sessions where reps raised their voices (one guy stood up – shouting at me) in protest. In coaching sessions people will cross their arms in obstinate refusal.

So, why such strong reaction?

  • People equate an apology with an admission of guilt. For some, this is a long-held belief that is usually rooted in their family and/or culture. To say the words “I’m sorry” or “I apologize” is the same as saying “I’m personally responsible for you not receiving your order.”  I have even surmised that, with some individuals, an apology is intertwined with their religious ideas regarding sin, guilt and confession.
  • People believe that apologizing puts them in a position of weakness. If you start with the supposition that an apology is an admission of guilt, then you’ll tend to believe that the admission will put you at a disadvantage with the customer. People think that apologizing leads to the customer saying, “Aha! I gotcha!” In fact, I’ve heard isolated cases of companies making a policy of never apologizing because they believe it puts them in a poor legal position.
  • People have difficulty separating themselves from their role as a corporate representative. To the customer you are Acme Anvils – you are Widgets-R-Us. I’ve found that those who struggle with apologizing often have difficulty making this distinction. They believe that apologizing makes them personally culpable for the customer’s problem. They can’t seem to get to the place of understanding that, to the customer, they are simply a corporate representative expressing regret that the customer’s expectations have not been met.

Apologies, when appropriately understood and delivered, are an essential element of world-class customer service. Yet they are one of the most neglected of all service skills. Our Service Quality Assessment typically finds that apologies are missed well over 50 percent of the time they would apply. No other service skill we’ve measured is so consistently ignored.

It’s important for managers, trainers, supervisors and coaches to understand why their front-line agents may struggle with apologizing. We must foster an appropriate understanding of apologies and how they can benefit the customer, the agent and the company.

Next post: Apologies (Part 2) – Definition


Jazz and the Art of Quality Assessment

Phil Gerbyshak listed a great link today in his weblog to a post at Presentation Zen which uses quotes from famous Jazz musicians to discuss keys to successful presentations. One of the quotes was from the legendary Jazz bassist Charles Mingus:

Anyone can make the simple complicated. Creativity is making the complicated simple.

The thought struck me that this is true of most QA scorecards. I’ve witnessed so many well-intentioned managers devising all sorts of convoluted ways to analyze and score a call. Their methodology is so opaque that it takes a definition document the size of Moby Dick just to figure out how to score a single element. Then you get it scored and go to calibration, only to find that you’re in for a debate reminiscent of one of those cable news shows where people firmly entrenched on opposite sides scream at each other.

Scoring a call works best when the QA methodology is very simple:

  • What specific behavior are you looking for from the Customer Service Representative (CSR)?
  • Was this specific behavior applicable to the call in question?
  • If it was applicable, did the CSR do it?
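
To make the contrast with the Moby-Dick approach concrete, here’s a minimal sketch of what a scorecard this simple can reduce to in code. The element names and the yes/no/n-a values are hypothetical, purely for illustration:

```python
# A minimal sketch of a simple QA scoring model. Element names and the
# "yes"/"no"/"n/a" values are hypothetical, for illustration only.

def score_call(observations):
    """observations maps each scorecard element to 'yes', 'no', or 'n/a'."""
    applicable = {e: v for e, v in observations.items() if v != "n/a"}
    if not applicable:
        return None  # nothing on the form applied to this call
    hits = sum(1 for v in applicable.values() if v == "yes")
    return 100.0 * hits / len(applicable)

# Example: the apology applied and was missed; transfer etiquette didn't apply.
print(score_call({"greeting": "yes", "apology": "no",
                  "resolution": "yes", "transfer_etiquette": "n/a"}))  # ~66.7
```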

What Mingus was talking about is really just applying the K.I.S.S. method, whether it’s Jazz or QA. That’s cool, man. Very hep.


Eeny-meeny-miny-moach, Which Call Do I Choose to Coach?

I was shadowing several call coaches today as part of a call coach mentoring program for one of our clients. It was interesting to watch these coaches select the calls they were going to analyze. Most often, the coach quickly dismissed any call shorter than two minutes and any call longer than five minutes, gravitating to calls between three and five minutes in length. The assumption was that any call under two minutes had no value for coaching purposes. Longer calls, they admitted, were dismissed simply because they didn’t want to take the time to listen to them. Unfortunately, this is a common practice. There are a couple of problems with this approach:

  • You are not getting a truly random sample of the agent’s performance. If you are simply coaching an occasional call, this may not be a major issue. If you are using the results for bonuses, performance management or incentive pay, then your sampling process may put you at risk.
  • You are ignoring real “moments of truth” in which customers are being impacted. Customers can make critical decisions about your company in thirty-second calls and thirty-minute calls. Avoiding these calls is turning a blind eye to what may be very critical interactions between customers and CSRs.
  • You may be missing out on valuable data. Short calls often happen because of misdirected calls or other process problems. Quantifying why these are occurring could save you money and improve one-call resolution as well as customer satisfaction. Likewise, longer calls may result from situations that have gone seriously awry for a customer. Digging into the reasons may yield valuable information about problems in the service delivery system.

Capturing and analyzing a truly random sample of phone calls will, in the long run, protect and benefit everyone involved.
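
As a rough sketch of what that looks like in practice, here’s a duration-blind sampler; the (call_id, duration) records are fabricated for illustration:

```python
import random

# Sketch: pick calls uniformly at random, deliberately blind to duration,
# rather than filtering to the "comfortable" three-to-five-minute band.

def sample_calls(call_log, n, seed=None):
    rng = random.Random(seed)
    return rng.sample(call_log, min(n, len(call_log)))

# Fabricated call log: (call_id, duration in seconds), 20s to 30 minutes.
random.seed(7)
call_log = [(i, random.randint(20, 1800)) for i in range(500)]
for call_id, duration in sample_calls(call_log, 5, seed=42):
    print(call_id, duration)
```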



Too Many Call Coaches Spoil the Calibration

I’m often asked to sit in on clients’ calibration sessions. Whenever I walk into the room and find 20 people sitting there, I silently scream inside and start looking for the nearest exit. It’s going to be a long, frustrating meeting. Each person you add to a calibration session dramatically increases the amount of time you’ll spend in unproductive wrangling and debate.

QA scales are a lot like the law. No matter how well you draft it, no matter how detailed your definition document is, you’re going to have to interpret it in light of many different customer service situations. There’s a reason why our legal system allows for one voice to argue each side and a small number of people to make a decision. Can you imagine the chaos if every court case was open for large-scale, public debate and a popular vote?

One of the principles I’ve learned is that calibration is most efficient and productive with a small group of people (four or five, max). If you have multiple call centers or a much larger QA staff, then I recommend that calibration have some sort of hierarchy. Have a small group of decision makers begin the process by calibrating, interpreting and making decisions. If necessary, that small group can then hold subsequent sessions with the broader group of coaches (in equally small groups) to listen and discuss the interpretation.

Like it or not, business is not a democracy. Putting every QA decision up for a popular vote among the staff often leads to poor decisions that will only have to be hashed, rehashed and altered in the future. Most successful QA programs have strong, yet fair, leaders who are willing to make decisions and drive both efficiency and productivity into the process.


Making Allowances for New CSRs

Many call centers struggle with how to handle new CSRs when it comes to quality assessment. There is more and more pressure to get CSRs out of training and onto the floor. The result is that CSRs are often taking calls before they are fully knowledgeable, and there’s going to be a period of time when they struggle to deliver the level of service expected by the QA scorecard. So, what do you do?

First, you always want to be objective. Communicate the QA standard or expectation and score it accordingly. If they missed an element, mark it down. If it’s on the form, then you should always score it appropriately.

The customer doesn’t care that the CSR is new – they have the same expectations no matter who picks up the phone. Giving the CSR credit and simply “coaching” her on it will ultimately do a disservice to everyone involved. It tends to undermine the objectivity, validity and credibility of the QA program.

To sum it up, let your “yes be yes” and your “no be no.” It does, however, make sense to give new agents a nesting period to get up to speed. Rather than dumbing down the scale or pretending that they delivered better service than they actually did, it makes more sense to me to have a grace period. Some call centers will have a graduated performance expectation (e.g., by 60 days your QA scores have to average 85; by 90 days they have to be at 90, etc.). Other call centers will allow new CSRs to drop a set number of QA evaluations from their permanent record to account for the outliers that frequently occur (e.g., “We expect you to perform at an average QA score of 95. I realize that newbie mistakes cost you on this evaluation, but over the first 90 days you get to drop the lowest three QA scores from your permanent record, so this may be one of the three.”). Either one of these strategies allows you to make allowance for rookie mistakes without having to sacrifice your objectivity.
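
Here’s a minimal sketch of that second strategy, the drop-the-lowest rule; the scores and the threshold are invented for illustration:

```python
# Sketch of the "drop your lowest N evaluations" grace-period rule.
# Scores are invented; the rule and threshold would come from your program.

def grace_period_average(scores, drop_lowest=3):
    kept = sorted(scores)[drop_lowest:]
    return sum(kept) / len(kept) if kept else None

first_90_days = [72, 95, 88, 96, 93, 79, 97, 94]
print(grace_period_average(first_90_days))  # 95.0 once 72, 79 and 88 drop
```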


How Many Calls Should Your QA Analyze?

I spoke a few weeks ago at the LOMA conference on Customer Service. LOMA is a great organization that caters to the insurance and financial services industry, and my workshop was titled “Avoiding Common QA Pitfalls.” I’m always interested in what I learn from these conferences. You get a feeling for the hot issues in call centers.

The question that seemed to raise the most discussion at LOMA was “How many calls should I score and coach per person?” A book could probably be written on the subject, but let me give you a couple of thoughts based on our group’s experience.

Are you using QA results in performance management? If you are, then the question really needs to be, “Do we have enough calls to be statistically valid and hold up to scrutiny?” If you are giving any kind of merit pay, incentives, bonuses or promotions based on QA scores, then you’ll want a valid number. Assuming your QA scorecard has a valid methodology (a big assumption, since most QA scorecards we audit have major problems with their statistical validity), you’ll want at least 30 randomly selected calls. More is great, but there’s a long-standing rule of thumb in statistics that a sample of roughly 30 is where an average starts to behave predictably and a handful of outliers can no longer dominate it. Let me say again, I’m talking minimums here.
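
To see why roughly 30 is a common floor, here’s a sketch of how the margin of error on an average QA score narrows as the sample grows. The scores are simulated, and the 1.96 multiplier assumes a rough 95 percent normal approximation:

```python
import math
import random
import statistics

# Sketch: simulated QA scores (mean ~88, sd ~6) showing how the 95%
# margin of error on the average narrows as sample size grows.
random.seed(1)
scores = [min(100, random.gauss(88, 6)) for _ in range(120)]

for n in (10, 30, 120):
    sample = scores[:n]
    moe = 1.96 * statistics.stdev(sample) / math.sqrt(n)
    print(f"n={n:3d}  mean={statistics.mean(sample):5.1f}  +/- {moe:4.1f}")
```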

The “Wait ’til Mom & Dad are Gone” Syndrome. Many call centers coach each agent religiously once a week. That’s fine from a feedback point of view. But like kids who wait until they see their parents pull out of the driveway to start the party, agents often know that they only have to mind their service until they’ve been coached for the week. After that, all bets are off. Sometimes a seemingly random coaching schedule that keeps agents guessing is a good thing.
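
A schedule with that kind of unpredictability can be as simple as this sketch; the agent names and the weekday draw are hypothetical:

```python
import random

# Sketch: draw each agent's coaching day fresh every week so nobody can
# time their best behavior to a fixed schedule. Names are made up.

WEEKDAYS = ["Mon", "Tue", "Wed", "Thu", "Fri"]

def weekly_coaching_schedule(agents, rng=random):
    return {agent: rng.choice(WEEKDAYS) for agent in agents}

print(weekly_coaching_schedule(["Ana", "Raj", "Lee"]))
```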

It might depend on the agent. In our politically correct world we are conditioned to do the same thing for everybody. Yet, some agents need little feedback or coaching. Score the calls, make sure they’re still stellar, and then let them know their scores and give them their bonus.

Why waste time, energy and money coaching them? That’s like the guy who washes his car every day whether it needs it or not (then parks it diagonally across two spots in the parking lot…I hate that guy!). Seriously, the number of coaching sessions is a separate issue from how many calls you should score to have a valid sample. Spend your coaching energy on the agents who need it most. It even becomes an incentive for some agents who dread the coaching sessions: “Keep your numbers up and you don’t have to be coached as much.”

From the discussions I had with QA managers at the LOMA conference, several were – in my opinion – coaching their people more than necessary. We’ve seen agents greatly improve performance with quarterly and even semi-annual call coaching. Still, that’s not going to be enough for other agents.

There’s the challenge for you – finding out which agent is which and tailoring your QA process to meet each agent’s needs.
