Category: Call Center Issues

“Not Applicable” is Definitely Applicable

When auditing a quality assessment scale or QA scorecard in call centers, I commonly find that there’s no allowance given for an element to be “not applicable” (NA). For those experienced in quality assessment, this may seem like basic common sense, but my experience has proven that it is a frequently overlooked element when scoring or analyzing phone calls. If you’re not already doing so, here’s why you should immediately alter your methodology to include an “NA” option:
  • Because it’s accurate. Typically, when the NA option is not given, the Customer Service Representative (CSR) is given credit for the element even though it doesn’t apply. So, the resulting score doesn’t accurately reflect what happened in the call. Some elements truly aren’t relevant on a given call. If your QA program is going to have integrity, it needs to accurately reflect what actually happened on a phone call. If an element wasn’t a factor in the phone call, it shouldn’t be a factor in the score.
  • Because it’s fair. Some CSRs would argue that it’s not fair (especially if they’re used to receiving falsely inflated scores), but the NA option is fair because only the elements that do apply had an impact on the customer’s satisfaction on that call. It’s fair that you are held accountable only for the elements that were relevant to the call, no more and no less.
  • Because it raises the level of accountability. Let’s consider a hypothetical. Say you had twenty elements on your QA scorecard and, on a certain call, only ten of them really applied. (I feel like I’m writing a story problem.) The CSR missed two of the ten applicable elements. Without the NA option, the CSR gets credit for all ten non-applicable elements, so the result looks like he missed two out of twenty (90%). If you take out the ten elements that didn’t factor into the call, he now has eight out of ten (80%). Which is more accurate?
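The arithmetic in that hypothetical can be sketched in a few lines of code. This is just an illustration; the element results below are invented to match the story problem (eight met, two missed, ten not applicable):

```python
# Element results: True = met, False = missed, None = not applicable.
results = [True] * 8 + [False] * 2 + [None] * 10

# Without an NA option, non-applicable elements get free credit:
score_without_na = sum(r is not False for r in results) / len(results)

# With an NA option, non-applicable elements leave the denominator:
applicable = [r for r in results if r is not None]
score_with_na = sum(applicable) / len(applicable)

print(f"Without NA: {score_without_na:.0%}")  # 90%
print(f"With NA:    {score_with_na:.0%}")     # 80%
```

The only thing the NA option changes is the denominator, and that single change is the difference between a 90% and an 80% on the same call.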

When the NA option is not given, it’s common to find poorly performing CSRs resting on their laurels, confident that they are doing well when their scores don’t reflect their true performance.

It’s vital that you make the “not applicable” option applicable in your scoring methodology!


Eeny-meeny-miny-moach, Which Call Do I Choose to Coach?

I was shadowing several call coaches today as part of a call coach mentoring program for one of our clients. It was interesting to watch these coaches select the calls they were going to analyze. Most often, the coach quickly dismissed any call shorter than two minutes and any call longer than five minutes, gravitating to a call between three and five minutes in length. The assumption was that any call less than two minutes had no value for coaching purposes. Longer calls were dismissed, the coaches admitted, because they didn’t want to take the time to listen to them. Unfortunately, this is a common practice. There are a couple of problems with this approach:

  • You are not getting a truly random sample of the agent’s performance. If you are simply coaching an occasional call, it may not really be a major issue. If you are using the results for bonuses, performance management or incentive pay, then your sampling process may put you at risk.
  • You are ignoring real “moments of truth” in which customers are being impacted. Customers can make critical decisions about your company in thirty-second calls and thirty-minute calls. Avoiding these calls is turning a blind eye to what may be very critical interactions between customers and CSRs.
  • You may be missing out on valuable data. Short calls often happen because of misdirected calls or other process problems. Quantifying why these are occurring could save you money and improve one-call resolution as well as customer satisfaction. Likewise, longer calls may result from situations that have seriously gone awry for a customer. Digging into the reasons may yield valuable information about problems in the service delivery system.

Capturing and analyzing a truly random sample of phone calls will, in the long run, protect and benefit everyone involved.



Too Many Call Coaches Spoil the Calibration

I’m often asked to sit in on clients’ calibration sessions. Whenever I walk into the room and find 20 people sitting there, I silently scream inside and start looking for the nearest exit. It’s going to be a long, frustrating meeting. Each person you add to a calibration session exponentially increases the amount of time you’ll spend in unproductive wrangling and debate.

QA scales are a lot like the law. No matter how well you draft it, no matter how detailed your definition document is, you’re going to have to interpret it in light of many different customer service situations. There’s a reason why our legal system allows for one voice to argue each side and a small number of people to make a decision. Can you imagine the chaos if every court case was open for large-scale, public debate and a popular vote?

One of the principles I’ve learned is that calibration is most efficient and productive with a small group of people (four or five max). If you have multiple call centers or a much larger QA staff, then I recommend that calibration have some sort of hierarchy. Have a small group of decision makers begin the process by calibrating, interpreting and making decisions. If necessary, that small group can then hold subsequent sessions with a broader group of coaches (in equally small groups) to listen and discuss the interpretation.

Like it or not, business is not a democracy. Putting every QA decision up for a popular vote among the staff often leads to poor decisions that will only have to be hashed, rehashed and altered in the future. Most successful QA programs have strong, yet fair, leaders who are willing to make decisions and drive both efficiency and productivity into the process.


Buyer Beware! QA Software Considerations

It has become vogue for call centers to have the latest, greatest software for monitoring and scoring phone calls. For most companies, the decision to purchase one of these products is no small consideration. These software options can be a major investment running well into six figures on just the initial capital outlay. I’ve worked with various call centers that have used products from different software vendors. My suggestion is that you take your time and give plenty of consideration before making an investment in software. A couple of thoughts:

  • Software is only a tool; you still have to know how to use it. You wouldn’t purchase bookkeeping software and expect it to make you financially solvent. In the same way, you can’t expect that having one of these software products is going to make you an expert in call quality assessment. Unfortunately, I’ve watched companies spend a lot of money on software with the expectation that they’ll simply turn it on and have instant, successful QA. Most of the time, there is a large hidden cost in manpower, time and resources just to figure out how you’re going to use it and program the software with your own QA metrics.
  • Slide shows and slick sales presentations are no substitute for a real-life demonstration. Just last week a client told me how angry they were with their QA software vendor. The client had asked the vendor for a “hands-on” demonstration of the software update on which they were spending a considerable sum of money. The vendor flew in (at the client’s expense!) with nothing but a handful of slides and screen shots. The client was angry, and the vendor maintained a “you’ll get what we give you and like it” mentality.
  • Get good references. I asked one of our clients what she thought of the QA software her company had purchased a few years ago. “How do I like it?” she repeated, incredulously looking around the room. “Do you see anyone from the software vendor around here helping me? They’re not here helping me, you’re the one here helping me! How do you think I feel about them?” I wish her experience was isolated, but it’s not. It is not uncommon for contact centers to feel that they were courted by a vendor who disappeared after they said, “I do.” They spent hundreds of thousands of dollars on software that you can’t just return with a receipt, only to find themselves in an unhappy marriage to the vendor.
  • Software experts are not necessarily QA experts. One of our clients was told by their software vendor that, if they wanted to purchase a certain add-on module, they must also pay for the vendor’s experts to help them with their QA scale. They were not given a choice, and the resulting QA scale, in our opinion, was a muddled, statistically invalid mess. Programming software to capture audio and data isn’t the same as measuring and analyzing the data that’s captured.
  • Beware of the money-pit. I remember a Looney Tunes animated short where Daffy Duck is a salesman demonstrating all these great home-improvement technologies to Porky Pig. He keeps warning Porky not to push the red button on the control panel. When Porky gives in to temptation and pushes the forbidden red button, his house is lifted thousands of feet in the air on a hydraulic lift. Daffy comes by in a helicopter and says, “For a small fee, you can buy the blue button to get you down!” It’s a similar experience for clients who have purchased QA software. You spend a ton of money on this product, you get it installed and integrated with your phone system – now you’re stuck with it. When it doesn’t quite do what you want it to, the software company will tell you they’ll be happy to turn on that feature – for a not-so-small fee.

Don’t get me wrong, I do believe these powerful software tools can be invaluable in helping you efficiently manage your QA program. In most cases, they actually make my job easier, so I don’t generally have a problem with them. It’s just that I’ve witnessed a lot of frustration from my clients. I would encourage anyone to do their homework, check references, and count the cost (not just the initial cost of the software, but the cost of developing internal QA expertise, additional licenses, frequent updates, and program downtime waiting for the vendor to provide after-the-sale service).


Making Allowances for New CSRs

Many call centers struggle with how to handle new CSRs when it comes to quality assessment. There is more and more pressure to get CSRs out of training and onto the floor. The result is that CSRs are often taking calls before they are fully knowledgeable, and there’s going to be a period of time when they struggle to deliver the level of service expected by the QA scorecard. So, what do you do?
First, you always want to be objective. Communicate the QA standard or expectation and score it accordingly. If they missed an element – mark it down. If it’s on the form then you should always score it appropriately.
The customer doesn’t care that the CSR is new – they have the same expectations no matter who picks up the phone. Giving the CSR credit and simply “coaching” her on it will ultimately do a disservice to everyone involved. It tends to undermine the objectivity, validity and credibility of the QA program.
To sum it up, let your “yes be yes” and your “no be no.” It does, however, make sense to give new agents a nesting period to get up to speed. Rather than dumbing down the scale or pretending that they delivered better service than they actually did, it makes more sense to me to have a grace period. Some call centers will have a graduated performance expectation (e.g. by 60 days your QA scores have to average 85; by 90 days they have to be at 90, etc.). Other call centers will allow new CSRs to drop a set number of QA evaluations from their permanent record to account for the outliers that frequently occur (e.g. “We expect you to perform at an average QA score of 95. I realize that newbie mistakes cost you on this evaluation, but over the first 90 days you get to drop the lowest three QA scores from your permanent record, so this may be one of the three.”). Either one of these strategies allows you to make allowance for rookie mistakes without having to sacrifice your objectivity.
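The “drop the lowest scores” grace period is easy to implement mechanically. Here’s a minimal sketch; the score values are invented for illustration, and the drop-three rule mirrors the 90-day example above:

```python
def average_with_grace(scores, drop_lowest=3):
    """Average QA scores after dropping the lowest `drop_lowest` evaluations.

    If the agent has too few evaluations to drop any, average them all.
    """
    if len(scores) <= drop_lowest:
        kept = scores
    else:
        kept = sorted(scores)[drop_lowest:]  # discard the lowest outliers
    return sum(kept) / len(kept)

# A hypothetical rookie's first 90 days: solid calls plus a few newbie stumbles.
rookie_scores = [95, 97, 70, 96, 65, 94, 98, 60]
print(average_with_grace(rookie_scores))  # 96.0
```

Note that every call is still scored honestly; only the rookie’s rolled-up average is softened, which is the whole point of the approach.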


Understanding QA Rules and Exceptions

Most QA elements are based on general guidelines for calls or e-mails in a given contact center. Some elements are fairly common as they address the vast majority of customer service or sales interactions. There are, however, exceptions to almost every customer service rule. There will always be those outlier circumstances that don’t seem to fit in the normal communication flow. The problem comes when people want to make the rules based on the exceptions:
  • Conversationally use the customer’s name. “But, this one time, a customer got angry because I mispronounced his name – so I never use the caller’s name. Don’t want to make that mistake again.”
  • Apologize if something has not met the customer’s expectations. “But, this one time, I had a customer who told me, ‘I don’t want your apology’ – so I’ve never apologized to a customer again.”
  • Give customers a time frame for when you’ll get back to them with an answer. “But I never know when I’m going to hear back from accounting, so I can’t give a time frame because I might not meet it.”

It’s important to keep “rules” and “exceptions” in balance. Don’t make rules based on occasional exceptions. Base the elements of your QA scorecard on the general rules that apply to the vast majority of your calls. If an “exceptional” situation arises, you can deal with it on a situation-by-situation basis. For example, if the customer’s name was fifteen syllables long and difficult to pronounce, you would mark “use the customer’s name” not applicable for that particular call and talk to the CSR about how to handle those situations in the future.

There are other ways to deal with exceptional calls and situations within calls. My point is simply that, when coaching CSRs, you have to continually communicate your understanding of exceptional situations and your willingness to treat those situations in a just and fair manner. I try to be equally rigorous in communicating the message that the QA elements can be easily performed in the vast majority of contacts and it is expected that they will.

Generating Sales at the Expense of Service, Satisfaction & Loyalty

There was a post in the Customer Service Reader that discussed declining Customer Satisfaction in the retail sector. Claes Fornell of the National Quality Research Center attributes the decline to companies pushing their staff to generate sales at the expense of service: “Too much pressure on staff to generate sales can have a detrimental effect on the quality of service that the staff is able to provide, which, in turn, has a negative effect on repeat buying. Since many retailers measure and manage productivity, but don’t usually have good measures of the quality of customer service [emphasis added], it seems possible that some companies put too much emphasis on productivity at the expense of service.”

We have been seeing this trend in call centers recently. We’ve seen a manager alter the weighting of his team’s QA scale so that the upselling component counted for over one-third of the CSR’s Overall Service score. The push for cross-selling and up-selling is on the rise, and companies are not always weighing the long-term effects that this can have on customer satisfaction and loyalty. Up-selling and cross-selling can be tremendous tools for revenue generation, but it is critical that companies measure their customer’s willingness to hear these offers. Even with customers who are open to hearing these offers, it is important that a customer’s issues and questions be resolved with exemplary soft skills before the offer is made. Without the resolution and soft skill components delivered prior to the sales pitch, the sales efforts will not be as effective and may serve to erode customer satisfaction and loyalty.
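To see how dramatically a re-weighting like the one that manager made can tilt an Overall Service score, here’s a minimal sketch. The component names, scores, and weights are all invented for illustration; only the “upselling counts for over one-third” idea comes from the example above:

```python
def overall_score(components, weights):
    """Weighted average of component scores; weights are assumed to sum to 1.0."""
    return sum(components[name] * weights[name] for name in components)

# A hypothetical call: excellent resolution and soft skills, weak upsell attempt.
components = {"resolution": 95, "soft_skills": 90, "upselling": 40}

balanced = {"resolution": 0.45, "soft_skills": 0.45, "upselling": 0.10}
upsell_heavy = {"resolution": 0.33, "soft_skills": 0.32, "upselling": 0.35}

print(round(overall_score(components, balanced), 2))      # 87.25
print(round(overall_score(components, upsell_heavy), 2))  # 74.15
```

The same call drops more than a dozen points under the upsell-heavy weighting, which shows how a weighting decision quietly redefines what “good service” means on the scorecard.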


Combat the Excuses of a Monotone CSR!

It’s a classic coaching situation. The CSR sounds like a monotone robot on Valium (kind of like Marvin the Paranoid Android in Hitchhiker’s Guide to the Galaxy for you sci-fi fans). You beg, you cajole, you implore the CSR to put a little inflection and enthusiasm in his voice. They usually give one of two excuses. Either they were having a bad day and couldn’t help it, or they can’t do it – “it’s just not me.” When I hear that, I always give the CSR this example:

How Many Calls Should Your QA Analyze?

I spoke a few weeks ago at the LOMA conference on Customer Service. LOMA is a great organization that caters to the insurance and financial services industry, and my workshop was about “Avoiding Common QA Pitfalls.” I’m always interested in what I learn from these conferences. You get a feeling for the hot issues in call centers.

The question that seemed to raise the most discussion at LOMA was “How many calls should I score and coach per person?” A book could probably be written on the subject, but let me give you a couple of thoughts based on our group’s experience.

Are you using QA results in performance management? If you are, then the question really needs to be, “do we have enough calls to be statistically valid and hold up to scrutiny?” If you are giving any kind of merit pay, incentives, bonuses or promotions based on QA scores, then you’ll want a valid number. Assuming your QA scorecard has a valid methodology (which is a big assumption, based on the fact that most QA scorecards we audit have major problems with their statistical validity), you’ll want at least 30 randomly selected calls. More is great, but there’s a common rule of thumb in statistics that once you have 30 or more observations, the sample is large enough that a few outliers won’t dominate the results. Let me say again, I’m talking minimums here.
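Drawing that sample is simple once you have a list of recorded calls to pull from. Here’s a minimal sketch; the call IDs, the function name, and the per-agent framing are all hypothetical, standing in for whatever your recording system provides:

```python
import random

def monthly_sample(call_ids, n=30, seed=None):
    """Randomly select n calls for QA scoring (all of them if fewer than n exist)."""
    rng = random.Random(seed)  # seedable for a reproducible audit trail
    if len(call_ids) <= n:
        return list(call_ids)
    return rng.sample(call_ids, n)  # sampling without replacement

# One agent's hypothetical month of 250 recorded calls.
calls = [f"call-{i:04d}" for i in range(250)]
sample = monthly_sample(calls, n=30, seed=42)
print(len(sample))  # 30
```

The key point is that every call, thirty seconds or thirty minutes, has an equal chance of selection; no filtering by call length before the draw.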

The “Wait ’til Mom & Dad are Gone” Syndrome. Many call centers coach each agent religiously once a week. That’s fine from a feedback point-of-view. But like kids who wait until they see their parents pull out of the driveway to start the party, agents often know that they only have to watch their service until they’ve been coached for the week. After that, all bets are off. Sometimes a seemingly random coaching schedule that keeps agents guessing is a good thing.

It might depend on the agent. In our politically correct world we are conditioned to do the same thing for everybody. Yet, some agents need little feedback or coaching. Score the calls, make sure they’re still stellar, and then let them know their scores and give them their bonus.

Why waste time, energy and money coaching them? That’s like the guy who washes his car every day whether it needs it or not (then parks it diagonally across two spots in the parking lot…I hate that guy!). Seriously, the number of coaching sessions is a separate issue from how many calls you should score to have a valid sample. Spend your coaching energy on agents who need it the most. It even becomes an incentive for some agents who dread the coaching sessions: “Keep your numbers up and you don’t have to be coached as much.”

From the discussion I had with some QA managers at the LOMA conference, there were several who – in my opinion – were coaching their people more than was necessary. We’ve seen agents greatly improve performance with quarterly and even semi-annual call coaching. Still, that’s not going to be enough for other agents.

There’s the challenge for you – finding out which agent is which and tailoring your QA process to meet each agent’s needs.


Cracking Call Coaching’s Hard Nuts

It was a classic moment. It was my first call coaching session with an agent who provided a service/inside sales function for his company. He came in, shut the door and exploded:

“I just want to say right now that this whole thing is a bunch of [expletive]. You don’t know my [expletive] job and there’s no [expletive] way in [expletive] you will make any difference in what I do.”

Great. Have a seat. Let’s get started. I was thinking to myself “with that attitude, I might just have to agree with the last bit of what you just said.”

These are the coaching sessions we dread and with good reason. Most call coaches are well-intentioned people who really want to see their team succeed, their customers satisfied, and their charges improve. Then there are people like this guy who can make the job a nightmare.

To be honest, there are people who I’ve coached through the years that simply were not teachable. They were angry and frustrated in life, they were not a good fit for their jobs, and the best move for them would be to another position. I believe there are nuts you won’t crack.

Yet, there are hard nuts you can crack. With people like the agent I just described, I’ve been able to succeed by finding out what really motivates them. I listen to them. I make small talk. I try to observe what it is that the person really wants. With some people it’s recognition, so I find the slightest improvement and hold them up before their peers/supervisor for their accomplishment. When I go from critic to fan in their eyes, their attitude changes. Others need to have a stake in the process. They want to lead. So, I make them their team’s “quality captain” and watch them go from critic to cheerleader. The guy I described above was motivated simply by greed. I found it kind of sad, but after listening to him rant for a while I said something like this:

“Look, I know you don’t believe in this whole process but give me a chance here. I know what your customers want (we did the research). I can help you to deliver service that will make your customers love you. If they love you they will want to do more business with you. If they give you more business then you’re going to be more successful. You’ll exceed your sales goals and you’ll make more money.”

BINGO! He wasn’t an instant believer, but I at least had his interest. He’s still a pain to coach at times, but it’s gotten better. He’s actually improved and begun to employ the skills I’ve coached. The guy will never become a raving fan and will never admit that the QA process helped him. His pride won’t let him. That’s okay. We both know it’s true.