Category: QA Methodology

Making Allowances for New CSRs

Many call centers struggle with how to handle new CSRs when it comes to quality assessment. There is ever more pressure to get CSRs out of training and onto the floor. The result is that CSRs are often taking calls before they are fully knowledgeable, and there will be a period of time when they struggle to deliver the level of service expected by the QA scorecard. So, what do you do?
First, you always want to be objective. Communicate the QA standard or expectation and score it accordingly. If they missed an element – mark it down. If it's on the form, then you should always score it appropriately.
The customer doesn’t care that the CSR is new – they have the same expectations no matter who picks up the phone. Giving the CSR credit and simply “coaching” her on it will ultimately do a disservice to everyone involved. It tends to undermine the objectivity, validity and credibility of the QA program.
To sum it up, let your “yes be yes” and your “no be no.” It does, however, make sense to give new agents a nesting period to get up to speed. Rather than dumbing down the scale or pretending that they delivered better service than they actually did, it makes more sense to me to have a grace period. Some call centers set a graduated performance expectation (e.g. by 60 days your QA scores have to average 85; by 90 days they have to be at 90, etc.). Other call centers will allow new CSRs to drop a set number of QA evaluations from their permanent record to account for the outliers that frequently occur (e.g. “We expect you to perform at an average QA score of 95. I realize that newbie mistakes cost you on this evaluation, but over the first 90 days you get to drop the lowest three QA scores from your permanent record, so this may be one of the three.”). Either of these strategies allows you to make allowance for rookie mistakes without having to sacrifice your objectivity.
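The drop-the-lowest-scores policy is easy to implement in whatever spreadsheet or QA tool you use. Here is a minimal sketch in Python; the function name, the three-evaluation allowance, and the sample scores are all hypothetical illustrations of the policy described above, not a real QA system's API.

```python
# Sketch: a new CSR's QA average with a grace-period allowance.
# The "drop the lowest three" policy is an example from the post;
# the function name and score data are made up for illustration.

def grace_period_average(scores, drop_lowest=3):
    """Average QA scores after dropping the lowest `drop_lowest` evaluations."""
    if len(scores) <= drop_lowest:
        return None  # not enough evaluations yet to apply the allowance
    kept = sorted(scores)[drop_lowest:]
    return sum(kept) / len(kept)

raw = [72, 88, 95, 97, 80, 96, 98, 94]      # eight evaluations in the first 90 days
adjusted = grace_period_average(raw)

print(round(sum(raw) / len(raw), 1))  # raw average, including rookie mistakes
print(round(adjusted, 1))             # average with the lowest three dropped
```

Note that the scorecard itself is untouched: every call is still scored objectively, and only the rollup into the permanent record applies the allowance.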


Understanding QA Rules and Exceptions

Most QA elements are based on general guidelines for calls or e-mails in a given contact center. Some elements are fairly common as they address the vast majority of customer service or sales interactions. There are, however, exceptions to almost every customer service rule. There will always be those outlier circumstances that don’t seem to fit in the normal communication flow. The problem comes when people want to make the rules based on the exceptions:
  • Conversationally use the customer’s name. “But, this one time, a customer got angry because I mispronounced his name – so I never use the caller’s name. Don’t want to make that mistake again.”
  • Apologize if something has not met the customer’s expectations. “But, this one time, I had a customer who told me, ‘I don’t want your apology’ – so I’ve never apologized to a customer again.”
  • Give customers a time frame for when you’ll get back to them with an answer. “But I never know when I’m going to hear back from accounting, so I can’t give a time frame because I might not meet it.”

It’s important to keep “rules” and “exceptions” in balance. Don’t make rules based on occasional exceptions. Base the elements of your QA scorecard on the general rules that apply to the vast majority of your calls. If an “exceptional” situation arises, you can deal with it on a situation-by-situation basis. For example, if the customer’s name was fifteen syllables long and difficult to pronounce, you would mark “use the customer’s name” not applicable for that particular call and talk to the CSR about how to handle those situations in the future.

There are other ways to deal with exceptional calls and situations within calls. My point is simply that, when coaching CSRs, you have to continually communicate your understanding of exceptional situations and your willingness to treat those situations in a just and fair manner. I try to be equally rigorous in communicating the message that the QA elements can be easily performed in the vast majority of contacts and that they are expected to be.

How Many Calls Can You Analyze Before Your Brain Turns to Mush?

I had a ton of calls to score this week. We had a deadline for one of our clients and a perfect storm of circumstances conspired to leave us with a host of calls that had to be analyzed over the past two weeks. You know how it is: no matter how hard you try to schedule things out so that life is nice and even and manageable, daily life has countless ways to screw up your best intentions. So, I spent much of my week chained to the computer, slogging through a pile of calls.
The thing about doing a good job with quality assessment and call scoring is that it takes what it takes. If you’re going to be objective and give a quality analysis, you have to give each call the time it requires. If you listen to a call once through and think you heard it all, you’re probably wrong. Trying to keep tabs on what you’re hearing, while remembering what the CSR just missed and, at the same time, scrolling to the right place to mark the scoring tool – your mind can’t possibly catch everything with one listen. Besides, you can only analyze and score a certain number of calls before they all start bleeding together in your mind and you forget what you just heard – or you think you heard something but you’re really thinking of the call you scored previously. Arrrrrgghhh. I’ve learned that I need to limit the number of calls I score at one time and then take a call scoring sabbath.
Because I’m dealing with different clients and different types of calls, the number can vary from client to client. I generally won’t score more than a few calls at a time before taking a mental break. It doesn’t have to be a long mental break. I might just get up to grab a bite of food or a drink and let my thoughts wander. I might listen to some music while checking my e-mail or take a minute to check out a few links or the latest blog posts on my weblog list. Whatever it is, sometimes you need a mental break so your brain doesn’t turn to mush. I always go back to my call scoring with just a little more energy and clarity.


Generating Sales at the Expense of Service, Satisfaction & Loyalty

There was a post in the Customer Service Reader that discussed declining Customer Satisfaction in the retail sector. Claes Fornell of the National Quality Research Center attributes the decline to companies pushing their staff to generate sales at the expense of service: “Too much pressure on staff to generate sales can have a detrimental effect on the quality of service that the staff is able to provide, which, in turn, has a negative effect on repeat buying. Since many retailers measure and manage productivity, but don’t usually have good measures of the quality of customer service [emphasis added], it seems possible that some companies put too much emphasis on productivity at the expense of service.”

We have been seeing this trend in call centers recently. We’ve seen a manager alter the weighting of his team’s QA scale so that the upselling component counted for over one-third of the CSR’s Overall Service score. The push for cross-selling and up-selling is on the rise, and companies are not always weighing the long-term effects that this can have on customer satisfaction and loyalty. Up-selling and cross-selling can be tremendous tools for revenue generation, but it is critical that companies measure their customers’ willingness to hear these offers. Even with customers who are open to hearing these offers, it is important that a customer’s issues and questions be resolved with exemplary soft skills before the offer is made. Without the resolution and soft skill components delivered prior to the sales pitch, the sales efforts will not be as effective and may serve to erode customer satisfaction and loyalty.


How Many Calls Should Your QA Analyze?

I spoke a few weeks ago at the LOMA conference on Customer Service. LOMA is a great organization that caters to the insurance and financial services industry, and my workshop was on “Avoiding Common QA Pitfalls.” I’m always interested in what I learn from these conferences. You get a feeling for the hot issues in call centers.

The question that seemed to raise the most discussion at LOMA was “How many calls should I score and coach per person?” A book could probably be written on the subject, but let me give you a couple of thoughts based on our group’s experience.

Are you using QA results in performance management? If you are, then the question really needs to be, “Do we have enough calls to be statistically valid and hold up to scrutiny?” If you are giving any kind of merit pay, incentives, bonuses or promotions based on QA scores, then you’ll want a valid number. Assuming your QA scorecard has a valid methodology (which is a big assumption, given that most QA scorecards we audit have major problems with their statistical validity), you’ll want at least 30 randomly selected calls. More is better, but there is a rough rule of thumb in statistics that once a sample reaches 30, it’s large enough that a handful of outliers won’t dominate the average. Let me say again, I’m talking minimums here.
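Random selection matters as much as the sample size: if supervisors hand-pick which calls get scored, a 30-call sample won't hold up to scrutiny. A minimal sketch of drawing the sample, assuming you have a list of call IDs per agent for the period (the IDs, function name, and seed here are made up for illustration):

```python
# Sketch: drawing a random, defensible sample of calls to score for one agent.
# The 30-call minimum is the rule of thumb discussed above; the call IDs
# are hypothetical.
import random

def sample_calls(call_ids, n=30, seed=None):
    """Return n randomly selected call IDs (or all of them if fewer exist)."""
    rng = random.Random(seed)
    if len(call_ids) <= n:
        return list(call_ids)
    return rng.sample(call_ids, n)  # sampling without replacement

agent_calls = [f"call-{i:04d}" for i in range(1, 501)]  # 500 calls this period
monthly_sample = sample_calls(agent_calls, n=30, seed=42)
print(len(monthly_sample))  # 30
```

Passing a fixed seed makes the draw reproducible if the selection is ever challenged; omit it for a fresh random draw each period.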

The “Wait ’til Mom & Dad are Gone” Syndrome. Many call centers coach each agent religiously once a week. That’s fine from a feedback point-of-view. But like kids who wait until they see their parents pull out of the driveway to start the party, agents often know that they only have to watch their service until they’ve been coached for the week. After that, all bets are off. Sometimes a seemingly random coaching schedule that keeps agents guessing is a good thing.

It might depend on the agent. In our politically correct world we are conditioned to do the same thing for everybody. Yet, some agents need little feedback or coaching. Score the calls, make sure they’re still stellar, and then let them know their scores and give them their bonus.

Why waste time, energy and money coaching them? That’s like the guy who washes his car every day whether it needs it or not (then parks it diagonally across two spots in the parking lot…I hate that guy!). Seriously, the number of coaching sessions is a separate issue from how many calls you should score to have a valid sample. Spend your coaching energy on the agents who need it most. It even becomes an incentive for some agents who dread the coaching sessions: “Keep your numbers up and you don’t have to be coached as much.”

From the discussion I had with some QA managers at the LOMA conference, there were several who – in my opinion – were coaching their people more than was necessary. We’ve seen agents greatly improve performance with quarterly and even semi-annual call coaching. Still, that’s not going to be enough for other agents.

There’s the challenge for you – finding out which agent is which and tailoring your QA process to meet each agent’s needs.


Who are You Satisfying with Your QA Scorecard?

I’m always struck by the mixture of motivations underlying many call center QA scorecards. Companies love to give lip service to delivering excellent customer service and improving customer satisfaction. Their QA scale, however, may reflect the designs of a cost-sensitive management team (driven by lowering costs with no regard to impact on the customer) or a sales-driven management team (driven by increasing sales with no regard to impact on the customer).

This mixture of messages frustrates front-line agents who see the hypocrisy: “You say you believe in customer service but all I’m told in QA is to keep it short or push the cross-sell.” It also frustrates the QA coach who must try to justify or explain the obvious, mixed message. Of course, it is possible to have a balanced methodology in which you satisfy customers and look for ways to be efficient and opportunistic. The key is to make sure the customer is not left out of the QA equation.

If you really care about keeping your customers coming back, you should start your entire QA program with a valid, objective customer satisfaction survey. The results can give you the data you need to impact customer satisfaction and retention.

Find out what is really driving your customer’s satisfaction and loyalty. Then use that information in building and weighting your QA score card. In fact, some of our surveys have measured the customer’s willingness to hear up-sells and cross-sells in a customer service interaction. The results are often surprisingly positive, and the data can be a powerful tool in building buy-in among the front-lines for your sales drive.

Oh, and by the way, it’s possible that your company already does customer sat research and you’ve never seen it. Just the other day we provided a call center manager with a copy of a survey our group had done for his boss a few years ago. He was never aware that it had been done and had not been given access to the information, even though it was critical for driving tactical decisions in his call center. I wish that this was an isolated incident, but my gut tells me it happens more often than not. It may be worth it to ask around. Of course, trying to decipher the data in many customer sat surveys we’ve seen can be a mind-numbing task – but that rant will have to wait for another post!


The Secret of this Team’s Success…

I scored a lot of calls today and it was really satisfying. The calls were fantastic. I mean, these calls were really World-class. I began working with this client years ago. They had no quality program in place. They had never monitored a call or coached their agents on service quality. Actually, when we began they could be described as decent. You might have said that they were very good – above average, even. That’s the thing. It’s one thing to help a customer service team who knows they’re bad. I think it’s a tougher job to take a team who’s doing well and motivate them to excellence.

This team is a good study in some of the keys to developing a consistent, world-class delivery:

  1. A management team that’s committed for the long haul. This team had the same manager for several years. He was committed to developing a culture of quality and had the support of his superiors. No matter how much the front-line railed against the program or how wishy-washy the front-line supervisors may have been at times, the consistent message and commitment to quality have always been there.
  2. Outlast the critics. The QA program has not always been popular among the ranks. As is true whenever you start a quality program, there are plenty of crusty veterans who have been used to having free rein to do and say whatever they desire. Over the years, the naysayers on this team were quietly faced with three choices: get on board, retire or find another job. There are few of them left.
  3. Set a high expectation for new hires. This team has had turnover – like all call centers. This team implemented a new hire orientation training in which it’s clearly communicated that quality service and exemplary phone skills are mandatory.
  4. Individual accountability. The program for this client began by measuring and reporting team-based results. This was great to get the process started and to get front-line buy-in. You can only get so far with team-based reporting, however. This team let their program evolve until every team member received regular, individual feedback. Their QA scores are now a significant part of their annual performance review.
  5. Have fun rewarding performance. Through the years, this team has done a mixture of incentives. One year there were quarterly team rewards like going bowling for an hour at the end of the work day, taking a limo out for ice cream or having lunch in the board room. One popular incentive cost almost nothing – it allowed agents to throw a pie in their supervisor’s face. Another year, each agent who achieved a certain quality score got his/her name in a drawing for a major prize (in the $1,500 range). Perhaps the most motivating reward I’ve witnessed, however, comes from this team’s senior manager. He sends an e-mail or voice mail to every agent who achieves World-class QA scores and thanks them for their efforts.

I hate to think how many thousands of phone calls I’ve scored from this team over the years. But listening to their calls today and hearing the difference…it feels pretty good.