Category: Calibration

Don’t Fear Call Calibration! Free Webinar in April!

Do your calibration sessions have you ducking for cover? Are you so worried about what calibration might unearth that you’re afraid to even start calibrating your QA team?

Calibration is a painful experience for many companies. Calibration experiences are often swapped between call center veterans like war stories in a V.F.W. hall. Nevertheless, that Calibration Session on your Outlook calendar doesn’t have to produce fear and trepidation.

Avtex has asked me to present a webinar on Successful Calibration Basics. The webinar is free and scheduled for April 17 from 10-11 a.m. CDT and April 22 from 2-3 p.m. CDT. The webinar will cover why calibration is so crucial to the success of QA, different ways you can calibrate and how to run a successful calibration session.

Space is limited, so register for April 17th or register for April 22nd!

QAQnA Top 10 Posts from the Past Two Years

In celebration of two years of blogging, here from the home office in Des Moines, Iowa, are the All-Time Top Ten Posts from QAQnA:

  1. The Geek Squad Posts
  2. Ten Things Your Customers Don’t Want to Hear
  3. Internal Customers are Still Customers
  4. Successful Calibration Basics
  5. Upselling Basics
  6. World-Class Service: The Greeting
  7. Your Calls Can Be Monitored to Ensure Service Quality
  8. Zero Tolerance QA Elements
  9. World-Class Service: Managing “Dead Air”
  10. Pros & Cons of 3rd Party QA

“DANGER WILL ROBINSON!”

I was in a training session this morning and a front-line CSR questioned one of the elements in the QA scale. "This doesn’t make sense to me," he said. "I do it because I know I’ll be scored down if I don’t – but I don’t believe that I’m doing right by the customer."

DING!-DING!-DING!-DING! DANGER WILL ROBINSON!

Do you hear that? Whenever you hear a CSR say, "I do it just to get the points", that’s the sound of alarms going off telling you that there’s an issue that needs to be addressed. You want your QA scale to "make sense" and to promote behaviors that "do right by the customer". If your CSRs are questioning either, then it’s time to revisit the scale or to have a discussion with the agent, because several possibilities exist:

  • The QA scale could, indeed, be promoting a behavior that you didn’t intend. If you have a hard time justifying the element with your CSRs, then you need to ask why it’s in the scale in the first place. If it doesn’t make sense, delete it. If it makes sense but needs clarification, then reword it. The scale is not the Constitution and it’s not Holy Scripture. You need to be able to change and refine it as necessary (though changes should be made in a timely, ordered manner – too many changes too often only create confusion and frustration).
  • There could be a calibration issue with the supervisors or QA analysts. You might have people interpreting and scoring the same element different ways. The fact that the CSR raised the alarm creates a great opportunity to meet with your QA team, review the element, and discuss how it’s being scored and coached.
  • The CSR may simply misunderstand the intent of that particular scale element and need clarification. Pull the agent aside and explain why the element is in the scale, what behavior is intended, and what it accomplishes for the customer.
  • The CSR might understand why the element is there, but they just don’t like doing it and do it "just to get the points". I find this happens with certain things like apologies or offering to help with other needs. In this case, a begrudging adherence is acceptable, if regrettable. I often find that certain behaviors begin with a CSR’s begrudging adherence, but eventually – as the agent builds a habit – they begin to understand and ultimately become critical of others who don’t do it.

When CSRs struggle with the scale, it’s easy for management to dismiss it as bad attitude or complaining. But the CSR’s struggle provides great accountability and an opportunity for the process, and each person involved, to get better.

Creative Commons photo courtesy of Flickr and drp.

Definition Documents Aren’t Always the Answer

Many QA teams go through the process of "simplifying" their QA form by reducing the number of items on the form, generalizing the items, then creating a "definition document" that expands and defines what is meant by each item. I have seen definition documents that have become expansive tomes reminiscent of the U.S. tax code.

Much like the tax code, the simplification process and resulting encyclopedia of definitions started out with the best of intentions. Nevertheless, the following problems often result:

  • Using the definition document becomes cumbersome, so analysts or supervisors score calls using their own gut understanding of what each item means rather than consulting the definition document. Elements are thus missed or each scorer analyzes calls a bit differently based on their recollection of the criteria outlined in the definition document. Objectivity is lost.
  • The QA team often spends long periods of time in various debates arguing each "jot and tittle" of the definition document. Because the creation of a QA court system to "interpret the law" is generally frowned upon by management, the QA team continues to mire itself in calibration. Productivity and efficiency are lost.
  • Scorers who actually use the definition document spend significant amounts of time poring over the document, checking, double-checking, cross-referencing, and mentally debating their analysis. Productivity and efficiency are lost.

While replacing the U.S. tax code with a simple flat tax is an economic pipe dream, it’s not too late to reconsider your QA options. We’ve witnessed some QA teams that have trashed the confusing definition document and turned to well-defined behavioral elements scored using a binary (yes/no) methodology, which can make scoring simpler, more efficient, and more productive for your entire team.
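
For illustration only, here is a minimal sketch of what binary scoring can look like, assuming an equal-weight form; the element names are hypothetical, not from any particular client’s scale:

```python
# Hypothetical binary (yes/no) QA scorecard; element names are illustrative only.

def score_call(results):
    """Score a call as the percentage of applicable behaviors observed.

    `results` maps each behavioral element to True (observed),
    False (missed), or None (not applicable on this call).
    """
    applicable = {k: v for k, v in results.items() if v is not None}
    if not applicable:
        return None  # nothing to score on this call
    observed = sum(1 for v in applicable.values() if v)
    return round(100 * observed / len(applicable), 1)

call = {
    "Used the customer's name": True,
    "Said 'please' when requesting information": True,
    "Apologized for the unmet expectation": False,
    "Offered to help with other needs": None,  # no opportunity on this call
}
print(score_call(call))  # 66.7
```

The scorer answers only "did it happen or not" for each element, which is what keeps the analysis objective and fast.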

Creative Commons photo courtesy of Flickr and Hibri

Measuring 101 Percent

We have, quite often, run into companies that measure their CSRs on a Quality Assessment (QA) scale that goes to 100 percent. They have all these people getting 100s on their assessments, and then the QA analyst or supervisor has a call where the CSR did a great job. They feel that "100" doesn’t quite capture how good this call really was, so they decide to add something to the scale so that the CSR can get more than 100 percent if they exhibit this or that "extra" behavior.

While I appreciate the intent of the management team to reward exemplary behavior, there are a few issues with this approach:

  • If most of your CSRs are getting 100s and you feel like you have to give "extra credit" to an exemplary call, then your scale is likely a measure of mediocrity. You have many CSRs who, by their nature, will never give you more than 100. "I got 100. That’s good enough. As long as I get 100 I’m happy." If scoring 100 is common, then your scale isn’t challenging people to reach for excellence.
  • If a behavior is worth rewarding, then it should be in your scale and set as an expectation for all your CSRs when it applies. Don’t make it an optional "extra" for those who want to do it, make it the expectation that every CSR should do it when it’s in their power and ability to do so.
  • There’s nothing wrong with setting a high standard. It’s tempting to lower the standard in order to make people "feel good", but feeling "good" is often a mask for feeling "complacent". Setting a high standard means that your CSRs are continuously challenged to improve and those who reach the high standard can honestly feel a sense of accomplishment.

The result of a high standard is that people will score 100 less frequently. That’s okay. You simply have to help your team understand the context of the score.
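
To make that concrete, here is a minimal sketch (with hypothetical element names and weights) of folding the would-be "extra credit" behavior into the standard scale, so that 100 remains the ceiling and the behavior becomes an expectation rather than optional bonus points:

```python
# Illustrative only: the "bonus" behavior becomes a standard, weighted element,
# so no call can score above 100. Element names and weights are hypothetical.

standard_elements = {
    "Proper greeting": 20,
    "Used the customer's name": 20,
    "Resolved the issue": 40,
    "Offered further assistance": 20,  # formerly the "extra credit" behavior
}
assert sum(standard_elements.values()) == 100  # 100 stays the ceiling

def score(call_results):
    """Sum the weights of the elements the CSR actually demonstrated."""
    return sum(weight for element, weight in standard_elements.items()
               if call_results.get(element, False))

# Hitting everything still tops out at 100; skipping the "offer further
# assistance" behavior now costs points instead of merely forgoing extra credit.
print(score({"Proper greeting": True, "Used the customer's name": True,
             "Resolved the issue": True, "Offered further assistance": True}))  # 100
print(score({"Proper greeting": True, "Used the customer's name": True,
             "Resolved the issue": True}))  # 80
```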

Customer Expectation is a Difficult Way to Judge Calls

Through the years we’ve had a few clients who asked their QA analysts to use a scale that judged call quality this way (example based on an actual QA scale):

Element: CSR used the customer’s name in the call.
(choose one)

  • Exceeded Customer Expectations
  • Met Customer Expectations
  • Fell Below Customer Expectations
  • Needs improvement

While I applaud the intent of this approach (it asks the analyst to consider the customer’s point-of-view), there are several problems that will undermine the objectivity and effectiveness of the QA process:

  1. Do you really know what your customer expects? Asking the analyst to judge based on customer expectation assumes that each analyst understands what those expectations are. We’ve seen this approach used, but when we ask how the client arrived at determining just what the customer’s expectations are, there is no supporting data. (typical response: "It’s just common sense. We’re all customers.") If there’s not supporting data then each analyst is left to decide what "meets" or "exceeds" expectations. Because each analyst may have different thoughts or opinions, your resulting scores will be all over the map.
  2. Is exceeding customer expectation always important? It might seem obvious, but it is actually a faulty notion to believe that exceeding customer expectation in every area is necessary to improve customer satisfaction. Certain service dimensions are what researchers call "penalty variables". If you exceed the customer’s expectation in that dimension of service, you don’t receive a proportionate boost in customer satisfaction (you will be penalized if you fall below customer expectation, however). It doesn’t pay to exceed customer expectation in that particular area. Other dimensions are "reward variables", in which case you will be rewarded for exceeding customer expectation. The above scale presumes that every area of the phone call is a reward variable. This is not the case.
  3. Can you objectively differentiate between meeting and exceeding expectations on any given behavior? In the example given, how do you differentiate between "met expectations" and "exceeded expectations" when using the customer’s name? Do you give a number of times the name must be used (e.g. use the name three times in the call)? But, what if the call is only 30 seconds long? Does using the caller’s first or last name count differently? It’s difficult to make a consistent, objective choice based on what the customer’s expectations may (or may not) be.
  4. In almost every case we’ve seen this approach used, there are two negative options. In the scale I cited above, the QA analyst was asked to choose between "fell below customer expectations" and "needs improvement". If, however, the scale is based on the notion that exceeding expectations is what you want, then wouldn’t anything below "met expectations" mean that it needs improvement? How is an analyst to decide that an element was below expectation, yet it doesn’t need to be improved?

Creating a QA scale can be an arduous task, but there are principles that, when followed, will allow you to maximize objectivity and make your scale both effective and efficient.
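
One of those principles is to define each element as an observable, yes/no behavior rather than a judgment about expectations. As a purely hypothetical rewrite of the name-usage element above (the criterion and code are illustrative, not a prescription for any specific form):

```python
# Hypothetical rewrite of the name-usage element as an observable yes/no check
# instead of a "met vs. exceeded expectations" judgment. Illustrative only.

def used_customer_name(transcript, customer_name):
    """Return True if the CSR addressed the customer by name at any point.

    The criterion is deliberately observable: the name either appears in the
    CSR's side of the conversation or it doesn't, so two analysts reviewing
    the same call should reach the same answer.
    """
    csr_lines = [line for speaker, line in transcript if speaker == "CSR"]
    return any(customer_name.lower() in line.lower() for line in csr_lines)

transcript = [
    ("CSR", "Thank you for calling. How can I help you today?"),
    ("Customer", "Hi, this is Jane Smith. I have a billing question."),
    ("CSR", "I'd be happy to look into that for you, Ms. Smith."),
]
print(used_customer_name(transcript, "Smith"))  # True
```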

Measuring Quality Across Multiple Call Centers

Larger corporations often face the dilemma of having multiple contact centers representing multiple divisions. The problem is that each contact center may have its own quality scale, quality metrics, and quality team. When senior management wants to know how their contact centers are doing, there is no comparable data. In addition, there is little meaning in the numbers provided ("Great. So, what does 94.6 mean?") or confidence that the methodology by which the data has been gathered is reliable ("Okay, I know this contact center has some major issues, but according to the quality scores they’re better than Disney!").

This is a situation where a well-focused, third party quality assessment can provide maximum benefit. Our group has approached this dilemma with our clients by:

  • Measuring all contact centers on the same criteria
  • Utilizing a sound sampling, measurement and reporting methodology
  • Analyzing calls with an objective, customer’s point-of-view approach
  • Providing actionable data that outlines key improvement opportunities for each contact center and the company as a whole.

This type of assessment not only provides an "apples-to-apples" comparison of service across multiple contact centers, but also unearths policy and procedural issues impeding service and diminishing the customer experience. Internal QA functions are often so focused on individual CSR behavior that they fail to identify these process improvement opportunities.
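
As a rough sketch of what that "apples-to-apples" reporting can look like once every center is measured on the same criteria (the centers, elements, and percentages below are invented purely for illustration):

```python
# Invented numbers, purely for illustration: with every contact center scored
# on the same criteria, per-element results can be laid side by side.

results = {
    # center: {element: percent of sampled calls where the behavior was observed}
    "Billing center": {"Used customer's name": 82, "Offered further help": 54},
    "Tech support":   {"Used customer's name": 91, "Offered further help": 73},
    "Order desk":     {"Used customer's name": 68, "Offered further help": 61},
}
elements = ["Used customer's name", "Offered further help"]

# A simple comparison table plus a company-wide average for each element.
print(f"{'Center':<16}" + "".join(f"{e:>24}" for e in elements))
for center, scores in results.items():
    print(f"{center:<16}" + "".join(f"{scores[e]:>24}" for e in elements))
averages = {e: sum(c[e] for c in results.values()) / len(results) for e in elements}
print(f"{'Company average':<16}" + "".join(f"{averages[e]:>24.1f}" for e in elements))
```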

Creative Commons photo courtesy of Flickr and Brandon King.

It’s So Simple! Why Don’t We Do It?!

When the best sprinters in the world meet in competition, the difference between getting a gold medal and being an "also-ran" can be the difference of tenths or even hundredths of a second. Think about the seemingly insignificant details that can make a hundredth of a second difference.

Great service providers are like world-class athletes. The difference between good customer service and world-class, gold-medal-winning service is often in the small details of a call done consistently and done well. The small details that can push a company into that "world-class" range are usually very simple things like using the customer’s name, consistently using "please" when requesting information, apologizing for unmet expectations, or offering to help with other needs before closing the call. Executives and managers often bang their heads against a table when their teams fail to do well in these "simple" behaviors, uttering "Why can’t we do this?!?!"

Here are a few reasons:

  • We don’t communicate "why" this detail is important. Often, the customer’s expectations are left out of the equation when communicating what we expect from the front-line – leaving the CSR thinking that this is just a silly management requirement. Even when data is available about what the customer expects, it is rarely well-communicated to the front-line.
  • Internal QA teams are often reluctant to focus on these "details", fearing that they will seem nit-picky and that it will create conflict with CSRs. They choose not to mark the CSR down and decide just to "coach them on it". This sends the message that it’s not really that important, and CSRs have no motivation or incentive to change their behavior.
  • In the day-to-day pressure cooker of a call center environment, it’s easy for people to get into a "good enough" mentality. "We’re answering the phone. We’re resolving issues. It’s good enough."
  • Behavior change requires conscious effort, and many people aren’t willing to do it.

Great managers are like great coaches. They set a high standard of excellence, they continually coach to that standard, they accept nothing less than that standard – and they find ways to both inspire and motivate their teams.

Flickr Creative Commons photo courtesy of Don Andre

Be Wary of “Yeah, but…”s in Call Scoring

One of the foundations of successful QA is objectivity. When a supervisor or QA scorer is analyzing a call, it’s imperative that she or he objectively measures what did or did not take place in that particular moment of truth with the customer. Too often, call scorers are given to relativity in their analysis rather than objectivity.

Through years of calibrating with different QA and supervisory teams, I’ve learned to be wary of two words…

"Yeah, but…"

These two words usually precede an excuse for behavior that damages objectivity. Here are the three most common "Yeah, but…"s I hear:

  • "Yeah, but…the CSR was really having a bad day." (the customer doesn’t care)
  • "Yeah, but…this was an improvement over what this CSR used to be like."(the customer doesn’t care)
  • "Yeah, but…this CSR is new." (the customer doesn’t care)

While each of the above statements may be true, it doesn’t change the fact that the customer received a negative service experience. The goal of QA should be to build consistency in service delivery no matter who is delivering that service – no matter what the circumstance. Once you start excusing behavior and giving CSRs a "pass" for a "yeah, but…." – you’ve crossed a line that brings the validity of every call evaluation into question.

Do you have any "yeah, but…"s you’ve heard in your own call center that you’d like to share? Click on the "comment" link at the bottom of this post and share it with us. (E-mail subscribers: Click on the title in this e-mail, which will take you to the web-page of this particular post. Once there, you can scroll down and leave a comment!)

Does Calibration Make You Want to ‘Duck & Cover’?

Let’s be honest. Calibration isn’t always the most pleasant experience in the QA process. In fact, I’ve found it quite common for people to prepare for calibration sessions like they’re going to battle. Once the session starts they feel like ducking for cover, and they leave the session feeling like they’ve gone 15 rounds with George Foreman! Nevertheless, calibration is necessary – and when it’s done properly, it does produce more consistent and objective QA results. And it doesn’t have to be contentious!

Take one of our clients, for example. When our group began calibrating with their QA team a couple of years ago it was an absolutely brutal experience for various reasons. Their QA scale wasn’t as objective as it could have been, there were far too many people involved, there was a leadership vacuum, and the entire QA process was cobbled together without discipline. It’s taken some time, but the team has slowly applied successful calibration basics.

What have been the results?

  • Calibration sessions take half the time.
  • The sessions are more constructive.
  • Managers are – well – managing.
  • In the past year, the average score differential dropped from 11% to 2% (and 11% was an improvement over previous years!).
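
There’s more than one way to compute such a differential; the sketch below assumes one common definition, averaging the spread between the highest and lowest score given to the same call across the calibration session (the scores are invented):

```python
# Illustrative sketch of an average score differential. The formula (highest
# minus lowest score per call, averaged over calls) is an assumption; teams
# define and track this differently.

calibration_calls = [
    # scores given by each participant to the same call
    [88, 90, 87, 91],
    [76, 79, 78, 77],
    [93, 92, 95, 94],
]

def average_differential(calls):
    """Average the high-minus-low spread over all calibrated calls."""
    return sum(max(scores) - min(scores) for scores in calls) / len(calls)

print(f"{average_differential(calibration_calls):.1f}%")  # 3.3%
```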

If you’re going to do QA right – then you’ve got to do calibration right. Be encouraged, and come on out from under the desk. It can be done!