Category: Calibration

Three Ways to Improve Your Quality Program in 2017

It’s still January and everyone is busy implementing goals for 2017. It’s not too late to take a good, long look at your contact center’s quality program with an eye to improving things this year. Here are three thoughts for taking your quality assessment (QA) to a new level.

Reevaluate the Scorecard

Most quality programs hinge on the quality of the criteria by which they measure performance. A few years ago there was a backlash against behavioral measurements (e.g., “Did the agent address the caller by name?”) as companies sought to avoid the calibration headaches and the wrangling over definitions. True to human nature, the pendulum swung to the opposite end of the continuum and scorecards became completely subjective. Multiple behaviors gave way to two or three esoteric questions such as, “Did the agent reflect the brand?”

This shift to the subjective is, of course, fraught with its own problems. You can forget about having any objective data with which to measure agent performance. If your analyst is Moonbeam Nirvana, then you’ll get consistently positive evaluations complete with praise for what Moonbeam believes were your good intentions (and lots of smiley emoticons). If, on the other hand, your analyst is Gerhardt Gestapo, then your performance will always fall short of the ideal and leave you feeling at risk of being written up.

Measuring performance does not have to be that difficult. First, consider what it is that you really desire to accomplish. Do you want to measure compliance or adherence to corporate or regulatory requirements? Do you want to drive customer satisfaction? Do you want to make agents feel better about themselves? Any of these is a defensible position from which to develop criteria, but you should start by being honest about the goal. Most scorecards suffer from misunderstood and/or miscommunicated intentions.

Next, be clear about what you want to hear from your agent in the conversation. Define it so that it can be easily understood, taught, and demonstrated.

Prioritizing is also important. While exhaustive measurement of the interaction can be beneficial, it is also time consuming and may not give you much bang for the investment of time and energy. If your priority is add-on sales, then be honest about your intention of measuring it, define what you want to hear from your agents, and then focus your analysts on listening for those priority items.

Look at Data for Both Agents and Analysts

One of the more frequently missed opportunities to keep your QA process on task is looking at the data on how your analysts actually scored the calls.

Years ago our team was the third-party QA provider for several teams inside a global corporation, while other internal teams managed the job for other locations. There was an initiative to create a hybrid approach that put the internal and external analysts together in sampling and measuring agents across all offices. When we ran the numbers to see how analysts were scoring, however, the internal analysts’ average results were consistently higher than the external analysts’. Our analysis of the analyst data provided the opportunity for some good conversations about the differences in how we were hearing and analyzing the same conversations.

Especially in larger quality operations in which many analysts measure a host of different agents and/or teams, tracking analyst data can provide you with critical insight. When performing audits of different QA programs, it is quite common for our team to find that analysts who also happen to be the team’s supervisor are easily tempted to sacrifice objectivity in an effort to be “kind” to their agents (and make their team’s scores look a little better to the management team). Likewise, we have also seen instances where the data reveal that one analyst is unusually harsh in their analysis of one particular agent (as evidenced by the deviation in scores compared to the mean). Upon digging into the reasons for the discrepancy, it is discovered that there is some personality conflict or bad blood between the two. The analyst, perhaps unwittingly, is using their QA analysis to passive-aggressively attack the agent.

If you’ve never done so, it might be an eye opener to simply run a report of last year’s QA data and sort by analyst. Look for disparities and deviations. The results could give you the blueprint you need to tighten up the objectivity of your entire program.
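If your QA platform can export evaluations to a spreadsheet or CSV, that report takes only a few lines to build. Here is a minimal sketch; the file name and column names (analyst, score) are hypothetical placeholders for whatever your software actually exports:

```python
# Minimal sketch: per-analyst summary from an exported QA evaluation file.
# The file name and columns ("analyst", "score") are hypothetical; substitute
# whatever your QA software actually exports.
import pandas as pd

df = pd.read_csv("qa_evaluations_2016.csv")

overall_mean = df["score"].mean()

by_analyst = (
    df.groupby("analyst")["score"]
      .agg(evaluations="count", avg_score="mean", std_dev="std")
      .assign(vs_overall=lambda t: t["avg_score"] - overall_mean)
      .sort_values("vs_overall", ascending=False)
)

print(f"Overall average score: {overall_mean:.1f}")
print(by_analyst.round(1))
```

Sorting by the deviation from the overall average puts the most lenient and harshest analysts at the top and bottom of the list, which is usually where the conversations need to start.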

Free Yourself from Software Slavery

As a third-party QA provider, our team is by necessity platform agnostic when it comes to recording, playing, and analyzing phone calls. We have used a veritable plethora of software solutions, from the telephony “suites” of tech giants who run the industry like the Great and Powerful Oz to small programs coded for a client by some independent tech geek. They all have their positives and negatives.

Many call recording and QA software “suites” come with built-in scoring and analysis tools. The programmers, however, had to create the framework by which you will analyze the calls and report the data. While some solutions are more flexible than others, I have yet to see one that delivers the flexibility companies truly desire. Most companies end up sacrificing their desire to measure, analyze, and/or report things a certain way because of the constraints inherent in the software. The amazing software that the salesperson said was going to make things so easy becomes an obstacle and a headache. Of course, the software provider will be happy to take more of your money to program a solution for you. I know of one company that, this past year, paid a big telephony vendor six figures to “program a solution” within their own software, only to watch them raise their hands in defeat and walk away (with the client’s money, of course).

Tech companies have, for years, sold companies on expensive promises that their software will do everything they want or need it to do. My experience is that very few, if any, of the companies who lay out the money for these solutions feel that the expensive promises are ever fully realized.

If your call data, analysis, and reporting are not what you want them to be, and if you feel like you’re sacrificing data/reporting quality because the software “doesn’t do that,” then I suggest you consider liberating yourself. If the tool isn’t working, then find a way to utilize a different tool. What is it we want to know? How can we get to that information? What will allow us to crunch the numbers and create the reports we really want? Look into options for exporting all of the data out of your software suite and into a database or an Excel-type program that will allow you to sort and analyze data to get the information you want and need. Our company has always used Excel (sometimes in conjunction with other statistical software) because it’s faster, easier, more powerful, and infinitely more flexible than any packaged QA software we’ve ever tested.
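As one illustration of what that liberation can look like, here is a rough sketch that turns a raw evaluation export into an agent-by-element pass-rate report. The file name and column names (agent, element, result) are hypothetical stand-ins for your own export, and writing the .xlsx file assumes the openpyxl package is installed:

```python
# Minimal sketch: turn a raw evaluation export into the report you actually want.
# The file name and columns ("agent", "element", "result" with values
# "pass"/"fail"/"na") are hypothetical; adjust them to your own export.
import pandas as pd

df = pd.read_csv("qa_export.csv")
scored = df[df["result"] != "na"]            # leave non-applicable items out entirely

# Pass rate (as a percentage) for each agent on each scorecard element.
report = (
    scored.assign(passed=scored["result"].eq("pass").astype(int))
          .pivot_table(index="agent", columns="element",
                       values="passed", aggfunc="mean")
          .mul(100)
          .round(1)
)

report.to_excel("agent_element_pass_rates.xlsx")   # open, sort, and chart in Excel
```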

Continuous improvement is key to business success. Scrutinizing quality criteria, analyst data, and your software constraints are just three simple ways to take a step forward with your quality program. Here’s to making sure that we’re doing things better at the end of 2017 than we were doing at the start!

 

Who QA’s the QA Team?

It’s a classic dilemma. The Quality Assessment (QA) team, whether it’s a supervisor or a separate QA analyst, evaluates calls and coaches Customer Service Reps (CSRs). But how do you know that they are doing a good job with their evaluations and their coaching? Who QA’s the QA team?

The question is a good one, and here are a couple of options to consider:

  • QA Data analysis. At the very least, you should be compiling the data from each supervisor or QA analyst. With a little up-front time spent setting up some tracking on a spreadsheet program, you can, over time, quantify how your QA analysts score. How do the individual analysts compare to the average of the whole? Who is typically high? Who is the strictest? Which elements does this supervisor score more strictly than the rest of the group? The simple tracking of data can tell you a lot about your team and give you the tool you need to help manage them.
  • CSR survey. I hear a lot of people throw this out as an option. While a periodic survey of CSRs to get their take on each QA coach or supervisor can provide insight, you want to be careful how you set this up. If the CSR is going to evaluate the coach after every coaching session, then it puts the coach in an awkward position. You may be creating a scenario in which the coach is more concerned with how the CSR will evaluate him/her than providing an objective analysis. If you’re going to poll your CSR ranks, do so only on a periodic basis. Don’t let them or the coaches know when you’re going to do it. Consider carefully the questions you ask and make sure they will give you useful feedback data.
  • Third-party Assessment. Our team regularly provides a periodic, objective assessment of a call center’s service quality. By having an independent assessment, you can reality test and validate that your own internal process is on-target. You can also get specific, tactical ideas for improving your own internal scorecard.
  • QA Audit. Another way to periodically get a report card on the QA team is through an audit. My team regularly provides this service for clients, as well. Internal audits can be done, though you want to be careful of any internal bias. In an audit, you have a third party evaluate a valid sample of calls that have already been assessed by the supervisor or coach. The auditor becomes the benchmark, and you see where there are deviations in the way analysts evaluate the call. In one recent audit, we found that one particular member of the QA team was more consistent than any other member of the QA and supervisory staff. Nevertheless, there was one element of the scorecard that this QA analyst never scored down (while the element was missed on an average of 20% of phone calls). Just discovering this one “blind spot” helped an already great analyst improve his accuracy and objectivity. (A rough sketch of this kind of benchmark comparison follows this list.)
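If the audit scores and the original analyst scores end up in the same spreadsheet, surfacing those blind spots is straightforward. The sketch below assumes a hypothetical file with columns analyst, element, analyst_score, and auditor_score (1 = credited, 0 = marked down); adjust the names to your own data:

```python
# Minimal sketch: compare an auditor's benchmark scoring with each analyst's
# original scoring, element by element, to surface "blind spots."
# File name and columns ("analyst", "element", "analyst_score", "auditor_score",
# where 1 = credited and 0 = marked down) are hypothetical.
import pandas as pd

audit = pd.read_csv("audit_sample.csv")

gaps = (
    audit.groupby(["analyst", "element"])[["analyst_score", "auditor_score"]]
         .mean()
         .assign(gap=lambda t: t["analyst_score"] - t["auditor_score"])
)

# A large positive gap means the analyst credits an element the auditor marks down.
print(gaps[gaps["gap"].abs() > 0.10].sort_values("gap", ascending=False).round(2))
```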

Any valid attempt you make to track and evaluate the quality of your call analysis is helpful to the entire process. Establishing a method for validating the consistency of your QA team will bring credibility to the process, help silence internal critics, and establish a model of continuous improvement.

If you think our team may be of service in helping you with an objective assessment or audit, please drop me an e-mail. I’d love to discuss it with you.

Managing Appeals & Challenges in QA

A process of appeal. Special thanks to one of our readers, Sarah M., who sent an email asking about the process of a CSR challenging their Quality Assessment (QA) evaluation. Unless you've gone the route of having speech analytics evaluate all of your calls (which has inherent accuracy challenges of its own), your QA process is a human affair. Just as every CSR will fall short of perfection, so will every QA analyst. No matter how well you set up the process to ensure objectivity, mistakes will be made.

Because QA is a human affair, you will also be evaluating individuals who do not respond positively to having their performance questioned or criticized. There are a myriad of reasons for this and I won't bother to delve into that subject. The reality is that some individuals will challenge every evaluation.

So, we have honest mistakes being made, and we have occasional individuals who will systematically challenge every evaluation no matter how objective it is. How do you create a process of appeal that acknowledges and corrects obvious mistakes without bogging down the process in an endless bureaucratic system of appeals, similar to the court system?

Here are a couple of thoughts based on my experience:

  • Decide on an appropriate "Gatekeeper." Front line supervisors, or a similar initial "gatekeeper," are often the key to managing the chaos. There should be a person who hears the initial appeal and either acknowledges an honest mistake, flags a worthy calibration issue, or dismisses the appeal outright. Now we've quickly addressed two possibilities: the honest mistake can be quickly corrected, or the appeal without standing can be quickly dismissed.
  • Formulate an efficient process for appeal. If an appeal is made that requires more discussion, then it needs to go a step further. I have seen many different setups in which this can be handled successfully. The "gatekeeper" might take it to the QA manager for a quick verdict. There might be a portion of regular calibration sessions given to addressing and discussing the issues raised by appeals. Two supervisors might discuss it and, together, render a quick decision.
  • Identify where the buck stops. When it comes to QA, my mantra has always been that "Managers should manage." A process of appeal becomes bogged down like a political process when you try to run it democratically. The entire QA process is more efficient, including the process of appeal, when a capable manager, with an eye to the brand/vision/mission of the company, can be the place where the buck stops.

Those are my two cents worth. What have you found to be key to handling challenges and appeals in your QA program?

Thoughts from the Calibration Trenches

Yesterday was calibration marathon day. Three different calibration sessions with three different teams, with a staff meeting scrunched in between. It's not exactly what most people would consider an enjoyable day at the office. Granted, compared to countless calibration sessions I've endured with many different clients, our calibration sessions are a cakewalk.

Nevertheless, as I was driving home I got a call from one of my teammates struggling with discouragement after the session and we had a great conversation about the calibration process. It got me thinking about some basic lessons I've learned through the years in calibration:

  • Calibration, by its very nature, is a conflictive process. When you try to get a group of people to analyze the same call the same way, there are bound to be disagreements. The calibration session is not focused on the 90-95 percent of the call a team agrees on, but on the handful of things on which they disagree. You have to accept this going in and keep it in perspective. It's always wise to try and bring some levity and laughter to the session. Remind people of all the things that you agreed on which weren't conflictive. Keep the big picture in front of the team.
  • Calibration is often not about who is "right" and who is "wrong" but how we are going to consistently and objectively approach and analyze a given behavior or situation. People will see things differently. Often, I recognize that our team is grappling with multiple, legitimate ways to analyze a given situation. Because a manager or a team decides to do it a particular way does not mean that another person's way of doing it was "wrong"; it just means that someone had to choose the method that works best in that moment. A good manager will regularly encourage his or her team with this fact.
  • A constructive calibration process will not get mired in a singular circumstance, but look for patterns and principles to apply across all calls. Many calibration sessions turn into a war over a small piece of one call. I am always asking myself, "What's the principle we can glean from this discussion that will help us be more consistent in scoring all of our calls?" Our team keeps a "Calibration Monitor" document that summarizes the general principles we discussed in the session, which aids all analysts with future calls.
  • You have to choose your battles. I sometimes feel very strongly about a given situation when I am the lone person in the room who sees it that way. Making my argument and stating my case is only met with blank stares. Despite the tremendous personal effort it takes to let it go, I have learned that it makes no sense to keep arguing. If it is a worthwhile and relevant issue, then I will have another opportunity in future calibration sessions to make my point when more people might see it. If that opportunity never emerges, then I was making a mountain out of a molehill anyway.

Gray Areas Need Managers to Manage

Capable managers are crucial. Every QA scale has gray areas. In many cases, a particular behavior could be handled multiple ways, and none of them is necessarily "right" or "wrong." Take today's calibration with one of our clients, for example. The QA scale called for the Customer Service Representative (CSR) to seek the caller's permission to place him/her on hold and wait for the caller to answer before hitting the hold button. In the call we were analyzing, the CSR didn't technically ask the caller to hold. She pretty much told him she wanted to place him on hold while she got the requested information. The customer, however, responded, "Sure! No problem!"

The group of supervisors and QA analysts was evenly split. Should the CSR be given credit (since the customer did seem to take the statement as a question and responded in the affirmative)? Or should she be dinged (since she didn't phrase it the way the company preferred and should be held accountable to do so)? Either answer was acceptable. It just required a decision.

I believe an effective QA program needs a capable manager (or designated team of decision makers with authority) to manage it.  When the decision is left to a democratic vote of all parties in calibration, my experience is that members of the team who got "out voted" will often choose to continue handling the situation the way they see it. When a manager or superior weighs both sides and makes the decision, however, the members of the team are accountable to a higher authority. The key is to have a manager who can capably weigh and balance the priorities of company, customer and Customer Service Representatives.

Creative Commons photo courtesy of Flickr and aak

If You Give Points for “NA”, It’s Applicable

We’ve had several clients over the years who have created QA scales in which the call analyst can mark that a particular behavior was "Not Applicable", but then the Customer Service Representative (CSR) is given credit for that particular behavior in the calculation of their quality score. In some cases, this is driven by call scoring software that won’t (or won’t easily) run the calculations for non-applicable attributes. Other times, the scorecard was created this way and it was never given much consideration.

Giving credit for "Not Applicable" elements creates problems on different levels. The core issue is that you are diminishing the statistical reliability of your results. For the sake of simplicity, let’s say you have ten elements that are worth ten points each. On a given call, only five of them apply and the CSR missed one. The CSR got four-fifths, or 80 percent, of the applicable elements (40 points out of a possible 50). If we give credit for the five elements that really didn’t apply, the CSR gets 90 percent. That means you have created "noise" in the data. You’re not accurately measuring what the CSR actually did because you’ve given credit for something that wasn’t even a factor in the call.
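Here is a quick sketch of that arithmetic in code; the element names and results are invented purely for illustration:

```python
# Ten elements worth ten points each; five apply to this call and the CSR
# misses one of the five. Element names and results are made up for illustration.
POINTS = 10
elements = [
    ("greeting", "pass"), ("verification", "fail"), ("hold", "na"),
    ("transfer", "na"), ("empathy", "pass"), ("resolution", "pass"),
    ("upsell", "na"), ("recap", "pass"), ("survey_offer", "na"), ("closing", "na"),
]

# Scoring only what applied: 40 of 50 possible points.
applicable = [result for _, result in elements if result != "na"]
earned = sum(POINTS for result in applicable if result == "pass")
print(earned / (POINTS * len(applicable)) * 100)        # 80.0

# Giving credit for "NA" as well: 90 of 100 possible points.
earned_with_na = sum(POINTS for _, result in elements if result in ("pass", "na"))
print(earned_with_na / (POINTS * len(elements)) * 100)  # 90.0
```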

Not only does this create problems for the data, but it can diminish the effectiveness of your quality efforts. We have witnessed many situations in which a CSR consistently gets high quality marks because of all the credit they receive for non-applicable behaviors. CSRs have little motivation or challenge to improve because they figure, "I’m getting great scores. I must be doing all right!" If you took the "noise" out of the data, it would reveal that the CSRs have several key opportunities to improve.

For example, certain behaviors (like Hold elements) may rarely apply. When they do apply, the CSR often misses them. However, because the CSR is credited for 90 percent of calls in which the customer was never placed on hold, the score does not truly reflect their performance on that behavior. The CSR can easily look at a score of 90 percent and think that they are doing just fine when it comes to putting the customer on hold, when the truth is that they missed it every time on the 10 percent of calls to which it applied.

Another problem is raised when senior management attempts to correlate their quality scores and their customer satisfaction numbers. We’ve watched many executives scratch their heads when they continually get reports with QA scores near 100, only to find out that customers aren’t that satisfied with the service received when they call.

When measuring quality, it’s critical to accurately measure only the behaviors which applied in a given interaction! To say that a behavior didn’t apply to the phone call but somehow does apply to the calculation of the overall service experience…well, that just doesn’t compute.

A Simple Way to Start Calibrating

I sat in the office of a call center manager as I shared the results of an audit our group had performed of their quality program. In this case, we had collected a sample of calls that had been scored by the supervisors and scored the same calls using their internal form. Not only were we able to provide a number of procedural recommendations that would improve their quality process, but we were able to unearth which supervisors tended to be unduly harsh or lenient in the analysis of their teams.

As we covered the results for one particular supervisor, the call center manager laughed and looked at the quality manager. In unison, they rolled their eyes. Our audit revealed that the supervisor commonly missed marking down for obvious quality errors. As a result, the quality scores for the team were grossly inflated. Both the call center manager and the quality manager suspected this had been the case.

While calibration can be a very detailed process, there is a very simple way to get started. Begin by comparing the average overall service scores of everyone who analyzes calls on your team. Use a minimum of thirty calls per analyst. Who is high? Who is low? What’s the average? How big is the variance? While this doesn’t give you much detailed information, it can be a road map for your first steps (a rough sketch of this comparison follows the list below).

  • If you suspect that the analyst with the highest average overall service score may be inflating scores, simply pull a few calls and score them yourself. Compare your evaluation with that of the analyst. You’ll notice pretty quickly if things are being missed.
  • Likewise, the person with the lowest average may be marking things more strictly than the rest of the team. Scoring a few of the same calls and comparing your scores to theirs can be very revealing.
  • If the average overall scores are within a few data points, it’s a good sign that your quality form is driving a consistent result.
  • If you have a wide range in average overall service scores, it’s quite possible that you’ve got a lot more work to do.
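If your evaluation scores live in a spreadsheet or CSV export, this first-pass comparison can be as simple as the sketch below. The file name, column names (analyst, overall_score), and thresholds are hypothetical placeholders to adjust to your own data:

```python
# Minimal sketch of the first-pass comparison described above. The file name and
# columns ("analyst", "overall_score") are hypothetical; adjust to your own export.
import pandas as pd

MIN_CALLS = 30        # don't judge an analyst on a handful of evaluations
FLAG_POINTS = 5.0     # distance from the group average worth a closer look

df = pd.read_csv("qa_overall_scores.csv")

summary = df.groupby("analyst")["overall_score"].agg(calls="count", average="mean")
summary = summary[summary["calls"] >= MIN_CALLS]

group_avg = summary["average"].mean()
summary["vs_group"] = summary["average"] - group_avg

print(f"Group average: {group_avg:.1f}")
print(summary[summary["vs_group"].abs() >= FLAG_POINTS]
      .sort_values("vs_group", ascending=False)
      .round(1))
```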

The call center manager with whom I shared our audit results knew that they had issues with their quality process simply by looking at the average overall scores of his supervisors and knowing that the scores couldn’t be an accurate reflection of the service being delivered. By doing a more thorough audit of their quality process, they were able to drive down to parts of the process which were creating the problems and make positive changes. As a result, the entire call center team feels better about the quality process, service quality has improved and customer satisfaction is up!

Creative Commons photo courtesy of Flickr and brtsergio

Are You Producing Results or Just a Number?

Our group recently performed an audit of our client’s internal quality process. In a QA audit, our team typically analyzes a sample of calls which have already been scored by the client’s Quality or Supervisory team. After analyzing the same calls using the client’s internal QA scale, our audit typically pinpoints several improvement opportunities. An audit can reveal:

  • QA analysts or Supervisors who are unduly harsh in their analysis
  • QA analysts or Supervisors who are unduly lenient in their analysis
  • Areas of the QA scale which are creating confusion among analysts and CSRs
  • Elements within the scale which are driving calibration problems
  • Policies or procedures which are undermining the effectiveness of the program

For example, in our recent audit we took a look at the dates and times on the supervisors’ QA reports. It became quickly apparent that most supervisors were waiting until the last possible minute before starting their QA analysis for the month. They then rifled through their assigned calls. Elements were easily missed. The analysis was shoddy and the results were unreliable.

I have witnessed many a call center manager who simply wants a quality report on his or her desk once a month. Typically, they just want a number. I’ve even witnessed call center managers who will say to their teams, "I don’t care how you do it. I just want a report with a ’95’ or better on my desk on the last day of each month." The number is never questioned. The methodology used to derive the number is given no consideration. What’s worse, the question is never asked: "Is the process used to analyze the calls actually having an impact on front-line service?"

If your quality program is about providing a report with a number, perhaps you should print the same report each month and stop wasting everyone’s time.

Creative Commons photo courtesy of Flickr and Leo Reynolds

Calibration is a Challenging Process

It was an interesting webinar last Thursday with our friends from Avtex. We tried something new. We took seven willing participants, all from different companies, for a mock, role-play calibration. They listened to a mock call, scored it using the same criteria, and then we got together during the webinar to role-play an actual calibration call.

If people were hoping to hear the ideal, sanitized, perfect calibration, they were sorely disappointed. What took place was the very picture of a real calibration call, complete with people having technical difficulties and arriving late, the calibration going much longer than expected, and people having to bow out to get to other meetings.

What I hope people did take away is that there are some key principles that help make calibrations more effective:

  • QA and calibration are a marathon, not a sprint. You don’t have to solve all your QA problems in one session – just make sure you’re moving forward. Slow and steady wins the race.
  • Someone needs to closely manage the process to keep things on track, make needed decisions, and make sure that there is follow up.
  • When disagreements arise, you should consider the business mission, customer expectation and CSR realities.
  • Always think about what decisions you can make right now and what issues require more discussion and calibration. Don’t be afraid to make the call when it needs to be made, and table discussions that will require more time/consideration.

Thanks to those who participated and joined in! If you were there, I’d be interested in knowing your thoughts and observations!

Giving Credit Where Credit is Due

One of the classic questions facing call center managers is "Who should do QA?" Do you have the supervisor do it, or do you have a team of people who do nothing but QA? There is no easy answer, and there are pros and cons to both choices.

One of the struggles facing supervisors is keeping management issues outside of a phone call from influencing their call analysis and coaching. If my CSR is a chronic problem on the team with regard to attendance, attitude, and productivity, it’s easy to be hyper-critical of his call. Call analysis and coaching can easily become a convenient vehicle of discipline, and objectivity is lost. That’s where calibration becomes a huge part of the checks and balances that keep call analysts honest.

Call analysts, no matter who they are, must be honest and objective. If the CSR did a great job, then you’ve got to give credit where credit is due.

Speaking of which, I have to be honest about my business travel experiences last week. I’ve been very critical of United Airlines and the decline of service I’ve chronicled over the past few years. Last week was the most pleasant, on-time set of business trips I’ve had in years. It was capped off by a dear gate agent (I can’t remember her name) who went the extra mile to get me on the last seat of an early flight home. As I approached the gate as the last passengers were getting on, she told me to go ahead and she would take care of the paperwork later. It was clearly an inconvenience for her, but she saved me a leg and got me home early to be at my daughter’s end-of-the-year choir concert. Well done. One of my clients also commented that a recent business trip on United had been the most pleasant he’d ever experienced. We’ll see if the trend continues on my flights this week and next. I hope this points to a new, customer-centered wind blowing in the friendly skies.

Creative Commons photo courtesy of Flickr and bcorreira