Tag: Call Center

Who Knew Siri Can Coach Your Employees, Too?!


We posted just last week about the disappointing realities two of our clients experienced compared to the bright promises on which they'd been sold speech analytics technology. In both cases they were sold on the idea of speech analytics replacing their human QA programs by analyzing every call and flagging the calls in which there were problems. Both clients found that the technology took a much greater investment of time and resources than anticipated just to make it work at a basic level. The results were equally disappointing, requiring still more time and resources just to sort through the many false positives the software flagged.

It was with great interest, then, that I received an MIT Technology Review article from a former co-worker this week. The article reports on what the writers claim is the latest technology trend, offered by Cogito, to revolutionize contact centers. Apparently speech analytics has been so successful and popular at accurately analyzing customer conversations that the technology experts now want to sell technology to do call coaching as well. Who knew that Siri could now offer us sage advice on how to communicate more effectively and connect more emotionally with our customers? By the way, according to their marketing, they think their technology might help you with your marriage, too.

I have noted over the years just how much big technology drives our industry. Go to any Contact Center Conference and look at who is paying big bucks, commanding the show floor, introducing the latest revolutionary advancement, and driving the conference agenda. C’est la vie. That’s how the market works. I get it.

I have also noted, however, that technology companies have often sold us on the next big thing, even when it wasn't. Does anyone remember the Apple Newton? LaserDiscs? Quadraphonic sound? Have you scanned a QR code lately? Ever heard of Sony Betamax?

Technology is an effective tool when utilized for the strengths it delivers. I appreciate the advancements we've made in technology more than most of my colleagues. I remember days sitting in a small closet jacking cassette tape recorders into an analog phone switch. I also know from a quarter century of coaching Customer Service Representatives (CSRs), Collections agents, and Sales representatives that human communication and interactions are complex on a number of levels. It isn't just the customer-to-CSR conversation that is complex, but also the Call Coach-to-CSR conversation and relationship. Technology may be able to provide objective advice based on voice data, but I doubt that technology can read the personality type of the CSR. I don't believe it can read the mood the CSR is in that day or the nonverbal cues they are giving off regarding their openness and receptivity to the information. I doubt it can learn the communication style that works most effectively with each CSR and alter its coaching approach accordingly.

But, I’m sure they’re working on that. Just check it out at your next conference. They’ll have a virtual reality demonstration ready for you, I’m sure.


Three Ways to Improve Your Quality Program in 2017

It’s still January and everyone is busy implementing goals for 2017. It’s not too late to take a good, long look at your contact center’s quality program with an eye to improving things this year. Here are three thoughts for taking your quality assessment (QA) to a new level.

Reevaluate the Scorecard

Most quality programs hinge on the quality of the criteria by which they measure performance. A few years ago there was a backlash against behavioral measurements (e.g., "Did the agent address the caller by name?") as companies sought to avoid the calibration headaches and wrangling over definitions. In true human fashion, the pendulum swung to the opposite end of the continuum: completely subjective measurement. Multiple behaviors gave way to two or three esoteric questions such as, "Did the agent reflect the brand?"

This shift to the subjective is, of course, fraught with its own problems. You can forget about having any objective data with which to measure agent performance. If your analyst is Moonbeam Nirvana, then you'll get consistently positive evaluations complete with praise for what Moonbeam believes were your good intentions (and lots of smiley emoticons). If, on the other hand, your analyst is Gerhardt Gestapo, then your performance will always fall short of the ideal and leave you feeling at risk of being written up.

Measuring performance does not have to be that difficult. First, consider what it is that you really desire to accomplish. Do you want to measure compliance or adherence to corporate or regulatory requirements? Do you want to drive customer satisfaction? Do you want to make agents feel better about themselves? Any of these can be a defensible position from which to develop criteria, but you should start by being honest about the goal. Most scorecards suffer from misunderstood and/or miscommunicated intentions.

Next, be clear about what you want to hear from your agent in the conversation. Define it so that it can be easily understood, taught, and demonstrated.

Prioritizing is also important. While exhaustive measurement of the interaction can be beneficial, it is also time consuming and may not give you the best return on your investment of time and energy. If your priority is add-on sales, then be honest about your intention of measuring it, define what you want to hear from your agents, and then focus your analysts on listening for those priority items.

Look at Data for Both Agents and Analysts

One of the more frequently missed opportunities to keep your QA process on task is examining the data on how your analysts actually scored the calls.

Years ago our team was the third party QA provider for several teams inside a global corporation while internal teams managed the job for other locations. There was an initiative to create a hybrid approach that put the internal and external analysts together in sampling and measuring agents across all offices. When we ran the numbers to see how analysts were scoring, however, the internal analysts' average results were consistently higher than the external analysts'. Our analysis of analyst data provided the opportunity for some good conversations about the differences in how we were hearing and analyzing the same conversations.

Especially in larger quality operations in which many analysts measure a host of different agents and/or teams, tracking analyst data can provide you with critical insight. When performing audits of different QA programs, our team quite commonly finds that analysts who also happen to be the team's supervisor are easily tempted to sacrifice objectivity in an effort to be "kind" to their agents (and make their team's scores look a little better to the management team). Likewise, we have seen instances where the data reveal that one analyst is unusually harsh in their analysis of one particular agent (as evidenced by the deviation in scores compared to the mean). Upon digging into the reasons for the discrepancy, it is discovered that there is some personality conflict or bad blood between the two. The analyst, perhaps unwittingly, is using their QA analysis to passive-aggressively attack the agent.

If you’ve never done so, it might be an eye opener to simply run a report of last year’s QA data and sort by analyst. Look for disparities and deviations. The results could give you the blueprint you need to tighten up the objectivity of your entire program.
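If your QA software can export its raw evaluation data, this kind of analyst report takes only a few lines of code. Here is a minimal sketch in Python with pandas, assuming a hypothetical CSV export with one row per evaluated call and columns named analyst and score (the file and column names are illustrative, not from any particular QA suite):

```python
import pandas as pd

# Hypothetical export: one row per evaluated call, with columns
# "analyst" and "score" -- adjust the names to your own data.
qa = pd.read_csv("qa_2016_export.csv")

group_mean = qa["score"].mean()

# Average score, spread, and call count per analyst.
by_analyst = qa.groupby("analyst")["score"].agg(["mean", "std", "count"])

# How far each analyst's average sits from the group average.
by_analyst["dev_from_group"] = by_analyst["mean"] - group_mean

print(by_analyst.sort_values("dev_from_group"))
```

Any analyst whose average sits well above or below the group mean, or whose spread is unusually tight or wide, is a good candidate for a calibration conversation.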

Free Yourself from Software Slavery

As a third party QA provider, our team is by necessity platform agnostic when it comes to recording, playing, and analyzing phone calls. We have used a veritable plethora of software solutions, from the telephony "suites" of tech giants who run the industry like the Great and Powerful Oz to small programs coded for a client by some independent tech geek. They all have their positives and negatives.

Many call recording and QA software "suites" come with built-in scoring and analysis tools. The programmers, however, had to create the framework by which you will analyze the calls and report the data. While some solutions are more flexible than others, I have yet to see one that delivers the flexibility users truly desire. Most companies end up sacrificing their desire to measure, analyze, and/or report things a certain way because of the constraints inherent in the software. The amazing software that the salesperson said was going to make things so easy becomes an obstacle and a headache. Of course, the software provider will be happy to take more of your money to program a solution for you. I know of one company that, this past year, paid a big telephony vendor six figures to "program a solution" within their own software, only to watch them raise their hands in defeat and walk away (with the client's money, of course).

Tech companies have, for years, sold companies on expensive promises that their software will do everything they want or need it to do. My experience is that very few, if any, of the companies who lay out the money for these solutions feel that the expensive promises are ever fully realized.

If your call data, analysis, and reporting are not what you want them to be, and if you feel like you're sacrificing data and reporting quality because the software "doesn't do that," then I suggest you consider liberating yourself. If the tool isn't working, find a way to utilize a different tool. What is it we want to know? How can we get to that information? What will allow us to crunch the numbers and create the reports we really want? Look into options for exporting all of the data out of your software suite and into a database or an Excel-type program that will allow you to sort and analyze the data to get the information you want and need. Our company has always used Excel (sometimes in conjunction with other statistical software) because it's faster, easier, more powerful, and infinitely more flexible than any packaged QA software we've ever tested.
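As one illustration of what liberating yourself can look like: once the raw evaluations are exported, a report the packaged suite refuses to build is often just a few lines of work. A minimal sketch in Python with pandas, assuming a hypothetical CSV export with agent, date, and score columns (the names are illustrative):

```python
import pandas as pd

# Hypothetical raw export from a QA suite: one row per evaluation,
# with "agent", "date", and "score" columns.
raw = pd.read_csv("qa_suite_export.csv", parse_dates=["date"])
raw["month"] = raw["date"].dt.to_period("M")

# A report many packaged suites struggle to produce: average score
# per agent per month, as a single pivot table.
report = raw.pivot_table(index="agent", columns="month",
                         values="score", aggfunc="mean")

# Hand it to Excel, where it can be sorted, filtered, and charted freely.
report.to_excel("monthly_agent_scores.xlsx")
```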

Continuous improvement is key to business success. Scrutinizing quality criteria, analyst data, and your software constraints are just three simple ways to take a step forward with your quality program. Here’s to making sure that we’re doing things better at the end of 2017 than we were doing at the start!

 

Five Reasons to Consider a Third Party QA Provider

c wenger group is a full service Quality Assessment provider, assisting clients in setting up their QA programs and providing QA as a third party, complete with call analysis, reporting of team and individual agent data, and even data-led coaching and training.

If your team or company is thinking about getting into call monitoring and Quality Assessment (QA), or if you are seeking a solution to internal QA headaches, we encourage you to at least give consideration to a third party QA solution. Many companies dismiss the idea of a third party provider without really weighing the option. With nearly a quarter century of experience and multiple client relationships of twenty years or more, the team here at c wenger group believes we've proven that it can be a sensible alternative.

Here are five reasons to consider a third party QA provider:

  1. Expertise. I’m sure your company is good at what it does. You have expertise in your field and would like to focus your resources and energies on doing what you do well. We feel the same way. It may seem that analyzing a phone call, e-mail, or chat should not be that difficult. The technology company who sold you your suite of software probably made it sound like it would practically run itself and give you all sorts of powerful information with a few clicks of the mouse. The truth is that a successful quality program is more complex than it seems. Many companies go down the road to setting up their own quality program only to find themselves bogged down in a quagmire of questions about methodology, sample sizes, criteria, and calibration. Don’t try to re-invent the wheel building expertise in a business discipline that distracts you from doing what you do well (and what makes you money). Let us do what we do well, and help you with that.
  2. Expediency. We've had many companies tell us that they purchased or installed a call recording and QA solution that they thought would deliver an easy, "out of the box" program. Instead, they find themselves feeling like they purchased an expensive plane that sits on the tarmac because no one knows how to fly it. Don't spend months wrangling and struggling just to figure out how you want your QA program to look and work. How much time will you and your valuable, talented team members waste in meetings and strategy sessions just trying to figure out how you're going to analyze calls? We've been doing QA for companies of all shapes, sizes, and types for many years, and in a short period of time we can have a working, effective, successful QA program set up and delivering useful data and information right to your desktop.
  3. Objectivity. One of the most common pitfalls of internal quality programs is analyst bias. Supervisors are tasked with monitoring their own teams' calls, but they don't want the team (or themselves) to look bad, so when they hear something go wrong in a call they give the agent credit on the QA form and (wink, wink) "coach them on it." A quality team member has personality issues with an agent, so he scores that agent more stringently than the rest of the team. A team leader has an agent who is disruptive to the team, so she starts looking for "bad calls" to help make a case to fire the problem team member. These are scenarios we've seen and documented in our QA audits. They happen. What's the cost of an internal QA program that doesn't deliver reliable data or results? A third-party QA provider is not worried about making people look good or grinding axes. We are concerned with delivering objective data that accurately reflects the customer's experience.
  4. Results delivered regularly, and on time. One of the biggest problems with internal QA programs is that they chronically bow to the tyranny of the urgent (which reigns all of the time). When things get busy or stressful, the task of analyzing calls is the first thing pushed to the back burner. Internal analysts put off their call analysis until the deadline looms. Then they rifle through calls just to get them done, and the results are not thoughtful, accurate, or objective. Our clients tell us they appreciate knowing that when we're on the job the QA process will get done and it will be done well. Calls will be analyzed and reports will be delivered regularly and on time. Better yet, the results will help you set tactical goals for improvement, effectively focus your training, manage agent performance, and successfully move the needle on customer satisfaction, retention, and loyalty.
  5. You can always fire us. A client once told us that he kept us around because he slept better at night knowing that he could always fire us. His comment was, admittedly, a little unnerving but his logic made a lot of sense. “If I do this QA thing myself,” he explained, “I have to hire and pay people to do it. In today’s business environment it’s impossible for me to fire someone without a lot of HR headaches. So, if the people I pay to do it internally don’t do it well then I’m stuck with both them and the poor QA program. I like having you do QA for us. Not only do you do it well, but I know that if anything goes wrong I can just pick up the phone and say, ‘we’re done.'” The good news is that he never made that call before he retired!

If you’re looking at getting started in call monitoring and assessment, or if you have a program that isn’t working, we would welcome you to consider how one of our custom designed solutions could deliver reliable, actionable, and profitable results.

 


c wenger group designs and provides fully integrated Customer Experience solutions including Customer Satisfaction research, call/e-mail/chat Quality Assessment, and coaching/training solutions for teams and individual agents. Our clients include companies of all sizes in diverse market sectors.

Please feel free to contact us for a no obligation conversation!

Note: c wenger group will maintain your privacy

An Airplane on the Tarmac Profits You Little

Plane on tarmac: Sydney NS (Photo credit: mattjiggins)

I had an interesting conversation with a call center manager the other day over breakfast. I asked him how things were going at work. After a pause and a long sigh, I wondered if our breakfast was going to become an informal counseling session. He launched into his story. His company recently made a huge capital investment in the latest technology for call monitoring and evaluation. This is good news, right?! He’s got the latest programs that allow him to do all sorts of things in capturing, analyzing, and reporting on service quality. So, why was he looking so glum?

With all the investment in technology, there was no money in the budget to hire anyone to actually use the shiny new QA program. The marching orders from the executive suite were to use the new whiz-bang technology to work more efficiently and productively. “We bought you technology so we don’t have to hire more people,” was the mantra. He went on to make an interesting statement:

“It makes about as much sense as me going out and buying a new airplane. What can I do with it sitting there on the ground? I can stare at it. I can keep it clean. I can sit on the ground, stare at the dials, and play with the controls. But, I certainly can’t fly the thing.”

My colleague went on to explain how the corporate decision not to back-fill positions while increasing responsibilities for his call center staff meant that everyone had far more on their plate than could reasonably be accomplished. He knew his skeletal QA efforts were not coming close to utilizing the new, expensive technology, but the IT department that chose the system didn't have the human resources to help him get it optimized or to train the call center staff on how best to utilize it. Without human resources and human expertise, the investment in technology seemed a total waste. The company can certainly brag and feel good about having the latest technology that will allow them to fly with the best in the business world. However, without the necessary expertise and investment in human capital to actually make it fly, their team will sit on the tarmac admiring the dials on their very expensive placebo.


In Customer Service, Improvisation is Sometimes Necessary

 

from henriqueiwao via Flickr

My colleague was scheduled to present a training session to one of our client's teams this morning. I was scheduled to attend and observe. While I was aware of the general topic being presented, this training was my colleague's baby. She had written and produced it, and I'd never seen it presented before. She did, however, ask me to arrive early and set up the laptop, projector, and slide show for her. Because she was scheduled in a previous meeting, she knew she would be pressed to arrive on time and needed to be ready to jump right into her training presentation.

I was happy to help out. I arrived early and set up the laptop, projector, and slide show. I greeted our client guests as they arrived and helped them all get settled. My colleague was clearly running behind. I apologized, explained that she would be there momentarily, and attempted to initiate some small talk among the 20 or so team members assembled. A few minutes passed. My colleague had still not arrived.

The Senior Manager in the room grew visibly anxious at the delay. From the opposite side of the room he said, "Tom, will you please go ahead and get us started? We need to stay on schedule. You can start the training and she can take over when she gets here."

The subtext of this was not a question as in “Can you start us?” but a gentle demand: “Tom, you will start this session. Our team’s time is valuable and we don’t have time to wait around.”

Ummmm… Okay. So I got up and approached the laptop, praying that my colleague's slide show was thorough and detailed. Slide one contained the objectives. Sweet. I can go through these. The first point of the training was voice tone. I quickly pulled some information from my years as a trainer and plowed forward.

A few months ago I wrote a post on my personal blog outlining Ten Ways Being a Theatre Major Prepared Me for Success. The post went viral: well over 120,000 views to date and hundreds of comments from around the globe. Number one on that list was "Improvisation." I chuckled to myself as I thought about that and found myself improvising my way through the opening slides of a training presentation I hadn't produced and of which I had no prior knowledge. To my great relief, my teammate entered the room a few minutes later and delivered me from having to improvise any further.

I always tell my Customer Service training classes that training is all about understanding rules and exceptions. There are Customer Service rules that apply remarkably well to most service situations. Yet for every rule there are exceptional situations to which the rules don't apply. You don't want to make rules based on the exceptions. You do, however, want to be prepared for the exceptional situation that requires you to think on your feet and improvise in the moment.

QA is Important: You Get What You Measure (or Don’t)


Last night I was preparing a Service Quality Assessment report for one of our clients. For years the team was led by a strong manager who set the bar high and held his people accountable for their service performance. Agents had individual performance goals based on the service quality data we provided and could check their progress monthly through our online web portal. The manager even committed a generous monetary bonus to agents who consistently delivered high levels of service. Then, just two months ago, the manager was promoted and moved on to a new position.

Wouldn't you know it? The team's service performance plummeted after one month.

In recent years I've heard a cacophony of industry voices saying that QA is old school and ineffective. Most of the time it seems to come from technology vendors who have a new widget to sell that promises to measure quality better (without actually involving humans) at the click of a mouse, or who want businesses to redirect dollars spent on quality to their latest technology fad.

Last night's report was a good reminder to me, and to my client, of why the old fashioned discipline of setting an expectation, measuring behavior, encouraging, coaching, and holding your people accountable works. You can set the expectation, but without the measuring, encouraging, coaching, and accountability, you're not going to know whether your team is delivering on that expectation (and it's likely they won't). It may not be glitzy. It may not be glamorous. Because it involves humans and human interaction, it can even get messy at times. But it works.

Ask my client, who this morning can go into her team meeting with the data to know how her team performed, what they did well, and what specific service behaviors they stopped demonstrating once they thought they weren’t going to be held accountable. She knows specifically what they need to do and can efficiently communicate the game plan and expectation for improvement.

The Truth of the Tape

A typical home reel-to-reel tape recorder (Image via Wikipedia)

Since Prohibition, when recorded phone conversations with a bootlegger were first used in a criminal prosecution, the taped phone call has had a colorful history. Movies and television have made familiar the image of FBI agents hunkered over spinning reels of tape in a van or an empty warehouse loft as they listen in on the calls of shady mobsters. Go to the new Mob Museum in Las Vegas and you’ll get to hear some of the actual calls for yourself.

The recorded conversation is a powerful tool. In our training with clients, our team will often go into a studio and recreate a phone call using voice actors to protect the identities of the caller and CSR while accurately recreating the customer service conversation between the two. These calls are always a fun and effective training tool because they are based on an actual interaction with which CSRs identify. "I took a call just like that," we hear all the time. "I think that mighta been me!" Because the pertinent identifying information is hidden, the focus can be on what we can learn from the call and how the interaction might have been improved.

Another important way to utilize recordings is as evidence of a particular procedural or systems-related issue. Call recording software often includes a video capture of what is happening on the agent's desktop during the phone call. When trying to make a point about how convoluted or cumbersome a particular system is for agents while they are on a call, a recorded example complete with visuals can be a powerful piece of evidence for upper management and decision makers. As they sit and uncomfortably witness firsthand a CSR struggling through a jungle of screens while trying to maintain conversation and call flow with the customer, it makes a much more persuasive argument than a mere description of the issue.

Of course, recordings can also be very effective tools to highlight both positive and negative performance. It's hard for CSRs to defend their poor service behaviors when there is a plethora of recorded evidence with which to coach them. People often think of call recording as merely a tool to catch people doing things wrong, but our team regularly reminds CSRs that the truth of the tape can also catch people doing things right and become hard evidence of an agent's exemplary service skills. Many years ago a frustrated manager asked our team to do a special assessment of an agent's calls. The manager wanted to fire the agent and was looking for evidence to do so. In this case, the tape revealed that the agent performed well when serving customers on the phone. The truth of the tape helped protect the CSR from being unfairly terminated.

Call recordings are tools. As with all tools, the results lie in the wisdom and abilities of the person or persons wielding them. When misused, call recording can do damage to people and businesses. When used with discernment and expertise, those same recordings can effectively help build a successful business.


You Can’t Fix What You Don’t Know is Broken


I’m working with several new teams for a particular client. It’s always a bit of a sticky wicket when I show up for the first time. The other day I walked into the office of a department manager who’d been ducking me for weeks. Unanswered e-mails, unreturned voicemails and missed appointments. My team has been hired by the executive team to do a pilot assessment of his team’s service, and he wasn’t too happy about it. Many times a team and their managers are a little freaked when Mr. or Ms. Big tells them that someone is coming to listen in on their customer conversations.

  • “Oh, great! Big Brother is here!”
  • “What? Do you think we’re bad?”
  • “Someone’s just looking for the dirt to fire us!”
  • “What did I do wrong?”

I get it. It's not always comfortable doing something new and a bit threatening when you've never done it before. And yet I have almost twenty years of experience doing this for many different companies and many different teams who started out as skeptics and are now long-term partners in better sales, service, and even collections. It seems comfortable and easy rolling along without really knowing what's happening in those moments of truth when your customers are talking to your company. "If it ain't broke don't fix it," they say. But we are all human beings working for human beings dealing with human beings in a system created and maintained by human beings. I have therefore come to trust more in Bob Dylan's perspective: "Everything is Broken." My experience is that in any customer service, sales, or collections team there are things broken in the system that could easily be remedied once identified. But first you have to identify what they are. If you're not listening, you might not know something is broken until it's too late (and no one wants that to happen at any rung of the corporate ladder).

When our team does a first-time pilot assessment with a team, we generally start by assessing the whole team. We listen from the customer's perspective. We don't care who is who. We don't identify individual agents. Like the customer: when you call Acme Anvils, you don't care who answers the phone. You're talking to Acme Anvils. By starting with a blind assessment of the team, we can quickly identify areas the team needs to improve. There's no finger pointing, no calling out, no working agreements, and no private conversations in the corner office. There's just a common issue that the whole team needs to address.

I'm happy to say that the vast majority of our clients, from the front line to the board room, eventually learn that our Service Quality Assessment benefits everyone, from the customer to anyone in the organization who cares about the customer and wants to do a good job. But I first have to prove it to them and earn their trust. And so, I begin my day.

The Check-Out Line and Hold Button Have Glaring Similarities

NEW YORK - NOVEMBER 24: Travelers wait in line (Image by Getty Images via @daylife)

The Wall Street Journal had a great article this morning about the science of finding the best check-out line. Within the article, it talked about what happens when you are in a queue for a period of time:

Envirosell, a retail consultancy, has timed shoppers in line with a stopwatch to determine how real wait times compared with how long shoppers felt they had waited. Up to about two to three minutes, the perception of the wait “was very accurate,” says Paco Underhill, Envirosell’s founding president and author of the retail-behavior bible “Why We Buy: The Science of Shopping.”

But after three minutes, the perceived wait time multiplied with each passing minute. “So if the person was actually waiting four minutes, the person said ‘I’ve been waiting five or six minutes.’ If they got to five minutes, they would say ‘I’ve been waiting 10 minutes,'” Mr. Underhill says.

It confirms exactly what we've known for many years about customers placed on hold. Put a caller on hold for a minute or two and they typically don't mind. Something happens, however, between the two and three minute mark. Customers who hit that third minute on hold begin to get anxious, and the perceived length of time on hold becomes inflated. They've been on hold for just over three minutes, but if you ask them they'll tell you it was ten.

The hold button can be a useful tool to help CSRs avoid dead air or allow CSRs a moment to get information together and confidently prepare their response before addressing the customer. If you leave the customer for too long, however, it’s going to come back to bite you. When using the hold button, remember:

  1. Ask the caller’s permission to place him/her on hold. Customers like to feel that they have control and a say in the service they receive. Forcing the customer to hold or placing a customer on hold without permission runs the risk of the customer feeling they are getting the runaround.
  2. If possible, give the customer a realistic time frame. Many customers feel lied to when a CSR says "Let me put you on hold for a second" and is then gone for three minutes. Telling the customer he or she will be on hold "for a minute or two" is more honest and better manages expectations.
  3. Check back after two minutes. If it’s been two minutes and you’re still working on the issue then return to the line, apologize for the wait, explain that you’re still working on it, and give the customer the option of remaining on hold or receiving a call back in a set period of time.

Call Center managers or supervisors would do well to find a way to give CSRs a two-minute countdown timer that starts when they hit the hold button and reminds them when the two minutes have elapsed.
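The plumbing for such a reminder is trivial; the harder part is hooking it to your phone system's hold event. Here is a minimal sketch of the idea in Python, where the trigger and the alert are illustrative placeholders, not any vendor's API:

```python
import threading
import time

HOLD_LIMIT_SECONDS = 120  # the two-minute mark

def start_hold_timer(on_expire):
    """Start a countdown when the CSR hits hold; fire a reminder at 2:00."""
    timer = threading.Timer(HOLD_LIMIT_SECONDS, on_expire)
    timer.start()
    return timer  # cancel this when the CSR takes the caller off hold

def remind_csr():
    print("Two minutes on hold -- check back with your caller now.")

# Example: caller placed on hold, then retrieved after 30 seconds.
t = start_hold_timer(remind_csr)
time.sleep(30)
t.cancel()  # the CSR returned to the line before the reminder fired
```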

Armed with the knowledge of what we know to be true about customers, we can better manage the process of asking the customer to hold while we serve them.


Year-End QA Considerations

Calendar (Image by studiocurve via Flickr)

For many companies, the months of November, December and January signal the end of a fiscal year. With the end of the year comes annual performance management reviews which often include a service quality component. It is quite typical for this service quality component to be a score from the call monitoring and coaching QA program (e.g. “your call may be monitored to ensure quality service”). After almost two decades of doing QA as a third party provider as well as helping companies set up and improve their QA programs, I can tell you that year end reviews bring heightened scrutiny to your QA process. This is especially true if monetary bonuses or promotions hinge upon the results.

Not to be a fear monger (it is Halloween as I write this), but now is a good time to do a little self-check on your program:

  • Sample: If your QA process is intended to measure a CSR's overall service quality across the entire population of calls, make sure your sampling process is robust and you've collected a truly random sample of calls. This means that calls were not excluded because of their length and that they are representative across hours of the day, days of the week, and weeks/months of the year (one way to check this is sketched after this list).
  • Objectivity: Make sure you've checked your internal call analysts' objectivity. This can be done with a simple analysis of the data. Run averages of each analyst's results, both for the overall score and for each element on your scorecard. By comparing individual analysts' averages against the group average, you will see where objectivity issues may have clouded results. This can also be checked through a robust and disciplined calibration program, though that is not done quickly.
  • Bias: Make sure that your program is not set up in such a way that those who analyze the calls have an inherent interest in the outcome. A classic example is supervisors scoring their own team's calls. The team's QA results reflect on the supervisor (in some cases the supervisor has incentives that hinge on the quality scores), so it is often hard for supervisors to be completely objective in their analysis. A good quality program rewards analysts for the objectivity of their results, not the results themselves.
  • Collusion: If, month after month, the QA results consistently show that your entire team is performing at 98-100% of goal, then one of two things is likely true: 1) your QA program has the bar set so low that almost anyone with blood pressure and a pulse can meet goal, or 2) everyone in the organization, from the front-line CSR to the executive suite, has colluded in making the company's service quality look a lot better than it is. I get it. Sometimes it's easier to pretend a problem doesn't exist than to do the work of addressing it. Every organization with more than a handful of CSRs can count on having a wide range of quality across its front-line ranks. It's a human nature thing. If everyone is scoring almost perfectly, then something's definitely rotten in the state of Denmark.
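For the sampling check in the first bullet above, a quick comparison of when your sampled calls occurred against when all calls occurred can reveal skew at a glance. A minimal sketch in Python with pandas, assuming hypothetical CSV files of all calls and of sampled calls, each with a call_start timestamp (file and column names are illustrative):

```python
import pandas as pd

# Hypothetical files: every call taken ("all_calls.csv") and the calls
# pulled for QA ("qa_sample.csv"), each with a "call_start" timestamp.
population = pd.read_csv("all_calls.csv", parse_dates=["call_start"])
sample = pd.read_csv("qa_sample.csv", parse_dates=["call_start"])

def share_by(df, part):
    # Proportion of calls falling in each hour/day-of-week/month.
    values = getattr(df["call_start"].dt, part)
    return values.value_counts(normalize=True).sort_index()

# If the sample is truly random, these shares should roughly match.
for part in ("hour", "dayofweek", "month"):
    comparison = pd.DataFrame({
        "population": share_by(population, part),
        "sample": share_by(sample, part),
    }).fillna(0)
    print(f"\nShare of calls by {part}:")
    print(comparison.round(3))
```

Large gaps between the two columns (for example, a sample with almost no evening or Monday calls) are a sign the sampling process is excluding calls it shouldn't.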

If your year-end is coming up, it’s a good idea for Call Center Managers and executives to start asking some questions now so that there are no surprises when CSRs, unhappy with the results of their performance management, begin asking questions. If you’re interested in an independent 3rd party audit of your current program, contact me. It’s one of the things we do.
