Category: QA Methodology

Things You Learn Capturing Calls

As the QA provider for some of our clients, c wenger group employs a small group of dedicated specialists whose job it is to sift through all of the phone calls recorded by the client’s recording software, determine which calls are usable for analysis, and assign them to the appropriate call analysts. Using different people for capturing and assigning ensures that those tasked with analyzing the calls don’t give in to the temptation of selecting only shorter calls, good calls, or easy calls for analysis (and thus bias the sample). Most companies confine the QA program to what happened within a phone call, but a quick analysis of the call sample for a given agent, or group of agents, can be very revealing. Here are a few examples of issues our call capturers have brought to our attention:

  • One CSR had an average number of calls for their position, but 95 percent of the calls were from family and friends.
  • Another agent who worked a territory of regular customers checked his voice mail several times an hour but rarely took a call or made a call. We suspected that he was choosing not to answer the phone, checking the voice mail, then responding to the customer via e-mail as a way of avoiding having to actually talk to customers.
  • One group of sales agents simply weren’t making any of the sales calls with which they were tasked. Either the recording software wasn’t working or all of the cold calls they manually recorded on their daily sales call log were…well, you get the idea.

Sometimes the built-in accountability that comes from a QA team simply trying to identify calls for analysis provides its own R.O.I., surfacing opportunities to increase productivity, or at least exposing a lack of productivity that can then be addressed.

QA Today: The Human Element

A growing number of companies are scrapping internal call monitoring programs and Quality Assessment initiatives. One noticeable trend is the shift toward after-call satisfaction surveys to replace traditional call monitoring. In most cases, the customer is asked to rate their satisfaction with the agent and/or the resolution of the issue. In some cases, customers can leave comments for the Customer Service Representative (CSR). I’ve heard of some companies who use the satisfaction ratings from these post-call surveys as the only service quality metric.

From a management perspective, this tactic has all sorts of budgetary, productivity and managerial upside. In effect, you automate the process. Let the IVR handle the survey, let your software spit out a report that gets emailed to the CSR and supervisor. If customers are happy, then the company is happy. You only deal with CSRs who get consistently poor ratings.

Sounds like a dream. So, what’s the problem?

  • Bias. Post-call IVR surveys are rife with all sorts of response bias. You’re not getting an objective, random sample of customer feedback. You’re typically getting feedback from customers who are really happy, really unhappy, or who like entering the survey sweepstakes.
  • You get what you measure. If a CSR knows that they simply have to get good ratings, then they will make sure they get good ratings. This might include giving away the company store to ensure blissful customers or badgering customers to give them good ratings (e.g. “I might lose my job if I get bad ratings.”). You might never know this, however, because you’re not listening to the calls.
  • No actionability. One of the most critical pieces you miss when relying on customer satisfaction as your lone QA metric is actionability. So, customers aren’t satisfied with a particular agent. Typically, there’s no objective data to help that CSR know what he/she is doing that dissatisfies customers. You might pick up a few ideas from anecdotal messages customers leave, but it’s certainly not an objective measurement. You could coach your CSR to focus on a particular behavior based on one or two irate customers who leave a post-call tirade, but completely miss some critical service skills that the CSR needs to address to consistently improve the customer experience.

In an era in which technology is touted as a cure for every business problem, it’s easy to want to flip a switch and have QA reports automatically generated and sent out. However, great Customer Service is still largely a human enterprise conducted by human beings with a wide range of education levels, skills, experience and personalities. The best way to address human behaviors is with human assessment and human interaction. It may be messy at times, but it can be done efficiently, done successfully, and done well.

QA Today: Pondering Some Foundational Thoughts

This is the first part of a series of posts regarding the state of Quality Assessment (QA) in the Call Center or Contact Centre.

I’ve been on a sabbatical of sorts for a few months. My apologies to those who’ve missed my posts and have emailed me to see if I’m okay. We all need a break from time to time, and after almost four years I gave myself a little break from posting. While on sabbatical, I’ve been watching the trends in the call center industry and, in particular, what others have been saying about Quality Assessment (QA). I’m finding a sudden anti-QA sentiment in the industry. One client mentioned that the call center conference she recently attended had no sessions or workshops about QA. Another client sent me an article that bemoaned the failure of QA and called for QA to be “modernized.” At the same time, I’m hearing about companies who are shutting down their QA operations and turning to after-call surveys and customer satisfaction metrics to measure agent performance.

I’ve been in this industry for almost twenty years. And I’d like to take a few posts to offer my two cents worth in the discussion, though more and more I’m feeling like a voice crying in the wilderness. First, I’d like to make a couple of general observations as a foundation for what I’m going to share in subsequent posts.

  • QA is a relatively new discipline. It has only been in the past 15-20 years that technology has allowed corporations to easily record interactions between their customers and their agents. In even more recent years, the profusion of VoIP technology in the small to mid-sized telephony markets has spread that ability into almost every corner of the marketplace. Suddenly, companies have this really cool ability to record calls and have no idea what to do with it. Imagine handing an Apple iPhone to Albert Einstein. Even the most intelligent man is going to struggle to quickly and effectively use the device when he has no experience or frame of reference for how it might help him. “It can’t be that hard,” I can hear the V.P. of Customer Service say. “Figure out what we want them to say and see if they say it.” The result was a mess. Now, I hear people saying that QA is a huge failure. This concerns me. I’m afraid a lot of companies are going to throw the QA baby out with the bathwater of trending industry tweets rather than investing in how to make QA work effectively for them.
  • We want technology to save us. We are all in love with technology. We look to technology to help us do more with less, save us time, and make our lives easier. We like things automated. We have the ability to monitor calls and assess agents because technology made it possible. Now I’m hearing cries from those who’d like technology to assess the calls for us, provide feedback for us, and save us from the discomforts of having to actually deal with front-line agents. This concerns me as well. If there’s one thing I’ve learned in my career, it’s this: Wherever there is a buck to be made in the contact center industry, you’ll find software and hardware vendors with huge sales budgets, slick sales teams, and meager back-end fulfillment. They will promise you utopia, take you for a huge capital investment, then string you along because you’ve got so much skin in the game. Sometimes, the answer isn’t more, better, or new technology. Sometimes the answer is figuring out how to do the right thing with what you’ve got.
  • The industry is often given to fads and comparisons. Don’t get me wrong. There’s a lot of great stuff out there. We all have things to learn. Nevertheless, I’m fascinated when I watch the latest buzzword, bestseller and business fad rocket through the industry like gossip through a junior high school. Suddenly, we’re all concerned about our Net Promoter Scores, and I’ll grant you that there’s value in tracking how willing your customers are to tell their friends and family about your business. Still, when your NPS heads south, it’s going to take some work to figure out what’s changed in your service delivery system. If you want to drive your NPS up, you have some work ahead of you to figure out what your customers expect and then get your team delivering at or above expectation. And, speaking of junior high, I also wonder how much of the perceived QA struggle comes from spending too much time comparing ourselves to everyone else rather than doing the best thing for ourselves and our customers. I’ve known companies who ended up with mediocre QA scorecards because they insisted on fashioning their standards after the “best practices” of 20 other mediocre scorecards from companies who had little in common with theirs.

Know that when I point a finger here, I see three fingers pointing back at me. We’re all human, and I can see examples in my own past when I’ve been as guilty as the next QA analyst. Nevertheless, I’m concerned that the next fad will be for companies to do away with QA. I know that there is plenty of gold to mine in an effective QA process for those companies willing to develop the discipline to do it well.

Creative Commons photo courtesy of Flickr and striatic

Are You Measuring What You Are, or What You Want to Be?

Letter scale image via Wikipedia

When it comes to developing a scale by which you measure your company’s phone calls, there are a number of ways to approach it. I always encourage clients to start with a few foundational questions:

  • “What is our goal, and what do we want the outcome to be?”
  • “What are we trying to achieve?”
  • “Who are we primarily serving with the scale/scorecard/form/checklist?”

The process of deciding what behaviors you will listen for, what you expect from your Customer Service Representatives (CSRs), and how high you set the standard can be a complex web. When you involve many voices from within the organization who have their own agendas and ideas, the task can slide into conflict and frustration very quickly. By defining up front what you want to accomplish, you can always take the conflict about a particular element back to the question “How is this going to help us achieve the goal we established?”

Let me summarize some general observations about QA scorecards and programs I’ve seen which represent different organizational goals.

  • Reaching for the stars. Some companies set a high standard in an effort to be the best of the best. They know that their CSRs are human and will never be perfect, but they set the bar at a level which will require conscious effort to achieve. CSRs are expected to continuously improve their service delivery. In these cases, the behaviors measured by the QA form can be exhaustive and ideal.
  • Maintaining the standard. There are some organizations who don’t care about being the best of the best; they just want to maintain what they’ve deemed to be an acceptable standard. The scale rewards the vast majority of CSRs with acceptable scores while identifying the relatively few CSRs who could hurt the organization and likely need to find another job.
  • Motivating the troops. CSR motivation and encouragement is the focus of some QA programs. Scorecards designed in these situations tend to look for and reward any positive behaviors the CSRs demonstrate on a consistent basis while minimizing expectations or negative feedback. In these cases, the elements of the QA form gravitate towards easily identifiable and rewardable behaviors.
  • Customer Centric. Some call centers really focus on designing their QA evaluation form around what their customers want and expect. They use research data to identify the behaviors which drive their customers’ satisfaction. The Quality Assessment checklist is designed to create a snapshot of how the individual or team is performing in the customer’s mind. These types of evaluations can vary depending on the market and customer base.
  • Going through the motions. I’ve encountered some companies who really don’t care what they are measuring or how they are measuring it. They just want to have a program in place so that they can assure others (senior management, shareholders, customers, etc.) that they are doing something about service quality. In this case, the scorecard doesn’t really matter.

Some quality programs and scorecards struggle because they haven’t clearly defined what they want and what they are trying to achieve. Different individuals within the process have competing goals and motivations. Based on my experience, I recommend some approaches more than others and have my own beliefs about which are best. Nevertheless, I’ve come to accept that most of the differing approaches can be perfectly appropriate for certain businesses in particular situations (though I’d never recommend the “Going through the Motions” approach, as it tends to waste time, energy and productivity). The key is to be honest about your intentions and clear in your approach. It makes the rest of the process easier on everyone.


A Little Consideration Goes a Long Way

Our group is currently working with one of our clients on a major overhaul of their quality program. With projects of this size, it is natural for things to take longer than planned. In a meeting a few weeks ago, the discussion turned to when we would go “live” with the new Quality Assessment (QA) scorecard, since the original deadline was fast approaching. The initial response was “we don’t see any reason not to implement as scheduled and start evaluating calls right away.” It did not take long, however, for the team to realize that it would be inappropriate for them to start evaluating Customer Service Representatives (CSRs) before they had even told the CSRs what behaviors the new QA scale evaluated. To their credit, the quality team and management chose to miss their deadline, push back implementation, and give their front-line associates the opportunity to learn what the scorecard contained before they began evaluating the agents’ phone calls with the new scorecard.

In retrospect, it seemed an obvious decision. Why wouldn’t you want to give your own associates the consideration to view the QA criteria and have an opportunity to change any necessary behaviors before you analyze their calls? As I thought about it on my drive home, I realized how often I find a lack of consideration in the corporate contact center.

  • Marketing drops a promotion that will generate a spike in calls without ever consulting the contact center or telling them what the promotion contains.
  • CSRs are given an ultimatum to cut “talk time” or “average handle time” without anyone taking the time to assess and find tactical ways to do so (like identifying new shortcuts to commonly requested information, etc.).
  • A policy or procedure is changed, and then associates are held accountable before it has been clearly communicated.
  • IT procures and installs telephony, IVR, call recording, or other system software without consideration of how it will affect the call center’s ability to serve customers.
  • A supervisor or QA team simply gives a CSR his or her “score” (e.g. “You got an 82 on your QA this month”), without any clear documentation regarding which behaviors they missed or a conversation/coaching about how the CSR can alter behavior and improve.
  • QA criteria are so broad and ill-defined that a “QA Nazi” supervisor can use them to beat CSRs into submission with his or her own impossible personal expectations, while a “QA Hippie” supervisor can use the same criteria to boost the team’s self-esteem by giving them all “100”s (turning the zeroes into smiley faces, of course).

As we near year end and are looking towards setting goals for 2011, perhaps one goal for all managers should be to identify areas of our process in which we act without consideration for those our actions will affect.

With On-Line Chat, a Few Extra Words Go a Long Way

Online chat image by marioanima via Flickr

Our group goes beyond call monitoring to provide Service Quality Assessment for a client’s e-mail and/or on-line chat communication. The process is virtually the same: we define the key behaviors or service elements that will consistently meet and exceed the customer’s expectations and drive increased satisfaction. E-mail and chat are important, but often overlooked, communication channels. Your e-mail and chat correspondence can make (or break) customer satisfaction just like a phone call. Take my experience today, for example:

Before we were married, I sponsored a child in a third world country through a charitable organization. It’s been a great experience, and my support quickly became a joint venture as my wife got involved. Her name, however, had never been added to the account. So, while making an on-line donation I noticed that there was an on-line chat option and figured it was a good time to add her name to the account.

Here is a transcript (names changed):

Mitzi: Thank you for contacting ORGANIZATION. How may I assist you today?
Tom: Hi Mitzi. I’m wondering how I can get my wife’s name added to my account. I started sponsorship before I was married, but now we are both involved in sponsoring our child and I’d like her name included.
Mitzi: I am happy to assist you with that!
(I feel like there was about a 4-5 minute wait here) 
Mitzi: What is your wife’s name?
Tom: Wendy.
(I feel like there was another 3-4 minute wait)
Mitzi: I am working on this, just a moment.
(I timed this wait at about 6 minutes)
Mitzi: I submitted the paper work on this. I appreciate your patience.
Tom: Great. Thanks!
(Waited briefly for a response)
Tom: Do I need to do anything else? How long does it take?
Mitzi: You should see the change gradually… in the next 4 to 6 weeks everything should have her name on it.
Tom: Wonderful. Thanks for your help!
Mitzi: Thank you for chatting with me. I welcome your feedback. Please click here to complete a 15 second survey.

The on-line rep was pleasant, professional and did a nice job. My issue, as far as I know, has been resolved. It was a good experience, but it wasn’t a great experience. There are a few key things that would have left me far more satisfied:

  • Be sensitive to my time. Our customer satisfaction research shows that time-related elements (e.g. quickness in reaching a rep, answers without being placed on hold, or timeliness of follow-up) are a growing driver of customer satisfaction across many customer segments. There were long gaps of time between responses that left me wondering what was happening on the other end. A quick statement to let me know what was going on, or to give me a time frame, would have eased my anxiety and impatience.
  • Don’t just tell me what you did; tell me what I can expect. The on-line rep told me that she submitted the paperwork, but I had to guess what that meant. My initial thought was that I might have to wait on-line while it was processed. Rather than anticipating my questions, I was left having to pull it out of her.
  • Courtesy and friendliness are sometimes more important in text than on the phone. CSRs in a call center have the inflection of their voice to communicate a courteous tone, but written communication can take on an abrupt feeling when it’s void of courtesy. Adding a “please” when making a request or using the customer’s name (especially when they use yours) can turn a black-and-white exchange into a pleasant conversation.
  • Make sure you’ve answered all my questions. At the end of the chat I was left wondering if it was over. By asking if I had any other questions, it would have clued me in that the issue was resolved while offering to go the extra mile and help with other needs.

Here’s the transcript again, but I’ve rewritten it the way I would have appreciated experiencing it:

Mitzi: Thank you for contacting ORGANIZATION. How may I assist you today?
Tom: Hi Mitzi. I’m wondering how I can get my wife’s name added to my account. I started sponsorship before I was married, but now we are both involved in sponsoring our child and I’d like her name included.
Mitzi: I am happy to assist you with that! And, congratulations on getting married! Please bear with me. It will take a few minutes to access your account and the appropriate forms.
Mitzi: Thanks for waiting, Tom. May I please have your wife’s name?
Tom: Wendy.
Mitzi: Thank you. It will take me 5 minutes or so to fill out the appropriate forms.
Mitzi: Sorry for the delay. I am still working on this, just a moment.
Mitzi: I appreciate your patience. I submitted the paper work on this. You should see the change gradually… in the next 4 to 6 weeks everything should have her name on it.
Tom: Great. Thanks!
Mitzi: Any other questions I can answer for you, Tom?
Tom: No. Wonderful. Thanks for your help!
Mitzi: Thank you for chatting with me, and thank you and Wendy for sponsoring a child. I welcome your feedback. Please click here to complete a 15 second survey.

A few extra words and sentences, properly placed, can turn a cut-and-paste chat experience into one that is personable, friendly, professional and polite.


Eeny-Meeny-Miny-Moach, Which Call Do I Choose to Coach?

I was shadowing several call coaches as part of a call coach mentoring program for one of our clients. It was interesting to watch these coaches select the calls they were going to analyze. Each coach quickly dismissed any call shorter than two minutes and any call longer than five minutes, gravitating to calls between three and five minutes in length. The assumption was that any call shorter than two minutes had no value for coaching purposes. Dismissing longer calls was done, admittedly, because they didn’t want to take the time to listen to them. Unfortunately, this is not an uncommon practice. Yet, there are several problems with this approach:

  • You are not getting a truly random sample of the agent’s performance. If you are simply coaching an occasional call, this may not be a major issue. If you are using the results for bonuses, performance management or incentive pay, then your sampling process may put you at risk. By eliminating calls based on time, you are really restricting your sample to calls in a certain time range, which is not a true picture of an agent’s performance across all of his or her calls.
  • You are ignoring real “moments of truth” in which customers are being impacted. Customers can make critical decisions about your company in thirty-second calls and thirty-minute calls. To avoid listening to these calls is to turn a blind eye to what may be very critical interactions between customers and CSRs. I have, unfortunately, witnessed situations in which CSRs rushed customers off the phone. I’ve even known a few CSRs to routinely place callers on hold, immediately after answering the phone, and then release the call. In the case of my client today, the QA team would never catch it. It always happened in less than 30 seconds.
  • You may be missing out on valuable data. Short calls often happen because of misdirected calls or other process problems. Quantifying why these are occurring could save you money and improve one-call resolution as well as customer satisfaction. You might also discover simple questions that aren’t currently being handled by your IVR. Likewise, longer calls may result from situations that have seriously gone awry for a customer. Digging into the reasons may yield valuable information about problems in the service delivery system that will save you time and future calls.

Capturing and analyzing a truly random sample of phone calls will, in the long run, protect and benefit everyone involved.
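
For teams who manage their own call capture, a truly random pull is simple to automate. Here is a minimal sketch in Python, using hypothetical record fields, of drawing a duration-blind sample from a day’s recordings rather than cherry-picking calls in a preferred time range:

    import random

    def sample_calls(recorded_calls, sample_size, seed=None):
        # Draw a simple random sample for QA analysis. Every call is
        # eligible regardless of duration, so a 30-second hang-up has the
        # same chance of being reviewed as a "comfortable" 3-5 minute call.
        rng = random.Random(seed)
        return rng.sample(recorded_calls, min(sample_size, len(recorded_calls)))

    # Hypothetical records: each carries an id, an agent, and a duration in seconds.
    calls = [
        {"call_id": 101, "agent": "CSR-A", "duration_sec": 22},
        {"call_id": 102, "agent": "CSR-A", "duration_sec": 245},
        {"call_id": 103, "agent": "CSR-B", "duration_sec": 1830},
        {"call_id": 104, "agent": "CSR-B", "duration_sec": 95},
    ]

    for call in sample_calls(calls, sample_size=2, seed=42):
        print(call["call_id"], call["duration_sec"])

The fixed seed is only there to make the example repeatable; the point is that call length never enters into which calls get reviewed.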

“Must Learn BALANCE, Daniel-san”

I had an interesting post come across my RSS feed this afternoon from Syed Masood Ibrahim, in which he presented the following statement:

Nothing frustrates me more than the waste associated with counseling, monitoring and inspecting the agent for improved performance.  No organization can inspect in good service.

95% of the performance of any organization is attributable to the system and only 5% the individual.  This challenges the modern attempts by many contact centers to focus attention on the agent.  The problem is that the design of the work is so poor that an agent has little chance of being successful.  Blaming the individual for a bad system is nonsense.

I agree with Mr. Ibrahim that some contact centers place an inordinate amount of blame on the CSR for failures in the service delivery system. His supposition is correct. If the system is broken, it doesn’t matter how nice your CSRs are to the customer, the customer is going to walk away dissatisfied.

With all due respect to my colleague, however, I must disagree that CSR performance is only 5% of the equation. I believe the opposite of Mr. Ibrahim’s supposition is also true: if you have a perfect system, but your CSR communicates with the customer in a less than appropriate manner, you still have a dissatisfied customer. I’ve had the privilege of working with some of the world’s best companies, companies who manage their systems with an exceptional degree of excellence. In each case, the system will only drive customer satisfaction so far. It is the CSRs’ consistent, world-class service that drives the highest possible levels of customer satisfaction, retention, and loyalty.

In the famed words of Mr. Miyagi, “Must learn balance, Daniel-san.”

A good quality program will identify and address both system-related and CSR-related issues that impede service quality. When our group performs a Service Quality Assessment for our clients, our analysts are focused on the entire customer experience. That includes both the agent’s communication skills and the system-related problems that stand in the way of the customer receiving resolution to his or her issue. The client’s management team receives an analysis of both CSR skill issues and policy/procedural issues that need to be addressed if the customer experience is going to improve.

The pursuit of service excellence requires attention to every part of the service delivery system.

Creative Commons photo courtesy of Flickr and bongarang

Should CSRs Perform Their Own QA Assessment?

Our good friend at Call Centre Helper recently responded to this series of posts on who should do the Quality Assessment (QA) in the contact center, and suggested we've missed two alternatives: CSR self-assessment and technology-based speech analytics. I think both of these options deserve consideration.

Let's start with a post about CSR self-assessment. Many call centers allow or require their Customer Service Representatives (CSRs) to listen to and assess their own calls. It can be a great training tool:

  • Individuals can listen without the pressure of feeling someone else's judgment. In call coaching situations, some CSRs are so nervous about having someone listening to their calls or judging their performance that they tend to miss the point of the process. By listening alone to their calls, a CSR can sometimes focus in on what took place in the call without these interpersonal distractions.
  • We tend to be our own worst critics. Individuals will regularly hear things that others don't. It is quite common in coaching sessions for CSRs to point out things they could have improved that didn't even occur to me. By having CSRs critique themselves, they may listen more critically than even an objective analyst, and that can be a huge motivator for some CSRs.
  • Having the CSR go through and assess the call using the QA scorecard engages them with the process and forces them to consider the behavioral standards. Many QA programs create contention simply because CSRs do not understand the criteria with which their conversations are analyzed, and don't understand how the process works. When a CSR sits down with the scorecard and analyzes their own calls, it forces them to think through how they performed on each behavioral element.

You'll notice I wrote that self-assessment is a great training tool. I don't believe that self-assessment is a great way to approach your QA program if you want to get a reliable, objective assessment of what took place on the phone. Self-assessment has its drawbacks:

  • Having people grade themselves is inherently biased. If you want a reliable and statistically valid measurement of what's happening on the phone in your call center, you need someone other than the person who took the call to analyze the call.
  • Based on the personality and attitude of the CSR, individuals tend to be overly critical ("It was AWFUL. I sound TERRIBLE!") or not critical enough ("That was PERFECT. I heard nothing wrong with that call."). Sometimes CSRs get highly self-critical about a minute issue that makes little difference to the customer experience while missing larger behavioral elements that would impact the customer. Even with self-assessment, CSRs often need help interpreting what they are hearing.
  • Because individuals are so focused on their voice and their own performance, they tend to be blind to the larger policy or procedural issues that can be mined from QA calls by a more objective analyst who is trained to look at the bigger picture.

Self-assessment has its place as part of the quality process, but our experience tells us that its strength lies in the training end of the program. If your QA program requires meaningful and objective data, then a more objective analyst is required.

A Hybrid Approach to QA

Mixing strengths. Many companies have discovered that having just the supervisors analyzing calls and providing coaching does not have the desired effect. Likewise, those who put all their QA eggs in the basket of an internal QA analyst/team or a 3rd party provider find themselves wanting greater impact.

That's why many will create a hybrid program to leverage the strengths of each approach. Here are three common hybrid approaches our team sees and recommends in the right circumstances:

  • Internal/External. In this approach, the company uses internal analysts and coaches on a day-to-day basis, but utilizes an external third party to provide periodic, comprehensive assessment. Our team will often measure a significantly greater number of behaviors on a periodic basis as a reality check, but then help the client’s internal team make sure they are focused on the proper, albeit smaller, list of crucial behaviors. This hybrid can save time, avoid wasted resources and provide a continual source of objective feedback that can help you make strategic improvements.
  • Supervisor/QA. The internal hybrid approach also divides duties to maximize efforts. The QA team, who is tasked with focusing on service quality, provides an on-going, detailed analysis of calls. The supervisors, who have limited time and resources to do call/data analysis, continue to monitor/coach calls, but listen for a short list of inconsistent behaviors unearthed by the Quality Team's detailed assessment.
  • Analyst/Coach. Some teams get mired in the belief that they must coach the CSR on every call analyzed. Others analyze a ton of calls and do nothing with them. The key is to find the right point of tension between analysis and coaching. As long as you have a statistically valid and manageable scale, you can have an individual or team who analyzes a bunch of calls to provide you with good data, while another person who is trained in coaching takes the data and a few call examples to provide the feedback.

The key to any hybrid approach:

Creative Commons photo courtesy of Flickr and curious gregor