Addressing Team Differences in QA

Standardization can sometimes be a bit of a holy grail for corporations. We have several corporate clients who have multiple divisions and teams across one or more contact centers. It is understandable that a company would want to have a common scorecard by which Customer Service Representatives (CSRs) are measured. This not only ensures equitable performance management, but also helps drive a unified brand to a wide customer base.

All teams are not, however, the same. There are differences in function and procedure necessitated by a company’s business. Having diverse business functions sometimes drives the belief that there must be completely different QA programs or forms. Our experience is that companies can create standardization while addressing the internal differences.

Scorecard Considerations

A common way that companies approach unique business functions across teams is to divide the QA scorecard. One part of the form addresses common soft skills and behaviors expected of all CSRs to communicate the company brand and deliver a consistent customer experience. The other part addresses procedural or technical aspects of the CSR’s job which may be unique to each team.
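To make the split concrete, here is a minimal sketch in Python of how such a two-part scorecard might be structured. The section names and behaviors are hypothetical examples for illustration, not any particular company’s criteria:

```python
# A minimal sketch of a two-part QA scorecard: a common section shared by
# every team, plus a team-specific section. All names are illustrative.
from dataclasses import dataclass

@dataclass
class ScorecardSection:
    name: str
    behaviors: list[str]

@dataclass
class Scorecard:
    common: ScorecardSection         # soft skills expected of all CSRs
    team_specific: ScorecardSection  # procedural/technical items per team

billing_card = Scorecard(
    common=ScorecardSection("Brand & Soft Skills", [
        "Greeted the customer using the standard opening",
        "Used an appropriate, courteous tone",
        "Confirmed resolution before closing",
    ]),
    team_specific=ScorecardSection("Billing Procedures", [
        "Verified account identity per policy",
        "Explained charges accurately",
    ]),
)
```

Every team’s card shares the same common section, so scores stay comparable across the company, while the team-specific section carries the differences.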

The Social Media Buzz; Time for Decaf?

I was part of a great ACCP event last week sponsored by Avtex and hosted by Pella Corporation at their headquarters. Spindustry and their clients from Omaha Steaks gave a wonderful presentation on monitoring and responding to customers through social media. Then, this morning, the Wall Street Journal dedicated an entire section to the subject of Social Media and IT.

In case you’ve had your head buried in the sand for the past year or two, the buzz in the call center world is currently “social media.” The very mention of the term seems to get call center personnel wound up like they’ve just swigged a triple-shot-chocolate-sugar-bomb-espressiato with a Red Bull chaser. Everyone wants to talk about it. The big call center conferences have been scrambling for the past two years to fill their keynotes and workshops full of social media gurus, how-tos, and software vendors. All the buzz has prompted great conversation with clients and colleagues.

For years, I’ve been advocating that every client listen to what customers are saying on the internet and through social media outlets. There is a huge leap, however, between keeping your ear open and diving into a full-scale social media task force within your customer service team, complete with the latest, greatest social media monitoring software. One of the questions that came up in the ACCP meeting last week was whether our group was doing Customer Satisfaction research for customers who use social media to contact a client company. The reality is that, for most of our clients, the number of customers using social media as a means of communication is still very small. So small, in fact, that they must be regarded as outliers and not representative of the general population of customers.

That does not mean that social media will not grow in importance and influence. It definitely is growing in both (but how far will it grow? How influential will it become?). Nor does it mean that social media is not a critical piece of the marketing and customer service picture for some companies. I simply want to make the point that the time, energy and resources an individual company invests in social media efforts should be commensurate with how many of its customers are actively engaged in the medium. Our group is helping some clients determine that very fact. Investing a little money in a survey to find out how important social media is to their customer population as a whole will help them wisely steward their resources when it comes to making an investment in their overall social media strategy. I fear that clients will throw a lot of money and resources at engaging a small number of customers in the social media arena while a much larger segment of customers is still encountering significant service issues through traditional channels (as boring and old school as those traditional channels may be).
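For what it’s worth, the survey math here is simple. Below is a minimal sketch, in Python, of estimating what share of a customer base actually uses social media as a contact channel from a yes/no survey question; the sample numbers are made up for illustration:

```python
# A minimal sketch: estimate the share of customers who use social media
# to contact you, from a simple yes/no survey. All numbers are made up.
import math

sample_size = 400   # customers surveyed
said_yes = 24       # answered "I have contacted you via social media"

p = said_yes / sample_size
# Normal-approximation 95% confidence interval for a proportion
margin = 1.96 * math.sqrt(p * (1 - p) / sample_size)
print(f"{p:.1%} ± {margin:.1%} of customers use social media to reach us")
```

If that interval tops out in the single digits, a modest listening effort may be a wiser investment than a full-blown task force.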

In the meantime, I’m sure the social media buzz will continue unabated. In the call center industry there always seems to be a buzz wherever there is software, hardware and/or workshops to sell. Please do not misunderstand me. I’m not against social media in any way. I’m a blogger, tweeter, texter and Facebook junkie. I think social media is great, and I have led the charge in getting clients to “listen” to what customers are saying via social media. Social media is here to stay and will continue to evolve. I am, however, dedicated to helping my clients make wise, measured decisions when it comes to their customers and their resources. So, when it comes to the social media buzz, make mine decaf, please. Remember, there was a lot of buzz about Betamax, too.

Creative Commons photo courtesy of Flickr and thetrial

Self-Serve Success Means Changing Metrics

Just read a great post over at Customer Experience Crossroads reminding us all that the boom in customer self-serve options means that a greater percentage of the calls which do get through to live agents tend to be those which are more complex.

This is a crucial thing to remember as call center managers, supervisors and QA analysts monitor and set metrics such as Average Call Time (ACT) or Average Handle Time (AHT). If you see your ACT and AHT numbers creeping up, do a little homework before you bring Thor’s Hammer ringing down on your beleaguered Customer Service Representatives (CSRs).

  • Check your self-serve channels to monitor usage and, if possible, how customers are using them. We have a client whose self-serve IVR traffic has gone through the roof with customers accessing basic account information that used to mean calls of 60-90 seconds in length. Offloading these “short” calls means the average call getting through to the CSR is going to be longer.
  • Use your QA to track customer and call type, or do a quick statistical survey of calls to get good data. By tracking the reason customers are calling, you can begin to link it to average call time by call type and find out which calls drive the highest ACT/AHT (see the sketch after this list). Those become the targets for improvement. (Warning: call data driven by CSR self-reporting is usually worthless. CSRs are notorious for not coding calls or choosing the wrong codes. Don’t waste their time or yours.)
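Once you have call records with a reason code and a handle time, the ranking itself is trivial. Here is a minimal sketch in Python; the call types and durations are made up for illustration:

```python
# Minimal sketch: compute average handle time (AHT) by call type from a
# list of call records, then rank the types driving the highest AHT.
# Field names ("call_type", "handle_seconds") are illustrative.
from collections import defaultdict

calls = [
    {"call_type": "account balance", "handle_seconds": 75},
    {"call_type": "billing dispute", "handle_seconds": 540},
    {"call_type": "billing dispute", "handle_seconds": 480},
    {"call_type": "address change",  "handle_seconds": 120},
]

totals = defaultdict(lambda: [0, 0])  # call_type -> [total_seconds, count]
for call in calls:
    t = totals[call["call_type"]]
    t[0] += call["handle_seconds"]
    t[1] += 1

aht_by_type = {k: total / count for k, (total, count) in totals.items()}
for call_type, aht in sorted(aht_by_type.items(), key=lambda kv: -kv[1]):
    print(f"{call_type}: {aht:.0f} sec average handle time")
```

The types at the top of that list, not the overall average, are where coaching and process work will actually move the needle.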

QA Today: Pondering Some Foundational Thoughts

This is the first part of a series of posts regarding the state of Quality Assessment (QA) in the Call Center or Contact Centre.

I’ve been on a sabbatical of sorts for a few months. My apologies to those who’ve missed my posts and have emailed me to see if I’m okay. We all need a break from time to time, and after almost four years I gave myself a little break from posting. While on sabbatical, I’ve been watching the trends in the call center industry and, in particular, what others have been saying about Quality Assessment (QA). I’m finding a sudden anti-QA sentiment in the industry. One client mentioned that the call center conference she recently attended had no sessions or workshops about QA. Another client sent me an article that bemoaned the failure of QA and called for QA to be “modernized.” At the same time, I’m hearing about companies who are shutting down their QA operations and turning to after-call surveys and customer satisfaction metrics to measure agent performance.

I’ve been in this industry for almost twenty years, and I’d like to take a few posts to offer my two cents’ worth in the discussion, though more and more I’m feeling like a voice crying in the wilderness. First, I’d like to make a couple of general observations as a foundation for what I’m going to share in subsequent posts.

  • QA is a relatively new discipline. It has only been in the past 15-20 years that technology has allowed corporations to easily record interactions between their customers and their agents. In even more recent years, the profusion of VoIP technology in the small to mid-sized telephony markets has spread that ability into almost every corner of the marketplace. Suddenly, companies have this really cool ability to record calls and no idea what to do with it. Imagine handing an Apple iPhone to Albert Einstein. Even the most intelligent of men is going to struggle to quickly and effectively use the device when he has no experience or frame of reference for how it might help him. “It can’t be that hard,” I can hear the V.P. of Customer Service say. “Figure out what we want them to say and see if they say it.” The result was a mess. Now, I hear people saying that QA is a huge failure. This concerns me. I’m afraid a lot of companies are going to throw the QA baby out with the bathwater on the strength of trending industry tweets rather than investing in how to make QA work effectively for them.
  • We want technology to save us. We are all in love with technology. We look to technology to help us do more with less, save us time, and make our lives easier. We like things automated. We have the ability to monitor calls and assess agents because technology made it possible. Now I’m hearing cries from those who’d like technology to assess the calls for us, provide feedback for us and save us from the discomfort of having to actually deal with front-line agents. This concerns me as well. If there’s one thing I’ve learned in my career it’s this: wherever there is a buck to be made in the contact center industry you’ll find software and hardware vendors with huge sales budgets, slick sales teams, and meager back-end fulfillment. They will promise you utopia, take you for a huge capital investment, then string you along because you’ve got so much skin in the game. Sometimes the answer isn’t more, better or new technology. Sometimes the answer is figuring out how to do the right thing with what you’ve got.
  • The industry is often given to fads and comparisons. Don’t get me wrong. There’s a lot of great stuff out there. We all have things to learn. Nevertheless, I’m fascinated when I watch the latest buzzword, bestseller or business fad rocket through the industry like gossip through a junior high school. Suddenly, we’re all concerned about our Net Promoter Scores, and I’ll grant you that there’s value in tracking how willing your customers are to recommend your business to friends and colleagues. Still, when your NPS heads south it’s going to take some work to figure out what’s changed in your service delivery system. If you want to drive your NPS up, you have some work ahead of you to figure out what your customers expect and then get your team delivering at or above expectation. And, speaking of junior high, I also wonder how much of the felt QA struggle comes from spending too much time comparing ourselves to everyone else rather than doing the best thing for ourselves and our customers. I’ve known companies who ended up with mediocre QA scorecards because they insisted on fashioning their standards after the “best practices” of 20 other mediocre scorecards from companies who had little in common with theirs.

Know that when I point a finger here, I see three fingers pointing back at me. We’re all human, and I can see examples in my own past when I’ve been as guilty as the next QA analyst. Nevertheless, I’m concerned that the next fad will be for companies to do away with QA. I know that there is plenty of gold to mine in an effective QA process for those companies willing to develop the discipline to do it well.

Creative Commons photo courtesy of Flickr and striatic

A Little Consideration Goes a Long Way

Our group is currently working with one of our clients on a major overhaul of their quality program. With projects of this size, it is natural for things to take longer than planned. In a meeting a few weeks ago, the discussion came up about when we would go “live” with the new Quality Assessment (QA) scorecard, since the original deadline was fast approaching. The initial response was “we don’t see any reason not to implement as scheduled and start evaluating calls right away.” It did not take long, however, for the team to realize that it would be inappropriate to start evaluating Customer Service Representatives (CSRs) before they had even told the CSRs what behaviors the new QA scale evaluated. To their credit, the quality team and management chose to miss their deadline, push back implementation, and give their front-line associates the opportunity to learn what the scorecard contained before their calls were evaluated against it.

In retrospect, it seemed an obvious decision. Why wouldn’t you want to give your own associates the consideration to view the QA criteria and have an opportunity to change any necessary behaviors before you analyze their calls? As I thought about it on my drive home, I realized how often I find a lack of consideration in the corporate contact center.

  • Marketing drops a promotion that will generate a spike in calls without ever consulting the contact centre or telling them what the promotion contains.
  • CSRs are given an ultimatum to cut “talk time” or “average handle time” without anyone taking the time to assess and find tactical ways to do so (like identifying new shortcuts to commonly requested information, etc.).
  • Changing a policy or procedure, then holding associates accountable before it’s been clearly communicated.
  • IT procures and installs telephony, IVR, call recording, or other system software without consideration of how it will affect the call center’s ability to serve customers.
  • A supervisor or QA team simply gives a CSR his or her “score” (e.g. “You got an 82 on your QA this month”) without any clear documentation of which behaviors were missed or any conversation/coaching about how the CSR can alter behavior and improve (see the sketch after this list).
  • Having QA criteria so broad and ill-defined that a “QA Nazi” supervisor can use them to beat CSRs into submission with his or her own impossible expectations, while a “QA Hippie” supervisor can use the same criteria to boost the team’s self-esteem by giving them all “100”s (turning the zeroes into smiley faces, of course).
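To make the scoring bullet above concrete, here is a small sketch of the difference between a bare score and a coachable QA result. The field names and values are hypothetical, purely for illustration:

```python
# Sketch: a bare score vs. a coachable QA result. All names are hypothetical.

# Tells the CSR nothing actionable:
bare_result = {"csr": "J. Smith", "score": 82}

# Documents what was missed and what to do about it:
coachable_result = {
    "csr": "J. Smith",
    "score": 82,
    "behaviors_missed": [
        {"behavior": "Confirmed resolution before closing",
         "note": "Call ended without asking whether the issue was resolved."},
    ],
    "coaching_plan": "Review the closing checklist; role-play a close "
                     "in the next one-on-one.",
}
```

The number is the same in both records; only the second one gives the CSR a fair chance to change anything.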

As we near year end and are looking towards setting goals for 2011, perhaps one goal for all managers should be to identify areas of our process in which we act without consideration for those our actions will affect.

A CEO in the Call Center CSR’s Shoes


DirecTV’s CEO went on Undercover Boss and learned how difficult a CSR’s job can really be. Sometimes it’s good to walk a mile in your CSR’s shoes!

If the embedded video doesn’t display for you, you may view it here.

“Must Learn BALANCE, Daniel-san”

I had an interesting post come across my RSS feed this afternoon from Syed Masood Ibrahim, in which he presented the following statement:

Nothing frustrates me more than the waste associated with counseling, monitoring and inspecting the agent for improved performance. No organization can inspect in good service.

95% of the performance of any organization is attributable to the system and only 5% the individual. This challenges the modern attempts by many contact centers to focus attention on the agent. The problem is that the design of the work is so poor that an agent has little chance of being successful. Blaming the individual for a bad system is nonsense.

I agree with Mr. Ibrahim that some contact centers place an inordinate amount of blame on the CSR for failures in the service delivery system. His supposition is correct: if the system is broken, it doesn’t matter how nice your CSRs are to the customer; the customer is going to walk away dissatisfied.

With all due respect to my colleague, however, I must disagree that CSR performance is only 5% of the equation. I believe the opposite of Mr. Ibrahim’s supposition is also true: if you have a perfect system, but your CSR communicates with the customer in a less than appropriate manner, you still have a dissatisfied customer. I’ve had the privilege of working with some of the world’s best companies; companies who manage their systems with an exceptional degree of excellence. In each case, the system will only drive customer satisfaction so far. It is the CSR’s consistent world-class service that drives the highest possible levels of customer satisfaction, retention, and loyalty.

In the famed words of Mr. Miyagi, “Must learn balance, Daniel-san.”

A good quality program will identify and address both system-related and CSR-related issues that impede service quality. When our group performs a Service Quality Assessment for our clients, our analysts are focused on the entire customer experience. That includes both the agent’s communication skills and the system-related problems that stand in the way of the customer receiving resolution to his or her issue. The client’s management team receives an analysis of both CSR skill issues and policy/procedural issues that need to be addressed if the customer experience is going to improve.

The pursuit of service excellence requires attention to every part of the service delivery system.

Creative Commons photo courtesy of Flickr and bongarang

We Can Record Calls. Now What Do We Do?

One of the common frustrations I’ve witnessed in the past 16 years of working with clients and their Quality Assessment (QA) programs happens shortly after a company invests in new phone technology or call recording and QA software. The technology is installed and the switch is flipped. They have this great new software tool with bells and whistles and… no idea how to get started.

Here are three things you’ll need to do:

  • Decide how you are going to report the data and information you generate, and how you want to utilize them. Do you simply want data to show the VP that you’re doing something to measure and improve service? Are you going to use the results for individual performance management and incentives at the agent level? Do you simply want to provide call coaching to your front line agents? Do you want to leverage what you’re learning to impact training, marketing and sales? What you want out of your QA program is the first thing you need to determine because it affects how methodical and critical you need to be in subsequent steps of setting up your program.
  • Decide on and list the specific behaviors you want to listen for (a minimal scoring sketch follows this list). What is most important to your customers when they call (a small post-call survey could help you here)? What specific behaviors are important to representing your brand? What are the important touch-points at different stages of a common customer interaction? It’s easy to get caught up in the myriad of exceptional situations, but when setting up your scale/scorecard/checklist/form you need to focus on your most routine type(s) of call(s).
  • Decide who is going to monitor and analyze the calls. Many companies use the front-line supervisor to monitor the calls. Others go with a dedicated QA analyst. Some companies hire a third party (that’s one of the services our group provides). There are pros and cons to each approach, and many companies settle on a hybrid approach. It’s important to think through which approach will work best for you and your team.
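As promised above, here is a minimal sketch of what the “list the behaviors” step can boil down to in practice: a weighted checklist and a scoring function. The behaviors and weights are illustrative only, not a recommended standard:

```python
# Minimal sketch of scoring one call against a behavior checklist.
# Behaviors and weights are illustrative, not a recommended standard.

checklist = {
    "Standard greeting used": 10,
    "Customer issue restated accurately": 20,
    "Correct procedure followed": 40,
    "Resolution confirmed with customer": 20,
    "Professional close": 10,
}

def score_call(observed: dict[str, bool]) -> float:
    """Return 0-100: sum of weights for behaviors the analyst observed."""
    return sum(w for behavior, w in checklist.items() if observed.get(behavior))

print(score_call({
    "Standard greeting used": True,
    "Customer issue restated accurately": True,
    "Correct procedure followed": True,
    "Resolution confirmed with customer": False,
    "Professional close": True,
}))  # -> 80
```

However you weight it, the point is that every item on the checklist traces back to something your customers or your brand actually care about.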

These are only a few broad bullet points to help focus your thinking, but they provide a rough outline of critical first steps. All the best to you as you set out on your QA journey.

If you need help, don’t hesitate to contact me. It’s what we do.

New Journal of Contact Centre Mgmt Launched

Received word this morning of a new publication by Henry Stewart Publishing. The Journal of Contact Centre Management boasts an international Editorial Board.

A sample of articles from the table of contents:

Problems with partial solutions in contact centres
Peter Cochrane, Founder, Cochrane Associates

Refocusing resources in a competitive economy: Sales through service
Kerry Weiner Elkind and Andy Elkind, Co-Founders, The Elkind Group

Providing consistent and equitable access through re-thinking your business processes: A health sector case study
Brett Dean, Director, Communio

Innovation in a best practice contact centre
Richard J Snow, VP & Research Director Customer and Contact Center, Ventana Research

Should CSRs Perform Their Own QA Assessment?

Our good friend at Call Centre Helper recently responded to this series of posts on who should do the Quality Assessment (QA) in the contact center, and suggested we've missed two alternatives: CSR self-assessment and technology-based speech analytics. I think both of these options deserve consideration.

Let's start with a post about CSR self-assessment. Many call centers allow or require their Customer Service Representatives (CSRs) to listen to and assess their own calls. It can be a great training tool:

  • Individuals can listen without the pressure of feeling someone else's judgment. In call coaching situations, some CSRs are so nervous about having someone listening to their calls or judging their performance that they tend to miss the point of the process. By listening alone to their calls, a CSR can sometimes focus in on what took place in the call without these interpersonal distractions.
  • We tend to be our own worst critics. Individuals will regularly hear things that others don't. It is quite common in coaching sessions for CSRs to point out things they could have improved that didn't even occur to me. By having CSRs critique themselves, they may listen more critically than even an objective analyst, and that can be a huge motivator for some CSRs.
  • Having the CSR go through and assess the call using the QA scorecard engages them with the process and forces them to consider the behavioral standards. Many QA programs create contention simply because CSRs do not understand the criteria with which their conversations are analyzed, and don't understand how the process works. When a CSR sits down with the scorecard and analyzes their own calls, it forces them to think through how they performed on each behavioral element.

You'll notice I wrote that self-assessment is a great training tool. I don't believe that self-assessment is a great way to approach your QA program if you want a reliable, objective assessment of what took place on the phone. Self-assessment has its drawbacks:

  • Having people grade themselves is inherently biased. If you want a reliable and statistically valid measurement of what's happening on the phone in your call center, you need someone other than the person who took the call to analyze the call.
  • Based on the personality and attitude of the CSR, individuals tend to be overly critical ("It was AWFUL. I sound TERRIBLE!") or not critical enough ("That was PERFECT. I heard nothing wrong with that call."). Sometimes CSRs get highly self-critical about a minute issue that makes little difference to the customer experience while missing larger behavioral elements that would impact the customer. Even with self-assessment, CSRs often need help interpreting what they are hearing.
  • Because individuals are so focused on their voice and their own performance, they tend to be blind to the larger policy or procedural issues that can be mined from QA calls by a more objective analyst who is trained to look at the bigger picture.

Self-assessment has its place as part of the quality process, but our experience tells us that its strength lies in the training end of the program. If your QA program requires meaningful and objective data, then a more objective analyst is required.
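If you want to check that claim against your own data, one simple approach is to score a sample of calls twice, once by the CSR and once by an analyst, and measure the gap. A minimal sketch in Python, with made-up scores:

```python
# Sketch: quantify self-assessment bias by comparing self-scores with an
# analyst's scores on the same calls. The score pairs are made up.
from statistics import mean

paired_scores = [   # (self_score, analyst_score) per call
    (95, 80), (60, 78), (100, 85), (90, 72), (70, 75),
]

gaps = [self_s - analyst_s for self_s, analyst_s in paired_scores]
print(f"mean gap: {mean(gaps):+.1f} points")           # direction of bias
print(f"mean absolute gap: {mean(abs(g) for g in gaps):.1f} points")
```

A consistently positive mean gap suggests agents grade themselves generously; a large mean absolute gap in either direction means self-scores can't stand in for an objective measurement.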