Category: Call Center Issues

The Social Media Buzz: Time for Decaf?

I was part of a great ACCP event last week sponsored by Avtex and hosted by Pella Corporation at their headquarters. Spindustry and their clients from Omaha Steaks gave a wonderful presentation on monitoring and responding to customers through social media. Then, this morning, the Wall Street Journal dedicated an entire section to the subject of Social Media and IT.

In case you’ve had your head buried in the sand for the past year or two, the buzz in the call center world is currently “social media.” The very mention of the term seems to get call center personnel wound up like they’ve just swigged a triple-shot-chocolate-sugar-bomb-espressiato with a Red Bull chaser. Everyone wants to talk about it. The big call center conferences have spent the past two years scrambling to fill their keynotes and workshops with social media gurus, how-tos, and software vendors. All the buzz has prompted great conversation with clients and colleagues.

For years, I’ve been advocating that every client listen to what customers are saying on the internet and through social media outlets. There is a huge leap, however, between keeping an ear open and diving into a full-scale social media task force within your customer service team, complete with the latest, greatest social media monitoring software. One of the questions that came up in the ACCP meeting last week was whether our group was doing Customer Satisfaction research for customers who use social media to contact a client company. The reality is that, for most of our clients, the number of customers using social media as a means of communication is still very small. So small, in fact, that they must be regarded as outliers and not representative of the general population of customers.

That does not mean social media will not grow in importance and influence. It definitely is growing in both (though how far it will grow, and how influential it will become, remain open questions). Nor does it mean that social media is not a critical piece of the marketing and customer service picture for some companies. I simply want to make the point that the time, energy and resources an individual company invests in social media should be commensurate with how many of its customers are actively engaged in the medium. Our group is helping some clients determine that very fact. Investing a little money in a survey to find out how important social media is to their customer population as a whole will help them wisely steward their resources when making an investment in their overall social media strategy. I am beginning to fear that clients will throw a lot of money and resources at engaging a small number of customers in the social media arena while a much larger segment of customers is still encountering significant service issues through traditional channels (as boring and old school as those channels may be).

In the meantime, I’m sure the social media buzz will continue unabated. In the call center industry there always seems to be a buzz wherever there is software, hardware and/or workshops to sell. Please do not misunderstand me. I’m not against social media in any way. I’m a blogger, tweeter, texter and Facebook junkie. I think social media is great, and I have led the charge in getting clients to “listen” to what customers are saying via social media. Social media is here to stay and will continue to evolve. I am, however, dedicated to helping my clients make wise, measured decisions when it comes to their customers and their resources. So, when it comes to the social media buzz, make mine decaf, please. Remember, there was a lot of buzz about Betamax, too.

Things You Learn Capturing Calls

As the QA provider for some of our clients, c wenger group employs a small group of dedicated specialists whose job it is to weed through all of the phone calls recorded by the client’s recording software, determine which calls are usable for analysis, and assign them to the appropriate call analysts. Using different people for capturing and assigning ensures that those tasked with analyzing the calls don’t give in to the temptation of selecting only shorter calls, good calls or easy calls for analysis (and thus bias the sample). Most companies confine the QA program to what happened within a phone call, but a quick analysis of the call sample for a given agent, or group of agents, can be very revealing. Here are a few examples of issues our call capturers have brought to our attention:

  • One CSR had an average number of calls for their position, but 95 percent of the calls were from family and friends.
  • Another agent who worked a territory of regular customers checked his voice mail several times an hour but rarely took or made a call. We suspected that he was choosing not to answer the phone, checking the voice mail, then responding to the customer via e-mail as a way of avoiding actually talking to customers.
  • One group of sales agents simply weren’t making any of the sales calls with which they were tasked. Either the recording software wasn’t working or all of the cold calls they manually recorded on their daily sales call log were…well, you get the idea.

Sometimes the built-in accountability that comes from a QA team simply trying to identify calls for analysis can provide ROI by identifying opportunities to increase productivity, or at least by surfacing and addressing the lack of productivity you may discover.
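
For the technically curious, here’s a minimal sketch of the capture-and-assign separation described above, in Python. Everything in it is hypothetical illustration (the RecordedCall fields, the usability filter, the round-robin hand-off), not our group’s actual tooling:

    import random
    from dataclasses import dataclass

    @dataclass
    class RecordedCall:
        call_id: str
        csr: str
        duration_seconds: int
        audible: bool  # did the recording software capture usable audio?

    def capture_and_assign(calls, analysts, sample_size, seed=None):
        """Filter usable calls, draw a random sample, and hand the sample
        out round-robin so analysts never cherry-pick their own calls."""
        rng = random.Random(seed)
        # Arbitrary usability filter, purely for illustration.
        usable = [c for c in calls if c.audible and c.duration_seconds > 30]
        sample = rng.sample(usable, min(sample_size, len(usable)))
        assignments = {analyst: [] for analyst in analysts}
        for i, call in enumerate(sample):
            assignments[analysts[i % len(analysts)]].append(call)
        return assignments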

Self-Serve Success Means Changing Metrics

Just read a great post over at Customer Experience Crossroads reminding us all that the boom in customer self-serve options means that a greater percentage of the calls which do get through to live agents tend to be the more complex ones.

This is a crucial thing to remember as call center managers, supervisors and QA analysts monitor and set metrics such as Average Call Time (ACT) or Average Handle Time (AHT). If you see your ACT and AHT numbers creeping up, do a little homework before you bring Thor’s Hammer crashing down on your beleaguered Customer Service Representatives (CSRs).

  • Check your self-serve channels to monitor usage and, if possible, how customers are using them. We have a client whose self-serve IVR usage has gone through the roof, with customers accessing basic account information that used to mean calls of 60-90 seconds in length. Offloading these “short” calls means the average call getting through to a CSR is going to be longer.
  • Use your QA program to track customer and call type, or do a quick statistical survey of calls to get good data. By tracking the reason customers are calling, you can link it to average call time by call type and find out which call types drive the highest ACT/AHT. Those become the targets for improvement. (Warning: call data driven by CSR self-reporting is usually worthless. CSRs are notorious for not coding calls or for choosing the wrong codes. Don’t waste their time or yours.) A minimal sketch of this kind of analysis follows the list.
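
As promised, here’s a minimal sketch of the call-type analysis from the second bullet, assuming you already have calls tagged with a reason code by your QA team. The field names and sample data are invented for illustration:

    from collections import defaultdict

    def aht_by_call_type(calls):
        """calls: iterable of (call_type, handle_time_seconds) pairs.
        Returns call types sorted by average handle time, longest first."""
        totals = defaultdict(lambda: [0, 0])  # call_type -> [sum, count]
        for call_type, seconds in calls:
            totals[call_type][0] += seconds
            totals[call_type][1] += 1
        averages = {t: s / n for t, (s, n) in totals.items()}
        return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)

    # Invented sample data: which call types drive AHT up?
    sample = [("billing dispute", 420), ("billing dispute", 380),
              ("password reset", 95), ("order status", 70),
              ("warranty claim", 610)]
    for call_type, avg in aht_by_call_type(sample):
        print(f"{call_type}: {avg:.0f} sec average handle time")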

QA Today: The Human Element

A growing number of companies are scrapping internal call monitoring programs and Quality Assessment initiatives. One noticeable trend is the shift toward after-call satisfaction surveys to replace traditional call monitoring. In most cases, the customer is asked to rate their satisfaction with the agent and/or the resolution of the issue. In some cases, customers can leave comments for the Customer Service Representative (CSR). I’ve heard of companies who use the satisfaction ratings from these post-call surveys as their only service quality metric.

From a management perspective, this tactic has all sorts of budgetary, productivity and managerial upside. In effect, you automate the process. Let the IVR handle the survey, let your software spit out a report that gets emailed to the CSR and supervisor. If customers are happy, then the company is happy. You only deal with CSRs who get consistently poor ratings.

Sounds like a dream. So, what’s the problem?

  • Bias. Post-call IVR surveys are rife with all sorts of response bias. You’re not getting an objective, random sample of customer feedback. You’re typically getting feedback from customers who are really happy, really unhappy, or who like entering the survey sweepstakes. (A toy simulation of this follows the list.)
  • You get what you measure. If a CSR knows that they simply have to get good ratings then they will make sure they get good ratings. This might include giving away the company store to ensure blissful customers or badgering customers to give them good ratings (e.g. “I might lose my job if I get bad ratings.”). You might never know this, however, because you’re not listening to the calls.
  • No actionability. One of the most critical pieces you miss when relying on customer satisfaction as your lone QA metric is actionability. So, customers aren’t satisfied with a particular agent. Typically, there’s no objective data to help that CSR know what he/she is doing that dissatisfies customers. You might pick up a few ideas from anecdotal messages customers leave, but it’s certainly not an objective measurement. You could coach your CSR to focus on a particular behavior based on one or two irate customers who leave a post-call tirade, but completely miss some critical service skills that the CSR needs to address to consistently improve the customer experience.
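
To make the bias point concrete, here’s a toy simulation. The response rates are pure invention; the point is only that when the extremes respond more often than the middle, the survey average stops telling you much about the average caller:

    import random

    rng = random.Random(42)
    # True satisfaction of 10,000 callers, uniform across a 1-5 scale.
    true_scores = [rng.randint(1, 5) for _ in range(10_000)]

    # Invented response rates: the angry and the delighted respond;
    # the merely-satisfied middle mostly hangs up on the survey.
    response_rate = {1: 0.40, 2: 0.05, 3: 0.02, 4: 0.05, 5: 0.20}
    responses = [s for s in true_scores if rng.random() < response_rate[s]]

    print(f"true mean satisfaction: {sum(true_scores) / len(true_scores):.2f}")
    print(f"survey-reported mean:   {sum(responses) / len(responses):.2f}")
    # The survey mean drifts well away from the true mean because the
    # respondents are self-selected, not randomly sampled.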

In an era in which technology is touted as a cure for every business problem, it’s easy to want to flip a switch and have QA reports automatically generated and sent out. However, great Customer Service is still largely a human enterprise conducted by human beings with a wide range of education levels, skills, experience and personalities. The best way to address human behaviors is with human assessment and human interaction. It may be messy at times, but it can be done efficiently, done successfully, and done well.

QA Today: Pondering Some Foundational Thoughts

This is the first part of a series of posts regarding the state of Quality Assessment (QA) in the Call Center or Contact Centre.

I’ve been on a sabbatical of sorts for a few months. My apologies to those who’ve missed my posts and have emailed me to see if I’m okay. We all need a break from time to time, and after almost four years I gave myself a little break from posting. While on sabbatical, I’ve been watching the trends in the call center industry and, in particular, what others have been saying about Quality Assessment (QA). I’m finding a sudden anti-QA sentiment in the industry. One client mentioned that the call center conference she recently attended had no sessions or workshops about QA. Another client sent me an article bemoaning the failure of QA and calling for it to be “modernized.” At the same time, I’m hearing about companies who are shutting down their QA operations and turning to after-call surveys and customer satisfaction metrics to measure agent performance.

I’ve been in this industry for almost twenty years, and I’d like to take a few posts to offer my two cents’ worth to the discussion, though more and more I feel like a voice crying in the wilderness. First, I’d like to make a couple of general observations as a foundation for what I’m going to share in subsequent posts.

  • QA is a relatively new discipline. It has only been in the past 15-20 years that technology has allowed corporations to easily record interactions between their customers and their agents. In even more recent years, the profusion of VoIP technology in the small to mid-sized telephony markets has spread that ability into almost every corner of the marketplace. Suddenly, companies have this really cool ability to record calls and no idea what to do with it. Imagine handing an Apple iPhone to Albert Einstein. Even the most intelligent man is going to struggle to quickly and effectively use the device when he has no experience or frame of reference for how it might help him. “It can’t be that hard,” I can hear the V.P. of Customer Service say. “Figure out what we want them to say and see if they say it.” The result was a mess. Now, I hear people saying that QA is a huge failure. This concerns me. I’m afraid a lot of companies are going to throw the QA baby out with the bathwater on the strength of trending industry tweets rather than investing in how to make QA work effectively for them.
  • We want technology to save us. We are all in love with technology. We look to technology to help us do more with less, save us time, and make our lives easier. We like things automated. We have the ability to monitor calls and assess agents because technology made it possible. Now I’m hearing cries from those who’d like technology to assess the calls for us, provide feedback for us, and save us from the discomfort of having to actually deal with front-line agents. This concerns me as well. If there’s one thing I’ve learned in my career, it’s this: wherever there is a buck to be made in the contact center industry, you’ll find software and hardware vendors with huge sales budgets, slick sales teams, and meager back-end fulfillment. They will promise you utopia, take you for a huge capital investment, then string you along because you’ve got so much skin in the game. Sometimes the answer isn’t more, better or new technology. Sometimes the answer is figuring out how to do the right thing with what you’ve got.
  • The industry is often given to fads and comparisons. Don’t get me wrong. There’s a lot of great stuff out there. We all have things to learn. Nevertheless, I’m fascinated when I watch the latest buzzword, bestseller and business fad rocket through the industry like gossip through a junior high school. Suddenly, we’re all concerned about our Net Promoter Scores, and I’ll grant you that there’s value in tracking how willing your customers are to tell others about your business (a quick sketch of the NPS arithmetic follows this list). Still, when your NPS heads south it’s going to take some work to figure out what’s changed in your service delivery system. If you want to drive your NPS up, you have some work ahead of you to figure out what your customers expect and then get your team delivering at or above expectation. And, speaking of junior high, I also wonder how much of the felt QA struggle comes from spending too much time comparing ourselves to everyone else rather than doing the best thing for ourselves and our customers. I’ve known companies who ended up with mediocre QA scorecards because they insisted on fashioning their standards after the “best practices” of 20 other mediocre scorecards from companies who had little in common with theirs.
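
Since the NPS arithmetic fits in a few lines, here’s the standard calculation: on the 0-10 “how likely are you to recommend us?” scale, 9-10 are promoters, 7-8 are passives, 0-6 are detractors, and NPS is the percentage of promoters minus the percentage of detractors. The ratings below are made up:

    def net_promoter_score(ratings):
        """ratings: 0-10 'likelihood to recommend' survey responses."""
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100.0 * (promoters - detractors) / len(ratings)

    ratings = [10, 9, 9, 8, 7, 7, 6, 5, 10, 9]
    # 5 promoters, 2 detractors out of 10 responses -> +30
    print(f"NPS: {net_promoter_score(ratings):+.0f}")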

Know that when I point a finger here, I see three fingers pointing back at me. We’re all human, and I can see examples in my own past when I’ve been as guilty as the next QA analyst. Nevertheless, I’m concerned that the next fad will be for companies to do away with QA. I know that there is plenty of gold to mine in an effective QA process for those companies willing to develop the discipline to do it well.

Are You Measuring What You Are, or What You Want to Be?

When it comes to developing a scale by which you measure your company’s phone calls, there are a number of ways to approach it. I always encourage clients to start with a few foundational questions:

  • “What is our goal, and what do we want the outcome to be?”
  • “What are we trying to achieve?”
  • “Who are we primarily serving with the scale/scorecard/form/checklist?”

The process of deciding what behaviors you will listen for, what you expect from your Customer Service Representatives (CSRs), and how high you set the standard can become a tangled web. When you involve many voices from within the organization, each with their own agendas and ideas, the task can slide into conflict and frustration very quickly. By defining up front what you want to accomplish, you can always take a conflict about a particular element back to the question: “How is this going to help us achieve the goal we established?”

Let me summarize some general observations about QA scorecards and programs I’ve seen which represent different organizational goals.

  • Reaching for the stars. Some companies set a high standard in an effort to be the best of the best. They know that their CSRs are human and will never be perfect, but they set the bar at a level which will require conscious effort to achieve. CSRs are expected to continuously improve their service delivery. In these cases, the behaviors measured by the QA form can be exhaustive and ideal.
  • Maintaining the standard. Some organizations don’t care about being the best of the best; they just want to maintain what they’ve deemed an acceptable standard. The scale rewards the vast majority of CSRs with acceptable scores while identifying the relatively few CSRs who could hurt the organization and likely need to find another job.
  • Motivating the troops. CSR motivation and encouragement is the focus of some QA programs. Scorecards designed in these situations tend to look for and reward any positive behaviors the CSRs demonstrate on a consistent basis while minimizing expectations or negative feedback. In these cases, the elements of the QA form gravitate towards easily identifiable and reward-able behaviors.
  • Customer Centric. Some call centers focus on designing their QA evaluation form around what their customers want and expect. They use research data to identify the behaviors which drive their customers’ satisfaction. The Quality Assessment checklist is designed to create a snapshot of how the individual or team is performing in the customer’s mind. These types of evaluations can vary depending on the market and customer base.
  • Going through the motions. I’ve encountered some companies who really don’t care what they are measuring or how they are measuring it. They just want to have a program in place so that they can assure others (senior management, shareholders, customers, etc.) that they are doing something about service quality. In this case, the scorecard doesn’t really matter.

Some quality programs and scorecards struggle because they haven’t clearly defined what they want and what they are trying to achieve. Different individuals within the process have competing goals and motivations. Based on my experience, I recommend some approaches more than others and have my own beliefs about which are best. Nevertheless, I’ve come to accept that most of the differing approaches can be perfectly appropriate for certain businesses in particular situations (though I’d never recommend the “Going through the Motions” approach, as it tends to waste time, energy and productivity). The key is to be honest about your intentions and clear in your approach. It makes the rest of the process easier on everyone.

A Little Consideration Goes a Long Way

Our group is currently working with one of our clients on a major overhaul of their quality program. With projects of this size, it is natural for things to take longer than planned. In a meeting a few weeks ago, the question came up of when we would go “live” with the new Quality Assessment (QA) scorecard, since the original deadline was fast approaching. The initial response was “we don’t see any reason not to implement as scheduled and start evaluating calls right away.” It did not take long, however, for the team to realize that it would be inappropriate to start evaluating Customer Service Representatives (CSRs) before they had even told the CSRs what behaviors the new QA scale evaluated. To their credit, the quality team and management chose to miss their deadline, push back implementation, and give their front-line associates the opportunity to learn what the scorecard contained before they began evaluating the agents’ phone calls with it.

In retrospect, it seemed an obvious decision. Why wouldn’t you give your own associates the consideration of viewing the QA criteria and the opportunity to change any necessary behaviors before you analyze their calls? As I thought about it on my drive home, I realized how often I encounter a lack of consideration in the corporate contact center.

  • Marketing drops a promotion that will generate a spike in calls without ever consulting the contact center or telling them what the promotion contains.
  • CSRs are given an ultimatum to cut “talk time” or “average handle time” without anyone taking the time to find tactical ways to do so (like new shortcuts to commonly requested information, etc.).
  • Policies or procedures are changed, then associates are held accountable before the change has been clearly communicated.
  • IT procures and installs telephony, IVR, call recording, or other system software without consideration of how it will affect the call center’s ability to serve customers.
  • A supervisor or QA team simply gives a CSR his or her “score” (e.g. “You got an 82 on your QA this month”), without any clear documentation regarding which behaviors they missed or a conversation/coaching about how the CSR can alter behavior and improve.
  • QA criteria are so broad and ill-defined that a “QA Nazi” supervisor can use them to beat CSRs into submission with his or her own impossible expectations, while a “QA Hippie” supervisor can use the same criteria to boost the team’s self-esteem by giving them all “100”s (turning the zeroes into smiley faces, of course).

As we near year end and look toward setting goals for 2011, perhaps one goal for all managers should be to identify areas of our process in which we act without consideration for those our actions will affect.

Video Clip: Do What Your Customers Love

In this video clip, Tom Vander Well presents to a small group of front-line Call Center Customer Service Representatives (CSRs). He illustrates why simply doing what customers expect may not result in higher levels of customer satisfaction. If you want to improve customer satisfaction, you have to consistently demonstrate on the phone the behaviors and service skills that customers love (and may not expect).

With On-Line Chat, a Few Extra Words Go a Long Way

Our group goes beyond call monitoring to provide Service Quality Assessment for a client’s e-mail and/or on-line chat communication. The process is virtually the same: we define the key behaviors or service elements that will consistently meet and exceed the customer’s expectations and drive increased satisfaction. E-mail and chat are important, but often overlooked, communication channels, and your correspondence there can make (or break) customer satisfaction just like a phone call. Take my experience today, for example:

Before we were married, I began sponsoring a child in a third world country through a charitable organization. It has been a great experience, and my support quickly became a joint venture as my wife got involved. Her name, however, had never been added to the account. So, while making an on-line donation, I noticed that there was an on-line chat option and figured it was a good time to add her name to the account.

Here is a transcript (names changed):

Mitzi: Thank you for contacting ORGANIZATION. How may I assist you today?
Tom: Hi Mitzi. I’m wondering how I can get my wife’s name added to my account. I started sponsorship before I was married, but now we are both involved in sponsoring our child and I’d like her name included.
Mitzi: I am happy to assist you with that!
(I feel like there was about a 4-5 minute wait here) 
Mitzi: What is your wife’s name?
Tom: Wendy.
(I feel like there was another 3-4 minute wait)
Mitzi: I am working on this, just a moment.
(I timed this wait at about 6 minutes)
Mitzi: I submitted the paperwork on this. I appreciate your patience.
Tom: Great. Thanks!
(Waited briefly for a response)
Tom: Do I need to do anything else? How long does it take?
Mitzi: You should see the change gradually… in the next 4 to 6 weeks everything should have her name on it.
Tom: Wonderful. Thanks for your help!
Mitzi: Thank you for chatting with me. I welcome your feedback. Please click here to complete a 15 second survey.

The on-line rep was pleasant, professional and did a nice job. My issue, as far as I know, has been resolved. It was a good experience, but it wasn’t a great experience. There are a few key things that would have left me far more satisfied (after the list, I’ve sketched how they might roll up into a simple chat scorecard):

  • Be sensitive to my time. Our customer satisfaction research shows that time-related elements (e.g. quickness in reaching a rep, answers without being placed on hold, or timeliness of follow-up) are a growing driver of customer satisfaction across many customer segments. There were long gaps between responses that left me wondering what was happening on the other end. A quick statement to let me know what was going on, or to give me a time frame, would have eased my anxiety and impatience.
  • Don’t just tell me what you did; tell me what I can expect. The on-line rep told me that she submitted the paperwork, but I had to guess what that meant. My initial thought was that I might have to wait on-line while it was processed. Because she didn’t anticipate my questions, I was left having to pull the information out of her.
  • Courtesy and friendliness are sometimes more important in text than on the phone. CSRs in a call center have the inflection of their voice to communicate a courteous tone, but written communication can take on an abrupt feeling when it’s void of courtesy. Adding a “please” when making a request or using the customer’s name (especially when they use yours) can turn a black-and-white exchange into a pleasant conversation.
  • Make sure you’ve answered all my questions. At the end of the chat I was left wondering if it was over. Asking if I had any other questions would have clued me in that the issue was resolved while offering to go the extra mile and help with other needs.
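
For what it’s worth, here’s a toy sketch of how these four elements might roll up into a simple chat scorecard. The element names and weights are invented for illustration; a real scorecard should come out of customer research:

    # Hypothetical chat-QA scorecard; weights sum to 100.
    CHAT_SCORECARD = {
        "acknowledged waits / set time expectations": 30,
        "told the customer what to expect next": 25,
        "courtesy: please, thank you, customer's name": 25,
        "confirmed all questions were answered": 20,
    }

    def score_chat(observed):
        """observed maps element -> True/False for a single chat."""
        return sum(weight for element, weight in CHAT_SCORECARD.items()
                   if observed.get(element))

    # My chat above would have earned roughly:
    print(score_chat({
        "acknowledged waits / set time expectations": False,
        "told the customer what to expect next": False,
        "courtesy: please, thank you, customer's name": True,
        "confirmed all questions were answered": False,
    }))  # -> 25 out of 100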

Here’s the transcript again, but I’ve rewritten it the way I would have appreciated experiencing it:

Mitzi: Thank you for contacting ORGANIZATION. How may I assist you today?
Tom: Hi Mitzi. I’m wondering how I can get my wife’s name added to my account. I started sponsorship before I was married, but now we are both involved in sponsoring our child and I’d like her name included.
Mitzi: I am happy to assist you with that! And, congratulations on getting married! Please bear with me. It will take a few minutes to access your account and the appropriate forms.
Mitzi: Thanks for waiting, Tom. May I please have your wife’s name?
Tom: Wendy.
Mitzi: Thank you. It will take me 5 minutes or so to fill out the appropriate forms.
Mitzi: Sorry for the delay. I am still working on this, just a moment.
Mitzi: I appreciate your patience. I submitted the paperwork on this. You should see the change gradually… in the next 4 to 6 weeks everything should have her name on it.
Tom: Great. Thanks!
Mitzi: Any other questions I can answer for you, Tom?
Tom: No. Wonderful. Thanks for your help!
Mitzi: Thank you for chatting with me, and thank you and Wendy for sponsoring a child. I welcome your feedback. Please click here to complete a 15 second survey.

A few extra words and sentences, properly placed, can turn a cut-and-paste chat experience into one that is personable, friendly, professional and polite.
