Category: Call Center Issues

You Can’t Fix What You Don’t Know is Broken

I’m working with several new teams for a particular client. It’s always a bit of a sticky wicket when I show up for the first time. The other day I walked into the office of a department manager who’d been ducking me for weeks: unanswered e-mails, unreturned voicemails, and missed appointments. My group has been hired by the executive team to do a pilot assessment of his department’s service, and he wasn’t too happy about it. Teams and their managers are often a little freaked out when Mr. or Ms. Big tells them that someone is coming to listen in on their customer conversations.

  • “Oh, great! Big Brother is here!”
  • “What? Do you think we’re bad?”
  • “Someone’s just looking for the dirt to fire us!”
  • “What did I do wrong?”

I get it. It’s not always comfortable doing something new and a bit threatening when you’ve never done it before. And yet I have almost twenty years of doing this for many different companies and many different teams who started out as skeptics and are now long-term partners in better sales, service, and even collections. It seems comfortable and easy rolling along without really knowing what’s happening in those moments of truth when your customers are talking to your company. “If it ain’t broke, don’t fix it,” they say. But we are all human beings working for human beings dealing with human beings in a system created and maintained by human beings. I have therefore come to trust more in Bob Dylan’s perspective: “Everything is Broken.” My experience is that every customer service, sales, or collections team has broken things in its system that could easily be remedied once they are identified. But first you have to identify them. If you’re not listening, you might not know something is broken until it’s too late (and no one wants that to happen at any rung of the corporate ladder).

When our team does a first-time pilot assessment, we generally start by assessing the whole team. We listen from the customer’s perspective. We don’t care who is who, and we don’t identify individual agents. Like the customer: when you call Acme Anvils, you don’t care who answers the phone. You’re talking to Acme Anvils. By starting with a blind assessment of the team, we can quickly identify areas the team needs to improve. There’s no finger pointing, no calling out, no working agreements, and no private conversations in the corner office. There’s just a common issue that the whole team needs to address.

I’m happy to say that the vast majority of our clients, from the front line to the board room, eventually learn that our Service Quality Assessment benefits everyone, from the customer to anyone in the organization who cares about the customer and wants to do a good job. But first I have to prove it to them and earn their trust. And so, I begin my day.

The Check-Out Line and Hold Button Have Glaring Similarities

The Wall Street Journal had a great article this morning about the science of finding the best check-out line. Within the article, it talked about what happens when you are in queue for a period of time:

Envirosell, a retail consultancy, has timed shoppers in line with a stopwatch to determine how real wait times compared with how long shoppers felt they had waited. Up to about two to three minutes, the perception of the wait “was very accurate,” says Paco Underhill, Envirosell’s founding president and author of the retail-behavior bible “Why We Buy: The Science of Shopping.”

But after three minutes, the perceived wait time multiplied with each passing minute. “So if the person was actually waiting four minutes, the person said ‘I’ve been waiting five or six minutes.’ If they got to five minutes, they would say ‘I’ve been waiting 10 minutes,'” Mr. Underhill says.

It confirms exactly what we’ve known for many years about customers placed on hold. Put a caller on hold for a minute or two and they typically don’t mind. Something happens, however, between the two- and three-minute mark. Customers who hit that third minute on hold begin to get anxious, and the perceived length of time on hold becomes inflated. They’ve been on hold for just over three minutes, but if you ask them they’ll tell you it was ten.

The hold button can be a useful tool: it helps CSRs avoid dead air and gives them a moment to gather information and confidently prepare a response before addressing the customer. Leave the customer waiting too long, however, and it’s going to come back to bite you. When using the hold button, remember:

  1. Ask the caller’s permission to place him/her on hold. Customers like to feel that they have control and a say in the service they receive. Forcing the customer to hold or placing a customer on hold without permission runs the risk of the customer feeling they are getting the runaround.
  2. If possible, give the customer a realistic time frame. Many customers feel lied to when a CSR says “Let me put you on hold for a second” only to be gone for three minutes. Telling the customer he or she will be on hold “for a minute or two” is more honest and better manages expectations.
  3. Check back after two minutes. If it’s been two minutes and you’re still working on the issue, return to the line, apologize for the wait, explain that you’re still working on it, and give the customer the option of remaining on hold or receiving a call back within a set period of time.

Call center managers or supervisors would do well to give CSRs a two-minute countdown timer that starts when they hit the hold button and reminds them when the two minutes have elapsed.
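
There’s no standard switch feature I can point to for this, but even a small desktop script gets the idea across. Below is a minimal sketch in Python, assuming something on the CSR’s desktop can trigger it when the hold button is pressed; the function name and the alert mechanism are illustrative, not part of any real phone system’s API.

    import threading

    HOLD_REMINDER_SECONDS = 120  # the two-minute check-back threshold discussed above

    def start_hold_timer(on_reminder):
        # Start a countdown the moment the CSR places a caller on hold.
        # `on_reminder` is whatever alert the desktop supports: a popup,
        # a softphone notification, a sound, etc.
        timer = threading.Timer(HOLD_REMINDER_SECONDS, on_reminder)
        timer.start()
        return timer  # call .cancel() when the caller is taken off hold

    # Illustrative usage:
    timer = start_hold_timer(
        lambda: print("Two minutes on hold -- check back with the caller.")
    )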

Armed with what we know to be true about customers, we can better manage the process of asking them to hold while we serve them.

Year-End QA Considerations

For many companies, the months of November, December, and January signal the end of a fiscal year. With the end of the year come annual performance management reviews, which often include a service quality component. It is quite typical for this component to be a score from the call monitoring and coaching QA program (e.g., “your call may be monitored to ensure quality service”). After almost two decades of doing QA as a third-party provider, as well as helping companies set up and improve their QA programs, I can tell you that year-end reviews bring heightened scrutiny to your QA process. This is especially true if monetary bonuses or promotions hinge upon the results.

Not to be a fearmonger (it is Halloween as I write this), but now is a good time to do a little self-check on your program:

  • Sample: If your QA process is intended to measure a CSR’s overall service quality across the entire population of calls, make sure your sampling process is robust and you’ve collected a truly random sample of calls. This means that calls were not excluded for time and that they are representative across hours of the day, days of the week, and weeks/months of the year.
  • Objectivity: Make sure you’ve checked your internal call analysts’ objectivity. This can be done with a simple analysis of the data: run averages of each analyst’s results, both for the overall score and for each element on your scorecard. By comparing each analyst’s averages against the group average, you will see where objectivity issues may have clouded results (see the sketch after this list). This can also be checked through a robust and disciplined calibration program, though that is not done quickly.
  • Bias: Make sure that your program is not set up in such a way that those who analyze the calls have an inherent interest in the outcome. A classic example is supervisors scoring their own team’s calls. The team’s QA results reflect on the supervisor (in some cases there are supervisor incentives that hinge on the quality scores), so it is often hard for supervisors to be completely objective in their analysis. A good quality program rewards analysts for the objectivity of their results, not the results themselves.
  • Collusion: If, month after month, the QA results consistently show that your entire team is performing at 98-100% of goal, then one of two things is likely true. 1) Your QA program has the bar set so low that almost anyone with blood pressure and a pulse can meet goal or 2) Everyone in the organization from the front-line CSR to the executive suite has colluded in making the company’s service quality look a lot better than it is. I get it. Sometimes it’s easier to pretend a problem doesn’t exist rather than doing the work to address it. Every organization that has more than a handful of CSRs can count on having a wide range of quality across their front-line ranks. It’s a human nature thing. If everyone is scoring almost perfectly, then something’s definitely rotten in the state of Denmark.
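
The objectivity check above is simple enough to script. Here is a minimal sketch in Python (pandas), assuming your QA results can be exported to a CSV with one row per evaluated call; the file name and column names are hypothetical, not from any particular QA tool.

    import pandas as pd

    # Hypothetical export of QA results: the analyst who scored each call,
    # plus the overall score and each scorecard element.
    results = pd.read_csv("qa_results.csv")  # columns: analyst, overall, greeting, resolution, ...

    score_columns = [c for c in results.columns if c != "analyst"]

    # Average score each analyst gave, overall and per scorecard element.
    per_analyst = results.groupby("analyst")[score_columns].mean()

    # Gap between each analyst and the group average; a consistently large
    # gap on an element points to an objectivity or calibration issue.
    deviation = per_analyst - results[score_columns].mean()
    print(deviation.round(1))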

If your year-end is coming up, it’s a good idea for call center managers and executives to start asking some questions now so that there are no surprises when CSRs, unhappy with their performance reviews, begin asking questions of their own. If you’re interested in an independent third-party audit of your current program, contact me. It’s one of the things we do.

Getting Started with QA: Getting Your Feet Wet

On the workbench in my garage is a router and router table. I bought it several years ago. It’s a nice one. I even bought a bunch of jigs for creating different kinds of edges. In all the time I’ve had it, I’ve turned it on less than five times. The problem is, I am not very proficient with the whole carpentry thing and I don’t have a lot of time on my hands to dedicate to learning the craft. I have the desire and I have the tool, but I don’t have the time, energy or expertise. Am I alone? I imagine you have a tool, gizmo, or gadget you purchased that is collecting dust for similar reasons.

Technology has made recording and monitoring phone calls simple for businesses. Many companies have the capability through the suite of services they purchased along with their phone system. However, like me and my router, the thing that keeps many companies from entering into a Quality Assessment (QA) or call coaching program is the lack of time, energy, or experience. Starting a QA program can seem like a daunting task for the executive or manager who has plenty of other daily fires that urgently require her/his attention. Resources are scarce, and there’s no staff to dedicate to it. If that describes you, you’re not alone.

I may not be ready to build a fancy-looking entertainment system with the unused router in my garage, but I could certainly pay a competent woodworker friend a few bucks to spend one evening helping me finish that one shelf for my office. Not only would I get the shelf done, but I could also learn a few things to build my knowledge and confidence so I might tackle another small project on my own.

The same principle can apply to your QA aspirations. You don’t have to create an entire QA program to benefit from the available technology. One of the ways our group serves companies who are new to the world of QA is by providing a one-time pilot assessment. The investment and risk are minimal. The process is simple. The value and ROI are potentially huge.

Here’s how it works: we work with our client’s team of QA novices to define their goals and develop a QA scale unique to their particular business, brand, customers, and call types. Our experienced call analysts then analyze a relatively small yet statistically valid sample of phone calls over a period of a few weeks. A few weeks later, we deliver a detailed QA report covering:

  • Customer types (Who is calling?)
  • Call types (What are they calling about?)
  • CSR skill performance (How did our team do at serving the customer?)
  • Resolution rates (How many calls were unresolved? Why?)
  • Training priorities (What do we need to work on?)
  • Policy/Procedural Issues (What policies & procedures are negatively impacting resolution and the customer experience?)
  • Brief call summaries of every call assessed (What did your team hear in each phone call?)

In addition, we always provide a follow-up session with management to review the data and discuss recommendations. We also provide front-line training session(s) designed to effectively communicate the SQA data to your team and deliver key service skill training based on the results of the assessment. In some cases, we also work with a company’s internal training/coaching personnel to help them leverage the data to set training priorities.

The Service Quality Assessment (SQA) pilot is a great way for a company to get its feet wet in the world of QA, to help companies who have struggled to successfully implement a QA program, or to give executives/managers an outside perspective with which to audit and compare their internal efforts. You walk away from the SQA with:

  • a QA scale designed for your team which can be utilized/amended for future internal efforts
  • an objective benchmark of your current team’s service performance
  • a prioritized list of training/coaching opportunities which will help you maximize your training dollars
  • effective communication of pertinent data and training for your management team and front line CSRs
  • a knowledge of policy and procedural issues that are negatively impacting customers and/or needlessly wasting resources
  • a blueprint of how QA works and hands-on participation in the process, which will increase your knowledge/confidence and help you realistically jump-start those internal QA efforts you’ve been putting off
  • a low-risk way to measure the cost/benefit of using a third party to do QA for you

You don’t have to dive into call monitoring or Quality Assessment and risk drowning. You can easily and reasonably get your feet wet. If you’d like to explore what an SQA Pilot Assessment would look like or cost for you and your company please give us a call or drop us an e-mail.

Now, does anyone know a capable woodworker in my area who has a free evening?

Beware of “Metrics Deception”

When talking to managers about their contact center’s quality program I’ll often ask what they are currently doing to measure quality.

“Well, we generate reports each day that give us various quality metrics which we then track. Those metrics then go into a monthly quality report to senior management and are broken down by Customer Service Representative (CSR) and tracked for their performance management.”

“Great,” I’ll answer. “Can you give me an idea of the metrics you use?”

“Sure, Average Handle Time (AHT) and Calls Per Hour (CPH) are the primary ones, but then we also track After Call Work (ACW) and amount of time spent on the phone ‘available.'”

At that moment, I know this manager has fallen prey to the “Metrics Deception.” Here’s the deception: each of the metrics mentioned, while important to track as they relate to the cost of doing business, is not a “quality” metric. They are quantitative metrics (number of minutes, number of calls, amount of time, and so on), but they tell you nothing about the quality of the interaction that took place between the customer and the CSR. It is easy for managers to fall into the Metrics Deception because the reports and data off the phone switch are easily generated, easily quantified, and easily tracked. When the Executive Vice President of Operations asks for a quality report, it’s easy to provide a nice chart showing that your “quality” efforts have reduced Average Handle Time by ten seconds, which translates into a net savings of dollars over the course of the fiscal year. Well done. Cost savings are good. Everyone is happy.

Well, not everyone is happy. The customers who got poor, rushed answers that didn’t resolve their questions were not happy. The customer who got hung up on while on hold by a CSR trying to reduce his AHT was not happy. The CSR who feels pressured into short-changing the customer for the sake of making the “quality” metrics look good on next month’s report is not happy.

If your “quality metrics” don’t correspond to your customer satisfaction ratings, then you might just want to double-check that you haven’t been deceiving yourself into thinking that quantitative metrics are qualitative in nature.
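
That check is straightforward if you can pair both kinds of data. Here is a minimal sketch in Python (pandas), assuming a hypothetical per-CSR export that places switch metrics alongside post-call survey scores; the file name and columns are illustrative.

    import pandas as pd

    # Hypothetical monthly export: one row per CSR, switch metrics plus survey results.
    df = pd.read_csv("csr_monthly.csv")  # columns: csr_id, aht_seconds, calls_per_hour, csat

    # If AHT and CPH were truly "quality" metrics, they would correlate
    # strongly with customer satisfaction. A weak correlation is a sign
    # you have been measuring cost, not quality.
    print(df[["aht_seconds", "calls_per_hour", "csat"]].corr()["csat"])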

Defiance is More Work Than Behavior Change

Years ago I went back to visit some old teachers and to thank them for their influence in my life. I enjoyed some great conversations. When I asked one teacher how things were at my alma mater, he sighed and shook his head. “If students put half as much energy into studying as they expend trying to find new ways of cheating, they’d be successful.”

Over the years I’ve coached and trained a dizzying number of Customer Service Representatives (CSRs). The vast majority of them have proven a pleasure to coach, and I’m proud to watch their development and advancement within their respective organizations. The ones who baffle me are those rare few who approach the quality process with bad attitudes and outright defiance. I’ve known CSRs to audit every evaluation and prepare lengthy excuses, arguments, and petitions against any infraction, no matter how blatant. I will be the first to admit that QA is a human enterprise and mistakes will be made; I have no problem with CSRs checking to make sure that calls have been evaluated properly. I have also observed some crazy employer expectations that deserved vigorous pushback from the front line, and in those cases I have become the CSRs’ advocate. Most of the time, however, the quality expectations asked of CSRs are quite reasonable. No matter how reasonable the expectations, I find that some CSRs expend more energy fighting the system and arguing every critical mark than it would take to simply change their behavior and adhere to their employer’s standards.

For example, I knew one CSR whose employer asked that every employee include an “inviting question” as part of their greeting. There was no script, and the company was generous in giving the team latitude to make whatever “inviting question” they chose both personal and conversational (e.g., “How can I help you today?”, “What can I do for you?”, “May I help you?”, and so on). This particular CSR had a stock greeting she used whenever she picked up the phone. She would abruptly provide the name of her company followed by her name. Period. To be honest, it was abrupt greetings like this CSR’s that prompted the company to add an “inviting question” as a mandatory element of every phone greeting. Adding a “May I help you?” would have softened her greeting and helped start the conversation with a more inviting tone. She refused, however. She argued the point vehemently, ad nauseam. She felt it was silly. She said it wasn’t natural. In a desperate attempt to avoid simply adding a few words to her greeting, she hatched a plan to give customers her cell phone number. By asking them to call her cell phone, she avoided being recorded and evaluated.

After coaching CSRs like the one I just described, I find myself sighing and shaking my head just like my old high school biology teacher. I wish I had a magic bullet for coaching these rebellious spirits. I have known some defiant ones who finally changed their behavior and their attitude and have become exceptional service providers. I have known others whose defiance finally led to their resignation or the termination of their employment. I’ve always approached my coaching sessions with a positive attitude and the desire to encourage the CSR’s success by empowering them to provide service that will satisfy customers. As the old saying goes, “you can lead a horse to water, but you can’t make it drink.”

New CSRs and the QA Question

The other day I received an email from a subscriber asking for my thoughts on how to transition new employees into the quality program. Every Customer Service Representative (CSR) goes through a period of training before getting on the phones to work with customers, and for most CSRs there is then some kind of “nesting period” in the contact center. Every client I’ve ever worked with has struggled to figure out how to handle this nesting period. In my experience, there are three typical scenarios, and I believe one of them is better than the others.

  • Don’t Assess the CSR. This is a very common way to handle the new CSR question. The CSR will be on the phones for 30-90 days taking calls from customers without ever being recorded or assessed. On the surface this may seem like an act of kindness to the CSR, but there are two major drawbacks. First, you have very real customers who are having a very real customer service experience which will impact their satisfaction, loyalty and future purchase intent. If nothing else, you owe it to your customers to be monitoring the newbie’s service delivery. Second, you aren’t doing the CSR any favors by letting him or her develop unmonitored bad habits that will have to be corrected when the nesting period ends.
  • Listen and Coach, but Don’t Analyze. In many cases the call center supervisor, QA team, coach, or trainer will listen to the new CSR and provide feedback and coaching, but won’t actually analyze calls using the quality form or scorecard. Often the coach may jack in to the call and provide immediate side-by-side coaching. Again, this seems like a gracious way to transition the new CSR onto the floor: the CSR gets feedback and coaching but does not fear having common rookie mistakes count against them during the probationary period. The problem with this approach is that the coaching and feedback are usually not documented, and the feedback often centers on individual call-specific situations rather than preparing the CSR for the larger behavioral habits that will be expected of them. The new CSRs also aren’t prepared for what the scorecard or QA form is going to look like or expect from them, and they don’t get a chance to benchmark where they are against the standards to which they will be held accountable once they are placed in the program.
  • Assess the Calls, but Don’t Count Them. A compromise many call center quality teams employ is a probation or nesting period in which customer calls taken by new agents are sampled and scored just like everyone else’s. The scores, however, don’t count against the CSR’s permanent record; they serve as a simple benchmark before the CSR is held fully accountable upon transitioning out of the nesting period. My experience is that this is the best solution to the new CSR dilemma. The customer’s experience is being measured, not ignored. The CSR quickly learns what is on the QA form and which behaviors matter, and builds habits during the nesting period that give them the greatest opportunity to succeed once they are fully transitioned to the call floor. Coaches can use the form to provide data-driven feedback about what will “count” for the CSR instead of haphazard or situational coaching that may have little measurable value in the long run. In addition, CSRs usually enjoy seeing the progress they make in the first few months as their quality improves over the “benchmark” score they received during the probation period.

The key, I believe, is for managers to consider who is being served by the QA strategy. Companies often rely on their QA program to be simply a CSR management tool, so consideration is only given to how the process and program will affect and be perceived by the employee. When effectively utilized, Quality Assessment should also provide an accurate, objective, and honest assessment of the experience your customers receive when they are engaged with your employees on the phone, not just with trained veteran employees but with new CSRs as well. Because customer interactions with new, untrained CSRs generally represent the greatest risk to your customers’ experience, you need to be honestly assessing what is happening in those moments of truth. Doing so serves the customer, your company, and the new CSR well.

Addressing Team Differences in QA

Standardization can sometimes be a bit of a holy grail for corporations. We have several corporate clients who have multiple divisions and teams across one or more contact centers. It is understandable that a company would want to have a common scorecard by which Customer Service Representatives (CSRs) are measured. This not only ensures equitable performance management, but also helps drive a unified brand to a wide customer base.

All teams are not, however, the same. There are differences in function and procedure necessitated by a company’s business, and those diverse business functions sometimes drive the belief that there must be completely different QA programs or forms. Our experience is that companies can create standardization while still addressing the internal differences.

Scorecard Considerations

A common way that companies approach unique business functions across teams is to divide the QA scorecard. One part of the form addresses the common soft skills and behaviors expected of all CSRs to communicate the company brand and deliver a consistent customer experience. The other part addresses procedural or technical aspects of the CSR’s job that may be unique to each team.