
Free Webinar! A Beginner’s Guide to Call Monitoring and Quality Assessment

“Your call may be monitored for quality and training purposes” is a familiar phrase in today’s business world. For growing companies interested in beginning a call recording or quality program, the process can seem both confusing and daunting. This free webinar is intended to help companies that are exploring the development and implementation of a call recording and quality assessment program.

The webinar will be presented by Tom Vander Well, Executive Vice-President of c wenger group. Tom is a pioneer in the call monitoring and Quality Assessment industry and has over 20 years of experience analyzing moments of truth between businesses and their customers. In this webinar, Tom will help participants think through the basic questions they should be asking. He will present various methods for approaching both call recording and Quality Assessment, discuss their strengths and weaknesses, and offer cost-effective, practical solutions.

The FREE webinar will be held July 13, 2017, at 12:00 p.m. CDT. Registration is limited to 25 participants, so register today at:

http://www.videoserverssite.com/register/cwengergroup/registration


Five Reasons to Outsource Your CSAT and QA Initiatives


Over the past decade, more and more companies have adopted an attitude of “it’s cheaper for us to do it ourselves.” We have experienced an era of increased regulation, executive hesitation, and economic stagnation. Companies have hunkered down, tightened the purse strings, and found ways to play it safe. Customer Satisfaction (CSAT) research and Quality Assessment (QA) have been popular areas for this kind of belt-tightening, since technology makes it relatively easy to “do it yourself.”

Just because your team can do these things itself doesn’t mean it’s a wise investment of your time and resources, nor does it guarantee that you’ll do them well. Based on a track record of mediocre (at best) renovations, my wife regularly reminds me that while I technically can do home improvement projects cheaper myself, she’d prefer that we pay an expert to do them well (and free me to invest my time doing more of what I do well so we can pay for it).

So why pay an outside group like ours to survey your customers, or to monitor your team’s calls and provide a Quality Assessment report on how they’re serving your customers?

I’ll give you five reasons.

  1. It gets done. Analyzing phone calls, surveying customers, and crunching data require a certain amount of discipline and attention to detail. When things are changing, fires are raging, and the needs of your own business are demanding a team’s time and attention, then things like crunching data or listening to recorded phone calls become back-burner issues. It’s common for people to tell me that they have their own internal QA team. When I ask how that’s going for them, I usually hear excuses for why it’s hard to get it done with all the urgent matters to which team members must attend. When you hire a third-party provider, it gets done. It’s what we’re hired to do.
  2. It gets done well. Our clients represent diverse areas of the market, from manufacturing to retail to financial services. Our clients tend to be leaders in their industries because they are good at what they do. Developing expertise outside of their discipline isn’t a wise investment of resources, and (see #1) who has time for that? Our clients want to invest their time and resources doing what they know and do well. Measuring what is important to their customers, turning those things into behavioral attributes, analyzing communication channels, and coaching their agents to improve customer interactions in ways that improve customer satisfaction are what we do well.
  3. You get an objective perspective. When auditing internal Quality Assessment teams or reviewing internally produced customer survey data, it’s common for us to find evidence of various kinds of bias. Employees at different levels of an organization have motivations for wanting data to look good for their employers, or bad with respect to coworkers with whom they have other workplace conflicts. I’ve observed supervisors who are overly harsh in assessing the calls of employees with whom they have conflicts. Internal call analysts, wanting to be kind to their coworkers, will commonly choose to “give them credit [for a missed service skill] and just ‘coach them on it.’” Internal research data can be massaged to provide results that gloss over problems or support presuppositions that are politically palatable to the executive team. Our mission, by contrast, is to gather and report accurate, objective, customer-centric data that give our clients a realistic picture of both customer perceptions and the company’s service performance.
  4. You get an outside perspective. It has been famously observed that “a prophet is not welcome in his hometown.” Internal data is often discredited and dismissed for any number of reasons, from (see #2) “What do they know?” doubts about the expertise of coworkers to (see #3) “They hate me” accusations of bias, which we’ve discovered are sometimes accurate and other times not. Front-line managers regularly tell me that they appreciate having our group provide assessment and coaching because we can’t be accused of being biased, and as outside experts we have no internal ax to grind. In addition, our years of experience with other companies provide insight and fresh ideas for handling common internal dilemmas.
  5. You can fire us with a phone call. “Do you know why I keep you around?” a client asked me one day. I took the bait and asked him why. “It’s because I take comfort in knowing I can pick up the phone and fire you whenever I want.” He went on to explain that he had no desire to hire an internal team to provide the survey data, quality assessment, and call coaching our team provided his company. Not only would he bear the expense and headaches associated with developing an expertise outside of his company’s discipline (see #2), but once they were employed he couldn’t easily get rid of them should they prove as ineffective as he expected they would be (see #1, #3, and #4). His point was well taken. Our group has labored for years with the understanding that our livelihoods hinge on our ability to continually provide measurable value to our clients.

Yes, you can technically generate your own CSAT survey or call Quality Assessment data. Technology makes it feasible for virtually any company to do these things internally. The question is whether it is wise for your company to do so. When calculating the ROI of internal vs. external survey and QA initiatives, most companies fail to account for the expenses associated with ramp-up, development, and training, nor do they consider the cost of employee time and energy expended doing these things poorly and producing questionable data and results.
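To make that calculation concrete, here is a minimal back-of-envelope sketch in Python. Every figure is a hypothetical placeholder, not a quote of real costs; the point is simply that the internal column has line items that are easy to forget.

```python
# Hypothetical, illustrative numbers only -- substitute your own estimates.
internal = {
    "analyst_salaries_and_benefits": 120_000,
    "qa_software_licenses": 15_000,
    "ramp_up_and_scorecard_development": 20_000,  # often forgotten
    "training_and_calibration_time": 12_000,      # often forgotten
    "management_overhead_and_rework": 10_000,     # often forgotten
}

external = {
    "third_party_provider_fees": 95_000,
    "internal_liaison_time": 5_000,
}

internal_total = sum(internal.values())
external_total = sum(external.values())

print(f"Internal program total:   ${internal_total:,}")
print(f"Outsourced program total: ${external_total:,}")
print(f"Difference:               ${internal_total - external_total:,}")
```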

Three Ways to Improve Your Quality Program in 2017

It’s still January and everyone is busy implementing goals for 2017. It’s not too late to take a good, long look at your contact center’s quality program with an eye to improving things this year. Here are three thoughts for taking your quality assessment (QA) to a new level.

Reevaluate the Scorecard

Most quality programs hinge on the quality of the criteria by which they measure performance. A few years ago there was a backlash against behavioral measurements (e.g., “Did the agent address the caller by name?”) as companies sought to avoid the calibration headaches and wrangling over definitions. In true human fashion, the pendulum swung to the opposite end of the continuum and became completely subjective. Multiple behaviors gave way to two or three esoteric questions such as, “Did the agent reflect the brand?”

This shift to the subjective is, of course, fraught with its own problems. You can forget about having any objective data with which to measure agent performance. If your analyst is Moonbeam Nirvana, then you’ll get consistently positive evaluations complete with praise for what Moonbeam believes were your good intentions (and lots of smiley emoticons). If, on the other hand, your analyst is Gerhardt Gestapo, then your performance will always fall short of the ideal and leave you feeling at risk of being written up.

Measuring performance does not have to be that difficult. First, consider what it is that you really desire to accomplish. Do you want to measure compliance or adherence to corporate or regulatory requirements? Do you want to drive customer satisfaction? Do you want to make agents feel better about themselves? Any of these can be an arguable position from which to develop criteria, but you should start with being honest about the goal. Most scorecards suffer from misunderstood and/or miscommunicated intention.

Next, be clear about what you want to hear from your agent in the conversation. Define it so that it can be easily understood, taught, and demonstrated.

Prioritizing is also important. While exhaustive measurement of the interaction can be beneficial, it is also time-consuming and may not give you much bang for the investment of time and energy. If your priority is add-on sales, then be honest about your intention of measuring it, define what you want to hear from your agents, and focus your analysts on listening for those priority items.

Look at Data for Both Agents and Analysts

One of the more frequently missed opportunities to keep your QA process on task is looking at the data on how your analysts actually scored the calls.

Years ago, our team was the third-party QA provider for several teams inside a global corporation, while internal teams managed the job for other locations. There was an initiative to create a hybrid approach that put the internal and external analysts together in sampling and measuring agents across all offices. When we ran the numbers to see how analysts were scoring, however, the internal analysts’ average results were consistently higher than the external analysts’. Our analysis of analyst data provided the opportunity for some good conversations about the differences in how we were hearing and analyzing the same conversations.

Especially in larger quality operations in which many analysts measure a host of different agents and/or teams, tracking analyst data can provide you with critical insight. When performing audits of different QA programs, our team quite commonly finds that analysts who also happen to be the team’s supervisor sacrifice objectivity in an effort to be “kind” to their agents (and make their team’s scores look a little better to the management team). Likewise, we have seen instances where the data reveal that one analyst is unusually harsh in analyzing one particular agent (as evidenced by the deviation of their scores from the mean). Upon digging into the reasons for the discrepancy, we discover that there is some personality conflict or bad blood between the two. The analyst, perhaps unwittingly, is using QA analysis to passive-aggressively attack the agent.

If you’ve never done so, it might be an eye-opener to simply run a report of last year’s QA data and sort by analyst. Look for disparities and deviations. The results could give you the blueprint you need to tighten up the objectivity of your entire program.
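If your QA platform can export scores to a CSV file, a few lines of analysis are enough to surface those disparities. Here is a minimal sketch using Python and pandas; the file and column names (analyst, agent, score) are assumptions about your export format, so adjust them to match your own data.

```python
import pandas as pd

# Assumes one row per evaluated call with columns "analyst", "agent",
# and "score" -- rename to match your QA suite's export.
df = pd.read_csv("qa_scores_last_year.csv")

# Average score, spread, and volume per analyst. Large gaps between
# analysts scoring comparable teams are worth a calibration session.
print(df.groupby("analyst")["score"].agg(["mean", "std", "count"]))

# Flag analyst/agent pairings that deviate sharply from each agent's
# overall mean -- a possible sign of leniency or a personality conflict.
df["deviation"] = df["score"] - df.groupby("agent")["score"].transform("mean")
pairs = df.groupby(["analyst", "agent"])["deviation"].mean()
print(pairs.sort_values().head(10))  # most negative = unusually harsh pairs
```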

Free Yourself from Software Slavery

As a third-party QA provider, our team is by necessity platform-agnostic when it comes to recording, playing, and analyzing phone calls. We have used a veritable plethora of software solutions, from the telephony “suites” of tech giants who run the industry like the Great and Powerful Oz to small programs coded for a client by some independent tech geek. They all have their positives and negatives.

Many call recording and QA software “suites” come with built-in scoring and analysis tools. The programmers, however, had to create the framework by which you will analyze the calls and report the data. While some solutions are more flexible than others, I have yet to see one that provides the flexibility users truly desire. Most companies end up sacrificing their desire to measure, analyze, and/or report things a certain way because of the constraints inherent in the software. The amazing software that the salesperson said was going to make things so easy now becomes an obstacle and a headache. Of course, the software provider will be happy to take more of your money to program a solution for you. I know of one company that, this past year, paid a big telephony vendor six figures to “program a solution” within their own software, only to watch them raise their hands in defeat and walk away (with the client’s money, of course).

Tech companies have, for years, sold companies on expensive promises that their software will do everything they want or need it to do. My experience is that very few, if any, of the companies that lay out the money for these solutions feel that the expensive promises are ever fully realized.

If your call data, analysis, and reporting are not what you want them to be, and if you feel like you’re sacrificing data and reporting quality because the software “doesn’t do that,” then I suggest you consider liberating yourself. If the tool isn’t working, find a way to utilize a different tool. What is it you want to know? How can you get to that information? What will allow you to crunch the numbers and create the reports you really want? Look into options for exporting all of the data out of your software suite and into a database or an Excel-type program that will allow you to sort and analyze data to get the information you want and need. Our company has always used Excel (sometimes in conjunction with other statistical software) because it’s faster, easier, more powerful, and infinitely more flexible than any packaged QA software we’ve ever tested.
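As a simple illustration of that kind of liberation, the sketch below pulls a raw export into pandas, builds the report the packaged tool “doesn’t do,” and hands the result to Excel. The file and column names are illustrative assumptions, and the Excel step requires the openpyxl package.

```python
import pandas as pd

# Most QA suites can export raw results to CSV even when their built-in
# reports are rigid. Column names here are illustrative placeholders.
df = pd.read_csv("qa_export.csv")  # columns: team, agent, month, score

# The report we actually want: average score by team, month over month.
report = df.pivot_table(index="team", columns="month",
                        values="score", aggfunc="mean")

# Write it to Excel for sorting, charting, and annotating (needs openpyxl).
report.to_excel("qa_monthly_by_team.xlsx")
print(report.round(1))
```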

Continuous improvement is key to business success. Scrutinizing quality criteria, analyst data, and your software constraints are just three simple ways to take a step forward with your quality program. Here’s to making sure that we’re doing things better at the end of 2017 than we were doing at the start!


Five Reasons to Consider a Third Party QA Provider

c wenger group is a full-service Quality Assessment provider, assisting clients in setting up their QA programs and providing QA as a third party, complete with call analysis, reporting of team and individual agent data, and even data-driven coaching and training.

If your team or company is thinking about getting into call monitoring and Quality Assessment (QA), or if you are seeking a solution to internal QA headaches, we would encourage you to at least give consideration to a third-party QA solution. Many companies dismiss the idea of a third-party provider without really weighing the option. With nearly a quarter century of experience and multiple client relationships of twenty years or more, the team here at c wenger group believes we’ve proven that it can be a sensible alternative.

Here are five reasons to consider a third party QA provider:

  1. Expertise. I’m sure your company is good at what it does. You have expertise in your field and would like to focus your resources and energies on doing what you do well. We feel the same way. It may seem that analyzing a phone call, e-mail, or chat should not be that difficult. The technology company that sold you your suite of software probably made it sound like it would practically run itself and give you all sorts of powerful information with a few clicks of the mouse. The truth is that a successful quality program is more complex than it seems. Many companies go down the road of setting up their own quality program only to find themselves bogged down in a quagmire of questions about methodology, sample sizes, criteria, and calibration. Don’t reinvent the wheel by building expertise in a business discipline that distracts you from doing what you do well (and what makes you money). Let us do what we do well, and help you with that.
  2. Expediency. We’ve had many companies tell us that they purchased or installed a call recording and QA solution that they thought would deliver an easy, “out of the box” program. Instead, they find themselves feeling like they purchased an expensive plane that sits on the tarmac because no one knows how to fly it. Don’t spend months wrangling and struggling just to figure out how you want your QA program to look and work. How much time will you and your valuable, talented team members waste in meetings and strategy sessions just trying to figure out how you’re going to analyze calls? We’ve been doing QA for companies of all shapes, sizes, and types for many years, and in a short period of time we can have a working, effective, successful QA program set up and delivering useful data and information right to your desktop.
  3. Objectivity. One of the most common pitfalls of internal quality programs is analyst bias. Supervisors are tasked with monitoring their own teams’ calls, but they don’t want the team (or themselves) to look bad, so when they hear something go wrong in a call they give the agent credit on the QA form and (wink, wink) “coach them on it.” A quality team member has personality issues with an agent, so he scores that agent more stringently than the rest of the team. A team leader has an agent who is disruptive to the team, so she starts looking for “bad calls” to help make a case to fire the problem team member. These are scenarios we’ve seen and documented in our QA audits. They happen. What’s the cost of an internal QA program that doesn’t deliver reliable data or results? A third-party QA provider is not worried about making people look good or grinding axes. We are concerned with delivering objective data that accurately reflects the customer’s experience.
  4. Results delivered regularly, and on time. One of the biggest problems with internal QA programs is that they chronically bow to the tyranny of the urgent (which is all of the time). When things get busy or stressful, the task of analyzing calls is the first thing pushed to the back burner. Internal analysts put off their call analysis until the deadline looms. Then they rifle through calls just to get them done, and the results are not thoughtful, accurate, or objective. Our clients tell us that they appreciate knowing that when we’re on the job the QA process will get done, and it will be done well. Calls will be analyzed and reports will be delivered regularly and on time. Better yet, the results will help you set tactical goals for improvement, effectively focus your training, manage agent performance, and successfully move the needle on customer satisfaction, retention, and loyalty.
  5. You can always fire us. A client once told us that he kept us around because he slept better at night knowing that he could always fire us. His comment was, admittedly, a little unnerving but his logic made a lot of sense. “If I do this QA thing myself,” he explained, “I have to hire and pay people to do it. In today’s business environment it’s impossible for me to fire someone without a lot of HR headaches. So, if the people I pay to do it internally don’t do it well then I’m stuck with both them and the poor QA program. I like having you do QA for us. Not only do you do it well, but I know that if anything goes wrong I can just pick up the phone and say, ‘we’re done.'” The good news is that he never made that call before he retired!

If you’re looking at getting started in call monitoring and assessment, or if you have a program that isn’t working, we welcome you to consider how one of our custom-designed solutions could deliver reliable, actionable, and profitable results.



c wenger group designs and provides fully integrated Customer Experience solutions including Customer Satisfaction research, call/e-mail/chat Quality Assessment, and coaching/training solutions for teams and individual agents. Our clients include companies of all sizes in diverse market sectors.

Please feel free to contact us for a no obligation conversation!

Note: c wenger group will maintain your privacy

Getting Started with QA: Getting Your Feet Wet


On the workbench in my garage sit a router and a router table. I bought them several years ago. It’s a nice setup. I even bought a bunch of jigs for creating different kinds of edges. In all the time I’ve had it, I’ve turned it on fewer than five times. The problem is, I’m not very proficient with the whole carpentry thing, and I don’t have a lot of time on my hands to dedicate to learning the craft. I have the desire and I have the tool, but I don’t have the time, energy, or expertise. Am I alone? I imagine you have a tool, gizmo, or gadget you purchased that is collecting dust for similar reasons.

Technology has made the ability to record and monitor phone calls simple for businesses. Many companies have the capability through the suite of services they purchased along with their phone system. However, like me and my router, the thing that keeps many companies from starting a Quality Assessment (QA) or Call Coaching program is the lack of time, energy, or experience. Starting a QA program can seem like a daunting task for the executive or manager who has plenty of other daily fires that urgently require her or his attention. Resources are scarce, and there’s no staff to dedicate to it. If that describes you, you’re not alone.

I may not be ready to build a fancy looking entertainment system with the unused router in my garage, but I could certainly pay a competent woodworker friend a few bucks to spend one evening helping me finish that one shelf for my office. Not only do I get the shelf done, but I can also learn a few things to build my knowledge and confidence so I might tackle another small project on my own.

The same principle can apply to your QA aspirations. You don’t have to create an entire QA program to benefit from the available technology. One of the ways our group serves companies that are new to the world of QA is by providing a one-time pilot assessment. The investment and risk are minimal. The process is simple. The value and ROI are potentially huge.

Here’s how it works: We work with our QA-novice client to define their goals and develop a QA scale unique to their particular business, brand, customers, and call types. Our experienced call analysts then analyze a relatively small yet statistically valid sample of phone calls over a period of a few weeks. We then deliver a detailed QA report covering:

  • Customer types (Who is calling?)
  • Call types (What are they calling about?)
  • CSR skill performance (How did our team do at serving the customer?)
  • Resolution rates (How many calls were unresolved? Why?)
  • Training priorities (What do we need to work on?)
  • Policy/Procedural Issues (What policies & procedures are negatively impacting resolution and the customer experience?)
  • Brief call summaries of every call assessed (What did your team hear in each phone call?)
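For a feel of how a few of those report lines fall out of the underlying call log, here is a minimal sketch in Python; the fields (call_type, resolved) are illustrative assumptions about how each analyzed call might be recorded, not our actual report format.

```python
import pandas as pd

# Each analyzed call recorded as one row; field names are illustrative.
calls = pd.DataFrame([
    {"call_type": "billing question", "resolved": True},
    {"call_type": "billing question", "resolved": False},
    {"call_type": "order status", "resolved": True},
    {"call_type": "technical support", "resolved": False},
])

# Call types: what are customers calling about?
print(calls["call_type"].value_counts(normalize=True))

# Resolution rate: how many calls went unresolved?
print(f"Unresolved: {(~calls['resolved']).mean():.0%}")
```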

In addition, we always provide a follow-up session with management to review the data and discuss recommendations. We also provide front-line training sessions designed to effectively communicate the assessment data to your team and provide key service-skill training based on the results. In some cases, we also work with a company’s internal training/coaching personnel and help them leverage the data to set training priorities.

The Service Quality Assessment (SQA) Pilot Assessment is a great way for a company to get their feet wet in the world of QA, to help companies who have struggled to successfully implement a QA program, or to give executives/managers an outside perspective with which to audit and compare their internal efforts. You walk away from the SQA with:

  • a QA scale designed for your team which can be utilized/amended for future internal efforts
  • an objective benchmark of your current team’s service performance
  • a prioritized list of training/coaching opportunities which will help you maximize your training dollars
  • effective communication of pertinent data, and training for your management team and front-line CSRs
  • a knowledge of policy and procedural issues that are negatively impacting customers and/or needlessly wasting resources
  • a blueprint of how QA works and hands-on participation in the process, which will increase your knowledge and confidence and can help you realistically proceed in jump-starting those internal QA efforts you’ve been putting off
  • a low-risk way to measure the cost/benefit of using a third party to do QA for you.

You don’t have to dive into call monitoring or Quality Assessment and risk drowning. You can easily and reasonably get your feet wet. If you’d like to explore what an SQA Pilot Assessment would look like or cost for you and your company please give us a call or drop us an e-mail.

Now, does anyone know a capable woodworker in my area who has a free evening?



Beware of “Metrics Deception”


When talking to managers about their contact center’s quality program I’ll often ask what they are currently doing to measure quality.

“Well, we generate reports each day that give us various quality metrics which we then track. Those metrics then go into a monthly quality report to senior management and are broken down by Customer Service Representative (CSR) and tracked for their performance management.”

“Great,” I’ll answer. “Can you give me an idea of the metrics you use?”

“Sure, Average Handle Time (AHT) and Calls Per Hour (CPH) are the primary ones, but then we also track After Call Work (ACW) and amount of time spent on the phone ‘available.'”

At that moment, I know this manager has fallen prey to the “Metrics Deception.” Here’s the deception: each of the metrics mentioned, while important to track as they relate to the cost of doing business, is not a “quality” metric. They are quantitative metrics (number of minutes, number of calls, amount of time, etc.), but they tell you nothing about the quality of the interaction that took place between the customer and the CSR. It is easy for managers to fall into the Metrics Deception because the reports and data off the phone switch are easily generated, easily quantified, and easy to track. When the Executive Vice President of Operations asks for a quality report, it’s easy to provide a nice chart showing that your “quality” efforts have reduced Average Handle Time by ten seconds, which translates into a net savings of dollars over the course of the fiscal year. Well done. Cost savings are good. Everyone is happy.

Well, not everyone is happy. The customers who got poor, rushed answers that didn’t resolve their questions were not happy. The customer who got hung up on while on hold by a CSR who was trying to reduce his AHT was not happy. The CSR who feels pressured into short-changing the customer for the sake of making their “quality” metrics look good on next month’s report is not happy.

If your “quality metrics” don’t correspond to your customer satisfaction ratings, then you might want to double-check that you haven’t been deceiving yourself into thinking that quantitative metrics are qualitative in nature.
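One quick way to run that check is to line up the quantitative metrics against an actual satisfaction measure and look at the correlations. Here is a minimal sketch in Python; the per-CSR columns (aht_seconds, calls_per_hour, csat) are illustrative assumptions about your data, not a standard layout.

```python
import pandas as pd

# One row per CSR (or per week), combining switch metrics with survey
# results. Column names are illustrative -- adjust to your own reports.
df = pd.read_csv("csr_metrics.csv")

# If AHT and CPH were truly "quality" metrics, they would correlate
# strongly with customer satisfaction. Weak or negative correlations
# suggest the Metrics Deception is at work.
print(df[["aht_seconds", "calls_per_hour", "csat"]].corr()["csat"])
```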


New CSRs and the QA Question


The other day I received an e-mail from a subscriber asking for my thoughts on how to transition new employees into the quality program. Every Customer Service Representative (CSR) goes through a period of training prior to getting on the phones to work with customers, and for most CSRs there is then some kind of “nesting period” in the contact center. Every client I’ve ever worked with has struggled to figure out how to handle this nesting period. In my experience, there are three typical scenarios, and I believe one of the methods is better than the others.

  • Don’t Assess the CSR. This is a very common way to handle the new CSR question. The CSR will be on the phones for 30-90 days taking calls from customers without ever being recorded or assessed. On the surface this may seem like an act of kindness to the CSR, but there are two major drawbacks. First, you have very real customers who are having a very real customer service experience which will impact their satisfaction, loyalty and future purchase intent. If nothing else, you owe it to your customers to be monitoring the newbie’s service delivery. Second, you aren’t doing the CSR any favors by letting him or her develop unmonitored bad habits that will have to be corrected when the nesting period ends.
  • Listen and Coach, but Don’t Analyze. In many cases, the call center supervisor, QA team, or a coach or trainer will listen to the new CSR and provide feedback and coaching, but they won’t actually analyze calls using the quality form or scorecard. Often, the coach may jack in to the call and provide immediate side-by-side coaching. Again, this seems like a gracious way to transition the new CSR onto the floor. The CSR gets feedback and coaching but does not fear having common rookie mistakes count against them in their probationary period. The problem with this approach is that the coaching and feedback are usually not documented, and the feedback often centers on individual, call-specific situations rather than preparing the CSR for the larger behavioral habits that will be expected of them. The new CSRs also aren’t prepared for what the scorecard or QA form will look like or expect from them. They don’t get a chance to benchmark where they are against the standards to which they will be held accountable when they are placed in the program.
  • Assess the Calls, but Don’t Count Them. A compromise that many call center quality teams employ is providing a probation or nesting period in which customer calls taken by new agents are sampled and scored just like everyone else’s. The scores, however, do not count against the CSR’s permanent record; they are considered a simple benchmark prior to the CSR being held fully accountable upon transitioning out of the nesting period. My experience is that this is the best solution to the new CSR dilemma. The customer’s experience is being measured and not ignored. The CSR is able to quickly understand what is on the QA form and what behaviors matter, and to build habits during the nesting period that will give them the greatest opportunity to succeed once they are fully transitioned to the call floor. Coaches can use the form to provide data-driven feedback about what will “count” for the CSR instead of providing haphazard or situational coaching that may have little measurable value in the long run. In addition, CSRs usually enjoy seeing the progress they make in the first few months as their quality improves over the “benchmark” score they received during the probation period.

The key, I believe, is for managers to consider whom you are serving with your QA strategy. Companies often rely on their QA program simply as a CSR management tool, and so consideration is only given to how the process and program will affect and be perceived by the employee. When effectively utilized, Quality Assessment should also provide you with an accurate, objective, and honest assessment of the experience your customers receive when they are engaged with your employees on the phone (not just the experience with trained veteran employees, but with new CSRs as well). Because customer interactions with new, untrained CSRs generally represent the greatest risk to your customers’ experience, you need to be honestly assessing what is happening in those moments of truth. Doing so serves the customer, your company, and the new CSR well.


Addressing Team Differences in QA

Standardization can sometimes be a bit of a holy grail for corporations. We have several corporate clients who have multiple divisions and teams across one or more contact centers. It is understandable that a company would want to have a common scorecard by which Customer Service Representatives (CSRs) are measured. This not only ensures equitable performance management, but also helps drive a unified brand to a wide customer base.

All teams are not, however, the same. There are differences in function and procedure necessitated by a company’s business. Having diverse business functions sometimes drives the belief that there must be completely different QA programs or forms. Our experience is that companies can create standardization while addressing the internal differences.

Scorecard Considerations

A common way that companies approach unique business functions across teams is to divide the QA scorecard. One part of the form addresses the common soft skills and behaviors expected of all CSRs to communicate the company brand and deliver a consistent customer experience. The other part of the form addresses procedural or technical aspects of the CSR’s job which may be unique to each team.
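One way to picture the divided scorecard is as a shared section plus a team-specific section that is swapped in per team. The sketch below is only an illustration of that structure; every attribute name is a hypothetical placeholder, not a recommended criterion.

```python
# Illustrative sketch of a divided scorecard: shared brand behaviors
# applied to every CSR, plus procedural items unique to each team.

COMMON_SKILLS = [
    "Used the brand greeting",
    "Addressed the caller by name",
    "Confirmed resolution before closing",
]

TEAM_SKILLS = {
    "claims": ["Verified policy number", "Explained next steps in the claim"],
    "sales": ["Offered a relevant add-on", "Confirmed the order details"],
}

def build_scorecard(team: str) -> list[str]:
    """Combine the shared behaviors with a team's procedural items."""
    return COMMON_SKILLS + TEAM_SKILLS.get(team, [])

# Every team is measured on the same brand behaviors, plus its own items.
for team in TEAM_SKILLS:
    print(team, "->", build_scorecard(team))
```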