Category: QA Methodology

Does a 3rd Party QA Provider Make Sense?

Third Party QA Solution. Just as there are pros and cons to utilizing front line supervisors or a dedicated internal Quality Assessment (QA) analyst/team, there are also pros and cons to considering an independent third party. It may not make sense for some companies, while for others it is a great fit.

In the interest of full disclosure, I must tell you that our group provides third-party QA solutions. For some clients, we are their entire QA program, from call capture & analysis to one-on-one call coaching. For others, we have established the QA program and then trained an internal team to take it over, helping them develop the necessary skills and discipline. For still others, we simply provide an objective outside assessment to audit and improve their existing internal program. It is, however, a foundational principle of our group that we will not take a project if we do not believe we can bring measurable value to the client. So, I recognize that it works for some and not for others.

Having said that, here are a few reasons it makes sense to consider a third party:

  • Established experience & expertise means that you aren't wasting precious time, energy and resources trying to develop a program from scratch. In addition, a third party brings experience from other client call centers, which helps you avoid common pitfalls and introduces innovations your internal team may not know about without sending them to expensive conferences. As one client told us, "I know our products. I know how to sell them. I know how to service them. But you guys know our customers (we do regular customer sat surveys for them) and you know how to measure quality. I'd rather pay my people to take care of customers and pay you to make sure we're doing it well." I also commonly find that our team can analyze calls better and faster, and provide a greater depth of actionable data, than internal teams. That comes from 20 years of experience analyzing tens of thousands of phone calls.
  • Objectivity. A third-party analyst doesn't have the potential bias you find internally, especially with supervisors whose call analysis can be colored by personality or other performance issues. Even internal QA analysts can find it tempting to listen with an internal perspective. A third party QA provider can generally provide you with more objective, customer-centric feedback than you'll get internally.
  • Accountability. One client put it to me bluntly: "The reason I have your team do our QA is because it gives me the luxury of picking up the phone and firing you at any given moment. If I hire a team internally it will cost me far more in the long run. I'll expend far more resources in FTEs as they try to figure out how to do it well. They probably won't do it as well in the end. And then I'm stuck with them. If I'm disappointed with your team, I just pick up the phone and tell you I'm done with you." The fact is, if our team doesn't perform well, we won't be around long. I know a lot of V.P.s of customer service who feel stuck with a broken, inefficient quality process that they wish they could scrap.

That doesn't mean a third party is always the best option. There are challenges inherent to a third party approach:

  • Limited product/procedural knowledge. A third party QA provider will rarely, if ever, have intimate knowledge of your internal products, services, systems and procedures. While a third party can give you a clear picture of the customer's perspective (the customer doesn't have that internal knowledge either), the third party analyst will not always be able to verify the correctness of the CSR's answer, or know all of the options the CSR had available to resolve the caller's issue, the way a front line supervisor will. A good third party provider will learn as much as possible in order to give the best possible feedback, but will almost never be as accurate as an internal analyst in measuring the correctness of answers given.
  • Control. A third party provider of QA is a hired vendor, and does not give managers or senior managers the degree of oversight and control that comes with an internal team. My experience is that some companies and executives require a high degree of control over the entire quality process. A good third party provider will be responsive, communicative and flexible, but will never offer the access or control a manager will have with direct reports.
  • Cost. Hiring a worthwhile third party QA provider costs money that most call center managers do not have readily available in their budgets. I know that our team has consistently provided clients with better data while analyzing fewer calls and costing less money, time and energy than an internal team would have expended. Nevertheless, the number one reason most call centers do not consider a third party solution is financial. If you want, or are required, to analyze a tremendous number of phone calls, the cost of a third party solution easily becomes out of reach for most companies.

In conclusion, I've found that our services as a third party QA provider have consistently worked well in a handful of common situations:

  • Small to Mid-Size Contact Centers. Companies with 1-50 seats will often struggle to justify the expense of a dedicated internal QA program. A 3rd party QA provider can often deliver effective, cost-efficient solutions that fit the budget.
  • QA Start Up. If you're setting up a program from scratch, you can often save yourself a lot of time and headaches by having a 3rd party assist in establishing and implementing the program and in training your internal team of analysts and coaches.
  • Multiple Division/Site Situations. Larger corporations often find themselves with multiple call centers in different locations, each with its own quality program. It's common to find that each location measures quality differently. We have been able to help companies in this scenario by providing an objective assessment across multiple centers on a common scorecard so that the executive team can get an "apples-to-apples" perspective of how each contact center is doing in service quality. We have also assisted with the development and implementation of a "common scorecard" for multiple contact center situations.

Sometimes contact centers choose not to exclusively use a front-line supervisor, QA team, or third party, but rather find a hybrid approach that works for them. We'll explore some of those options in our next post of this series.

Creative Commons photo courtesy of Flickr and barron

Should You Have a Dedicated QA Analyst or QA Team?

A dedicated listener. In the first post of our series we explored the pros and cons of having front line supervisors serve as the Quality Assessment (QA) analysts and call coaches. Rather than burdening an already loaded supervisory staff with the task of QA, some companies choose to utilize a dedicated QA individual or team. As with the supervisors, there are pros and cons to this choice.

Having a dedicated QA analyst or team has advantages.

  • A dedicated QA function generally ensures that the call analysis will receive greater time and attention. A good QA analyst will not only listen for the quality of the CSR's performance but also mine the calls for more information and detail. That detail can sometimes surface policy or procedural issues which can increase productivity and reduce costs.
  • As a result of the increased focus, the resulting data will tend to be more reliable. For companies who utilize QA data for performance management, this reliability can be crucial in ensuring that your process will meet necessary HR standards.
  • Because QA analysts do not have a direct supervisory role with the CSR, the possibility of bias due to personality issues or performance issues outside of the call is greatly diminished.

Having a dedicated QA analyst role is not always a slam-dunk, either.

  • Because the QA analyst or team is typically not on the phones, they are less knowledgeable about the day-to-day issues facing CSRs on the call floor. While this lends itself to objectivity, it can make call coaching more difficult when the CSR questions the QA coach's knowledge or experience.
  • A QA analyst can easily create contention in the call center. Supervisors and QA analysts can find themselves at odds when the supervisor feels the need to "defend" their CSRs and raise their team's quality scores. Rather than working with the QA team to improve CSR performance, supervisors regularly see the QA team as overly critical grinches who are making them (and their team) look bad.
  • For call centers strapped to stay adequately staffed, there simply may not be resources available for a dedicated QA function.

Is there a compromise? Some companies opt for a hybrid approach and others choose to hire a 3rd party. We explore both options in the continuation of this series.

Creative Commons photo courtesy of Flickr and personalspokesman


Are Supervisors the Best People to Do QA?

Who should monitor your team's phone calls and do your Quality Assessment (QA)? Today we begin a multi-post series on who should analyze your team's phone calls.

We begin with the front line supervisor who seems like the natural choice, and for good reason:

  • They are the closest managerial person to the floor.
  • They usually know the Customer Service Representative (CSR) better than anyone else.
  • They usually have direct responsibility for the CSR's performance management.
  • They can closely monitor progress and keep their eye on the CSR day-to-day.

However, in over 15 years of working with call center QA programs, I've found that there are inherent problems with supervisors being the primary call analysts and coaches:

  • Quality becomes a back-burner issue. I've always held that front-line supervisors have the toughest job in the call center and are usually the most stressed-out level of management. I have the greatest respect for them. They have the competing priorities of helping their agents, training, mentoring, managing, taking escalated customer calls, answering e-mails, scheduling, facilitating team meetings, motivating, counseling, and we haven't even gotten to all of the things call center and upper management ask of them by way of reports, special projects, performance management, and committee meetings. And this is before call volumes spike, systems crash, and a viral epidemic spreads through the floor. This is why, in most cases I've encountered, the front line supervisor struggles as a QA analyst and coach. With all of the pressing issues demanding their attention at any given moment, QA responsibilities are quickly and easily pushed to the back burner.
  • Evaluations are rushed. For all the reasons I just stated, even when supervisors do get to their QA duties, they simply can't afford the time and attention needed to analyze phone calls with the required precision and objectivity. Quality Assessment is done with little, well, quality. QA duties get procrastinated until just before the report is due, and then a bunch of calls are hastily evaluated just to meet the requirement. This is not a criticism of the supervisor! This is simply the reality of most call center organizational systems.
  • Objectivity is easily skewed. People are people. When you work with someone every day, and you have issues with that person every day, it's easy to lose your objectivity. Through the years I've had some pretty tense discussions with supervisors who were upset that a CSR's quality scores were good (you read that right). When a supervisor has issues with a CSR's attitude, attendance, or personality, it's easy for their frustration to bleed over into their analysis of the CSR's behavior on the phone. The reverse is also true. When a CSR happens to be a model employee and has the favor of the supervisor, the supervisor is apt to overlook and excuse negative behaviors that the CSR consistently demonstrates with customers on the phone. In either case, you've got problems which undermine the objectivity and validity of your entire quality program.
  • Call Coaching becomes HR Coaching. When supervisors coach calls, it is easy for the call coaching session to get sidetracked into all sorts of other productivity or HR related issues. Instead of the session being centered on how the CSR can provide better service to the customer, it ends up being about how the CSR can be a better employee for the supervisor. 

While many call centers utilize supervisors to analyze calls and provide quality coaching, the issues I've just related usually have some degree of impact on the effectiveness of the quality program. Call centers must actively work to minimize these problems or take their QA program in another direction.

Next post: The Pros & Cons of having a dedicated QA team.

Best of QAQnA: Who Are You Satisfying with Your QA Scale?

I’m always struck by the mixture of motivations underlying many call center QA scorecards. Companies love to give lip service to delivering excellent customer service and improving customer satisfaction. Their QA scale, however, may reflect the designs of a cost-sensitive management team (driven by lowering costs with no regard for the impact on the customer) or a sales-driven management team (driven by increasing sales with no regard for the impact on the customer).

This mixture of messages frustrates front-line agents who see the hypocrisy: "You say you believe in customer service but all I’m told in QA is to keep it short or push the cross-sell." It also frustrates the QA coach who must try to justify or explain the obvious, mixed message. Of course, it is possible to have a balanced methodology in which you satisfy customers and look for ways to be efficient and opportunistic. The key is to make sure the customer is not left out of the QA equation.

If you really care about keeping your customers coming back, you should start your entire QA program with a valid, objective customer satisfaction survey. The results can give you the data you need to impact customer satisfaction and retention.

Find out what is really driving your customers’ satisfaction and loyalty. Then use that information in building and weighting your QA scorecard. In fact, some of our surveys have measured the customer’s willingness to hear up-sells and cross-sells in a customer service interaction. The results are often surprisingly positive, and the data can be a powerful tool in building buy-in among the front lines for your sales drive.
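To make that concrete, here is a minimal sketch (in Python, with invented survey columns and toy data, so treat it as an illustration rather than our method) of one way to turn survey results into scorecard weights: correlate each measured behavior with the customer's overall satisfaction rating and normalize the positive correlations into weights.

```python
# Minimal sketch: derive QA scorecard weights from customer sat survey data.
# The column names and data below are hypothetical, for illustration only.

# Each row: per-behavior ratings plus an overall satisfaction score (1-5).
surveys = [
    {"greeting": 5, "resolution": 4, "offered_xsell": 3, "overall_sat": 4},
    {"greeting": 3, "resolution": 2, "offered_xsell": 4, "overall_sat": 2},
    {"greeting": 4, "resolution": 5, "offered_xsell": 2, "overall_sat": 5},
    {"greeting": 2, "resolution": 3, "offered_xsell": 5, "overall_sat": 3},
]

behaviors = ["greeting", "resolution", "offered_xsell"]

def correlation(xs, ys):
    """Pearson correlation between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

overall = [s["overall_sat"] for s in surveys]
# Correlate each behavior with overall satisfaction; floor at zero so
# behaviors that don't drive satisfaction receive no weight.
raw = {b: max(0.0, correlation([s[b] for s in surveys], overall)) for b in behaviors}
total = sum(raw.values()) or 1.0
weights = {b: round(100 * r / total) for b, r in raw.items()}
print(weights)  # with this toy data: {'greeting': 38, 'resolution': 62, 'offered_xsell': 0}
```

With real survey volumes you would use a proper statistics package and a larger driver analysis, but the principle is the same: the scorecard's weights come from the customer's data, not from management's preferences.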

Oh, and by the way, it’s possible that your company already does customer sat research and you’ve never seen it. Just the other day we provided a call center manager with a copy of a survey our group had done for his boss a few years ago. He was never aware that it had been done and had not been given access to the information, even though it was critical for driving tactical decisions in his call center. I wish this were an isolated incident, but my gut tells me it happens more often than not. It may be worth it to ask around. Of course, trying to decipher the data in many customer sat surveys we’ve seen can be a mind-numbing task – but that rant will have to wait for another post!


100,000 Call Maintenance

Regular maintenance required. I'm a believer in regular maintenance on my car. Through the years I've owned several cars, and I've found that following the maintenance schedule makes a huge difference in the life and durability of the vehicle. My car doesn't need the exhaustive 100,000-mile maintenance every 3,000 miles. It only needs an oil change.

I'm also a firm believer in an exhaustive assessment of service quality. Not just subjectively considering how I thought a Customer Service Representative (CSR) did on a particular call, but a systematic, statistically valid and comprehensive analysis of what is, and what is not, happening when a CSR picks up the phone to talk with a customer.

I watch companies who, for the sake of time/energy/ease, will assess their agents on a small list of fairly subjective elements. It's fine. It's good. It serves the purpose of checking the oil and keeping the quality effort lubricated. However, sometimes there's a crack in the brake line that will prove costly, even deadly, if it's not caught and remedied. Pulling the oil dipstick every 3,000 miles isn't going to warn you of a problem with the brakes.

I understand you don't have resources for a comprehensive tune-up every day, but an exhaustive and objective assessment of the call center on a regular, if periodic, basis is a great way to ensure your service delivery continues running smoothly over time. It can help you catch little problems and blind spots before they become tragic and costly issues.
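For readers who want to put a number on "statistically valid," here is a minimal sketch using the standard sample-size formula for a proportion. The margins and confidence level below are illustrative assumptions, not a prescription.

```python
# Sketch: how many calls a statistically valid assessment needs, using the
# standard sample-size formula for a proportion: n = z^2 * p * (1 - p) / e^2.
# All inputs below are illustrative assumptions, not client figures.
import math

def sample_size(margin_of_error, confidence_z=1.96, expected_p=0.5):
    """Calls needed to estimate a behavior's frequency within +/- margin_of_error.
    expected_p=0.5 is the most conservative (largest n) assumption;
    confidence_z=1.96 corresponds to 95% confidence."""
    n = (confidence_z ** 2) * expected_p * (1 - expected_p) / margin_of_error ** 2
    return math.ceil(n)

print(sample_size(0.05))  # 385 calls for +/- 5% at 95% confidence
print(sample_size(0.10))  # 97 calls for +/- 10%
```

The practical point: a periodic deep assessment over a few hundred calls can be statistically sound, which is exactly why the "100,000 call maintenance" doesn't need to happen every day.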

Who QA’s the QA Team?

It’s a classic dilemma. The Quality Assessment (QA) team, whether it’s a supervisor or a separate QA analyst, evaluates calls and coaches Customer Service Reps (CSRs). But how do you know they are doing a good job with their evaluations and their coaching? Who QA’s the QA team?

The question is a good one, and here are a couple of options to consider:

  • QA Data analysis. At the very least, you should be compiling the data from each supervisor or QA analyst. With a little up-front time spent setting up some tracking in a spreadsheet program, you can, over time, quantify how your QA analysts score. How do individual analysts compare to the average of the whole? Who typically scores high? Who is the strictest? Which elements does this supervisor score more strictly than the rest of the group? The simple tracking of data can tell you a lot about your team and give you the tool you need to help manage them (see the sketch after this list).
  • CSR survey. I hear a lot of people throw this out as an option. While a periodic survey of CSRs to get their take on each QA coach or supervisor can provide insight, you want to be careful how you set this up. If the CSR is going to evaluate the coach after every coaching session, then it puts the coach in an awkward position. You may be creating a scenario in which the coach is more concerned with how the CSR will evaluate him/her than providing an objective analysis. If you’re going to poll your CSR ranks, do so only on a periodic basis. Don’t let them or the coaches know when you’re going to do it. Consider carefully the questions you ask and make sure they will give you useful feedback data.
  • Third-party Assessment. Our team regularly provides a periodic, objective assessment of a call center’s service quality. By having an independent assessment, you can reality test and validate that your own internal process is on-target. You can also get specific, tactical ideas for improving your own internal scorecard.
  • QA Audit. Another way to periodically get a report card on the QA team is through an audit. My team regularly provides this service for clients as well. Internal audits can be done, though you want to be careful of any internal bias. In an audit, you have a third party evaluate a valid sample of calls that have already been assessed by the supervisor or coach. The auditor becomes the benchmark, and you see where there are deviations in the way analysts evaluate the call. In one recent audit, we found that one particular member of the QA team was more consistent than any other member of the QA and supervisory staff. Nevertheless, there was one element of the scorecard that this QA analyst never scored down (while the element was missed on an average of 20% of phone calls). Just discovering this one “blind spot” helped an already great analyst improve his accuracy and objectivity.
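To make the tracking and audit ideas above concrete, here is a minimal Python sketch (the analyst names, scorecard elements, and scores are invented for illustration) that compares each analyst's per-element average against the group average and flags large deviations, the same kind of analysis that surfaced the "blind spot" described above.

```python
# Sketch: compare each QA analyst's per-element scoring to the group average.
# Analyst names, elements, scores, and the 15-point threshold are invented.
from collections import defaultdict

# (analyst, element, score) -- one row per scored element per evaluated call.
evaluations = [
    ("Ana",  "greeting", 100), ("Ana",  "resolution", 80),  ("Ana",  "close", 80),
    ("Ben",  "greeting", 60),  ("Ben",  "resolution", 80),  ("Ben",  "close", 60),
    ("Cara", "greeting", 100), ("Cara", "resolution", 100),
    ("Cara", "close", 100),  # Cara never scores "close" down: possible blind spot
]

by_analyst = defaultdict(lambda: defaultdict(list))
by_element = defaultdict(list)
for analyst, element, score in evaluations:
    by_analyst[analyst][element].append(score)
    by_element[element].append(score)

group_avg = {e: sum(s) / len(s) for e, s in by_element.items()}

for analyst, elements in sorted(by_analyst.items()):
    for element, scores in sorted(elements.items()):
        avg = sum(scores) / len(scores)
        delta = avg - group_avg[element]
        flag = "  <-- deviates from group" if abs(delta) >= 15 else ""
        print(f"{analyst:5} {element:10} avg={avg:5.1f} group={group_avg[element]:5.1f}{flag}")
```

In an audit scenario, you would substitute the auditor's scores for the group average as the benchmark; the comparison logic stays the same.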

Any valid attempt you make to track and evaluate the quality of your call analysis is helpful to the entire process. Establishing a method for validating the consistency of your QA team will bring credibility to the process, help silence internal critics and establish a model of continuous improvement.

If you think our team may be of service in helping you with an objective assessment or audit, please drop me an e-mail. I’d love to discuss it with you.

Managing Appeals & Challenges in QA

A process of appeal. Special thanks to one of our readers, Sarah M., who sent an email asking about the process of a CSR challenging their Quality Assessment (QA) evaluation. Unless you've gone the route of having speech analytics evaluate all of your calls (which has inherent accuracy challenges of its own), your QA process is a human affair. Just as every CSR will fall short of perfection, so will every QA analyst. No matter how well you set up the process to ensure objectivity, mistakes will be made.

Because QA is a human affair, you will also be evaluating individuals who do not respond positively to having their performance questioned or criticized. There are a myriad of reasons for this and I won't bother to delve into that subject. The reality is that some individuals will challenge every evaluation.

So, we have honest mistakes being made, and we have occasional individuals who will systematically challenge every evaluation no matter how objective it is. How do you create a process of appeal that acknowledges and corrects obvious mistakes without bogging down the process in an endless bureaucratic system of appeals, similar to the court system?

Here are a couple of thoughts based on my experience:

  • Decide on an appropriate "Gatekeeper." Front line supervisors, or a similar initial "gatekeeper," are often the key to managing the chaos. There should be a person who hears the initial appeal and either acknowledges an honest mistake, flags a worthy calibration issue, or dismisses the appeal outright. That quickly addresses the two most likely cases: the honest mistake is corrected, and the appeal without standing is dismissed.
  • Formulate an efficient process for appeal. If an appeal is made that requires more discussion, then it needs to go a step further. I have seen many different setups in which this works successfully. The "gatekeeper" might take it to the QA manager for a quick verdict. A portion of regular calibration sessions might be given to addressing and discussing the issues raised by appeals. Two supervisors might discuss it and, together, render a quick decision.
  • Identify where the buck stops. When it comes to QA, my mantra has always been that "Managers should manage." A process of appeal becomes bogged down like a political process when you try to run it democratically. The entire QA process is more efficient, including the process of appeal, when a capable manager, with an eye to the brand/vision/mission of the company, can be the place where the buck stops (see the sketch below).
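Purely as an illustration, the flow described in these bullets can be sketched in a few lines of code; the tier routing and outcome labels are my assumptions, not a prescribed system. The gatekeeper settles the obvious cases immediately, and anything debatable escalates once to a single, final decision-maker.

```python
# Sketch of a two-tier QA appeals flow, per the bullets above.
# The outcome labels and routing are illustrative assumptions.

def handle_appeal(appeal, gatekeeper, qa_manager):
    """Route a CSR's challenge to a QA evaluation; return the final outcome."""
    # Tier 1: the gatekeeper settles obvious cases on the spot.
    first_look = gatekeeper(appeal)   # "corrected" | "dismissed" | "escalate"
    if first_look in ("corrected", "dismissed"):
        return first_look
    # Tier 2: anything genuinely debatable goes to one decision-maker.
    # The manager's verdict is final -- this is where the buck stops.
    return qa_manager(appeal)         # "upheld" | "overturned"

# Hypothetical usage:
outcome = handle_appeal(
    {"csr": "J. Doe", "call_id": 4711, "element": "greeting"},
    gatekeeper=lambda appeal: "escalate",
    qa_manager=lambda appeal: "upheld",
)
print(outcome)  # "upheld"
```

The design point is the shallow depth: two tiers, no loops, so an appeal can never circulate endlessly the way it would in a democratic or court-like process.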

Those are my two cents worth. What have you found to be key to handling challenges and appeals in your QA program?

Anticipating the Customer’s Questions is Key to One-call Resolution

Anticipate the customer's next move. Great chess players are always anticipating their opponents' moves. Great Customer Service Representatives are always anticipating their customers' needs.

One of my team members and I are conducting group call coaching this morning. We have 6-8 associates together and listen to an example of each person's phone call. The associates get to receive positive feedback from their peers and hear how others on their team approach very common calls. For this client, it works very well and has been an efficient way to call coach. The client has been doing QA for many years and the associates are, for the most part, mature in their attitude towards call monitoring and quality service.

In one call this morning, the associate provided the customer with the answer to the stated request, then proceeded to offer the caller additional information that he would need. The customer was taken by surprise by the offer, but clearly acknowledged that the additional information was necessary. By anticipating the customer's question, the associate not only provided extra-mile service, but also saved himself from having to take another call when the customer eventually realized he would need it.

One great way to identify anticipated questions is to pay careful attention to the calls you receive. This can also be done by the supervisor or QA analyst as part of their monitoring duties, as a way of gleaning more than just a measurement of quality on a given call. If a customer is calling back after an earlier conversation, a red flag should go up. Ask yourself, "What information could I, or should I, have provided in the earlier call that would have eliminated the need for that follow-up phone call?"
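One way to operationalize that red flag, sketched below with hypothetical call-log fields and a seven-day window (both are assumptions for illustration), is to scan the call log for customers calling back shortly after a prior contact.

```python
# Sketch: flag likely repeat calls (callbacks within 7 days of a prior call).
# The log fields, dates, and window are illustrative assumptions.
from datetime import datetime, timedelta

call_log = [
    {"customer_id": "C-101", "when": datetime(2024, 3, 1, 9, 15)},
    {"customer_id": "C-202", "when": datetime(2024, 3, 1, 10, 0)},
    {"customer_id": "C-101", "when": datetime(2024, 3, 4, 14, 30)},  # callback
]

def flag_callbacks(calls, window=timedelta(days=7)):
    """Yield calls placed within `window` of the same customer's prior call."""
    last_seen = {}
    for call in sorted(calls, key=lambda c: c["when"]):
        prev = last_seen.get(call["customer_id"])
        if prev is not None and call["when"] - prev <= window:
            yield call  # candidate for the "what should we have said?" review
        last_seen[call["customer_id"]] = call["when"]

for call in flag_callbacks(call_log):
    print(call["customer_id"], call["when"])  # C-101 2024-03-04 14:30:00
```

Each flagged call is a coaching opportunity: pull the earlier recording and ask what information, offered then, would have made the second call unnecessary.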

Anticipating related questions is a win-win for customer and company. Customers receive information they may not even know they need, and the company eliminates unnecessary phone calls in the future!

Creative Commons photo courtesy of Flickr and frankblacknoir

But, You’re NOT the Customer

My wife and I are two very different people. Like many married couples, opposites attracted. At least, personality-wise. My wife is a bit of a lion. You always know exactly where she stands because she'll tell you. When she's upset, she roars. It's actually an admirable trait. I rarely have to guess where she stands. On the other hand, I'm much more of a Golden Retriever (I'm flashing on a scene from "When Harry Met Sally" when Meg Ryan says "Is one of us a DOG in this scenario?!"). I'm a people pleaser, so I tend to hide my true emotions from people in a given moment.

When dealing with customer service situations, there is a distinct difference in the ways my wife and I react. My wife will make it very clear how she feels about a given situation. She will be very up front with the CSR and explain exactly what she expects, when she expects it, and what she will do if the situation isn't remedied. On the other hand, I will sit there, quietly smile, and nod my head. Then I'll quietly walk away and never do business with the company again….ever.

I share this because understanding human nature is important to QA. It's very common for me to hear supervisors and QA coaches analyzing a call based on their perception of what a customer might have been thinking…

  • "The customer was like…"
  • "If I were the customer, I would…"
  • "I think this customer…"

Granted, if you have my wife on the line, you're likely going to hear exactly what the customer was thinking. (You still have to be careful; some customers will say one thing on a call but behave completely differently in their future purchase intent.) But if you have me on the line, you'd never know. That's why an objective QA process sticks closely to measuring things that you can hear (or not hear) and see (or not see) in the CSR's behavior. If you're fortunate, you've got some reliable research data that provides you with a picture of what your customers, in general, expect when they call. But even with research data, there are always outliers in a pool of customers. Trying to divine what a given customer was thinking or feeling is a slippery slope.

Remember, you're NOT the customer on that call. QA is not a magic 8 ball peering into the mind of each customer on each call. Reliable QA defines, based on reliable data, which behaviors are likely to have the greatest impact on overall customer satisfaction if demonstrated consistently and done well. Then it measures if those behaviors are consistently demonstrated over time and a valid sample of calls, and it uses that data to coach CSRs towards better and more consistent performance on those defined behaviors.

Iowa Telecom’s Healthy Mix of Service & Sales

Iowa Telecom. I had a great time Wednesday at the Association of Contact Center Professionals (ACCP) get together at Iowa Telecom's (ITC) Newton, Iowa contact centers. ITC was a fantastic host and gave those of us in attendance a terrific overview of their contact center operations along with a tour of their facilities.

Our group has worked with ITC for many years providing Customer Satisfaction surveys. We also helped them start their QA process and get it off the ground. I have to admit that I felt some pride as I toured the facilities and saw what a great job their team was doing.

One of the things that I've witnessed ITC doing well is to merge cross-sells and up-sells into their customer service environment. It is an example of how the "cost center" mentality normally associated with Customer Service can evolve into a revenue generator for the company. A couple of high-points:

  • The CSAT research our team provides ITC clearly established that a good percentage of ITC customers are sometimes or always willing to hear up-sell or cross-sell offers if their primary issue has been resolved. This knowledge provided a firm foundation on which to build their up-sell process.
  • The management team has done a good job of establishing realistic guidelines for when CSRs should, or should not, present offers – and which offers to present. The up-sells are natural value adds to the customer's communication needs (not an irrelevant add-on like time-shares in Jamaica).
  • The QA team focuses on a healthy mix of customer service skills and sales skills. Sales opportunities are aggressively tracked right along with soft skills and resolution.
  • Utilizing a combination of outbound surveys and post-call IVR surveys, ITC is keeping their finger on the pulse of the customer. If there is a shift in the winds of customer satisfaction, they should know and respond.

We all know that great customer service builds customer loyalty. Leveraging that into immediate sales opportunities is a delicate balance. Iowa Telecom is walking the tightrope well.

Thanks to Tim Lockhart and the Iowa Telecom team for being such generous hosts. Thanks to the local ACCP board for putting the event together. Thanks to Avtex for sponsoring the event.

Creative Commons photo courtesy of Flickr and jalexartis