Category: Customer Satisfaction

Buyer Beware! QA Software Considerations

It has become fashionable for call centers to have the latest, greatest software for monitoring and scoring phone calls. For most companies, the decision to purchase one of these products is no small consideration: these packages can be a major investment, running well into six figures for the initial capital outlay alone. I’ve worked with call centers that have used products from a variety of software vendors. My suggestion is that you take your time and give plenty of consideration before making an investment in software. A few thoughts:

  • Software is only a tool; you still have to know how to use it. You wouldn’t purchase bookkeeping software and expect it to make you financially solvent. In the same way, you can’t expect that having one of these software products is going to make you an expert in call quality assessment. Unfortunately, I’ve watched companies spend a lot of money on software with the expectation that they’ll simply turn it on and have instant, successful QA. Most of the time, there is a large hidden cost in manpower, time and resources just to figure out how you’re going to use it and to program the software with your own QA metrics.
  • Slide shows and slick sales presentations are no substitute for a real-life demonstration. Just last week a client told me how angry they were with their QA software vendor. The client had asked the vendor for a “hands-on” demonstration of the software update on which they were spending a considerable sum of money. The vendor flew in (at the client’s expense!) with nothing but a handful of slides and screenshots, and maintained a “you’ll get what we give you and like it” mentality.
  • Get good references. I asked one of our clients what she thought of the QA software her company had purchased a few years ago. “How do I like it?” she repeated, incredulously looking around the room. “Do you see anyone from the software vendor around here helping me? They’re not here helping me, you’re the one here helping me! How do you think I feel about them?” I wish her experience were isolated, but it’s not. It is not uncommon for contact centers to feel that they were courted by a vendor who disappeared after they said, “I do.” They spent hundreds of thousands of dollars on software that can’t simply be returned with a receipt, only to find themselves in an unhappy marriage to the vendor.
  • Software experts are not necessarily QA experts. One of our clients was told by their software vendor that, if they wanted to purchase a certain add-on module, they must also pay for the vendor’s experts to help them with their QA scale. They were not given a choice, and the resulting QA scale, in our opinion, was a muddled, statistically invalid mess. Programming software to capture audio and data isn’t the same as measuring and analyzing the data that’s captured.
  • Beware of the money pit. I remember a Looney Tunes animated short where Daffy Duck is a salesman demonstrating all these great home-improvement technologies to Porky Pig. He keeps warning Porky not to push the red button on the control panel. When Porky gives in to temptation and pushes the forbidden red button, his house is lifted thousands of feet in the air on a hydraulic lift. Daffy comes by in a helicopter and says, “For a small fee, you can buy the blue button to get you down!” Clients who have purchased QA software describe a similar experience. You spend a ton of money on the product, you get it installed and integrated with your phone system – now you’re stuck with it. When it doesn’t quite do what you want it to, the software company will tell you they’ll be happy to turn on that feature – for a not-so-small fee.

Don’t get me wrong, I do believe these powerful software tools can be invaluable in helping you efficiently manage your QA program. In most cases, they actually make my job easier, so I don’t generally have a problem with them. It’s just that I’ve witnessed a lot of frustration from my clients. I would encourage anyone to do their homework, check references, and count the cost (not just the initial cost of the software, but the cost of developing internal QA expertise, additional licenses, frequent updates, and program downtime waiting for the vendor to provide after-the-sale service).



Making Allowances for New CSRs

Many call centers struggle with how to handle new CSRs when it comes to quality assessment. There is more and more pressure to get CSRs out of training and onto the floor. The result is that CSRs are often taking calls before they are fully knowledgeable, and there will be a period of time when they struggle to deliver the level of service expected by the QA scorecard. So, what do you do?
First, you always want to be objective. Communicate the QA standard or expectation and score it accordingly. If they missed an element, mark it down. If it’s on the form, then you should always score it appropriately.
The customer doesn’t care that the CSR is new – they have the same expectations no matter who picks up the phone. Giving the CSR credit and simply “coaching” her on it will ultimately do a disservice to everyone involved. It tends to undermine the objectivity, validity and credibility of the QA program.
To sum it up, let your “yes be yes” and your “no be no.” It does, however, make sense to give new agents a nesting period to get up to speed. Rather than dumbing down the scale or pretending that they delivered better service than they actually did, it makes more sense to me to have a grace period. Some call centers will have a graduated performance expectation (e.g., by 60 days your QA scores have to average 85; by 90 days they have to average 90; and so on). Other call centers will allow new CSRs to drop a set number of QA evaluations from their permanent record to account for the outliers that frequently occur (e.g., “We expect you to perform at an average QA score of 95. I realize that newbie mistakes cost you on this evaluation, but over the first 90 days you get to drop the lowest three QA scores from your permanent record, so this may be one of the three.”). Either one of these strategies allows you to make allowances for rookie mistakes without having to sacrifice your objectivity.
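The “drop the lowest three” approach is simple enough to sketch. Here is a minimal, hypothetical illustration of how that rollup might work; the score values, the drop count of three, and the function name are my own assumptions, not a standard:

```python
# Hypothetical sketch of the "drop the lowest three" grace-period rule.
# Every call is still scored objectively; the allowance happens only
# when scores are rolled up into the permanent record.

def grace_period_average(scores, drop_lowest=3):
    """Average a new CSR's QA scores after dropping their lowest few."""
    if len(scores) <= drop_lowest:
        return None  # not enough evaluations yet to apply the rule
    kept = sorted(scores)[drop_lowest:]  # discard the lowest N scores
    return sum(kept) / len(kept)

# A rookie's first 90 days: a few newbie mistakes drag down the raw average.
first_90_days = [78, 96, 95, 82, 97, 94, 93, 71, 98, 95]
raw = sum(first_90_days) / len(first_90_days)
adjusted = grace_period_average(first_90_days)
print(round(raw, 1))       # raw average: 89.9
print(round(adjusted, 1))  # with grace period applied: 95.4
```

Notice that the underlying scores themselves never change; only the summary statistic makes the allowance, which is what preserves the scorecard’s objectivity.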


Where Do You Invest Your Research Dollars?

I was at a conference a few weeks ago and struck up a pleasant conversation with an insurance company’s Customer Service manager. In the course of conversation, I found out that our group had proposed doing some customer satisfaction research for his company. He explained to me that his superiors chose to spend their money on some different research, not because it would give him meaningful data about his customers, but because it would rank them against other call centers across the nation. The end result, by his admission, was that he doesn’t know any more about what will help him satisfy his customers and make tactical decisions to drive their loyalty – but he does have a nicely framed certificate in his office telling him his call center ranks 44th in the country.

I understand the temptation. It’s nice to be able to have some bragging rights and prove to your superiors how you compare to the call center down the street. The problem is, it doesn’t ultimately matter how you compare to the call center down the street. What is ultimately going to impact your company and your bottom line is how you measure up against the expectations of the customers who are calling you each day. If you’re going to spend money on research, invest in something that’s going to give you ROI in the form of valid, actionable data. Get results you can use to make strategic decisions about where you will invest your time, energy and resources. Where does your customer rank you? That’s the only profitable ranking there is for long-term success.

It Only Takes One Experience to Make a Customer for Life

Customers walk away from every call with a perception of your company that will impact their relationship with you. It only takes one experience to make a difference – that’s why they’re called “moments of truth.” When a customer’s expectations are exceeded, you might just win a customer for life.

Here’s a true story that happened to me.


How Many Calls Should Your QA Analyze?

I spoke a few weeks ago at the LOMA conference on Customer Service. LOMA is a great organization that caters to the insurance and financial services industry, and my workshop was about “Avoiding Common QA Pitfalls.” I’m always interested in what I learn from these conferences. You get a feeling for the hot issues in call centers.

The question that seemed to raise the most discussion at LOMA was “How many calls should I score and coach per person?” A book could probably be written on the subject, but let me give you a couple of thoughts based on our group’s experience.

Are you using QA results in performance management? If you are, then the question really needs to be, “Do we have enough calls to be statistically valid and hold up to scrutiny?” If you are giving any kind of merit pay, incentives, bonuses or promotions based on QA scores, then you’ll want a valid number. Assuming your QA scorecard has a valid methodology (which is a big assumption, given that most QA scorecards we audit have major problems with their statistical validity), you’ll want at least 30 randomly selected calls per agent. More is great, but there is a rule of thumb in statistics that a sample of about 30 is the minimum before you can trust that your average isn’t being driven by a few outliers. Let me say again, I’m talking minimums here.
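Two things matter in that paragraph: the calls must be randomly selected, and there must be at least 30 of them. A minimal sketch of pulling such a monitoring sample might look like this; the call IDs, function name, and the 400-call log are illustrative assumptions:

```python
# Hypothetical sketch: draw a random, minimum-size QA monitoring sample
# from an agent's call log. 30 is a floor, not a target.
import random

MIN_SAMPLE = 30  # rule-of-thumb minimum for a defensible per-agent sample

def monitoring_sample(call_ids, n=MIN_SAMPLE, seed=None):
    """Randomly select n calls for QA scoring, without replacement."""
    if len(call_ids) < n:
        raise ValueError(
            f"Only {len(call_ids)} calls available; need at least {n}."
        )
    rng = random.Random(seed)
    return rng.sample(call_ids, n)

# e.g. an agent handled 400 calls this month; score a random 30 of them
calls = [f"call-{i:04d}" for i in range(400)]
sample = monitoring_sample(calls, seed=42)
print(len(sample))  # 30
```

Sampling without replacement from the full month’s call log is the point: cherry-picking calls (or letting agents know which calls will be scored) is exactly what undermines the validity the paragraph above is arguing for.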

The “Wait ’til Mom & Dad are Gone” Syndrome. Many call centers coach each agent religiously once a week. That’s fine from a feedback point-of-view. But like kids who wait until they see their parents pull out of the driveway to start the party, agents often know that they only have to watch their service until they’ve been coached for the week. After that, all bets are off. Sometimes a seemingly random coaching schedule that keeps agents guessing is a good thing.

It might depend on the agent. In our politically correct world we are conditioned to do the same thing for everybody. Yet, some agents need little feedback or coaching. Score the calls, make sure they’re still stellar, and then let them know their scores and give them their bonus.

Why waste time, energy and money coaching them? That’s like the guy who washes his car every day whether it needs it or not (then parks it diagonally across two spots in the parking lot…I hate that guy!). Seriously, the number of coaching sessions is a separate issue from how many calls you should score to have a valid sample. Spend your coaching energy on agents who need it the most. It even becomes an incentive for some agents who dread the coaching sessions: “Keep your numbers up and you don’t have to be coached as much.”
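If you wanted to formalize “spend your coaching energy on agents who need it the most,” it could be as simple as a threshold rule. The score cutoffs and cadence labels below are made-up assumptions for illustration; every center would tune its own:

```python
# Hypothetical sketch: tailor coaching frequency to an agent's average
# QA score. The thresholds (95, 85) and cadences are assumptions.

def coaching_cadence(avg_qa_score):
    """Map an agent's average QA score to a coaching frequency."""
    if avg_qa_score >= 95:
        return "quarterly"  # stellar: keep scoring calls, coach lightly
    if avg_qa_score >= 85:
        return "monthly"
    return "weekly"         # struggling agents get the most attention

for name, score in [("Avery", 97), ("Blake", 88), ("Casey", 79)]:
    print(name, coaching_cadence(score))
```

The scoring sample size stays the same for everyone (that’s the validity question); only the coaching follow-up varies by need.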

From the discussion I had with some QA managers at the LOMA conference, there were several who – in my opinion – were coaching their people more than was necessary. We’ve seen agents greatly improve performance with quarterly and even semi-annual call coaching. Still, that’s not going to be enough for other agents.

There’s the challenge for you – finding out which agent is which and tailoring your QA process to meet each agent’s needs.
