Owning a Spatula Doesn’t Make Me a Chef

The latest trend in call center software suites is to add a “survey solution.”

I have no problem with software companies. I love the tools they provide. I have, however, watched call centers spend fortunes, both large and small, on software under the assumption that it will be a turn-key “solution” to their QA needs. Double-click and “BINGO!” you have a successful quality assessment program. Like all other tools, software is only as good as the expertise of the user.

I’m afraid the same problem will occur with survey “solutions.” You can gather all the data you want, but without a sound statistical methodology you’ll end up with a lot of numbers that have limited actionability and questionable validity.

Having a custom guitar does not make you a rock star (I tried). Having a cool nail gun does not make you a carpenter (tried that, too). Having survey software does not make you a statistician.


Buyer Beware! QA Software Considerations

It has become vogue for call centers to have the latest, greatest software for monitoring and scoring phone calls. For most companies, the decision to purchase one of these products is no small consideration. These software options can be a major investment, running well into six figures on just the initial capital outlay. I’ve worked with various call centers that have used products from different software vendors. My suggestion is that you take your time and give plenty of consideration before making an investment in software. A couple of thoughts:

  • Software is only a tool; you still have to know how to use it. You wouldn’t purchase bookkeeping software and expect it to make you financially solvent. In the same way, you can’t expect that having one of these software products will make you an expert in call quality assessment. Unfortunately, I’ve watched companies spend a lot of money on software with the expectation that they’ll simply turn it on and have instant, successful QA. Most of the time, there is a large hidden cost in manpower, time, and resources just to figure out how you’re going to use the software and program it with your own QA metrics.
  • Slide shows and slick sales presentations are no substitute for a real-life demonstration. Just last week a client told me how angry they were with their QA software vendor. The client had asked the vendor for a “hands-on” demonstration of the software update on which they were spending a considerable sum of money. The vendor flew in (at the client’s expense!) with nothing but a handful of slides and screen shots. The client was angry, and the vendor maintained a “you’ll get what we give you and like it” mentality.
  • Get good references. I asked one of our clients what she thought of the QA software her company had purchased a few years ago. “How do I like it?” she repeated, incredulously looking around the room. “Do you see anyone from the software vendor around here helping me? They’re not here helping me, you’re the one here helping me! How do you think I feel about them?” I wish her experience were isolated, but it’s not. It is not uncommon for contact centers to feel that they were courted by a vendor who disappeared after they said, “I do.” They spent hundreds of thousands of dollars on software that can’t just be returned with a receipt, only to find themselves in an unhappy marriage to the vendor.
  • Software experts are not necessarily QA experts. One of our clients was told by their software vendor that, if they wanted to purchase a certain add-on module, they must also pay for the vendor’s experts to help them with their QA scale. They were not given a choice, and the resulting QA scale, in our opinion, was a muddled, statistically invalid mess. Programming software to capture audio and data isn’t the same as measuring and analyzing the data that’s captured.
  • Beware of the money-pit. I remember a Looney Tunes animated short where Daffy Duck is a salesman demonstrating all these great home-improvement technologies to Porky Pig. He keeps warning Porky not to push the red button on the control panel. When Porky gives in to temptation and pushes the forbidden red button, his house is lifted thousands of feet in the air on a hydraulic lift. Daffy comes by in a helicopter and says, “For a small fee, you can buy the blue button to get you down!” Clients who have purchased QA software report a similar experience. You spend a ton of money on the product, you get it installed and integrated with your phone system – now you’re stuck with it. When it doesn’t quite do what you want it to, the software company will tell you they’ll be happy to turn on that feature – for a not-so-small fee.

Don’t get me wrong, I do believe these powerful software tools can be invaluable in helping you efficiently manage your QA program. In most cases, they actually make my job easier, so I don’t generally have a problem with them. It’s just that I’ve witnessed a lot of frustration from my clients. I would encourage anyone to do their homework, check references, and count the cost (not just the initial cost of the software, but the cost of developing internal QA expertise, additional licenses, frequent updates, and program downtime waiting for the vendor to provide after-the-sale service).



How Many Calls Should Your QA Analyze?

I spoke a few weeks ago at the LOMA conference on Customer Service. LOMA is a great organization that caters to the insurance and financial services industry, and my workshop was about “Avoiding Common QA Pitfalls.” I’m always interested in what I learn from these conferences. You get a feeling for the hot issues in call centers.

The question that seemed to raise the most discussion at LOMA was “How many calls should I score and coach per person?” A book could probably be written on the subject, but let me give you a couple of thoughts based on our group’s experience.

Are you using QA results in performance management? If you are, then the question really needs to be, “Do we have enough calls to be statistically valid and hold up to scrutiny?” If you are giving any kind of merit pay, incentives, bonuses or promotions based on agents’ QA scores, then you’ll want a valid number. Assuming your QA scorecard has a valid methodology (which is a big assumption, given that most QA scorecards we audit have major problems with their statistical validity), you’ll want at least 30 randomly selected calls. More is great, but the common statistical rule of thumb is that at roughly 30 randomly selected observations, an agent’s average score starts to behave predictably and a few unusual calls can no longer dominate the results. Let me say again, I’m talking minimums here.
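To see why sample size matters, here is a minimal sketch in Python using only the standard library. The scores are hypothetical (simulated on a 0–100 scale, not real QA data), and the 95% confidence interval uses a simple normal approximation, which is reasonable once you have around 30 or more calls:

```python
import math
import random
import statistics

# Hypothetical QA scores (0-100 scale) for one agent's 30 randomly
# selected calls. Real data would come from your scorecard.
random.seed(42)
scores = [random.gauss(85, 6) for _ in range(30)]

n = len(scores)
mean = statistics.mean(scores)
stdev = statistics.stdev(scores)  # sample standard deviation

# Normal-approximation 95% confidence interval for the agent's
# average score. With fewer calls, this interval gets wide fast:
# the margin of error shrinks only with the square root of n.
margin = 1.96 * stdev / math.sqrt(n)
print(f"mean score: {mean:.1f}")
print(f"95% CI: ({mean - margin:.1f}, {mean + margin:.1f})")
```

If a bonus threshold falls inside that confidence interval, the sample can’t reliably tell you which side of the line the agent is on – which is exactly the kind of scrutiny a performance-management program needs to survive.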

The “Wait ’til Mom & Dad are Gone” Syndrome. Many call centers coach each agent religiously once a week. That’s fine from a feedback point-of-view. But like kids who wait until they see their parents pull out of the driveway to start the party, agents often know that they only have to watch their service until they’ve been coached for the week. After that, all bets are off. Sometimes a seemingly random coaching schedule that keeps agents guessing is a good thing.

It might depend on the agent. In our politically correct world we are conditioned to do the same thing for everybody. Yet, some agents need little feedback or coaching. Score the calls, make sure they’re still stellar, and then let them know their scores and give them their bonus.

Why waste time, energy and money coaching them? That’s like the guy who washes his car every day whether it needs it or not (then parks it diagonally across two spots in the parking lot…I hate that guy!). Seriously, the number of coaching sessions is a separate issue from how many calls you should score to have a valid sample. Spend your coaching energy on agents who need it the most. It even becomes an incentive for some agents who dread the coaching sessions: “Keep your numbers up and you don’t have to be coached as much.”

From my discussions with QA managers at the LOMA conference, several were – in my opinion – coaching their people more than was necessary. We’ve seen agents greatly improve performance with quarterly and even semi-annual call coaching. Still, that’s not going to be enough for other agents.

There’s the challenge for you – finding out which agent is which and tailoring your QA process to meet each agent’s needs.
