For many companies, the months of November, December and January signal the end of a fiscal year. With the end of the year comes annual performance management reviews which often include a service quality component. It is quite typical for this service quality component to be a score from the call monitoring and coaching QA program (e.g. “your call may be monitored to ensure quality service”). After almost two decades of doing QA as a third party provider as well as helping companies set up and improve their QA programs, I can tell you that year end reviews bring heightened scrutiny to your QA process. This is especially true if monetary bonuses or promotions hinge upon the results.
Not to be a fearmonger (it is Halloween as I write this), but now is a good time to do a little self-check on your program:
- Sample: If your QA process is intended to measure a CSR’s overall service quality across the entire population of calls, make sure your sampling process is robust and you’ve collected a truly random sample of calls. This means that calls were not excluded based on their length and that they are representative across hours of the day, days of the week, and weeks/months of the year.
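As a minimal sketch of that sampling check (the call records and field names here are made up for illustration, not from any particular system): draw the sample from the full population without filtering on call length, then spot-check its spread across hours and days.

```python
import random
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical call population: every call stays in the pool,
# regardless of its duration.
random.seed(7)
start = datetime(2024, 1, 1)
calls = [
    {"id": i,
     "ts": start + timedelta(minutes=random.randrange(60 * 24 * 90)),
     "duration_sec": random.randrange(30, 1800)}
    for i in range(10_000)
]

# Simple random sample drawn from the FULL population.
sample = random.sample(calls, 200)

# Spot-check representativeness across hours of the day and days of the week.
hour_counts = Counter(c["ts"].hour for c in sample)
dow_counts = Counter(c["ts"].strftime("%A") for c in sample)
print(sorted(hour_counts.items()))
print(dow_counts.most_common())
```

If any hour or weekday bucket is empty or wildly over-represented, the sample (or the underlying call logging) deserves a closer look before scores are used for reviews.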
- Objectivity: Make sure you’ve checked your internal call analysts’ objectivity. This can be done with a simple analysis of the data: run averages of each analyst’s results, both for the overall score and for each element on your scorecard. By comparing individual analysts’ averages against the group average, you will see where objectivity issues may have clouded the results. This can also be checked through a robust and disciplined calibration program, though that is not done quickly.
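The averages comparison above can be sketched in a few lines. The analyst names, scores, and flagging threshold here are invented for illustration; the same pattern applies per scorecard element, not just the overall score.

```python
from statistics import mean

# Hypothetical overall QA scores, grouped by the analyst who scored the call.
scores = {
    "Analyst A": [88, 92, 85, 90, 87],
    "Analyst B": [91, 89, 93, 90, 92],
    "Analyst C": [72, 75, 70, 78, 74],  # noticeably below the group
}

# Group average across every score from every analyst.
group_avg = mean(s for rows in scores.values() for s in rows)

# Flag analysts whose average deviates from the group by more than a
# threshold (the cutoff is arbitrary; pick one suited to your scale).
THRESHOLD = 8.0
for analyst, rows in scores.items():
    avg = mean(rows)
    flag = "CHECK" if abs(avg - group_avg) > THRESHOLD else "ok"
    print(f"{analyst}: avg={avg:.1f} (group {group_avg:.1f}) -> {flag}")
```

A flagged analyst isn’t proof of a problem, just a signal to pull some of their scored calls and re-listen, or to bring the discrepancy into calibration sessions.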
- Bias: Make sure that your program is not set up in such a way that those who analyze the calls have an inherent interest in the outcome. A classic example is when supervisors score their own team’s calls. The team’s QA results reflect on the supervisor (in some cases the supervisor’s incentives hinge on the quality scores), so it is often hard for supervisors to be completely objective in their analysis. A good quality program rewards analysts for the objectivity of their results, not the results themselves.
- Collusion: If, month after month, the QA results consistently show that your entire team is performing at 98-100% of goal, then one of two things is likely true. 1) Your QA program has the bar set so low that almost anyone with blood pressure and a pulse can meet goal or 2) Everyone in the organization from the front-line CSR to the executive suite has colluded in making the company’s service quality look a lot better than it is. I get it. Sometimes it’s easier to pretend a problem doesn’t exist rather than doing the work to address it. Every organization that has more than a handful of CSRs can count on having a wide range of quality across their front-line ranks. It’s a human nature thing. If everyone is scoring almost perfectly, then something’s definitely rotten in the state of Denmark.
If your year-end is coming up, it’s a good idea for call center managers and executives to start asking some questions now so that there are no surprises when CSRs, unhappy with the results of their performance reviews, begin asking questions of their own. If you’re interested in an independent third-party audit of your current program, contact me. It’s one of the things we do.