Most call centers have some version of “calibration”: a group gets together to evaluate the same call and make sure they would all score it the same way. It also reveals areas of the scale that need to be updated, altered, or clarified. I always remind call center personnel that a Quality Assessment (QA) scale is not the Constitution and it’s not Holy Scripture; it needs to be easy to alter and adapt as the call center culture changes. Of course, those changes need to happen strategically and with proper timing so that the front line doesn’t start feeling like someone is playing the QA version of “bait and switch” on them. A few other calibration basics:
- Everyone should score the call individually before the calibration session and report their score before the call is played or discussed. If you just get together, listen, and discuss, you’ll never know how people would really have scored it. You’ll have strong communicators who frame the discussion and quiet followers who, though they disagree, keep their mouths shut.
- Track the beginning scores and the “consensus” score you reach at the end. By tracking these scores over a long period, you can see trends: who tends to be a “QA liberal” and who tends to be a “QA nazi.” You can even use the results in managing QA analyst performance and incentives.
- The QA version of “minutes” should be kept and cataloged so that, when the same issue crops up seven months later, you can quickly and easily recall what was discussed and decided.
- Leaders should lead. There are times when a call has to be made: there are two ways of looking at an issue, neither is clearly right or wrong, but someone has to decide. Too often, this decision is left to a democratic vote, and once again the loudest voices in the group tend to win. This is the perfect time for a capable manager to listen carefully, then step forward and say, “Folks, in keeping with the mission of this company, we are trying to drive x, y, z with this process. I’ve heard both sides, and I understand what you’re saying. Nevertheless, we’re going to score the CSR down in these situations because…” When management uses these opportunities to lead, the QA team tends to walk out with a clear understanding of the decisions made and how to proceed. Too many times I’ve watched a calibration become a free-for-all that ends in a vote; those who “lost” the vote walk out determined that it won’t change the way they score calls.
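The score-tracking idea above is easy to automate. Here is a minimal sketch of how you might compute each analyst’s average deviation from the consensus score over many sessions; the function name, record layout, and the sample numbers are all illustrative assumptions, not part of any particular QA tool.

```python
# Hypothetical sketch: compare each analyst's initial score against the
# session's consensus score to spot persistent leniency or strictness.
from collections import defaultdict

def analyst_bias(records):
    """records: iterable of (analyst, initial_score, consensus_score).

    Returns each analyst's average deviation from consensus:
    positive = tends to score high ("QA liberal"),
    negative = tends to score low (the strict end of the scale).
    """
    totals = defaultdict(lambda: [0.0, 0])  # analyst -> [sum of deviations, count]
    for analyst, initial, consensus in records:
        totals[analyst][0] += initial - consensus
        totals[analyst][1] += 1
    return {a: dev_sum / n for a, (dev_sum, n) in totals.items()}

# Example calibration history (made-up numbers)
history = [
    ("Ana", 92, 88), ("Ana", 85, 80),
    ("Ben", 70, 88), ("Ben", 75, 80),
]
print(analyst_bias(history))  # Ana trends high (+4.5), Ben trends low (-11.5)
```

A simple report like this, run quarterly, gives you an objective starting point for the performance and incentive conversations mentioned above instead of relying on impressions from the loudest voices in the room.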
When done well, calibration sessions can be an indispensable tool for driving objectivity and unity into the QA process. When done poorly, they can be hair-pulling crazymakers that spread doubt and division throughout the organization. Which one have you experienced?