I have been in QA calibration sessions with a handful of different clients over the past few weeks. In each one, the participants scored the call ahead of time, and the session began with everyone sharing their scores with the group. The call was then played, and the group went around the room to discuss where the scores differed. One comment I routinely heard at some point in every session:
"Oops. I gave credit for that. I didn’t catch it when I scored it, but now that I hear it again..."
Quality Assessment is a human process, and we all know that human beings aren’t perfect. Just as every CSR will have a "clinker" now and then, every QA analyst will miss an element or two. Nevertheless, every QA analyst has an obligation to the CSR and the customer to be conscientious in their analysis. We control data that can and will have far-reaching impact. To that end, it’s important that we take measures to monitor our own performance:
- Avoid scoring calls during "live listening" unless it is a simple spot check that does not impact the CSR’s performance review. The margin of error on live listening is too high to be reliable.
- Don’t score a call on one hearing. Once again, the margin of error is too high.
- Don’t multi-task while you are scoring. While your brain is distracted, you will miss something important. If you’re interrupted while scoring, go back and start the call over again.
- Track QA analyst performance in calibration. By recording pre-calibration scores and comparing them against the consensus score, over time you can surface trends that serve as a reality check for the QA analyst.
- Perform regular audits. An audit of your QA team’s analysis by an objective party can unearth blind spots in your process and provide healthy accountability for your QA team.
- When you find yourself in calibration saying "oops, I didn’t catch that", ask yourself why you missed it. Try to diminish those "misses" by identifying why they happen, and then alter your behavior accordingly.
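For the calibration-tracking point above, even a simple script can surface trends. This is a minimal sketch with invented sample data and a hypothetical record format (analyst, pre-calibration score, consensus score); your own tracking spreadsheet or QA tool will differ.

```python
# Minimal sketch: compare each analyst's pre-calibration scores against
# the consensus score to spot scoring bias over time.
# Sample data is invented for illustration.
from statistics import mean

# (analyst, pre-calibration score, consensus score)
records = [
    ("Ana", 92, 88),
    ("Ana", 85, 86),
    ("Ben", 78, 88),
    ("Ben", 80, 86),
]

def average_deviation(records):
    """Average signed deviation from consensus, per analyst.

    A positive value suggests the analyst tends to score more
    leniently than the group; a negative value, more harshly.
    """
    by_analyst = {}
    for analyst, score, consensus in records:
        by_analyst.setdefault(analyst, []).append(score - consensus)
    return {analyst: mean(devs) for analyst, devs in by_analyst.items()}

print(average_deviation(records))
```

Run over a quarter's worth of calibration sessions, a consistently positive or negative deviation is exactly the kind of "reality check" trend worth a coaching conversation.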