I’ve noticed a pattern while sitting in on calibration sessions with various clients. It’s my theory of relativity in QA: scores (S) are the result (=) of avoiding two potential conflicts (−C²).
The two critical factors in the equation are:
- Outcome (O) – You make a decision on a behavioral element based on the resulting outcome. For example, you’re scoring a call and the CSR’s voice tone was flat and robotic. You’re considering "dinging" them on this behavioral element, but then you check and find that marking them down will drop the Overall Service Score to 89.9. If that happens, the CSR won’t make their incentive. If they don’t make their incentive, they’ll be upset and argue the point. You don’t want to deal with the conflict, so you figure you’ll "just give it to them".
- History (H) – Let’s say you’re analyzing a call and the CSR was impatient and kept interrupting the customer. You should really "ding" them for this, but once again you know it will probably lead to a confrontation with the CSR. Then you remember that, in the past, the CSR was much worse. So, since their behavior is a relative improvement over past behavior, you "just give it to them".
The problem with both of these scenarios is that they destroy the objectivity of the process and the credibility of your program. The decision to give credit or mark down on a particular behavioral element should be a simple comparison against the standards, the behaviors you’re attempting to drive with the QA scale for that element. Factoring in the resulting score or the CSR’s past behavior turns an objective decision into a relative one, based on criteria outside the scope of the QA scale.
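The contrast can be made concrete in a few lines of code. This is a hypothetical sketch, not any real QA system: the point values, the 90-point incentive threshold, and all function names are illustrative assumptions. The objective version consults only the standard; the biased version peeks at the resulting overall score first.

```python
# Hypothetical sketch of the Outcome (O) conflict. All names and numbers
# (INCENTIVE_THRESHOLD, point values) are illustrative assumptions, not
# taken from any real QA program.

INCENTIVE_THRESHOLD = 90.0  # CSR earns the incentive at or above this score


def objective_score(meets_standard: bool, element_points: float) -> float:
    """Credit the element only if the behavior meets the standard."""
    return element_points if meets_standard else 0.0


def biased_score(meets_standard: bool, element_points: float,
                 other_points: float) -> float:
    """The anti-pattern: check what the overall score would be first."""
    objective = objective_score(meets_standard, element_points)
    resulting_total = other_points + objective
    # "Just give it to them" if the ding would cost them the incentive.
    if not meets_standard and resulting_total < INCENTIVE_THRESHOLD:
        return element_points  # conflict avoided, objectivity destroyed
    return objective


# Flat, robotic tone (standard not met); a ding leaves the CSR at 89.9 overall.
print(objective_score(False, 5.0))     # the standard decides: no credit
print(biased_score(False, 5.0, 89.9))  # the outcome decides: full credit
```

Note that the two functions disagree only when the element fails the standard and the total sits near the threshold, which is exactly where the pressure to "just give it to them" appears.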