We’ve had several clients over the years who have created QA scales in which the call analyst can mark that a particular behavior was "Not Applicable", but then the Customer Service Representative (CSR) is given credit for that particular behavior in the calculation of their quality score. In some cases, this is driven by call scoring software that won’t (or won’t easily) run the calculations for non-applicable attributes. Other times, the scorecard was created this way and it was never given much consideration.
Giving credit for "Not Applicable" elements creates problems on several levels. The core issue is that it diminishes the statistical reliability of your results. For the sake of simplicity, let’s say you have ten elements worth ten points each. On a given call, only five of them apply and the CSR missed one. The CSR got four-fifths, or 80 percent, of the applicable elements (40 points out of a possible 50). If we give credit for the five elements that really didn’t apply, the CSR gets 90 percent. That means you have created "noise" in the data: you’re not accurately measuring what the CSR actually did, because you’ve given credit for something that wasn’t even a factor in the call.
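The arithmetic above can be sketched in a few lines of code. This is a minimal illustration, not any particular QA vendor's calculation; the scorecard values and function names are hypothetical, with `None` standing in for a "Not Applicable" mark.

```python
# Hypothetical scorecard: 10 behaviors, 10 points each.
# On this call, 5 behaviors applied and the CSR missed 1 of them;
# the other 5 were marked "Not Applicable" (None).
scores = [10, 10, 10, 10, 0, None, None, None, None, None]

def score_excluding_na(marks):
    """Score the CSR against applicable behaviors only."""
    applicable = [m for m in marks if m is not None]
    return 100 * sum(applicable) / (10 * len(applicable))

def score_crediting_na(marks):
    """Give full credit (10 points) for each N/A behavior -- the flawed approach."""
    total = sum(10 if m is None else m for m in marks)
    return 100 * total / (10 * len(marks))

print(score_excluding_na(scores))   # 80.0 -- what the CSR actually did
print(score_crediting_na(scores))   # 90.0 -- inflated by N/A credit
```

The ten-point gap between the two results is exactly the "noise" described above: it comes entirely from behaviors that never occurred on the call.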
Not only does this create problems for the data, but it can diminish the effectiveness of your quality efforts. We have witnessed many situations in which a CSR consistently gets high quality marks because of all the credit they receive for non-applicable behaviors. CSRs have little motivation or challenge to improve because they figure, "I’m getting great scores. I must be doing all right!" If you took the "noise" out of the data, it would reveal that the CSRs have several key opportunities to improve.
For example, certain behaviors (like Hold elements) may rarely apply, and when they do apply, the CSR often misses them. But because the CSR is credited for the 90 percent of calls in which the customer was never placed on hold, the score does not truly reflect their performance on that behavior. The CSR can easily look at a score of 90 percent and think they are doing just fine at handling holds, when the truth is that they missed it every time on the 10 percent of calls to which it applied.
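The Hold example can be made concrete with the same kind of sketch. The marks below are hypothetical: ten monitored calls, with `None` meaning the customer was never placed on hold (N/A), and `False` meaning the CSR missed the hold behavior on the one call where it applied.

```python
# Hypothetical Hold element across 10 monitored calls:
# None = hold never occurred (N/A), True = passed, False = missed.
hold_marks = [None] * 9 + [False]  # applied once, and the CSR missed it

def na_credited_rate(marks):
    """Count every N/A as a pass -- the flawed approach."""
    passes = sum(1 for m in marks if m is not True and m is not False or m is True)
    return 100 * passes / len(marks)

def applicable_rate(marks):
    """Pass rate on applicable calls only; None if the behavior never applied."""
    applicable = [m for m in marks if m is not None]
    if not applicable:
        return None
    return 100 * sum(applicable) / len(applicable)

print(na_credited_rate(hold_marks))  # 90.0 -- looks fine on the scorecard
print(applicable_rate(hold_marks))   # 0.0 -- missed every time it applied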
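The Hold example can be made concrete with the same kind of sketch. The marks below are hypothetical: ten monitored calls, with `None` meaning the customer was never placed on hold (N/A), and `False` meaning the CSR missed the hold behavior on the one call where it applied.

```python
# Hypothetical Hold element across 10 monitored calls:
# None = hold never occurred (N/A), True = passed, False = missed.
hold_marks = [None] * 9 + [False]  # applied once, and the CSR missed it

def na_credited_rate(marks):
    """Count every N/A as a pass -- the flawed approach."""
    passes = sum(1 for m in marks if m is None or m is True)
    return 100 * passes / len(marks)

def applicable_rate(marks):
    """Pass rate on applicable calls only; None if the behavior never applied."""
    applicable = [m for m in marks if m is not None]
    if not applicable:
        return None
    return 100 * sum(applicable) / len(applicable)

print(na_credited_rate(hold_marks))  # 90.0 -- looks fine on the scorecard
print(applicable_rate(hold_marks))   # 0.0 -- missed every time it applied
```

A 90 percent mark and a 0 percent mark describe the same CSR on the same behavior; only the second one tells a coach where to focus.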
Another problem arises when senior management attempts to correlate quality scores with customer satisfaction numbers. We’ve watched many executives scratch their heads when they continually get reports with QA scores near 100, only to find that customers aren’t all that satisfied with the service they receive when they call.
When measuring quality, it’s critical to accurately measure only the behaviors that applied in a given interaction! To say that a behavior didn’t apply to the phone call but somehow does apply to the calculation of the overall service experience…well, that just doesn’t compute.