While auditing QA programs I commonly find that the QA team provides feedback on a particular call and gives CSRs their overall scores, but rarely do they track or provide data on an element-by-element basis. CSRs may know their average overall service scores, but do they know which elements they commonly miss? Are they given data that track their performance on individual behavioral elements?
This is important for a couple of reasons:
- "I usually do that. I must have just missed it this time." I hear this statement in coaching sessions a lot. Call coaches may know in their gut that this isn’t true, but without the data to back it up they choose not to get into a potential conflict about it. Tracking the element data would allow the coach to say, "Out of the last 20 calls we’ve analyzed, you’ve missed this element 17 times. Let’s try to figure out what’s going on and remedy it."
- Providing the CSR with scores isn’t necessarily a bad thing, but if the only thing they get is a number, then they will focus on the number instead of on the behaviors they need to modify to improve their service delivery. It’s much more powerful to give them feedback on the particular elements they are missing that are driving the resulting score, and then help them track their performance on those elements.
- Often, the coaching session revolves around one particular call. CSRs need to understand how this particular call fits into the context of their previous performance. Were the missed elements the same ones the CSR commonly misses, or was this out of the ordinary? Has the CSR started ignoring one service element because he was so focused on another? Does the overall score reflect true behavioral improvement, or was it the result of an "easy" call?
In the tyranny of the urgent that seems to squeeze us all, it’s tempting to make the coaching session or QA process as simple as possible. Here’s the call. What do you think? Here’s your score. But sometimes simple and efficient adds up to ineffective.