I was in a calibration session this morning, and a service issue came up on a call that wasn't covered anywhere in the QA scale. It was interesting to watch the mental gymnastics of the group as everyone tried to figure out where to address the issue on the scale. There were multiple suggestions for where to "stick it", but in each case it was like forcing a square peg into a round hole. It just didn't fit.
When you run into a clear service element that isn't covered on your QA form, it's tempting to just "make it fit", but that mentality creates future problems:
- You have to expand the definition of the element you're forcing it into, which muddies that definition now and in the future. A cloudy definition only leads to longer, more confusing calibration sessions and more difficult call analysis.
- The CSR will scream bloody murder when you try to explain why you scored them down. It won’t make sense and they will be right. I’d question it, too, if I were in their shoes.
- Because it doesn't fit, everyone will likely forget where you "stuck it", and when the situation comes up again, it will generate the whole discussion all over again. Arrrgghhhh! I don't like meetings, anyway. I especially don't like rehashing issues that have already been hashed.
So, what was the solution?
We opted to craft a new, clearly articulated element that addressed the situation in question, along with other similar problems. It will be easy to score because it's well defined (adding only a fraction of a second to call analysis). And because it's clearly addressed, we won't have to waste time in future calibration sessions figuring out where it goes.