Our group recently performed an audit of our client’s internal quality process. In a QA audit, our team typically analyzes a sample of calls that have already been scored by the client’s Quality or Supervisory team. By analyzing the same calls using the client’s internal QA scale, our audit typically pinpoints several improvement opportunities. An audit can reveal:
- QA analysts or Supervisors who are unduly harsh in their analysis
- QA analysts or Supervisors who are unduly lenient in their analysis
- Areas of the QA scale which are creating confusion among analysts and CSRs
- Elements within the scale which are driving calibration problems
- Policies or procedures which are undermining the effectiveness of the program
For example, in our recent audit we looked at the dates and times on the Supervisors’ QA reports. It quickly became apparent that most supervisors were waiting until the last possible minute before starting their QA analysis for the month. They then rifled through their assigned calls. Elements were easily missed. The analysis was shoddy and the results were unreliable.
I have witnessed many a call center manager who simply wants a quality report on his or her desk once a month. Typically, they just want a number. I’ve even witnessed call center managers who will say to their teams, "I don’t care how you do it. I just want a report with a ’95’ or better on my desk on the last day of each month." The number is never questioned. The methodology used to derive it is given no consideration. Worse still, the question is never asked: "Is the process used to analyze the calls actually having an impact on front-line service?"
If your quality program is about providing a report with a number, perhaps you should print the same report each month and stop wasting everyone’s time.