I’m always struck by the mixture of motivations underlying many call center QA scorecards. Companies love to pay lip service to delivering excellent customer service and improving customer satisfaction. Their QA scale, however, may reflect the designs of a cost-sensitive management team (driven by lowering costs with no regard for the impact on the customer) or a sales-driven management team (driven by increasing sales with the same disregard).
This mixed message frustrates front-line agents who see the hypocrisy: “You say you believe in customer service, but all I’m told in QA is to keep it short or push the cross-sell.” It also frustrates the QA coach, who must try to justify or explain the obvious contradiction. Of course, it is possible to have a balanced methodology in which you satisfy customers while looking for ways to be efficient and opportunistic. The key is to make sure the customer is not left out of the QA equation.
If you really care about keeping your customers coming back, you should start your entire QA program with a valid, objective customer satisfaction survey. The results can give you the data you need to impact customer satisfaction and retention.
Find out what is really driving your customers’ satisfaction and loyalty, then use that information in building and weighting your QA scorecard. In fact, some of our surveys have measured customers’ willingness to hear up-sells and cross-sells during a customer service interaction. The results are often surprisingly positive, and the data can be a powerful tool in building front-line buy-in for your sales drive.
Oh, and by the way, it’s possible that your company already does customer sat research and you’ve never seen it. Just the other day we provided a call center manager with a copy of a survey our group had done for his boss a few years ago. He had never been aware that it was done and had not been given access to the information, even though it was critical for driving tactical decisions in his call center. I wish this were an isolated incident, but my gut tells me it happens more often than not. It may be worth asking around. Of course, trying to decipher the data in many customer sat surveys we’ve seen can be a mind-numbing task – but that rant will have to wait for another post!
I scored a lot of calls today, and it was really satisfying. The calls were fantastic. I mean, these calls were truly World-class. I began working with this client years ago. They had no quality program in place; they had never monitored a call or coached their agents on service quality. Even so, when we began they could already be described as decent. You might even have said they were very good – above average. That’s the thing. It’s one thing to help a customer service team that knows it’s bad. I think it’s a tougher job to take a team that’s doing well and motivate it to excellence.
This team is a good study in some of the keys to developing a consistent, world-class delivery:
- A management team that’s committed for the long haul. This team had the same manager for several years. He was committed to developing a culture of quality and had the support of his superiors. No matter how much the front line railed against the program, or how wishy-washy the front-line supervisors may have been at times, the consistent message and commitment to quality have always been there.
- Outlast the critics. The QA program has not always been popular among the ranks. As is true whenever you start a quality program, there were plenty of crusty veterans who had been used to having free rein to do and say whatever they desired. Over the years, the naysayers on this team were quietly faced with three choices: get on board, retire, or find another job. There are few of them left.
- Set a high expectation for new hires. Like all call centers, this team has had turnover. It implemented a new-hire orientation training in which it’s clearly communicated that quality service and exemplary phone skills are mandatory.
- Individual accountability. The program for this client began by measuring and reporting team-based results. This was great for getting the process started and earning front-line buy-in. You can only get so far with team-based reporting, however. This team let its program evolve until every team member received regular, individual feedback. QA scores are now a significant part of each agent’s annual performance review.
- Have fun rewarding performance. Through the years, this team has used a mixture of incentives. One year there were quarterly team rewards like going bowling for an hour at the end of the workday, taking a limo out for ice cream, or having lunch in the boardroom. One popular incentive cost almost nothing – it allowed agents to throw a pie in their supervisor’s face. Another year, each agent who achieved a certain quality score got his or her name in a drawing for a major prize (in the $1,500 range). Perhaps the most motivating reward I’ve witnessed, however, comes from this team’s senior manager. He sends an e-mail or voice mail to every agent who achieves World-class QA scores and thanks them for their efforts.
I hate to think how many thousands of phone calls I’ve scored from this team over the years. But listening to their calls today and hearing the difference…it feels pretty good.