- Conversationally use the customer’s name. “But, this one time, a customer got angry because I mispronounced his name – so I never use the caller’s name. Don’t want to make that mistake again.”
- Apologize if something has not met the customer’s expectations. “But, this one time, I had a customer who told me, ‘I don’t want your apology’ – so I’ve never apologized to a customer again.”
- Give customers a time frame for when you’ll get back to them with an answer. “But I never know when I’m going to hear back from accounting, can’t give a time frame because I might not meet it.”
It’s important to keep “rules” and “exceptions” in balance. Don’t make rules based on occasional exceptions. Base the elements of your QA scorecard on the general rules that apply to the vast majority of your calls. If an “exceptional” situation arises, you can deal with it on a situation-by-situation basis. For example, if the customer’s name were fifteen syllables long and difficult to pronounce, you would mark “use the customer’s name” as not applicable for that particular call and talk to the CSR about how to handle those situations in the future.
There are other ways to deal with exceptional calls and situations within calls. My point is simply that, when coaching CSRs, you have to continually communicate your understanding of exceptional situations and your willingness to treat those situations in a just and fair manner. I try to be equally rigorous in communicating the message that the QA elements can be easily performed in the vast majority of contacts and that they are expected to be.
There was a post in the Customer Service Reader that discussed declining Customer Satisfaction in the retail sector. Claes Fornell of the National Quality Research Center attributes the decline to companies pushing their staff to generate sales at the expense of service: “Too much pressure on staff to generate sales can have a detrimental effect on the quality of service that the staff is able to provide, which, in turn, has a negative effect on repeat buying. Since many retailers measure and manage productivity, but don’t usually have good measures of the quality of customer service [emphasis added], it seems possible that some companies put too much emphasis on productivity at the expense of service.”
We have been seeing this trend in call centers recently. We’ve seen a manager alter the weighting of his team’s QA scale so that the upselling component counted for over one-third of the CSR’s Overall Service score. The push for cross-selling and up-selling is on the rise, and companies are not always weighing the long-term effects that this can have on customer satisfaction and loyalty. Up-selling and cross-selling can be tremendous tools for revenue generation, but it is critical that companies measure their customers’ willingness to hear these offers. Even with customers who are open to hearing these offers, it is important that a customer’s issues and questions be resolved with exemplary soft skills before the offer is made. Without the resolution and soft skill components delivered prior to the sales pitch, the sales efforts will not be as effective and may serve to erode customer satisfaction and loyalty.
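To make the weighting issue concrete, here’s a minimal sketch of a weighted QA score. The component names, scores and weights are all hypothetical, not from any actual scorecard; the point is simply how shifting over one-third of the weight onto upselling can sink an otherwise excellent service call:

```python
# Hypothetical component scores for one call (0-100 scale).
# The agent handled the service side superbly but the upsell fell flat.
components = {"greeting": 95, "resolution": 90, "soft_skills": 92, "upsell": 40}

def overall(scores, weights):
    """Weighted average of component scores; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[k] * weights[k] for k in scores)

# A service-centered weighting vs. a sales-heavy one (upsell worth 35%).
balanced = {"greeting": 0.25, "resolution": 0.30, "soft_skills": 0.30, "upsell": 0.15}
sales_heavy = {"greeting": 0.15, "resolution": 0.25, "soft_skills": 0.25, "upsell": 0.35}

print(f"balanced weighting:    {overall(components, balanced):.2f}")
print(f"sales-heavy weighting: {overall(components, sales_heavy):.2f}")
```

The same call drops more than ten points under the sales-heavy weighting, which is exactly the kind of signal that tells agents the cross-sell matters more than the customer.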
I spoke a few weeks ago at the LOMA conference on Customer Service. LOMA is a great organization that caters to the insurance and financial services industry and my workshop was about “Avoiding Common QA Pitfalls.” I’m always interested in what I learn from these conferences. You get a feeling for the hot issues in call centers.
The question that seemed to raise the most discussion at LOMA was “How many calls should I score and coach per person?” A book could probably be written on the subject, but let me give you a couple of thoughts based on our group’s experience.
Are you using QA results in performance management? If you are, then the question really needs to be, “Do we have enough calls to be statistically valid and hold up to scrutiny?” If you are giving any kind of merit pay, incentives, bonuses or promotions based on QA scores, then you’ll want a valid number. Assuming your QA scorecard has a valid methodology (a big assumption, given that most QA scorecards we audit have major problems with their statistical validity), you’ll want at least 30 randomly selected calls. More is great, but there’s a rule of thumb in statistics that once you have 30 or more of anything, the sample average starts to stabilize; a few outlier calls can’t dominate the result. Let me say again, I’m talking minimums here.
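As a quick sanity check on the “at least 30 calls” rule, here’s a sketch (using made-up scores) of how you’d estimate how precise an agent’s average QA score actually is at n = 30. With 30 or more observations, the normal approximation is generally considered reasonable, which is what makes the confidence interval below meaningful:

```python
import math
import statistics

# Hypothetical QA scores for one agent (n = 30 randomly selected calls).
scores = [88, 92, 75, 90, 85, 95, 70, 88, 91, 84,
          79, 93, 87, 90, 82, 96, 74, 89, 85, 91,
          80, 94, 86, 88, 77, 92, 83, 90, 81, 87]

n = len(scores)
mean = statistics.mean(scores)
sem = statistics.stdev(scores) / math.sqrt(n)  # standard error of the mean

# 1.96 is the z-value for a 95% confidence level (normal approximation).
margin = 1.96 * sem
print(f"mean {mean:.1f}, 95% CI ±{margin:.1f} (n={n})")
```

If the resulting margin is wider than the gap between your bonus tiers, the sample can’t fairly distinguish one agent from another, and you need more calls before tying pay to the number.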
The “Wait ’til Mom & Dad are Gone” Syndrome. Many call centers coach each agent religiously once a week. That’s fine from a feedback point-of-view. But like kids who wait until they see their parents pull out of the driveway to start the party, agents often know that they only have to watch their service until they’ve been coached for the week. After that, all bets are off. Sometimes a seemingly random coaching schedule that keeps agents guessing is a good thing.
It might depend on the agent. In our politically correct world we are conditioned to do the same thing for everybody. Yet, some agents need little feedback or coaching. Score the calls, make sure they’re still stellar, and then let them know their scores and give them their bonus.
Why waste time, energy and money coaching them? That’s like the guy who washes his car every day whether it needs it or not (then parks it diagonally across two spots in the parking lot…I hate that guy!). Seriously, the number of coaching sessions is a separate issue from how many calls you should score to have a valid sample. Spend your coaching energy on agents who need it the most. It even becomes an incentive for some agents who dread the coaching sessions: “Keep your numbers up and you don’t have to be coached as much.”
From the discussions I had with some QA managers at the LOMA conference, several were – in my opinion – coaching their people more than was necessary. We’ve seen agents greatly improve performance with quarterly and even semi-annual call coaching. Still, that’s not going to be enough for other agents.
There’s the challenge for you – finding out which agent is which and tailoring your QA process to meet each agent’s needs.
I’m always struck by the mixture of motivations underlying many call center QA scorecards. Companies love to give lip service to delivering excellent customer service and improving customer satisfaction. Their QA scale, however, may reflect the designs of a cost-sensitive management team (driven by lowering costs with no regard to impact on the customer) or a sales-driven management team (driven by increasing sales with no regard to impact on the customer).
This mixture of messages frustrates front-line agents who see the hypocrisy: “You say you believe in customer service but all I’m told in QA is to keep it short or push the cross-sell.” It also frustrates the QA coach who must try to justify or explain the obvious, mixed message. Of course, it is possible to have a balanced methodology in which you satisfy customers and look for ways to be efficient and opportunistic. The key is to make sure the customer is not left out of the QA equation.
If you really care about keeping your customers coming back, you should start your entire QA program with a valid, objective customer satisfaction survey. The results can give you the data you need to impact customer satisfaction and retention.
Find out what is really driving your customers’ satisfaction and loyalty. Then use that information in building and weighting your QA scorecard. In fact, some of our surveys have measured the customer’s willingness to hear up-sells and cross-sells in a customer service interaction. The results are often surprisingly positive, and the data can be a powerful tool in building buy-in among the front-lines for your sales drive.
Oh, and by the way, it’s possible that your company already does customer sat research and you’ve never seen it. Just the other day we provided a call center manager with a copy of a survey our group had done for his boss a few years ago. He was never aware that it had been done and had not been given access to the information, even though it was critical for driving tactical decisions in his call center. I wish that this was an isolated incident, but my gut tells me it happens more often than not. It may be worth it to ask around. Of course, trying to decipher the data in many customer sat surveys we’ve seen can be a mind-numbing task – but that rant will have to wait for another post!
I scored a lot of calls today and it was really satisfying. The calls were fantastic. I mean, these calls were really World-class. I began working with this client years ago. They had no quality program in place. They had never monitored a call or coached their agents on service quality. Actually, when we began they could be described as decent. You might have said that they were very good – above average, even. That’s the thing. It’s one thing to help a customer service team who knows they’re bad. I think it’s a tougher job to take a team who’s doing well and motivate them to excellence.
This team is a good study in some of the keys to developing a consistent, world-class delivery:
- A management team that’s committed for the long-haul. This team had the same manager for several years. He was committed to developing a culture of quality and had the support of his superiors. No matter how much the front-line railed against the program or how wishy-washy the front-line supervisors may have been at times, the consistent message and commitment to quality has always been there.
- Outlast the critics. The QA program has not always been popular among the ranks. As is true whenever you start a quality program, there are plenty of crusty veterans who have been used to having free rein to do and say whatever they desire. Over the years, the naysayers on this team were quietly faced with three choices: get on board, retire or find another job. There are few of them left.
- Set a high expectation for new hires. This team has had turnover – like all call centers. This team implemented new-hire orientation training in which it’s clearly communicated that quality service and exemplary phone skills are mandatory.
- Individual accountability. The program for this client began by measuring and reporting team-based results. This was great to get the process started and to get front-line buy-in. You can only get so far with team-based reporting, however. This team let their program evolve until every team member received regular, individual feedback. Their QA scores are now a significant part of their annual performance review.
- Have fun rewarding performance. Through the years, this team has done a mixture of incentives. One year there were quarterly team rewards like going bowling for an hour at the end of the work day, taking a limo out for ice cream or having lunch in the board room. One popular incentive cost almost nothing – it allowed agents to throw a pie in their supervisor’s face. Another year, each agent who achieved a certain quality score got his/her name in a drawing for a major prize (think $1,500). Perhaps the most motivating reward I’ve witnessed, however, comes from this team’s senior manager. He sends an e-mail or voice mail to every agent who achieves World-class QA scores and thanks them for their efforts.
I hate to think how many thousands of phone calls I’ve scored from this team over the years. But listening to their calls today and hearing the difference…it feels pretty good.