Category: Customer Research

Executive Reality Check: What We Say vs. What We Measure

A while back I read a fascinating article by Lou Gerstner in the Wall Street Journal. He was examining the response of a financial institution’s CEO to the debacle in which they found themselves. The CEO said that it was the employees who failed to honor the corporate culture of “putting the customer first.” Gerstner goes on to argue that what companies say they value in their mission and value statements often flies in the face of the corporate culture dictated from the executive suites:

What is critical to understand here is that people do not do what you expect but what you inspect. Culture is not a prime mover. Rather it is a derivative. It forms as a result of signals employees get from the corporate processes that structure their work priorities.

If the financial-reporting system focuses entirely on short-term operating results, that’s what will get priority from employees. If you want employees to care a lot about customers, then customer-satisfaction data should get as prominent a place in the reporting system as sales and profit.

I have seen the truth of Gerstner’s observations over and over again in our years of providing Customer Satisfaction (CSAT) research and Quality Assessment (QA) for companies large and small.

When I tell people about our group, it is quite common for them to respond by telling me that their company has a “quality” program. When I ask them to describe their program, however, they explain that they get regular reports on Average Speed of Answer, Average Call Time, Call Counts, and similar metrics. In other words, they are measuring quantity (of calls and time) and equating it with quality. To Gerstner’s point, you get what you inspect. When our group is given an opportunity to do a true quality assessment for such a company, we find Customer Service Representatives (CSRs) more focused on cranking through as many calls as they can, as quickly as they can, than on providing any kind of positive customer experience. Despite their company’s well-worded value statements about customer service, the CSRs know that their employer truly values efficiency, productivity, and cost containment because that’s what the employer measures.

Alternatively, when our group has enjoyed long-term partnerships with clients it is typically because the CEO and executive team truly believe in the long-term value and profitability of providing a superior customer experience. To that end, they understand the value of getting reliable data about what drives their customers’ satisfaction and the importance of objectively measuring the customer experience against those drivers. Front-line CSRs know that their company values providing a truly superior customer experience because that is what their employer measures.

It’s a simple exercise for any corporate executive. First, take a look at your company’s stated values and mission with regard to customer service and/or the customer experience. Next, take a look at what’s truly being measured on your front lines, where customers interact with your team. Is there a disconnect?

If you need an experienced partner in finding out what drives your customers’ satisfaction, how to measure quality the right way, and how to effectively communicate these things throughout your organization, then give us a call. It’s what we’ve been doing for over a quarter century. We’d love the opportunity to work with you and your team.

 

Tom Vander Well is a partner and Executive Vice-President of C Wenger Group. Tom has written about Customer Satisfaction and Quality Assessment on previous blogs (QAQnA and Service Quality Central) and was a contributing Customer Service blogger for the Des Moines Business Record.

A Representative CSAT Sample is Crucial

One of the keys to getting reliable Customer Satisfaction (CSAT) data is to make sure that you have a representative sample of the entire customer population you want to target. E-mail and on-line surveys are relatively cheap and easy to build and implement, but the sample of those who respond may not be representative of all your customers.

We are inundated with survey requests in our modern culture. There’s the annoying pop-up request to rate a website (one second after you’ve arrived on the page), the standardized post-call opt-in surveys when you call almost any major company’s Customer Service line, and the awkward moment when the auto dealer asks you to give them all great marks or they might lose their jobs. With this survey overload, it’s more common than ever for giant segments of a customer population to ignore surveys altogether. Survey responses are likely to be biased toward customers who are very angry, very happy, or who simply like to take surveys. This means there may be entire segments of your customer population who are not represented in your CSAT data.

The risk for you and your business comes when you start making tactical and strategic business decisions based on skewed CSAT data. 

There are ways to ensure representative sampling and proven techniques for getting reliable CSAT data. It requires good customer data to identify an appropriate pool of potential respondents and a well-crafted approach for requesting that your customers take the survey. If you’re conducting a personal, interactive survey, you need an experienced team who can put respondents at ease and get them talking.
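To make the idea of a representative sample concrete, here is a minimal sketch of proportional stratified sampling. This is an illustration, not our actual methodology, and the field names ("channel", "id") are hypothetical: the point is simply that each customer segment contributes to the sample in proportion to its share of the whole population, rather than in proportion to who happens to respond.

```python
import random
from collections import defaultdict

def stratified_sample(customers, segment_key, sample_size, seed=42):
    """Draw a sample whose segment mix mirrors the full customer population.

    customers:   list of dicts, each carrying a segment field
    segment_key: name of the field that defines the strata
    sample_size: total number of customers to survey
    """
    rng = random.Random(seed)

    # Group the population into strata by segment.
    strata = defaultdict(list)
    for c in customers:
        strata[c[segment_key]].append(c)

    sample = []
    for segment, members in strata.items():
        # Each segment contributes in proportion to its population share.
        n = round(sample_size * len(members) / len(customers))
        sample.extend(rng.sample(members, min(n, len(members))))
    return sample

# Example: a population that is 70% retail and 30% wholesale
# yields a 10-person sample that is 7 retail and 3 wholesale.
population = [{"id": i, "channel": "retail"} for i in range(70)] + \
             [{"id": i, "channel": "wholesale"} for i in range(70, 100)]
picked = stratified_sample(population, "channel", 10)
```

A self-selected e-mail survey, by contrast, fixes neither the sample size nor the segment mix, which is exactly how whole segments end up missing from the data.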

Having reliable customer data can make all the difference in making crucial business decisions that will affect your company’s future. It’s worth the investment to have our group work with you and your team to ensure that the sample is representative, the data is real, and the results are reliable.


C Wenger Group’s Research and Survey Services

Five Reasons to Outsource Your CSAT and QA Initiatives


Over the past decade more and more companies have adopted an attitude of “it’s cheaper for us to do it ourselves.” We have experienced an era of increased regulation, executive hesitation, and economic stagnation. Companies have hunkered down, tightened the purse strings, and found ways to play it safe. Customer Satisfaction (CSAT) research and Quality Assessment (QA) have been popular areas for businesses to do this given technology that makes it relatively easy to “do it yourself.”

Just because your team can do these things yourselves doesn’t mean that it’s a wise investment of your time and resources, nor does it guarantee that you’ll do it well. Based on a track record of mediocre (at best) renovations, my wife regularly reminds me that while I technically can do home improvement projects cheaper myself, she’d prefer that we pay an expert to do it well (and free me to invest my time doing more of what I do well so we can pay for it).

So why pay an outside group like ours to survey your customers, or to monitor your team’s calls and provide a Quality Assessment report on how they’re serving your customers?

I’ll give you five reasons.

  1. It gets done. Analyzing phone calls, surveying customers, and crunching data require a certain amount of discipline and attention to detail. When things are changing, fires are raging, and the needs of your own business are demanding a team’s time and attention, then things like crunching data or listening to recorded phone calls become back-burner issues. It’s common for people to tell me that they have their own internal QA team. When I ask how that’s going for them, I usually hear excuses for why it’s hard to get it done with all the urgent matters to which team members must attend. When you hire a third-party provider, it gets done. It’s what we’re hired to do.
  2. It gets done well. Our clients represent diverse areas of the market from manufacturing to retail to financial services. Our clients tend to be leaders in their industries because they are good at what they do. Developing expertise outside of their discipline isn’t a wise investment of resources, and (see #1) who has time for that? Our clients want to invest their time and resources doing what they know and do well. Measuring what is important to their customers, turning those things into behavioral attributes, analyzing communication channels, and coaching their agents how to improve customer interactions in ways that improve customer satisfaction are what we do well.
  3. You get an objective perspective. When providing audits of internal Quality Assessment teams or reviewing internally produced customer survey data, it’s common for us to find evidence of various kinds of bias. Employees at different levels of an organization have motivations for wanting data to look good for their employers, or bad with respect to coworkers with whom there are other workplace conflicts. I’ve observed supervisors who are overly harsh assessing the calls of employees with whom they have conflicts. Internal call analysts, wanting to be kind to their coworkers, will commonly choose to “give them credit [for a missed service skill] and just ‘coach them on it.'” Internal research data can be massaged to provide results that gloss over problems or support presuppositions that are politically palatable to the executive team. Our mission, however, is to gather and report objective, customer-centric data that give our clients a realistic picture of both customer perceptions and the company’s service performance.
  4. You get an outside perspective. It has been famously observed that “a prophet is not welcome in his hometown.” Internal data is often discredited and dismissed for any number of reasons, from (see #2) “What do they know?” doubts about the expertise of coworkers to (see #3) “They hate me” accusations of bias, which we’ve discovered are sometimes accurate and other times not. Front-line managers regularly tell me that they appreciate having our group provide assessment and coaching because we can’t be accused of being biased, and as outside experts we have no internal ax to grind. In addition, our years of experience with other companies provide insight and fresh ideas for handling common internal dilemmas.
  5. You can fire us with a phone call. “Do you know why I keep you around?” a client asked me one day. I took the bait and asked him why. “It’s because I take comfort in knowing I can pick up the phone and fire you whenever I want.” He went on to explain that he had no desire to hire an internal team to provide the survey data, quality assessment, and call coaching our team provided their company. Not only would he bear the expense and headaches associated with developing an expertise outside of their company’s discipline (see #2), but once employed he couldn’t easily get rid of them should they prove as ineffective as he expected they would be (see #1, #3, and #4). His point was well taken. Our group has labored for years with the understanding that our livelihoods hinge on our ability to continually provide measurable value to our clients.

Yes, you can technically generate your own CSAT survey or call Quality Assessment data. Technology makes it feasible for virtually any company to do these things internally. The question is whether it is wise for your company to do so. When calculating the ROI of internal vs. external survey and QA initiatives, most companies fail to account for the expenses associated with ramp-up, development, and training, nor do they consider the cost of employee time and energy expended doing these things poorly and producing questionable data and results.

Quality Assessment and Survey Data Efficiently Delivered to your Desktop


One of the many frustrations of corporate Quality Assessment programs is how to efficiently get the results and data to the agents so that they are aware of their performance and can make necessary efforts to improve. For years, c wenger group has been delivering our clients’ QA and research data, both team and individual agent reports, directly through our Service Quality Central web portal. Managers and agents can access their most recent data 24/7/365 with a provided username and password. Supervisors and managers can quickly access all of their agents’ individual QA data from one easy-to-use source, and agents are able to utilize downtime to pull up their QA data right at their desks.


Our Service Quality Central website can be branded for each client and offers the flexibility to provide more than just data. We are able to provide audio and video content. In some cases we’ve uploaded individual agents’ calls and coaching notes so that they can hear one of their own calls at their leisure, right at their own desks. Training videos, coaching handouts, and training manuals can all be shared with agents.


Our clients are busy doing what they do best and the tasks that make them profitable. They don’t want to be burdened with tasks that may be strategic and valuable, but aren’t in their area of expertise. Surveying customers, analyzing calls for quality and compliance, interpreting data trends, and reporting data to the front lines are activities that drain time, energy, and resources: resources that our clients would rather invest in their core business. That’s where c wenger group comes in.

We have over a quarter century of experience surveying customers, analyzing phone calls, and turning data into actionable, effective training and coaching solutions. Our Service Quality Central web portal is one of the ways we take the burden off of our clients and deliver effective, measurable value directly to their desktops on an on-going basis.

For more information, please drop us an e-mail at info@cwengergroup.com.

Three Great Examples from This Week’s Call Coaching

We had a great day call coaching two of our client’s sales teams on Wednesday. We provide integrated services for this client which include a customer survey, on-going call assessment and bi-monthly call coaching.

A few highlights from our coaching sessions:
  • Data from our ongoing Service Quality Assessment revealed that a common courtesy service skill had significantly declined for a team of Regional Account Managers. The objective data pinpointed the real reason for the decline: one agent’s performance had drastically deteriorated since the beginning of the year. We were able to share with this agent, from our survey results, how courtesy is not only a key driver of their customers’ satisfaction but also a key differentiator between his company and their competitors. We listened to a call together in which the courtesy skill was missed and discussed strategies for implementation. The agent left informed, motivated, and equipped. The ongoing call assessment will hold him accountable for making progress.
  • Another agent on the same team joined about 18 months ago. The agent transferred from an operations position and was new to the phones. In the beginning he was the poorest performer, struggling to learn the ropes. In our coaching session yesterday we were able to show that our data now quantifies that he is currently the team’s best performer. From “worst to first” in less than two years. “It’s all because of your coaching and data,” he said. “It’s thanks to you.” He did the hard work, but we’ll gladly take the compliment. It was great to celebrate with him. We love being the bearers of good news!
  • In a different session we coached a member of the Inside Sales team. The Service Quality Assessment revealed that the agent’s service performance had declined slightly in recent months. It wasn’t a major problem, but we wanted to address it before it got worse. Digging into the behavioral data, we could identify specifically which service skills had been demonstrated less consistently, listen to examples in actual calls, and discuss strategies for remembering and employing the skills. The agent will receive monthly data via our Service Quality Central web portal to track the progress.
We leave our time with the client feeling good about the measurable value we’ve been able to provide through real data that translates into actionable coaching and training. If you’d like real data to quantify which dimensions of service drive your customers’ satisfaction, and real numbers that reveal how your agents are performing in those daily moments of truth with customers, then give us a call (515.278.1516) or drop us an e-mail. It’s what we love to do!

We’ve Improved! Why Isn’t the Customer Satisfied?

I’m working with a client’s contact center this week and the data from our Customer Satisfaction and Quality Assessment (QA) reveal an interesting story. The Customer Service team has been working on improving their service delivery, and the behavioral data from our on-going QA work reveal clear improvement in specific areas of the call. The improvement in call quality, however, has not translated into corresponding improvement in Customer Satisfaction.

So, in my presentations this week, I dug into the data with the Customer Service Representatives (CSRs) and Call Center management team. Because our Customer Satisfaction surveys have helped identify key drivers of the client’s customer satisfaction, specifically when the customers call the contact center, we could compare the improved CSR service behaviors to the key drivers of customer satisfaction.

The bottom line is that the improvements were great and will have a positive impact operationally, but they weren’t necessarily in areas that their customers would immediately notice or reward. Some of the key service soft skills that will move the needle on customer satisfaction have been stagnant. If the contact center wants customers to reward them with increased satisfaction, they’ll have to keep doing a great job in their improved hard skills, but add to it the key soft skills for which their customers will reward them.

This week’s presentations have reminded me, once again, that focusing on industry standards, industry benchmarks, and best practices will only improve your customers’ satisfaction if those standards, benchmarks, and best practices are what your customers care about. Until you find out what matters to your customers and link those priorities to your QA program, you might just be moving the needle on all the wrong things.

“Must Learn BALANCE, Daniel-san”

I had an interesting post come across my RSS feed this afternoon from Syed Masood Ibrahim, in which he presented the following statement:

Nothing frustrates me more than the waste associated with counseling, monitoring and inspecting the agent for improved performance.  No organization can inspect in good service.

95% of the performance of any organization is attributable to the system and only 5% the individual.  This challenges the modern attempts by many contact centers to focus attention on the agent.  The problem is that the design of the work is so poor that an agent has little chance of being successful.  Blaming the individual for a bad system is nonsense.

I agree with Mr. Ibrahim that some contact centers place an inordinate amount of blame on the CSR for failures in the service delivery system. His supposition is correct. If the system is broken, it doesn’t matter how nice your CSRs are to the customer, the customer is going to walk away dissatisfied.

With all due respect to my colleague, however, I must disagree that CSR performance is only 5% of the equation. I believe the opposite of Mr. Ibrahim’s supposition is also true: if you have a perfect system, but your CSR communicates with the customer in a less-than-appropriate manner, you still have a dissatisfied customer. I’ve had the privilege of working with some of the world’s best companies; companies that manage their systems with an exceptional degree of excellence. In each case, the system will only drive customer satisfaction so far. It is the CSRs’ consistent world-class service that drives the highest possible levels of customer satisfaction, retention, and loyalty.

In the famed words of Mr. Miyagi, “Must learn balance, Daniel-san.”

A good quality program will identify and address both system-related and CSR-related issues that impede service quality. When our group performs a Service Quality Assessment for our clients, our analysts are focused on the entire customer experience. That includes both the agent’s communication skills and the system-related problems that stand in the way of the customer receiving resolution to his or her issue. The client’s management team receives an analysis of both CSR skill issues and policy/procedural issues that need to be addressed if the customer experience is going to improve.

The pursuit of service excellence requires attention to every part of the service delivery system.

Creative Commons photo courtesy of Flickr and bongarang

But, You’re NOT the Customer

My wife and I are two very different people. Like many married couples, opposites attracted. At least, personality-wise. My wife is a bit of a lion. You always know exactly where she stands because she'll tell you. When she's upset, she roars. It's actually an admirable trait. I rarely have to guess where she stands. On the other hand, I'm much more of a Golden Retriever (I'm flashing on a scene from "When Harry Met Sally" when Meg Ryan says "Is one of us a DOG in this scenario?!"). I'm a people pleaser, so I tend to hide my true emotions from people in a given moment.

When dealing with customer service situations, there is a distinct difference in the ways my wife and I react. My wife will make it very clear how she feels about a given situation. She will be very up front with the CSR and explain exactly what she expects, when she expects it, and what she will do if the situation isn't remedied. On the other hand, I will sit there, quietly smile, and nod my head. Then I'll quietly walk away and never do business with the company again... ever.

I share this because understanding human nature is important to QA. It's very common for me to hear supervisors and QA coaches analyzing a call based on their perception of what a customer might have been thinking…

  • "The customer was like…"
  • "If I were the customer, I would…"
  • "I think this customer…"

Granted, if you have my wife on the line, you're likely going to hear exactly what the customer was thinking. (You still have to be careful; some customers will say one thing on a call but behave completely differently in their future purchase intent.) But if you have me on the line, you'd never know. That's why an objective QA process sticks closely to measuring things that you can hear (or not hear) and see (or not see) in the CSR's behavior. If you're fortunate, you've got some reliable research data that provides you with a picture of what your customers, in general, expect when they call. But even with research data, there are always outliers in a pool of customers. Trying to divine what a given customer was thinking or feeling is a slippery slope.

Remember, you're NOT the customer on that call. QA is not a magic 8 ball peering into the mind of each customer on each call. Reliable QA defines, based on reliable data, which behaviors are likely to have the greatest impact on overall customer satisfaction if demonstrated consistently and done well. Then it measures whether those behaviors are consistently demonstrated over time and across a valid sample of calls, and it uses that data to coach CSRs toward better and more consistent performance on those defined behaviors.
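As a minimal sketch of that measurement step, here is one way observable, yes/no behavior scores across a sample of calls roll up into a per-behavior consistency rate. The behavior names and the simple pass/fail scoring model are hypothetical illustrations, not our actual QA instrument; the point is that everything scored is something an analyst can hear on the call, not a guess at what the customer was thinking.

```python
from collections import defaultdict

def behavior_consistency(scored_calls):
    """Roll scored calls up into a per-behavior demonstration rate.

    scored_calls: list of dicts mapping behavior name -> True/False,
    where True means the analyst heard the behavior on that call.
    """
    hits = defaultdict(int)    # calls where the behavior was demonstrated
    totals = defaultdict(int)  # calls where the behavior was scored
    for call in scored_calls:
        for behavior, demonstrated in call.items():
            totals[behavior] += 1
            if demonstrated:
                hits[behavior] += 1
    # Consistency rate = demonstrated calls / scored calls, per behavior.
    return {b: hits[b] / totals[b] for b in totals}

# Four analyzed calls, each scored on two audible behaviors.
calls = [
    {"used_name": True,  "offered_further_help": True},
    {"used_name": True,  "offered_further_help": False},
    {"used_name": False, "offered_further_help": True},
    {"used_name": True,  "offered_further_help": True},
]
rates = behavior_consistency(calls)
# Each behavior was heard on 3 of 4 calls, so each rate is 0.75.
```

Tracked over a valid sample and across time, rates like these are what give a coach something objective to work on with a CSR, rather than an argument about what a customer "must have" felt.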