Free Webinar! A Beginner’s Guide to Call Monitoring and Quality Assessment

“Your call may be monitored for quality and training purposes” is a familiar phrase in today’s business world. For growing companies interested in beginning a call recording or quality program, the process can seem both confusing and daunting. This free webinar is intended to help companies that are exploring the development and implementation of a call recording and quality assessment program.

The webinar will be presented by Tom Vander Well, Executive Vice-President of C Wenger Group. Tom is a pioneer in the call monitoring and Quality Assessment industry and has over 20 years’ experience analyzing moments of truth between businesses and their customers. In this webinar, Tom will help participants think through the basic questions they should be asking. He will outline various methods for approaching both call recording and Quality Assessment, discuss their strengths and weaknesses, and present cost-effective, practical solutions.

The FREE webinar will be held July 13, 2017, at 12:00 p.m. CDT. Registration is limited to 25 participants, so register today! Visit:

http://www.videoserverssite.com/register/cwengergroup/registration


Lessons from the Weeds in TruGreen’s Treatment

My wife and I built a house a couple of years ago. We had to seed an entire yard in late 2015 and, like all new lawns, it has its issues. Early this spring, after one full growing season, I realized that I needed some professional help controlling the weeds and getting the yard healthy. I noticed that several of our neighbors use TruGreen lawn service, and TruGreen has made a huge marketing push this spring, so I went online to check them out and request a quote. Literally within a minute of submitting my online quote request, I received a phone call from a TruGreen salesperson. I was impressed.

One of the things that TruGreen made a big deal about on their website and in their sales pitch was the fact that my “Ph.D certified lawn specialist” would come, do a site analysis, take a soil sample, identify the type of grass we have, and discuss a site plan for my lawn with me. I care about my new lawn and I realize that I have some responsibility in its success. TruGreen even promised they’d visit between treatments, if needed, to ensure my satisfaction. The idea of a lawn specialist who would talk to me, answer my lawn care questions, and partner with me in making my lawn healthy was a big driver in convincing me to sign up.

Within a couple of weeks I received a phone call informing me that I would get my first visit the following day. A TruGreen “specialist” arrived and knocked on my door. When I opened it he quietly said “I’m going to treat your lawn” as he backed away from the door. Fine. I figured I’d let him do his thing and wait for my site analysis, soil sample, and a discussion of the plan when he was done. I never heard back from him, but I did receive a computer generated report on my door knob with what appeared to be stock information and instructions.

A couple of weeks later I received a call, and the Caller ID said it was from TruGreen. I figured maybe this was my call with the results of my site analysis and a chance to discuss the plan.

“Tom! This is your local TruGreen office here in Ankeny, Iowa. I understand you’re interested in some lawn care services,” he said.

I was confused. “Actually, I’m already signed up and I already had my first treatment,” I replied.

“Okay. Well, I must have gotten an old message. Sorry to bother you. You have a good day.” [click]

At this point, I wasn’t so impressed with TruGreen. However, almost 25 years in Customer Satisfaction (CSAT) research and Quality Assessment (QA) have taught me that even the best corporate service systems have their glitches. I wanted to give TruGreen a chance to see how they would respond to a sincere customer expressing dissatisfaction. I went to the website and contacted Customer Support using their online form. I explained my frustration and what I had both expected and experienced. Later that day, to their credit, I received a phone call from Holly on the TruGreen Customer Support team.

As a Customer Service QA professional, I can testify that Holly was a total pro (much like the sales associate who initially called me from TruGreen). If I were doing a C Wenger Group Service Quality Assessment (SQA) analysis of Holly’s call, she would have received a perfect 100. She was personable, conversational, and empathetic. She apologized, articulated a thorough review of the situation, and then assured me that the following morning I would receive a visit from my lawn care specialist between 7:30 and 9:30 a.m. She also advised me that my specialist was relatively new and a little shy. Nevertheless, she promised he would do the site analysis and soil sample. He would share the results with me, discuss my site plan, and answer my questions. And she gave me a 50% reduction on my second treatment.

The following morning at 8:30 my doorbell rang. As before, my specialist quietly said he was going to treat my lawn as he stood fifteen feet from my front door. That was it. This time I stepped out and walked down to him. I told him I was concerned about my lawn and the patches of clover that were growing. I wanted to know what the “plan” was.

“Clover’s hard to get rid of,” he said. “I’ll spray it really well today.” He added that it might take multiple applications to get rid of the clover. There was no mention of my site analysis. There was no mention of my soil sample. There was no site plan, no discussion of my lawn, and no mention of the concerns I’d discussed with Holly. I figured I’d see how things played out and returned inside. Thirty minutes later my TruGreen Lawn Specialist pulled away from the curb, having left his stock, computer-generated printout of what he’d done to my lawn on the door knob.

I contacted Customer Support once again on May 19 (ticket #1698933), referencing my previous ticket number. I repeated what Holly told me I could expect and then described what I had actually experienced. I reiterated that I simply wanted TruGreen to deliver on their promise. I asked that they either provide me with a lawn specialist who would communicate with me as advertised or be honest with me that I was stuck with what I’d received so that I could pay my bill, cancel my services, and search for other alternatives. The auto-reply stated TruGreen would respond as quickly as possible.

It has been over two weeks. I have yet to receive a reply from TruGreen Customer Support.

You can learn a lot about a company and the systemic issues that negatively impact their customers’ satisfaction with a relatively small sample of phone calls, emails, chats or other communications. My experience with TruGreen has me pondering several thoughts and assumptions…

  • TruGreen has a top-notch technology system with regard to national marketing, sales, and support. They are everywhere in the media. They have great ads and a well-articulated promise. They are quick to respond to any online quote request, and a well-trained salesperson gave me a great introduction. Likewise, my initial Customer Support experience was both responsive and top-notch.
  • TruGreen has local branches across the nation that deliver their lawn services. I’m not sure if they are independently owned franchises or corporately owned subsidiaries, but I quickly learned that the actual customer experience with TruGreen is highly dependent on the local TruGreen branch, its staff, and its abilities. The fact that my local Ankeny office called to sell me services I had already contracted, and that they themselves had already delivered, tells me that there is at least some disconnect between TruGreen Corporate and the TruGreen branch.
  • TruGreen corporate sales and support is at the mercy of the local branches to deliver a satisfactory experience and resolve actual customer issues. Holly in Grand Rapids could make all the promises and assurances she wanted, but if the local branch in Ankeny was unwilling or unable to deliver, the promise would remain hollow and unmet. The local branches, their communication with national sales and support, and their ability to deliver appear to be a crucial weak link.
  • I assume that TruGreen branches are struggling with an annual seasonal crunch exacerbated by their aggressive national sales efforts. Local branches must quickly hire and train “specialists” to meet increased demand. I have to assume that my specialist was part of an army of newly hired, quickly trained specialists who were rapidly deployed and are struggling to keep up.
  • I’m sure that most TruGreen customers are happy with a regularly scheduled visit from an anonymous lawn specialist who treats their lawn when they’re not home. I’m also reasonably sure that most customers are satisfied with the stock thermal paper report on their door knob. I accept that I may be among the few customers for whom the site analysis, soil sample, and partnership of a lawn specialist who communicates with me about my lawn is a key driver of my satisfaction.
  • The fact that I’ve not received any response from TruGreen’s typically efficient and responsive national support system leads me to suspect that they’ve either accepted that I’m a lost customer or have placed responsibility for resolving my issue on the local branch, which has not responded. Pardon the pun, but I’m left feeling like I’m just a pesky weed.

Our experiences as customers, both positive and negative, are opportunities to learn, grow, and continuously improve. That’s what C Wenger Group’s Service Quality Assessments are all about. My experience with TruGreen reminds me that great front-end communication with sales and support can only go so far. Customers will ultimately judge us by the actual experience that happens in the moment of truth when we’re interacting. For TruGreen, that is at the front door and on the lawn. I am also reminded that almost every customer service problem is rooted in a communication issue.

TruGreen simply needed their lawn specialist to spend 5-10 minutes communicating with me on the initial visit:

“Hi Tom. I’m Joe. If you’ve got a second, let me chat with you about your lawn. Tell me a little bit about your lawn and what concerns you have. I hear you and I understand. Here’s what I know having analyzed your lawn. Here’s what we’re going to do and what you can expect to see happening with your lawn. Here are a few suggestions I have for your mowing and watering that will make a big difference in us getting this lawn healthy.”

Once I became a dissatisfied caller, TruGreen simply needed someone to “come out as often as needed” to say:

“We’re sorry. We dropped the ball on you. We’re going to do what we promised to do in the first place, and then we’re going to follow up with you to make sure we get this right.”

As a customer, a company’s silence can be deafening.

 


 

Tom Vander Well serves as Executive Vice-President of C Wenger Group and has led the group’s Quality Assessment, Training, and Coaching efforts for over 20 years. A long-time blogger, Tom’s QAQnA and Service Quality Central blogs were recognized for their content in the Customer Satisfaction, Customer Service, and contact center industries. Tom was also the contributing Customer Service columnist for the Des Moines Business Record’s IowaBiz blog. Tom consults with businesses, large and small, on improving customer satisfaction and customer service. tom@cwengergroup.com @cwengergroup

Executive Reality Check: What We Say vs. What We Measure

A while back I read a fascinating article by Lou Gerstner in the Wall Street Journal. He was examining the response of a financial institution’s CEO to the debacle in which the company found itself. The CEO said that it was the employees who failed to honor the corporate culture of “putting the customer first.” Gerstner goes on to argue that what companies say they value in their mission and value statements often flies in the face of the corporate culture dictated from the executive suites:

What is critical to understand here is that people do not do what you expect but what you inspect. Culture is not a prime mover. Rather it is a derivative. It forms as a result of signals employees get from the corporate processes that structure their work priorities.

If the financial-reporting system focuses entirely on short-term operating results, that’s what will get priority from employees. If you want employees to care a lot about customers, then customer-satisfaction data should get as prominent a place in the reporting system as sales and profit.

I have seen the truth of Gerstner’s observations over and over again in our years of providing Customer Satisfaction (CSAT) research and Quality Assessment (QA) for companies large and small.

When I tell people about our group, it is quite common for them to respond by telling me that their company has a “quality” program. When I ask them to describe their program, however, they explain that they get regular reports about Average Speed of Answer, Average Call Time, Call Counts, and similar metrics. In other words, they are measuring quantity (of calls and time) and equating it with quality. To Gerstner’s point, you get what you inspect. When our group is given an opportunity to do a true quality assessment for such a company, we find Customer Service Representatives (CSRs) more focused on cranking through as many calls as quickly as they can than on providing any kind of positive customer experience. Despite their company’s well-worded value statements about customer service, the CSRs know that their employer truly values efficiency, productivity, and cost containment because that’s what the employer measures.

Alternatively, when our group has enjoyed long-term partnerships with clients, it is typically because the CEO and executive team truly believe in the long-term value and profitability of providing a superior customer experience. To that end, they understand the value of getting reliable data about what drives their customers’ satisfaction and the importance of objectively measuring the customer experience against those drivers. Front-line CSRs know that their company values providing a truly superior customer experience because that is what their employer measures.

It’s a simple exercise for any corporate executive. First, take a look at your company’s stated values and mission with regard to customer service and/or the customer experience. Next, take a look at what’s truly being measured on your front lines where customers interact with your team. Is there a disconnect?

If you need an experienced partner in finding out what drives your customers’ satisfaction, how to measure quality the right way, and how to effectively communicate these things throughout your organization, then give us a call. It’s what we’ve been doing for over a quarter century. We’d love the opportunity to work with you and your team.

 

Tom Vander Well is partner and Executive Vice-President of C Wenger Group. Tom has written about Customer Satisfaction and Quality Assessment on previous blogs (QAQnA and Service Quality Central) and was a contributing Customer Service blogger for the Des Moines Business Record.

A Representative CSAT Sample is Crucial

One of the keys to getting reliable Customer Satisfaction (CSAT) data is making sure that you have a representative sample of the entire customer population you want to target. Email and online surveys are relatively cheap and easy to build and implement, but the sample of those who respond may not be representative of all your customers.

We are inundated with survey requests in our modern culture. There’s the annoying pop-up request to rate a website (one second after you’ve arrived on the page), the standardized post-call opt-in surveys when you call almost any major company’s Customer Service line, and the awkward moment the auto dealer asks you to give them all great marks or they might lose their jobs. With the survey overload, it’s more common than ever for giant segments of a customer population to ignore surveys altogether. Survey responses are likely to be biased toward those who are very angry, very happy, or who simply like to take surveys. This means there may be entire segments of your customer population who are not represented in your CSAT data.

The risk for you and your business comes when you start making tactical and strategic business decisions based on skewed CSAT data. 

There are ways to ensure representative sampling and proven techniques for getting reliable CSAT data. It requires good customer data to identify an appropriate pool of potential respondents and a well-crafted approach to requesting that your customers take the survey. If you’re doing a personal, interactive survey, you need an experienced team who can put respondents at ease and get them talking.
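As an illustration of the kind of technique involved, here is a minimal sketch, in Python, of stratified random sampling: drawing the same fraction from every customer segment so that no segment is left out of the survey pool. The segment names and record layout are illustrative assumptions, not any particular client’s data.

    import random
    from collections import defaultdict

    # Hypothetical customer records: (customer_id, segment) pairs.
    # Segments might be region, product line, or account size.
    customers = [
        ("C001", "retail"), ("C002", "retail"), ("C003", "commercial"),
        ("C004", "commercial"), ("C005", "retail"), ("C006", "enterprise"),
    ]

    def stratified_sample(records, fraction, seed=42):
        """Draw the same fraction from every segment so each is represented."""
        random.seed(seed)
        by_segment = defaultdict(list)
        for cust_id, segment in records:
            by_segment[segment].append(cust_id)
        sample = []
        for segment, ids in by_segment.items():
            k = max(1, round(len(ids) * fraction))  # at least one per segment
            sample.extend(random.sample(ids, k))
        return sample

    survey_pool = stratified_sample(customers, fraction=0.10)

Contrast this with a simple email blast: the blast reaches whoever chooses to respond, while a stratified pool guarantees that every segment appears in the sample in proportion to its size.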

Having reliable customer data can make all the difference in making crucial business decisions that will affect your company’s future. It’s worth the investment to have our group work with you and your team to ensure that the sample is representative, the data is real, and the results are reliable.


C Wenger Group’s Research and Survey Services

Five Reasons to Outsource Your CSAT and QA Initiatives


Over the past decade, more and more companies have adopted an attitude of “it’s cheaper for us to do it ourselves.” We have experienced an era of increased regulation, executive hesitation, and economic stagnation. Companies have hunkered down, tightened the purse strings, and found ways to play it safe. Customer Satisfaction (CSAT) research and Quality Assessment (QA) have been popular areas for this kind of cost-cutting, given technology that makes it relatively easy to “do it yourself.”

Just because your team can do these things in-house doesn’t mean that it’s a wise investment of your time and resources, nor does it guarantee that you’ll do it well. Based on a track record of mediocre (at best) renovations, my wife regularly reminds me that while I technically can do home improvement projects cheaper myself, she’d prefer that we pay an expert to do it well (and free me to invest my time doing more of what I do well so we can pay for it).

So why pay an outside group like ours to survey your customers, or to monitor your team’s calls and provide a Quality Assessment report on how they’re serving your customers?

I’ll give you five reasons.

  1. It gets done. Analyzing phone calls, surveying customers, and crunching data require a certain amount of discipline and attention to detail. When things are changing, fires are raging, and the needs of your own business are demanding a team’s time and attention, then things like crunching data or listening to recorded phone calls become back-burner issues. It’s common for people to tell me that they have their own internal QA team. When I ask how that’s going for them, I usually hear excuses for why it’s hard to get it done with all the urgent matters to which team members must attend. When you hire a third-party provider, it gets done. It’s what we’re hired to do.
  2. It gets done well. Our clients represent diverse areas of the market, from manufacturing to retail to financial services. Our clients tend to be leaders in their industries because they are good at what they do. Developing expertise outside of their discipline isn’t a wise investment of resources (see #1), and who has time for that? Our clients want to invest their time and resources doing what they know and do well. Measuring what is important to their customers, turning those things into behavioral attributes, analyzing communication channels, and coaching their agents to improve customer interactions in ways that improve customer satisfaction are what we do well.
  3. You get an objective perspective. When providing audits of internal Quality Assessment teams or reviewing internally produced customer survey data, it’s common for us to find evidence of various kinds of bias. Employees at different levels of an organization have motivations for wanting data to look good for their employers, or bad with respect to coworkers with whom they have workplace conflicts. I’ve observed supervisors who are overly harsh in assessing the calls of employees with whom they have conflicts. Internal call analysts, wanting to be kind to their coworkers, will commonly choose to “give them credit [for a missed service skill] and just ‘coach them on it.’” Internal research data can be massaged to provide results that gloss over problems or support presuppositions that are politically palatable to the executive team. Our mission, however, is to be accurate and objective in gathering and reporting data: customer-centric data that give our clients a realistic picture of both customer perceptions and the company’s service performance.
  4. You get an outside perspective. It has been famously observed that “a prophet is not welcome in his hometown.” Internal data is often discredited and dismissed for any number of reasons, from (see #2) “What do they know?” doubts about the expertise of coworkers to (see #3) “They hate me” accusations of bias, which we’ve discovered are sometimes accurate and other times not. Front-line managers regularly tell me that they appreciate having our group provide assessment and coaching because we can’t be accused of being biased, and as outside experts we have no internal ax to grind. In addition, our years of experience with other companies provide insight and fresh ideas for handling common internal dilemmas.
  5. You can fire us with a phone call. “Do you know why I keep you around?” a client asked me one day. I took the bait and asked him why. “It’s because I take comfort in knowing I can pick up the phone and fire you whenever I want.” He went on to explain that he had no desire to hire an internal team to provide the survey data, quality assessment, and call coaching our team provided his company. Not only would he bear the expense and headaches associated with developing an expertise outside of his company’s discipline (see #2), but once they were employed he couldn’t easily get rid of them should they prove as ineffective as he expected they would be (see #1, #3, and #4). His point was well taken. Our group has labored for years with the understanding that our livelihoods hinge on our ability to continually provide measurable value to our clients.

Yes, you can technically generate your own CSAT survey or call Quality Assessment data. Technology makes it feasible for virtually any company to do these things internally. The question is whether it is wise for your company to do so. When calculating the ROI of internal vs. external survey and QA initiatives, most companies fail to account for the expenses associated with ramp-up, development, and training, nor do they consider the cost of employee time and energy expended doing these things poorly and producing questionable data and results.

Who Knew Siri Can Coach Your Employees, Too?!


Just last week we posted about the rather disappointing realities two of our clients experienced compared to the bright promises with which speech analytics technology had been sold to them. In both cases they were sold on the idea of speech analytics replacing their human QA programs by analyzing every call and flagging calls in which there were problems. Our clients found that the technology itself took a much greater investment of time and resources than anticipated just to make it work at a basic level. The results were equally disappointing, requiring even more time and resources just to sort through the many false positives that the software flagged.

It was with great interest, then, that I received an MIT Technology Review article from a former co-worker this week. The article reports on what the writers claim is the latest technology trend, offered by Cogito, to revolutionize contact centers. Apparently speech analytics has been so successful and popular at accurately analyzing customer conversations that the technology experts now want to sell technology to do call coaching as well. Who knew that Siri could now offer us sage advice on how to communicate more effectively and connect more emotionally with our customers? By the way, according to their marketing, they think their technology might help you with your marriage, as well.

I have noted over the years just how much big technology drives our industry. Go to any Contact Center Conference and look at who is paying big bucks, commanding the show floor, introducing the latest revolutionary advancement, and driving the conference agenda. C’est la vie. That’s how the market works. I get it.

I have also noted, however, that technology companies have often sold us on the next big thing, even when it wasn’t. Does anyone remember the Apple Newton? LaserDiscs? Quadraphonic sound? Have you scanned a QR code lately? Ever heard of Sony Betamax?

Technology is an effective tool when utilized for the strengths it delivers. I am more appreciative than most of my colleagues of the advancements we’ve made in technology. I remember days sitting in a small closet jacking cassette tape recorders into an analog phone switch. I also know from a quarter century of coaching Customer Service Representatives (CSRs), Collections agents, and Sales representatives that human communication and interactions are complex on a number of levels. It isn’t just the customer-to-CSR conversation that is complex, but also the Call Coach-to-CSR conversation and relationship. Technology may be able to provide objective advice based on voice data, but I doubt that technology can read the personality type of the CSR. I don’t believe it can read the mood the CSR is in that day or the nonverbal cues they are giving off regarding their openness and receptivity to the information. I doubt it can learn the communication style that works most effectively with each CSR and alter its coaching approach accordingly.

But, I’m sure they’re working on that. Just check it out at your next conference. They’ll have a virtual reality demonstration ready for you, I’m sure.


Three Things They Won’t Tell You About Speech Analytics

Me: “Hey Siri? I’m bleeding badly. Call me an ambulance!”
Siri: “Okay. From now on I’ll call you ‘Anambulance.'”

Almost all of us have humorous, and often aggravating, anecdotes about trying to communicate with Siri, Alexa, or any of the other voice-prompted technology available to us. I am quite regularly thankful that no one is around to hear the tirades I scream at the disembodied, robotic female voice of my car’s voice-prompt technology. It amazes me, then, to know that businesses spend huge amounts of money on speech analytics technology as a way to replace their Quality Assessment (QA) programs.

Let me start with full disclosure. Our company, C Wenger Group, has spent a quarter century monitoring and analyzing our clients’ phone calls as a third-party QA provider. Sometimes our clients hire us to be their QA team, and other times they hire us to provide periodic audits and reality checks on their internal efforts. Over the past few years we have learned that speech analytics technology has become a competitor to our service. I can quickly name two clients who have dismissed our services in favor of speech analytics software.

The promise of speech analytics lies in its ability to monitor vast quantities of phone calls. Most human QA efforts, by comparison, utilize relatively small random statistical samples. Our data over the years reveal that our team can quite capably provide an accurate reflection of service performance with relatively few calls. I remember calling one skeptical client after our initial month listening to a minimal sample of calls for sales compliance. I gave him the names of three salespeople whom our call analysis identified as problems. He laughed and told me that all three had been fired the previous day, agreeing that our sample and analysis were, indeed, sufficient.
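As a back-of-the-envelope way to see why a modest random sample can be statistically meaningful, consider the standard margin-of-error formula for a proportion (textbook statistics, not our proprietary sampling method), sketched here in a few lines of Python:

    import math

    def sample_size(margin_of_error, z=1.96, p=0.5):
        """Calls needed to estimate a proportion within the margin of error.

        Standard formula n = z^2 * p * (1 - p) / e^2, using p = 0.5 as the
        worst case (maximum variance) and z = 1.96 for 95% confidence.
        """
        return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

    print(sample_size(0.10))  # 97 calls for +/-10% at 95% confidence
    print(sample_size(0.05))  # 385 calls for +/-5% at 95% confidence

Note that the required sample does not grow with call volume; whether a center handles ten thousand calls or ten million, a few hundred randomly sampled calls yield roughly the same statistical precision.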

Nevertheless, the idea of being able to catch the needle in the haystack has a certain appeal. Random samples don’t capture every instance of irate customers, lying representatives, and forbidden behaviors. That’s where tech companies and their (big-ticket) speech analytics software promise nervous executives a peaceful night’s sleep, knowing that every phone call can be monitored by computers, which will flag problem calls when they occur.

Just like Siri flawlessly hears my every spoken request and never fails to provide me with exactly the answer I was looking for.

I have followed up and spoken to both clients who dismissed our company’s services in favor of speech analytics. In one case, my contact admitted that they abandoned the technology after a year of unsuccessfully investing significant resources (money and man hours) trying to get it to provide meaningful results or value. In the other case my client contact admitted that the technology never worked, but that his company continued to play the political game of pretending it was working because they didn’t want to admit that they’d wasted so much money on something that wasn’t working. I have also spoken to representatives of other companies with similar words of warning. As with most technologies, it’s important to know what you are, and aren’t, getting before you sign on the dotted line.

My conversations with those who’ve employed speech analytics reveal three key things to weigh before making the technology investment.

It’s going to require a lot more work to set it up, monitor it, tweak it, and successfully utilize it than you think. At one industry conference I attended a forum of companies that were using speech analytics. I found it fascinating that all of the panelists admitted that the technology required far more time and energy than they anticipated when they purchased it. One company on the panel admitted that they hired five full-time employees just to make the technology work and to keep it working. Many people don’t realize that you have to teach the speech analytics software what to listen for, what to flag, and what to report. Then you have to continually refine it so that it’s catching the things you want it to catch and ignoring the things you don’t.

In many cases, this process is not intuitive. It’s more akin to computer programming. Operations associates who thought the technology would save them the time of personally analyzing phone calls find themselves spending even more time mired in IT issues related to it.
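To give a feel for the rule-authoring work involved, here is a hypothetical flagging rule sketched in Python. Real speech analytics products each have their own proprietary rule syntax; this is only an illustration of the kind of definition and upkeep the technology demands:

    import re

    # A hypothetical rule: flag calls where the caller asks to escalate
    # or threatens to cancel. Every pattern must be written, tested, and
    # continually refined by a human.
    ESCALATION_RULE = {
        "name": "supervisor_escalation",
        "patterns": [
            re.compile(r"\bspeak (to|with) (a|your) (supervisor|manager)\b", re.I),
            re.compile(r"\bcancel my (account|service)\b", re.I),
        ],
        "min_hits": 1,  # flag the call after this many pattern matches
    }

    def flag_transcript(transcript, rule):
        hits = sum(bool(p.search(transcript)) for p in rule["patterns"])
        return hits >= rule["min_hits"]

    # flag_transcript("I want to speak with a supervisor.", ESCALATION_RULE) -> True

Every phrasing the rule does not anticipate slips through, and every phrase it matches too eagerly becomes a false positive, which is why the refinement never really ends.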

The technology is going to give you a lot of false positives. I love that I can say “Hey, Siri” and my iPhone will come to life and ask what I need. I have also been annoyed and embarrassed at the number of times in normal conversation or business meetings that I say something my iPhone mistakenly hears as “Hey, Siri,” only to have it wake up, interrupt my conversation, and ask what I want. In similar fashion, you can expect that for every instance of the speech analytics software catching the right thing, it is going to make at least as many, if not more, mistakes.

One of my former clients told me that the speech analytic software they employed never worked as well as advertised. “Every time it flagged a call for us to listen to there was nothing wrong with the call,” he admitted. They quickly stopped listening to any of the calls flagged by speech analytics because they soon saw it as the proverbial child crying “Wolf!”
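Two standard measures make this “crying wolf” effect concrete: precision (of the calls flagged, how many were real problems?) and recall (of the real problem calls, how many were flagged?). A small sketch, using hypothetical counts for illustration:

    def precision_recall(true_pos, false_pos, false_neg):
        precision = true_pos / (true_pos + false_pos)  # flagged and correct
        recall = true_pos / (true_pos + false_neg)     # problems actually caught
        return precision, recall

    # Hypothetical month: the software flags 200 calls, human review
    # confirms only 40 of them, and 20 real problem calls go unflagged.
    p, r = precision_recall(true_pos=40, false_pos=160, false_neg=20)
    print(f"precision={p:.0%}, recall={r:.0%}")  # precision=20%, recall=67%

At 20% precision, four out of five alarms are false, which is exactly the kind of experience that taught my former client’s team to stop listening to flagged calls altogether.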

Speech analytics can monitor volume, pitch, and words that are said, but cannot intelligently analyze content across calls. Our team recently monitored a randomly sampled set of phone calls for a customer service team. The CSRs were articulate and professional in the words they used and the tone with which they communicated with callers. Across the calls, however, we quickly noted a pattern:

  • “Let me get you to the person who handles your account.”
  • “I don’t handle your area.”
  • “You’ll need to speak with…”

In various ways, using different words, many of the CSRs were refusing to help callers. They would immediately “pawn them off” (one customer’s words) on other CSRs or dump callers into voice mail. In some cases we heard veteran employees claim that they didn’t know how to do the most basic of customer service functions in an effort to avoid helping callers.

Our team quickly recognized that our client was struggling with a culture on their call floor in which CSRs tried to avoid helping callers (in the most professional-sounding way). Customers were being dumped into voice mail and transferred unnecessarily as CSRs played an internal game of “that’s not my customer, that’s your customer.” We addressed it with our client, citing examples. They quickly moved to address the issue and are already making significant progress toward changing behavior on the call floor.

I tried to imagine how I would tell a speech analytics program to catch such an occurrence. The ways that CSRs communicated that they couldn’t help were as varied as the CSRs themselves and their own communication styles. Customers’ frustration never escalated to shouting or profanity. It was all very subtle, and it required experienced analysts making connections across multiple calls to recognize the pattern of behavior. Speech analytics could never do that.

Like most technologies, speech analytics has its place and its purpose. For those companies that have the resources to successfully employ it, speech analytics can analyze vast quantities of interactions and flag, with varying degrees of accuracy, when certain words are spoken or certain levels of emotion are expressed. Those considering this technology as a replacement for a thorough and well-structured QA program should understand, however, that it has requirements and drawbacks that the salesperson will be quick to ignore or minimize.

Three Ways to Improve Your Quality Program in 2017

It’s still January and everyone is busy implementing goals for 2017. It’s not too late to take a good, long look at your contact center’s quality program with an eye to improving things this year. Here are three thoughts for taking your quality assessment (QA) to a new level.

Reevaluate the Scorecard

Most quality programs hinge on the quality of the criteria by which they measure performance. A few years ago there was a backlash against behavioral measurements (e.g., “Did the agent address the caller by name?”) as companies sought to avoid the calibration headaches and wrangling over definitions. True to human nature, the pendulum swung to the opposite end of the continuum: completely subjective measurement. Multiple behaviors gave way to two or three esoteric questions such as, “Did the agent reflect the brand?”

This shift to the subjective is, of course, fraught with its own problems. You can forget about having any objective data with which to measure agent performance. If your analyst is Moonbeam Nirvana, then you’ll get consistently positive evaluations complete with praise for what Moonbeam believes were your good intentions (and lots of smiley emoticons). If, on the other hand, your analyst is Gerhardt Gestapo, then your performance will always fall short of the ideal and leave you feeling at risk of being written up.

Measuring performance does not have to be that difficult. First, consider what it is that you really desire to accomplish. Do you want to measure compliance or adherence to corporate or regulatory requirements? Do you want to drive customer satisfaction? Do you want to make agents feel better about themselves? Any of these can be a defensible position from which to develop criteria, but you should start by being honest about the goal. Most scorecards suffer from misunderstood and/or miscommunicated intentions.

Next, be clear about what you want to hear from your agent in the conversation. Define it so that it can be easily understood, taught, and demonstrated.

Prioritizing is also important. While exhaustive measurement of the interaction can be beneficial, it is also time consuming and may not give you much bang for the investment of time and energy. If your priority is add-on sales, then be honest about your intention to measure it, define what you want to hear from your agents, and then focus your analysts on listening for those priority items.
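As a sketch of how behaviorally defined, prioritized criteria might be structured, here is a minimal Python example. The attributes and weights are purely illustrative, not a recommended scorecard:

    from dataclasses import dataclass

    @dataclass
    class Attribute:
        name: str        # the behavior being measured
        definition: str  # what the analyst must actually hear on the call
        weight: int      # heavier weight for priority items

    SCORECARD = [
        Attribute("greeting", "Agent states name and offers to help", 10),
        Attribute("uses_caller_name", "Agent addresses the caller by name", 5),
        Attribute("offers_addon", "Agent offers the relevant add-on product", 20),
    ]

    def score_call(observed):
        """Weighted percentage of demonstrated behaviors on one call."""
        earned = sum(a.weight for a in SCORECARD if observed.get(a.name))
        possible = sum(a.weight for a in SCORECARD)
        return 100 * earned / possible

    # score_call({"greeting": True, "offers_addon": True}) -> ~85.7

Because each attribute carries a written, audible definition and an explicit weight, analysts can calibrate against the same standard, and priority items (here, the hypothetical add-on offer) dominate the score.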

Look at Data for Both Agents and Analysts

One of the more frequently missed opportunities to keep your QA process on task is looking at data on how your analysts actually measured the calls.

Years ago our team was the third-party QA provider for several teams inside a global corporation, while internal teams managed the job for other locations. There was an initiative to create a hybrid approach that put the internal and external analysts together in sampling and measuring agents across all offices. When we ran the numbers to see how analysts were scoring, however, the internal analysts’ average results were consistently higher than the external analysts’. Our analysis of analyst data provided the opportunity for some good conversations about the differences in how we were hearing and analyzing the same conversations.

Especially in larger quality operations in which many analysts measure a host of different agents and/or teams, tracking analyst data can provide you with critical insight. When performing audits of different QA programs, it is quite common for our team to find that analysts who also happen to be the team’s supervisor readily sacrifice objectivity in an effort to be “kind” to their agents (and make their team’s scores look a little better to the management team). Likewise, we have also seen instances where data reveal that one analyst is unusually harsh in their analysis of one particular agent (as evidenced by the deviation of their scores from the mean). Upon digging into the reasons for the discrepancy, we often discover some personality conflict or bad blood between the two. The analyst, perhaps unwittingly, is using their QA analysis to passive-aggressively attack the agent.

If you’ve never done so, it might be an eye-opener to simply run a report of last year’s QA data and sort by analyst. Look for disparities and deviations. The results could give you the blueprint you need to tighten up the objectivity of your entire program.
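If your QA data can be exported with one row per scored call, that analyst report takes only a few lines. Here is a minimal sketch using Python’s pandas library; the file name and column names are assumptions that will vary by platform:

    import pandas as pd

    # Assumed export: one row per scored call, columns analyst, agent, score.
    df = pd.read_csv("qa_scores_2016.csv")

    overall_mean = df["score"].mean()
    by_analyst = df.groupby("analyst")["score"].agg(["mean", "std", "count"])
    by_analyst["delta_from_overall"] = by_analyst["mean"] - overall_mean

    # Analysts whose averages sit far from the group merit a closer look;
    # the one-standard-deviation threshold here is only a starting point.
    threshold = df["score"].std()
    outliers = by_analyst[by_analyst["delta_from_overall"].abs() > threshold]
    print(outliers.sort_values("delta_from_overall"))

The same grouping, applied to analyst-agent pairs instead of analysts alone, can surface the harsher, more personal discrepancies described above.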

Free Yourself from Software Slavery

As a third-party QA provider, our team is by necessity platform agnostic when it comes to recording, playing, and analyzing phone calls. We have used a veritable plethora of software solutions, from the telephony “suites” of the tech giants who run the industry like the Great and Powerful Oz to small programs coded for a client by some independent tech geek. They all have their positives and negatives.

Many call recording and QA software “suites” come with built-in scoring and analysis tools. The programmers, however, had to create the framework by which you will analyze the calls and report the data. While some solutions are more flexible than others, I have yet to see one that delivers the flexibility truly desired. Most companies end up sacrificing their desire to measure, analyze, and/or report things a certain way because of the constraints inherent in the software. The amazing software that the salesperson said was going to make things so easy now becomes an obstacle and a headache. Of course, the software provider will be happy to take more of your money to program a solution for you. I know of one company that, this past year, paid a big telephony vendor six figures to “program a solution” within the vendor’s own software, only to watch them throw up their hands in defeat and walk away (with the client’s money, of course).

Tech companies have, for years, sold companies on expensive promises that their software will do everything they want or need it to do. My experience is that very few, if any, of the companies who lay out the money for these solutions feel that the expensive promises are ever fully realized.

If your call data, analysis, and reporting are not what you want them to be, and if you feel like you’re sacrificing data/reporting quality because the software “doesn’t do that,” then I suggest you consider liberating yourself. If the tool isn’t working, then find a way to utilize a different tool. What is it we want to know? How can we get to that information? What will allow us to crunch the numbers and create the reports we really want? Look into options for exporting all of the data out of your software suite and into a database or an Excel-type program that will allow you to sort and analyze the data to get the information you want and need. Our company has always used Excel (sometimes in conjunction with other statistical software) because it’s faster, easier, more powerful, and infinitely more flexible than any packaged QA software we’ve ever tested.
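As a sketch of what that liberation can look like, here is a short Python example that turns a raw export into a report a packaged suite may refuse to produce. The export file and column names are assumptions; every platform’s export differs:

    import pandas as pd

    # Assumed raw export from the QA suite: one row per scored call.
    df = pd.read_csv("qa_export.csv", parse_dates=["call_date"])
    df["month"] = df["call_date"].dt.to_period("M").astype(str)

    # Average score by team and month: a layout we choose, rather than
    # one the software vendor chose for us.
    report = df.pivot_table(index="team", columns="month",
                            values="score", aggfunc="mean")
    report.to_excel("qa_by_team_month.xlsx")  # back into Excel for sharing

Once the data lives in a dataframe or spreadsheet, your measurement and reporting questions drive the tool instead of the other way around.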

Continuous improvement is key to business success. Scrutinizing quality criteria, analyst data, and your software constraints are just three simple ways to take a step forward with your quality program. Here’s to making sure that we’re doing things better at the end of 2017 than we were doing at the start!

 

White Paper: Why Customer Service Training Isn’t Enough

Companies often desire to provide basic customer service training for their team(s). Our group is often asked to provide a “Customer Service 101” training session that teaches employees some basic customer service phone skills, and we do provide that type of training. We have, however, always believed that training alone, without any kind of assessment or accountability, will have limited impact. A recent experience with one client allowed us to quantify this reality with data. Please click on the link below to download our white paper.

White Paper_Customer Service Training is Not Enough

 

Technology & Addressing the Human Side of Customer Service

I read an interesting article in the Wall Street Journal this morning by Susan Credle. The article is about how storytelling is still very much a necessity in marketing. In it she laments the impact technology is having on her industry and the way it hinders human creativity in marketing:

Data and technology dominate the conversations. And conference rooms and conferences are filled with formulaic approaches. “Make a template and put the creative in this box” approaches. Often, we appear to be more concerned with filling up these boxes than with the actual creative.

Her story resonated with me because it parallels the impact technology has had on customer service and QA in contact centers. Technology has allowed many large businesses to “offload” common customer service interactions to IVRs, VRUs, and apps. Actual customer interactions with human agents are diminishing, yet there are two very important distinctions to be made here. First, when customers finally escalate their issue by navigating the labyrinth of self-serve options, the human interaction at the end of the line tends to be even more complex, emotional, and critical to that customer’s satisfaction. Second, not many small to mid-sized businesses have the deep corporate pockets to integrate large technology suites that automate many of their customer interactions. Many businesses are still out there manning the phones and serving customers through good, old-fashioned human interaction.

Like professional athletes who spend hours in the video room breaking down their performance with coaches, Customer Service Representatives (CSRs) still benefit from call analysis, coaching, and accountability of performance. Yet, I find many companies still want to offload this process to formulaic approaches defined by any number of confined boxes created by software developers.

Please don’t hear what I’m not saying. Technology offers wonderful tools to make the Quality Assessment (QA) process more efficient and effective. Nevertheless, I have found that there is no technology that effectively replaces the very human communication that takes place between agent and call coach. Effective QA combines objectivity and motivation. It both encourages and holds accountable. It addresses the often messy reality of human desire, emotions, behaviors, and personalities. Much like Ms. Credle’s observations of marketing, I find that technology often leads more to simply checking boxes and less to actually helping a human CSR improve their communication with human customers.