Category: QA Software

Free Webinar! A Beginner’s Guide to Call Monitoring and Quality Assessment

“Your call may be monitored for quality and training purposes” is a familiar phrase in today’s business world. For growing companies interested in beginning a call recording or quality program, the process can seem both confusing and daunting. This free webinar is intended to help companies that are exploring the development and implementation of a call recording and quality assessment program.

The webinar will be presented by Tom Vander Well, Executive Vice-President of c wenger group. Tom is a pioneer in the call monitoring and Quality Assessment industry and has over 20 years’ experience analyzing moments of truth between businesses and their customers. In this webinar Tom will help participants think through the basic questions they should be asking. He will present various methods for approaching both call recording and Quality Assessment, discuss their strengths and weaknesses, and offer cost-effective, practical solutions.

The FREE webinar will be held July 13, 2017, at 12:00 p.m. CDT. Registration is limited to 25 participants, so register today at:

http://www.videoserverssite.com/register/cwengergroup/registration


Who Knew Siri Could Coach Your Employees, Too?!


We posted last week about the disappointing realities two of our clients experienced compared with the bright promises used to sell them speech analytics technology. In both cases they were sold on the idea of speech analytics replacing their human QA programs by analyzing every call and flagging calls in which there were problems. Our clients found that the technology itself took a much greater investment of time and resources than anticipated just to make it work at a basic level. The results were equally disappointing, requiring even more time and resources just to sort through the many false positives that the software flagged.

It was with great interest, then, that I received an MIT Technology Review article from a former co-worker this week. The article reports on what the writers claim is the latest technology trend, offered by Cogito, to revolutionize contact centers. Apparently speech analytics has been so successful and popular at accurately analyzing customer conversations that the technology experts now want to sell technology to do call coaching, as well. Who knew that Siri could now offer us sage advice on how to communicate more effectively and connect more emotionally with our customers? By the way, according to their marketing, they think their technology might help with your marriage, too.

I have noted over the years just how much big technology drives our industry. Go to any Contact Center Conference and look at who is paying big bucks, commanding the show floor, introducing the latest revolutionary advancement, and driving the conference agenda. C’est la vie. That’s how the market works. I get it.

I have also noted, however, that technology companies have often sold us on the next big thing, even when it wasn’t. Does anyone remember the Apple Newton? Laser Discs? Quadrophonic sound? Have you scanned a QR code lately? Ever heard of Sony Beta?

Technology is an effective tool when utilized for the strengths it delivers. I am more appreciative than most of my colleagues of the advancements we’ve made in technology. I remember days sitting in a small closet jacking cassette tape recorders into an analog phone switch. I also know from a quarter century of coaching Customer Service Representatives (CSRs), Collections agents, and Sales representatives that human communication and interactions are complex on a number of levels. It isn’t just the customer-to-CSR conversation that is complex, but also the Call Coach-to-CSR conversation and relationship. Technology may be able to provide objective advice based on voice data, but I doubt that technology can read the personality type of the CSR. I don’t believe it can read the mood the CSR is in that day or the nonverbal cues they are giving off regarding their openness and receptivity to the information. I doubt it can learn the communication style that works most effectively with each CSR and alter its coaching approach accordingly.

But, I’m sure they’re working on that. Just check it out at your next conference. They’ll have a virtual reality demonstration ready for you, I’m sure.

 

 

 

Three Things They Won’t Tell You About Speech Analytics

Me: “Hey Siri? I’m bleeding badly. Call me an ambulance!”
Siri: “Okay. From now on I’ll call you ‘Anambulance.'”

Almost all of us have humorous, and often aggravating, anecdotes about trying to communicate with Siri, Alexa, or any of the other voice-prompted technology apps available to us. I am quite regularly thankful that no one is around to hear the tirades I scream at the disembodied, robotic female voice of my car’s voice-prompt technology. It amazes me, then, to know that businesses spend huge amounts of money on speech analytics technology as a way to replace their Quality Assessment (QA) programs.

Let me start with full disclosure. Our company, c wenger group, has spent a quarter century monitoring and analyzing our clients’ phone calls as a third-party QA provider. Sometimes our clients hire us to be their QA team; other times they hire us to provide periodic audits and reality checks on their internal efforts. Over the past few years we have learned that speech analytics technology has become a competitor to our service. I can quickly name two clients who have dismissed our services in favor of speech analytics software.

The promise of speech analytics is the ability to monitor vast quantities of phone calls. Most human QA efforts, by comparison, utilize relatively small random statistical samples. Our data over the years reveal that our team can quite capably provide an accurate reflection of service performance with relatively few calls. I remember calling one skeptical client after our initial month of listening to a minimal sample of calls for sales compliance. I gave him the names of three salespeople whom our call analysis identified as problems. He laughed and told me that all three had been fired the previous day, agreeing that our sample and analysis were, indeed, sufficient.
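For readers who want a rough sense of why a modest random sample can be sufficient, standard survey statistics offers a quick estimate. The sketch below is a generic illustration of that math, not our firm’s actual methodology; the `sample_size` helper and its defaults are hypothetical:

```python
import math

def sample_size(p=0.5, margin=0.05, z=1.96):
    """Calls needed to estimate a rate p within +/- margin at ~95% confidence.

    p=0.5 is the most conservative assumption (largest sample); the answer
    barely depends on total call volume, which is why a few hundred monitored
    calls can reflect performance across tens of thousands.
    """
    return math.ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size())             # ~385 calls for a +/-5 point margin
print(sample_size(margin=0.10))  # ~97 calls for a +/-10 point margin
```

The point is not the exact numbers but the shape of the curve: precision costs grow with the square of the margin you demand, while the size of the overall call population hardly matters.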

Nevertheless, the idea of being able to catch the needle in the haystack has a certain appeal. Random samples don’t capture every instance of irate customers, lying representatives, and forbidden behaviors. That’s where tech companies and their (big-ticket) speech analytics software promise nervous executives a peaceful night’s sleep, knowing that computers can monitor every phone call and flag problem calls when they occur.

Just like Siri flawlessly hears my every spoken request and never fails to provide me with exactly the answer I was looking for.

I have followed up with both clients who dismissed our company’s services in favor of speech analytics. In one case, my contact admitted that they abandoned the technology after a year of unsuccessfully investing significant resources (money and man-hours) in trying to get it to provide meaningful results or value. In the other case, my client contact admitted that the technology never worked, but that his company continued to play the political game of pretending it did because they didn’t want to admit that they’d wasted so much money on something that wasn’t working. I have also spoken to representatives of other companies with similar words of warning. As with most technologies, it’s important to know what you are, and aren’t, getting before you sign on the dotted line.

My conversations with those who’ve employed speech analytics reveal three key things to weigh before making it a technology investment.

It’s going to require a lot more work to set up, monitor, tweak, and successfully utilize than you think. At one industry conference I attended a panel of companies that were using speech analytics. I found it fascinating that all of the panelists admitted the technology required far more time and energy than they anticipated when they purchased it. One company on the panel admitted that they hired five full-time employees just to make the technology work and to keep it working. Many people don’t realize that you have to teach the speech analytics software what to listen for, what to flag, and what to report. Then you have to continually refine it so that it’s catching the things you want it to catch and ignoring the things you don’t.

In many cases, this process is not intuitive. It’s more akin to computer programming. Operations associates who thought they were going to save themselves time having to personally analyze phone calls find themselves spending even more time mired in IT issues related to the technology.

The technology is going to give you a lot of false positives. I love that I can say “Hey, Siri” and my iPhone will come to life and ask what I need. I have also been annoyed and embarrassed at the number of times in normal conversation or business meetings that I say something my iPhone mistakenly hears as “Hey, Siri,” only to wake up, interrupt my conversation, and ask what I want. In similar fashion, you can expect that for every instance of speech analytics software catching the right thing, it is going to make at least as many, if not more, mistakes.

One of my former clients told me that the speech analytic software they employed never worked as well as advertised. “Every time it flagged a call for us to listen to there was nothing wrong with the call,” he admitted. They quickly stopped listening to any of the calls flagged by speech analytics because they soon saw it as the proverbial child crying “Wolf!”

Speech analytics can monitor volume, pitch, and words that are said, but cannot intelligently analyze content across calls. Our team recently monitored a randomly sampled set of phone calls for a customer service team. The CSRs were articulate and professional in the words they used and the tone with which they communicated with callers. Across the calls, however, we quickly noted a pattern:

  • “Let me get you to the person who handles your account.”
  • “I don’t handle your area.”
  • “You’ll need to speak with….”

In various ways, using different words, many of the CSRs were refusing to help callers. They would immediately “pawn them off” (one customer’s words) on other CSRs or dump callers into voice mail. In some cases we heard veteran employees claim that they didn’t know how to do the most basic customer service functions in an effort to avoid helping callers.

Our team quickly recognized that our client was struggling with a culture on their call floor in which CSRs tried to avoid helping callers (in the most professional sounding way). Customers were being dumped into voice-mail and transferred unnecessarily as CSRs played an internal game of “that’s not my customer, that’s your customer.” We addressed it with our client, citing examples. They quickly moved to address the issue and are already making significant progress toward changing behavior on the call floor.

I tried to imagine how I would tell a speech analytics program to catch such an occurrence. The ways the CSRs communicated that they couldn’t help were as varied as the CSRs themselves and their communication styles. Customers’ frustration never escalated to shouting or profanity. It was all very subtle, and it required experienced analysts making connections across multiple calls to recognize the pattern of behavior. Speech analytics could never do that.

Like most technologies, speech analytics has its place and its purpose. For companies with the resources to successfully employ it, speech analytics can analyze vast quantities of interactions and flag, with relative degrees of accuracy, when certain words are spoken or certain levels of emotion are expressed. Those considering this technology as a replacement for a thorough and well-structured QA program should understand, however, that it has requirements and drawbacks that the technology salesperson will be quick to ignore or minimize.

Three Ways to Improve Your Quality Program in 2017

It’s still January and everyone is busy implementing goals for 2017. It’s not too late to take a good, long look at your contact center’s quality program with an eye to improving things this year. Here are three thoughts for taking your quality assessment (QA) to a new level.

Reevaluate the Scorecard

Most quality programs hinge on the quality of the criteria by which they measure performance. A few years ago there was a backlash against behavioral measurements (e.g., “Did the agent address the caller by name?”) as companies sought to avoid the calibration headaches and wrangling over definitions. In true human fashion, the pendulum swung to the opposite end of the continuum and measurement became completely subjective. Multiple behaviors gave way to two or three esoteric questions such as, “Did the agent reflect the brand?”

This shift to the subjective is, of course, fraught with its own problems. You can forget about having any objective data with which to measure agent performance. If your analyst is Moonbeam Nirvana, then you’ll get consistently positive evaluations complete with praise for what Moonbeam believes were your good intentions (and lots of smiley emoticons). If, on the other hand, your analyst is Gerhardt Gestapo, then your performance will always fall short of the ideal and leave you feeling at risk of being written up.

Measuring performance does not have to be that difficult. First, consider what it is that you really desire to accomplish. Do you want to measure compliance or adherence to corporate or regulatory requirements? Do you want to drive customer satisfaction? Do you want to make agents feel better about themselves? Any of these can be an arguable position from which to develop criteria, but you should start by being honest about the goal. Most scorecards suffer from misunderstood and/or miscommunicated intentions.

Next, be clear about what you want to hear from your agent in the conversation. Define it so that it can be easily understood, taught, and demonstrated.

Prioritizing is also important. While exhaustive measurement of the interaction can be beneficial, it is also time-consuming and may not give you the best return on your investment of time and energy. If your priority is add-on sales, then be honest about your intention of measuring it, define what you want to hear from your agents, and then focus your analysts on listening for those priority items.

Look at Data for Both Agents and Analysts

One of the more frequently missed opportunities to keep your QA process on task is that of looking at the data of how your analysts actually measured the calls.

Years ago our team was the third-party QA provider for several teams inside a global corporation while internal teams managed the job for other locations. There was an initiative to create a hybrid approach that put the internal and external analysts together in sampling and measuring agents across all offices. When we ran the numbers to see how analysts were scoring, however, we found that the internal analysts’ average results were consistently higher than the external analysts’. Our analysis of analyst data provided the opportunity for some good conversations about the differences in how we were hearing and analyzing the same conversations.

Especially in larger quality operations in which many analysts measure a host of different agents and/or teams, tracking analyst data can provide you with critical insight. When performing audits of different QA programs, our team quite commonly finds that analysts who also happen to be the team’s supervisor are easily tempted to sacrifice objectivity in an effort to be “kind” to their agents (and make their team’s scores look a little better to the management team). Likewise, we have seen instances where the data reveal that one analyst is unusually harsh in their analysis of one particular agent (as evidenced by the deviation in scores compared to the mean). Upon digging into the reasons for the discrepancy, it is discovered that there is some personality conflict or bad blood between the two. The analyst, perhaps unwittingly, is using their QA analysis to passive-aggressively attack the agent.

If you’ve never done so, it might be an eye-opener simply to run a report of last year’s QA data and sort it by analyst. Look for disparities and deviations. The results could give you the blueprint you need to tighten up the objectivity of your entire program.
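As a rough sketch of what that report can look like, here is one way to surface analysts whose averages drift from the group mean. The analyst names and scores below are entirely made up for illustration, and the one-standard-deviation flag is an arbitrary starting threshold, not a rule:

```python
from statistics import mean, pstdev

# Hypothetical QA records: (analyst, agent, score) -- in practice you would
# export these from your QA software or spreadsheet.
records = [
    ("Alice", "Agent01", 92), ("Alice", "Agent02", 95), ("Alice", "Agent03", 94),
    ("Bob",   "Agent01", 78), ("Bob",   "Agent02", 81), ("Bob",   "Agent03", 74),
    ("Cara",  "Agent01", 88), ("Cara",  "Agent02", 85), ("Cara",  "Agent03", 90),
]

all_scores = [s for _, _, s in records]
overall_mean = mean(all_scores)
overall_sd = pstdev(all_scores)

# Group scores by analyst.
by_analyst = {}
for analyst, _, score in records:
    by_analyst.setdefault(analyst, []).append(score)

# Report each analyst's average and its distance (in standard deviations)
# from the overall mean; with this sample data, Alice's generous averages
# and Bob's harsh ones both get flagged for a closer look.
for analyst, scores in by_analyst.items():
    avg = mean(scores)
    z = (avg - overall_mean) / overall_sd
    flag = "  <-- investigate" if abs(z) > 1.0 else ""
    print(f"{analyst}: avg={avg:.1f}, z={z:+.2f}{flag}")
```

A flagged analyst isn’t proof of bias; it’s simply the starting point for the kind of calibration conversation described above.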

Free Yourself from Software Slavery

As a third-party QA provider, our team is by necessity platform-agnostic when it comes to recording, playing, and analyzing phone calls. We have used a veritable plethora of software solutions, from the telephony “suites” of tech giants who run the industry like the Great and Powerful Oz to small programs coded for a client by an independent tech geek. They all have their positives and negatives.

Many call recording and QA software “suites” come with built-in scoring and analysis tools. The programmers, however, had to create the framework by which you will analyze the calls and report the data. While some solutions are more flexible than others, I have yet to see one that gives you the flexibility you truly desire. Most companies end up sacrificing their desire to measure, analyze, and/or report things a certain way because of the constraints inherent in the software. The amazing software that the salesperson said was going to make things so easy now becomes an obstacle and a headache. Of course, the software provider will be happy to take more of your money to program a solution for you. I know of one company that, this past year, paid a big telephony vendor six figures to “program a solution” within their own software, only to watch the vendor raise their hands in defeat and walk away (with the client’s money, of course).

Tech companies have, for years, sold companies on expensive promises that their software will do everything they want or need it to do. My experience is that very few, if any, of the companies who lay out the money for these solutions feel that the expensive promises are ever fully realized.

If your call data, analysis, and reporting are not what you want them to be, and if you feel like you’re sacrificing data and reporting quality because the software “doesn’t do that,” then I suggest you consider liberating yourself. If the tool isn’t working, then find a way to utilize a different tool. What is it we want to know? How can we get to that information? What will allow us to crunch the numbers and create the reports we really want? Look into options for exporting all of the data out of your software suite and into a database or an Excel-type program that will allow you to sort and analyze the data to get the information you want and need. Our company has always used Excel (sometimes in conjunction with other statistical software) because it’s faster, easier, more powerful, and infinitely more flexible than any packaged QA software we’ve ever tested.
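As a minimal sketch of that export step, here is the idea in Python: pull your records out of the suite (the field names and rows below are invented for illustration; a real export would come from your QA software’s reporting feature or API) and write them to a plain CSV that Excel, a database, or any statistics package can sort, pivot, and chart:

```python
import csv

# Hypothetical QA records as they might come out of a software suite's export.
rows = [
    {"date": "2017-01-05", "analyst": "Alice", "agent": "Agent01", "score": 92},
    {"date": "2017-01-06", "analyst": "Bob",   "agent": "Agent02", "score": 81},
    {"date": "2017-01-09", "analyst": "Cara",  "agent": "Agent03", "score": 88},
]

# Write a CSV with an explicit header row so Excel's sort, filter, and
# pivot-table features can work with the data immediately.
with open("qa_export.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["date", "analyst", "agent", "score"])
    writer.writeheader()
    writer.writerows(rows)
```

Once the data is in a neutral format like this, you are no longer limited to the reports your vendor anticipated; you can slice it any way the business question demands.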

Continuous improvement is key to business success. Scrutinizing quality criteria, analyst data, and your software constraints are just three simple ways to take a step forward with your quality program. Here’s to making sure that we’re doing things better at the end of 2017 than we were doing at the start!

 

Technology & Addressing the Human Side of Customer Service

I read an interesting article this morning in the Wall Street Journal by Susan Credle. The article was about how storytelling is still very much a necessity in marketing. In the article she laments the impact technology is having on her industry, and the way it hinders the human creativity in marketing:

Data and technology dominate the conversations. And conference rooms and conferences are filled with formulaic approaches. “Make a template and put the creative in this box” approaches. Often, we appear to be more concerned with filling up these boxes than with the actual creative.

Her story resonated with me because it parallels the impact technology has had on customer service and QA in contact centers. Technology has allowed many large businesses to “offload” common customer service interactions to IVRs, VRUs, and apps. Actual customer interactions with human agents are diminishing, yet there are two very important distinctions to be made here. First, when customers finally escalate their issue by navigating the labyrinth of self-serve options, the human interaction at the end of the line tends to be even more complex, emotional, and critical to that customer’s satisfaction. Second, not many small to mid-sized businesses have the deep corporate pockets to integrate large technology suites that automate many of their customer interactions. Many businesses are still out there manning the phones and serving customers through good, old-fashioned human interaction.

Like professional athletes who spend hours in the video room breaking down their performance with coaches, Customer Service Representatives (CSRs) still benefit from call analysis, coaching, and accountability of performance. Yet, I find many companies still want to offload this process to formulaic approaches defined by any number of confined boxes created by software developers.

Please don’t hear what I’m not saying. Technology offers wonderful tools to make the Quality Assessment (QA) process more efficient and effective. Nevertheless, I have found that there is no technology that effectively replaces the very human communication that takes place between agent and call coach. Effective QA combines objectivity and motivation. It both encourages and holds accountable. It addresses the often messy reality of human desire, emotions, behaviors, and personalities. Much like Ms. Credle’s observations of marketing, I find that technology often leads more to simply checking boxes and less to actually helping a human CSR improve their communication with human customers.

 

Five Reasons to Consider a Third Party QA Provider

c wenger group is a full service Quality Assessment provider, assisting clients set up their QA programs and providing QA as a 3rd party complete with call analysis, reporting of team and individual agent data, and even data led coaching and training.


If your team or company is thinking about getting into call monitoring and Quality Assessment (QA), or for those who are seeking a solution to their internal QA headaches, we would encourage you to at least give consideration to a third party QA solution. Many companies dismiss the idea of a third party provider without really weighing the option. With nearly a quarter century of experience and multiple client relationships of twenty years or more, the team here at c wenger group believes we’ve proven that it can be a sensible alternative.

Here are five reasons to consider a third party QA provider:

  1. Expertise. I’m sure your company is good at what it does. You have expertise in your field and would like to focus your resources and energies on doing what you do well. We feel the same way. It may seem that analyzing a phone call, e-mail, or chat should not be that difficult. The technology company who sold you your suite of software probably made it sound like it would practically run itself and give you all sorts of powerful information with a few clicks of the mouse. The truth is that a successful quality program is more complex than it seems. Many companies go down the road to setting up their own quality program only to find themselves bogged down in a quagmire of questions about methodology, sample sizes, criteria, and calibration. Don’t try to re-invent the wheel building expertise in a business discipline that distracts you from doing what you do well (and what makes you money). Let us do what we do well, and help you with that.
  2. Expediency. We’ve had many companies tell us that they purchased or installed a call recording and QA solution that they thought would deliver an easy, “out of the box” program. Instead, they find themselves feeling like they purchased an expensive plane that sits on the tarmac because no one knows how to fly it. Don’t spend months wrangling and struggling just to figure out how you want your QA program to look and work. How much time will you and your valuable, talented team members waste in meetings and strategy sessions just trying to figure out how you’re going to analyze calls? We’ve been doing QA for companies of all shapes, sizes, and types for many years, and in a short period of time we can have a working, effective, successful QA program set up and delivering useful data and information right to your desktop.
  3. Objectivity. One of the most common pitfalls of internal quality programs is analyst bias. Supervisors are tasked with monitoring their own teams’ calls, but they don’t want the team (or themselves) to look bad, so when they hear something go wrong in a call they give the agent credit on the QA form and (wink, wink) “coach them on it.” A quality team member has personality issues with an agent, so he scores that agent more stringently than the rest of the team. A team leader has an agent who is disruptive to the team, so she starts looking for “bad calls” to help make a case to fire the problem team member. These are scenarios we’ve seen and documented in our QA audits. They happen. What’s the cost of an internal QA program that doesn’t deliver reliable data or results? A third-party QA provider is not worried about making people look good or grinding axes. We are concerned with delivering objective data that accurately reflects the customer’s experience.
  4. Results delivered regularly, and on time. One of the biggest problems with internal QA programs is that they chronically bow to the tyranny of the urgent (which reigns all of the time). When things get busy or stressful, the task of analyzing calls is the first thing pushed to the back burner. Internal analysts procrastinate on their call analysis until the deadline looms. Then they rifle through calls just to get them done, and the results are not thoughtful, accurate, or objective. Our clients tell us they appreciate knowing that when we’re on the job the QA process will get done, and it will be done well. Calls will be analyzed and reports will be delivered regularly and on time. Better yet, the results will be effective at helping you set tactical goals for improvement, focus your training, manage agent performance, and successfully move the needle on customer satisfaction, retention, and loyalty.
  5. You can always fire us. A client once told us that he kept us around because he slept better at night knowing that he could always fire us. His comment was, admittedly, a little unnerving but his logic made a lot of sense. “If I do this QA thing myself,” he explained, “I have to hire and pay people to do it. In today’s business environment it’s impossible for me to fire someone without a lot of HR headaches. So, if the people I pay to do it internally don’t do it well then I’m stuck with both them and the poor QA program. I like having you do QA for us. Not only do you do it well, but I know that if anything goes wrong I can just pick up the phone and say, ‘we’re done.'” The good news is that he never made that call before he retired!

If you’re looking at getting started in call monitoring and assessment, or if you have a program that isn’t working, we would welcome you to consider how one of our custom designed solutions could deliver reliable, actionable, and profitable results.

 


c wenger group designs and provides fully integrated Customer Experience solutions including Customer Satisfaction research, call/e-mail/chat Quality Assessment, and coaching/training solutions for teams and individual agents. Our clients include companies of all sizes in diverse market sectors.

Please feel free to contact us for a no obligation conversation!

Note: c wenger group will maintain your privacy

An Airplane on the Tarmac Profits You Little

Plane on tarmac, Sydney NS (photo credit: mattjiggins)

I had an interesting conversation with a call center manager the other day over breakfast. I asked him how things were going at work. After a pause and a long sigh, I wondered if our breakfast was going to become an informal counseling session. He launched into his story. His company recently made a huge capital investment in the latest technology for call monitoring and evaluation. This is good news, right?! He’s got the latest programs that allow him to do all sorts of things in capturing, analyzing, and reporting on service quality. So, why was he looking so glum?

With all the investment in technology, there was no money in the budget to hire anyone to actually use the shiny new QA program. The marching orders from the executive suite were to use the new whiz-bang technology to work more efficiently and productively. “We bought you technology so we don’t have to hire more people,” was the mantra. He went on to make an interesting statement:

“It makes about as much sense as me going out and buying a new airplane. What can I do with it sitting there on the ground? I can stare at it. I can keep it clean. I can sit on the ground, stare at the dials, and play with the controls. But, I certainly can’t fly the thing.”

My colleague went on to explain how the corporate decision not to backfill positions while increasing responsibilities for his call center staff meant that everyone had far more on their plate than could reasonably be accomplished. He knew his skeletal QA efforts were not coming close to utilizing the new, expensive technology, but the IT department that chose the system did not have the human resources to help him get it optimized or to train the call center staff on how best to utilize it. Without human resources and human expertise, the investment in technology seemed a total waste. The company can certainly brag and feel good about having the latest technology that will allow them to fly with the best in the business world. However, without the necessary expertise and investment in human capital to actually make it fly, their team will sit on the tarmac admiring the dials on their very expensive placebo.


The Truth of the Tape

A typical home reel-to-reel tape recorder (image via Wikipedia)

Since Prohibition, when recorded phone conversations with a bootlegger were first used in a criminal prosecution, the taped phone call has had a colorful history. Movies and television have made familiar the image of FBI agents hunkered over spinning reels of tape in a van or an empty warehouse loft as they listen in on the calls of shady mobsters. Go to the new Mob Museum in Las Vegas and you’ll get to hear some of the actual calls for yourself.

The recorded conversation is a powerful tool. In our training with clients, our team will often go into a studio and recreate a phone call using voice actors to protect the identities of the caller and the CSR while accurately recreating the customer service conversation between the two. These calls are always a fun and effective training tool because they are based on an actual interaction with which CSRs identify. “I took a call just like that,” we hear all the time. “I think that mighta been me!” Because the pertinent identifying information is hidden, the focus can be on what we can learn from the call and how the interaction might have been improved.

Another important way to utilize recordings is as evidence of a particular procedural or systems-related issue. Call recording software often includes a video capture of what is happening on the agent’s desktop during the phone call. When trying to make a point about how opaque or cumbersome a particular system is for agents while they are on a call, a recorded example complete with visuals can be a powerful piece of evidence for upper management and decision makers. As they sit and uncomfortably witness firsthand a CSR struggling through a jungle of screens while trying to maintain conversation and call flow with the customer, it makes a much more persuasive argument than a mere description of the issue.

Of course, the recordings can also be very effective tools to highlight both positive and negative performance. It’s hard for CSRs to defend their poor service behaviors when there is a plethora of recorded evidence with which to coach them. People often think of call recording as merely a tool to catch people doing things wrong, but our team regularly reminds CSRs that the truth of the tape can also catch people doing things right and become hard evidence of an agent’s exemplary service skills. Many years ago a frustrated manager asked our team to do a special assessment of an agent’s calls. The manager wanted to fire the agent and was looking for evidence to do so. In this case, the tape revealed that the agent performed well when serving customers on the phone. The truth of the tape helped protect the CSR from being unfairly terminated.

Call recordings are tools. As with all tools, the results lie in the wisdom and abilities of the person or persons wielding them. When misused, call recording can do damage to people and businesses. When used with discernment and expertise, those same recordings can effectively help build a successful business.

The Social Media Buzz; Time for Decaf?

I was part of a great ACCP event last week sponsored by Avtex and hosted by Pella Corporation at their headquarters. There was a wonderful presentation made on the subject of monitoring and responding to customers through social media by Spindustry and their clients from Omaha Steaks. Then, this morning, the Wall Street Journal dedicated an entire section to the subject of Social Media and IT.

In case you’ve had your head buried in the sand for the past year or two, the buzz in the call center world is currently “social media.” The very mention of the term seems to get call center personnel wound up like they’ve just swigged a triple-shot-chocolate-sugar-bomb-espressiato with a Red Bull chaser. Everyone wants to talk about it. The big call center conferences have been scrambling for the past two years to fill their keynotes and workshops full of social media gurus, how-tos, and software vendors. All the buzz has prompted great conversation with clients and colleagues.

For years, I’ve been advocating that every client listen to what customers are saying on the internet and through social media outlets. There is a huge leap, however, between keeping your ear open and launching a full-scale social media task force within your customer service team, complete with the latest, greatest social media monitoring software. One of the questions that came up in the ACCP meeting last week was whether our group was doing customer satisfaction research for customers who use social media to contact a client company. The reality is that, for most of our clients, the number of customers using social media as a means of communication is still very small. So small, in fact, that they must be regarded as outliers and not representative of the general population of customers.

That does not mean that social media will not grow in importance and influence. It definitely is growing (but how far will it grow? How influential will it become?). Nor does it mean that social media is not a critical piece of the marketing and customer service picture for some companies. I simply want to make the point that the time, energy, and resources an individual company invests in social media efforts should be proportionate to how many customers it actually has engaged in the medium. Our group is helping some clients determine that very fact. Investing a little money in a survey to find out how important social media is to their customer population as a whole will help them wisely steward their resources when making an investment in their overall social media strategy. I have begun to fear that clients will throw a lot of money and resources at engaging a small number of customers in the social media arena while a much larger segment of customers is still encountering significant service issues through traditional channels (as boring and old school as those traditional channels may be).

In the meantime, I’m sure the social media buzz will continue unabated. In the call center industry there always seems to be a buzz wherever there is software, hardware, and/or workshops to sell. Please do not misunderstand me. I’m not against social media in any way. I’m a blogger, tweeter, texter, and Facebook junkie. I think social media is great and have led the charge in getting clients to “listen” to what customers are saying via social media. Social media is here to stay and will continue to evolve. I am, however, dedicated to helping my clients make wise, measured decisions when it comes to their customers and their resources. So, when it comes to the social media buzz, make mine decaf, please. Remember, there was a lot of buzz about Betamax, too.


Things You Learn Capturing Calls

As the QA provider for some of our clients, c wenger group employs a small group of dedicated specialists whose job it is to weed through all of the phone calls recorded by the client’s recording software, determine which calls are usable for analysis, and assign them to the appropriate call analysts. Using different people for capturing and assigning ensures that those tasked with analyzing the calls don’t give in to the temptation of selecting only shorter calls, good calls, or easy calls for analysis (and thus bias the sample). Most companies confine the QA program to what happened within a phone call, but a quick analysis of the call sample for a given agent, or group of agents, can be very revealing. Here are a few examples of issues our call capturers have brought to our attention:

  • One CSR had an average number of calls for their position, but 95 percent of the calls were from family and friends.
  • Another agent who worked a territory of regular customers checked his voice mail several times an hour but rarely took or made a call. We suspected he was choosing not to answer the phone, checking the voice mail, then responding to the customer via e-mail as a way of avoiding actually talking to customers.
  • One group of sales agents simply weren’t making any of the sales calls with which they were tasked. Either the recording software wasn’t working or all of the cold calls they manually recorded on their daily sales call log were…well, you get the idea.

Sometimes the built-in accountability that comes from a QA team simply trying to identify calls for analysis can provide R.O.I. on its own, surfacing opportunities to increase productivity, or at least identifying a lack of productivity so it can be addressed.
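For teams building a similar capture-and-assign workflow in-house, the bias-avoidance idea above can be sketched in a few lines: draw the sample at random before anyone has listened to the calls, then distribute it round-robin so no individual analyst hand-picks the short or easy ones. The function and names below are purely illustrative, not part of any vendor’s recording software.

```python
import random

def assign_calls(call_ids, analysts, sample_size, seed=None):
    """Randomly sample usable calls and assign them to analysts
    round-robin, so no analyst can cherry-pick easy calls.
    Illustrative sketch only; names are hypothetical."""
    rng = random.Random(seed)
    sample = rng.sample(call_ids, min(sample_size, len(call_ids)))
    assignments = {analyst: [] for analyst in analysts}
    for i, call in enumerate(sample):
        # Round-robin distribution keeps workloads even
        assignments[analysts[i % len(analysts)]].append(call)
    return assignments

# Example: 100 recorded calls, 3 analysts, a sample of 12 calls
calls = [f"call-{n:03d}" for n in range(100)]
result = assign_calls(calls, ["ana", "ben", "cara"], 12, seed=42)
```

The key design choice is that the sample is fixed by the random draw, not by anyone’s judgment about individual calls, which is exactly the separation of duties the capture team provides.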