Definition Documents Aren’t Always the Answer

Many QA teams go through the process of "simplifying" their QA form by reducing the number of items on the form, generalizing the items, then creating a "definition document" that expands and defines what is meant by each item. I have seen definition documents that have become expansive tomes reminiscent of the U.S. tax code.

Much like the tax code, the simplification process and resulting encyclopedia of definitions started out with the best of intentions. Nevertheless, the following problems often result:

  • Using the definition document becomes cumbersome, so analysts or supervisors score calls using their own gut understanding of what each item means rather than consulting the definition document. Elements are thus missed or each scorer analyzes calls a bit differently based on their recollection of the criteria outlined in the definition document. Objectivity is lost.
  • The QA team often spends long periods of time in various debates arguing each "jot and tittle" of the definition document. Because the creation of a QA court system to "interpret the law" is generally frowned upon by management, the QA team continues to mire itself in calibration. Productivity and efficiency are lost.
  • Scorers who actually use the definition document spend significant amounts of time poring over the document, checking, double-checking, cross-referencing, and mentally debating their analysis. Productivity and efficiency are lost.

While replacing the U.S. tax code with a simple flat tax is an economic pipe dream, it’s not too late to reconsider your QA options. We’ve witnessed QA teams that have trashed the confusing definition document and turned to well-defined behavioral elements scored with a binary methodology, which can make scoring simpler, more efficient, and more productive for your entire team.
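To make the contrast concrete, here is a minimal sketch of what binary scoring looks like: each behavioral element is simply observed or not observed on a call, with no partial credit and no definition-document lookups. The element names and the form itself are hypothetical, not taken from any particular QA program.

```python
def score_call(observed: dict[str, bool]) -> float:
    """Return the percentage of behavioral elements observed on a call.

    Each element is binary: the CSR either did or did not do 'x'.
    """
    if not observed:
        return 0.0
    return 100.0 * sum(observed.values()) / len(observed)


# Hypothetical call evaluation: 3 of 4 elements observed.
call = {
    "used_customer_name": True,
    "verified_account": True,
    "offered_further_help": False,
    "confirmed_resolution": True,
}
print(score_call(call))  # 75.0
```

Because each item is a yes/no observation rather than a judgment call against pages of definitions, two scorers watching the same call should land on the same score.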

Creative Commons photo courtesy of Flickr and Hibri

2 thoughts on “Definition Documents Aren’t Always the Answer”

  1. So, can you tie those simple binary behavioural elements (The CSR did or did not do ‘x’) to key business drivers? I fully agree that productivity and efficiency in call evaluation is critical, but they are irrelevant if they don’t produce a positive shift in elements like CSAT or cost per call.
    I really wonder if any binary behaviour system can usefully address customer satisfaction.

  2. Thanks for the comment and question, David. The answer is, YES! We have seen time and time again that a binary methodology can help modify behavior and drive CSAT when the proper conditions are met. You have to know what drives your customers’ satisfaction when they call (which can be determined with valid CSAT research), and you have to be able to link and weight the binary elements to key drivers of CSAT.
    It’s actually very logical. If you know what your customers expect, and you define what behaviors in a phone call link to those drivers, then measuring and consistently applying those behaviors will positively affect CSAT.
    The caveat is that CSR performance is only part of the picture. If the CSRs are doing a great job but the service delivery system is not supporting their efforts, then CSAT is still going to be diminished. It requires a collaborative effort between the call center and the management team.
    Nevertheless, a well-structured QA methodology can and will have a positive impact on CSAT. We see it work all the time.
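The linking-and-weighting idea in the reply above can be sketched as follows: each binary element is assigned a weight reflecting how strongly its CSAT driver matters, so behaviors tied to stronger drivers count for more of the score. The weights and element names here are purely illustrative assumptions; in practice they would come from your own CSAT research.

```python
# Hypothetical weights derived from CSAT driver analysis (illustrative only).
ELEMENT_WEIGHTS = {
    "first_contact_resolution": 3.0,  # assumed strong CSAT driver
    "explained_next_steps": 2.0,
    "used_customer_name": 1.0,
}


def weighted_score(observed: dict[str, bool]) -> float:
    """Score a call as the percentage of CSAT-weighted elements observed."""
    total = sum(ELEMENT_WEIGHTS.values())
    earned = sum(
        weight
        for element, weight in ELEMENT_WEIGHTS.items()
        if observed.get(element, False)
    )
    return 100.0 * earned / total


# Hypothetical call: the two heaviest elements observed, one missed.
call = {
    "first_contact_resolution": True,
    "explained_next_steps": True,
    "used_customer_name": False,
}
print(round(weighted_score(call), 1))  # 83.3
```

Each element stays binary for the scorer; only the aggregation applies the weights, which keeps evaluation objective while still tilting the score toward the behaviors that research says move CSAT.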
