Translate Research to Action with a VOC Table
Ask any group of satisfaction researchers and consumers of satisfaction research about the largest problem facing the research industry, and the most common concern raised will most likely be the lack of actionability (or usefulness) of the research. All too often, research is conducted and reports are produced and bound into professional-looking binders, which end up gathering dust on a shelf somewhere or, if you’re like me, serving as an excellent door stop.
What is missing is a strategy to transition research into action, and bring the various stakeholders into the research process.
Managers and researchers alike are faced with the difficult task of determining where to make investments, and predicting the relative return on such investments. One such tool for transforming research into action is the Voice of the Customer (VOC) table.
A VOC Table is an excellent tool to match key satisfaction dimensions and attributes with business processes, allowing managers to make informed judgments about which business processes will deliver the greatest return in terms of satisfaction improvement.
A VOC Table supports this transition by listing the key survey elements on the vertical axis, sorted by importance rating. On the horizontal axis, a complete list of business functions is listed. The researcher and manager then match business processes/functions with key survey elements and judge the extent to which each business function influences each key survey element (in the enclosed example, a dark filled-in square represents a strong influence, an unfilled square a moderate influence, and a triangle a slight influence). A numeric value is assigned to each influence (typically ‘four’ for a strong influence, ‘two’ for a moderate influence, and ‘one’ for a slight influence). For each cell in the table, a value is calculated by multiplying the strength of the influence by the importance rating of the survey element. Finally, the cell values are summed for each column (business function) to determine which business functions have the most influence on customer satisfaction.
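The arithmetic is simple enough to sketch in a few lines of code. The following is a minimal illustration of the cell and column calculations described above; the attribute names, importance ratings, and influence judgments are hypothetical.

```python
# Minimal sketch of the VOC table arithmetic described above.
# Attribute names, importance ratings, and influence judgments are hypothetical.

# Influence strength: 4 = strong, 2 = moderate, 1 = slight (absent = none)
influence = {
    "Perform services right the first time": {"Document collection": 4, "Loan quote": 2},
    "Willingness to provide service":        {"Document collection": 1, "Loan quote": 4},
}

# Importance rating of each survey element, as determined in the survey
importance = {
    "Perform services right the first time": 4.5,
    "Willingness to provide service":        3.8,
}

# Cell value = influence strength x importance; column total = sum of cell values
column_totals = {}
for attribute, processes in influence.items():
    for process, strength in processes.items():
        column_totals[process] = column_totals.get(process, 0) + strength * importance[attribute]

# Business functions ranked by their cumulative influence on satisfaction
for process, total in sorted(column_totals.items(), key=lambda kv: -kv[1]):
    print(f"{process}: {total:.1f}")
```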
Consider the enclosed example of a VOC table. In this example, a retail mortgage-lending firm has conducted a wave of customer satisfaction research and intends to link this research to process improvement initiatives using the attached VOC Table. The satisfaction attributes and their relative importance, as determined in the survey, are listed in the far left column. Specific business processes, from loan origination to closing, are listed across the top of the table. For each cell, where a satisfaction attribute and a business process intersect, the researchers have made a judgment of the strength of the business process’s influence on the satisfaction attribute. For example, the researchers have determined that proper document collection has a strong influence on the firm’s ability to perform services right the first time, and a weak influence on willingness to provide service. For each cell, the strength of the influence is multiplied by the importance. The sum of the cell values in each column determines the relative importance of each business process in influencing overall customer satisfaction.
In the example, the loan quote process and clearance of underwriting exemptions are the two parts of the lending process with the greatest influence on customer satisfaction, followed closely by explanation of the loan process. The other three significant aspects of the loan process are document collection, application, and preliminary approval. The least important are document recording and credit and title report ordering. The managers of this hypothetical lending institution now know which parts of the lending process to focus on to improve customer satisfaction. Furthermore, in addition to knowing which specific events to focus on, they also know, generally speaking, that improvements in the loan origination process will yield more return in terms of customer satisfaction than improvements in processing, underwriting, and closing, as all the loan origination elements have comparatively strong influence on satisfaction.
Not All Customer Experience Variation is Equal: Common Cause vs. Special Cause Variation
Variability in customer experience scores is common and normal. Be it a survey of customers, mystery shops, social listening or other customer experience measurement, a certain amount of random variation in the data is normal. As a result, managers need a means of interpreting any variation in their customer experience measurement to evaluate if the customer experience is truly changing, or if the variation they are seeing is simply random.
In a previous post, we proposed the use of control charts as a tool to track customer experience measurements within upper and lower quality control limits, giving managers a meaningful way to determine if any variation in their customer experience measurement reflects an actual change in the experience as opposed to random variation or chance.
Now, managers need to understand the causes of variation, specifically common cause and special cause variation. Common and special cause variation are Six Sigma concepts; while most commonly used in industrial production, they can be borrowed and applied to the customer experience.
Common Cause Variation: Much like variation in the roll of dice, common cause variation is natural variation within any system. Common cause variation is any variation constantly active within a system, and represents statistical “noise” within the system.
Examples of common cause variation in the customer experience are:
- Poorly defined, poorly designed, inappropriate policies or procedures
- Poor design or maintenance of computer systems
- Inappropriate hiring practices
- Insufficient training
- Measurement error
Special Cause Variation: Unlike the roll of the dice, special cause variation is not probabilistically predictable within the system. As a result, it does not represent statistical “noise” within the system; it is the signal within the system.
Examples of special cause variation include:
- High demand/ high traffic
- Poor adjustment of equipment
- Just having a bad day
When measuring the customer experience it is helpful to consider everything within the context of the company-customer interface. Every time a sales or service interaction occurs within this interface, the customer learns something from the experience and adjusts their behavior as a result. Managing the customer experience is the practice of managing what customers learn from the experience and thus managing their behavior in profitable ways.
A key to managing customer behaviors is understanding common cause and special cause variation and their implications. Common cause variation is variation built into the system: policies, procedures, equipment, hiring practices, and training. Special cause variation is more or less how the human element and the system interact.
See earlier post:
Implications of CX Consistency for Researchers – Part 3 – Common Cause v Special Cause Variation
Previously, we discussed the implications of intra-channel consistency for researchers.
This post considers two types of variation in the customer experience: common and special cause variation, and their implications for customer researchers.
The concepts of common and special cause variation are derived from the process management discipline Six Sigma.
Common cause variation is normal or random variation within the system. It is statistical noise within the system. Examples of common cause variation in the customer experience are:
- Poorly defined, poorly designed, inappropriate policies or procedures
- Poor design or maintenance of computer systems
- Inappropriate hiring practices
- Insufficient training
- Measurement error
Special cause variation, on the other hand, is not random and does not conform to the laws of probability. It is the signal within the system. Examples of special cause variation include:
- High demand/ high traffic
- Poor adjustment of equipment
- Just having a bad day
What are the implications of common and special cause variation for customer experience researchers?
Given the differences between common cause and special cause variation, researchers need a tool to help them distinguish between the two. Researchers need a means of determining whether any observed variation in the customer experience is statistical noise or a signal within the system. Control charts are a statistical tool for making that determination.
Control charts track measurements within upper and lower quality control limits. These quality control limits define statistically significant variation over time (typically at 95% confidence), which means there is a 95% probability that variation outside the limits is the result of an actual change in the customer experience (special cause variation) rather than normal common cause variation. Observed variation within these quality control limits is common cause variation. Variation which migrates outside these quality control limits is special cause variation.
To illustrate this concept, consider the following example of mystery shop results:
This chart depicts a set of mystery shop scores which both vary from month to month and generally appear to trend upward.
Customer experience researchers need to provide managers a means of determining if the month to month variation is statistical noise or some meaningful signal within the system. Turning this chart into a control chart by adding statistically defined upper and lower quality control limits will determine if the monthly variation is common or special cause.
To define quality control limits, the customer experience researcher needs to determine the count of observations for each month, the monthly standard deviation, and the average count of shops across all months.
The following table adds these three additional pieces of information into our example:
| Month | Count of Mystery Shops | Average Mystery Shop Scores | Standard Deviation of Mystery Shop Scores |
| --- | --- | --- | --- |
| May | 510 | 83% | 18% |
| June | 496 | 84% | 18% |
| July | 495 | 82% | 20% |
| Aug | 513 | 83% | 15% |
| Sept | 504 | 83% | 15% |
| Oct | 489 | 85% | 14% |
| Nov | 494 | 85% | 15% |
| Averages | 500 | 83.6% | 16.4% |
To define the upper and lower quality control limits (UCL and LCL, respectively), apply the following formula:
Where:
x = Grand Mean of the score
n = Mean sample size (number of shops)
SD = Mean standard deviation
These equations yield quality control limits at 95% confidence, which means there is a 95% probability that any variation observed outside these limits is special cause variation rather than normal common cause variation within the system.
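Assuming the standard 95%-confidence limits – grand mean ± 1.96 × SD / √n, using the mean sample size and mean standard deviation defined above – the calculation can be sketched as follows with the averages from the table. The exact formula used in the original analysis is an assumption here.

```python
import math

# Sketch of the control-limit calculation, assuming the standard
# 95%-confidence limits: UCL/LCL = grand mean +/- 1.96 * SD / sqrt(n).
grand_mean = 0.836   # grand mean mystery shop score (83.6%)
mean_sd    = 0.164   # mean standard deviation (16.4%)
mean_n     = 500     # mean number of shops per month

margin = 1.96 * mean_sd / math.sqrt(mean_n)
ucl, lcl = grand_mean + margin, grand_mean - margin
print(f"UCL = {ucl:.1%}, LCL = {lcl:.1%}")   # roughly 85.0% and 82.2%

# Scores outside [LCL, UCL] are treated as special cause; scores inside as
# common cause. With the rounded table values, July (82%) falls below the LCL,
# consistent with the narrative; November (85%) sits essentially on the UCL.
for month, score in {"May": .83, "June": .84, "July": .82, "Aug": .83,
                     "Sept": .83, "Oct": .85, "Nov": .85}.items():
    flag = "special cause" if not (lcl <= score <= ucl) else "common cause"
    print(f"{month}: {score:.0%} -> {flag}")
```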
Calculating these quality control limits and applying them to the above chart produces the following control chart, with upper and lower quality control limits depicted in red:
This control chart now answers the question of which variation is common cause and which is special cause. The general upward trend appears to be statistically significant, with the most recent month above the upper quality control limit. Additionally, this control chart identifies a period of special cause variation in July. With 95% confidence we know some special cause drove the scores below the lower control limit. Perhaps this special cause was employee turnover, perhaps a new system rollout, or perhaps a weather event that impacted the customer experience.
Implications of CX Consistency for Researchers – Part 2 – Intra-Channel Consistency
This post considers the implications of intra-channel consistency for customer experience researchers.
As with cross-channel consistency, intra-channel consistency, or consistency within individual channels, requires the researcher to identify the causes of variation in the customer experience. The causes of intra-channel variation are, more often than not, at the local level – the individual stores, branches, employees, etc. For example, a bank branch with large variation in customer traffic is more likely to experience variation in the customer experience.
Regardless of the source, consistency equals quality.
In our own research, Kinēsis conducted a mystery shop study of six national institutions to evaluate the customer experience at the branch level. In this research, we observed a similar relationship between consistency and quality. The branches in the top quartile in terms of consistency delivered customer satisfaction scores 15% higher than branches in the bottom quartile. But customer satisfaction is a means to an end, not an end goal in and of itself. In terms of an end business objective, such as loyalty or purchase intent, branches in the top quartile of consistency delivered purchase intent ratings 20% higher than branches in the bottom quartile.
Purchase intent and satisfaction with the experience were both measured on a 5-point scale.
Again, it is incumbent on customer experience researchers to identify the causes of inconsistency. A search for the root cause of variation in customer journeys must consider process-caused variation.
One tool to measure process cause variation is a Voice of the Customer (VOC) Table. VOC Tables have a two-fold purpose: First, to identify specific business processes which can cause customer experience variations, and second, to identify which business processes will yield the largest ROI in terms of improving the customer experience.
VOC Tables provide a clear road map to identify action steps using a vertical and horizontal grid. On the vertical axis, each customer experience attribute within a given channel is listed. For each of these attributes, a judgment is made about its relative importance, expressed as a numeric value. On the horizontal axis is an exhaustive list of business processes the customer is likely to encounter, both directly and indirectly, in the customer journey.
This grid design matches each business process on the horizontal axis to each service attribute on the vertical axis. Each cell created in this grid contains a value which represents the strength of the influence of each business process on the horizontal axis on each customer experience attribute.
Finally, a value is calculated at the bottom of each column which sums the values of the strength of influence multiplied by the importance of each customer experience attribute. This yields a value of the cumulative strength of influence of each business process on the customer experience weighted by its relative importance.
Consider the following example in a retail mortgage lending environment.
In this example, the relative importance of each customer experience attribute was determined by correlating these attributes to a “would recommend” question, which served as a loyalty proxy. This yields an estimate of importance based on the attribute’s strength of relationship to customer loyalty, and populates the far left column. Specific business processes for the mortgage process are listed across the top of the table. Within each cell, an informed judgment has been made regarding the relative strength of the business process’s influence on the customer experience attribute. This strength of influence has been assigned a value of 1 – 3. It is multiplied by the importance measure of each customer experience attribute and summed into a strength of influence, weighted by importance, for each business process.
In this example, the business processes which will yield the highest ROI in terms of driving the customer experience are quote of loan terms (weighted strength of influence 23.9), clearance of exemptions (22.0), explanation of loan terms (20.2), loan application (18.9) and document collection (16.3).
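As a rough sketch of how such an example might be assembled, the snippet below derives attribute importance as the correlation of each attribute’s rating with a “would recommend” question and then computes the importance-weighted strength of influence for each business process. The respondent ratings, attribute names, and influence judgments are hypothetical.

```python
# Hypothetical sketch: derive attribute importance as the correlation of each
# attribute's rating with a "would recommend" loyalty proxy, then weight the
# 1-3 strength-of-influence judgments by that importance.
from statistics import correlation  # Python 3.10+

# Simulated respondent-level survey ratings (hypothetical data)
would_recommend = [5, 4, 3, 5, 2, 4, 5, 3]
attribute_ratings = {
    "Perform services right the first time": [5, 4, 2, 5, 2, 4, 5, 3],
    "Willingness to provide service":        [4, 4, 3, 5, 3, 3, 5, 2],
}

# Importance = strength of relationship to the loyalty proxy
importance = {attr: correlation(ratings, would_recommend)
              for attr, ratings in attribute_ratings.items()}

# Strength-of-influence judgments (1 = weak, 2 = moderate, 3 = strong)
influence = {
    "Quote of loan terms": {"Perform services right the first time": 3,
                            "Willingness to provide service": 2},
    "Document collection": {"Perform services right the first time": 3,
                            "Willingness to provide service": 1},
}

# Weighted strength of influence per business process
for process, strengths in influence.items():
    weighted = sum(strength * importance[attr] for attr, strength in strengths.items())
    print(f"{process}: {weighted:.2f}")
```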
Maximizing Response Rates: Get Respondent to Open the Email
In fielding surveys, researchers must be aware of the concepts of error and bias and how they can creep into a survey, potentially making the results unreliable in ways that cannot be predicted. For example, one source of error is statistical error, where not enough respondents are surveyed to make the results statistically reliable. Another source is response bias, caused by not having a random sample of the targeted population.
A key concept of survey research is randomness of sample selection – in essence, giving each member of the targeted survey population an equal chance of being surveyed. Response rates are important in self-administered surveys (such as email surveys), because it is possible that non-responders (people who for some reason choose not to complete the survey) have different opinions than those who choose to participate in the survey. As a result, the survey is not purely random. If non-responders are somehow different from responders, the survey results will reflect that difference – thus biasing the research. It is therefore incumbent on researchers to maximize the survey response rate.
Say for example, a bank wants to survey customers after they have completed an online transaction. If customers who love the bank’s online capabilities are more likely to participate in the survey than those who do not like the bank’s online capabilities, the survey results will be biased in favor of a positive view of the bank’s online offering because it is not a representative sample – it is skewed toward customers with the positive view.
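A small simulation makes the point concrete. The population size and response rates below are hypothetical; the bias arises purely from satisfied customers responding at a higher rate than dissatisfied ones.

```python
import random

# Hypothetical illustration of non-response bias: customers who like the
# online experience respond at a higher rate than those who do not.
random.seed(1)

population = [{"satisfied": True}] * 6000 + [{"satisfied": False}] * 4000
true_share = sum(c["satisfied"] for c in population) / len(population)

# Assumed response rates: 30% among satisfied customers, 10% among dissatisfied.
responders = [c for c in population
              if random.random() < (0.30 if c["satisfied"] else 0.10)]
observed_share = sum(c["satisfied"] for c in responders) / len(responders)

print(f"True % satisfied:     {true_share:.0%}")      # 60%
print(f"Observed % satisfied: {observed_share:.0%}")  # roughly 82%
```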
It is, again, incumbent on researchers to maximize the response rate as much as possible in self-administered email surveys.
Pre-Survey Awareness Campaign
One strategy to maximize response rates (particularly in a customer survey context) is a pre-survey awareness campaign to make customers aware of the coming survey and encourage participation. Such a campaign can take many forms, such as:
- Letter on company letterhead, signed by a high profile senior executive.
- Statement or billing inserts
- Email in advance of the survey
Each of these three is an excellent way to introduce the survey to respondents and maximize response rates.
Email Content
The next steps in maximizing response rates in email surveys are passing SPAM filters and prompting the recipient to open the email. The core concept here is credibility – to make the email appear as credible as possible.
The first step to maintaining credibility is to avoid getting caught in SPAM filters; the email content should avoid the following (a simple pre-send check is sketched after the list):
- Words common in SPAM, like “win” or “free”
- The use of ALL CAPS
- Excessive punctuation
- Special characters
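A simple pre-send check along these lines can be sketched as follows. The trigger-word list and thresholds are illustrative assumptions only, not a specification of any actual SPAM filter.

```python
import re

# Illustrative checker for the guidelines above; word list and thresholds
# are arbitrary assumptions, not real spam-filter rules.
SPAM_WORDS = {"win", "free", "winner", "prize"}

def invitation_warnings(subject: str, body: str) -> list[str]:
    warnings = []
    text = f"{subject} {body}"
    if any(word in text.lower().split() for word in SPAM_WORDS):
        warnings.append("contains common spam trigger words")
    if re.search(r"\b[A-Z]{4,}\b", text):
        warnings.append("contains ALL-CAPS words")
    if re.search(r"[!?]{2,}", text):
        warnings.append("contains excessive punctuation")
    if len(subject) > 50:
        warnings.append("subject line longer than 50 characters")
    return warnings

print(invitation_warnings(
    "Your opinion matters: Acme Bank customer survey",
    "Please take five minutes to tell us about your recent visit."))
```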
Additionally, do not spoof emails. Spoofing is the forgery of an email header to make it appear it originated from a source other than the actual source. Send emails from your server. (Sometimes Kinesis has clients who want the email to appear to originate from their server. In such cases, we receive the sample from the client, append a unique identifier and send it back to the client to actually be mailed from their servers.)
Perhaps the best strategy to maintain the credibility of the email invite is to conform to Marketing Research Association (MRA) guidelines. These guidelines include:
- Clearly identify the researcher, including phone number, mailing address, and email
- Post privacy policies online and include a link to these policies
- Include a link to opt out of future emails
From and Subject Lines
Both the FROM and SUBJECT lines are critical in getting the respondent to open the email.
The FROM line has to be as credible and recognizable as possible, avoiding vague or generic terms like “feedback”. For surveys of customers, the company name or the name of a recognizable representative of the company should be used.
The SUBJECT line must communicate the subject of the email in a credible way that will make the respondent want to open the email. Keep it brief (50 characters or less), clear, concise and credible.
Survey Timing
Not only is the content of the email important, but the timing of delivery plays a role in response rates. In our experience sending the survey invitation in the middle of the week (Tuesday – Thursday) during daytime hours increases the likelihood that the email will be noticed by the respondent.
Reminder Emails
After an appropriate amount of time (typically for our clients 5 days), reminder emails should be sent, politely reminding the respondent of the previous invitation, and highlighting the importance of their opinion. One, perhaps two, reminder emails are appropriate, but do not send more than two.
To maximize the probability that respondents will receive and open the email, focus on sending a credible, mid-week email that will pass SPAM filters and carries accurate, credible and compelling SUBJECT and FROM lines, and send polite reminder emails to non-responders.
But opening the email is just the first step. The actual objective is to get respondents to open and complete the survey. Click here for the next post in this series, on prompting respondents to participate in the survey.
Best Practices in Bank Customer Experience Measurement Design: Customer Surveys
Many banks conduct periodic customer satisfaction research to assess the opinions and experiences of their customer base. While this information can be useful, it tends to be very broad in scope, offering little practical information to the front-line. A best practice is a more targeted, event-driven approach collecting feedback from customers about specific service encounters soon after the interaction occurs.
These surveys can be performed using a variety of data collection methodologies, including e-mail, phone, point-of-sale invite, web intercept, in-person intercept and even US mail. Fielding surveys using e-mail, with its immediacy and relatively low cost, offers the most potential for return on investment. Historically, there have been legitimate concerns about the representativeness of sample selection using email. However, as banks collect email addresses for a growing share of their customers, there is less concern about sample selection bias.
The process for fielding such surveys is fairly simple. On a daily basis, a data file (in research parlance, “sample”) is generated containing the customers who have completed a service interaction in any channel. This data file should be deduped, cleaned against a do-not-contact list, and cleaned against customers who have been surveyed recently (typically within the past three months, depending on the channel). If the bank then sent invitations to every customer in the file, it would quickly exhaust the sample, potentially running out of eligible customers for future surveys. To avoid this, a target number of completed surveys should be set per business unit, and a random selection process employed to select just enough customers to reach this target without surveying every customer. [1]
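A simplified sketch of this daily selection step might look like the following. The field names, the 90-day re-survey suppression, and the quota logic are illustrative assumptions, not the actual algorithm described in the footnote.

```python
import random
from datetime import date, timedelta

# Simplified sketch of the daily sample-selection step described above.
# Field names and the 90-day suppression window are assumptions for illustration.
def select_daily_sample(interactions, do_not_contact, last_surveyed, daily_quota):
    today = date.today()
    eligible, seen = [], set()
    for record in interactions:
        email = record["email"].lower()
        if email in seen:                        # de-dupe within today's file
            continue
        seen.add(email)
        if email in do_not_contact:              # honor do-not-contact list
            continue
        last = last_surveyed.get(email)          # suppress recently surveyed customers
        if last and (today - last) < timedelta(days=90):
            continue
        eligible.append(record)
    # Randomly select just enough customers to hit the daily quota
    # rather than inviting every eligible customer.
    return random.sample(eligible, min(daily_quota, len(eligible)))
```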
So what are some of the purposes banks use these surveys for? Generally, they fall into a number of broad categories:
Post-Transaction: Teller & Contact Center: Post-transaction surveys are event-driven, where a transaction or service interaction determines if the customer is selected for a survey, targeting specific customers shortly after a service interaction. As the name implies, the purpose of this type of survey is to measure satisfaction with a specific transaction.
New Account & On-Boarding: New account surveys measure satisfaction with the account opening process, as well as determine the reasons behind new customers’ selection of the bank for a new deposit account or loan – providing valuable insight into new customer identification and acquisition.
Closed Account Surveys: Closed account surveys identify sources of run-off or churn to provide insight into improving customer retention.
Call to Action
Research without a call to action may be informative, but not very useful. Call-to-action elements should be built into the research design, providing a road map for clients to maximize the ROI on customer experience measurement.
Finally, post-transaction surveys support other behavioral research tools. Properly designed surveys yield insight into customer expectations, creating a learning feedback loop for observational research such as mystery shopping, where customer expectations inform the service standards that the shops in turn measure.
For more posts in this series, click on the following links:
- Introduction: Best Practices in Bank Customer Experience Measurement Design
- Mystery Shopping: Best Practices in Bank Customer Experience Measurement Design
- Leverage Unrecognized Experts in the Customer Experience: Best Practices in Bank Customer Experience Measurement Design – Employee Surveys
- Filling in the White Spaces: Best Practices in Bank Customer Experience Measurement Design – Social Listening
- A New Look at Comment Cards: Best Practices in Bank Customer Experience Measurement Design – Customer Comments & Feedback
- Customer Experience Measurement Implications of Changing Branch Networks
[1] Kinesis uses an algorithm which factors in the targeted quota, response rate, remaining days in the month and number of surveys completed to select just enough customers to reach the quota without exhausting the sample.
Customer Experience Measurement Implications of Changing Branch Networks
The branch network is evolving in response to banking’s changing economic model as well as changing customer expectations and behaviors. As the branch network evolves, measurement of the customer experience within the branch channel will need to evolve as well, to fit both the changing economic model and customer behaviors.
Deb Stewart’s recent article “The Branch Shrinks” in the June 2014 edition of ABA Bank Marketing and Sales used the experience of Sweden as an example of how the branch operating model in the US may evolve in response to these changes. Ms. Stewart describes Sweden’s branch operating model’s evolution in four primary ways:
- Branches will be less monolithic, with branches tailored to location and market;
- Branches will be much smaller and more flexible;
- Customer facing technology will be more prevalent; and
- Branch staffing will both decline and change, with increased use of “universal” associates who will conduct a wider range of functions, transforming tellers into sellers.
The article goes on to describe five case studies for innovative branch design in the United States.
Most commentary suggests branch networks will be redefined in three primary ways:
- Flagship Branches: Hubs in a hub-and-spoke model, offering education and advice and serving as sales centers.
- Community Centers: Branches smaller in scope focused on community outreach driving loyalty.
- Expanded ATMs: These will serve as transaction centers at in-store or other high traffic sites.
In short, there will be a variety of branch types, many staffed with fewer employees, each with a unique role, presenting three customer experience challenges:
- Consistently delivering on the brand promises despite disparate branch types – Does the customer experience reinforce the overall brand promise?
- Fidelity to each branch’s unique role within network – Does the customer experience fit the specific role and objectives of the branch?
- Huge challenges associated with a transformation of skills to universal associates – How do we conduct a massive transition of tellers into financial advisors, fluent in all bank products, and manage these associates with fewer employees on site?
Flagship Branches
The customer experience at flagship branches will best be measured much as it is at traditional branches today, with a mix of customer satisfaction surveys and mystery shopping. A random sampling across all interaction types will ensure that all of the services offered at these education and sales centers are evaluated. Mystery shopping should focus on sales scenarios across all retail product lines, evaluating sales effectiveness, quality of experience and compliance.
Community Centers
Community Center branches present both the greatest need to refine customer experience measurement and the greatest opportunity to use it as a management tool. Universal associates with broad skill requirements, working in lightly staffed branches, mandate that the customer experience be monitored closely. Post-transaction surveys across all interaction types should be used to evaluate employee skill level, appropriate resolution of inquiries, and consistency of service with the brand promise. An automated email or mobile survey will provide managers with a near real time view of the customer experience at a fraction of the cost of other data collection methods. Mystery shopping across a broad range of scenarios will evaluate employee skill level and appropriate referral practices for mortgage and investment services to Flagship branches or Video Bankers. Fewer employees will allow for better tracking of the customer experience at the employee level, which will be a necessity given the increased expectations on these employees with less onsite management.
Expanded ATMs
As with the other branch types, a random sampling of all interaction types will yield a valid sample of transactions these branches perform. As with the other branch types, automated email or mobile surveys will provide a near real time view of the experience. Mystery shopping may be used to evaluate service interactions with video tellers, investment advisors or tellers.
Evolution of the branch network, particularly with changes in the staffing model, will require changes in how the customer experience is monitored. The good news is survey technology is evolving as well, and will give managers the opportunity to gather intelligence on the customer experience in a highly efficient and productive manner.
For more posts in this series, click on the following links:
- Introduction: Best Practices in Bank Customer Experience Measurement Design
- Customer Surveys: Best Practices in Bank Customer Experience Measurement Design
- Mystery Shopping: Best Practices in Bank Customer Experience Measurement Design
- Leverage Unrecognized Experts in the Customer Experience: Best Practices in Bank Customer Experience Measurement Design – Employee Surveys
- Filling in the White Spaces: Best Practices in Bank Customer Experience Measurement Design – Social Listening
- A New Look at Comment Cards: Best Practices in Bank Customer Experience Measurement Design – Customer Comments & Feedback
SERVQUAL Model: A Multi-Item Tool for Comparing Customer Perceptions vs. Expectations
Looking for a tried and true model to understand your service quality?
The SERVQUAL model is an empirical model that has been around for nearly 30 years. While not new, it is a foundation of many of the service quality and customer experience concepts in use today. It is a gap model designed to measure the gaps between customer perceptions and customer expectations.
SERVQUAL describes the customer experience in terms of five dimensions:
1. TANGIBLES – Appearance of physical facilities, equipment, personnel, and communication materials
2. RELIABILITY – Ability to perform the promised service dependably and accurately
3. RESPONSIVENESS – Willingness to help customers and provide prompt service
4. ASSURANCE – Knowledge and courtesy of employees and their ability to convey trust and confidence
5. EMPATHY – Caring, individualized attention the firm provides its customers
Each of these five dimensions is measured using a survey instrument consisting of individual attributes which roll up into each dimension.
For example, each of the five dimensions may consist of the following individual attributes:
Tangibles
• Appearance/cleanliness of physical facilities
• Appearance/cleanliness of personnel
• Appearance/cleanliness of communication/marketing materials
• Appearance/cleanliness of equipment
Reliability
• Perform services as promised/right the first time
• Perform services on time
• Follow customer’s instructions
• Show interest in solving problems
Responsiveness
• Telephone calls/other inquiries answered promptly
• Willingness to help/answer questions
• Problems resolved quickly
Assurance
• Knowledgeable employees/job knowledge
• Employees instill confidence in customer
• Employee efficiency
• Employee recommendations
• Questioning to understand needs
Empathy
• Interest in helping
• Individualized/personal attention
• Ease of understanding/use understandable terms
• Understand my needs/recommending products to best fit my needs
• The employees have my best interests at heart
Call to Action
Research without a call to action may be informative, but not very useful. By measuring both customer perceptions and expectations, SERVQUAL gives managers the ability to prioritize investments in the customer experience based not only on their performance, but performance relative to customer expectations.
The first step in taking action on SERVQUAL results is to calculate a Gap Score by simply subtracting the expectation rating from the perception rating for each attribute (Gap Score = Perception – Expectation). This step alone will give you a basis for ranking each attribute based on its gap between customer perceptions and expectations.
Service Quality Score
In addition to ranking service attributes, the Gap Scores can be used to calculate both an unweighted and a weighted Service Quality Score, the latter based on the relative importance customers assign to each of the five service quality dimensions.
The first step in calculating a Service Quality Score is to average the Gap Score of each attribute within each dimension. This will give you the Gap Score for each dimension (GSD). Averaging the dimension Gap Scores will yield an Unweighted Service Quality Score.
From this unweighted score it is a three step process to calculate a Weighted Service Quality Score.
First, determine importance weights by asking customers to allocate a fixed number of points (typically 100) across each of the five dimensions based on how important the dimension is to them. This point allocation will yield a weight for each dimension based on its importance.
The second step is to multiply the Gap Score for each dimension (GSD) by its importance weight. The final step is to simply sum this product across all five dimensions; this will yield a Weighted Service Quality Score.
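These steps can be summarized in a short calculation. The perception and expectation ratings and the 100-point importance weights below are hypothetical; the arithmetic simply follows the steps described above.

```python
# Sketch of the SERVQUAL calculation steps described above.
# Perception/expectation ratings and importance weights are hypothetical.

# Mean (perception, expectation) ratings per attribute, grouped by dimension
ratings = {
    "Reliability":    [(4.1, 4.6), (4.3, 4.5)],
    "Responsiveness": [(3.9, 4.4), (4.0, 4.2)],
    "Assurance":      [(4.2, 4.3)],
    "Empathy":        [(3.8, 4.1)],
    "Tangibles":      [(4.4, 4.0)],
}

# Importance weights from a 100-point allocation across the five dimensions
weights = {"Reliability": 30, "Responsiveness": 25, "Assurance": 20,
           "Empathy": 15, "Tangibles": 10}

# Gap Score per attribute = Perception - Expectation; average within each
# dimension to get the dimension Gap Score (GSD).
gsd = {dim: sum(p - e for p, e in attrs) / len(attrs) for dim, attrs in ratings.items()}

# Unweighted Service Quality Score = average of the five dimension Gap Scores
unweighted = sum(gsd.values()) / len(gsd)

# Weighted Service Quality Score = sum of GSD x importance weight
weighted = sum(gsd[dim] * weights[dim] for dim in gsd)

print({dim: round(score, 2) for dim, score in gsd.items()})
print(f"Unweighted SQ score: {unweighted:.2f}")
print(f"Weighted SQ score:   {weighted:.1f}")
```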
Click here for a more detailed step by step description of score calculation.
What does all this mean? See the following post for discussion of the implications of SERVQUAL for customer experience managers: The 5 Service Dimensions All Customers Care About.