This post considers two types of variation in the customer experience: common and special cause variation, and their implications for customer researchers.
The concepts of common and special cause variation come from statistical process control and were popularized by the process management discipline Six Sigma.
Common cause variation is the normal, random variation inherent in a system, the statistical noise within it. Examples of common cause variation in the customer experience are:
- Poorly defined, poorly designed, inappropriate policies or procedures
- Poor design or maintenance of computer systems
- Inappropriate hiring practices
- Insufficient training
- Measurement error
Special cause variation, on the other hand, is not random. It does not conform to the laws of probability. It is the signal within the system. Examples of special cause variation include:
- High demand/ high traffic
- Poor adjustment of equipment
- Just having a bad day
What are the implications of common and special cause variation for customer experience researchers?
Given the differences between common cause and special cause variation, researchers need a tool to distinguish between the two: a means of determining whether any observed variation in the customer experience is statistical noise or a signal within the system. Control charts are a statistical tool for making exactly this determination.
Control charts track measurements within upper and lower quality control limits. These limits define statistically significant variation over time (typically at 95% confidence), which means there is a 95% probability that variation beyond them is the result of an actual change in the customer experience (special cause variation), not just normal common cause variation. Observed variation within these quality control limits is common cause variation; variation that migrates outside them is special cause variation.
To illustrate this concept, consider the following example of mystery shop results:
This chart depicts a set of mystery shop scores which both vary from month to month and generally appear to trend upward.
Customer experience researchers need to provide managers a means of determining whether the month-to-month variation is statistical noise or a meaningful signal within the system. Turning this chart into a control chart, by adding statistically defined upper and lower quality control limits, will determine whether the monthly variation is common or special cause.
To define quality control limits, the customer experience researcher needs the count of shops for each month, the average score for each month, and the standard deviation of scores for each month.
The following table adds these three additional pieces of information into our example:
| Count of Mystery Shops | Average Mystery Shop Scores | Standard Deviation of Mystery Shop Scores |
To define the upper and lower quality control limits (UCL and LCL, respectively), apply the following formulas:

UCL = x + 1.96(SD / √n)
LCL = x − 1.96(SD / √n)

where:

x = Grand mean of the score
n = Mean sample size (number of shops)
SD = Mean standard deviation
These equations yield quality control limits at 95% confidence, which means there is a 95% probability that any variation observed outside these limits is special cause variation, rather than normal common cause variation within the system.
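As a sketch, these limits can be computed directly. The monthly figures below are hypothetical (none come from the example chart), and the 1.96 multiplier corresponds to the 95% confidence level described above:

```python
import math

# Hypothetical monthly mystery shop data: (count of shops, mean score, std dev).
# None of these figures come from the example chart.
months = [
    (30, 86.0, 6.0),
    (28, 84.5, 7.1),
    (32, 88.2, 5.4),
    (29, 90.1, 6.3),
]

grand_mean = sum(score for _, score, _ in months) / len(months)  # x: grand mean of scores
mean_n = sum(n for n, _, _ in months) / len(months)              # n: mean sample size
mean_sd = sum(sd for _, _, sd in months) / len(months)           # SD: mean standard deviation

# 95% quality control limits: grand mean +/- 1.96 standard errors
ucl = grand_mean + 1.96 * mean_sd / math.sqrt(mean_n)
lcl = grand_mean - 1.96 * mean_sd / math.sqrt(mean_n)

for n, score, sd in months:
    kind = "special cause" if score > ucl or score < lcl else "common cause"
    print(f"score {score:5.1f}: {kind}")
```

Any month whose score falls outside the computed band is flagged as special cause variation; everything inside the band is treated as noise.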
Calculating these quality control limits and applying them to the above chart produces the following control chart, with upper and lower quality control limits depicted in red:
This control chart now answers the question of which variation is common cause and which is special cause. The general upward trend appears to be statistically significant, with the most recent month above the upper quality control limit. Additionally, the control chart identifies a period of special cause variation in July: with 95% confidence, we know some special cause drove the scores below the lower control limit. Perhaps it was employee turnover, perhaps a new system rollout, or perhaps a weather event that impacted the customer experience.
Previously, we discussed the implications of inter-channel consistency for researchers, and introduced a process for management to define a set of employee behaviors which will support the organization’s customer experience goals across multiple channels.
This post considers the implications of intra-channel consistency for customer experience researchers.
As with cross-channel consistency, intra-channel consistency (consistency within individual channels) requires the researcher to identify the causes of variation in the customer experience. The causes of intra-channel variation are, more often than not, found at the local level: the individual stores, branches, employees, etc. For example, a bank branch with large variation in customer traffic is more likely to experience variation in the customer experience.
Regardless of the source, consistency equals quality.
In our own research, Kinēsis conducted a mystery shop study of six national institutions to evaluate the customer experience at the branch level. In this research, we observed a similar relationship between consistency and quality. The branches in the top quartile in terms of consistency delivered customer satisfaction scores 15% higher than branches in the bottom quartile. But customer satisfaction is a means to an end, not an end goal in and of itself. In terms of an end business objective, such as loyalty or purchase intent, branches in the top quartile of consistency delivered purchase intent ratings 20% higher than branches in the bottom quartile.
Purchase intent and satisfaction with the experience were both measured on a 5-point scale.
Again, it is incumbent on customer experience researchers to identify the causes of inconsistency. A search for the root cause of variation in customer journeys must consider process cause variation.
One tool to measure process cause variation is a Voice of the Customer (VOC) Table. VOC Tables have a two-fold purpose: First, to identify specific business processes which can cause customer experience variations, and second, to identify which business processes will yield the largest ROI in terms of improving the customer experience.
VOC Tables provide a clear road map to identify action steps using a vertical and horizontal grid. On the vertical axis, each customer experience attribute within a given channel is listed, and a judgment is made about the relative importance of each attribute, expressed as a numeric value. On the horizontal axis is an exhaustive list of business processes the customer is likely to encounter, both directly and indirectly, in the customer journey.
This grid design matches each business process on the horizontal axis to each service attribute on the vertical axis. Each cell created in this grid contains a value which represents the strength of the influence of each business process listed on the horizontal axis to each customer experience attribute.
Finally, a value is calculated at the bottom of each column which sums the values of the strength of influence multiplied by the importance of each customer experience attribute. This yields a value of the cumulative strength of influence of each business process on the customer experience weighted by its relative importance.
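The column calculation described above can be sketched in a few lines. The attribute names, importance weights, and 1 to 3 influence strengths below are illustrative placeholders, not figures from any actual VOC table:

```python
# Hypothetical VOC table fragment. Attribute importance weights and the 1-3
# influence strengths below are illustrative, not values from a real study.
importance = {"greeting": 0.6, "needs_assessment": 0.9, "explanation": 0.8}

# influence[process][attribute] = strength of influence on the 1-3 scale (0 = none)
influence = {
    "loan_application": {"greeting": 1, "needs_assessment": 3, "explanation": 2},
    "document_collection": {"greeting": 0, "needs_assessment": 2, "explanation": 1},
}

# Column totals: sum of (strength of influence x attribute importance) per process
weighted = {
    process: sum(strength * importance[attr] for attr, strength in strengths.items())
    for process, strengths in influence.items()
}

# Rank processes by weighted strength of influence, highest first
for process, total in sorted(weighted.items(), key=lambda kv: -kv[1]):
    print(process, round(total, 2))
```

The processes with the largest column totals are the ones whose improvement should yield the largest ROI in terms of the customer experience.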
Consider the following example in a retail mortgage lending environment.
In this example, the relative importance of each customer experience attribute was determined by correlating the attributes to a “would recommend” question, which served as a loyalty proxy. This yields an estimate of importance based on each attribute’s strength of relationship to customer loyalty, and populates the far left column. Specific business processes for the mortgage process are listed across the top of the table. Within each cell, an informed judgment has been made regarding the relative strength of the business process’s influence on the customer experience attribute. This strength of influence is assigned a value of 1 to 3, multiplied by the importance measure of each customer experience attribute, and summed into a strength of influence for each business process, weighted by importance.
In this example, the business processes which will yield the highest ROI in terms of driving the customer experience are quote of loan terms (weighted strength of influence 23.9), clearance of exemptions (22.0), explanation of loan terms (20.2), loan application (18.9) and document collection (16.3).
Previously we discussed ways researchers can increase the likelihood of respondents opening an email survey invitation. Additionally, in a subsequent post we discussed how to get respondents to actually click on the survey link and participate in the survey.
This post is a discussion of ways to keep respondents motivated to complete the entire survey once they have entered it.
At its core, the key to completion rates is an easy-to-complete, credible survey that delivers on all promises offered in the invitation email.
From time to time various service providers of mine send me survey invitations, and I am often surprised how many impose on me, their customer, a 30- or 40-minute survey. They never disclose the survey length in advance, which communicates a complete lack of respect for my time. Beyond being an imposition, it is also bad research practice: ten minutes into the survey I am pressed for time, frustrated, or just plain bored, and I either exit the survey or frivolously complete the remaining questions without any real consideration of my opinions, completely undermining the reliability of my responses.
We recommend keeping survey length short: no more than 10 to 12 minutes, and in some cases, such as a post-transaction survey, 5 minutes.
If research objectives require a long survey, rather than imposing a ridiculously long survey on your customers and producing frivolous results, break a 30- to 40-minute survey into two, or better yet, three parts, fielding each part to a portion of your targeted sample frame.
Additionally, skip logic should be employed to avoid asking questions that are not applicable to a given respondent, thus decreasing the volume of questions you present to the end customer.
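A minimal sketch of how skip logic might be represented, with entirely hypothetical questions and field names:

```python
# Minimal skip-logic sketch: each question may carry a condition over prior
# answers; questions whose condition is not met are never shown. All question
# text and ids are hypothetical.
questions = [
    {"id": "used_atm", "text": "Did you use the ATM?", "condition": None},
    {"id": "atm_rating", "text": "Please rate the ATM experience.",
     "condition": lambda answers: answers.get("used_atm") == "yes"},
    {"id": "overall", "text": "Please rate your overall experience.", "condition": None},
]

def applicable_questions(answers):
    """Return only the questions this respondent should see."""
    return [q for q in questions if q["condition"] is None or q["condition"](answers)]

shown = applicable_questions({"used_atm": "no"})
print([q["id"] for q in shown])  # the ATM rating question is skipped
```

A respondent who did not use the ATM is never shown the ATM rating question, shortening the survey without losing applicable data.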
Finally, include a progress bar to keep respondents informed of how far along they are on the survey.
Ease of Completion
The last thing you want respondents to feel when completing your survey is frustration. If the sample frame is made up of your customers, the primary thing you accomplish is upsetting your customers and damaging your brand. You also create bad research results, because frustrated respondents are not in the proper mindset to give well-considered answers.
Frustration can come from awkward design, question wording, poor programming, or insufficient response choices. Survey wording and vocabulary should be simple and jargon-free, response choices should be comprehensive, and of course the survey programming should be thoroughly proofed and pretested.
Pretesting is a process where the survey is pre-fielded to a portion of the sample frame to test how respondents react to it. Significant portions of the questionnaire left unanswered, or a high volume of “other” or “none of the above” responses, could signal trouble with the survey design.
Survey completion should be easy. Survey entry should work across a variety of platforms, browsers, and devices.
Additionally, respondents should be allowed to take the survey on their own time, even leaving the survey with their answers saved so they can re-enter it when more convenient.
It is incumbent on researchers fielding self-administered surveys to maximize response rates. This reduces the potential for response bias, where the survey results may not accurately reflect the opinions of the entire population of targeted respondents. Previously we discussed ways researchers can increase the likelihood of respondents opening an email survey invitation. This post addresses how to get respondents to actually click on the survey link and participate in the survey.
Make the Invite Easy to Read
Don’t bury the lede. The opening sentence must capture respondents’ attention and motivate them to invest the effort to read the invitation. Keep in mind that most people skim emails, so keep the text of the invitation short, paying close attention to paragraph length. The email should be easy to skim.
Give a Reward
Offering respondents a reward for participation is an excellent way to motivate participation. Tangible incentives like a drawing, coupon, or gift card, if appropriate and within the budget, are excellent tools to maximize response rates. However, rewards do not necessarily need to be tangible. Intangible rewards can also prove excellent motivators. People, particularly customers who have a relationship with the brand, want to be helpful. Expressing the importance of their opinion, and communicating how the brand will use the survey to improve its offering to customers like them, is an excellent way to leverage intangible rewards to motivate participation.
Intangible rewards are often sufficient if the respondent’s cost to participate in the survey is minimal. Perhaps the largest cost to a potential respondent is the time required to complete the survey. Give them an accurate estimate of that time, and keep it short: we recommend no more than 10 minutes, preferably five to six. If the research objectives require a longer instrument, break the survey into two or three shorter surveys and deliver them separately to different targeted respondents. Do not field excessively long surveys or misquote the estimated completion time; it is rude to impose on your respondents, disastrous to your participation rates, and unethical to misstate the survey length. As with getting participants to open the email, credibility plays a critical role in getting them to click on the survey link.
One of the best ways to garner credibility with the survey invite is to assure participants of confidentiality. This is particularly important for customer surveys, where customers commonly interact with employees. For example, a community bank, where customers may interact with bank employees not only in the context of banking but broadly in the community, must assure customers that their survey responses will be kept strictly confidential.
Personalizing the survey with appropriate merge fields is also an excellent way to garner credibility.
Make it as easy as possible for the participant to enter the survey. Program a link to the survey, and make sure it is both visible and presented early in the email. Again, most people skim the contents of emails, so place the link in the top third of the email and make clear that it is the link to enter the survey.
In designing survey invitations, remember to write short, concise, easy-to-read emails that both leverage respondents’ reward centers (tangible or intangible) and credibly estimate the short time required to complete the survey. This approach will help maximize response rates and avoid some of the pitfalls of response bias. Click here for the next post in this series, on prompting respondents to complete the survey.
In fielding surveys researchers must be aware of the concepts of error and bias and how they can creep into a survey, potentially making the survey unreliable in ways that cannot be predicted. For example, one source of error is statistical error, where not enough respondents are surveyed to make the results statistically reliable. Another source of error, or bias, is response bias caused by not having a random sample of the targeted population.
A key concept of survey research is randomness of sample selection: in essence, giving each member of the targeted survey population an equal chance of being surveyed. Response rates are important in self-administered surveys (such as email surveys) because it is possible that non-responders (people who for some reason choose not to complete the survey) have different opinions than those who choose to participate. As a result, the survey is not purely random. If non-responders are somehow different from responders, the survey results will reflect that difference, thus biasing the research. It is therefore incumbent on researchers to maximize the survey response rate.
Say for example, a bank wants to survey customers after they have completed an online transaction. If customers who love the bank’s online capabilities are more likely to participate in the survey than those who do not like the bank’s online capabilities, the survey results will be biased in favor of a positive view of the bank’s online offering because it is not a representative sample – it is skewed toward customers with the positive view.
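This skew is easy to quantify. The population shares, scores, and response rates below are made-up numbers chosen only to illustrate the mechanism:

```python
# Quantifying the skew described above. The population shares, scores, and
# response rates are made-up numbers chosen only to illustrate the mechanism.
happy_share = 0.5                    # half the customers love the online channel
happy_score, unhappy_score = 4.5, 2.5
happy_rr, unhappy_rr = 0.30, 0.10    # happy customers respond 3x as often

# True mean satisfaction across the whole population
true_mean = happy_share * happy_score + (1 - happy_share) * unhappy_score

# Observed mean among those who actually respond
resp_happy = happy_share * happy_rr
resp_unhappy = (1 - happy_share) * unhappy_rr
observed_mean = (resp_happy * happy_score + resp_unhappy * unhappy_score) / (
    resp_happy + resp_unhappy
)

print(round(true_mean, 2), round(observed_mean, 2))
```

With these illustrative numbers, the observed mean overstates true satisfaction by half a point on a 5-point scale, purely because of differential response rates.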
It is, again, incumbent on researchers to maximize the response rate as much as possible in self-administered email surveys.
Pre-Survey Awareness Campaign
One strategy to maximize response rates (particularly in a customer survey context) is a pre-survey awareness campaign to make customers aware of the coming survey and encourage participation. Such a campaign can take many forms, such as:
- Letter on company letterhead, signed by a high profile senior executive.
- Statement or billing inserts
- Email in advance of the survey
Each of these three is an excellent way to introduce the survey to respondents and maximize response rates.
The next steps in maximizing response rates in email surveys are passing SPAM filter tests and prompting the recipient to open the email. The core concept here is credibility: make the email appear as credible as possible.
The first step to maintaining credibility is to avoid getting caught in SPAM filters. To that end, the email content should avoid the following:
- Words common in SPAM, like “win” or “free”
- The use of ALL CAPS
- Excessive punctuation
- Special characters
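The pitfalls above can be sketched as a simple pre-send screen. The word list and thresholds are illustrative assumptions, not the rules of any actual SPAM filter:

```python
import re

# Heuristic pre-send screen for the pitfalls listed above. The word list and
# thresholds are illustrative assumptions, not any real filter's rules.
SPAM_WORDS = {"win", "free", "winner", "prize"}

def invite_warnings(text):
    """Return a list of warnings about SPAM-like content in an invite."""
    warnings = []
    words = re.findall(r"[A-Za-z']+", text)
    if any(w.lower() in SPAM_WORDS for w in words):
        warnings.append("contains common spam words")
    if any(len(w) > 3 and w.isupper() for w in words):
        warnings.append("contains ALL CAPS words")
    if re.search(r"[!?]{2,}", text):
        warnings.append("excessive punctuation")
    return warnings

print(invite_warnings("WIN a FREE gift card!!!"))
```

A check like this is no substitute for a proper deliverability test, but it catches the most obvious red flags before an invitation goes out.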
Additionally, do not spoof emails. Spoofing is the forgery of an email header to make it appear it originated from a source other than the actual source. Send emails from your server. (Sometimes Kinesis has clients who want the email to appear to originate from their server. In such cases, we receive the sample from the client, append a unique identifier and send it back to the client to actually be mailed from their servers.)
Perhaps the best strategy to maintain the credibility of the email invite is to conform to Marketing Research Association (MRA) guidelines. These guidelines include:
- Clearly identify the researcher, including phone number, mailing address, and email
- Post privacy policies online and include a link to these policies
- Include a link to opt out of future emails
From and Subject Lines
Both the FROM and SUBJECT lines are critical in getting the respondent to open the email.
The FROM line has to be as credible and recognizable as possible, avoiding vague or generic terms like “feedback”. For surveys of customers, use the company name or the name of a recognizable company representative.
The SUBJECT line must communicate the subject of the email in a credible way that will make the respondent want to open the email. Keep it brief (50 characters or less), clear, concise and credible.
Not only is the content of the email important, but the timing of delivery plays a role in response rates. In our experience sending the survey invitation in the middle of the week (Tuesday – Thursday) during daytime hours increases the likelihood that the email will be noticed by the respondent.
After an appropriate amount of time (typically for our clients 5 days), reminder emails should be sent, politely reminding the respondent of the previous invitation, and highlighting the importance of their opinion. One, perhaps two, reminder emails are appropriate, but do not send more than two.
To maximize the probability that respondents will receive and open the email focus on sending a credible email mid-week, one which will pass SPAM filter tests, contain accurate credible and compelling SUBJECT and FROM lines, and send polite reminder emails to non-responders.
But opening the email is just the first step. The actual objective is to get respondents to enter and complete the survey. Click here for the next post in this series, on prompting respondents to participate in the survey.
Many banks conduct periodic customer satisfaction research to assess the opinions and experiences of their customer base. While this information can be useful, it tends to be very broad in scope, offering little practical information to the front-line. A best practice is a more targeted, event-driven approach collecting feedback from customers about specific service encounters soon after the interaction occurs.
These surveys can be performed using a variety of data collection methodologies, including e-mail, phone, point-of-sale invite, web intercept, in-person intercept, and even US mail. Fielding surveys by e-mail, with its immediacy and relatively low cost, offers the most potential for return on investment. Historically, there have been legitimate concerns about the representativeness of samples selected by email. However, as banks collect email addresses for a growing share of their customers, there is less concern about sample selection bias.
The process for fielding such surveys is fairly simple. On a daily basis, a data file (in research parlance, “sample”) is generated containing the customers who have completed a service interaction in any channel. This data file should be deduped, cleaned against a do-not-contact list, and cleaned against customers who have been surveyed recently (typically within three months, depending on the channel). At this point, if the bank were to send invitations to every customer in the file, it would quickly exhaust the sample, potentially running out of eligible customers for future surveys. To avoid this, a target number of completed surveys should be set per business unit, and a random selection process employed to select just enough customers to reach this target without surveying every customer.
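The daily sample-preparation steps can be sketched as follows; all field names and suppression-list contents are hypothetical:

```python
import random

# Sketch of the daily sample-preparation steps described above; all field
# names and id sets are hypothetical.
def prepare_sample(interactions, do_not_contact, recently_surveyed, target):
    seen = set()
    eligible = []
    for record in interactions:
        cid = record["customer_id"]
        if cid in seen:                      # dedupe the day's file
            continue
        seen.add(cid)
        if cid in do_not_contact or cid in recently_surveyed:
            continue                         # clean against suppression lists
        eligible.append(record)
    # Randomly select just enough customers to reach the target
    return random.sample(eligible, min(target, len(eligible)))

interactions = [{"customer_id": i % 8, "channel": "teller"} for i in range(12)]
selected = prepare_sample(interactions, do_not_contact={1}, recently_surveyed={2, 3}, target=3)
print(len(selected))  # 3 invitations drawn from 5 eligible customers
```

Random selection from the cleaned file preserves the equal-chance principle while leaving the rest of the eligible customers available for future waves.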
So what do banks use these surveys for? Generally, they fall into a number of broad categories:
Post-Transaction: Teller & Contact Center: Post-transaction surveys are event-driven: a transaction or service interaction triggers the customer’s selection for a survey shortly after the interaction occurs. As the name implies, the purpose of this type of survey is to measure satisfaction with a specific transaction.
New Account & On-Boarding: New account surveys measure satisfaction with the account opening process, as well as determine the reasons behind new customers’ selection of the bank for a new deposit account or loan – providing valuable insight into new customer identification and acquisition.
Closed Account Surveys: Closed account surveys identify sources of run-off or churn to provide insight into improving customer retention.
Call to Action
Research without a call to action may be informative, but it is not very useful. Call-to-action elements should be built into the research design, providing a road map for clients to maximize the ROI of customer experience measurement.
Finally, post-transaction surveys support other behavioral research tools. Properly designed surveys yield insight into customer expectations, which provide an opportunity for a learning feedback loop to support observational research, such as mystery shopping, where customer expectations are used to inform service standards which are in turn measured through mystery shopping.
For more posts in this series, click on the following links:
- Introduction: Best Practices in Bank Customer Experience Measurement Design
- Mystery Shopping: Best Practices in Bank Customer Experience Measurement Design
- Leverage Unrecognized Experts in the Customer Experience: Best Practices in Bank Customer Experience Measurement Design – Employee Surveys
- Filling in the White Spaces: Best Practices in Bank Customer Experience Measurement Design – Social Listening
- A New Look at Comment Cards: Best Practices in Bank Customer Experience Measurement Design – Customer Comments & Feedback
- Customer Experience Measurement Implications of Changing Branch Networks
Kinesis uses an algorithm which factors in the targeted quota, response rate, remaining days in the month, and number of surveys completed to select just enough customers to reach the quota without exhausting the sample.
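A quota-pacing rule of this kind might be sketched as follows. This is an illustration built from the factors named above, not Kinesis’s actual algorithm, and the input figures are made up:

```python
import math

# Sketch of a quota-pacing rule of the kind described in the note above; this
# is an illustration, not Kinesis's actual algorithm.
def daily_invites(quota, completed, response_rate, remaining_days):
    """Invitations to send today so expected completes just reach the quota."""
    remaining = max(quota - completed, 0)
    if remaining == 0 or remaining_days <= 0 or response_rate <= 0:
        return 0
    completes_per_day = remaining / remaining_days
    return math.ceil(completes_per_day / response_rate)

# 100 completes targeted, 40 in hand, 20% response rate, 10 days left:
print(daily_invites(quota=100, completed=40, response_rate=0.20, remaining_days=10))
```

Recomputing this figure each day lets the invitation volume shrink as completes accumulate, so the quota is reached without burning through the eligible sample.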