This post considers two types of variation in the customer experience: common and special cause variation, and their implications for customer researchers.
The concepts of common and special cause variation originate in statistical process control and were popularized by the process management discipline Six Sigma.
Common cause variation is normal or random variation within the system. It is statistical noise within the system. Examples of common cause variation in the customer experience are:
- Poorly defined, poorly designed, inappropriate policies or procedures
- Poor design or maintenance of computer systems
- Inappropriate hiring practices
- Insufficient training
- Measurement error
Special cause variation, on the other hand, is not random. It does not conform to the laws of probability. It is the signal within the system. Examples of special cause variation include:
- High demand/ high traffic
- Poor adjustment of equipment
- Just having a bad day
What are the implications of common and special cause variation for customer experience researchers?
Given the differences between common cause and special cause variation, researchers need a tool to help them distinguish between the two: a means of determining whether any observed variation in the customer experience is statistical noise or a signal within the system. Control charts are a statistical tool for making exactly this determination.
Control charts track measurements within upper and lower quality control limits. These quality control limits define statistically significant variation over time (typically at 95% confidence), which means there is a 95% probability that variation outside the limits is the result of an actual change in the customer experience (special cause variation), not just normal common cause variation. Observed variation within these quality control limits is common cause variation. Variation which migrates outside these quality control limits is special cause variation.
To illustrate this concept, consider the following example of mystery shop results:
This chart depicts a set of mystery shop scores which both vary from month to month and generally appear to trend upward.
Customer experience researchers need to provide managers a means of determining if the month to month variation is statistical noise or some meaningful signal within the system. Turning this chart into a control chart by adding statistically defined upper and lower quality control limits will determine if the monthly variation is common or special cause.
To define quality control limits, the customer experience researcher needs three pieces of information for each month: the count of mystery shops, the average score, and the standard deviation of scores.
The following table adds these three additional pieces of information into our example:
| Count of Mystery Shops | Average Mystery Shop Scores | Standard Deviation of Mystery Shop Scores |
| --- | --- | --- |
To define the upper and lower quality control limits (UCL and LCL, respectively), apply the following formulas:

UCL = x + 1.96(SD / √n)
LCL = x - 1.96(SD / √n)

Where:
x = Grand mean of the scores
n = Mean sample size (number of shops per month)
SD = Mean standard deviation

These equations yield quality control limits at 95% confidence, which means there is a 95% probability any variation observed outside these limits is special cause variation, rather than normal common cause variation within the system.
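As a sketch of this calculation, the following Python snippet computes 95% control limits from hypothetical monthly mystery shop results (all counts, means, and standard deviations below are invented for illustration):

```python
import math

# Hypothetical monthly mystery shop results: (count of shops, mean score, std dev)
months = [
    (30, 82.0, 6.0), (28, 84.5, 5.5), (32, 80.0, 6.5),
    (29, 85.0, 5.8), (31, 86.5, 6.2), (30, 88.0, 5.9),
]

# Grand mean of scores, weighted by each month's shop count
grand_mean = sum(n * mean for n, mean, _ in months) / sum(n for n, _, _ in months)
mean_n = sum(n for n, _, _ in months) / len(months)     # mean sample size
mean_sd = sum(sd for _, _, sd in months) / len(months)  # mean standard deviation

# 95% confidence limits around the grand mean (z = 1.96)
margin = 1.96 * mean_sd / math.sqrt(mean_n)
ucl = grand_mean + margin
lcl = grand_mean - margin

# Flag each month as common or special cause variation
for n, mean, _ in months:
    flag = "special cause" if (mean > ucl or mean < lcl) else "common cause"
    print(f"mean={mean:5.1f}  {flag}")
```

Months whose mean falls outside the computed limits are the ones that warrant a search for a special cause.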
Calculating these quality control limits and applying them to the above chart produces the following control chart, with upper and lower quality control limits depicted in red:
This control chart now answers the question of which variation is common cause and which is special cause. The general upward trend appears to be statistically significant, with the most recent month above the upper quality control limit. Additionally, this control chart identifies a period of special cause variation in July. With 95% confidence we know some special cause drove the scores below the lower control limit. Perhaps this special cause was employee turnover, perhaps a new system rollout, or perhaps a weather event that impacted the customer experience.
Previously, we discussed the implications of inter-channel consistency for researchers, and introduced a process for management to define a set of employee behaviors which will support the organization’s customer experience goals across multiple channels.
This post considers the implications of intra-channel consistency for customer experience researchers.
As with cross-channel consistency, intra-channel consistency (consistency within individual channels) requires the researcher to identify the causes of variation in the customer experience. The causes of intra-channel variation are more often than not found at the local level: the individual stores, branches, employees, etc. For example, a bank branch with large variation in customer traffic is more likely to experience variation in the customer experience.
Regardless of the source, consistency equals quality.
In our own research, Kinēsis conducted a mystery shop study of six national institutions to evaluate the customer experience at the branch level. In this research, we observed a similar relationship between consistency and quality. The branches in the top quartile in terms of consistency delivered customer satisfaction scores 15% higher than branches in the bottom quartile. But customer satisfaction is a means to an end, not an end goal in and of itself. In terms of an end business objective, such as loyalty or purchase intent, branches in the top quartile of consistency delivered purchase intent ratings 20% higher than branches in the bottom quartile.
Purchase intent and satisfaction with the experience were both measured on a 5-point scale.
Again, it is incumbent on customer experience researchers to identify the causes of inconsistency. A search for the root cause of variation in customer journeys must consider the business processes that cause variation.
One tool to measure process cause variation is a Voice of the Customer (VOC) Table. VOC Tables have a two-fold purpose: First, to identify specific business processes which can cause customer experience variations, and second, to identify which business processes will yield the largest ROI in terms of improving the customer experience.
VOC Tables provide a clear road map to identify action steps using a vertical and horizontal grid. On the vertical axis, each customer experience attribute within a given channel is listed. For each of these attributes, a judgment is made about its relative importance, expressed as a numeric value. On the horizontal axis is an exhaustive list of business processes the customer is likely to encounter, both directly and indirectly, in the customer journey.
This grid design matches each business process on the horizontal axis to each service attribute on the vertical axis. Each cell created in this grid contains a value which represents the strength of the influence of the business process on that customer experience attribute.
Finally, a value is calculated at the bottom of each column which sums the values of the strength of influence multiplied by the importance of each customer experience attribute. This yields a value of the cumulative strength of influence of each business process on the customer experience weighted by its relative importance.
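The column calculation described above can be sketched in a few lines of Python. The attributes, processes, importance weights, and influence scores below are all hypothetical placeholders, not the values from the mortgage example that follows:

```python
# Hypothetical VOC table: importance of each attribute, and the judged
# strength of influence (1-3, 0 = no influence) of each process on it.
attributes = {          # attribute -> importance weight
    "Accuracy": 4.5,
    "Timeliness": 3.8,
    "Courtesy": 2.9,
}
processes = ["Application", "Underwriting", "Closing"]
influence = {           # attribute -> influence score per process (same order)
    "Accuracy":   [2, 3, 3],
    "Timeliness": [3, 2, 1],
    "Courtesy":   [1, 0, 2],
}

# Column totals: sum of (importance x influence) down each process column
weighted = []
for col, proc in enumerate(processes):
    total = sum(attributes[a] * influence[a][col] for a in attributes)
    weighted.append((proc, total))

# Highest totals = processes with the most leverage on the experience
for proc, total in sorted(weighted, key=lambda t: -t[1]):
    print(f"{proc:12s} {total:5.1f}")
```

Sorting the column totals in descending order surfaces the business processes with the greatest weighted influence on the customer experience.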
Consider the following example in a retail mortgage lending environment.
In this example, the relative importance of each customer experience attribute was determined by correlating these attributes to a “would recommend” question, which served as a loyalty proxy. This yields an estimate of importance based on each attribute’s strength of relationship to customer loyalty, and populates the far left column. Specific business processes for the mortgage process are listed across the top of the table. Within each cell, an informed judgment has been made regarding the relative strength of the business process’s influence on the customer experience attribute. This strength of influence is assigned a value of 1 to 3, multiplied by the importance measure of each customer experience attribute, and summed into a strength of influence weighted by importance for each business process.
In this example, the business processes which will yield the highest ROI in terms of driving the customer experience are quote of loan terms (weighted strength of influence 23.9), clearance of exemptions (22.0), explanation of loan terms (20.2), loan application (18.9) and document collection (16.3).
This post considers the implications of cross-channel consistency for customer experience researchers. The first research implication of inter-channel consistency is that researchers must investigate service delivery consistency at its source.
The range of choices available to customers here in the 21st century is incredible. Gone are the Henry Ford days when you, as he put it, “could have any color you want as long as it’s black.” Modern customers have an array of choices available to them, not only in brands but in delivery channels. Modern brands must serve customers in the channel of their choice, be it on-line, mobile, contact center, or in-person. As customer choice expands, cross-channel consistency has become more and more important.
The problem for customer experience researchers is that this channel expansion requires a broad toolbox of research techniques, as different channels require unique systems and processes appropriate to the channel. Systems and processes for on-line channels are different from those for in-person channels. These different systems and processes often lead to the siloing of channels, which may help make individual channels more efficient, but runs the risk of inconsistencies in the customer experience from one channel to the next.
Customers, however, don’t look at a brand as a collection of siloed channels. Customers do not care about organizational charts. They expect a consistent customer experience regardless of channels. Customers expect cross-channel consistency.
If senior management has defined the customer experience organization-wide, the researcher’s role in coordinating research tools is much easier. If management has not defined the customer experience organization-wide, the researcher’s role is nearly impossible.
The first step in defining the customer experience organization-wide is writing a clear customer experience mission statement which clearly communicates how customers should experience the brand, and how management wants customers to feel as a result of the experience. Next, the customer experience should be defined in terms of broad dimensions and specific attributes which constitute the desired customer experience and emotional reaction to the brand.
For illustration, let’s consider the following example:
A bank may define their customer experience with four broad dimensions, which can be described as:
- Relationship Building
- Sales Process
- Product Knowledge
- Customer Knowledge
Next, the customer experience leadership of this bank must define each of these broad dimensions in terms of specific attributes which combine to make up the dimensions. For example, each of the above four dimensions may be defined by the following attributes:
| Dimension | Attributes |
| --- | --- |
| Relationship Building | Establish trust; Commitment to customer needs; Perceived as trusted advisor |
| Sales Process | Referral to appropriate partner |
| Product Knowledge | Understanding of a range of products; Understand features and benefits; Explain benefits in ways that are meaningful to customers |
| Customer Knowledge | Needs analysis |
Once each of the above dimensions has been defined in terms of specific attributes, the next step in translating the customer experience definition to action is to define a set of empirical behaviors which support each attribute.
For example, establishing trust is an attribute of relationship building.
Relationship Building –> Establish Trust
Under this example, a set of behaviors is defined which are designed to establish trust. For example, these behaviors may be:
- Maintain eye contact
- Speak clearly
- Maintain smile
- Thank for business
- Ask “What else may we assist you with today?”
- Encourage future business
Now, each of these six behaviors is mapped across each channel. So, for example, this bank may map these behaviors across channels as follows:
Behaviors Which Support Establishing Trust:
| New Accounts | Teller | Contact Center |
| --- | --- | --- |
| Maintain eye contact | Maintain eye contact | — |
| Speak clearly | Speak clearly | Speak clearly |
| Maintain smile | Maintain smile | Sound as if they were smiling through the phone |
| Thank for business | Thank for business | Thank for business |
| Ask “What else may we assist you with today?” | Ask “What else may we assist you with today?” | Ask “What else may we assist you with today?” |
| Encourage future business | Encourage future business | Encourage future business |
Repeating this process of mapping behaviors to each of the attributes will produce a complete list of employee behaviors appropriate to each channel in support of management’s broader customer experience objectives.
Research without call to action may be interesting, but in the end, not very useful.
This is particularly true with customer experience research. It is incumbent on customer experience researchers to give management research tools which will identify clear call-to-action items: items in which investments will yield the highest return on investment (ROI) in terms of meeting management’s customer experience objectives. This post introduces a simple, intuitive mystery shopping analysis technique that identifies the service behaviors with the highest potential ROI in terms of achieving these objectives.
Mystery shopping gap analysis is a simple three-step analytical technique.
Step 1: Identify the Key Objective of the Customer Experience
The first step is to identify the key objective of the customer experience. Ask yourself, “How do we want the customer to think, feel or act as a result of the customer experience?”
- Do you want the customer to have increased purchase intent?
- Do you want the customer to have increased return intent?
- Do you want the customer to have increased loyalty?
Let’s assume the key objective is increased purchase intent. At the conclusion of the customer experience you want the customer to have increased purchase intent.
Next, draft a research question to serve as a dependent variable measuring the customer’s purchase intent. Dependent variables are those which are influenced by, or dependent on, the behaviors measured in the mystery shop.
Step 2: Determine the Strength of the Relationship to this Key Customer Experience Objective
After fielding the mystery shop study and collecting a statistically significant number of shops, the next step is to determine the strength of the relationship between this key customer experience measure (the dependent variable) and each behavior or service attribute measured (the independent variables). There are a number of ways to determine the strength of the relationship; perhaps the easiest is a simple cross-tabulation of the results. Cross-tabulation groups all the shops with positive purchase intent and all the shops with negative purchase intent and makes comparisons between the two groups. The greater the difference in the frequency of a given behavior or service attribute between shops with positive purchase intent and those with negative purchase intent, the stronger the relationship to purchase intent.
The result of this cross-tabulation yields a measure of the importance of each behavior or service attribute. Those with stronger relationships to purchase intent are deemed more important than those with weaker relationships to purchase intent.
Step 3: Plot the Performance of Each Behavior Relative to Its Relationship to the Key Customer Experience Objective
The third and final step in this analysis is to plot the importance of each behavior relative to its performance on a two-dimensional quadrant chart, where one axis is the importance of the behavior and the other is its performance, or the frequency with which it is observed.
Interpreting the results of this quadrant analysis is fairly simple. Behaviors with above average importance and below average performance are the “high potential” behaviors. These are the behaviors with the highest potential for return on investment (ROI) in terms of driving purchase intent. These are the behaviors to prioritize investments in training, incentives and rewards. These are the behaviors which will yield the highest ROI.
The rest of the behaviors are prioritized as follows:
Those with high importance and high performance are the next priority. They are the behaviors to maintain. They are important and employees perform them frequently, so invest to maintain their performance.
Those with low importance and low performance are areas to address if resources are available.
Finally, behaviors or service attributes with low importance yet high performance are in no need of investment. They are performed with a high degree of frequency, but are not very important, and will not yield an ROI in terms of driving purchase intent.
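The quadrant logic above reduces to two comparisons against the averages. Here is a minimal sketch with hypothetical behaviors and made-up importance/performance scores:

```python
# Hypothetical behaviors scored on importance (relationship to purchase
# intent) and performance (frequency observed across shops), both 0-1.
behaviors = {
    "Greet customer":       (0.9, 0.95),
    "Offer add-on product": (0.8, 0.40),
    "Use customer name":    (0.3, 0.35),
    "Mention promotion":    (0.2, 0.90),
}

avg_imp = sum(i for i, _ in behaviors.values()) / len(behaviors)
avg_perf = sum(p for _, p in behaviors.values()) / len(behaviors)

def quadrant(importance, performance):
    """Assign a behavior to one of the four priority quadrants."""
    if importance >= avg_imp:
        return "high potential" if performance < avg_perf else "maintain"
    return "address if resources allow" if performance < avg_perf else "no investment needed"

for name, (imp, perf) in behaviors.items():
    print(f"{name:22s} -> {quadrant(imp, perf)}")
```

Behaviors landing in the "high potential" quadrant (above-average importance, below-average performance) are the ones the post identifies as the best candidates for training and incentive investment.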
Research without call to action may be interesting, but in the end, not very useful.
This simple, intuitive gap analysis technique will provide a clear call to action in terms of identifying service behaviors and attributes which will yield the most ROI in terms of achieving your key objective of the customer experience.
Mystery shopping not in pursuit of an overall customer experience objective may be interesting, and it may be successful in motivating certain service behaviors, but it will ultimately fail to maximize return on investment.
Consider the following proposition:
“Every time a customer interacts with a brand, the customer learns something about the brand, and based on what they learn, adjust their behavior in either profitable or unprofitable ways.”
These behavioral adjustments could be profitable: positive word of mouth, fewer complaints, use of less expensive channels, increased wallet share, loyalty, or purchase intent. Or these adjustments could be unprofitable: negative word of mouth, more complaints, and decreased wallet share, purchase intent, or loyalty.
There is power in this proposition. Understanding it is the key to managing the customer experience in a profitable way. Unlocking this power gives managers a clear objective for the customer experience in terms of what you want the customer to learn from it and how you want them to react to it. Ultimately, it becomes a guidepost for all aspects of customer experience management – including customer experience measurement.
In designing customer experience measurement tools, ask yourself:
- What is the overall objective of the customer experience?
- How do you want the customer to feel as a result of the experience?
- How do you want the customer to act as a result of the experience?
- Do you want the customer to have increased purchase intent?
- Do you want the customer to have increased return intent?
- Do you want the customer to have increased loyalty?
The answer to the above series of questions will become the guideposts for designing a customer experience which will achieve your objectives.
The answers to the above questions will serve as a basis for evaluating the customer experience against your objectives. In research terms, the answer to this question or questions will become the dependent variable(s) of your customer experience research – the variables influenced or dependent on the specific attributes of the customer experience.
For example, let’s assume the objective of your customer experience is increased return intent. As part of a mystery shopping program, ask a question designed to capture return intent – a question like, “Had this been an actual visit, how did the experience during this shop influence your intent to return for another transaction?” This is the dependent variable.
The next step is to determine the relationship between every service behavior or attribute and the dependent variable (return intent). The strength of this relationship is a measure of the importance of each behavior or attribute in terms of driving return intent. It provides a basis from which to make informed decisions as to which behaviors or attributes deserve more investment in terms of training, incentives, and rewards.
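One simple way to quantify that relationship is a correlation between each observed behavior and the return intent rating. The sketch below uses invented data and hypothetical behavior names (`greeting`, `wait_ok`), with a hand-rolled Pearson correlation so it needs only the standard library:

```python
from statistics import mean

# Hypothetical shop data: behavior observed (1/0) and shopper-reported
# return intent on a 5-point scale, one entry per shop.
greeting = [1, 1, 0, 1, 0, 1, 0, 1]
wait_ok = [1, 0, 1, 1, 0, 0, 1, 1]
intent = [5, 4, 2, 5, 1, 4, 2, 5]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Strength of relationship of each behavior to return intent
drivers = {"greeting": pearson(greeting, intent),
           "wait_ok": pearson(wait_ok, intent)}
print(drivers)
```

In this toy data the greeting correlates far more strongly with return intent than the wait time, so it would be deemed the more important behavior to invest in.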
This is what Kinesis calls Key Driver Analysis, an analysis technique designed to identify the service behaviors and attributes which are key drivers of your customer experience objectives. In the end, it provides an informed basis for making decisions about investments in the customer experience.
Mystery shop programs measure human interactions: interactions with other humans and, increasingly, interactions with automated machines. Given that humans are on one or both sides of the equation, it is not surprising that variation in the customer experience exists.
When designing a mystery shop program, a central decision is the number of shops to deploy. This decision depends on a number of issues, including the desired reliability, the number of customer interactions, and the budgetary resources available for the program. However, one additional and very important consideration, which frankly doesn’t get much attention, is the amount of variation expected in the customer experience to be measured.
The level of variation in the customer experience is an important consideration. Consistent customer experience processes require fewer mystery shops than those with a high degree of variation. To illustrate this, consider the following:
Assume a customer experience process is 100% consistent with zero variation from experience to experience. Such a process would require only one shop to accurately describe the experience as a whole. Now, consider a customer experience process with an infinite level of variation in the experience. Such a process would require far more than one shop. In fact, assuming an infinite level of variation, 400 shops would be required to achieve a margin of error of plus or minus five percent.
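The 400-shop figure can be checked with the standard sample size formula for a proportion at maximum variation (p = 0.5, the worst case the post calls "infinite" variation), using z = 1.96 for 95% confidence:

```python
import math

# Sample size for a proportion: n = z^2 * p * (1 - p) / e^2
# p = 0.5 assumes maximum variation; e is the desired margin of error.
def shops_needed(e, z=1.96, p=0.5):
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

print(shops_needed(0.05))  # 385, in line with the ~400 figure cited above
```

The exact result is 385 shops at plus or minus five percent; the post's 400 is that figure rounded up for planning purposes.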
Obviously, the variation of most customer experience processes resides somewhere between perfect consistency and infinite variation. So how do managers determine the level of variation in their process? The answer to this question will probably be more qualitative than quantitative. Ask yourself:
- Do you have a set of standardized customer experience expectations?
- Are these expectations clearly communicated to employees?
- Other than mystery shopping, do you have any processes in place to monitor the customer experience? If so, are the results of these monitoring tools consistent from month-to-month or quarter-to-quarter?
To make it easy, I always ask new clients to give a qualitative estimate of the level of variation in their customer experience: high, medium, or low. The answer to this question is then considered along with the level of statistical reliability desired and the budgetary resources available in determining the appropriate number of shops.
So, ask yourself: how much variation can we expect in our customer experience?
Customer experience researchers are constantly looking for ways to make their observations relevant, to turn observations into insight. Observing a behavior or service attribute is one thing; linking observations to insight that will maximize return on customer experience investments is another. One way to link customer experience observations to insights that drive ROI is to explore the influence of customer experience attributes on key business outcomes such as loyalty and wallet share.
The first step is to gather impressions of a broad array of customer experience attributes, such as: accuracy, cycle time, willingness to help, etc. Make this list as long as you reasonably can without making the survey instrument too long.
The next step is to explore the relationship of these service attributes to loyalty and share of wallet.
Two Questions – Lots of Insight
In our experience, two questions, a “would recommend” question and a primary provider question, yield valuable insight into the relative importance of specific service attributes. Together, these two questions form the foundation of a two-dimensional analytical framework to determine the relative importance of specific service attributes in driving loyalty and wallet share.
Research has determined the business attribute with the highest correlation to profitability is customer loyalty. Customer loyalty lowers sales and acquisition costs per customer by amortizing these costs across a longer lifetime – leading to some extraordinary financial results.
Measuring customer loyalty in the context of a survey is difficult. Surveys best measure attitudes and perceptions. Loyalty is a behavior not an attitude. Survey researchers therefore need to find a proxy measurement to determine customer loyalty. A researcher might measure customer tenure under the assumption that length of relationship predicts loyalty. However, customer tenure is a poor proxy. A customer with a long tenure may leave, or a new customer may be very satisfied and highly loyal.
Likelihood of referral captures the customer’s likelihood to refer a brand to a friend, relative, or colleague. It stands to reason that if customers are willing to refer others to a brand, they will remain loyal as well, because customers who promote a brand are putting their reputation on the line. This willingness to take on reputational risk is founded on a feeling of loyalty and trust.
Any likelihood of referral question can be used, depending on the specifics of your objectives. Kinesis has had success with both a yes/no question, “Would you refer us to a friend, relative or colleague?” and the Net Promoter methodology. The Net Promoter methodology asks for a rating of the likelihood of referral to a friend, relative or colleague on an 11-point (0-10) scale. Customers with a rating of 0-6 are labeled “detractors,” those with ratings of 7 or 8 are identified as “passive referrers,” while those who assign a rating of 9 or 10 are labeled “promoters.”
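The Net Promoter classification described above takes only a few lines to implement (the ratings in the usage example are hypothetical):

```python
def classify(score):
    """Net Promoter classification of a 0-10 likelihood-to-refer rating."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

def nps(scores):
    """Net Promoter Score: percent promoters minus percent detractors."""
    labels = [classify(s) for s in scores]
    promoters = labels.count("promoter") / len(labels)
    detractors = labels.count("detractor") / len(labels)
    return round(100 * (promoters - detractors))

print(nps([10, 9, 8, 7, 6, 3, 9, 10]))  # 25
```

Here four of the eight hypothetical respondents are promoters and two are detractors, giving a score of 50 minus 25, or 25.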
In our experience, asking the yes/no question “Would you refer us to a friend, relative or colleague?” produces starker differences in this two-dimensional analysis, making it easier to identify which service attributes have a stronger relationship to both loyalty and engagement.
Similar to loyalty, customer engagement, or wallet share, can lead to some extraordinary financial results. Wallet share is the percentage of a customer’s total category spending that goes to a given brand over a specific period of time.
Also similar to loyalty, measuring engagement or wallet share in a survey is difficult. There are several ways to measure it: one methodology uses a formula such as the Wallet Allocation Rule, which uses customers’ rankings of the brands they use in a product category to estimate wallet share; another uses a simple yes/no primary provider question.
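For reference, the published Wallet Allocation Rule converts a brand's rank among the brands a customer uses into an estimated share of wallet. A minimal sketch, assuming the standard formula share = (1 − rank/(brands + 1)) × (2/brands):

```python
def wallet_share(rank, num_brands):
    """Wallet Allocation Rule: estimated share of wallet for a brand
    ranked `rank` among the `num_brands` a customer uses in the category."""
    return (1 - rank / (num_brands + 1)) * (2 / num_brands)

# A customer who uses three brands, ranked 1st, 2nd, and 3rd
shares = [wallet_share(r, 3) for r in (1, 2, 3)]
print(shares)  # estimated shares sum to 1 across all brands used
```

For three brands the rule allocates roughly 50%, 33%, and 17% of wallet to the first-, second-, and third-ranked brands respectively.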
Using these loyalty and engagement measures together, we can now cross-tabulate the array of service attribute ratings by these two measures. This cross-tabulation groups the responses into four segments: 1) Engaged & Loyal, 2) Disengaged yet Loyal, 3) Engaged yet Disloyal, 4) Disengaged & Disloyal. We can now compare responses across these four segments to gain insight into how each segment experiences its relationship with the brand.
These four segments represent: the ideal, opportunity, recovery and attrition.
Ideal – Engaged Promoters: This is the ideal customer segment. These customers rely on the brand for the majority of their in-category purchases and represent lower attrition risk. In short, they are perfectly positioned to provide the financial benefits of customer loyalty. Comparing attribute ratings for customers in this segment to the others will identify areas of strength and, at the same time, attributes which are less important in driving this ideal state, informing future decisions on investment in these attributes.
Opportunity – Disengaged Promoter: This customer segment represents an opportunity. These customers like the brand and are willing to put their reputation at risk for it. However, there is an opportunity for cross-sell to improve share of wallet. Comparing attribute ratings of the opportunity segment to the ideal will identify service attributes with the highest potential for ROI in terms of driving wallet share.
Recovery – Engaged Detractor: This segment represents significant risk. The combination of above average share of wallet and low willingness to put their reputation on the line is flat-out dangerous, as it puts profitable share of wallet at risk. Comparing attribute ratings of customers in the recovery segment to both the ideal and the opportunity segments will identify the service attributes with the highest potential for ROI in terms of improving loyalty.
Attrition – Disengaged Detractor: This segment represents the greatest risk of attrition. With no willingness to put their reputation on the line and little commitment to placing share of wallet with the brand, retention strategies may come too late for these customers. Additionally, they are most likely unprofitable. Comparing the service attribute ratings of customers in this segment to the others will identify elements of the customer experience which drive attrition and may warrant increased investment, as well as elements that do not appear to matter very much in terms of driving runoff and may not warrant investment.
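The four-way segmentation described above reduces to two yes/no answers per customer. A minimal sketch, assuming the "would recommend" and primary provider questions are both asked as yes/no:

```python
def segment(promoter, primary_provider):
    """Classify a customer by loyalty (promoter) and engagement (primary provider)."""
    if promoter and primary_provider:
        return "Ideal - Engaged Promoter"
    if promoter:
        return "Opportunity - Disengaged Promoter"
    if primary_provider:
        return "Recovery - Engaged Detractor"
    return "Attrition - Disengaged Detractor"

# Hypothetical survey responses: (would recommend?, primary provider?)
customers = [(True, True), (True, False), (False, True), (False, False)]
for promoter, primary in customers:
    print(segment(promoter, primary))
```

Attribute ratings can then be averaged within each segment and compared across segments, as the post describes.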
By making comparisons across each of these segments, researchers give managers a basis to make informed decisions about which service attributes have the strongest relationship to loyalty and engagement, thus identifying which behaviors have the highest potential for ROI in terms of driving customer loyalty and engagement. This two-dimensional analysis is one way to turn customer experience observations into insight.