This post considers two types of variation in the customer experience: common and special cause variation, and their implications for customer researchers.
The concepts of common and special cause variation are derived from the process management discipline Six Sigma.
Common cause variation is the normal, random variation within a system; it is the statistical noise. Examples of common cause variation in the customer experience include:
- Poorly defined, poorly designed, inappropriate policies or procedures
- Poor design or maintenance of computer systems
- Inappropriate hiring practices
- Insufficient training
- Measurement error
Special cause variation, on the other hand, is not random and does not conform to the laws of probability. It is the signal within the system. Examples of special cause variation include:
- High demand/ high traffic
- Poor adjustment of equipment
- Just having a bad day
What are the implications of common and special cause variation for customer experience researchers?
Given the differences between common cause and special cause variation, researchers need a tool to distinguish between the two: a means of determining whether observed variation in the customer experience is statistical noise or a signal within the system. Control charts are a statistical tool for making exactly this determination.
Control charts track measurements against upper and lower quality control limits. These limits define statistically significant variation over time (typically at 95% confidence), meaning there is a 95% probability that variation outside the limits reflects an actual change in the customer experience (special cause variation) rather than normal common cause variation. Observed variation within the quality control limits is common cause variation; variation that migrates outside the limits is special cause variation.
To illustrate this concept, consider the following example of mystery shop results:
This chart depicts a set of mystery shop scores which both vary from month to month and generally appear to trend upward.
Customer experience researchers need to provide managers a means of determining whether the month-to-month variation is statistical noise or a meaningful signal within the system. Turning this chart into a control chart, by adding statistically defined upper and lower quality control limits, will show whether the monthly variation is common or special cause.
To define quality control limits, the customer experience researcher needs to determine the count of observations for each month, the monthly standard deviation, and the average count of shops across all months.
The following table adds these three additional pieces of information into our example:
| Count of Mystery Shops | Average Mystery Shop Scores | Standard Deviation of Mystery Shop Scores |
To define the upper and lower quality control limits (UCL and LCL, respectively), apply the following formulas:

UCL = x + 1.96 × (SD / √n)

LCL = x − 1.96 × (SD / √n)

where:

x = Grand mean of the score

n = Mean sample size (number of shops)

SD = Mean standard deviation

These equations yield quality control limits at 95% confidence, which means there is a 95% probability that any variation observed outside these limits is special cause variation rather than normal common cause variation within the system.
Calculating these quality control limits and applying them to the above chart produces the following control chart, with upper and lower quality control limits depicted in red:
This control chart now answers the question of which variation is common cause and which is special cause. The general upward trend appears to be statistically significant, with the most recent month above the upper quality control limit. Additionally, this control chart identifies a period of special cause variation in July. With 95% confidence we know some special cause drove the scores below the lower control limit; perhaps it was employee turnover, a new system rollout, or a weather event that impacted the customer experience.
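As an illustrative sketch, the control limit calculation described above can be scripted in a few lines of Python. The monthly shop counts, scores, and standard deviations below are hypothetical, and 1.96 is the standard z-value for 95% confidence:

```python
import math

# Hypothetical monthly mystery shop data: (count of shops, mean score, std dev)
months = [(42, 86.0, 7.5), (38, 84.5, 8.1), (45, 88.0, 6.9),
          (40, 83.0, 7.8), (44, 89.5, 7.2)]

grand_mean = sum(m[1] for m in months) / len(months)  # x: grand mean of the score
mean_n = sum(m[0] for m in months) / len(months)      # n: mean sample size
mean_sd = sum(m[2] for m in months) / len(months)     # SD: mean standard deviation

margin = 1.96 * mean_sd / math.sqrt(mean_n)           # 95% confidence (z = 1.96)
ucl = grand_mean + margin
lcl = grand_mean - margin

# Any monthly score outside [lcl, ucl] is flagged as special cause variation
for count, score, sd in months:
    label = "special cause" if score > ucl or score < lcl else "common cause"
    print(f"score {score:5.1f}: {label}")
```

With these hypothetical figures the limits land at roughly 83.9 and 88.5, so the 83.0 and 89.5 months would be flagged as special cause while the rest is treated as noise.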
Previously, we discussed the implications of inter-channel consistency for researchers, and introduced a process for management to define a set of employee behaviors which will support the organization’s customer experience goals across multiple channels.
This post considers the implications of intra-channel consistency for customer experience researchers.
As with cross-channel consistency, intra-channel consistency (consistency within individual channels) requires the researcher to identify the causes of variation in the customer experience. The causes of intra-channel variation are more often than not at the local level: individual stores, branches, employees, etc. For example, a bank branch with large variation in customer traffic is more likely to experience variation in the customer experience.
Regardless of the source, consistency equals quality.
In our own research, Kinēsis conducted a mystery shop study of six national institutions to evaluate the customer experience at the branch level. In this research, we observed a similar relationship between consistency and quality. The branches in the top quartile in terms of consistency delivered customer satisfaction scores 15% higher than branches in the bottom quartile. But customer satisfaction is a means to an end, not an end goal in and of itself. In terms of an end business objective, such as loyalty or purchase intent, branches in the top quartile of consistency delivered purchase intent ratings 20% higher than branches in the bottom quartile.
Purchase intent and satisfaction with the experience were both measured on a 5-point scale.
Again, it is incumbent on customer experience researchers to identify the causes of inconsistency. A search for the root cause of variation in customer journeys must consider process-caused variation.
One tool for measuring process-caused variation is a Voice of the Customer (VOC) Table. VOC Tables have a two-fold purpose: first, to identify specific business processes which can cause customer experience variations, and second, to identify which business processes will yield the largest ROI in terms of improving the customer experience.
VOC Tables provide a clear road map for identifying action steps using a vertical and horizontal grid. On the vertical axis, each customer experience attribute within a given channel is listed, and a judgment is made about its relative importance, expressed as a numeric value. On the horizontal axis is an exhaustive list of business processes the customer is likely to encounter, both directly and indirectly, in the customer journey.
This grid design matches each business process on the horizontal axis to each service attribute on the vertical axis. Each cell in the grid contains a value representing the strength of the influence of that business process on that customer experience attribute.
Finally, a value is calculated at the bottom of each column which sums the strength-of-influence values multiplied by the importance of each customer experience attribute. This yields the cumulative strength of influence of each business process on the customer experience, weighted by relative importance.
Consider the following example in a retail mortgage lending environment.
In this example, the relative importance of each customer experience attribute was determined by correlating these attributes to a “would recommend” question, which served as a loyalty proxy. This yields an estimate of importance based on the attribute’s strength of relationship to customer loyalty, and populates the far-left column. Specific business processes for the mortgage process are listed across the top of the table. Within each cell, an informed judgment has been made regarding the relative strength of the business process’s influence on the customer experience attribute, assigned a value of 1 to 3. Each strength value is multiplied by the importance measure of its customer experience attribute and summed into a weighted strength of influence for each business process.
In this example, the business processes which will yield the highest ROI in terms of driving the customer experience are quote of loan terms (weighted strength of influence 23.9), clearance of exemptions (22.0), explanation of loan terms (20.2), loan application (18.9) and document collection (16.3).
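The weighted strength-of-influence calculation can be sketched in a few lines of Python. The attribute names, importance weights, and 1-to-3 influence scores below are hypothetical placeholders, not the figures from the mortgage example above:

```python
# Hypothetical VOC table fragment: attribute importance (e.g. correlation with
# a "would recommend" question) and the judged strength of influence (1-3)
# of each business process on each attribute.
importance = {"timeliness": 0.8, "clear explanations": 0.9, "accuracy": 0.7}

influence = {  # process -> {attribute: strength of influence, 1-3}
    "quote of loan terms": {"timeliness": 2, "clear explanations": 3, "accuracy": 3},
    "loan application":    {"timeliness": 3, "clear explanations": 1, "accuracy": 2},
    "document collection": {"timeliness": 3, "clear explanations": 1, "accuracy": 1},
}

# Weighted strength of influence: sum over attributes of importance x strength
weighted = {
    process: sum(importance[attr] * strength for attr, strength in cells.items())
    for process, cells in influence.items()
}

# Rank processes by their weighted influence on the customer experience
for process, score in sorted(weighted.items(), key=lambda kv: -kv[1]):
    print(f"{process}: {score:.1f}")
```

Ranking the column totals this way surfaces the processes likely to yield the highest ROI from improvement efforts.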
This post considers the implications of cross-channel consistency for customer experience researchers. The first research implication of inter-channel consistency is to understand that researchers must investigate service delivery consistency at its cause.
The range of choices available to customers here in the 21st century is incredible. Gone are the Henry Ford days when you, as he put it, “could have any color you want as long as it’s black.” Modern customers have an array of choices available to them, not only in brands but in delivery channels. Modern brands must serve customers in the channel of the customer’s choice, be it online, mobile, contact center, or in person. As customer choice expands, cross-channel consistency has become more and more important.
The problem for customer experience researchers is that this channel expansion requires a broad toolbox of research techniques, as different channels require unique systems and processes appropriate to the channel. Systems and processes for online channels differ from those for in-person channels. These differences often lead to the siloing of channels, which may make individual channels more efficient, but runs the risk of inconsistencies in the customer experience from one channel to another.
Customers, however, don’t look at a brand as a collection of siloed channels. Customers do not care about organizational charts. They expect a consistent customer experience regardless of channel. Customers expect cross-channel consistency.
If senior management has defined the customer experience organization-wide, the researcher’s role in coordinating research tools is much easier. If management has not defined the customer experience organization-wide, the researcher’s role is nearly impossible.
The first step in defining the customer experience organization-wide is writing a customer experience mission statement that clearly communicates how customers should experience the brand, and how management wants customers to feel as a result of the experience. Next, the customer experience should be defined in terms of the broad dimensions and specific attributes which constitute the desired customer experience and emotional reaction to the brand.
For illustration, let’s consider the following example:
A bank may define their customer experience with four broad dimensions, which can be described as:
- Relationship Building
- Sales Process
- Product Knowledge
- Customer Knowledge
Next, the customer experience leadership of this bank must define each of these broad dimensions in terms of specific attributes which combine to make up the dimensions. For example, each of the above four dimensions may be defined by the following attributes:
| Dimension | Attributes |
| --- | --- |
| Relationship Building | Establish trust; Commitment to customer needs; Perceived as trusted advisor |
| Sales Process | Referral to appropriate partner |
| Product Knowledge | Understanding of a range of products; Understand features and benefits; Explain benefits in ways that are meaningful to customers |
| Customer Knowledge | Needs analysis |
Once each of the above dimensions has been defined in terms of specific attributes, the next step in translating the customer experience definition to action is to define a set of empirical behaviors which support each attribute.
For example, establishing trust is an attribute of relationship building.
Relationship Building –> Establish Trust
Under this example, a set of behaviors is defined which are designed to establish trust. For example, these behaviors may be:
- Maintain eye contact
- Speak clearly
- Maintain smile
- Thank for business
- Ask “What else may we assist you with today?”
- Encourage future business
Now, each of these six behaviors is mapped across each channel. So, for example, this bank may map these behaviors across channels as follows:
Behaviors Which Support Establishing Trust:
| New Accounts | Teller | Contact Center |
| --- | --- | --- |
| Maintain eye contact | Maintain eye contact | — |
| Speak clearly | Speak clearly | Speak clearly |
| Maintain smile | Maintain smile | Sound as if smiling through the phone |
| Thank for business | Thank for business | Thank for business |
| Ask “What else may we assist you with today?” | Ask “What else may we assist you with today?” | Ask “What else may we assist you with today?” |
| Encourage future business | Encourage future business | Encourage future business |
Repeating this process of mapping behaviors to each of the attributes will produce a complete list of employee behaviors appropriate to each channel in support of management’s broader customer experience objectives.
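This behavior-to-channel mapping lends itself to a simple data structure from which per-channel checklists can be generated. The sketch below uses a subset of the “establish trust” behaviors from the example table; the function name `checklist` is my own, not an established tool:

```python
# Map each behavior to its channel-appropriate form; None marks a behavior
# that does not apply in that channel (e.g. eye contact in a contact center).
behavior_map = {
    "Maintain eye contact": {"New Accounts": "Maintain eye contact",
                             "Teller": "Maintain eye contact",
                             "Contact Center": None},
    "Maintain smile":       {"New Accounts": "Maintain smile",
                             "Teller": "Maintain smile",
                             "Contact Center": "Sound as if smiling through the phone"},
    "Thank for business":   {"New Accounts": "Thank for business",
                             "Teller": "Thank for business",
                             "Contact Center": "Thank for business"},
}

def checklist(channel):
    """Return the channel-appropriate behavior checklist, skipping non-applicable items."""
    return [forms[channel] for forms in behavior_map.values() if forms[channel] is not None]

print(checklist("Contact Center"))
```

Repeating the mapping for every attribute, then generating one checklist per channel, produces the complete channel-appropriate behavior lists described above.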
Inconsistent treatment based on certain demographic characteristics is illegal. The Civil Rights Act of 1964 prohibits discrimination in almost all privately owned service industries based on race, color, religion, gender, or national origin. Other industries, such as retail banking, have additional regulatory requirements.
Beyond this legal risk, managers must be aware of the significant risk to the reputation of the brand posed by discriminatory practices.
Managers may seek comfort in the knowledge that their company’s policies and procedures do not permit refusing service to anyone. However, such overt discrimination is just a small part of the risk associated with discrimination. Beyond overt discrimination, which is extremely rare, there are two other categories of discriminatory practices: disparate impact and disparate treatment.
Disparate impact is the result of policies or business practices which have an unequal impact on different groups. A restaurant prepayment policy that in practice falls on one demographic group and not another is an example of disparate impact.
Disparate treatment is a difference in treatment that originates at the customer-employee interface. Disparate treatment does not necessarily need to be a conscious act; it can be an unconscious pattern or practice of different treatment that the employee is not even aware of. Using a customer’s name with one group but not another, or offering promotional material to customers of one group and not another, are examples of disparate treatment.
Now, observing differences in treatment is not necessarily proof of discrimination. Human behavior, after all, is variable; there is a certain amount of normal variation in all service encounters. The trick is to determine whether observed disparate treatment represents a pattern or practice of discrimination. Fortunately, statistics has the answer: statistical tests of significance determine both whether observed differences in treatment reflect actual discriminatory practices and the likelihood that a member of a protected class will be treated differently than a member of another group. It should be noted, however, that regulatory agencies set the bar much higher. Many do not rely on statistical testing at all; in their view, any single case of disparate treatment is evidence of discrimination.
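One common test of significance for this purpose is the two-proportion z-test, which asks whether the gap between two groups’ observed treatment rates is larger than normal variation would explain. The shop counts below are hypothetical:

```python
import math

# Hypothetical mystery shop counts: was the customer greeted by name?
group_a = (78, 100)   # (times behavior observed, total shops) for one group
group_b = (90, 100)   # same behavior for the comparison group

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z-statistic for the difference between two observed proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

z = two_proportion_z(*group_a, *group_b)
# |z| > 1.96 -> significant at 95% confidence: the difference is unlikely
# to be normal variation in service delivery alone
print(f"z = {z:.2f}, significant at 95%: {abs(z) > 1.96}")
```

With these figures the difference is significant at 95% confidence, so the disparity would warrant investigation; a smaller gap or smaller samples might not clear the threshold.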
In a future post we will discuss the implications for customer experience researchers in testing for disparate treatment.
Inconsistent customer experiences are a significant threat to customer loyalty. In a previous post, we observed the causal relationship between consistency in the customer experience and feelings of trust and loyalty.
Consistency drives satisfaction. It is extremely common to see a correlation between intra-channel consistency and performance. Consider the following scatter plot from Kinesis’ research, which plots bank branch customer satisfaction by the variation in branch customer satisfaction:
As this plot demonstrates, consistency correlates with quality. Branches with higher customer satisfaction ratings are also the most consistent. In our customer experience research practice we see this time and time again.
Additionally, this plot also demonstrates that top-line averages of customer satisfaction can be misleading. The bank in this plot had an average customer satisfaction rating of 93%. However, many branches fall well below this top-line average, resulting in an incomplete picture of the customer experience. Customers do not experience top-line averages; they experience the customer experience one interaction at a time at the local business unit level.
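The point about misleading top-line averages is easy to demonstrate numerically. The branch-level scores below are invented for illustration, chosen so the average matches the 93% figure above:

```python
from statistics import mean, stdev

# Hypothetical branch-level satisfaction scores (% satisfied); the top-line
# average hides how far individual branches fall below it.
branches = [98, 97, 96, 95, 94, 93, 92, 90, 90, 85]

top_line = mean(branches)
spread = stdev(branches)
below = [b for b in branches if b < top_line]

print(f"top-line average: {top_line:.0f}%")
print(f"spread (std dev): {spread:.1f} points")
print(f"branches below the average: {len(below)} of {len(branches)}")
```

The brand-level number looks strong, yet several individual branches sit well below it, which is exactly where the improvement opportunities live.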
What are the implications for managers of the customer experience?
The first implication for managers is the above observation that top-line averages can mislead. Top-line averages hide individual business units with both low and inconsistent customer satisfaction. Top-line averages come between management and customers, distancing managers from how customers actually experience the brand.
Secondly, variation must be managed at the cause. Intra-channel variation is almost always at the local business unit level. For example, a store with a high degree of variation in customer traffic will experience a high degree of variation in the customer experience if management does not mitigate the effects of the variation in traffic.
How to manage for consistency:
- Manage inconsistency at the cause
- Write a clear mission statement
- Use appropriate analytics
- Don’t silo analytics by channel
- Meet regularly with employees to share problems and potential solutions
- Focus on customer journey
Intra-channel consistency needs to be managed at the local level – individual stores and agents. Tools need to be available deep into the organization to allow managers at the lowest level of each channel to deliver a consistent experience.
In the next post we will explore demographic consistency, treating all customers the same regardless of their demographic profile.
The modern customer experience environment comprises an ever-expanding variety of delivery channels, with no sign that the pace of channel expansion is slowing. As channel expansion continues, customer empowerment increases with customer choice. Customer relationships with brands are not derived from individual, discrete interactions. Rather, customer relationships are defined by clusters of interactions across the entire life cycle of the relationship and across all channels. Inter-channel consistency defines the customer relationship.
McKinsey & Company’s 2014 report, The Three Cs of Customer Satisfaction: Consistency, Consistency, Consistency, demonstrated, in a retail banking context, a link between cross-channel consistency and bank performance.
In customers’ minds, all channels belong to the same brand. Customers do not consider management silos or organizational charts – to them all channels are the same. Customers expect consistent experiences regardless of channel. In their minds, an agent at a call center should have the same information and training as in-person agents.
What are the implications for managers of the customer experience?
The primary management issue in aligning disparate channels is to manage inconsistency at its cause. The most common cause of inconsistency across channels is siloed management, where each manager’s jurisdiction is limited to his or her own channel. Inter-channel consistency is increasingly important as advances in technology expand customer choice, and brands need to serve customers in the channel of their choice. Therefore, the cause of inter-channel inconsistency must be managed higher up in the organization, at the lowest level where lines of authority across channels converge, or through some kind of cross-functional authority.
The implications for management are not limited to senior management and cross-functional teams. Customer experience managers should be aware that top-line averages can mislead. Improvement opportunities are rarely found in top-line averages, but at the local level. Again, the key is to manage inconsistency at the cause. Inconsistency at the local level almost always has a local cause; as a result, variability in performance must be managed at the local level as well.
Business Case and Implications for Consistency – Part 4 – Consistency and the Outsized Influence of Poor Experiences
This post continues to explore the business case for consistency by considering the influence of poor experiences.
To start, let’s consider the following case study:
Assume a brand’s typical customer has 5 service interactions per year. Also assume the brand has a relatively strong 95% satisfaction rate. Given these assumptions, the typical customer has roughly a one-in-four probability each year of at least one negative experience (1 − 0.95⁵ ≈ 23%), and over several years, in theory, nearly every customer will have had a negative experience.
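The annual figure follows from compounding the per-interaction satisfaction rate, treating the interactions as independent; a quick sketch:

```python
# Probability of at least one negative experience, assuming each of the
# 5 yearly interactions independently satisfies with probability 0.95
satisfaction_rate = 0.95
interactions_per_year = 5

p_negative_year = 1 - satisfaction_rate ** interactions_per_year
print(f"chance of at least one negative experience each year: {p_negative_year:.0%}")

# Over four years (20 interactions), most customers hit at least one failure
p_negative_4yr = 1 - satisfaction_rate ** (interactions_per_year * 4)
print(f"over four years: {p_negative_4yr:.0%}")
```

Even under a strong 95% satisfaction rate, the chance of a clean four-year run is barely one in three, which is why consistency matters so much across the cluster of interactions that makes up a relationship.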
As this case study illustrates, customer relationships with brands are not defined by individual, discrete customer experiences but by clusters of interactions across the lifecycle of the customer relationship. The influence of individual experiences is far less important than the cumulative effect of these clusters of customer experiences.
Consistency reduces the likelihood of negative experiences contaminating the clusters of experiences which make up the whole of the customer relationship. Negative experiences, regardless of how infrequent, have a particularly caustic effect on the customer relationship. A variety of research, including McKinsey’s The Three Cs of Customer Satisfaction: Consistency, Consistency, Consistency, has concluded that negative experiences have three to four times the influence of positive experiences – three to four times the influence on the customer’s emotional reaction to the brand – three to four times the influence on loyalty, purchase intent and social sharing within their network.