Measure the Customer Experience in an Integrated Cross-Channel Environment
Success in retail banking requires meeting customers in the right channel for their current waypoint in the customer journey.
A waypoint is a point of reference when navigating a journey.
Not all waypoints are equal. Customers prefer different channels based on the waypoint in their customer journey. As a result, different channels have assumed different roles in the customer journey. The challenge for customer experience managers is to provide an integrated customer experience across all waypoints.
Kinesis’ research has identified specific roles for each integrated channel in the customer journey:
| Channel | Preferred Role |
| --- | --- |
| Mobile | Transaction tool |
| Web | Primary role: research tool; secondary role: sales & transfers |
| Contact Center | Help center; source of advice |
| Branch | Sales center; source of advice |
Customers see the mobile channel as a transaction tool; the website's role is broader, spanning research, transactions and sales; contact centers are primarily a help center; and the branch is primarily a sales and advice channel.
This post offers a framework for measuring individual channels in a way that provides both channel-specific direction in managing the experience and a way to benchmark the channels against each other using consistent measurements.
Two CX Risks: Exposure and Moments of Truth
In designing a customer experience measurement program, it is instructive to think of the omni-channel experience in terms of two risks: exposure and moments of truth.
Exposure Risk
Exposure risk is a function of the frequency of customer interactions within each channel. A poor experience in a high-frequency channel is replicated across more customers, exposing more of them to that poor experience. Mobile apps are the most frequently used channel: according to our research, customers use mobile banking apps 24 times more frequently than they visit a branch, giving mobile banking the most exposure risk. Websites are used by banking customers 16 times more frequently than a branch, followed by contact centers, used 2.3 times more frequently than branches.
Moments of Truth
Moments of truth are critical experiences with more individual importance. Poor experiences in a moment of truth interaction lead to negative customer emotions, with similarly negative impacts on customer profitability and word of mouth.

Routine transactions, like transfers or deposits, carry low moment of truth risk; problem resolution and account opening, by contrast, are significant moments of truth.
Exposure & Moment of Truth Risk by Channel
Different channels represent exposure and moment of truth risk in different ways.

The mobile channel's role is primarily that of a transaction tool. According to our research, it is the preferred channel for both transfers (58%) and deposits (53%). It therefore has the highest exposure risk and the lowest moment of truth risk.
The website is a mixed channel spanning research, transactions and account opening. A plurality of customers (40%) consider the website their preferred channel for getting information, followed by transfers (33%) and opening accounts (31%). As a result, the web channel carries a mix of exposure and moment of truth risk.
The contact center is primarily viewed as a channel for problem resolution (51%), followed by an advice and information source (27% and 23%, respectively). It represents low exposure risk and elevated moment of truth risk.
Finally, the branch is primarily a source of advice and a place to open accounts (53% and 51%, respectively). With infrequent use and high-impact customer experiences, the branch has very low exposure risk and significant moment of truth risk.
Understanding Exposure and Moments of Truth Risk to Inform CX Measurement
This concept of risk, along the dimensions of exposure and moments of truth, provides an excellent framework for informing customer experience measurement.
Digital channels with high exposure risk should be tested thoroughly with usability testing, focus groups, ethnography and other qualitative research to ensure features meet customer needs and are programmed correctly. Once programmed and tested, they need to be monitored with ongoing audits.
Channels with higher moment of truth risk are best monitored with post-transaction surveys, mystery shopping and the occasional focus group.
| Exposure Risk | Moments of Truth |
| --- | --- |
| Design focus groups, usability tests, ongoing audits | Post-transaction surveys, mystery shopping, focus groups |
Integrated CX Measurement Design
When measuring the customer experience across multiple channels in an integrated manner, we recommend gathering both measures that are consistent across all channels and measures specific to each channel. Each channel has its own specific needs; however, consistent measures across all channels provide context and a point of comparison.
Consistent Measures
Cross-channel consistency is key to the customer experience. Inconsistent experiences confuse and frustrate customers, and risk eroding brand value.
The consistent cross-channel measures Kinesis prefers to use are measures of the brand personality and efficacy of the customer experience.
| Brand Personality | Efficacy of the Experience |
| --- | --- |
| Brand adjectives, brand statements | Purchase intent, likelihood of referral, customer advocacy |
Brand Personality: To measure brand personality, Kinesis asks clients to list five adjectives that describe their brand personality. We then simply ask customers whether each adjective describes the customer experience. We also ask clients to give us five statements that describe their desired brand, and measure the experience with an agreement scale. For example, a client may want their brand to be described by the statement "We are committed to the community." We would then ask respondents the extent to which they agree with the statement "We are committed to the community." These measures of brand adjectives and brand statements give managers of the customer experience a clear benchmark from which to evaluate how well each channel reflects the desired brand personality.
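As a rough illustration of scoring the adjective measure, the sketch below computes endorsement rates by channel. The adjectives, channels, and responses are hypothetical placeholders, not Kinesis survey results.

```python
# Hypothetical yes/no responses: did this adjective describe the experience?
# Keyed by channel, then adjective; values are lists of 1 (yes) / 0 (no).
responses = {
    "Mobile": {"innovative": [1, 1, 0, 1], "trustworthy": [1, 0, 1, 1]},
    "Branch": {"innovative": [0, 1, 0, 0], "trustworthy": [1, 1, 1, 1]},
}

# Endorsement rate per channel and adjective: the share of customers who
# felt the experience reflected the desired brand personality trait.
for channel, adjectives in responses.items():
    for adjective, answers in adjectives.items():
        rate = sum(answers) / len(answers)
        print(f"{channel} / {adjective}: {rate:.0%} endorsement")
```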
Efficacy of the Experience: Ultimately, the goal of the customer experience is to produce the intended result – results like loyalty, increased wallet share, or lower transaction costs. Kinesis has had success using three measures to evaluate the efficacy of the customer experience:
- Purchase Intent: Purchase intent is an excellent measure of efficacy of the experience. To measure purchase intent we ask respondents how the experience influenced their intention to either open an account or maintain an existing relationship with the financial institution.
- Likelihood of Referral: The use of measures of likelihood of referral, like NPS, as a proxy for customer loyalty is almost universally accepted, and as a result, is often an excellent measure of efficacy of the experience.
- Customer Advocacy: Beyond likelihood of referral, agreement with the statement, My bank cares about me, not just the bottom line, is an excellent predictor of customer loyalty.
Channel Specific Attributes
In addition to consistent cross-channel measurements, it is important to focus on channel-specific customer experience attributes. While consistent measures across channels provide a benchmark against brand objectives, measuring specific service attributes provides actionable information about how to improve the customer experience in each channel.
In designing channel-specific research, ask yourself what specific service attributes or behaviors you expect from each channel. The answers will depend on the channel and your brand objectives. In general, they typically roll up to the following broad dimensions of the customer experience:
Specific Channel Dimensions
| Digital Channels | Personal Channels |
| --- | --- |
| Appeal, identity, navigation, content/presentation, value, trust | Reliability, responsiveness, empathy, competence, tangibles |
For digital channels, the best specific attributes to measure are those associated with appeal, identity, navigation, content/presentation, value, and trust. For personal channels, such as contact centers and branches, we find the best attributes are associated with the dimensions of reliability, responsiveness, empathy, competence, and tangibles.
Not all waypoints in the customer journey are equal. Customer experience researchers need to consider the role of each channel in the customer journey and design measurement tools with both channel-specific observations and consistent measures across all channels.
Integrated Digital First CX Model: Implications for CX Managers
In previous posts in this five-part series on building an integrated digital-first service model, we discussed:
- Matching different waypoints of the customer journey to the channels best suited for the specific waypoint;
- Customer preferences for a financial service provider; and
- What customers want from digital channels.
A waypoint is a point of reference when navigating a journey. Customer journeys take place not only across multiple channels, but also across multiple transactions, or waypoints.
An integrated digital channel strategy must be founded on understanding how specific channels match up to specific waypoints in the customer journey. The first installment of this series discusses this issue. The understanding that different transactions match different channels is the whole point of an integrated strategy.

Currently, not every digital channel is a match for every customer need. Digital channels with a higher frequency of visits are increasingly the day-to-day face of the institution. The exposure risk of these channels is high, and managers of the customer experience must make sure digital channels are well programmed and tested to manage it. Currently, however, customers prefer digital channels for low moment of truth interactions such as transfers, deposits, and researching information. In terms of satisfaction, these digital channels outperform the non-digital ones; however, they play on a very different field. Customers interact with branches and contact centers much less frequently, and assign lower satisfaction ratings to these channels. But when they do use these channels, the stakes are much higher: customers match non-digital channels to high moment-of-truth interactions such as seeking advice, resolving problems, and opening an account.
Advances in artificial intelligence will no doubt close some of the moment of truth gaps between digital and non-digital channels, but for now there is still a role for branches and contact centers. Closing these gaps will require attention to both personalization and trust. Again, people want banks that care about their needs and have the ability to meet those needs and solve their problems.
What do customers want from a bank?
Overall, customers value efficiency and personalized service from their primary financial institution. As we’ve seen, the most appealing service attributes to customers are:
• Online and mobile services
• Quick and efficient service
• Fast resolution of any issues
• Ability to manage my accounts in ways that suit me
• Polite and knowledgeable staff
It is important to note that this list includes both digital and personal channels. Customers value an integrated approach.
ROI Potential of Digital Banking Attributes
Investments in timely information, financial value, and cyber security assistance have the most potential for return on investment. The digital banking attributes with the highest potential for ROI in terms of appeal to customers and increasing their trust are:
• Alerts about upcoming direct debits
• Alerts about upcoming overdrafts
• Offers and perks from places shopped often
• Cyber security assistance
Further, investments in personalized information have the highest potential for fostering customer trust. Customers do not find the following attributes particularly appealing relative to other attributes; however, they do offer high ROI potential in terms of increasing customer trust:
• Analytics/dashboards
• Budget information
• Savings tips
• Balance updates
Ultimately, the success or failure of any integrated digital-first strategy will require banks to achieve something that has so far eluded them: scaling personalization.
Video Banking
Video banking seems an obvious solution for scaling personalization. However, despite its potential in the age of Zoom, adoption of this channel has been delayed. According to our research, only 4% of bank customers have used video banking. Of those who have, all trust their primary financial institution and feel it looks after their financial wellbeing. This suggests video banking could be well received, and could deepen the overall relationship with customers.
Customer Waypoints & Channel Preferences in an Integrated Digital CX Delivery Model
What started decades ago as a migration away from the branch channel has accelerated during the Covid-19 pandemic – aided by technological advances that were not available just a few years ago. This confluence of the pandemic and technical advances is culminating in an age where it is possible to deliver a seamless integrated digital first retail banking delivery model. Such an integrated delivery model is based on the understanding that customers have different needs at different moments in their customer journey. This delivery model matches channels strategically to these different needs at the correct moment for the customer.
Customer Journey Waypoints
A waypoint is a point of reference when navigating a journey. Customer journeys take place not only across multiple channels, but also across multiple transactions, or waypoints. To investigate how customers navigate digital and personal channels, Kinēsis researched customer channel preferences for six customer journey waypoints: opening an account, problem resolution, seeking advice, getting information, making a deposit, and transferring funds.
Two CX Risks: Exposure & Moments of Truth
Business is often a process of balancing risks. The customer experience is no different. Managers of an integrated delivery model should be aware of the two primary risks they face: exposure and moments of truth. Exposure risk is the sheer frequency of customer encounters in a channel; poor experiences in a high-exposure channel spread across more customers. Moments of truth are critical experiences with greater individual importance. Poor experiences in a moment of truth interaction lead to negative customer emotions, with similarly negative impacts on customer profitability and word of mouth.

Waypoints and Channel Preferences
The foundation of an integrated digital-first CX model is matching the best-suited channels to the needs of both the customer and the institution. Customer channel choice is not uniform. Rather, customers select channels they deem appropriate for the waypoint of the customer journey they find themselves in. For customers conducting a transfer or deposit, mobile apps are the most popular channel (preferred by 58% and 53% of customers, respectively). Customers seeking information have a broader range of preferred channels, but a plurality (40%) prefer to seek information via the website. The contact center's preferred role is problem resolution (51%), while the branch is preferred both for seeking advice and for opening an account.
The following table illustrates these different channel preferences for different waypoints in the journey, as well as overlaying channel use, satisfaction and the moment of truth potential for each waypoint:

The above table illustrates the current state of the integrated digital first business model. The digital channels, with the most exposure risk, are the primary customer choice for waypoints which represent low moment of truth risk.
• Automated transactions such as transfers and deposits are preferred with an app.
• The website serves as an information center and, to a lesser extent, a transactional center.
• The contact center’s primary role is problem resolution, and as a result carries significant risk in terms of moments of truth.
• The branch is where customers come to seek advice as well as initiate or deepen a banking relationship by opening an account.
Again, managers of the customer experience should be cognizant of both their exposure risk and their moment of truth risk. With an average of nearly one visit every other day, poorly executed mobile experiences represent significant exposure risk, yet the nature of these transactions carries low moment of truth risk. Fortunately, 88% of customers are satisfied with their financial institution's app, with nearly a supermajority describing themselves as very satisfied. The branch and contact center have the opposite risk profile. With an average of just 7 and 17 visits annually, respectively, they do not represent a significant exposure risk. However, both represent significant risk with respect to encountering moments of truth. While digital channels are the daily face of the institution, when faced with a moment of truth, customers appear to prefer to see a real face or hear a comforting voice. Customers interact with branches and contact centers much less frequently, but when they do, it is important. In this light, the relative dissatisfaction with these channels (average satisfaction 4.2 and 4.0, respectively) compared with apps and websites (average satisfaction 4.5 and 4.4, respectively) is cause for concern. The computers appear to be outperforming the people, but they perform on an easier playing field.
The pandemic-related disruptions of the current environment have pushed most customers into accelerating digital adoption. However, an element of trust is still missing from digital delivery: most customers shy away from digital channels as their need advances up the moment of truth scale. Trust will be key to deepening digital relationships.
Conclusion
Momentum toward digital banking has been building for decades as emergent technologies, aided by the pandemic, increase the utility and use of digital channels. This confluence of the pandemic and technical advances is culminating in an age where it is possible to deliver a seamless digital first integrated retail banking business model.
Such an integrated delivery model is based on the understanding that customers have different needs at different moments of their customer journey. This delivery model matches channels strategically to these different needs at the correct moment for the customer.
Neither size nor a focus on technology provides an advantage in terms of the overall customer experience. The evidence strongly suggests that being closer to the customer, and matching different waypoints of the customer journey to the channels best suited to each waypoint, is the best CX model:
• Automated transactions are preferred with an app.
• The website is best positioned as an information center and, to a lesser extent, a transactional center.
• Problem resolution in customers’ minds is the contact center’s primary role.
• Customers visit a branch to seek advice or open an account.
Future installments of this five-part series will build on this foundation.
A New Normal: Implications for Bank Customer Experience Measurement Post Pandemic – Planned Interactions
Part 2: Research Tools to Monitor Planned Interactions through the Customer Lifecycle
As we explored in an earlier post, Three Types of Customer Experiences CX Managers Must Understand, there are three types of customer interactions: Planned, Stabilizing, and Critical.
Planned interactions are intended to increase customer profitability through the customer lifecycle by engaging customers with relevant planned interactions and content in an integrated omni-channel environment. Planned interactions will continue to grow in importance as the financial service industry shifts to an integrated digital first model.
These planned interactions are frequently triggered by changes in account usage, financial situation, family profile, etc. CRM analytics combined with Big Data are becoming quite effective at recognizing such opportunities and prompting action toward planned interactions. Customer experience managers should have a process to record and analyze the quality of execution of planned interactions with the objective of evaluating their effectiveness – regardless of the channel.
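To make the idea of triggered planned interactions concrete, here is a minimal sketch of the kind of trigger rule CRM analytics might surface. Every field name, threshold, and campaign label is an illustrative assumption, not a description of any real system.

```python
# Hypothetical trigger rules for planned interactions; the fields, thresholds,
# and campaign names below are all illustrative assumptions.
def plan_interaction(customer):
    """Return a planned-interaction campaign if any trigger fires, else None."""
    if customer["direct_deposit_stopped"]:
        return "retention_outreach"        # change in account usage: attrition signal
    if customer["balance_growth_pct"] > 25:
        return "savings_product_offer"     # change in financial situation: growth phase
    if customer["added_joint_holder"]:
        return "family_profile_review"     # change in family profile: engagement signal
    return None

print(plan_interaction({"direct_deposit_stopped": False,
                        "balance_growth_pct": 40,
                        "added_joint_holder": False}))
```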
The key to an effective strategy for planned interactions is relevance. Triggered requests for increased engagement must be made in the context of the customer’s needs and with their permission; otherwise, the requests will come off as clumsy and annoying, and give the impression the bank is not really interested in the customer’s individual needs. By aligning information about execution quality (cause) and customer impressions (effect), customer experience managers can build a more effective and relevant approach to planned interactions.
Research Plan for Planned Interactions
The first step in designing a research plan to test the efficacy of these planned interactions is to define the campaign. Ask yourself: what customer interactions are planned through these layers of integrated channels? Mapping the process will define your research objectives, allowing an informed judgment of what to measure and how to measure it.
For example, assume that after acquisition and onboarding, a bank has a campaign of planned interactions triggered by past engagement. These planned interactions are segmented into the following phases of the customer lifecycle: engagement, growth, and retention.

Engagement Phase
Often it is instructive to think of customer experience research in terms of the bank-customer interface, employing different research tools to study the customer experience from both sides of this interface.
In our example above, management may measure the effectiveness of planned experiences in the engagement phase with the following research tools:
Customer Side:

- Post-Event Surveys: These post-experience surveys are event-driven; a transaction or service interaction determines whether the customer is selected for a survey. They can be performed across all channels: digital, contact center and in-person. As the name implies, the purpose of this type of survey is to measure a specific customer experience.
- Overall Satisfaction Surveys: These surveys measure customer satisfaction among the general population of customers, regardless of whether they recently conducted a transaction. They give managers valuable insight into overall satisfaction, engagement, image and positioning across the entire customer base, not just active customers.

Brand Side:

- Employee Surveys: Ultimately, employees are at the center of the integrated customer experience model. Employee surveys often measure employee satisfaction and engagement; however, there is far more value to be gleaned from employees. We use them to understand what is going on at the customer-employee interface, leveraging employees as a valuable and inexpensive source of customer experience information. They not only provide intelligence into the customer experience, but also evaluate the level of support within the organization and identify perceptual gaps between management and frontline personnel.
- Digital Delivery Channel Shopping: Be it for a website or a mobile app, digital mystery shopping allows managers of these channels to test ease of use, navigation and the overall customer experience of digital channels.
- Transactional Mystery Shopping: Mystery shopping is about alignment; it is an excellent tool to align the customer experience to the brand. Best-in-class mystery shopping answers the question: is our customer experience consistent with our brand objectives? Historically, mystery shopping has been conducted in the in-person channel; however, we are seeing increasing mystery shopping of contact center agents.
Growth Phase
In the growth phase, we measure the effectiveness of planned experiences on both sides of the customer interface with the following research tools:
Customer Side:

- Awareness Surveys: Awareness of the brand and its products and services is central to planned service interactions. Managers need to know how awareness and attitudes change as a result of these planned experiences.
- Wallet Share Surveys: These surveys evaluate customer engagement with and loyalty to the institution. Specifically, they determine whether customers consider the institution their primary provider of financial services, and they identify potential roadblocks to wallet share growth.

Brand Side:

- Cross-Sell Mystery Shopping: In these unique mystery shops, shoppers are seeded into the lead/referral process. Sales behaviors and their effectiveness are then evaluated in an outbound sales interaction. These shops work very well for planned sales interactions within the contact center environment.
Retention Phase
Finally, planned experiences within the retention phase of the customer lifecycle may be monitored with the following tools:
Customer Side:

- Critical Incident Technique (CIT): CIT is a qualitative research methodology designed to uncover details surrounding a service encounter that a customer found particularly satisfying or dissatisfying. The technique identifies common critical incidents and their impact on the customer experience and customer engagement, giving managers an informed perspective from which to prepare employees to recognize moments of truth and respond in ways that lead to positive outcomes.
- Lost Customer Surveys: Closed-account surveys identify sources of run-off or churn to provide insight into improving customer retention.
- Comment Listening: Comment tools are not new, but with modern Internet-based technology they can serve as a valuable feedback tool to identify at-risk customers and mitigate the causes of their dissatisfaction.

Brand Side:

- Employee Surveys: Employees observe the relationship with the customer firsthand. They are a valuable source of customer experience information, and they can provide context on the kinds of bad experiences customers frequently encounter.
- Life Cycle Mystery Shopping: If an integrated channel approach is the objective, the customer experience should be measured in an integrated manner. In lifecycle shops, shoppers interact with the bank over a period of time, across multiple touch points (digital, contact center and in-person). This approach provides broad and deep observations about sales and service alignment to the brand, and about performance throughout the customer lifecycle across all channels.
Call to Action – Make the Most of the Research
For customer experience surveys, we recommend testing the effectiveness of planned interactions by benchmarking three loyalty attitudes (a scoring sketch follows the list):
- Would Recommend: The likelihood of the customer recommending the bank to a friend, relative or colleague.
- Customer Advocacy: The extent to which the customer agrees with the statement, "My bank cares about me, not just the bottom line."
- Primary Provider: Does the customer consider the institution their primary provider for financial services?
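Here is a rough scoring sketch for these three benchmarks. The field names and scales are assumptions: a 0-10 recommend scale (the NPS convention), a 5-point agreement scale for advocacy, and a yes/no primary provider flag.

```python
# Hypothetical survey records; field names and scales are assumptions.
surveys = [
    {"recommend": 9,  "advocacy": 5, "primary": True},
    {"recommend": 6,  "advocacy": 3, "primary": False},
    {"recommend": 10, "advocacy": 4, "primary": True},
]

n = len(surveys)
promoters  = sum(s["recommend"] >= 9 for s in surveys)
detractors = sum(s["recommend"] <= 6 for s in surveys)
nps = (promoters - detractors) / n * 100          # Net Promoter Score

advocacy_rate = sum(s["advocacy"] >= 4 for s in surveys) / n   # top-two-box agreement
primary_rate  = sum(s["primary"] for s in surveys) / n

print(f"NPS {nps:+.0f} | advocacy {advocacy_rate:.0%} | primary provider {primary_rate:.0%}")
```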
For mystery shopping, we find that linking observations to a dependent variable, such as purchase intent, identifies which sales and service behaviors drive that variable, informing decisions on the training and incentives that reinforce those sales activities.
As the integrated digital first business model accelerates, planned interactions will continue to grow in importance, and managers of the customer experience should build customer experience monitoring tools to evaluate the efficacy of these planned experiences in terms of driving desired customer attitudes and behaviors.

Critical Incident Technique: A Tool to Identify and Prepare for Your Moments of Truth
As we explored in an earlier post, 3 Types of Customer Interactions Every Customer Experience Manager Must Understand, there are three types of customer interactions: Stabilizing, Critical, and Planned.
The second of these, "critical" interactions, are service encounters that are out of the ordinary (a complaint, a question, a special request, an employee going the extra mile). The outcomes of these critical incidents can be either positive or negative, depending on how they are handled; they are rarely neutral. Because they are memorable and unusual, critical interactions tend to have a powerful effect on the relationship with the customer; they are "moments of truth" where the brand has an opportunity to solidify the relationship or risk defection.
Customer experience strategies need to include systems for identifying common or potential moments of truth, analyzing trends and patterns, and feeding that information back to the organization. Employees can then be trained to recognize critical opportunities, and empowered to respond to them in such a way that they will lead to positive outcomes and desired customer behaviors. One way to identify potential moments of truth and gauge the efficacy of service recovery strategies is a research technique called Critical Incident Technique (CIT).
Critical Incident Technique
CIT is a qualitative research methodology designed to uncover details surrounding a service encounter that a customer found particularly satisfying or dissatisfying. There is plenty of room for freedom in study design, but at bottom we are trying to find out what happened, what the customer did in response to the incident (positive or negative), what recovery strategy was used for negative incidents, and how effective that recovery strategy was.
Again, there is a lot of freedom here, but a rough study design looks like this:
First, ask the research participant to recall a recent experience in your industry that was particularly satisfying or dissatisfying. Now, ask open-ended probing questions to gather the who, what, when, why and how surrounding that experience, questions like:
- When did the incident happen?
- What caused the incident? What are the specific circumstances that led to the incident or situation?
- Why did you feel the incident was particularly satisfying or dissatisfying?
- How did the provider respond to the incident? How did they correct it?
- What action(s) did you take as a result of the incident?
The analysis of CIT interviews consists of classifying these incidents into well-defined, mutually exclusive categories and sub-categories of increasing specificity. For example, the researcher may classify incidents into the following categories:
- Service Delivery System Failures
  - Unavailable Service
  - Unreasonably Slow Service
  - Other Core Service Failures
- Customer Needs and Requests
  - Special Customer Needs
  - Customer Preferences
- Unprompted and Unsolicited Actions
  - Attention Paid to Customer
  - Truly Out of the Ordinary Employee Behavior/Performance
  - Holistic Evaluation
  - Performance Under Adverse Circumstances
A similar classification technique should be used to group recovery strategies and rate their effectiveness, as well as to classify the attitudinal and behavioral result for the customer: identifying in what ways the customer changed their behavior toward, or relationship with, the brand based on the incident. For example, did they purchase more or less, tell others about the experience directly or via social media, call for support more or less often, use different channels, or change providers?
The end result of this analysis is a list of common moments of truth within your industry, an account of how customers change their behavior in profitable or unprofitable ways as a result of each moment of truth, and an evaluation of the effectiveness of recovery strategies, giving managers an informed perspective from which to prepare employees to recognize moments of truth and respond in ways that lead to positive outcomes.
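As a rough illustration of the analysis step, the sketch below tallies classified incidents by category and computes the share of recovery strategies judged effective. The category labels follow the taxonomy above; the incident records and strategies are hypothetical.

```python
from collections import Counter

# Hypothetical classified CIT records: (category, outcome, recovery_strategy,
# recovery_was_effective). All data here are illustrative placeholders.
incidents = [
    ("Service Delivery System Failures / Unreasonably Slow Service",
     "negative", "apology and fee waiver", True),
    ("Unprompted and Unsolicited Actions / Attention Paid to Customer",
     "positive", None, None),
    ("Customer Needs and Requests / Special Customer Needs",
     "negative", "escalated to manager", False),
]

# Frequency of each incident category: the common moments of truth.
category_counts = Counter(category for category, _, _, _ in incidents)

# Effectiveness of recovery strategies for negative incidents.
recoveries = [(strategy, effective)
              for _, outcome, strategy, effective in incidents
              if outcome == "negative"]
effective_rate = sum(effective for _, effective in recoveries) / len(recoveries)

print(category_counts.most_common())
print(f"Recovery strategies judged effective: {effective_rate:.0%}")
```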
For additional perspectives on moments of truth, see the post: 4 Ways to Understand & Manage Moments of Truth.
Translate Research to Action with a VOC Table
Ask any group of satisfaction researchers and consumers of satisfaction research about the largest problem facing the research industry, and the lack of actionability (or usefulness) of the research will most likely be the most common concern raised. All too often, research is conducted and reports are produced and bound into professional-looking binders, which end up gathering dust on a shelf someplace or, if you're like me, providing excellent use as a door stop.
What is missing is a strategy to transition research into action, and bring the various stakeholders into the research process.
Managers and researchers alike are faced with the difficult task of determining where to make investments, and predicting the relative return on such investments. One such tool for transforming research into action is the Voice of the Customer (VOC) table.
A VOC table is an excellent tool for matching key satisfaction dimensions and attributes with business processes, allowing managers to make informed judgments about which business processes will yield the most return in terms of satisfaction improvement.
A VOC table supports this transition by listing the key survey elements on the vertical axis, sorted by importance rating. On the horizontal axis, a complete list of business functions is listed. The researcher and manager then match business processes/functions with key survey elements and judge the extent to which each business function influences each key survey element (in the enclosed example, a dark filled-in square represents a strong influence, an unfilled square a moderate influence, and a triangle a slight influence). A numeric value is assigned to each influence (typically four for a strong influence, two for a medium influence, and one for a weak influence). For each cell in the table, a value is calculated by multiplying the strength of the influence by the importance rating of the survey element. Finally, the cell values are summed for each column (business function) to determine which business functions have the most influence on customer satisfaction.
Consider the enclosed example of a VOC table. In this example, a retail mortgage-lending firm has conducted a wave of customer satisfaction research and intends to link this research to process improvement initiatives using the attached VOC table. The satisfaction attributes and their relative importance, as determined in the survey, are listed in the far left column. Specific business processes from loan origination to closing are listed across the top of the table. For each cell, where a satisfaction attribute and a business process intersect, the researchers have made a judgment of the strength of the business process's influence on the satisfaction attribute. For example, the researchers have determined proper document collection to have a strong influence on the firm's ability to perform services right the first time, and a weak influence on willingness to provide service. For each cell, the strength of the influence is multiplied by the importance rating. The sum of the cell values in each column determines the relative importance of each business process in influencing overall customer satisfaction.
In the example, the loan quote process and the clearance of underwriting exemptions are the two parts of the lending process with the greatest influence on customer satisfaction, followed closely by the explanation of the loan process. The other three aspects of the loan process of significance are document collection, application, and preliminary approval. The least important are document recording and credit and title report ordering. The managers of this hypothetical lending institution now know which parts of the lending process to focus on to improve customer satisfaction. Furthermore, in addition to knowing which specific events to focus on, they also know, generally speaking, that improvements in the loan origination process will yield more return in terms of customer satisfaction than improvements in processing, underwriting, and closing, as all the loan origination elements have a comparatively strong influence on satisfaction.
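The VOC arithmetic is simple enough to sketch in a few lines of Python. The attribute importance weights and most influence ratings below are hypothetical placeholders, though the two influence judgments mentioned above (document collection: strong on right-first-time, weak on willingness) are mirrored.

```python
# Importance ratings of survey attributes (hypothetical weights).
importance = {
    "Performs services right the first time": 9,
    "Willingness to provide service": 7,
}

# Influence of each business process on each attribute:
# 4 = strong, 2 = moderate, 1 = slight, 0 = none (hypothetical except where noted).
influence = {
    "Performs services right the first time": {"Document collection": 4,  # strong, per the example
                                               "Loan quote": 4},
    "Willingness to provide service": {"Document collection": 1,          # weak, per the example
                                       "Loan quote": 2},
}

processes = ["Document collection", "Loan quote"]

# Cell value = influence strength x attribute importance; column sums rank processes.
scores = {p: sum(importance[a] * influence[a].get(p, 0) for a in importance)
          for p in processes}

for process, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{process}: {score}")
```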
Not All Customer Experience Variation is Equal: Use Control Charts to Identify Actual Changes in the Customer Experience
Variability in customer experience scores is common and normal. Be it a survey of customers, mystery shops, social listening or other customer experience measurement, a certain amount of random variation in the data is normal. As a result, managers need a means of interpreting any variation in their customer experience measurement to evaluate if the customer experience is truly changing, or if the variation they are seeing is simply random.
One solution to this need is control charts. Control charts are a statistical tool, commonly used in Six Sigma programs, for measuring variation. They track customer experience measurements within upper and lower quality control limits. When a measurement falls outside either limit, it indicates an actual change in the customer experience rather than random variation.
To illustrate this concept, consider the following example of mystery shop results:
In this example the general trend of the mystery shop scores is up; however, from month to month there is a bit of variation. Managers of this customer experience need to know whether July was a particularly bad month and, conversely, whether the improved performance in October and November is something to be excited about. Does it represent a true change in the customer experience?
To answer these questions, we need two more pieces of information beyond the average mystery shop scores: the sample size, or count of shops, for each month, and the standard deviation of shop scores for each month.
The following table adds these two additional pieces of information into our example:
| Month | Count of Mystery Shops | Average Mystery Shop Score | Standard Deviation of Mystery Shop Scores |
| --- | --- | --- | --- |
| May | 510 | 83% | 18% |
| June | 496 | 84% | 18% |
| July | 495 | 82% | 20% |
| Aug | 513 | 83% | 15% |
| Sept | 504 | 83% | 15% |
| Oct | 489 | 85% | 14% |
| Nov | 494 | 85% | 15% |
| Averages | 500 | 83.6% | 16.4% |
Now, in order to determine whether the variation in shop scores is significant, we need to calculate upper and lower quality control limits; any variation above or below these limits is significant, reflecting an actual change in the customer experience.
The upper and lower quality control limits (UCL and LCL, respectively), at a 95% confidence level, are calculated according to the following formulas:

UCL = x̄ + 1.96 × (SD / √n)
LCL = x̄ − 1.96 × (SD / √n)

Where:

x̄ = grand mean of the score
n = mean sample size (number of shops)
SD = mean standard deviation
Applying these equations to the data in the table above produces the following control chart, where the upper and lower quality control limits are depicted in red.
This control chart tells us not only that the general trend of the mystery shop scores is positive and that November's performance improved above the upper control limit, but also that something unusual happened in July, when performance slipped below the lower control limit. Maybe employee turnover caused the decrease, or something external such as a weather event, but we know with 95% confidence that the attributes measured in July were less present relative to the other months. All other variation outside of November and July is not large enough to be considered statistically significant.
So… what this control chart gives managers is a meaningful way to determine whether any variation in their customer experience measurement reflects an actual change in the experience as opposed to random variation or chance.
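Here is a minimal sketch of the control-limit calculation in Python, using the monthly figures from the table above. Note that because the table values are rounded, points the original chart shows just outside a limit may land right on it here.

```python
import math

# Monthly mystery shop data from the table above (scores and SDs as decimals).
months = ["May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov"]
counts = [510, 496, 495, 513, 504, 489, 494]
scores = [0.83, 0.84, 0.82, 0.83, 0.83, 0.85, 0.85]
sds    = [0.18, 0.18, 0.20, 0.15, 0.15, 0.14, 0.15]

grand_mean = sum(scores) / len(scores)   # x-bar: grand mean of the score
mean_n     = sum(counts) / len(counts)   # mean sample size
mean_sd    = sum(sds) / len(sds)         # mean standard deviation

z = 1.96                                 # z-value for 95% confidence
margin = z * mean_sd / math.sqrt(mean_n)
ucl, lcl = grand_mean + margin, grand_mean - margin
print(f"UCL = {ucl:.1%}, LCL = {lcl:.1%}")

# Flag months whose average score falls outside the control limits.
for month, score in zip(months, scores):
    status = "above UCL" if score > ucl else "below LCL" if score < lcl else "within limits"
    print(f"{month}: {score:.0%} ({status})")
```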
In the next post, we will look to the causes of this variation.
Not All Customer Experience Variation is Equal: Common Cause vs. Special Cause Variation
Variability in customer experience scores is common and normal. Be it a survey of customers, mystery shops, social listening or other customer experience measurement, a certain amount of random variation in the data is normal. As a result, managers need a means of interpreting any variation in their customer experience measurement to evaluate if the customer experience is truly changing, or if the variation they are seeing is simply random.
In a previous post, we proposed the use of control charts as a tool to track customer experience measurements within upper and lower quality control limits, giving managers a meaningful way to determine if any variation in their customer experience measurement reflects an actual change in the experience as opposed to random variation or chance.
Now managers need to understand the causes of variation: specifically, common cause and special cause variation. These are Six Sigma concepts; while most commonly used in industrial production, they can be borrowed and applied to the customer experience.
Common Cause Variation: Much like variation in the roll of dice, common cause variation is natural variation within any system. Common cause variation is any variation constantly active within a system, and represents statistical “noise” within the system.
Examples of common cause variation in the customer experience are:
- Poorly defined, poorly designed, inappropriate policies or procedures
- Poor design or maintenance of computer systems
- Inappropriate hiring practices
- Insufficient training
- Measurement error
Special Cause Variation: Unlike the roll of the dice, special cause variation is not probabilistically predictable within the system. As a result, it does not represent statistical "noise" within the system; it is the signal within the system.
Examples of special cause variation include:
- High demand/ high traffic
- Poor adjustment of equipment
- Just having a bad day
When measuring the customer experience, it is helpful to consider everything within the context of the company-customer interface. Every time a sales or service interaction occurs within this interface, the customer learns something from the experience and adjusts their behavior as a result. Managing the customer experience is the practice of managing what customers learn from the experience, and thus managing their behavior in profitable ways.
A key to managing customer behaviors is understanding common cause and special cause variation and their implications. Common cause variation is variation built into the system: policies, procedures, equipment, hiring practices, and training. Special cause variation is more or less how the human element and the system interact.
Customer Experience Measurement in the Coronavirus Age
Perhaps the most important way brands can respond to the moment of truth presented by this crisis is by showing true care for customers, employees, and the community.
Additionally, it is imperative that customers feel safe. Based on current science, in-person interactions can be relatively safe if conducted within CDC and public health guidance, including risk mitigation efforts such as physical distancing, masks, ventilation, limits on length of exposure, and hand washing and sanitizer.
Using these previous posts as a foundation, we can now address the implications of the pandemic on customer experience measurement.
So… what does all this mean in terms of customer experience measurement?
First, I like to think of customer experience measurement in terms of the brand-customer interface, where customers interact with the brand. At the center of the customer experience are the various channels that form the interface between the customer and the institution. Together, these channels define the brand more than any external messaging. Best-in-class customer experience research programs monitor this interface from multiple directions across all channels to form a comprehensive view of the customer experience.

Customers and front-line employees are the two stakeholders who interact most commonly with each other in the customer-institution interface. As a result, a best practice in understanding this interface is to monitor it directly from each direction: surveying customers from one side, gathering observations from employees on the brand side, and testing for the presence and timing of customer experience attributes through observational research such as mystery shopping.
Measure Customer Comfort and Confidence
First, and fundamentally, the American economy is driven by consumer confidence. Consumers need to feel confident in public spaces to participate in public commerce. Customer experience researchers would be well served by testing for consumer confidence with respect to safety and mitigation strategies; these mitigation strategies are quickly becoming consumer requirements for confidence in public commerce.
Along the same lines, given the centrality of consumer confidence in our economy, measuring how customers feel about the mitigation strategies put in place by the brand is extremely important. Such measurements would include measures of the appropriateness of, effectiveness of, and confidence in the mitigation strategies employed. We recommend two measurements: how customers feel about the safety of the brand's in-person channel in general, and how they feel about its safety relative to other brands they interact with during the pandemic. The first is an absolute measure of comfort; the second attempts to isolate the variable of the pandemic, measuring just the brand's response.
The pandemic is changing consumer behavior; this much is clear. As such, customer experience researchers should endeavor to identify and understand how consumer behavior is changing so they can adjust the customer experience delivery mix to align with these changes.
Testing Mitigation Strategies
Drilling down from broader research issues to mystery shopping specifically, several research design issues should be considered in response to the COVID-19 pandemic.
Measure Customer Confidence in Post-Transaction Surveys with Alerts to Failures: First, as economic activity waxes and wanes through this coronavirus mitigation effort, consumer confidence will drive economic activity both on a macro and micro-economic level. Broadly, consumers as a whole will not participate in the in-person economy until they are confident the risk of infection is contained. Pointedly, at the individual business level, customers will not return to a business if they feel unsafe. Therefore, market researchers should build measures of comfort or confidence into the post-transaction surveys to measure how the customer felt as a result of the experience. This will alert managers to potential unsafe practices which must be addressed. It will also serve as a means of directly measuring the return on investment (ROI) of customer confidence and safety initiatives in terms of the customer experience.
Measure Customer Perception of Mitigation Strategies: Coronavirus mitigation strategies will become typical attributes of the customer experience. Beyond simply testing for the presence of these mitigation strategies, customer experience managers should determine customer perceptions of their appropriateness, efficacy, and perhaps most importantly, their confidence in these mitigation strategies.
Gather Employee Observations of Mitigation Strategies: Frontline employees spend nearly all their time in the brand-customer interface. As such, they have always been a wealth of information about the customer experience, and they can be surveyed very efficiently. The post-pandemic customer experience is no exception.
First, as we discussed previously, employees have the same personal safety concerns as customers. Surveys of employees should endeavor to evaluate employees’ confidence in and comfort with coronavirus mitigation strategies.
Secondly, frontline employees, placed in the middle of the brand-customer interface, are in a perfect position to give feedback regarding the efficacy of mitigation strategies and the extent to which they fit the desired customer experience, providing managers with valuable insight into adjustments that may make mitigation strategies fit more precisely into overall customer experience objectives.
Independently Test for the Presence of Mitigation Strategies: All in-person channels across all industries will require the adoption of coronavirus mitigation strategies. Mystery shopping is the perfect tool to test for the presence of mitigation strategies – evaluating such strategies as: designed physical distancing, physical barriers between POS personnel and customers, mask compliance, sanitization, and duration of contact.
Alternative Research Sources for Behavioral Observations: Some customer experience managers may not want unnecessary people within their in-person channel. So the question arises: how can employee behaviors be measured without the use of mystery shoppers? One solution is to solicit behavioral observations directly from actual customers shortly after the in-person service interaction. Customers can be recruited onsite to provide their observations through the use of QR codes, or in certain industries after the event via e-mail. The purpose of these surveys is behavioral: asking customers to recall whether a specific behavior or service attribute was present during the encounter. From a research design standpoint, this practice is a little suspect, as asking people to recall the specifics of an event after the fact, without prior knowledge, is problematic; customers are not prepared or prompted to look for and recall specific events. However, given the unique nature of the circumstances, in some cases there is an argument that the benefits of this approach outweigh the research limitations.
Test Channel Performance and Alignment
The instantaneous need for alternative delivery channels has significantly raised the stakes in cross-channel alignment. As sales volume shifts to these alternative channels, customer experience researchers need to monitor the customer experience within all channels to measure the efficacy of the experience, as well as alignment of each channel to both each other and the overall brand objectives.
Finally, as more customers migrate away from in-person channels, customer experience researchers should endeavor to measure the customer experience within each channel. Late adopters forced by the pandemic into these channels may bring a completely different set of expectations than early adopters; managers would be well served to understand the expectations of these newcomers so they can adjust the customer experience to meet them.
As commerce migrates away from conventional in-person channels to alternative delivery channels, the importance of these channels will increase. As a result, the quality and consistency of delivery in these channels will need to be measured through the use of mystery shoppers. Some industries will be problematic, as their current economics do not support alternative delivery. With time, however, economic models will evolve to support alternative channels.
Conclusion
This is a difficult time. It will be the defining event of our generation.
The pandemic, and our reaction to it, is dramatically changing how humans interact with each other, and the customer experience is no exception. There is reason to suggest this difficult time could become a new normal. Managers of the customer experience need to understand the implications for the customer experience in the post-Covid environment, as the effects of the pandemic may never fully subside. Customer experience managers must consider the implications of this new normal, not only for the customer experience, but for customer experience measurement.

Integrated Digital First CX Model: Implications for CX Researchers
In previous posts in this five-part series on building an integrated digital-first service model, we discussed matching waypoints of the customer journey to the channels best suited to them, and the implications for CX managers.
An integrated delivery channel requires an integration of research methodologies to measure the customer experience. Researchers should think in terms of exposure and moments of truth as they monitor each waypoint in the customer experience.
Understanding Exposure & Moments of Truth Risks
Digital waypoints with high exposure risk should be tested thoroughly with usability testing, focus groups, ethnography and other qualitative research to ensure features meet customer needs and are programmed correctly. Once programmed and tested, they need to be monitored with ongoing audits.
Waypoints with higher moment of truth risk are best monitored with post-transaction surveys, mystery shopping and the occasional focus group.
| Exposure Risk | Moments of Truth |
| --- | --- |
| Usability tests, ongoing audits | Mystery shopping, focus groups |
Integrated Channel CX Measurement
When measuring the customer experience across multiple channels in an integrated manner, it is important to gather both consistent measures across all channels and measures specific to each channel. Each channel has its own specific needs; however, consistent measures across all channels provide context and a point of comparison.
Here is what an integrated omni-channel research plan may look like.
Kinēsis recommends measuring each channel against a set of consistent brand attribute measurements. Brands have personality, and it is incumbent on CX researchers to evaluate each channel against the overall desired brand personality objectives. A channel disconnected from the institution’s brand objectives can do a lot of damage to the institution’s perceived image.
Kinēsis uses brand adjectives and agreement statements to measure customer impressions of the brand. Ask yourself what five or six adjectives you would like customers to use to describe your institution. Then simply take these adjectives and ask customers whether they describe the customer experience.
Next, ask yourself what statements you would like customers to use to describe their perception of the brand as a result of any interaction. Statements such as:
• We are easy to do business with.
• We are knowledgeable.
• We are interested in members as people, and concerned for their individual needs.
• We are committed to the community.
These statements can be incorporated into the research by asking customers the extent to which they agree with each statement.
Again, brands have personality. Brand adjectives and agreement statements are an excellent way to tie disparate research across multiple channels together with consistent measures of perceptions of the brand personality as a result of the experience.
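As a rough sketch of how these consistent measures tie channels together, the snippet below computes mean agreement and top-two-box rates per channel for one statement. The channels, scale, and response data are hypothetical placeholders.

```python
from statistics import mean

# Hypothetical 5-point agreement responses (1 = strongly disagree ... 5 = strongly agree)
# for the statement "We are committed to the community.", grouped by channel.
responses = {
    "Mobile":         [5, 4, 4, 3, 5],
    "Web":            [4, 4, 3, 3, 4],
    "Contact Center": [5, 3, 4, 4, 3],
    "Branch":         [5, 5, 4, 5, 4],
}

# The same statement, scored the same way in every channel's survey, gives a
# consistent cross-channel benchmark of brand perception.
for channel, scores in responses.items():
    top_two = sum(s >= 4 for s in scores) / len(scores)  # agree or strongly agree
    print(f"{channel}: mean {mean(scores):.1f}, top-two-box {top_two:.0%}")
```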
Channel Specific Dimensions
Different channels have different service attributes; therefore, it is important to provide each channel manager with research relevant to their channel. Digital channels, for example, may require measures of appeal, identity, navigation, content, value and trust. Non-digital channels may require measures such as reliability, responsiveness, competence, empathy and the physical environment.
Efficacy of the Experience
Regardless of channel, all research should contain consistent measures of the efficacy of the experience: the institution's ultimate objective for every customer interaction. Ask yourself, how do we want the customer to feel or think as a result of the interaction?
Some examples of efficacy measurements include:
• Purchase Intent/Return Intent: Kinēsis has a long history of using purchase intent as a dependent variable.
• Likelihood of Referral: Likelihood of referral measures (like Net Promoter Score) are generally accepted as a reliable proxy measure for customer loyalty.
• Member/Customer Advocacy: The extent to which the financial institution is an advocate for the customer is best measured with an agreement scale applied to the statement, "This bank cares about me, not just the bottom line." Agreement with this statement is also an excellent proxy for loyalty.