A New Normal: Implications for Bank Customer Experience Measurement Post Pandemic – Planned Interactions
Part 2: Research Tools to Monitor Planned Interactions through the Customer Lifecycle
As we explored in an earlier post, Three Types of Customer Experiences CX Managers Must Understand, there are three types of customer interactions: Planned, Stabilizing, and Critical.
Planned interactions are intended to increase customer profitability across the customer lifecycle by engaging customers with relevant interactions and content in an integrated omni-channel environment. Planned interactions will continue to grow in importance as the financial services industry shifts to an integrated digital first model.
These planned interactions are frequently triggered by changes in account usage, financial situation, family profile, etc. CRM analytics combined with Big Data are becoming quite effective at recognizing such opportunities and prompting action. Customer experience managers should have a process to record and analyze the quality of execution of planned interactions, with the objective of evaluating their effectiveness – regardless of the channel.
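To make the trigger idea concrete, here is a minimal, hypothetical sketch of the kind of rule a CRM system might apply. The field names, thresholds, and suggested actions are illustrative assumptions, not a description of any particular bank’s analytics.

```python
from typing import Optional

def suggest_planned_interaction(profile: dict) -> Optional[str]:
    """Return a suggested planned interaction when account activity signals an opportunity."""
    # Hypothetical trigger fields and thresholds for illustration only
    if profile.get("direct_deposit_stopped"):
        return "Retention outreach: confirm primary-institution status"
    if profile.get("large_deposit_last_30_days", 0) > 10_000:
        return "Invite a savings/investment conversation"
    if profile.get("new_dependent"):
        return "Introduce education savings products"
    return None  # no relevant trigger - avoid irrelevant, annoying contact

# Example: a large recent deposit triggers a relevant, permission-based outreach
print(suggest_planned_interaction({"large_deposit_last_30_days": 25_000}))
```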
The key to an effective strategy for planned interactions is relevance. Triggered requests for increased engagement must be made in the context of the customer’s needs and with their permission; otherwise, the requests will come off as clumsy and annoying, and give the impression the bank is not really interested in the customer’s individual needs. By aligning information about execution quality (cause) and customer impressions (effect), customer experience managers can build a more effective and relevant approach to planned interactions.
Research Plan for Planned Interactions
The first step in designing a research plan to test the efficacy of these planned interactions is to define the campaign. Ask yourself: what customer interactions are planned across these layers of integrated channels? Mapping the process will define your research objectives, allowing an informed judgment of what to measure and how to measure it.
For example, after acquisition and onboarding, assume a bank has a campaign that triggers planned interactions based on signals from past engagement. These planned interactions are segmented into the following phases of the customer lifecycle: engagement, growth, and retention.

Engagement Phase
Often it is instructive to think of customer experience research in terms of the bank-customer interface, employing different research tools to study the customer experience from both sides of this interface.
In our example above, management may measure the effectiveness of planned experiences in the engagement phase with the following research tools:
Customer Side:
- Post-Event Surveys: These post-experience surveys are event-driven: a transaction or service interaction determines whether the customer is selected for a survey. They can be performed across all channels – digital, contact center and in-person. As the name implies, the purpose of this type of survey is to measure a specific customer experience.
- Overall Satisfaction Surveys: Overall satisfaction surveys measure customer satisfaction among the general population of customers, regardless of whether or not they recently conducted a transaction. They give managers valuable insight into overall satisfaction, engagement, image and positioning across the entire customer base, not just active customers.
Brand Side:
- Employee Surveys: Ultimately, employees are at the center of the integrated customer experience model. Employee surveys often measure employee satisfaction and engagement. However, there is far more value to be gleaned from employees. We employ them to understand what is going on at the customer-employee interface, leveraging employees as a valuable and inexpensive source of customer experience information. They not only provide intelligence into the customer experience, but also evaluate the level of support within the organization, and identify perceptual gaps between management and frontline personnel.
- Digital Delivery Channel Shopping: Be it a website or mobile app, digital mystery shopping allows managers of these channels to test ease of use, navigation and the overall customer experience of these digital channels.
- Transactional Mystery Shopping: Mystery shopping is about alignment. It is an excellent tool to align the customer experience to the brand. Best-in-class mystery shopping answers the question: is our customer experience consistent with our brand objectives? Historically, mystery shopping has focused on the in-person channel; however, we are seeing increasing use of mystery shopping with contact center agents.
Growth Phase
In the growth phase, we measure the effectiveness of planned experiences on both sides of the customer interface with the following research tools:
Customer Side:
- Awareness Surveys: Awareness of the brand, its products and services, is central to planned service interactions. Managers need to know how awareness and attitudes change as a result of these planned experiences.
- Wallet Share Surveys: These surveys are used to evaluate customer engagement with and loyalty to the institution. Specifically, they determine if customers consider the institution their primary provider of financial services, and identify potential roadblocks to wallet share growth.
Brand Side:
- Cross-Sell Mystery Shopping: In these unique mystery shops, mystery shoppers are seeded into the lead/referral process. The sales behaviors and their effectiveness are then evaluated in an outbound sales interaction. These shops work very well for planned sales interactions within the contact center environment.
Retention Phase
Finally, planned experiences within the retention phase of the customer lifecycle may be monitored with the following tools:
Customer Side:
- Critical Incident Technique (CIT): CIT is a qualitative research methodology designed to uncover details surrounding a service encounter that a customer found particularly satisfying or dissatisfying. This research technique identifies common critical incidents, their impact on the customer experience, and customer engagement, giving managers an informed perspective upon which to prepare employees to recognize moments of truth and respond in ways that will lead to positive outcomes.
- Lost Customer Surveys: Closed account surveys identify sources of run-off or churn to provide insight into improving customer retention.
- Comment Listening: Comment tools are not new, but with modern Internet-based technology they can be used as a valuable feedback tool to identify at-risk customers and mitigate the causes of their dissatisfaction.
Brand Side:
- Employee Surveys: Employees observe the relationship with the customer firsthand. They are a valuable source of customer experience information, and can provide a lot of context about the types of negative experiences customers frequently encounter.
- Life Cycle Mystery Shopping: If an integrated channel approach is the objective, one should measure the customer experience in an integrated manner. In lifecycle shops, shoppers interact with the bank over a period of time, across multiple touch points (digital, contact center and in-person). This lifecycle approach provides broad and deep observations about sales and service alignment to the brand and performance throughout the customer lifecycle across all channels.
Call to Action – Make the Most of the Research
For customer experience surveys, we recommend testing the effectiveness of planned interactions by benchmarking three loyalty attitudes:
- Would Recommend: The likelihood of the customer recommending the bank to a friend, relative or colleague.
- Customer Advocacy: The extent to which the customer agrees with the statement, “My bank cares about me, not just the bottom line.”
- Primary Provider: Does the customer consider the institution their primary provider for financial services?
For mystery shopping, we find that linking observations to a dependent variable, such as purchase intent, identifies which sales and service behaviors drive that outcome – informing decisions about training and incentives to reinforce the behaviors that drive purchase intent.
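As a rough, non-authoritative sketch of that kind of analysis, coded shop observations could be related to purchase intent with a simple logistic regression. The behavior names and data below are hypothetical placeholders, not Kinesis’ actual scoring model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row is one mystery shop: 1 = behavior observed, 0 = not observed (hypothetical data)
behaviors = ["greeting", "needs_assessment", "recommended_product", "asked_for_business"]
X = np.array([
    [1, 1, 1, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [1, 1, 1, 0],
])
# Shopper-reported purchase intent for each shop (1 = likely to purchase)
y = np.array([1, 0, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)
for name, coef in zip(behaviors, model.coef_[0]):
    # Larger positive coefficients suggest stronger drivers of purchase intent
    print(f"{name}: {coef:+.2f}")
```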
As the integrated digital first business model accelerates, planned interactions will continue to grow in importance, and managers of the customer experience should build customer experience monitoring tools to evaluate the efficacy of these planned experiences in terms of driving desired customer attitudes and behaviors.

A New Normal: Implications for Bank Customer Experience Measurement Post Pandemic – Three Types of Customer Experiences
Part 1: Three Types of Customer Experiences CX Managers Must Understand
COVID-19 Crisis Accelerating Change
The transformation began decades ago. Like a catalyst in a chemical reaction, the COVID-19 crisis has accelerated the transformation away from in-person channels. Recognizing paradigm shifts in the moment is often difficult; however, a long-anticipated paradigm shift appears to be upon us.
Shifts away from one thing require a shift toward another. The shift away from an in-person first approach is a shift toward a digital first approach with increasingly integrated layers of engagement and expertise.

Digital apps allow for near-continuous engagement with customers. Apps now sit in customers’ pockets and are available on demand, when and where customers need them. This communication works both ways, with the customer providing information to the bank and the bank informing the customer. Managers of the customer experience can now deliver contextually relevant information directly to the customer. Automated advice and expertise are in their infancy, but show promise. Chatbots and other preprogrammed help and advice can start the process of delivering assistance and expertise when requested.
Contact centers are the next logical layer of this integrated customer experience. Contact centers are an excellent channel to deliver general customer service and advice, as well as expert advice for more sophisticated financial needs. Kinesis has clients with Series 7 representatives and wealth managers providing expert financial advice via video conference.
The role of the branch obviously includes providing expert advice. Branches will continue to become smaller, more flexible, less monolithic, and tailored to the location and market. Small community centers will focus on community outreach, while larger flagship branches sit at the center of an integrated hub-and-spoke model – a model that includes digital and contact centers.
Three Types of Experiences
Every time a customer interacts with a bank, regardless of channel, they learn something about the bank and adjust their behavior based on what they learn. This is the core component of customer experience management – to teach customers to behave in profitable ways. It is incumbent on managers of the customer experience to understand the different types of customer experiences and their implications for managing the customer experience in this manner. Customer experiences come in a variety of forms; however, there are three types of experiences customer experience managers should be alert to: planned, stabilizing, and critical experiences.
Planned
Planned interactions are intended to increase customer profitability by engaging customers in meaningful conversations in an integrated omni-channel environment. These interactions can be triggered by changes in the customers’ purchasing patterns, account usage, financial situation, family profile, etc. CRM analytics combined with Big Data are becoming quite effective at recognizing such opportunities and prompting action. Customer experience managers should have a process to record and analyze the quality of execution of planned interactions, with the objective of evaluating their performance.
The key to an effective strategy for planned interactions is appropriateness. Triggered requests for increased spending must be made in the context of the customer’s needs and with their permission; otherwise, the requests will come off as clumsy, annoying, and not customer-centric. By aligning information about execution quality (cause) and customer actions (effect), customer experience managers can build a more effective and appropriate approach to planned interactions.
In future posts, we will look at planned experiences and consider their implications in light of this shift toward a digital first approach.
Stabilizing
Stabilizing interactions promote customer retention, particularly in the early stages of the relationship.
New customers are at the highest risk of defection. Long-term customers know what to expect from their bank, and due to self-selection, their expectations tend to be aligned with their experience. New customers are more likely to experience disappointment, and thus more likely to defect. Turnover by new customers is particularly unprofitable because many defections occur prior to the break-even point of customer acquisition costs, resulting in a net loss on the customer. Thus, experiences that stabilize the customer relationship early ensure a higher proportion of customers will reach positive profitability.
The keys to an effective stabilizing strategy are education, consistency, and competence. Education influences expectation and helps customers develop realistic expectations. It goes beyond simply informing customers about the products and services offered. It systematically informs new customers how to use the bank’s services more effectively and efficiently: how to obtain assistance, how to complain, and what to expect as the relationship progresses. For an integrated digital first business model to work, customers need to learn how to use self-administered channels and know how, and when, to access the deeper layers offering more engagement and expertise.
In future posts, we will look at stabilizing experiences and consider their implications in light of this shift toward a digital first approach.
Critical
Critical interactions are events that lead to memorable customer experiences. While most customer experiences are routine, from time to time a situation arises that is out of the ordinary: a complaint, a question, a special request, a chance for an employee to go the extra mile. Today, many of these critical experiences occur amidst the underlying stresses of the COVID-19 crisis. The outcomes of these critical incidents can be either positive or negative, depending upon the way the bank responds to them; however, they are seldom neutral. The longer a customer remains with a financial institution, the greater the likelihood that one or more critical experiences will occur – particularly in a time of crisis, like the pandemic.
Because they are memorable and unusual, critical interactions tend to have a powerful effect on the customer relationship. We often think of these as “moments of truth,” where the institution has an opportunity to solidify the relationship, earning a loyal customer, or to risk the customer’s defection. Positive outcomes lead to “customer delight” and word-of-mouth endorsements, while negative outcomes lead to customer defections, diminished share of wallet and unfavorable word-of-mouth.
The key to an effective critical interaction strategy is opportunity. Systems and processes must be in a position to react to these critical moments of truth. An effective customer experience strategy should include systems for recording critical interactions, analyzing trends and patterns, and feeding that information back to management. This can be particularly challenging in an integrated omni-channel environment. Holistic customer profiles need to be available across channels, and employees must be trained to recognize critical opportunities and empowered to respond to them in such a way that they will lead to positive outcomes and desired customer behaviors.
In future posts, we will look at critical experiences and consider their implications in light of this shift toward a digital first approach.

Critical Incident Technique: A Tool to Identify and Prepare for Your Moments of Truth
As we explored in an earlier post, 3 Types of Customer Interactions Every Customer Experience Manager Must Understand, there are three types of customer interactions: Stabilizing, Critical, and Planned.
The second of these, “critical” interactions, are service encounters which are out of the ordinary (a complaint, a question, a special request, an employee going the extra mile). The outcomes of these critical incidents can be either positive or negative, depending on how they are responded to; however, they are rarely neutral. Because they are memorable and unusual, critical interactions tend to have a powerful effect on the relationship with the customer; they are “moments of truth” where the brand has an opportunity to solidify the relationship or risk defection.
Customer experience strategies need to include systems for identifying common or potential moments of truth, analyzing trends and patterns, and feeding that information back to the organization. Employees can then be trained to recognize critical opportunities, and empowered to respond to them in such a way that they will lead to positive outcomes and desired customer behaviors. One way to identify potential moments of truth and gauge the efficacy of service recovery strategies is a research technique called Critical Incident Technique (CIT).
Critical Incident Technique
CIT is a qualitative research methodology designed to uncover details surrounding a service encounter that a customer found particularly satisfying or dissatisfying. There is plenty of room for freedom in study design, but essentially we are trying to find out what happened, what the customer did in response to the incident (positive or negative), what recovery strategy was used for negative incidents, and how effective that recovery strategy was.
Again, there is a lot of freedom here, but roughly, the study design looks like this:
First, ask the research participant to recall a recent experience in your industry that was particularly satisfying or dissatisfying. Now, ask open-ended probing questions to gather the who, what, when, why and how surrounding that experience, questions like:
- When did the incident happen?
- What caused the incident? What are the specific circumstances that led to the incident or situation?
- Why did you feel the incident was particularly satisfying or dissatisfying?
- How did the provider respond to the incident? How did they correct it?
- What action(s) did you take as a result of the incident?
The analysis of CIT interviews consists of classifying these incidents into well-defined, mutually exclusive categories and sub-categories of increasing specificity. For example, the researcher may classify incidents into the following categories:
- Service Delivery System Failures
  - Unavailable Service
  - Unreasonably Slow Service
  - Other Core Service Failures
- Customer Needs and Requests
  - Special Customer Needs
  - Customer Preferences
- Unprompted and Unsolicited Actions
  - Attention Paid to Customer
  - Truly Out of the Ordinary Employee Behavior/Performance
  - Holistic Evaluation
  - Performance Under Adverse Circumstances
A similar classification technique should be used to group recovery strategies and their effectiveness, as well as the attitudinal and behavioral result on the customer – identifying the ways the customer changed their behavior toward, or relationship with, the brand as a result of the incident: did they purchase more or less, tell others about the experience directly or via social media, call for support more or less often, use different channels, change providers, etc.?
The end result of this analysis is a list of common moments of truth within your industry, an understanding of how customers change their behavior – in profitable or unprofitable ways – as a result of these moments of truth, and an evaluation of the effectiveness of recovery strategies, giving managers an informed perspective upon which to prepare employees to recognize moments of truth and respond in ways that will lead to positive outcomes.
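As a minimal sketch of how coded CIT interviews might be tabulated, the snippet below counts incidents by category and by behavioral outcome. The incident records and labels are hypothetical; in practice they come from the researcher’s coding of each transcript.

```python
from collections import Counter

# Hypothetical coded incidents produced by CIT interview analysis
coded_incidents = [
    {"category": "Service Delivery System Failures", "sub": "Unreasonably Slow Service",
     "valence": "dissatisfying", "recovery_effective": False, "behavior_change": "reduced usage"},
    {"category": "Unprompted and Unsolicited Actions", "sub": "Attention Paid to Customer",
     "valence": "satisfying", "recovery_effective": None, "behavior_change": "positive word of mouth"},
    {"category": "Service Delivery System Failures", "sub": "Unavailable Service",
     "valence": "dissatisfying", "recovery_effective": True, "behavior_change": "no change"},
]

# Frequency of incident categories and of the behavioral outcomes they produce
category_counts = Counter((i["category"], i["sub"]) for i in coded_incidents)
outcome_counts = Counter((i["valence"], i["behavior_change"]) for i in coded_incidents)

for (category, sub), count in category_counts.most_common():
    print(f"{category} / {sub}: {count}")
for (valence, change), count in outcome_counts.most_common():
    print(f"{valence} -> {change}: {count}")
```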
For additional perspectives on moments of truth, see the post: 4 Ways to Understand & Manage Moments of Truth.
Not All Customer Experience Variation is Equal: Use Control Charts to Identify Actual Changes in the Customer Experience
Variability in customer experience scores is common and normal. Be it a survey of customers, mystery shops, social listening or other customer experience measurement, a certain amount of random variation in the data is normal. As a result, managers need a means of interpreting any variation in their customer experience measurement to evaluate if the customer experience is truly changing, or if the variation they are seeing is simply random.
One solution to this need is control charts. Control charts are a statistical tool commonly used in Six Sigma programs to measure variation. They track customer experience measurements within upper and lower quality control limits. When measurements fall outside either limit, the trend indicates an actual change in the customer experience rather than just random variation.
To illustrate this concept, consider the following example of mystery shop results:
In this example the general trend of the mystery shop scores is up; however, from month to month there is a bit of variation. Managers of this customer experience need to know whether July was a particularly bad month and, conversely, whether the improved performance in October and November is something to be excited about. Does it represent a true change in the customer experience?
To answer these questions, there are two more pieces of information we need to know beyond the average mystery shop scores: the sample size or count of shops for each month and the standard deviation in shop scores for each month.
The following table adds these two additional pieces of information into our example:
Month | Count of Mystery Shops | Average Mystery Shop Scores | Standard Deviation of Mystery Shop Scores |
May | 510 | 83% | 18% |
June | 496 | 84% | 18% |
July | 495 | 82% | 20% |
Aug | 513 | 83% | 15% |
Sept | 504 | 83% | 15% |
Oct | 489 | 85% | 14% |
Nov | 494 | 85% | 15% |
Averages | 500 | 83.6% | 16.4% |
Now, in order to determine whether the variation in shop scores is significant, we need to calculate upper and lower quality control limits, where any variation above or below these limits is significant, reflecting an actual change in the customer experience.
The upper and lower quality control limits (UCL and LCL, respectively), at a 95% confidence level, are calculated according to the following formulas:
UCL = x + 1.96 × (SD / √n)
LCL = x − 1.96 × (SD / √n)
Where:
x = Grand mean of the score
n = Mean sample size (number of shops)
SD = Mean standard deviation
Applying these equations to the data in the table above produces the following control chart, where the upper and lower quality control limits are depicted in red.
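As a rough illustration, the control limits can be computed directly from the monthly counts, means, and standard deviations in the table, assuming the 95% formula above (grand mean ± 1.96 × mean SD / √mean n). Because the published figures are rounded, borderline months such as November may land just inside or just outside the computed limits.

```python
from math import sqrt

# Monthly mystery shop results from the table above: (month, shops, mean score, std dev)
months = [
    ("May", 510, 0.83, 0.18), ("June", 496, 0.84, 0.18), ("July", 495, 0.82, 0.20),
    ("Aug", 513, 0.83, 0.15), ("Sept", 504, 0.83, 0.15), ("Oct", 489, 0.85, 0.14),
    ("Nov", 494, 0.85, 0.15),
]

x_bar = sum(m[2] for m in months) / len(months)   # grand mean (~83.6%)
n_bar = sum(m[1] for m in months) / len(months)   # mean sample size (~500)
sd_bar = sum(m[3] for m in months) / len(months)  # mean standard deviation (~16.4%)

# 95% control limits: grand mean +/- 1.96 standard errors
ucl = x_bar + 1.96 * sd_bar / sqrt(n_bar)
lcl = x_bar - 1.96 * sd_bar / sqrt(n_bar)
print(f"UCL = {ucl:.2%}, LCL = {lcl:.2%}")

# Months whose average score falls outside the limits show special cause variation
for month, _, score, _ in months:
    if score > ucl or score < lcl:
        print(f"{month}: {score:.0%} falls outside the control limits")
```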
This control chart tells us not only that the general trend of the mystery shop scores is positive and that November’s performance improved above the upper control limit, but also that something unusual happened in July, where performance slipped below the lower control limit. Perhaps employee turnover caused the decrease, or something external such as a weather event, but we know with 95% confidence that the attributes measured in July were less present relative to the other months. All other variation outside of July and November is not large enough to be considered statistically significant.
So…what this control chart gives managers is a meaningful way to determine if any variation in their customer experience measurement reflects an actual change in the experience as opposed to random variation or chance.
In the next post, we will look to the causes of this variation.
Next post:
Not All Customer Experience Variation is Equal: Common Cause vs. Special Cause Variation
Not All Customer Experience Variation is Equal: Common Cause vs. Special Cause Variation
Variability in customer experience scores is common and normal. Be it a survey of customers, mystery shops, social listening or other customer experience measurement, a certain amount of random variation in the data is normal. As a result, managers need a means of interpreting any variation in their customer experience measurement to evaluate if the customer experience is truly changing, or if the variation they are seeing is simply random.
In a previous post, we proposed the use of control charts as a tool to track customer experience measurements within upper and lower quality control limits, giving managers a meaningful way to determine if any variation in their customer experience measurement reflects an actual change in the experience as opposed to random variation or chance.
Now, managers need to understand the causes of variation, specifically common and special cause variation. Common and special cause variation are Six Sigma concepts; while most commonly used in industrial production, they can be borrowed and applied to the customer experience.
Common Cause Variation: Much like variation in the roll of dice, common cause variation is natural variation within any system. Common cause variation is any variation constantly active within a system, and represents statistical “noise” within the system.
Examples of common cause variation in the customer experience are:
- Poorly defined, poorly designed, inappropriate policies or procedures
- Poor design or maintenance of computer systems
- Inappropriate hiring practices
- Insufficient training
- Measurement error
Special Cause Variation: Unlike the roll of the dice, special cause variation is not probabilistically predictable within the system; as a result, it does not represent statistical “noise” within the system, but rather is the signal within the system.
Examples of special cause variation include:
- High demand/ high traffic
- Poor adjustment of equipment
- Just having a bad day
When measuring the customer experience, it is helpful to consider everything within the context of the company-customer interface. Every time a sales or service interaction occurs within this interface, the customer learns something from the experience and adjusts their behavior as a result. Managing the customer experience is the practice of managing what customers learn from the experience and thus managing their behavior in profitable ways.
A key to managing customer behaviors is understanding common cause and special cause variation and their implications. Common cause variation is variation built into the system: policies, procedures, equipment, hiring practices, and training. Special cause variation is more or less how the human element and the system interact.
See earlier post:
Customer Experience Measurement in the Coronavirus Age
Perhaps the most important way brands can respond to the moment of truth presented by this crisis is showing true care for: customers, employees, and the community.
Additionally, it is imperative that customers feel safe. Based on current science, in-person interactions can be relatively safe if conducted within CDC and public health guidance, including risk mitigation efforts such as physical distancing, masks, ventilation, limited length of exposure, and hand washing & sanitizer.
Using these previous posts as a foundation, we can now address the implications of the pandemic on customer experience measurement.
So…. what does all this mean in terms of customer experience measurement?
First, I like to think of customer experience measurement in terms of the brand-customer interface, where customers interact with the brand. At the center of the customer experience are the various channels which form the interface between the customer and institution. Together, these channels define the brand more than any external messaging. Best-in-class customer experience research programs monitor this interface from multiple directions across all channels to form a comprehensive view of the customer experience.

Customers and front-line employees are the two stakeholders who interact most commonly with each other in the customer-institution interface. As a result, a best practice in understanding this interface is to monitor it directly from each direction: surveying customers from one side, gathering observations from employees on the brand side, and testing for the presence and timing of customer experience attributes through observational research such as mystery shopping.
Measure Customer Comfort and Confidence
First, fundamentally, the American economy is a consumer confidence driven economy. Consumers need to feel confident in public spaces to participate in public commerce. Customer experience researchers would be well served by testing for consumer confidence with respect to safety and mitigation strategies. These mitigation strategies are quickly becoming consumer requirements in terms of confidence in public commerce.
Along the same lines, given the centrality of consumer confidence in our economy, measuring how customers feel about the mitigation strategies put in place by the brand is extremely important. Such measurements would include measures of appropriateness, effectiveness, and confidence in the mitigation strategies employed. We recommend two measurements: how customers feel about the safety of the brand’s in-person channel in general, and how they feel about its safety relative to other brands they interact with during the pandemic. The first is an absolute measure of comfort; the second attempts to isolate the variable of the pandemic, measuring just the brand’s response.
The pandemic is changing consumer behavior. This much is clear. As such, customer experience researchers should endeavor to identify and understand how consumer behavior is changing so they can adjust the customer experience delivery mix to align with these changes.
Testing Mitigation Strategies
Drilling down from broader research issues to mystery shopping specifically, there are several research design issues that should be considered in response to the COVID-19 pandemic.
Measure Customer Confidence in Post-Transaction Surveys with Alerts to Failures: First, as economic activity waxes and wanes through this coronavirus mitigation effort, consumer confidence will drive economic activity both on a macro and micro-economic level. Broadly, consumers as a whole will not participate in the in-person economy until they are confident the risk of infection is contained. Pointedly, at the individual business level, customers will not return to a business if they feel unsafe. Therefore, market researchers should build measures of comfort or confidence into the post-transaction surveys to measure how the customer felt as a result of the experience. This will alert managers to potential unsafe practices which must be addressed. It will also serve as a means of directly measuring the return on investment (ROI) of customer confidence and safety initiatives in terms of the customer experience.
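As a hedged sketch of how such an alert might work, the rule below flags low comfort or confidence scores for follow-up. The survey field names and the threshold are assumptions for illustration, not a specific survey platform’s API.

```python
COMFORT_ALERT_THRESHOLD = 3  # hypothetical cutoff on a 1-5 comfort/confidence scale

def check_for_safety_alert(response: dict) -> bool:
    """Return True (and notify) when a respondent reports feeling unsafe."""
    comfort = response.get("comfort_score")
    if comfort is not None and comfort <= COMFORT_ALERT_THRESHOLD:
        print(f"ALERT: location {response.get('location')} - "
              f"comfort score {comfort}, comment: {response.get('comment', '')!r}")
        return True
    return False

# Example usage with a hypothetical post-transaction survey response
check_for_safety_alert({
    "location": "Branch 014",
    "comfort_score": 2,
    "comment": "Tellers were not wearing masks.",
})
```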
Measure Customer Perception of Mitigation Strategies: Coronavirus mitigation strategies will become typical attributes of the customer experience. Beyond simply testing for the presence of these mitigation strategies, customer experience managers should determine customer perceptions of their appropriateness, efficacy, and perhaps most importantly, their confidence in these mitigation strategies.
Gather Employee Observations of Mitigation Strategies: Frontline employees spend nearly all their time in the brand-customer interface. As such, they have always been a wealth of information about the customer experience, and can be surveyed very efficiently. The post-pandemic customer experience is no exception.
First, as we discussed previously, employees have the same personal safety concerns as customers. Surveys of employees should endeavor to evaluate employees’ confidence in and comfort with coronavirus mitigation strategies.
Secondly, frontline employees, placed in the middle of the brand-customer interface, are in a perfect position to give feedback regarding the efficacy of mitigation strategies and the extent to which they fit into the desired customer experience – providing managers with valuable insight into adjustments which may make mitigation strategies fit more precisely into the overall customer experience objectives.
Independently Test for the Presence of Mitigation Strategies: All in-person channels across all industries will require the adoption of coronavirus mitigation strategies. Mystery shopping is the perfect tool to test for the presence of mitigation strategies – evaluating such strategies as: designed physical distancing, physical barriers between POS personnel and customers, mask compliance, sanitization, and duration of contact.
Alternative Research Sources for Behavioral Observations: Some customer experience managers may not want unnecessary people within their in-person channel. So the question arises, how can employee behaviors be measured without the use of mystery shoppers? One solution is to solicit behavioral observations directly from actual customers shortly after the in-person service interaction. Customers can be recruited onsite to provide their observations through the use of QR codes, or in certain industries after the event via e-mail. The purpose of these surveys is behavioral – asking the customers to recall if a specific behavior or service attribute was present during the encounter. From a research design standpoint, this practice is a little suspect, as asking people to recall the specifics about an event after the fact, without prior knowledge, is problematic. Customers are not prepared or prompted to look for and recall specific events. However, given the unique nature of the circumstances we are under, in some cases there is an argument that the benefits of this approach outweigh the research limitations.
Test Channel Performance and Alignment
The instantaneous need for alternative delivery channels has significantly raised the stakes in cross-channel alignment. As sales volume shifts to these alternative channels, customer experience researchers need to monitor the customer experience within all channels to measure the efficacy of the experience, as well as alignment of each channel to both each other and the overall brand objectives.
Finally, as more customers migrate away from in-person channels, customer experience researchers should endeavor to measure the customer experience within each channel. As more late adopters are forced by the pandemic to migrate to these channels, they may bring with them a completely different set of expectations relative to early adopters. Managers would therefore be well served to understand the expectations of these newcomers to the alternative channels so they can adjust the customer experience to meet these new customers’ expectations.
As commerce migrates away from conventional in-person channels to alternative delivery channels, the importance of these channels will increase. As a result, the quality and consistency of delivery in these channels will need to be measured through the use of mystery shoppers. Some industries are going to be problematic, as their economics do not currently support alternative delivery. With time, however, economic models will evolve to support alternative channels.
Conclusion
This is a difficult time. It will be the defining event of our generation.
The pandemic, and our reaction to it, is dramatically changing how humans interact with each other, and the customer experience is no exception. There is reason to suggest this difficult time could become a new normal. Managers of the customer experience need to understand the implications for the customer experience in the post-Covid environment, as the effects of the pandemic may never fully subside. Customer experience managers must consider the implications of this new normal, not only for the customer experience, but for customer experience measurement.

Customer Experience Measurement in the Coronavirus Age: Implications for Customer Experience
In summary, the most common cause of spread is believed to be airborne transmission – inhaling virus particles exhaled into the environment. The infectious dose of a virus is the amount of virus a person needs to be exposed to in order to establish an infection. We currently do not know the infectious dose for SARS-CoV-2; estimates range from a few hundred to a few thousand virus particles.[1] One virus particle will not cause an infection. To be infected, one must exceed the infectious dose, either by being exposed to a cough or a sneeze or, absent coughs or sneezes, by being exposed to the virus over time under normal activity.
This post draws on the foundation of the first to discuss the implications of the pandemic on the customer experience.
Modern-day customer experiences exist in a finely tuned ecosystem, where the dramatic changes resulting from the pandemic have upset the delicate balance, causing problems ranging from supply chain disruptions to an immediate shift away from in-person channels.
Furthermore, the pandemic represents what I call a moment of truth regarding the relationship with customers. Moments of truth are specific experiences of high importance, where a customer either forms or changes their opinion of a brand in meaningful and lasting ways. How brands respond to moments of truth, particularly in this time of global crisis, will strengthen or weaken the customers’ relationship to the brand.
Customers are stressed. They feel uncertainty, fear and, frankly, exhaustion. Ongoing concern for personal safety, the education of children, and the well-being of loved ones is exhausting. This uncertainty and fear drives customers to seek shelter from resources they trust. Brands which become a trusted resource, which provide comfort, true comfort, in the face of this crisis, have an opportunity not only to do the right thing, but to cement their customers’ relationship with the brand. On the other hand, brands which fail to do so risk destroying their customer relationships.
Care for all Stakeholders
Perhaps the most important way brands can respond to the moment of truth presented by this crisis is showing true care for stakeholders in the brand: customers, employees, and the community.
Care for Customers
Brands must communicate care for customers. Drawing on a personal example, March of 2020 was a particularly worrisome time for me. At that time, the Seattle area was considered one of the epicenters of the outbreak and mandatory stay-at-home orders were being introduced – fear ruled – fear driven by uncertainty: uncertainty with respect to the safety of myself and loved ones; uncertainty with respect to the financial future; uncertainty with respect to the state of the entire globe.
Amidst all this uncertainty and fear I received an email from Citigroup entitled “Covid-19. Let us know if we can help.” It communicated personal care for me, encouraged alternative channel use: online, mobile and 24/7 contact center assistance, and contained links to CDC guidance.
A week later the campaign continued with an update on the actions Citigroup was implementing in response to the pandemic; again, educating me about the digital tools available and offering personal assistance if needed.
Two and a half months later, in June, I received an email expressing “heartfelt thanks” for adapting to changes and remaining loyal. It described ways Citigroup was assisting with a variety of COVID-19 relief, specifically introducing a partnership with celebrity chef José Andrés’ World Central Kitchen campaign, distributing meals in low-income neighborhoods in big cities like New York and monitoring the globe for food shortages elsewhere. This not only demonstrated care for me personally, but care for the community.
Care for Communities
Citigroup’s donations to the World Central Kitchen campaign are one example of care for our communities. There are countless examples of brands offering community support.
- The brewery BrewDog shifted production from beer to hand sanitizer.
- A Spanish sports retailer donated scuba masks to hospitals.
- eBay offered free services to small businesses forced to switch from brick-and-mortar to ecommerce to keep their businesses afloat – pledging $100 million in support of this endeavor.
Care for Employees
Employees are important. They animate the brand and drive customer loyalty – particularly in moments of truth like these. Research has determined that in many retail and service environments, there is a positive correlation between employee satisfaction and both employee retention and customer loyalty. Employees are not immune from the fear and the stress of this crisis. Frontline employees spend nearly all their time in the brand-customer interface; they are the personal representatives of the brand.
Additionally, because these front-line employees spend the majority of their time in the brand-customer interface, they tend to have a level of understanding about the customer experience that management often misses.
As a result, it is incumbent on brands to attend to the stresses employees are under, demonstrate concern, and develop communication channels for employees to feed customer experience intelligence to management.
Delivery Channels
I’ve always been an advocate of meeting customers in their preferred channel: meeting them where they are today and delivering a seamless experience. Obviously, over recent decades there has been a migration from in-person channels to increasingly self-directed, alternative channels. The pandemic has immediately accelerated this shift. Be it telehealth, online banking, in-home instruction of our children, or a restaurant delivering through UberEats, providers of all types now face increasing pressure to bring their business to their customers’ homes.
Emotional Well Being
As observed earlier, this pandemic is a moment of truth between many brands and their customers. In our experience, customers primarily want three things from a provider: 1) empathy, 2) care/concern for their needs, and 3) competence. We see this constantly. Customers want to do business with brands that empathize with them, care about their needs, and are capable of satisfying those needs in a competent manner. Brands that seek to attend to the emotional needs of their customers during this moment of truth will earn the loyalty and positive word-of-mouth of their customers.
In-Person Precautions and Mitigation Strategies
While the pandemic has accelerated an ongoing transition to alternative channels, some industries require an in-person experience. Based on current science, in-person interactions can be relatively safe if conducted within the CDC and public health guidance outlined in the first part of this series:
- Physical Distancing: Estimates of exposure time all assume close personal contact. Physical distancing decreases the likelihood of receiving an infectious dose by putting space between ourselves and others – current recommendations are 6 feet.
Furthermore, many in-person transactions can now be done touch free. I recently had to rent a car, and was pleased to meet the rental attendant outside holding a tablet. The attendant took down all my information, I never had to touch or sign anything. In a different transaction, requiring a signature, I was offered a single use pen to keep.
- Masks: Masks are a core mitigation tool. Masks do not primarily act as a filter for the wearer, but suppress the amount of droplets an infected person can spread into the space around them. This reduces the risk that others will exceed the infectious dose of the virus.
- Ventilation: Well-ventilated areas disperse virus particles, making it less likely a dose exceeds the infectious limit. Like my car rental agency, brands should endeavor to provide well-ventilated spaces for employees and customers to interact – not only to protect customers but employees as well.
- Length of Exposure: Finally, brands should design service encounters to be as time efficient as possible. Again, the CDC advises a 15-minute exposure limit for close personal contact. Social distancing through physical distance, masks, and ventilation should increase this safe exposure limit. However, strategies should be implemented to make service encounters as brief as possible. For example, if you require information from your customers as part of the service interaction, collect this required information online or over the phone prior to an appointment. This could help to make customers and employees safer and more comfortable.
- Hand Washing & Sanitizer: Hand washing and sanitization are the primary defense against transfer infections.
Putting it All Together
Putting all this together, let’s look at the industry Kinesis has the most experience with. Kinesis’ largest practice is in the banking and financial services industry. Recently the American Bankers Association (ABA) released the results of an industry survey regarding publicly announced responses of US banks to the pandemic. [2]
Many banks are applying some of the concepts discussed above in creative ways. A review of a random selection of banks reveals the following responses ranked from most common to least common:
- Enhanced deep cleaning and disinfecting of work spaces;
- Implementing social distancing in work spaces, including branches;
- Encouraging use of alternative delivery channels, such as mobile and internet banking;
- Personalized assistance to customers negatively impacted by the pandemic;
- Increased donations to charity/ partnering with the local community to mitigate the effects of the pandemic;
- Allowing employees to work remotely if possible;
- Limiting access to branches (closing branch lobbies, limiting hours, appointment only banking);
- Paid time off for employees to self-quarantine or to care for school-age children;
- Rotating schedules of customer-facing staff to reduce risk (one institution has applied a 10 days on 10 days off policy); and
- Educating customers about pandemic-related fraud/scams.
[1] Geddes, Linda. “Does a high viral load or infectious dose make covid-19 worse?” newscientist.com, March 27, 2020. Web May 14, 2020.
[2] “America’s Banks Are Here to Help: The Industry Responds to the Coronavirus.” ABA.com, April 29, 2020. Web. May 19 2020.
Customer Experience Measurement in the Coronavirus Age: The Mechanism and Risk of Infection
Introduction
From Zoom happy hours, canceled events, and concerns over how best to educate our children, to economic disruption and caring for the victims, the SARS-CoV-2 pandemic and the resulting public health requirements are changing our lives in ways both big and small, superficial and tragic. The customer experience is certainly no exception. Writing about the effects of the pandemic while it unfolds is a unique challenge, as we are learning about the virus, its health effects, mitigation strategies, and overall effects on society in real time. Things change daily and we are all learning on the fly here. This series of blog posts is an early attempt to discuss the effects of the pandemic on customer experience research.
Before we begin, let me stress one thing. I am a market researcher who specializes in evaluating the customer experience. I am not an epidemiologist or doctor, and I have no training or experience in public health. As a result, I will refrain from expressing scientific or medical theories or opinions of my own. Any virus related conclusions or opinions expressed in this series of posts will be from credible sources and cited in footnotes. If at any point it appears I am drawing medical or scientific conclusions of my own, it is unintentional, and should not be regarded as such.
The need for managers of the customer experience to understand the implications of the post-SARS-CoV-2 environment will most likely survive the immediate pandemic. Changes in customer experience management will probably assume a more permanent nature. First, this novel coronavirus may never go completely away, but rather become endemic in our society, meaning it could be a constant presence.[1] Second, recent history suggests SARS-CoV-2 is not the only novel coronavirus we are going to face in the coming decades. Currently there are seven known coronaviruses that infect humans – prior to 2003 there were only four. In a relatively short period of time, three new coronaviruses have jumped from animals to humans.[2] The number of known coronaviruses to which humans are susceptible has nearly doubled in 17 years, so it does not require a great leap of the imagination to conclude this is not the last novel virus we will need to deal with.
This pandemic and its predicted aftermath represent a moment of truth for customers and their relationship to the brand. In an uncertain and risky environment, customers will be even more likely to build relationships with brands they trust. Forward thinking managers of the customer experience will respond by building more mechanisms to monitor customer perceptions of safety within the in-person channel and fulfillment via expanded alternative channels.
Mechanism of Infection
What we know now is that the virus appears to spread primarily through person-to-person contact – via people in close contact with each other – or, to a lesser extent, through secondary transfer from contaminated surfaces.
SARS-CoV-2 survives on most surfaces. Touching a contaminated surface and then touching your eyes, nose or mouth represents a risk of infection by transfer.[3] However, recent guidance from the CDC suggests transfer is not a significant mode of transmission.[4] That being said, high-touch surfaces such as door handles, elevator buttons, POS machines, and bathroom surfaces should still be considered a potential risk for transfer infection. The main mechanism of infection, though, is via close personal contact.
When an infected person coughs, sneezes, talks or performs any other activity exhaling air, respiratory droplets are produced. These droplets can land on the mouths or noses of people nearby, or in some circumstances, hang in the air in an aerosol form and be inhaled into the lungs.[5] Current evidence suggests most individuals with mild to moderate symptoms can be infectious up to 10 days after symptom onset. Further complicating this picture, it appears individuals without symptoms can be infectious even without knowing they are infected themselves.[6]
In order for customer experience managers to make informed choices about the customer experience in the post-Covid age, it is important to understand the mechanism of infection. The infectious dose of a virus is the amount of virus a person needs to be exposed to in order to establish an infection. The infectious dose varies depending on the virus (the flu can cause infection after exposure to as few as 10 virus particles, others require exposure to thousands of particles to establish an infection). Currently, the infectious dose of SARS-CoV-2 is not understood with any precision; however, some experts estimate it at a few hundred to a few thousand virus particles.[7]
Like fire needs three things to burn (oxygen, fuel and heat), in my layman’s expression, three factors dictate Covid-19 transmission: activity, duration and proximity.
Different activities release different amounts of virus particles into the environment. On the far end of the spectrum, a cough or sneeze releases about 200-million virus particles. Furthermore, the force of a cough or sneeze can aerosolize these particles (thus allowing them to hang in the air for a long time), or travel across a room in an instant. On the other end of the spectrum, breathing normally releases about 20 virus particles per minute, but with less force than a cough or sneeze. As a result, the particles expelled by breathing will tend to be expelled at a slower speed and travel a shorter distance. Speaking releases about 200 viral particles per minute.[8]
These rates of exposure are important in terms of understanding the time required to exceed the infectious dose threshold. Consider the following formula:

Time to Infection = Infectious Dose ÷ Rate of Particle Exposure

The time required to be infected, assuming close proximity with no precautions, is the infectious dose divided by the rate at which virus particles are expelled.
Assuming an infectious dose of 1,000 virus particles, very close proximity to someone speaking (close enough to inhale all the particles released by the speaker) would require 5 minutes to exceed the infectious dose:

1,000 particles ÷ 200 particles per minute = 5 minutes

Similarly, very close proximity to someone breathing normally would require a ten-fold increase in exposure (50 minutes):

1,000 particles ÷ 20 particles per minute = 50 minutes

Obviously, a single cough or sneeze with 200-million virus particles will instantly exceed the 1,000-particle threshold.
Again, currently, we do not know the infectious dose – estimates range from a few hundred to a few thousand virus particles. Therefore, the data is insufficient to determine the exact duration of time to acquire an infection. However, public health authorities do provide guidance.
Risk of Infection
The Centers for Disease Control and Prevention (CDC) advises, that for close contact with an individual in a non-healthcare setting, 15 minutes can be used as a threshold for the time to acquire an infectious dose (note: subsequent to the date of this blog post, the CDC’s guidance has been updated from 15 consecutive minutes to 15 non-consecutive minutes in total over a 24-hour period).[9]
Since we do not currently know SARS-CoV-2’s infectious dose, the key takeaway is that an individual is not going to be infected by a single virus particle. However, we are not free from risk. We, as a society, are going to need to weigh the risks. This will take the form of everyday people making everyday decisions about the risks they are willing to accept – both to themselves personally, and to society as a whole. “Nothing is without risk, but you can weigh the risks. . . . It’s going to be a series of judgment calls people will make every day,” as Dr. William Petri, a professor of infectious disease at the University of Virginia Medical School, told the Washington Post. [10]
Forward-thinking customer experience brands will consider how individuals and society as a whole weigh these risks and build customer experiences around both customer expectations and responsible civic commitment. The pandemic represents a moment of truth between brands and their customers. Building responsible and safe customer experiences will become a core driver of trust in the brand.
Some factors individual consumers and customer experience managers will need to consider as we weigh these risks include: [11]
- Distance: At a minimum, the environment and activity should allow for 6 feet of separation to be maintained.
- Duration: The duration of the activity should be short enough to minimize infection risk, considering the specific activity (breathing, talking, singing, etc.) and other mitigation efforts (distance, masks, ventilation, etc.).
- Ventilation: Indoor venues should be well ventilated. Outdoor venues are naturally well ventilated and, therefore, safer.
- Masks: Mask wearing by individuals will inhibit the spread of virus particles in the air. The CDC recommends wearing cloth face coverings in public settings where other social distancing measures are difficult to maintain (e.g., grocery stores and pharmacies). Masks are less of a filter to protect the wearer; rather, they inhibit the spread of virus droplets in the air by the wearer – masks protect others.[12]
- Transfer Risk: Customers and employees should avoid unnecessary contact with high-touch objects or surfaces, disinfecting surfaces and hands with hand sanitizer.

[1] “Nothing Like SARS: Researchers Warn The Coronavirus Will Not Fade Away Anytime Soon.” npr.org, June 11, 2020. Web. August 13, 2020.
[2] Fred Hutchinson Cancer Research Center. Dr. Amitabha Gupta “Fred Hutch and Covid-19.” August 4, 2020. Video, 10:15. https://www.youtube.com/watch?v=iaa40DflvOk&feature=youtu.be.
[3] Skinner, Michael. “expert reaction to questions about COVID-19 and viral load” sciencemediacentre.org, March 26, 2020. Web. May 13, 2020.
[4] “How COVID-19 Spreads.” CDC.gov, May 21, 2020. Web. May 21, 2020.
[5] “How COVID-19 Spreads.” CDC.gov, May 21, 2020. Web. May 21, 2020.
[6] “Transmission of SARS-CoV-2: implications for infection prevention precautions.” who.int, July 9, 2020. Web. August 13, 2020.
[7] Geddes, Linda. “Does a high viral load or infectious dose make covid-19 worse?” newscientist.com, March 27, 2020. Web May 14, 2020.
[8] Bromage, Eric. “The Risks – Know Them – Avoid Them.” Erinbromage.com, May 6, 2020. Web. May 13 2020.
[9] “Public Health Recommendations for Community-Related Exposure.” CDC.gov, March 30, 2020. Web. May 15 2020.
[10] Shaver, Katherine. “Wondering what’s safe as states start to reopen? Here’s what some public health experts say.” Washingtonpost.com, May 15, 2020. Web. May 15, 2020.
[11] Shaver, Katherine. “Wondering what’s safe as states start to reopen? Here’s what some public health experts say.” Washingtonpost.com, May 15, 2020. Web. May 15, 2020.
[12] “About Masks.” CDC.gov, August 6, 2020. Web. August 14 2020.
Implications of CX Consistency for Researchers – Part 3 – Common Cause vs. Special Cause Variation
Previously, we discussed the implications of intra-channel consistency for researchers.
This post considers two types of variation in the customer experience: common and special cause variation, and their implications for customer researchers.
The concepts of common and special cause variation come from statistical process control and were popularized by process management disciplines such as Six Sigma.
Common cause variation is the normal, random variation inherent in a system; it is the statistical noise within the system. Examples of common cause variation in the customer experience are:
- Poorly defined, poorly designed, inappropriate policies or procedures
- Poor design or maintenance of computer systems
- Inappropriate hiring practices
- Insufficient training
- Measurement error
Special cause variation, on the other hand, is not random and is not predicted by the system’s normal probability distribution. It is the signal within the system. Examples of special cause variation include:
- High demand/ high traffic
- Poor adjustment of equipment
- Just having a bad day
What are the implications of common and special cause variation for customer experience researchers?
Given the differences between common cause and special cause variation, researchers need a tool to distinguish between the two: a means of determining whether any observed variation in the customer experience is statistical noise or a signal within the system. Control charts are the statistical tool for making that determination.
Control charts track measurements against upper and lower quality control limits. These limits define statistically significant variation over time (typically at 95% confidence), meaning there is a 95% probability that variation falling outside the limits reflects an actual change in the customer experience (special cause variation) rather than normal common cause variation. Observed variation within the quality control limits is common cause variation; variation that migrates outside the limits is special cause variation.
To illustrate this concept, consider the following example of mystery shop results:
This chart depicts a set of mystery shop scores which both vary from month to month and generally appear to trend upward.
Customer experience researchers need to provide managers with a means of determining whether this month-to-month variation is statistical noise or a meaningful signal within the system. Turning the chart into a control chart, by adding statistically defined upper and lower quality control limits, will determine whether the monthly variation is common or special cause.
To define the quality control limits, the customer experience researcher needs three additional pieces of information: the count of shops for each month, the standard deviation of scores for each month, and the averages of these values (along with the average score) across all months.
The following table adds these three additional pieces of information into our example:
Month | Count of Mystery Shops | Average Mystery Shop Scores | Standard Deviation of Mystery Shop Scores |
May | 510 | 83% | 18% |
June | 496 | 84% | 18% |
July | 495 | 82% | 20% |
Aug | 513 | 83% | 15% |
Sept | 504 | 83% | 15% |
Oct | 489 | 85% | 14% |
Nov | 494 | 85% | 15% |
Averages | 500 | 83.6% | 16.4% |
To define the upper and lower quality control limits (UCL and LCL, respectively), apply the following formulas, where 1.96 is the z-value corresponding to 95% confidence:

UCL = x̄ + 1.96 × (SD / √n)
LCL = x̄ − 1.96 × (SD / √n)

Where:
x̄ = Grand mean of the score
n = Mean sample size (number of shops per month)
SD = Mean standard deviation

These formulas yield quality control limits at 95% confidence, which means there is a 95% probability that any variation observed outside these limits is special cause variation rather than normal common cause variation within the system. A short calculation sketch follows below.
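As a concrete illustration, here is a minimal Python sketch (not part of the original analysis) that computes the quality control limits from the monthly summary table above and flags each month as common or special cause. The 1.96 multiplier is the z-value for 95% confidence; because the table values are rounded, borderline months may classify slightly differently than the unrounded chart data.

```python
import math

# Monthly mystery shop summaries from the table above:
# (month, count of shops, mean score, standard deviation)
monthly_results = [
    ("May", 510, 0.83, 0.18),
    ("June", 496, 0.84, 0.18),
    ("July", 495, 0.82, 0.20),
    ("Aug", 513, 0.83, 0.15),
    ("Sept", 504, 0.83, 0.15),
    ("Oct", 489, 0.85, 0.14),
    ("Nov", 494, 0.85, 0.15),
]

# Grand mean, mean sample size, and mean standard deviation across months
grand_mean = sum(score for _, _, score, _ in monthly_results) / len(monthly_results)
mean_n = sum(n for _, n, _, _ in monthly_results) / len(monthly_results)
mean_sd = sum(sd for _, _, _, sd in monthly_results) / len(monthly_results)

# Quality control limits at 95% confidence (z = 1.96)
margin = 1.96 * mean_sd / math.sqrt(mean_n)
ucl = grand_mean + margin  # upper quality control limit
lcl = grand_mean - margin  # lower quality control limit
print(f"UCL = {ucl:.1%}, LCL = {lcl:.1%}")

# Months outside the limits are flagged as special cause variation;
# months inside the limits are treated as common cause (noise).
for month, _, score, _ in monthly_results:
    label = "special cause" if (score > ucl or score < lcl) else "common cause"
    print(f"{month}: {score:.0%} -> {label}")
```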
Calculating these quality control limits and applying them to the above chart produces the following control chart, with upper and lower quality control limits depicted in red:
This control chart now answers the question of which variation is common cause and which is special cause. The general upward trend appears to be statistically significant, with the most recent month above the upper quality control limit. Additionally, the control chart identifies a period of special cause variation in July: with 95% confidence, we know some special cause drove the scores below the lower control limit. Perhaps this special cause was employee turnover, perhaps a new system rollout, or perhaps a weather event that impacted the customer experience.
Integrated Digital First CX Model: Implications for CX Researchers
In previous posts in this five-part series on building an integrated digital-first service model, we discussed the model itself; this post considers its implications for CX researchers.
An integrated delivery channel requires an equally integrated set of research methodologies to measure the customer experience. Researchers should think in terms of exposure and moments of truth as they monitor each waypoint in the customer experience.
Understanding Exposure & Moments of Truth Risks
Digital waypoints with high exposure risk should be tested thoroughly with usability testing, focus groups, ethnography, and other qualitative research to ensure features meet customer needs and are programmed correctly. Once programmed and tested, they need to be monitored with ongoing audits.
Waypoints with higher moment of truth risk are best monitored with post-transaction surveys, mystery shopping and the occasional focus group.
Taken together, the research toolkit for these waypoints includes usability tests, ongoing audits, mystery shopping, and focus groups.
Integrated Channel CX Measurement
When measuring the customer experience across multiple channels in an integrated manner, it is important to gather both consistent measures across all channels and measures specific to each channel. Each channel has its own specific needs; however, consistent measures across all channels provide context and a point of comparison.
Here is what an integrated omni-channel research plan may look like:
Kinēsis recommends measuring each channel against a set of consistent brand attribute measurements. Brands have personality, and it is incumbent on CX researchers to evaluate each channel against the overall desired brand personality objectives. A channel disconnected from the institution’s brand objectives can do a lot of damage to the institution’s perceived image.
Kinēsis uses brand adjectives and agreement statements to measure customer impressions of the brand. Ask yourself what five or six adjectives you would like customers to use to describe your institution. Then simply take these adjectives and ask customers whether they describe the customer experience.
Next, ask yourself what statements you would like customers to use when describing their perception of the brand as a result of any interaction. Statements such as:
• We are easy to do business with.
• We are knowledgeable.
• We are interested in members as people, and concerned for their individual needs.
• We are committed to the community.
These statements can be incorporated into the research by asking customers the extent to which they agree with each statement, as illustrated in the sketch below.
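For illustration only, here is a minimal sketch of how agreement-statement responses might be summarized. It assumes a five-point agreement scale and a top-two-box summary (the share of respondents answering “agree” or “strongly agree”); both the scale and the summary method are assumptions for this example rather than a prescribed Kinēsis methodology, and the responses are hypothetical.

```python
# Hypothetical responses keyed by brand statement
# (1 = strongly disagree ... 5 = strongly agree)
responses = {
    "Easy to do business with": [5, 4, 4, 3, 5, 2, 4],
    "Knowledgeable": [4, 4, 5, 5, 3, 4, 4],
    "Interested in me and my individual needs": [3, 2, 4, 5, 4, 3, 4],
    "Committed to the community": [5, 5, 4, 4, 4, 5, 3],
}

def top_two_box(scores):
    """Share of responses in the top two scale points (4 or 5)."""
    return sum(1 for s in scores if s >= 4) / len(scores)

for statement, scores in responses.items():
    print(f"{statement}: {top_two_box(scores):.0%} agreement")
```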
Again, brands have personality. Brand adjectives and agreement statements are an excellent way to tie disparate research across multiple channels together with consistent measures of perceptions of the brand personality as a result of the experience.
Channel Specific Dimensions
Different channels have different service attributes; therefore, it is important to provide each channel manager with research specific to their channel. Digital channels, for example, may require measures of appeal, identity, navigation, content, value, and trust. Non-digital channel managers may require measures such as reliability, responsiveness, competence, empathy, and the physical environment.
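As a rough illustration of how such a plan might be organized, the sketch below pairs consistent brand measures with channel-specific dimensions. The attribute lists follow the examples above; the channel names and the data structure itself are illustrative assumptions, not a prescribed schema.

```python
# Shared brand measures applied consistently across every channel
SHARED_BRAND_MEASURES = [
    "easy to do business with",
    "knowledgeable",
    "interested in individual needs",
    "committed to the community",
]

# Channel-specific service dimensions (channel names are assumptions)
CHANNEL_DIMENSIONS = {
    "digital": ["appeal", "identity", "navigation", "content", "value", "trust"],
    "contact_center": ["reliability", "responsiveness", "competence", "empathy"],
    "branch": ["reliability", "responsiveness", "competence", "empathy",
               "physical environment"],
}

def survey_dimensions(channel):
    """Combine shared brand measures with channel-specific dimensions."""
    return SHARED_BRAND_MEASURES + CHANNEL_DIMENSIONS.get(channel, [])

print(survey_dimensions("digital"))
```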
Efficacy of the Experience
Regardless of channel, all research should contain consistent measures of the efficacy of the experience. The efficacy of the experience reflects the institution’s ultimate objective for every customer experience. Ask yourself, how do we want the customer to feel or think as a result of the interaction?
Some examples of efficacy measurements include:
• Purchase Intent/Return Intent: Kinēsis has a long history of using purchase and return intent as a dependent variable.
• Likelihood of Referral: Likelihood-of-referral measures (like Net Promoter Score) are generally accepted as a reliable proxy for customer loyalty (a simple calculation sketch follows this list).
• Member/Customer Advocacy: The extent to which the financial institution is an advocate for the customer is best measured with an agreement scale applied to the statement, “This bank cares about me, not just the bottom line.” Agreement with this statement is also an excellent proxy measurement for loyalty.
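For the likelihood-of-referral measure above, here is a minimal sketch of the standard Net Promoter Score calculation on a 0–10 scale (promoters score 9–10, detractors 0–6). The sample ratings are hypothetical.

```python
def net_promoter_score(ratings):
    """Return NPS as a percentage: % promoters minus % detractors."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical 0-10 likelihood-of-referral responses
sample_ratings = [10, 9, 8, 7, 9, 6, 10, 5, 9, 8]
print(f"NPS: {net_promoter_score(sample_ratings):.0f}")  # -> NPS: 30
```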