A New Normal: Implications for Bank Customer Experience Measurement Post Pandemic – Planned Interactions
Part 2: Research Tools to Monitor Planned Interactions through the Customer Lifecycle
As we explored in an earlier post, Three Types of Customer Experiences CX Managers Must Understand, there are three types of customer interactions: Planned, Stabilizing, and Critical.
Planned interactions are designed to increase customer profitability across the customer lifecycle by engaging customers with relevant interactions and content in an integrated omni-channel environment. They will continue to grow in importance as the financial services industry shifts to an integrated digital-first model.
These planned interactions are frequently triggered by changes in account usage, financial situation, family profile, and the like. CRM analytics combined with Big Data are becoming quite effective at recognizing such opportunities and prompting planned interactions in response. Customer experience managers should have a process to record and analyze the quality of execution of planned interactions, with the objective of evaluating their effectiveness regardless of the channel.
The key to an effective strategy for planned interactions is relevance. Triggered requests for increased engagement must be made in the context of the customer’s needs and with their permission; otherwise, the requests will come off as clumsy and annoying, and give the impression the bank is not really interested in the customer’s individual needs. By aligning information about execution quality (cause) and customer impressions (effect), customer experience managers can build a more effective and relevant approach to planned interactions.
Research Plan for Planned Interactions
The first step in designing a research plan to test the efficacy of these planned interactions is to define the campaign. Ask yourself: what customer interactions are planned through these layers of integrated channels? Mapping the process will define your research objectives, allowing an informed judgment of what to measure and how to measure it.
For example, assume that after acquisition and onboarding, a bank runs a campaign of planned interactions triggered by past engagement. These interactions are segmented into the following phases of the customer lifecycle: engagement, growth and retention.

Engagement Phase
Often it is instructive to think of customer experience research in terms of the bank-customer interface, employing different research tools to study the customer experience from both sides of this interface.
In our example above, management may measure the effectiveness of planned experiences in the engagement phase with the following research tools:
Customer Side:
- Post-Event Surveys: These post-experience surveys are event-driven: a transaction or service interaction determines whether the customer is selected for a survey. They can be performed across all channels – digital, contact center and in-person. As the name implies, the purpose of this type of survey is to measure satisfaction with a specific customer experience.
- Overall Satisfaction Surveys: Overall satisfaction surveys measure customer satisfaction among the general population of customers, regardless of whether they recently conducted a transaction. They give managers valuable insight into overall satisfaction, engagement, image and positioning across the entire customer base, not just active customers.

Brand Side:
- Employee Surveys: Ultimately, employees are at the center of the integrated customer experience model. Employee surveys often measure employee satisfaction and engagement; however, there is far more value to be gleaned from employees. We use them to understand what is going on at the customer-employee interface, leveraging employees as a valuable and inexpensive source of customer experience information. They not only provide intelligence into the customer experience, but also evaluate the level of support within the organization and identify perceptual gaps between management and frontline personnel.
- Digital Delivery Channel Shopping: Be it a website or mobile app, digital mystery shopping allows channel managers to test ease of use, navigation and the overall customer experience of these digital channels.
- Transactional Mystery Shopping: Mystery shopping is about alignment; it is an excellent tool to align the customer experience to the brand. Best-in-class mystery shopping answers the question: is our customer experience consistent with our brand objectives? Historically, mystery shopping has been confined to the in-person channel; however, we are seeing increasing use of mystery shopping with contact center agents.
Growth Phase
In the growth phase, we measure the effectiveness of planned experiences on both sides of the customer interface with the following research tools:
Customer Side:
- Awareness Surveys: Awareness of the brand and its products and services is central to planned service interactions. Managers need to know how awareness and attitudes change as a result of these planned experiences.
- Wallet Share Surveys: These surveys evaluate customer engagement with and loyalty to the institution. Specifically, they determine whether customers consider the institution their primary provider of financial services, and identify potential roadblocks to wallet share growth.

Brand Side:
- Cross-Sell Mystery Shopping: In these unique mystery shops, shoppers are seeded into the lead/referral process, and the sales behaviors and their effectiveness are evaluated in an outbound sales interaction. These shops work very well for planned sales interactions within the contact center environment.
Retention Phase
Finally, planned experiences within the retention phase of the customer lifecycle may be monitored with the following tools:
Customer Side:
- Critical Incident Technique (CIT): CIT is a qualitative research methodology designed to uncover the details surrounding a service encounter that a customer found particularly satisfying or dissatisfying. It identifies common critical incidents and their impact on the customer experience and engagement, giving managers an informed perspective from which to prepare employees to recognize moments of truth and respond in ways that lead to positive outcomes.
- Lost Customer Surveys: Closed account surveys identify sources of run-off or churn to provide insight into improving customer retention.
- Comment Listening: Comment tools are not new, but with modern Internet-based technology they can be used as a valuable feedback tool to identify at-risk customers and mitigate the causes of their dissatisfaction.

Brand Side:
- Employee Surveys: Employees observe the customer relationship firsthand. They are a valuable source of customer experience information and can provide a lot of context on the types of negative experiences customers frequently encounter.
- Life Cycle Mystery Shopping: If an integrated channel approach is the objective, the customer experience should be measured in an integrated manner. In lifecycle shops, shoppers interact with the bank over a period of time, across multiple touch points (digital, contact center and in-person). This approach provides broad and deep observations about sales and service alignment to the brand, and about performance throughout the customer lifecycle across all channels.
Call to Action – Make the Most of the Research
For customer experience surveys, we recommend testing the effectiveness of planned interactions by benchmarking three loyalty attitudes:
- Would Recommend: The likelihood of the customer recommending the bank to a friend, relative or colleague.
- Customer Advocacy: The extent to which the customer agrees with the statement, “My bank cares about me, not just the bottom line.”
- Primary Provider: Does the customer consider the institution their primary provider for financial services?
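The three loyalty benchmarks above can be tabulated directly from raw survey responses. Below is a minimal sketch in Python; the response data, field names and top-two-box scoring rule are invented for illustration, and the actual scales would depend on the survey design.

```python
# Hypothetical survey responses: each dict is one respondent.
# "recommend" is a 1-5 likelihood-of-referral scale, "advocacy" a 1-5
# agreement scale with "My bank cares about me, not just the bottom
# line", and "primary" a yes/no primary-provider question.
responses = [
    {"recommend": 5, "advocacy": 4, "primary": True},
    {"recommend": 4, "advocacy": 2, "primary": True},
    {"recommend": 2, "advocacy": 3, "primary": False},
    {"recommend": 5, "advocacy": 5, "primary": True},
]

def top_two_box(scores, scale_max=5):
    """Share of respondents giving one of the top two scale points."""
    return sum(s >= scale_max - 1 for s in scores) / len(scores)

would_recommend = top_two_box([r["recommend"] for r in responses])
advocacy = top_two_box([r["advocacy"] for r in responses])
primary_provider = sum(r["primary"] for r in responses) / len(responses)

print(f"Would recommend (top-2 box): {would_recommend:.0%}")
print(f"Customer advocacy (top-2 box): {advocacy:.0%}")
print(f"Primary provider: {primary_provider:.0%}")
```

Tracking these three percentages over time, before and after planned interactions are introduced, is what turns them into benchmarks rather than one-off readings.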
For mystery shopping, we find that linking observations to a dependent variable, such as purchase intent, identifies which sales and service behaviors drive that intent – informing decisions about training and incentives to reinforce those behaviors.
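One simple way to make this linkage concrete is to compare purchase intent on shops where a given behavior was observed against shops where it was not. The sketch below uses hypothetical shop records and invented behavior names; in practice this analysis is often done with regression models, but a lift comparison illustrates the idea.

```python
# Hypothetical mystery shop records: which sales behaviors the shopper
# observed (1 = observed), and stated purchase intent (1 = yes, 0 = no).
shops = [
    {"greeted": 1, "asked_needs": 1, "offered_product": 1, "intent": 1},
    {"greeted": 1, "asked_needs": 0, "offered_product": 0, "intent": 0},
    {"greeted": 0, "asked_needs": 1, "offered_product": 1, "intent": 1},
    {"greeted": 1, "asked_needs": 1, "offered_product": 0, "intent": 1},
    {"greeted": 0, "asked_needs": 0, "offered_product": 0, "intent": 0},
    {"greeted": 1, "asked_needs": 1, "offered_product": 1, "intent": 1},
]

def intent_lift(records, behavior):
    """Difference in mean purchase intent with vs. without the behavior."""
    with_b = [r["intent"] for r in records if r[behavior] == 1]
    without_b = [r["intent"] for r in records if r[behavior] == 0]
    if not with_b or not without_b:
        return None  # behavior always (or never) observed; no comparison
    return sum(with_b) / len(with_b) - sum(without_b) / len(without_b)

for behavior in ("greeted", "asked_needs", "offered_product"):
    print(f"{behavior}: lift in purchase intent = {intent_lift(shops, behavior):+.2f}")
```

Behaviors with the largest lift are the natural candidates for training and incentive programs, though with real data one would also want sample sizes large enough to rule out chance differences.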
As the integrated digital-first business model accelerates, planned interactions will continue to grow in importance, and customer experience managers should build monitoring tools to evaluate the efficacy of these planned experiences in driving desired customer attitudes and behaviors.

A New Look at Comment Cards: Best Practices in Bank Customer Experience Measurement Design – Customer Comments & Feedback
Customer comment tools give financial institutions a valuable way to identify and respond to customers who have had a negative service experience and may be at risk of attrition or of spreading negative word of mouth.
Beyond randomly surveying customers who have recently conducted a service interaction at a branch or call center, banks should also provide an avenue for self-selected customer feedback: feedback from customers who have not been selected to participate in a survey but want to comment on their experience.
In the past, the vehicle for collecting this unsolicited feedback was the good old-fashioned comment card. Today, the Internet offers a much more efficient means of collecting it. For the branch channel, invitations to provide feedback, with a URL to an online comment form, can be printed on transaction receipts. For call centers, customers can be directed to IVR systems that capture voice feedback. Website and mobile users can be offered online comment forms as well.
Unsolicited feedback tools are not surveys and should not be used as surveys; in fact, they make terrible customer satisfaction surveys. Many institutions try to turn them into surveys by asking customers to rate things such as service, convenience and product selection. But these comment channels do not yield reliable information, because they do not come from typical customers. The people who provide this feedback tend to fall into one of four groups:
- Extremely happy customers
- Extremely unhappy customers
- Extremely bored customers
- Customers with requests (for products, new store locations, etc.)
Notice the operative word in the first three categories: extreme. If a customer is satisfied with the product or service, why bother to give feedback? Customers expect to be satisfied. Having your expectations met is not something to write about. In research parlance, the sample is self-selected, and the people who provide such feedback are not likely to be representative of the general population of customers. It therefore makes no sense to ask these people to provide ratings that are going to be tabulated and averaged. The results will be useless at best and completely misleading at worst.
A better approach is to design the feedback form as a letter to the bank president, which looks something like this:
“Dear [President’s name]:
Here is something I would like you to know . . .
[Lots of white space]
Sincerely yours,”
[Space for name, address and phone number]
Additionally, a check box can be included asking customers whether they would like someone to contact them as a result of their feedback.
This type of feedback tool will deliver valuable qualitative data about the experience that prompted the customer to provide the feedback.
It is essential to put a system in place for analyzing and responding to the feedback. Start by sorting the comments according to whether the customer wants a reply. There are ways to streamline this process, but ignoring it makes matters worse, because customers (the angry ones, at least) will expect a reply. Responding to customer concerns, on the other hand, makes comment tools exceptionally valuable. First, they provide a way to identify and reply to customers who have had a negative service experience and may be at risk of attrition, or of undermining the brand with negative word of mouth or, worse, negative social media commentary. Second, they minimize negative word-of-mouth advertising that would undermine marketing efforts, and increase positive word of mouth (customers who have had a problem fixed are famous for becoming vocal advocates of a company). The flip side is that customers who have had a positive experience can be thanked for their feedback, which encourages customer loyalty.
The next step in acting on the qualitative feedback is to reduce it into quantifiable themes through the process of coding, where comments are grouped by theme. For instance, 18% of comments may have referred to “slow service” and 14% to “lack of job knowledge”. Now, we can monitor the frequency of various themes by business unit and over time.
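Once comments have been coded, the tallying step is mechanical. A minimal sketch follows; the comments, theme labels and counts are invented for illustration, and a single comment may carry more than one theme code.

```python
from collections import Counter

# Hypothetical coded comments: each entry lists the theme codes a
# coder assigned to one customer comment.
coded_comments = [
    ["slow service"],
    ["slow service", "lack of job knowledge"],
    ["friendly staff"],
    ["lack of job knowledge"],
    ["slow service"],
]

# Count how many comments mention each theme.
theme_counts = Counter(t for themes in coded_comments for t in themes)
total = len(coded_comments)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count / total:.0%} of comments")
```

In practice, the labor-intensive step is the coding itself, assigning themes to free-text comments, whether done manually or assisted by text analytics; once coded, the same tally can be cut by business unit or reporting period.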
Comment tools are not new, but with modern technology can be employed as a valuable feedback tool to identify at risk customers and mitigate the causes of their dissatisfaction.
Finally, the unsolicited nature of customer comments offers a unique opportunity to feed the themes identified in the comments back into customer survey design, allowing managers to determine whether the issues uncovered are broadly present across all customers.
For more posts in this series, click on the following links:
- Introduction: Best Practices in Bank Customer Experience Measurement Design
- Customer Surveys: Best Practices in Bank Customer Experience Measurement Design
- Mystery Shopping: Best Practices in Bank Customer Experience Measurement Design
- Leverage Unrecognized Experts in the Customer Experience: Best Practices in Bank Customer Experience Measurement Design – Employee Surveys
- Filling in the White Spaces: Best Practices in Bank Customer Experience Measurement Design – Social Listening
- Customer Experience Measurement Implications of Changing Branch Networks
Integrated Digital First CX Model: Implications for CX Researchers
In previous posts in this five-part series on building an integrated digital-first service model, we introduced the concepts of exposure and moments of truth. An integrated delivery channel requires an integration of research methodologies to measure the customer experience, and researchers should think in terms of exposure and moments of truth as they monitor each waypoint in the customer experience.
Understanding Exposure & Moments of Truth Risks
Digital waypoints with high exposure risk should be tested thoroughly with usability, focus groups, ethnography and other qualitative research to ensure features meet customer needs and are programmed correctly. Once programmed and tested, they need to be monitored with ongoing audits.
Waypoints with higher moment of truth risk are best monitored with post-transaction surveys, mystery shopping and the occasional focus group.
In summary, the core tools for monitoring these waypoints are:
- Usability Tests
- Ongoing Audits
- Mystery Shopping
- Focus Groups
Integrated Channel CX Measurement
When measuring the customer experience across multiple channels in an integrated manner, it is important both to gather consistent measures across all channels and to gather measures specific to each channel. Each channel has its own specific needs; however, consistent measures across all channels provide context and a point of comparison.
Here is what an integrated omni-channel research plan may look like:
Kinēsis recommends measuring each channel against a set of consistent brand attribute measurements. Brands have personality, and it is incumbent on CX researchers to evaluate each channel against the overall desired brand personality objectives. A channel disconnected from the institution’s brand objectives can do a lot of damage to the institution’s perceived image.
Kinēsis uses brand adjectives and agreement statements to measure customer impressions of the brand. Ask yourself what five or six adjectives you would like customers to use to describe your institution. Then simply take those adjectives and ask customers whether they describe the customer experience.
Next, ask yourself what statements you would like customers to make about the brand as a result of any interaction. Statements such as:
• We are easy to do business with.
• We are knowledgeable.
• We are interested in members as people, and concerned for their individual needs.
• We are committed to the community.
These statements can be incorporated into the research by asking customers the extent to which they agree with each one.
Again, brands have personality. Brand adjectives and agreement statements are an excellent way to tie disparate research across multiple channels together with consistent measures of perceptions of the brand personality as a result of the experience.
Channel Specific Dimensions
Different channels have different service attributes; therefore, it is important to provide each channel manager with research relevant to their channel. Digital channels, for example, may require measures of appeal, identity, navigation, content, value and trust. Managers of non-digital channels may require measures such as reliability, responsiveness, competence, empathy and the physical environment.
Efficacy of the Experience
Regardless of channel, all research should contain consistent measures of the efficacy of the experience – that is, whether the interaction achieved what the institution intended. Ask yourself: how do we want the customer to feel or think as a result of the interaction?
Some examples of efficacy measurements include:
• Purchase Intent/Return Intent: Kinēsis has a long history of using purchase intent as a dependent variable in this kind of research.
• Likelihood of Referral: Likelihood of referral measures (like Net Promoter Score) are generally accepted as a reliable proxy measure for customer loyalty.
• Member/Customer Advocacy: The extent to which customers feel the institution is an advocate for them is best measured with an agreement scale applied to the statement, “This bank cares about me, not just the bottom line.” Agreement with this statement is also an excellent proxy measure for loyalty.
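As a worked illustration of the likelihood-of-referral measure, the standard Net Promoter Score calculation subtracts the share of detractors (scores 0–6 on the 0–10 scale) from the share of promoters (scores 9–10). A minimal sketch with invented scores:

```python
def net_promoter_score(scores):
    """NPS on a 0-10 likelihood-of-referral scale: percentage of
    promoters (9-10) minus percentage of detractors (0-6),
    yielding a score from -100 to +100."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical referral-likelihood responses from one channel
scores = [10, 9, 8, 7, 6, 10, 3, 9]
print(f"NPS: {net_promoter_score(scores):.0f}")  # 4 promoters, 2 detractors of 8
```

Because the same formula can be applied to responses gathered in any channel, it works well as one of the consistent cross-channel efficacy measures discussed above.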