The migration away from the branch channel that began decades ago has accelerated in recent years, aided by the confluence of the pandemic and technical advances. Banks now operate in an age where it is possible to deliver a seamless, integrated, digital-first retail banking delivery model. Such a digital-first business model must accomplish two objectives in the digital space: it must foster trust and deliver personalization.
Kinēsis’ research into issues of trust and personalization suggests financial institutions have much work to do to foster trust and deliver personalization. Nearly as many customers distrust their primary financial institution as trust it. Meanwhile, digital channels exhibit a weaker relationship between trust and satisfaction than non-digital channels.
Customer perceptions of trust and satisfaction with their primary financial institution vary greatly based on what they value in the relationship with a financial institution. Importantly, for managers of the customer experience, what customers value in a relationship with a financial institution also reflects channel preference and selection.
Understanding customer segments should inform channel management.
In our research, Kinēsis segmented survey respondents based on their answers to a battery of questions regarding which retail financial services are most important to them. This segmentation grouped respondents into four attitudinal segments, which we describe as follows:
Show Me The Money
These customers seek financial value from financial institutions. They are significantly more likely than other respondents to feel value for money and competitive rates and fees are most important when doing business with a financial service provider.
Service Quality
These customers value personalized service from their financial service providers. They tend to feel the most important attributes of a financial service provider are personalized service with polite and knowledgeable staff, quick and efficient service, and fast resolution of any issues.
Products and Services
Customers in this segment were more likely to value a broad range of products, loyalty programs, ethical and sustainable business practices, recommendations, and an appealing brand.
Don’t Make Me Wait
These customers value efficiency. Almost exclusively, they respond that quick and efficient service and fast resolution of any issues are most important to them.
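As an illustrative sketch only (not Kinēsis’ actual methodology), this kind of attitudinal segmentation can be produced by clustering survey responses. The attribute names and the randomly generated respondents below are hypothetical:

```python
# Hypothetical sketch: clustering survey respondents into attitudinal
# segments with a simple k-means over 0/1 importance flags
# ("is this attribute among your most important?").
import random

random.seed(0)

ATTRIBUTES = ["value_for_money", "competitive_rates", "personalized_service",
              "quick_service", "fast_resolution", "broad_products", "loyalty_program"]

def distance(a, b):
    # Squared Euclidean distance between two response vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def k_means(points, k, iters=20):
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assign each respondent to the nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            clusters[min(range(k), key=lambda i: distance(p, centers[i]))].append(p)
        # Recompute each center as the mean of its cluster.
        centers = [
            [sum(col) / len(c) for col in zip(*c)] if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

# Hypothetical respondents: each flags which attributes matter most to them.
respondents = [[random.randint(0, 1) for _ in ATTRIBUTES] for _ in range(200)]
centers, clusters = k_means(respondents, k=4)
for i, c in enumerate(clusters):
    print(f"Segment {i}: {len(c)} respondents")
```

In practice, the resulting clusters would then be profiled against the battery to produce descriptive labels like those above.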
Let’s explore how customer attitudes and values can reflect themselves in trust, satisfaction and behavior.
Trust in Financial Institution
Customers who value products and services have stronger trust in their primary financial institution, while those who value efficiency display weaker trust.
When asked the extent to which they agreed with the statement, “My primary financial institution looks after my long-term financial wellbeing,” customers who value products and services professed the strongest agreement, and were significantly more likely to agree with this statement than customers who value efficiency.
Satisfaction with Financial Institution by Channel
Customers who value products tend to be more satisfied, while customers who seek financial value tend to be less satisfied.
In general, automated channels appear to have higher satisfaction compared to personal channels. Customers expressed the highest satisfaction for mobile apps and the lowest satisfaction for contact centers.
Preferred Channel to Open Account
Customers who want quality products and services are significantly more likely to want to use a mobile app to open an account, rather than visit a branch.
About half of the respondents preferred to open an account at a branch, about one-third would prefer to use the website, while only one in ten would prefer a mobile app to open an account. The customer segment most likely to find appeal in using the mobile app are those who value products and services.
Preferred Channel for Problem Resolution
The contact center is the preferred channel to resolve a problem, followed by the branch.
Customers who want quality products and services are significantly more likely to want to use a mobile app to resolve a problem, rather than visit a branch.
Preferred Channel for Information
When asked how they prefer to get information, a plurality of customers prefer to use the website, followed by the contact center.
Customers who seek service quality are significantly more likely than financial value seekers to want to visit a branch to get information.
Preferred Channel for Advice
It is clear most customers still prefer the branch when seeking advice. Seeking advice is what we refer to as a moment of truth: a transaction of high importance to the customer relationship. It is clear a little hand-holding is appreciated across all segments.
Customers who seek service quality are significantly less likely than others to want to visit a website to seek financial advice.
Preferred Channel for Funds Transfer
The majority of customers prefer mobile apps to websites for funds transfers.
Customers who seek quality products or services are significantly more likely to prefer mobile apps to transfer funds, compared to those who seek service or efficiency.
When segmenting customers by what they value in a financial institution, differences appear between customer segments in measures of trust, satisfaction, and behavior.
Customers who value products and services tend to trust and be more satisfied with their primary financial institution. This most likely reflects the transition from a branch centric model to a digital first model. Customers who are product oriented are less likely to require personal attention. Not surprisingly, these customers are also more likely to use a mobile app to complete a variety of transactions.
Preferences for financial value and efficiency do not appear to produce significant behavioral differences; attitudinally, however, they display different feelings of trust and satisfaction. Customers who value efficiency display weak trust in their primary financial institution, while those who seek financial value are generally less satisfied with it.
Customers who value service quality are the most likely to be left behind in a switch from a branch distribution channel to a digital-first delivery model. These customers represent 25% of all bank customers. They are less likely to use a website or app, and more likely to visit a branch.
As the industry transitions to a digital first delivery model, managers of the customer experience will need to pay close attention to the customers who value personalized service and human interaction in the customer journey to foster both trust and satisfaction.
In previous posts to this five-part series on building an integrated digital-first service model we discussed:
- Matching different waypoints of the customer journey to the channels best suited for the specific waypoint;
- Customer preferences for a financial service provider; and
- What customers want from digital channels.
A waypoint is a point of reference when navigating a journey. Customer journeys take place not only across multiple channels, but across multiple transactions, or waypoints.
An integrated digital channel strategy must be founded on an understanding of how specific channels match specific waypoints in the customer journey. The first installment of this series discusses this issue. The understanding that different transactions match different channels is the whole point of an integrated strategy.
Currently, not every digital channel is a match for every customer need. Digital channels with a higher frequency of visits are increasingly the day-to-day face of the institution. The exposure risk of these channels is high, and managers of the customer experience must make sure digital channels are well programmed and tested to manage it. For now, however, customers prefer digital channels for low moment-of-truth interactions such as transfers, deposits, and researching information. In terms of satisfaction, these digital channels outperform the non-digital channels; however, they play on a very different field. Customers interact with branches and contact centers much less frequently, and assign lower satisfaction ratings to those channels. But when customers do use them, the interaction matters much more: they match non-digital channels to high moment-of-truth interactions such as seeking advice, problem resolution, and opening an account.
Advances in artificial intelligence will no doubt close some of the moment of truth gaps between digital and non-digital channels, but for now, there still is a role for branches and contact centers. Closing these gaps between digital and non-digital channels will require attention to both personalization and trust. Again, people want banks to care about their needs and have the ability to meet their needs and solve their problems.
What do customers want from a bank?
Overall, customers value efficiency and personalized service from their primary financial institution. As we’ve seen, the most appealing service attributes to customers are:
• Online and mobile services
• Quick and efficient service
• Fast resolution of any issues
• Ability to manage my accounts in ways that suit me
• Polite and knowledgeable staff
It is important to note, this list includes both digital and personal channels. Customers value an integrated approach.
ROI Potential of Digital Banking Attributes
Investments in timely information, financial value, and cyber security assistance have the most potential for return on investment. The digital banking attributes with the highest potential for ROI in terms of appeal to customers and increasing their trust are:
• Alerts about upcoming direct debits
• Alerts about upcoming overdrafts
• Offers and perks from places shopped often
• Cyber security assistance
Further, investments in personalized information have the highest potential for fostering customer trust. Customers do not find the following attributes particularly appealing relative to other attributes; however, they do offer high ROI potential in terms of increasing customer trust:
• Budget information
• Savings tips
• Balance updates
Ultimately, the success or failure of any integrated digital-first strategy will depend on banks achieving something that has so far eluded them: scaling personalization.
Video banking seems an obvious solution for scaling personalization. However, despite the channel’s potential in the age of Zoom, adoption has lagged: according to our research, only 4% of bank customers have used video banking. Of those consumers who have used it, however, all trust their primary financial institution and feel it looks after their financial wellbeing. This suggests video banking could be well received, and could deepen the overall relationship with customers.
In previous posts to this five-part series on building an integrated digital-first service model we discussed matching different waypoints of the customer journey to the channels best suited for the specific waypoint.
Beyond the overall CX at the institution level, we also researched CX attributes specific to the digital channel. In an effort to identify digital features to prioritize in digital channel design, Kinēsis investigated both the appeal and trust of various digital banking attributes.
Timely information, cyber security, and financial value are the most appealing digital banking features.
Digital Attribute Appeal
The following chart displays the relative appeal of digital service attributes on a 5-point scale:
Alerts about overdrafts and upcoming direct debits received two of the top three appeal rankings (4.2 and 4.0, respectively). Assistance with cyber security threats, and offers and perks from places shopped often, round out the top four.
The next tier of five attributes, with average appeal ratings of 3.3 to 3.0, contains themes of information and personalized advice:
• Personal financial reports and analytics/dashboards
• Savings tips based on my spending patterns
• Balance updates until my next payday
• Tips to act more sustainably based on my behavior
• Budget information based on spending
Chatbots and gamification (making the app more fun with elements of game playing, such as badges, points, etc.) round out the bottom two attributes.
Digital Attributes & Trust
To add context to the digital attribute appeal ranking, we asked consumers if each of the attributes would increase their trust in the financial service provider.
Mirroring the appeal rankings, cyber security and timely information have the highest likelihood of increasing trust in the financial institution.
Cyber security assistance and alerts about overdrafts and upcoming direct debits increased trust for just about 19 out of 20 customers.
Only about 4 in 10 customers felt chatbots or gamification would increase their trust in the financial institution.
To provide CX managers context for making informed decisions about which digital attributes to prioritize, the trust and appeal rankings of each attribute are plotted on the quadrant chart below. Each quadrant, labeled Q1–Q4, is defined by the average appeal and trust rankings. Attributes in quadrant 1 (Q1) have higher-than-average appeal and trust, while those in Q4 have lower-than-average appeal and trust.
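The quadrant logic described above can be sketched in a few lines. The scores below are illustrative placeholders loosely based on figures cited in the text, not exact survey results:

```python
# Hedged sketch of the quadrant classification: each digital attribute is
# placed into Q1-Q4 by comparing its appeal and trust scores against the
# averages across all attributes. Scores are illustrative, not survey data.
attributes = {
    "direct_debit_alerts": {"appeal": 4.2, "trust": 0.95},
    "overdraft_alerts":    {"appeal": 4.0, "trust": 0.95},
    "offers_and_perks":    {"appeal": 3.8, "trust": 0.80},
    "cyber_security_help": {"appeal": 3.9, "trust": 0.95},
    "savings_tips":        {"appeal": 3.2, "trust": 0.75},
    "chatbots":            {"appeal": 2.5, "trust": 0.40},
    "gamification":        {"appeal": 2.4, "trust": 0.40},
}

avg_appeal = sum(a["appeal"] for a in attributes.values()) / len(attributes)
avg_trust = sum(a["trust"] for a in attributes.values()) / len(attributes)

def quadrant(appeal, trust):
    if appeal >= avg_appeal and trust >= avg_trust:
        return "Q1"  # high appeal, high trust: prioritize first
    if appeal < avg_appeal and trust >= avg_trust:
        return "Q2"  # lower appeal, high trust: prioritize next
    if appeal >= avg_appeal:
        return "Q3"  # high appeal, lower trust
    return "Q4"      # low appeal, low trust: prioritize last

for name, scores in attributes.items():
    print(name, quadrant(scores["appeal"], scores["trust"]))
```

With these placeholder numbers, alerts and cyber security assistance land in Q1, savings tips in Q2, and chatbots and gamification in Q4, mirroring the prioritization discussed below.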
Investments in timely information, financial value and cyber security assistance have the most potential for return on investment.
With higher-than-average trust and appeal ratings, alerts about upcoming direct debits, alerts about overdrafts, offers and perks from places shopped often, and cyber security assistance are the four digital attributes positioned to yield the highest return on investment in terms of appeal and trust.
Personal financial information such as analytics/dashboards, budget information, savings tips, and balance updates has below-average appeal but higher-than-average trust implications, and should therefore be prioritized next.
Tips to act sustainably, chatbots, and gamification have below-average appeal and do not appear to have strong associations with trust in the institution; they should therefore be prioritized last.
In a previous post to this five-part series on building an integrated digital-first service model we discussed matching different waypoints of the customer journey to the channels best suited for the specific waypoint.
To help CX managers make informed decisions regarding their overall service mix, Kinēsis asked consumers to rate an assortment of financial service CX attributes on their importance, and examined the relationship between the importance of these attributes and customers’ trust that the financial institution looks after their financial wellbeing.
Efficiency and personalized service are the most important dimensions of the customer experience.
The following chart displays how frequently customers stated each attribute was important to them when doing business with a financial service provider. To force respondents to consider only the attributes most important to them, they were allowed to select no more than five.
Again, efficiency and personalized service are the most important dimensions of the customer experience. The most frequently cited service attributes surround themes of efficiency (online and mobile services, quick and efficient service, fast resolution of any issues) and personalized service (polite and knowledgeable staff, and the ability to manage accounts in ways that suit the customer), followed by competitive rates and fees.
Attribute Value & Trust in Institution
The least frequently cited attributes surrounded products and the brand (customer loyalty programs, broad range of quality products, recommendations of appropriate products & services and appealing brand).
Beyond the appeal of each attribute, Kinēsis investigated their relationship to trust in the financial institution.
While brand appeal, recommendations of appropriate products & services, and a broad range of quality products were cited least frequently as important, customers who did cite them were more likely to trust their primary financial institution. In other words, although few customers include these three attributes in their top-five list, the attributes do appear to have a positive relationship with trust in the institution.
What started decades ago as a migration away from the branch channel accelerated during the Covid-19 pandemic, aided by technological advances that were not available just a few years ago. This confluence of the pandemic and technical advances is culminating in an age where it is possible to deliver a seamless, integrated, digital-first retail banking delivery model. Such an integrated delivery model is based on the understanding that customers have different needs at different moments in their journey, and it matches channels strategically to those needs at the correct moment for the customer.
Customer Journey Waypoints
A waypoint is a point of reference when navigating a journey. Customer journeys take place not only across multiple channels, but across multiple transactions, or waypoints. To investigate how customers navigate digital and personal channels, Kinēsis researched customer channel preferences for six customer journey waypoints: opening an account, problem resolution, seeking advice, getting information, making a deposit, and transferring funds.
Two CX Risks: Exposure & Moments of Truth
Business is often a process of balancing risks, and the customer experience is no different. Managers of an integrated delivery model should be aware of the two primary risks they face: exposure and moments of truth. Exposure risk stems from the sheer frequency of customer encounters in a channel: a poor experience in a high-exposure channel spreads across more customers. Moments of truth are critical experiences in which the individual waypoint carries high importance for the customer: a poor experience at a moment of truth produces negative customer emotions, with similarly negative impacts on customer profitability and word of mouth.
Waypoints and Channel Preferences
The foundation of an integrated digital-first CX model is matching the best-suited channels to the needs of both the customer and the institution. Customer channel choice is not uniform. Rather, customers select the channel they deem appropriate for the waypoint of the customer journey in which they find themselves. For customers conducting a transfer or deposit, mobile apps are the most popular channel (preferred by 58% and 53% of customers, respectively). Customers seeking information have a broader range of preferred channels, but a plurality (40%) prefer the website. The contact center’s preferred role is problem resolution (51%), while the branch is preferred both to seek advice and to open an account.
The following table illustrates these different channel preferences for different waypoints in the journey, as well as overlaying channel use, satisfaction and the moment of truth potential for each waypoint:
The above table illustrates the current state of the integrated digital first business model. The digital channels, with the most exposure risk, are the primary customer choice for waypoints which represent low moment of truth risk.
• Automated transactions such as transfers and deposits are preferred with an app.
• The website serves as both an information center, and to a lesser extent transactional center.
• The contact center’s primary role is problem resolution, and as a result carries significant risk in terms of moments of truth.
• The branch is where customers come to seek advice as well as initiate or deepen a banking relationship by opening an account.
Again, managers of the customer experience should be cognizant of both exposure risk and moment-of-truth risk. With an average of nearly one visit every other day, poorly executed mobile experiences represent significant exposure risk, yet the nature of these transactions carries low moment-of-truth risk. Fortunately, 88% of customers are satisfied with their financial institution’s app, with a near supermajority describing themselves as very satisfied. The branch and contact center have the opposite risk profile. With an average of just 7 and 17 visits annually, respectively, they do not represent a significant exposure risk; however, both represent significant risk of encountering moments of truth. While digital channels are the daily face of the institution, when faced with a moment of truth, customers appear to prefer to see a real face or hear a comforting voice. Customers interact with branches and contact centers much less frequently, but when they do, it is important. In this light, the relative dissatisfaction with these channels (average satisfaction 4.2 and 4.0, respectively) compared with apps and websites (average satisfaction 4.5 and 4.4, respectively) is cause for concern. The computers appear to be outperforming the people, but they perform on an easier playing field.
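The two risk profiles above can be made concrete with a toy calculation. The risk formula and the website visit frequency below are our own illustrative assumptions, not Kinēsis figures; the other numbers are the approximate ones cited above:

```python
# Toy sketch (an illustrative formulation, not a Kinēsis formula): comparing
# channels on exposure risk, weighting annual visit frequency by the
# dissatisfaction gap (distance from a perfect 5-point satisfaction score).
channels = {
    # annual visits, average satisfaction on a 5-point scale
    "mobile app":     {"visits": 180, "satisfaction": 4.5},  # ~ every other day
    "website":        {"visits": 100, "satisfaction": 4.4},  # assumed frequency
    "contact center": {"visits": 17,  "satisfaction": 4.0},
    "branch":         {"visits": 7,   "satisfaction": 4.2},
}

risks = {}
for name, c in channels.items():
    # More visits and a bigger satisfaction shortfall both raise exposure risk.
    risks[name] = c["visits"] * (5 - c["satisfaction"])

for name, risk in sorted(risks.items(), key=lambda kv: -kv[1]):
    print(f"{name}: exposure risk ~ {risk:.0f}")
```

Even though the mobile app has the highest satisfaction, its sheer visit frequency makes it the dominant exposure risk, while the branch, despite lower satisfaction, barely registers; this is the asymmetry the paragraph above describes.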
Pandemic-related disruptions pushed most customers into accelerating digital adoption. However, an element of trust is still missing from digital delivery, as most customers shy away from digital channels when their need advances up the moment-of-truth scale. Trust will be key to deepening digital relationships.
Momentum toward digital banking has been building for decades as emergent technologies, aided by the pandemic, increase the utility and use of digital channels. This confluence of the pandemic and technical advances is culminating in an age where it is possible to deliver a seamless digital first integrated retail banking business model.
Such an integrated delivery model is based on the understanding that customers have different needs at different moments of their customer journey. This delivery model matches channels strategically to these different needs at the correct moment for the customer.
Neither size nor a focus on technology provides an advantage in the overall customer experience. The evidence strongly suggests that being closer to the customer, and matching the waypoints of the customer journey to the channels best suited for each, is the best CX model:
• Automated transactions are preferred with an app.
• The website is best positioned as an information center, and to a lesser extent transactional center.
• Problem resolution in customers’ minds is the contact center’s primary role.
• Customers visit a branch to seek advice or open an account.
In future installments of this five-part series, we will explore research techniques for stabilizing and critical interactions.
A New Normal: Implications for Bank Customer Experience Measurement Post Pandemic – Stabilizing Relationships
Part 3: Onboarding Research: Research Techniques to Track Effectiveness of Stabilizing New Customer Relationships
As we explored in an earlier post, Three Types of Customer Experiences CX Managers Must Understand, there are three types of customer interactions: Planned, Stabilizing, and Critical.
Stabilizing interactions are service encounters which promote customer retention, particularly in the early stages of the relationship. It is incumbent on an integrated digital-first banking model to stabilize new customers, without relying on the local branch to build the relationship. It is important, therefore, to get the onboarding process right in a systematic way.
New customers are at the highest risk of defection, as they have had less opportunity to confirm the provider meets their expectations. Turnover by new customers is particularly damaging to profits because many defections occur prior to recouping acquisition costs, resulting in a net loss on the customer relationship. As a result, customer experience managers should stabilize the customer relationship early to ensure a return on acquisition costs.
Systematic education shapes customer expectations and goes beyond simply informing customers about additional products and services; it also teaches new customers how to use services more effectively and efficiently, which will be critical in a digital-first integrated strategy. Customers need to know how to navigate these channels effectively.
The first step in designing a research plan for the onboarding process is to define the process itself. Ask yourself: what type of stabilizing customer experiences do we expect at the initial account opening, and at discrete time periods thereafter (be it 30 days, 90 days, or 1 year)? Understanding the expectations of the onboarding process will define your research objectives, allowing an informed judgment of what to measure and how to measure it.
Kinēsis recommends measuring the onboarding process by auditing the performance of the process and its influence on the customer relationship from both the bank and the customer perspective.
Bank Perspective: Performance Audits
Performance audits are a type of mystery shop, and an effective tool to audit the performance of the onboarding process.
First, mystery shop the initial account opening (across all channels: digital, contact center, and branch) to evaluate its efficacy and effectiveness. Be sure to link these observations to a dependent variable, such as purchase intent, to determine which service attributes drive purchase intent. This will inform decisions with respect to training and incentives to reinforce the sales activities which drive purchase intent.
Beyond auditing the initial account opening experience, a performance audit of the onboarding process should test the presence and timing of specific onboarding events expected at discrete time periods. As an example, you may expect the following onboarding process after a new account is opened:
| Timing | Expected Onboarding Events |
| --- | --- |
| At Opening | Internet Banking Presentation; Mobile Banking Presentation; Contact Center Presentation |
| 1–10 Days | Welcome Letter; Internet Banking Password; Overdraft Protection Brochure; Mobile Banking E-Mail |
| 30–45 Days | First Statement; Credit Card Offer; Auto Loan Brochure; Mortgage/Home Equity Loan Brochure |
In this example, the bank’s customer experience managers have designed a process to increase awareness of digital channels, introduce the integrated layered service concept, and introduce additional services offered. An integrated research plan would recruit mystery shoppers for a long-term evaluation of the presence, timing, and effectiveness of each event in the onboarding process.
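A performance audit of this kind reduces to checking the presence and timing of each expected event against the shopper's log. The sketch below is illustrative; the event names and the example log are hypothetical:

```python
# Illustrative sketch: auditing the presence and timing of onboarding events
# against an expected schedule like the example table above.
from datetime import date

# Expected schedule: event -> (earliest day, latest day) after account opening.
EXPECTED = {
    "welcome_letter":            (1, 10),
    "internet_banking_password": (1, 10),
    "mobile_banking_email":      (1, 10),
    "first_statement":           (30, 45),
    "credit_card_offer":         (30, 45),
}

def audit(opened: date, observed: dict) -> dict:
    """Return a pass/fail status per expected event from a shopper's log."""
    results = {}
    for event, (lo, hi) in EXPECTED.items():
        received = observed.get(event)
        if received is None:
            results[event] = "missing"
        else:
            day = (received - opened).days
            results[event] = "on time" if lo <= day <= hi else f"off schedule (day {day})"
    return results

# Hypothetical mystery shopper log for an account opened January 2.
opened = date(2023, 1, 2)
log = {
    "welcome_letter": date(2023, 1, 6),              # day 4
    "internet_banking_password": date(2023, 1, 20),  # day 18
    "first_statement": date(2023, 2, 3),             # day 32
}
print(audit(opened, log))
```

Aggregated across a panel of long-term shoppers, output like this would flag which onboarding events are arriving late or not at all.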
In parallel with auditing the presence and timing of onboarding events, research should be conducted to evaluate how effectively the process stabilizes the customer relationship, by surveying new customers at distinct intervals after acquisition. We recommend testing the effectiveness of the onboarding process by benchmarking three loyalty attitudes:
- Would Recommend: The likelihood of the customer recommending the brand to a friend, relative or colleague.
- Customer Advocacy: The extent to which the customer agrees with the statement, “You care about me, not just the bottom line.”
- Primary Provider: Does the customer consider you their primary provider for financial services?
These three measures, tracked together throughout the onboarding process, will give managers a measure of the effectiveness of stabilizing the relationship.
Again, new customers are at an elevated risk of defection. Therefore, it is important to stabilize the customer relationship early on to ensure ROI on acquisition costs. A well-designed research process will give managers an important audit of both the presence and timing of onboarding events, as well as track customer engagement and loyalty early in their tenure.
In the next post, we will explore the third type of experience – experiences with a significant amount of influence on the customer relationship – critical experiences.
A New Normal: Implications for Bank Customer Experience Measurement Post Pandemic – Planned Interactions
Part 2: Research Tools to Monitor Planned Interactions through the Customer Lifecycle
As we explored in an earlier post, Three Types of Customer Experiences CX Managers Must Understand, there are three types of customer interactions: Planned, Stabilizing, and Critical.
Planned interactions are intended to increase customer profitability through the customer lifecycle by engaging customers with relevant planned interactions and content in an integrated omni-channel environment. Planned interactions will continue to grow in importance as the financial service industry shifts to an integrated digital first model.
These planned interactions are frequently triggered by changes in account usage, financial situation, family profile, etc. CRM analytics combined with Big Data are becoming quite effective at recognizing such opportunities and prompting action toward planned interactions. Customer experience managers should have a process to record and analyze the quality of execution of planned interactions with the objective of evaluating their effectiveness – regardless of the channel.
The key to an effective strategy for planned interactions is relevance. Triggered requests for increased engagement must be made in the context of the customer’s needs and with their permission; otherwise, the requests will come off as clumsy and annoying, and give the impression the bank is not really interested in the customer’s individual needs. By aligning information about execution quality (cause) and customer impressions (effect), customer experience managers can build a more effective and relevant approach to planned interactions.
Research Plan for Planned Interactions
The first step in designing a research plan to test the efficacy of these planned interactions is to define the campaign. Ask yourself: what customer interactions are planned through these layers of integrated channels? Mapping the process will define your research objectives, allowing an informed judgment of what to measure and how to measure it.
For example, assume that after acquisition and onboarding, a bank runs a campaign of planned interactions triggered by past engagement. These planned interactions are segmented into the following phases of the customer lifecycle: engagement, growth, and retention.
Often it is instructive to think of customer experience research in terms of the bank-customer interface, employing different research tools to study the customer experience from both sides of this interface.
In our example above, management may measure the effectiveness of planned experiences in the engagement phase with the following research tools:
Post-Experience Surveys
These surveys are event-driven: a transaction or service interaction determines whether the customer is selected for a survey. They can be performed across all channels: digital, contact center, and in-person. As the name implies, the purpose of this type of survey is to measure the customer’s experience with a specific interaction.
Employee Surveys
Ultimately, employees are at the center of the integrated customer experience model. Employee surveys often measure employee satisfaction and engagement; however, there is far more value to be gleaned from employees. We use them to understand what is going on at the customer-employee interface, leveraging employees as a valuable and inexpensive source of customer experience information. They not only provide intelligence into the customer experience, but also evaluate the level of support within the organization and identify perceptual gaps between management and frontline personnel.
|Overall Satisfaction Surveys
Overall satisfaction surveys measure customer satisfaction among the general population of customers, regardless of whether or not they recently conducted a transaction. They give managers valuable insight into overall satisfaction, engagement, image and positioning across the entire customer base, not just active customers.
|Digital Delivery Channel Shopping
Be it a website or mobile app, digital mystery shopping allows managers of these channels to test ease of use, navigation and the overall customer experience of these digital channels.
|Transactional Mystery Shopping
Mystery shopping is about alignment. It is an excellent tool to align the customer experience to the brand. Best-in-class mystery shopping answers the question: is our customer experience consistent with our brand objectives? Historically, mystery shopping has focused on the in-person channel; however, we are seeing increasing mystery shopping of contact center agents.
In the growth phase, we measure the effectiveness of planned experiences on both sides of the customer interface with the following research tools:
|Customer Side||Brand Side|
Awareness of the brand, its products and services, is central to planned service interactions. Managers need to know how awareness and attitudes change as a result of these planned experiences.
|Cross-Sell Mystery Shopping
In these unique mystery shops, mystery shoppers are seeded into the lead/referral process. The sales behaviors and their effectiveness are then evaluated in an outbound sales interaction.
These shops work very well in planned sales interactions within the contact center environment.
|Wallet Share Surveys
These surveys are used to evaluate customer engagement with and loyalty to the institution. Specifically, they determine whether customers consider the institution their primary provider of financial services, and identify potential roadblocks to wallet share growth.
Finally, planned experiences within the retention phase of the customer lifecycle may be monitored with the following tools:
|Customer Side||Brand Side|
|Critical Incident Technique (CIT)
CIT is a qualitative research methodology designed to uncover details surrounding a service encounter that a customer found particularly satisfying or dissatisfying. This research technique identifies these common critical incidents, their impact on the customer experience, and customer engagement, giving managers an informed perspective upon which to prepare employees to recognize moments of truth, and respond in ways that will lead to positive outcomes.
|Employee Surveys
Employees observe the relationship with the customer firsthand. They are a valuable source of customer experience information, and can provide useful context on the types of bad experiences customers frequently encounter.
|Lost Customer Surveys
Closed-account surveys identify sources of run-off, or churn, providing insight into improving customer retention.
|Life Cycle Mystery Shopping
If an integrated channel approach is the objective, one should measure the customer experience in an integrated manner.
In lifecycle shops, shoppers interact with the bank over a period of time, across multiple touch points (digital, contact center and in-person). This lifecycle approach provides broad and deep observations about sales and service alignment to the brand and performance throughout the customer lifecycle across all channels.
|Comment Tools
Comment tools are not new, but with modern Internet-based technology they can serve as a valuable feedback channel to identify at-risk customers and mitigate the causes of their dissatisfaction.
Call to Action – Make the Most of the Research
For customer experience surveys, we recommend testing the effectiveness of planned interactions by benchmarking three loyalty attitudes:
- Would Recommend: The likelihood of the customer recommending the bank to a friend, relative or colleague.
- Customer Advocacy: The extent to which the customer agrees with the statement, “My bank cares about me, not just the bottom line.”
- Primary Provider: Does the customer consider the institution their primary provider for financial services?
For mystery shopping, we find that linking observations to a dependent variable, such as purchase intent, identifies which sales and service behaviors drive purchase intent – informing decisions with respect to training and incentives that reinforce those behaviors.
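As an illustration of this kind of linkage analysis, the sketch below correlates observed sales behaviors with shopper-reported purchase intent. The behavior names and shop data are hypothetical, and a production analysis would typically use a regression model over a much larger sample; this is only a minimal sketch of the idea.

```python
# Hypothetical sketch: link mystery shop behavior observations (cause)
# to a dependent variable, purchase intent (effect).

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Each row is one mystery shop: 1/0 = behavior observed or not;
# purchase_intent = shopper's stated likelihood to buy (1-5 scale).
# All names and values here are illustrative, not real shop data.
shops = [
    {"greeted": 1, "needs_assessed": 1, "asked_for_business": 1, "purchase_intent": 5},
    {"greeted": 1, "needs_assessed": 1, "asked_for_business": 0, "purchase_intent": 4},
    {"greeted": 1, "needs_assessed": 0, "asked_for_business": 0, "purchase_intent": 2},
    {"greeted": 0, "needs_assessed": 0, "asked_for_business": 0, "purchase_intent": 1},
    {"greeted": 1, "needs_assessed": 1, "asked_for_business": 1, "purchase_intent": 4},
    {"greeted": 0, "needs_assessed": 1, "asked_for_business": 0, "purchase_intent": 3},
]

intent = [s["purchase_intent"] for s in shops]
for behavior in ("greeted", "needs_assessed", "asked_for_business"):
    observed = [s[behavior] for s in shops]
    print(f"{behavior}: r = {pearson_r(observed, intent):.2f}")
```

Behaviors with the strongest positive relationship to purchase intent are the natural candidates for training emphasis and incentives.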
As the integrated digital first business model accelerates, planned interactions will continue to grow in importance, and managers of the customer experience should build customer experience monitoring tools to evaluate the efficacy of these planned experiences in terms of driving desired customer attitudes and behaviors.
A New Normal: Implications for Bank Customer Experience Measurement Post Pandemic – Three Types of Customer Experiences
Part 1: Three Types of Customer Experiences CX Managers Must Understand
COVID-19 Crisis Accelerating Change
The transformation began decades ago. Like a catalyst in a chemical reaction, the COVID-19 crisis has accelerated the transformation away from in-person channels. Recognizing paradigm shifts in the moment is often difficult; however, a long-coming paradigm shift appears to be upon us.
A shift away from one thing requires a shift toward another. The shift away from an in-person first approach is a shift toward a digital first approach, with increasingly integrated layers of engagement and expertise.
Digital apps allow for near-continuous engagement with customers. Apps now sit in customers’ pockets, available on demand when and where they are needed. This communication works both ways: the customer provides information to the bank, and the bank informs the customer. Managers of the customer experience can now deliver contextually relevant information directly to the customer. Automated advice and expertise is in its infancy but shows promise. Chatbots and other preprogrammed help can start the process of delivering assistance and expertise when requested.
Contact centers are the next logical layer of this integrated customer experience. Contact centers are an excellent channel to deliver general customer service and advice, as well as expert advice for more sophisticated financial needs. Kinesis has clients with Series 7 representatives and wealth managers providing expert financial advice via video conference.
The role of the branch obviously includes providing expert advice. Branches will continue to become smaller, more flexible, less monolithic, and tailored to the location and market. Small community centers will focus on community outreach, while larger flagship branches sit at the center of an integrated hub and spoke model – a model that includes digital and contact centers.
Three Types of Experiences
Every time a customer interacts with a bank, regardless of channel, they learn something about the bank, and adjust their behavior based on what they learn. This is the core component of customer experience management – to teach customers to behave in profitable ways. It is incumbent on managers of the customer experience to understand the different types of customer experiences, and their implications for managing the customer experience in this manner. Customer experiences come in a variety of forms; however, there are three types of experiences customer experience managers should be alert to: planned, stabilizing, and critical experiences.
Planned interactions are intended to increase customer profitability by engaging customers in meaningful conversations in an integrated omni-channel environment. These interactions can be triggered by changes in the customers’ purchasing patterns, account usage, financial situation, family profile, etc. CRM analytics combined with Big Data are becoming quite effective at recognizing such opportunities and prompting action. Customer experience managers should have a process to record and analyze the quality of execution of planned interactions, with the objective of evaluating their performance.
The key to an effective strategy for planned interactions is appropriateness. Triggered requests for increased spending must be made in the context of the customer’s needs and with their permission; otherwise, the requests will come off as clumsy, annoying, and not customer-centric. By aligning information about execution quality (cause) and customer actions (effect), customer experience managers can build a more effective and appropriate approach to planned interactions.
In future posts, we will look at planned experiences and consider their implications in light of this shift toward a digital first approach.
Stabilizing interactions promote customer retention, particularly in the early stages of the relationship.
New customers are at the highest risk of defection. Long-term customers know what to expect from their bank, and due to self-selection, their expectations tend to be aligned with their experience. New customers are more likely to experience disappointment, and thus more likely to defect. Turnover by new customers is particularly unprofitable because many defections occur prior to the break-even point of customer acquisition costs, resulting in a net loss on the customer. Thus, experiences that stabilize the customer relationship early ensure a higher proportion of customers will reach positive profitability.
The keys to an effective stabilizing strategy are education, consistency, and competence. Education influences expectation and helps customers develop realistic expectations. It goes beyond simply informing customers about the products and services offered. It systematically informs new customers how to use the bank’s services more effectively and efficiently: how to obtain assistance, how to complain, and what to expect as the relationship progresses. For an integrated digital first business model to work, customers need to learn how to use self-administered channels and know how, and when, to access the deeper layers offering more engagement and expertise.
In future posts, we will look at stabilizing experiences and consider their implications in light of this shift toward a digital first approach.
Critical interactions are events that lead to memorable customer experiences. While most customer experiences are routine, from time to time a situation arises that is out of the ordinary: a complaint, a question, a special request, a chance for an employee to go the extra mile. Today, many of these critical experiences occur amidst the underlying stresses of the COVID-19 crisis. The outcomes of these critical incidents can be either positive or negative, depending upon the way the bank responds to them; however, they are seldom neutral. The longer a customer remains with a financial institution, the greater the likelihood that one or more critical experiences will occur – particularly in a time of crisis, like the pandemic.
Because they are memorable and unusual, critical interactions tend to have a powerful effect on the customer relationship. We often think of these as “moments of truth,” where the institution has an opportunity to solidify the relationship and earn a loyal customer, or risk the customer’s defection. Positive outcomes lead to “customer delight” and word-of-mouth endorsements, while negative outcomes lead to customer defections, diminished share of wallet and unfavorable word-of-mouth.
The key to an effective critical interaction strategy is opportunity. Systems and processes must be in a position to react to these critical moments of truth. An effective customer experience strategy should include systems for recording critical interactions, analyzing trends and patterns, and feeding that information back to management. This can be particularly challenging in an integrated Omni-channel environment. Holistic customer profiles need to be available across channels, and employees must be trained to recognize critical opportunities and empowered to respond to them in such a way that they will lead to positive outcomes and desired customer behaviors.
In future posts, we will look at critical experiences and consider their implications in light of this shift toward a digital first approach.
Not All Customer Experience Variation is Equal: Use Control Charts to Identify Actual Changes in the Customer Experience
Variability in customer experience scores is common and normal. Be it a survey of customers, mystery shops, social listening or other customer experience measurement, a certain amount of random variation in the data is normal. As a result, managers need a means of interpreting any variation in their customer experience measurement to evaluate if the customer experience is truly changing, or if the variation they are seeing is simply random.
One solution to this need is control charts. Control charts are a statistical tool commonly used in Six Sigma programs to measure variation. They track customer experience measurements within upper and lower quality control limits. When measurements fall outside either limit, the trend indicates an actual change in the customer experience rather than just random variation.
To illustrate this concept, consider the following example of mystery shop results:
In this example the general trend of the mystery shop scores is up; however, from month to month there is a bit of variation. Managers of this customer experience need to know whether July was a particularly bad month and, conversely, whether the improved performance in October and November is something to be excited about. Does it represent a true change in the customer experience?
To answer these questions, there are two more pieces of information we need to know beyond the average mystery shop scores: the sample size or count of shops for each month and the standard deviation in shop scores for each month.
The following table adds these two additional pieces of information into our example:
|Month||Count of Mystery Shops||Average Mystery Shop Scores||Standard Deviation of Mystery Shop Scores|
Now, in order to determine whether the variation in shop scores is significant, we need to calculate upper and lower quality control limits; any variation above or below these limits is significant, reflecting an actual change in the customer experience.
The upper and lower quality control limits (UCL and LCL, respectively), at a 95% confidence level, are calculated according to the following formulas:
UCL = x + 1.96 × (SD / √n)
LCL = x − 1.96 × (SD / √n)
Where:
x = Grand mean of the scores
n = Mean sample size (number of shops)
SD = Mean standard deviation
Applying these equations to the data in the above table, produces the following control chart, where the upper and lower quality control limits are depicted in red.
This control chart tells us not only that the general trend of the mystery shop scores is positive, with November’s performance improving above the upper control limit, but also that something unusual happened in July, where performance slipped below the lower control limit. Maybe employee turnover caused the decrease, or something external such as a weather event, but we know with 95% confidence that the attributes measured in July were less present relative to the other months. All other variation outside of November and July is not large enough to be considered statistically significant.
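The arithmetic behind such a control chart can be sketched in a few lines of Python. The monthly figures below are hypothetical stand-ins rather than the actual shop data from the table, but the calculation follows the formulas above: derive the grand mean, mean sample size, and mean standard deviation, compute the 95% control limits, and flag any month that falls outside them.

```python
# Minimal control-chart sketch; monthly figures are hypothetical.

Z = 1.96  # z-score for a 95% confidence level

# Per-month count of shops (n), average score, and standard deviation.
months = {
    "Jun": {"n": 40, "mean": 82.0, "sd": 9.0},
    "Jul": {"n": 38, "mean": 74.0, "sd": 10.0},
    "Aug": {"n": 41, "mean": 83.5, "sd": 8.5},
    "Sep": {"n": 39, "mean": 84.0, "sd": 9.5},
    "Oct": {"n": 40, "mean": 86.0, "sd": 8.0},
    "Nov": {"n": 42, "mean": 91.0, "sd": 8.5},
}

# Grand mean of scores, mean sample size, and mean standard deviation.
grand_mean = sum(m["mean"] for m in months.values()) / len(months)
mean_n = sum(m["n"] for m in months.values()) / len(months)
mean_sd = sum(m["sd"] for m in months.values()) / len(months)

# Upper and lower quality control limits at 95% confidence.
ucl = grand_mean + Z * mean_sd / mean_n ** 0.5
lcl = grand_mean - Z * mean_sd / mean_n ** 0.5

for month, m in months.items():
    if m["mean"] > ucl:
        flag = "above UCL - significant improvement"
    elif m["mean"] < lcl:
        flag = "below LCL - significant decline"
    else:
        flag = "within limits - random variation"
    print(f"{month}: {m['mean']:.1f} ({flag})")
```

With these illustrative numbers, July falls below the lower limit and November rises above the upper limit, mirroring the pattern described above, while the remaining months stay within the limits.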
What this control chart gives managers, then, is a meaningful way to determine whether any variation in their customer experience measurement reflects an actual change in the experience, as opposed to random variation or chance.
In the next post, we will look to the causes of this variation.
This post considers two types of variation in the customer experience: common and special cause variation, and their implications for customer researchers.
The concepts of common and special cause variation are derived from the process management discipline Six Sigma.
Common cause variation is normal or random variation within the system. It is statistical noise within the system. Examples of common cause variation in the customer experience are:
- Poorly defined, poorly designed, inappropriate policies or procedures
- Poor design or maintenance of computer systems
- Inappropriate hiring practices
- Insufficient training
- Measurement error
Special cause variation, on the other hand, is not random. Unlike common cause variation, it does not conform to the laws of probability. It is the signal within the system. Examples of special cause variation include:
- High demand/ high traffic
- Poor adjustment of equipment
- Just having a bad day
What are the implications of common and special cause variation for customer experience researchers?
Given the differences between common cause and special cause variation, researchers need a tool to help them distinguish between the two – a means of determining whether observed variation in the customer experience is statistical noise or a signal within the system. Control charts are a statistical tool for making that determination.
Control charts track measurements within upper and lower quality control limits. These quality control limits define statistically significant variation over time (typically at 95% confidence), which means there is a 95% probability that the variation is the result of an actual change in the customer experience (special cause variation), not just normal common cause variation. Observed variation within these quality control limits is common cause variation. Variation which migrates outside these quality control limits is special cause variation.
To illustrate this concept, consider the following example of mystery shop results:
This chart depicts a set of mystery shop scores which both vary from month to month and generally appear to trend upward.
Customer experience researchers need to provide managers a means of determining if the month to month variation is statistical noise or some meaningful signal within the system. Turning this chart into a control chart by adding statistically defined upper and lower quality control limits will determine if the monthly variation is common or special cause.
To define quality control limits, the customer experience researcher needs to determine the count of observations for each month, the monthly standard deviation, and the average count of shops across all months.
The following table adds these three additional pieces of information into our example:
|Month||Count of Mystery Shops||Average Mystery Shop Scores||Standard Deviation of Mystery Shop Scores|
To define the upper and lower quality control limits (UCL and LCL, respectively), apply the following formulas:
UCL = x + 1.96 × (SD / √n)
LCL = x − 1.96 × (SD / √n)
Where:
x = Grand mean of the scores
n = Mean sample size (number of shops)
SD = Mean standard deviation
These equations yield quality control limits at 95% confidence, which means there is a 95% probability that any variation observed outside these limits is special cause variation, rather than normal common cause variation within the system.
Calculating these quality control limits and applying them to the above chart produces the following control chart, with upper and lower quality control limits depicted in red:
This control chart now answers the question: which variation is common cause and which is special cause? The general upward trend appears to be statistically significant, with the most recent month above the upper quality control limit. Additionally, this control chart identifies a period of special cause variation in July. With 95% confidence, we know some special cause drove the scores below the lower control limit. Perhaps that cause was employee turnover, perhaps a new system rollout, or perhaps a weather event that impacted the customer experience.
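The common-versus-special-cause distinction reduces to a simple rule once the control limits are computed: any score inside the limits is common cause variation (noise), and anything outside is special cause variation (signal). A minimal sketch, with hypothetical limits and scores:

```python
# Hedged sketch: classify monthly scores as common or special cause
# variation, given precomputed control limits. Values are hypothetical.

def classify(score, lcl, ucl):
    """Label a score as common cause (noise) or special cause (signal)."""
    return "common cause" if lcl <= score <= ucl else "special cause"

# Hypothetical 95% control limits and monthly average scores.
LCL, UCL = 80.0, 87.0
scores = {"May": 83.0, "Jun": 82.5, "Jul": 78.0, "Aug": 84.0, "Sep": 88.5}

for month, score in scores.items():
    print(month, classify(score, LCL, UCL))
```

Only the months flagged as special cause warrant a root-cause investigation; reacting to common cause variation as if it were a signal is chasing noise.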