
A New Normal: Implications for Bank Customer Experience Measurement Post Pandemic – Planned Interactions

Part 2: Research Tools to Monitor Planned Interactions through the Customer Lifecycle

As we explored in an earlier post, Three Types of Customer Experiences CX Managers Must Understand, there are three types of customer interactions: Planned, Stabilizing, and Critical.

Planned interactions aim to increase customer profitability through the customer lifecycle by engaging customers with relevant interactions and content in an integrated omni-channel environment. They will continue to grow in importance as the financial services industry shifts to an integrated, digital-first model.

These planned interactions are frequently triggered by changes in account usage, financial situation, family profile, etc.  CRM analytics combined with Big Data are becoming quite effective at recognizing such opportunities and prompting action toward planned interactions.  Customer experience managers should have a process to record and analyze the quality of execution of planned interactions with the objective of evaluating their effectiveness – regardless of the channel.

The key to an effective strategy for planned interactions is relevance. Triggered requests for increased engagement must be made in the context of the customer’s needs and with their permission; otherwise, the requests will come off as clumsy and annoying, and give the impression the bank is not really interested in the customer’s individual needs.  By aligning information about execution quality (cause) and customer impressions (effect), customer experience managers can build a more effective and relevant approach to planned interactions.

Research Plan for Planned Interactions

The first step in designing a research plan to test the efficacy of these planned interactions is to define the campaign. Ask yourself: what customer interactions are planned through these layers of integrated channels? Mapping the process will define your research objectives, allowing an informed judgment of what to measure and how to measure it.

For example, after acquisition and onboarding, assume a bank has a campaign of planned interactions triggered by past engagement. These planned interactions are segmented into the following phases of the customer lifecycle: engagement, growth, and retention.

Engagement Phase

Often it is instructive to think of customer experience research in terms of the bank-customer interface, employing different research tools to study the customer experience from both sides of this interface.

In our example above, management may measure the effectiveness of planned experiences in the engagement phase with the following research tools:

Customer Side

Post-Event Surveys

These post-experience surveys are event-driven: a transaction or service interaction determines whether the customer is selected for a survey. They can be performed across all channels (digital, contact center and in-person). As the name implies, the purpose of this type of survey is to measure satisfaction with a specific interaction.

Overall Satisfaction Surveys

Overall satisfaction surveys measure customer satisfaction among the general population of customers, regardless of whether they recently conducted a transaction. They give managers valuable insight into overall satisfaction, engagement, image and positioning across the entire customer base, not just active customers.

Brand Side

Employee Surveys

Ultimately, employees are at the center of the integrated customer experience model.

Employee surveys often measure employee satisfaction and engagement. However, there is far more value to be gleaned from employees. We employ them to understand what is going on at the customer-employee interface, leveraging employees as a valuable and inexpensive source of customer experience information.

They not only provide intelligence into the customer experience, but also evaluate the level of support within the organization and identify perceptual gaps between management and frontline personnel.

Digital Delivery Channel Shopping

Be it a website or mobile app, digital mystery shopping allows managers of these channels to test ease of use, navigation and the overall customer experience.

Transactional Mystery Shopping

Mystery shopping is about alignment. It is an excellent tool to align the customer experience to the brand. Best-in-class mystery shopping answers the question: is our customer experience consistent with our brand objectives? Historically, mystery shopping has focused on the in-person channel; however, we are seeing increasing use of mystery shops of contact center agents.

Growth Phase

In the growth phase, we measure the effectiveness of planned experiences on both sides of the customer interface with the following research tools:

Customer Side

Awareness Surveys

Awareness of the brand and its products and services is central to planned service interactions. Managers need to know how awareness and attitudes change as a result of these planned experiences.

Wallet Share Surveys

These surveys are used to evaluate customer engagement with and loyalty to the institution. Specifically, they determine whether customers consider the institution their primary provider of financial services, and identify potential roadblocks to wallet share growth.

Brand Side

Cross-Sell Mystery Shopping

In these unique mystery shops, mystery shoppers are seeded into the lead/referral process. The sales behaviors and their effectiveness are then evaluated in an outbound sales interaction.

These shops work very well for planned sales interactions within the contact center environment.

Retention Phase

Finally, planned experiences within the retention phase of the customer lifecycle may be monitored with the following tools:

Customer Side

Critical Incident Technique (CIT)

CIT is a qualitative research methodology designed to uncover the details surrounding a service encounter that a customer found particularly satisfying or dissatisfying. This technique identifies common critical incidents and their impact on the customer experience and customer engagement, giving managers an informed perspective from which to prepare employees to recognize moments of truth and respond in ways that lead to positive outcomes.

Lost Customer Surveys

Closed account surveys identify sources of run-off or churn to provide insight into improving customer retention.

Comment Listening

Comment tools are not new, but with modern Internet-based technology they can be used as a valuable feedback tool to identify at-risk customers and mitigate the causes of their dissatisfaction.

Brand Side

Employee Surveys

Employees observe the customer relationship firsthand. They are a valuable source of customer experience information and can provide context on the types of negative experiences customers frequently encounter.

Life Cycle Mystery Shopping

If an integrated channel approach is the objective, one should measure the customer experience in an integrated manner.

In lifecycle shops, shoppers interact with the bank over a period of time, across multiple touch points (digital, contact center and in-person). This approach provides broad and deep observations about sales and service alignment to the brand, and about performance throughout the customer lifecycle across all channels.

Call to Action – Make the Most of the Research

For customer experience surveys, we recommend testing the effectiveness of planned interactions by benchmarking three loyalty attitudes:

  • Would Recommend: The likelihood of the customer recommending the bank to a friend, relative or colleague.
  • Customer Advocacy: The extent to which the customer agrees with the statement, “My bank cares about me, not just the bottom line.”
  • Primary Provider: Does the customer consider the institution their primary provider for financial services?
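These loyalty benchmarks can be tabulated as simple top-two-box scores. A minimal sketch in Python, using invented ratings and hypothetical attribute names (nothing here reflects actual Kinesis data or scoring conventions):

```python
# Illustrative survey responses: each dict holds 1-5 agreement ratings
# for the three loyalty attitudes (attribute names are hypothetical).
responses = [
    {"would_recommend": 5, "advocacy": 4, "primary_provider": 5},
    {"would_recommend": 3, "advocacy": 2, "primary_provider": 1},
    {"would_recommend": 4, "advocacy": 5, "primary_provider": 4},
    {"would_recommend": 5, "advocacy": 5, "primary_provider": 5},
]

def top_two_box(responses, attribute):
    """Share of respondents rating the attribute 4 or 5 on a 5-point scale."""
    favorable = sum(1 for r in responses if r[attribute] >= 4)
    return favorable / len(responses)

for attr in ("would_recommend", "advocacy", "primary_provider"):
    print(f"{attr}: {top_two_box(responses, attr):.0%}")
```

Tracking these percentages over time, split by customers who did and did not receive a planned interaction, is one simple way to benchmark the campaign's effect.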

For mystery shopping, we find that linking observations to a dependent variable, such as purchase intent, identifies which sales and service behaviors drive that intent, informing decisions about the training and incentives that reinforce those behaviors.
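As a rough illustration of this kind of linkage analysis (not Kinesis' actual methodology), one can compare mean purchase intent when a behavior was and was not observed. A sketch with hypothetical behavior names and invented shop data:

```python
from statistics import mean

# Each shop records which sales behaviors were observed (1/0) and the
# shopper's purchase-intent rating (1-5). All data is illustrative.
shops = [
    {"greeted": 1, "needs_assessed": 1, "offered_product": 1, "intent": 5},
    {"greeted": 1, "needs_assessed": 0, "offered_product": 0, "intent": 2},
    {"greeted": 0, "needs_assessed": 0, "offered_product": 0, "intent": 1},
    {"greeted": 1, "needs_assessed": 1, "offered_product": 0, "intent": 4},
]

def intent_lift(shops, behavior):
    """Mean purchase intent when the behavior was observed vs. not."""
    with_b = [s["intent"] for s in shops if s[behavior] == 1]
    without = [s["intent"] for s in shops if s[behavior] == 0]
    return mean(with_b) - (mean(without) if without else 0)

for behavior in ("greeted", "needs_assessed", "offered_product"):
    print(f"{behavior}: lift = {intent_lift(shops, behavior):+.2f}")
```

With a realistic sample size, behaviors showing the largest lift are the natural candidates for training and incentive programs; a regression model would be the next step to control for overlapping behaviors.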

As the integrated, digital-first business model accelerates, planned interactions will continue to grow in importance. Customer experience managers should build monitoring tools to evaluate the efficacy of these planned experiences in driving desired customer attitudes and behaviors.

In the next post, we will take a look at stabilizing experiences, and their implications for customer experience research.

 

 

Click Here For More Information About Kinesis' Bank CX Research Services

A New Look at Comment Cards: Best Practices in Bank Customer Experience Measurement Design – Customer Comments & Feedback

Customer Comments and Feedback

Customer comment tools give financial institutions a valuable way to identify and reply to customers who have had a negative service experience and may be at risk of attrition or of spreading negative word of mouth.

Beyond randomly surveying customers who have recently conducted a service interaction at a branch or call center, banks should also provide an avenue for self-selected feedback: feedback from customers who have not been selected to participate in a survey but want to comment on their experience.

In the past, the vehicle for collecting this unsolicited feedback was the good old-fashioned comment card. Today, the Internet offers a much more efficient means of collecting it. For the branch channel, invitations to provide feedback, with a URL to an online comment form, can be printed on transaction receipts. For call centers, customers can be directed to IVR systems that capture voice feedback. Website and mobile users can be offered online comment forms as well.

Unsolicited feedback tools are not surveys, and should not be used as surveys. In fact, they make terrible customer satisfaction surveys. Many institutions try to turn them into surveys by asking customers to rate such things as service, convenience and product selection. But these comment channels do not yield reliable information because they do not come from typical customers. The people who provide this feedback tend to fall into one of four groups:

  • Extremely happy customers
  • Extremely unhappy customers
  • Extremely bored customers
  • Customers with requests (for products, new store locations, etc.)

Notice the operative word in the first three categories: extreme. If a customer is satisfied with the product or service, why bother to give feedback? Customers expect to be satisfied. Having your expectations met is not something to write about. In research parlance, the sample is self-selected, and the people who provide such feedback are not likely to be representative of the general population of customers. It therefore makes no sense to ask these people to provide ratings that are going to be tabulated and averaged. The results will be useless at best and completely misleading at worst.

A better approach is to design the comment form as a letter to the bank president. It looks something like this:

“Dear [President’s name]:

Here is something I would like you to know . . .

 

[Lots of white space]

 

Sincerely yours,”

[Space for name, address and phone number]

 

Additionally, a check box can be included asking customers whether they would like someone to contact them about their feedback.

This type of feedback tool will deliver valuable qualitative data about the experience that prompted the customer to provide the feedback.

It is essential that a system for analyzing and responding to the feedback be put into place. First, sort the comments according to whether the customer wants a reply. There are ways to streamline this process, but to ignore it is to make matters worse, because customers (the angry ones, at least) will expect a reply. On the other hand, responding to customer concerns makes comment tools exceptionally valuable. First, they provide a method to identify and reply to customers who have had a negative service experience and may be at risk of attrition, of undermining the brand with negative word of mouth, or, even worse, of negative social media commentary. Second, they minimize negative word-of-mouth advertising that would undermine marketing efforts and increase positive word of mouth (customers who have had a problem fixed are famous for becoming vocal advocates of a company). The flip side is that customers who have had a positive experience can be thanked for their feedback, which encourages loyalty.

The next step in acting on the qualitative feedback is to reduce it into quantifiable themes through the process of coding, where comments are grouped by theme.  For instance, 18% of comments may have referred to “slow service” and 14% to “lack of job knowledge”.  Now, we can monitor the frequency of various themes by business unit and over time.
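The coding step can be prototyped with simple keyword matching before a full manual codebook is built. A sketch, assuming a hypothetical keyword-to-theme map and invented comments:

```python
from collections import Counter

# Hypothetical keyword-to-theme map; a production codebook would be
# built from a manual review of a sample of comments.
themes = {
    "slow service": ("slow", "wait", "line"),
    "job knowledge": ("didn't know", "unsure", "wrong answer"),
}

comments = [
    "Waited 20 minutes in line, far too slow.",
    "The teller was unsure how to open the account.",
    "Great branch, friendly staff!",
]

def code_comment(text):
    """Return the set of themes whose keywords appear in the comment."""
    text = text.lower()
    return {theme for theme, keywords in themes.items()
            if any(k in text for k in keywords)}

# Frequency of each theme across all comments, for tracking by
# business unit and over time.
counts = Counter(t for c in comments for t in code_comment(c))
for theme, n in counts.items():
    print(f"{theme}: {n / len(comments):.0%} of comments")
```

Keyword matching of this kind is crude (it misses synonyms and negation), so human review of a sample remains the benchmark; the sketch simply shows how coded themes become trackable frequencies.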

Comment tools are not new, but with modern technology they can be employed as a valuable feedback tool to identify at-risk customers and mitigate the causes of their dissatisfaction.

Finally, the unsolicited nature of customer comments offers a unique opportunity to feed the themes identified in comments back into customer survey design, allowing managers to determine whether the issues uncovered are broadly present across all customers.

