
A New Normal: Implications for Bank Customer Experience Measurement Post Pandemic – Stabilizing Relationships

Part 3: Onboarding Research: Research Techniques to Track Effectiveness of Stabilizing New Customer Relationships

As we explored in an earlier post, Three Types of Customer Experiences CX Managers Must Understand, there are three types of customer interactions: Planned, Stabilizing, and Critical.

Stabilizing interactions are service encounters which promote customer retention, particularly in the early stages of the relationship.  It is incumbent on an integrated digital-first banking model to stabilize new customers, without relying on the local branch to build the relationship.  It is important, therefore, to get the onboarding process right in a systematic way.

New customers are at the highest risk of defection, as they have had less opportunity to confirm the provider meets their expectations.  Turnover by new customers is particularly damaging to profits because many defections occur prior to recouping acquisition costs, resulting in a net loss on the customer relationship.  As a result, customer experience managers should stabilize the customer relationship early to ensure a return on acquisition costs. 

Systematic education shapes customer expectations.  It goes beyond simply informing customers about additional products and services; it also shows new customers how to use services more effectively and efficiently.  This is going to be critical in a digital-first integrated strategy: customers need to know how to navigate these channels effectively.

Onboarding Research

The first step in designing a research plan for the onboarding process is to define the process itself.  Ask yourself, what type of stabilizing customer experiences do we expect at both the initial account opening and at discrete time periods thereafter (be it 30 days, 90 days, 1-year)?  Understanding the expectations of the onboarding process will define your research objectives, allowing an informed judgment of what to measure and how to measure it.

Kinesis recommends measuring the onboarding process by auditing the performance of the process and its influence on the customer relationship from the bank and customer perspective.

Bank Perspective: Performance Audits

Performance audits are a type of mystery shop, and an effective tool to audit the performance of the onboarding process.

First, mystery shop the initial account opening (across all channels: digital, contact center and branch) to evaluate its efficacy and effectiveness.  Be sure to link these observations to a dependent variable, such as purchase intent, to determine which service attributes drive purchase intent.  This will inform decisions with respect to training and incentives to reinforce the sales activities which drive purchase intent.

Beyond auditing the initial account opening experience, a performance audit of the onboarding process should test the presence and timing of specific onboarding events expected at discrete time periods.  As an example, you may expect the following onboarding process after a new account is opened:

  • At Opening: Internet Banking Presentation, Mobile Banking Presentation, Contact Center Presentation, ATM Presentation, Disclosures
  • 1-10 Days: Welcome Letter, Checks, Debit Card, Internet Banking Password, Overdraft Protection Brochure, Mobile Banking E-Mail
  • 30-45 Days: First Statement, Switch Kit, Credit Card Offer, Auto Loan Brochure, Mortgage/Home Equity Loan Brochure

In this example, the bank’s customer experience managers have designed a process to increase awareness of digital channels, introduce the integrated layered service concept, and introduce additional services offered.  An integrated research plan would recruit mystery shoppers for a long-term evaluation of the presence, timing, and effectiveness of each event in the onboarding process.

Customer Perspective

In parallel to auditing the presence and timing of onboarding events, research should be conducted to evaluate the effectiveness of the process in stabilizing the customer relationship by surveying new customers at distinct intervals after customer acquisition.  We recommend testing the effectiveness of the onboarding process by benchmarking three loyalty attitudes:

  • Would Recommend: The likelihood of the customer recommending the brand to a friend, relative or colleague.
  • Customer Advocacy: The extent to which the customer agrees with the statement, “You care about me, not just the bottom line.”
  • Primary Provider: Does the customer consider you their primary provider for financial services?

These three measures, tracked together throughout the onboarding process, will give managers a measure of the effectiveness of stabilizing the relationship.

Again, new customers are at an elevated risk of defection.  Therefore, it is important to stabilize the customer relationship early on to ensure ROI on acquisition costs.  A well-designed research process will give managers an important audit of both the presence and timing of onboarding events, as well as track customer engagement and loyalty early in their tenure.

In the next post, we will explore the third type of experience – experiences with a significant amount of influence on the customer relationship – critical experiences.

 

Click Here For More Information About Kinesis' Bank CX Research Services

A New Normal: Implications for Bank Customer Experience Measurement Post Pandemic – Planned Interactions

Part 2: Research Tools to Monitor Planned Interactions through the Customer Lifecycle

As we explored in an earlier post, Three Types of Customer Experiences CX Managers Must Understand, there are three types of customer interactions: Planned, Stabilizing, and Critical.

Planned interactions are intended to increase customer profitability through the customer lifecycle by engaging customers with relevant planned interactions and content in an integrated omni-channel environment.  Planned interactions will continue to grow in importance as the financial service industry shifts to an integrated digital first model.

These planned interactions are frequently triggered by changes in account usage, financial situation, family profile, etc.  CRM analytics combined with Big Data are becoming quite effective at recognizing such opportunities and prompting action toward planned interactions.  Customer experience managers should have a process to record and analyze the quality of execution of planned interactions with the objective of evaluating their effectiveness – regardless of the channel.

The key to an effective strategy for planned interactions is relevance. Triggered requests for increased engagement must be made in the context of the customer’s needs and with their permission; otherwise, the requests will come off as clumsy and annoying, and give the impression the bank is not really interested in the customer’s individual needs.  By aligning information about execution quality (cause) and customer impressions (effect), customer experience managers can build a more effective and relevant approach to planned interactions.

Research Plan for Planned Interactions

The first step in designing a research plan to test the efficacy of these planned interactions is to define the campaign.  Ask yourself, what customer interactions are planned through these layers of integrated channels?  Mapping the process will define your research objectives, allowing an informed judgment of what to measure and how to measure it.

For example, after acquisition and onboarding, assume a bank has a campaign to trigger planned interactions based on triggers from past engagement.  These planned interactions are segmented into the following phases of the customer lifecycle: engagement, growth, and retention.

Engagement Phase

Often it is instructive to think of customer experience research in terms of the bank-customer interface, employing different research tools to study the customer experience from both sides of this interface.

In our example above, management may measure the effectiveness of planned experiences in the engagement phase with the following research tools:

Customer Side

  • Post-Event Surveys: These surveys are event-driven: a transaction or service interaction determines whether the customer is selected for a survey.  They can be performed across all channels – digital, contact center and in-person.  As the name implies, the purpose of this type of survey is to measure the customer’s experience with a specific interaction.
  • Overall Satisfaction Surveys: Overall satisfaction surveys measure customer satisfaction among the general population of customers, regardless of whether they recently conducted a transaction.  They give managers valuable insight into overall satisfaction, engagement, image and positioning across the entire customer base, not just active customers.
  • Digital Delivery Channel Shopping: Be it a website or mobile app, digital mystery shopping allows managers of these channels to test ease of use, navigation and the overall customer experience of these digital channels.

Brand Side

  • Employee Surveys: Ultimately, employees are at the center of the integrated customer experience model.  Employee surveys often measure employee satisfaction and engagement; however, there is far more value to be gleaned from employees.  We employ them to understand what is going on at the customer-employee interface, leveraging employees as a valuable and inexpensive source of customer experience information.  They not only provide intelligence into the customer experience, but also evaluate the level of support within the organization and identify perceptual gaps between management and frontline personnel.
  • Transactional Mystery Shopping: Mystery shopping is about alignment.  It is an excellent tool to align the customer experience to the brand.  Best-in-class mystery shopping answers the question: is our customer experience consistent with our brand objectives?  Historically, mystery shopping has been conducted in the in-person channel; however, we are seeing increasing use of mystery shopping with contact center agents.

Growth Phase

In the growth phase, we measure the effectiveness of planned experiences on both sides of the customer interface with the following research tools:

Customer Side

  • Awareness Surveys: Awareness of the brand, its products and services, is central to planned service interactions.  Managers need to know how awareness and attitudes change as a result of these planned experiences.
  • Wallet Share Surveys: These surveys evaluate customer engagement with, and loyalty to, the institution.  Specifically, they determine whether customers consider the institution their primary provider of financial services, and identify potential roadblocks to wallet share growth.

Brand Side

  • Cross-Sell Mystery Shopping: In these unique mystery shops, mystery shoppers are seeded into the lead/referral process.  The sales behaviors and their effectiveness are then evaluated in an outbound sales interaction.  These shops work very well in planned sales interactions within the contact center environment.

Retention Phase

Finally, planned experiences within the retention phase of the customer lifecycle may be monitored with the following tools:

Customer Side

  • Critical Incident Technique (CIT): CIT is a qualitative research methodology designed to uncover details surrounding a service encounter that a customer found particularly satisfying or dissatisfying.  It identifies common critical incidents and their impact on the customer experience and customer engagement, giving managers an informed perspective from which to prepare employees to recognize moments of truth and respond in ways that will lead to positive outcomes.
  • Lost Customer Surveys: Closed account surveys identify sources of run-off or churn to provide insight into improving customer retention.
  • Comment Listening: Comment tools are not new, but with modern Internet-based technology they can be used as a valuable feedback tool to identify at-risk customers and mitigate the causes of their dissatisfaction.

Brand Side

  • Employee Surveys: Employees observe the relationship with the customer firsthand.  They are a valuable source of customer experience information, and can provide a lot of context about the types of bad experiences customers frequently encounter.
  • Life Cycle Mystery Shopping: If an integrated channel approach is the objective, one should measure the customer experience in an integrated manner.  In lifecycle shops, shoppers interact with the bank over a period of time, across multiple touch points (digital, contact center and in-person).  This lifecycle approach provides broad and deep observations about sales and service alignment to the brand, and about performance throughout the customer lifecycle across all channels.

Call to Action – Make the Most of the Research

For customer experience surveys, we recommend testing the effectiveness of planned interactions by benchmarking three loyalty attitudes:

  • Would Recommend: The likelihood of the customer recommending the bank to a friend, relative or colleague.
  • Customer Advocacy: The extent to which the customer agrees with the statement, “My bank cares about me, not just the bottom line.”
  • Primary Provider: Does the customer consider the institution their primary provider for financial services?

For mystery shopping, we find linking observations to a dependent variable, such as purchase intent, identifies which sales and service behaviors drive that intent – informing decisions with respect to training and incentives to reinforce those behaviors.
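A linkage analysis of this kind can be sketched as a simple correlation of each observed behavior against the purchase intent rating.  The behaviors and shop records below are hypothetical, purely for illustration; a full driver analysis would typically use regression on a much larger sample:

```python
from math import sqrt
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical mystery shop records: was the behavior observed (1/0),
# and the shopper's purchase intent rating on a 5-point scale.
shops = [
    {"greeted": 1, "needs_assessed": 1, "cross_sold": 1, "purchase_intent": 5},
    {"greeted": 1, "needs_assessed": 1, "cross_sold": 0, "purchase_intent": 4},
    {"greeted": 1, "needs_assessed": 0, "cross_sold": 0, "purchase_intent": 3},
    {"greeted": 0, "needs_assessed": 0, "cross_sold": 0, "purchase_intent": 2},
    {"greeted": 1, "needs_assessed": 1, "cross_sold": 1, "purchase_intent": 5},
    {"greeted": 0, "needs_assessed": 0, "cross_sold": 0, "purchase_intent": 3},
]

intent = [s["purchase_intent"] for s in shops]
drivers = {b: round(pearson([s[b] for s in shops], intent), 2)
           for b in ("greeted", "needs_assessed", "cross_sold")}

# Behaviors with the strongest correlation to purchase intent are the
# best candidates for reinforcement through training and incentives.
ranked = sorted(drivers.items(), key=lambda kv: -kv[1])
print(ranked)
```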

As the integrated digital first business model accelerates, planned interactions will continue to grow in importance, and managers of the customer experience should build customer experience monitoring tools to evaluate the efficacy of these planned experiences in terms of driving desired customer attitudes and behaviors.

In the next post, we will take a look at stabilizing experiences, and their implications for customer experience research.

 

 


A New Normal: Implications for Bank Customer Experience Measurement Post Pandemic – Three Types of Customer Experiences

Part 1: Three Types of Customer Experiences CX Managers Must Understand

COVID-19 Crisis Accelerating Change

The transformation began decades ago.  Like a catalyst in a chemical reaction, the COVID-19 crisis has accelerated the transformation away from in-person channels.  Recognizing paradigm shifts in the moment is often difficult; however, a long-coming paradigm shift appears to be upon us.

Shifts away from one thing require a shift toward another.  The shift away from an in-person-first approach is a shift toward a digital-first approach with increasingly integrated layers of engagement and expertise.

Digital First with Integrated Layers of Engagement & Expertise

Digital apps allow for near-continuous engagement with customers.  Apps now sit in customers’ pockets, available on demand when and where customers need them.  This communication works both ways, with the customer providing information to the bank and the bank informing the customer.  Managers of the customer experience can now deliver contextually relevant information directly to the customer.  Automated advice and expertise is in its infancy, but shows promise.  Chatbots and other preprogrammed help can start the process of delivering assistance and expertise when requested.

Contact centers are the next logical layer of this integrated customer experience.  Contact centers are an excellent channel to deliver general customer service and advice, as well as expert advice for more sophisticated financial needs.  Kinesis has clients with Series 7 representatives and wealth managers providing expert financial advice via video conference.

The role of the branch obviously includes providing expert advice.  Branches will continue to become smaller, more flexible, less monolithic, and tailored to the location and market.  Small community centers will focus on community outreach, while larger flagship branches sit at the center of an integrated hub-and-spoke model – a model that includes digital and contact centers.

Three Types of Experiences

Every time a customer interacts with a bank, regardless of channel, they learn something about the bank and adjust their behavior based on what they learn.  This is the core component of customer experience management: to teach customers to behave in profitable ways.  It is incumbent on managers of the customer experience to understand the different types of customer experiences and their implications for managing the customer experience in this manner.  Customer experiences come in a variety of forms; however, there are three types customer experience managers should be alert to: planned, stabilizing, and critical experiences.

Planned

Planned interactions are intended to increase customer profitability by engaging customers in meaningful conversations in an integrated omni-channel environment. These interactions can be triggered by changes in the customers’ purchasing patterns, account usage, financial situation, family profile, etc. CRM analytics combined with Big Data are becoming quite effective at recognizing such opportunities and prompting action.  Customer experience managers should have a process to record and analyze the quality of execution of planned interactions, with the objective of evaluating their performance.

The key to an effective strategy for planned interactions is appropriateness. Triggered requests for increased spending must be made in the context of the customer’s needs and with their permission; otherwise the requests will come off as clumsy, annoying, and not customer centric. By aligning information about execution quality (cause) and customer actions (effect), customer experience managers can build a more effective and appropriate approach to planned interactions.

In future posts, we will look at planned experiences and consider their implications in light of this shift toward a digital first approach.

Stabilizing

Stabilizing interactions promote customer retention, particularly in the early stages of the relationship.

New customers are at the highest risk of defection.  Long-term customers know what to expect from their bank, and due to self-selection, their expectations tend to be aligned with their experience.  New customers are more likely to experience disappointment, and thus more likely to defect. Turnover by new customers is particularly unprofitable because many defections occur prior to the break-even point of customer acquisition costs, resulting in a net loss on the customer. Thus, experiences that stabilize the customer relationship early ensure a higher proportion of customers will reach positive profitability.

The keys to an effective stabilizing strategy are education, consistency, and competence. Education influences expectation and helps customers develop realistic expectations.  It goes beyond simply informing customers about the products and services offered.  It systematically informs new customers how to use the bank’s services more effectively and efficiently: how to obtain assistance, how to complain, and what to expect as the relationship progresses. For an integrated digital first business model to work, customers need to learn how to use self-administered channels and know how, and when, to access the deeper layers offering more engagement and expertise.

In future posts, we will look at stabilizing experiences and consider their implications in light of this shift toward a digital first approach.

Critical

Critical interactions are events that lead to memorable customer experiences.  While most customer experiences are routine, from time to time a situation arises that is out of the ordinary: a complaint, a question, a special request, a chance for an employee to go the extra mile.  Today, many of these critical experiences occur amidst the underlying stresses of the COVID-19 crisis.  The outcomes of these critical incidents can be either positive or negative, depending upon the way the bank responds to them; however, they are seldom neutral. The longer a customer remains with a financial institution, the greater the likelihood that one or more critical experiences will occur – particularly in a time of crisis, like the pandemic.

Because they are memorable and unusual, critical interactions tend to have a powerful effect on the customer relationship.  We often think of these as “moments of truth,” where the institution has an opportunity to solidify the relationship, earning a loyal customer, or risk the customer’s defection.  Positive outcomes lead to “customer delight” and word-of-mouth endorsements, while negative outcomes lead to customer defections, diminished share of wallet and unfavorable word-of-mouth.

The key to an effective critical interaction strategy is opportunity. Systems and processes must be in a position to react to these critical moments of truth. An effective customer experience strategy should include systems for recording critical interactions, analyzing trends and patterns, and feeding that information back to management.  This can be particularly challenging in an integrated Omni-channel environment.  Holistic customer profiles need to be available across channels, and employees must be trained to recognize critical opportunities and empowered to respond to them in such a way that they will lead to positive outcomes and desired customer behaviors.

In future posts, we will look at critical experiences and consider their implications in light of this shift toward a digital first approach.

In the next post we will explore planned interactions, and tools to monitor them through this accelerating change in distribution model.


 

Not All Customer Experience Variation is Equal: Use Control Charts to Identify Actual Changes in the Customer Experience

Variability in customer experience scores is common and normal. Be it a survey of customers, mystery shops, social listening or other customer experience measurement, a certain amount of random variation in the data is normal. As a result, managers need a means of interpreting any variation in their customer experience measurement to evaluate if the customer experience is truly changing, or if the variation they are seeing is simply random.

One solution to this need is control charts. Control charts are a statistical tool commonly used in Six Sigma programs to measure variation. They track customer experience measurements within upper and lower quality control limits. When measurements fall outside either limit, the trend indicates an actual change in the customer experience rather than just random variation.

To illustrate this concept, consider the following example of mystery shop results:

Mystery Shop Scores

In this example, the general trend of the mystery shop scores is up; however, from month to month there is a bit of variation.  Managers of this customer experience need to know whether July was a particularly bad month and, conversely, whether the improved performance in October and November is something to be excited about.  Does it represent a true change in the customer experience?

To answer these questions, there are two more pieces of information we need to know beyond the average mystery shop scores: the sample size or count of shops for each month and the standard deviation in shop scores for each month.

The following table adds these two additional pieces of information into our example:

Month      Count of Shops   Average Score   Standard Deviation
May        510              83%             18%
June       496              84%             18%
July       495              82%             20%
Aug        513              83%             15%
Sept       504              83%             15%
Oct        489              85%             14%
Nov        494              85%             15%
Averages   500              83.6%           16.4%

Now, in order to determine whether the variation in shop scores is significant, we need to calculate upper and lower quality control limits; any variation above or below these limits is significant, reflecting an actual change in the customer experience.

The upper and lower quality control limits (UCL and LCL, respectively), at a 95% confidence level, are calculated according to the following formulas:

UCL = x + 1.96 × (SD / √n)

LCL = x − 1.96 × (SD / √n)

Where:

x = Grand Mean of the score

n = Mean sample size (number of shops)

SD = Mean standard deviation
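Plugging the summary figures from the table into these formulas is straightforward.  A minimal Python sketch (variable names are ours, for illustration):

```python
from math import sqrt

# Summary figures from the table above.
grand_mean = 0.836  # grand mean mystery shop score
mean_n = 500        # mean number of shops per month
mean_sd = 0.164     # mean standard deviation of shop scores

# 95% confidence limits: grand mean +/- 1.96 standard errors.
margin = 1.96 * mean_sd / sqrt(mean_n)
ucl = grand_mean + margin
lcl = grand_mean - margin
print(f"UCL = {ucl:.1%}, LCL = {lcl:.1%}")  # UCL = 85.0%, LCL = 82.2%

# Months whose average score falls outside the limits reflect an
# actual change in the experience rather than random variation.
monthly = {"May": 0.83, "June": 0.84, "July": 0.82,
           "Aug": 0.83, "Sept": 0.83, "Oct": 0.85, "Nov": 0.85}
flagged = [m for m, s in monthly.items() if s > ucl or s < lcl]
```

Note that with the rounded monthly averages shown here, only July falls outside the limits; the control chart in the post is drawn from the unrounded scores, which also place November above the upper limit.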

Applying these equations to the data in the table above produces the following control chart, where the upper and lower quality control limits are depicted in red.

Control Chart

This control chart tells us not only that the general trend of the mystery shop scores is positive, with November’s performance improving above the upper control limit, but also that something unusual happened in July, where performance slipped below the lower control limit.  Maybe employee turnover caused the decrease, or something external such as a weather event, but we know with 95% confidence that the attributes measured in July were less present relative to the other months.  All other variation outside of November and July is not large enough to be considered statistically significant.

So, what this control chart gives managers is a meaningful way to determine whether variation in their customer experience measurement reflects an actual change in the experience, as opposed to random variation or chance.

In the next post, we will look to the causes of this variation.

Next post:

Not All Customer Experience Variation is Equal: Common Cause vs. Special Cause Variation

 

Click Here For More Information About Kinesis' Research Services

Implications of CX Consistency for Researchers – Part 3 – Common Cause v Special Cause Variation

Previously, we discussed the implications of intra-channel consistency for researchers.

This post considers two types of variation in the customer experience: common and special cause variation, and their implications for customer researchers.

The concepts of common and special cause variation are derived from the process management discipline Six Sigma.

Common cause variation is normal or random variation within the system.  It is statistical noise within the system.   Examples of common cause variation in the customer experience are:

  • Poorly defined, poorly designed, inappropriate policies or procedures
  • Poor design or maintenance of computer systems
  • Inappropriate hiring practices
  • Insufficient training
  • Measurement error

Special cause variation, on the other hand, is not random.  It does not conform to the laws of probability.  It is the signal within the system.  Examples of special cause variation include:

  • High demand/ high traffic
  • Poor adjustment of equipment
  • Just having a bad day

What are the implications of common and special cause variation for customer experience researchers?

Given the differences between common cause and special cause variation, researchers need a tool to help them distinguish between the two: a means of determining whether any observed variation in the customer experience is statistical noise or a signal within the system.  Control charts are a statistical tool for making that determination.

Control charts track measurements within upper and lower quality control limits.  These quality control limits define statistically significant variation over time (typically at 95% confidence), which means there is a 95% probability that variation outside the limits is the result of an actual change in the customer experience (special cause variation), not just normal common cause variation.  Observed variation within these quality control limits is common cause variation.  Variation which migrates outside these quality control limits is special cause variation.

To illustrate this concept, consider the following example of mystery shop results:

Mystery Shop Scores

This chart depicts a set of mystery shop scores which both vary from month to month and generally appear to trend upward.

Customer experience researchers need to provide managers a means of determining if the month to month variation is statistical noise or some meaningful signal within the system.  Turning this chart into a control chart by adding statistically defined upper and lower quality control limits will determine if the monthly variation is common or special cause.

To define quality control limits, the customer experience researcher needs to determine the count of observations for each month, the monthly standard deviation, and the average count of shops across all months.

The following table adds these three additional pieces of information into our example:

 

Month      Count of Shops   Average Score   Standard Deviation
May        510              83%             18%
June       496              84%             18%
July       495              82%             20%
Aug        513              83%             15%
Sept       504              83%             15%
Oct        489              85%             14%
Nov        494              85%             15%
Averages   500              83.6%           16.4%

To define the upper and lower quality control limits (UCL and LCL, respectively), apply the following formulas:

UCL = x + 1.96 × (SD / √n)

LCL = x − 1.96 × (SD / √n)

Where:

x = Grand Mean of the score

n = Mean sample size (number of shops)

SD = Mean standard deviation

These equations yield quality control limits at 95% confidence, which means there is a 95% probability that any variation observed outside these limits is special cause variation, rather than normal common cause variation within the system.
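As a sketch, the same arithmetic can be wrapped in a small helper that labels each period’s variation as common or special cause.  The data are the rounded monthly averages from the example table; the function and names are ours, for illustration:

```python
from math import sqrt

def classify(scores, mean_n, mean_sd, z=1.96):
    """Label each period's variation as common or special cause,
    using control limits at the given z value (1.96 ~ 95% confidence)."""
    grand_mean = sum(scores.values()) / len(scores)
    margin = z * mean_sd / sqrt(mean_n)
    lcl, ucl = grand_mean - margin, grand_mean + margin
    return {period: "special" if s < lcl or s > ucl else "common"
            for period, s in scores.items()}

labels = classify(
    {"May": 0.83, "June": 0.84, "July": 0.82,
     "Aug": 0.83, "Sept": 0.83, "Oct": 0.85, "Nov": 0.85},
    mean_n=500, mean_sd=0.164)
```

With these rounded inputs only July is flagged as special cause; the control chart in this example, built from the unrounded scores, also places the most recent month above the upper limit.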

Calculating these quality control limits and applying them to the above chart produces the following control chart, with upper and lower quality control limits depicted in red:

Control Chart

This control chart now answers the question: which variation is common cause and which is special cause?  The general upward trend appears to be statistically significant, with the most recent month above the upper quality control limit.  Additionally, this control chart identifies a period of special cause variation in July.  With 95% confidence, we know some special cause drove the scores below the lower control limit.  Perhaps this special cause was employee turnover, perhaps a new system rollout, or perhaps a weather event that impacted the customer experience.


Implications of CX Consistency for Researchers – Part 2 – Intra-Channel Consistency

Previously, we discussed the implications of inter-channel consistency for researchers, and introduced a process for management to define a set of employee behaviors which will support the organization’s customer experience goals across multiple channels.

This post considers the implications of intra-channel consistency for customer experience researchers.

As with cross-channel consistency, intra-channel consistency (consistency within individual channels) requires the researcher to identify the causes of variation in the customer experience.  The cause of intra-channel variation is more often than not at the local level: the individual stores, branches, employees, etc.  For example, a bank branch with large variation in customer traffic is more likely to experience variation in the customer experience.

Regardless of the source, consistency equals quality.

In our own research, Kinēsis conducted a mystery shop study of six national institutions to evaluate the customer experience at the branch level.  In this research, we observed a similar relationship between consistency and quality.  The branches in the top quartile in terms of consistency delivered customer satisfaction scores 15% higher than branches in the bottom quartile.  But customer satisfaction is a means to an end, not an end goal in and of itself.  In terms of an end business objective, such as loyalty or purchase intent, branches in the top quartile of consistency delivered purchase intent ratings 20% higher than branches in the bottom quartile.

Satisfaction and purchase intent by customer experience consistency

Purchase intent and satisfaction with the experience were both measured on a 5-point scale.

Again, it is incumbent on customer experience researchers to identify the causes of inconsistency.  A search for the root cause of variation in customer journeys must consider process-cause variation.

One tool to measure process cause variation is a Voice of the Customer (VOC) Table. VOC Tables have a two-fold purpose:  First, to identify specific business processes which can cause customer experience variations, and second, to identify which business processes will yield the largest ROI in terms of improving the customer experience.

VOC Tables provide a clear road map to identify action steps using a vertical and horizontal grid.  On the vertical axis, each customer experience attribute within a given channel is listed.  For each of these attributes a judgment is made about its relative importance, expressed as a numeric value.  On the horizontal axis is an exhaustive list of business processes the customer is likely to encounter, both directly and indirectly, in the customer journey.

This grid design matches each business process on the horizontal axis to each service attribute on the vertical axis.  Each cell in the grid contains a value representing the strength of influence of that business process on that customer experience attribute.

Finally, a value is calculated at the bottom of each column which sums the values of the strength of influence multiplied by the importance of each customer experience attribute.  This yields a value of the cumulative strength of influence of each business process on the customer experience weighted by its relative importance.
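The column calculation can be sketched as follows; the attribute names, importance weights and influence ratings below are all hypothetical:

```python
# Hypothetical VOC table inputs: importance weight per attribute, and
# strength-of-influence ratings (1-3) per business process per attribute.
importance = {"kept informed": 4.2, "clear explanations": 3.8, "timeliness": 3.1}

# strength[process][attribute] -> influence rating
strength = {
    "loan application":    {"kept informed": 2, "clear explanations": 3, "timeliness": 1},
    "document collection": {"kept informed": 3, "clear explanations": 1, "timeliness": 2},
}

# Weighted strength of influence for each process: sum over attributes of
# (influence rating x attribute importance) -- the value at the bottom of each column.
weighted = {
    process: sum(importance[attr] * rating for attr, rating in ratings.items())
    for process, ratings in strength.items()
}

# Rank processes by their weighted influence on the customer experience.
ranked = sorted(weighted.items(), key=lambda kv: kv[1], reverse=True)
```

The processes at the top of `ranked` are the candidates for the highest ROI in terms of improving the customer experience.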

Consider the following example in a retail mortgage lending environment.

VOC Table

In this example, the relative importance of each customer experience attribute was determined by correlating these attributes to a “would recommend” question, which served as a loyalty proxy.  This yields an estimate of importance based on each attribute’s strength of relationship to customer loyalty, and populates the far-left column.  Specific business processes for the mortgage process are listed across the top of the table.  Within each cell, an informed judgment has been made regarding the relative strength of the business process’s influence on the customer experience attribute, assigned a value of 1 – 3.  This strength of influence is multiplied by the importance measure of each customer experience attribute and summed into a weighted strength of influence – weighted by importance – for each business process.

In this example, the business processes which will yield the highest ROI in terms of driving the customer experience are quote of loan terms (weighted strength of influence 23.9), clearance of exemptions (22.0), explanation of loan terms (20.2), loan application (18.9) and document collection (16.3).

Next, we will look into the concepts of common and special cause variation, and another research methodology designed to identify areas for attention.  Control charts are just such a tool.


Business Case and Implications for Consistency – Part 3: The Causal Chain from Consistency to Customer Loyalty

In an earlier post we made the business case for consistency: consistency drives customer loyalty.  This post describes the causal chain from consistency to customer loyalty.

Brands are defined by how customers experience them, and they will have both an emotional and behavioral reaction to what they experience.  It is these reactions to the customer experience which drive satisfaction, loyalty and profitability.

There is a causal chain from consistency to customer loyalty.  McKinsey and Company concluded in their 2014 report, The Three Cs of Customer Satisfaction: Consistency, Consistency, Consistency, that feelings of trust are the strongest drivers of customer satisfaction and loyalty, and consistency is central to building customer trust.

For example, in our experience in the banking industry, institutions in the top quartile of consistent delivery are 30% more likely to be trusted by their customers compared to the bottom quartile.  Furthermore, agreement with the statements: my bank is “a brand I feel close to” and “a brand that I can trust” are significant drivers of brand differentiation as a result of the customer experience.  Again, brands are defined by how customers experience them.  In today’s environment where consumer trust in financial institutions is extremely low, fostering trust is critical for driving customer loyalty.  Consistency fosters trust.  Trust drives loyalty.

In our next post we will continue to explore the business case for consistency by considering the influence of poor experiences.


Business Case and Implications for Consistency – Part 2: Business Case for Consistency

In a previous post we considered why humans value consistency.

Loyalty is the holy grail of managing the customer experience.

The foundation of customer loyalty is consistency. In a 2014 research paper entitled The Three Cs of Customer Satisfaction: Consistency, Consistency, Consistency, McKinsey & Company concluded that trust, driven by consistent experiences, is the strongest driver of customer loyalty and satisfaction.

Kinēsis believes that each time a brand and a customer interact, the customer learns something about the brand and adjusts their behavior based on what they learn. There is real power in understanding this proposition: in it is the power to steer the customer toward profitable behaviors and away from unprofitable ones. One of these behaviors is repeat purchase, or loyalty.

Customer loyalty takes time to build. Feelings of security and confidence in a brand are built up by consistent customer experiences over a sustained period of time. Across all industries, customers want a good, consistent experience with the products and services they use.

The value of customer loyalty is obvious. Kinēsis has found the concept of the “loyalty effect” to be an excellent framework for illustrating the value of loyalty. The loyalty effect is a proposition that states that customer profitability increases with customer tenure. Consider the following chart of customer profit contribution to customer tenure:

This curve of profit contribution per customer over time is called the loyalty curve. At acquisition, profit contribution is negative as a result of the cost of customer acquisition. After acquisition, customer profit contribution increases with time as a result of revenue growth, cost savings, referrals and price premiums. Loyal customers receiving consistent customer experiences require less customer education, generate fewer complaints, place fewer calls with shorter handle times, and are more efficient across the board.

In the next post we will explore the causal chain from consistency to customer loyalty.


Business Case and Implications for Consistency – Part 1: Why We Value Consistency

Humans value consistency – we are hard wired to do so – it’s in our DNA.

It is generally believed that modern humans originated on the Savanna Plain. Life was difficult for our distant forefathers. Sources of water, food and shelter were unreliable. Dangers existed at every turn. Evolving in this unreliable and hostile environment, natural selection hard-wired into modern humans a preference for consistency. We seek security in an insecure world.

In this context, it is not surprising we evolved to value consistency. While our modern world is a far more reliable environment, our brains are still hard wired to value consistency.

The implication for managers of the customer experience is obvious – customers want and value consistency in the customer experience. We’ve all felt it. When a car fails to start, when the power goes out, when software crashes, we all feel uncomfortable. A lack of reliability and consistency creates confusion and frustration. We want confidence that reliable events like starting the car, turning on the lights or using software will work consistently. In the customer experience realm, we want confidence that the brands we have relationships with will deliver on their brand promise each time, without variation in quality.

Customers expect consistent delivery on the brand promise. They base their expectations on prior experience. Customers are thus in a self-reinforcing cycle, where expectations set by prior experiences continually reinforce the importance of consistency. This is the foundation of customer loyalty. We are creatures of habit. Customer loyalty is built on a foundation of dependable, consistent, quality service delivery.

While we evolved in a difficult and unreliable environment, modern society offers a much more consistent existence. Again, it’s a self-reinforcing cycle: the product quality and consistency of our mass-production economy have reinforced our expectations of consistency.

Today’s information technology continues to reinforce our desire for consistency. However, it adds an additional element of customization. Henry Ford, the father of mass production, famously said of the Model-T, “You can have any color you want as long as it’s black.” Those days are gone. Today, we expect both consistency and customization.

In the next post, we will explore the business case for consistency.


Mystery Shopping Gap Analysis: Identify Service Attributes with Highest Potential for ROI

Research without call to action may be interesting, but in the end, not very useful.

This is particularly true with customer experience research.  It is incumbent on customer experience researchers to give management research tools which identify clear call-to-action items – items in which investments will yield the highest return on investment (ROI) in terms of meeting management’s customer experience objectives.  This post introduces a simple, intuitive mystery shopping analysis technique that identifies the service behaviors with the highest potential ROI in terms of achieving these objectives.

Mystery shopping gap analysis is a simple three-step analytical technique.

Step 1: Identify the Key Objective of the Customer Experience

The first step is to identify the key objective of the customer experience.  Ask yourself, “How do we want the customer to think, feel or act as a result of the customer experience?”

For example:

  • Do you want the customer to have increased purchase intent?
  • Do you want the customer to have increased return intent?
  • Do you want the customer to have increased loyalty?

Let’s assume the key objective is increased purchase intent.  At the conclusion of the customer experience you want the customer to have increased purchase intent.

Next draft a research question to serve as a dependent variable measuring the customer’s purchase intent.  Dependent variables are those which are influenced or dependent on the behaviors measured in the mystery shop.

Step 2: Determine the Strength of the Relationship to the Key Customer Experience Objective

After fielding the mystery shop study and collecting a statistically significant number of shops, the next step is to determine the strength of the relationship between this key customer experience measure (the dependent variable) and each behavior or service attribute measured (the independent variables).  There are a number of ways to determine the strength of the relationship; perhaps the easiest is a simple cross-tabulation of the results.  Cross-tabulation groups the shops with positive purchase intent and those with negative purchase intent, then compares the two groups.  The greater the difference in the frequency of a given behavior or service attribute between the two groups, the stronger its relationship to purchase intent.

The result of this cross-tabulation yields a measure of the importance of each behavior or service attribute.  Those with stronger relationships to purchase intent are deemed more important than those with weaker relationships to purchase intent.
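A minimal sketch of this cross-tabulation, using hypothetical shop records and two illustrative behaviors:

```python
# Each shop record: observed behaviors (True/False) plus the dependent
# variable, positive purchase intent. All data are hypothetical.
shops = [
    {"greeted": True,  "offered_product": True,  "positive_intent": True},
    {"greeted": True,  "offered_product": False, "positive_intent": False},
    {"greeted": False, "offered_product": True,  "positive_intent": True},
    {"greeted": True,  "offered_product": False, "positive_intent": False},
]

behaviors = ["greeted", "offered_product"]

def frequency(group, behavior):
    """Share of shops in the group where the behavior was observed."""
    return sum(s[behavior] for s in group) / len(group)

positive = [s for s in shops if s["positive_intent"]]
negative = [s for s in shops if not s["positive_intent"]]

# Importance proxy: gap in behavior frequency between the two groups.
# A larger gap implies a stronger relationship to purchase intent.
importance = {b: frequency(positive, b) - frequency(negative, b)
              for b in behaviors}
```

In this toy data, offering a product tracks purchase intent perfectly, while greeting shows no positive relationship; with real data, these gaps become the importance scores used in the next step.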

Step 3: Plot the Performance of Each Behavior Relative to Its Relationship to the Key Customer Experience Objective

The third and final step in this analysis is to plot the importance of each behavior relative to its performance on a two-dimensional quadrant chart, where one axis is the importance of the behavior and the other is its performance – the frequency with which it is observed.

Interpretation

Interpreting the results of this quadrant analysis is fairly simple.  Behaviors with above-average importance and below-average performance are the “high potential” behaviors – those with the highest potential return on investment (ROI) in terms of driving purchase intent.  These are the behaviors to prioritize for investments in training, incentives and rewards.

The rest of the behaviors are prioritized as follows:

Those with high importance and high performance are the next priority.  They are the behaviors to maintain: they are important and employees perform them frequently, so invest to maintain their performance.

Those with low importance and low performance are areas to address if resources are available.

Finally, behaviors or service attributes with low importance yet high performance are in no need of investment.  They are performed frequently but are not very important, and will not yield an ROI in terms of driving purchase intent.
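The quadrant logic above can be sketched as follows, with hypothetical importance and performance scores per behavior:

```python
# Hypothetical (importance, performance) scores per behavior.
behaviors = {
    "greeting":      (0.2, 0.9),
    "needs_probe":   (0.8, 0.4),
    "product_offer": (0.7, 0.8),
    "thank_you":     (0.1, 0.3),
}

# Quadrant boundaries: the averages across all behaviors.
avg_imp = sum(i for i, _ in behaviors.values()) / len(behaviors)
avg_perf = sum(p for _, p in behaviors.values()) / len(behaviors)

def quadrant(importance, performance):
    """Assign each behavior to one of the four priority quadrants."""
    if importance >= avg_imp:
        return "high potential" if performance < avg_perf else "maintain"
    return "address if resources allow" if performance < avg_perf else "no investment needed"

priorities = {name: quadrant(i, p) for name, (i, p) in behaviors.items()}
```

Here the under-performed but important needs probe lands in the high-potential quadrant, while the frequently performed but unimportant greeting needs no investment.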

Research without call to action may be interesting, but in the end, not very useful.

This simple, intuitive gap analysis technique will provide a clear call to action in terms of identifying service behaviors and attributes which will yield the most ROI in terms of achieving your key objective of the customer experience.
