Measure the Customer Experience in an Integrated Cross Channel Environment
Success in retail banking requires meeting customers in the correct channel for their current waypoint in the customer journey.
A waypoint is a point of reference when navigating a journey.
Not all waypoints are equal. Customers prefer different channels based on the waypoint in their customer journey. As a result, different channels have assumed different roles in the customer journey. The challenge for customer experience managers is to provide an integrated customer experience across all waypoints.
Kinesis’ research has identified specific roles for each integrated channel in the customer journey:
| Channel | Role in the Customer Journey |
| --- | --- |
| Mobile | Transaction Tool |
| Web | Primary: Research Tool; Secondary: Sales & Transfers |
| Contact Center | Help Center; Source of Advice |
| Branch | Sales Center; Source of Advice |
The mobile channel is seen by customers as a transaction tool; the website’s role is broader, as a research, transaction and sales channel; contact centers are primarily a help center; and the branch is primarily a sales and advice channel.
This post offers a framework to measure individual channels in a way that will provide both channel specific direction in managing the experience, as well as benchmarking each channel against each other using consistent measurements.
Two CX Risks: Exposure and Moments of Truth
In designing a customer experience measurement program, it is instructive to think of the omni-channel experience in terms of two risks: exposure and moments of truth.
Exposure risk is a function of the frequency of customer interactions within each channel. Poor experiences in high-frequency channels are replicated across more customers, exposing more of the customer base to poor experiences. Mobile apps are the most frequently used channel: according to our research, customers use mobile banking apps 24 times more frequently than they visit a branch, giving mobile banking the most exposure risk. Websites are used by banking customers 16 times more frequently than a branch, followed by contact centers, used 2.3 times more frequently than branches.
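To make exposure risk concrete, the branch-relative frequencies above can be used to weight per-channel experience scores, so that a weak score in a high-frequency channel counts for more. A minimal sketch; the per-channel CX scores here are hypothetical placeholders:

```python
# Branch-relative interaction frequencies cited above; CX scores are hypothetical.
frequency_vs_branch = {"mobile": 24.0, "web": 16.0, "contact_center": 2.3, "branch": 1.0}
cx_score = {"mobile": 4.1, "web": 4.3, "contact_center": 3.8, "branch": 4.6}

# Normalize frequencies into exposure weights.
total = sum(frequency_vs_branch.values())
exposure_weight = {ch: f / total for ch, f in frequency_vs_branch.items()}

# A poor mobile score drags the weighted average far more than a poor branch score.
weighted_avg = sum(exposure_weight[ch] * cx_score[ch] for ch in cx_score)
print(round(weighted_avg, 2))  # prints 4.17 for these hypothetical scores
```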
Moments of Truth
Moments of truth are critical experiences with more individual importance. Poor experiences in a moment of truth interaction lead to negative customer emotions, with similarly negative impacts on customer profitability and word of mouth.
Routine transactions, like transfers or deposits, represent low moment of truth risk; problem resolution or account opening are significant moments of truth.
Exposure & Moment of Truth Risk by Channel
Different channels represent exposure and moment of truth risk in different ways.
The mobile channel’s role is primarily a transaction tool. According to our research the mobile channel is the preferred channel for both transfers (58%) and deposits (53%). It, therefore, has the highest exposure risk and lowest moment of truth risk.
The website is a mixed channel between research, transactions and opening accounts. A plurality of customers (40%) consider the website their preferred channel to get information, followed by transfers (33%) and opening accounts (31%). As a result, the web channel has a mix of exposure and moment of truth risk.
The contact center is primarily viewed as a channel for problem resolution (51%), followed by an advice and information source (27% and 23%, respectively). It represents low exposure risk and elevated moment of truth risk.
Finally, the branch is primarily a source of advice and account opening (53% and 51%, respectively). With infrequent use and high-impact customer experiences, the branch has very low exposure risk and significant moment of truth risk.
Understanding Exposure and Moments of Truth Risk to Inform CX Measurement
This concept of risk, along the dimensions of exposure and moments of truth, provides an excellent framework for informing customer experience measurement.
Digital channels with high exposure risk should be tested thoroughly with usability, focus groups, ethnography and other qualitative research to ensure features meet customer needs and are programmed correctly. Once programmed and tested, they need to be monitored with ongoing audits.
Channels with higher moment of truth risk are best monitored with post-transaction surveys, mystery shopping and the occasional focus group.
| Exposure Risk | Moments of Truth |
| --- | --- |
| Design Focus Groups | Post-Transaction Surveys |
Integrated CX Measurement Design
When measuring the customer experience across multiple channels in an integrated manner, we recommend gathering both consistent measures across all channels and measures specific to each channel. Each channel has its own specific needs; however, consistent measures across all channels provide context and a point of comparison.
Cross-channel consistency is key to the customer experience. Inconsistent experiences confuse and frustrate customers and risk eroding brand value.
The consistent cross-channel measures Kinesis prefers to use are measures of the brand personality and efficacy of the customer experience.
| Brand Personality | Efficacy of the Experience |
| --- | --- |
| Brand Adjectives | Purchase Intent |
| Brand Statements | Likelihood of Referral |
| | Customer Advocacy |
Brand Personality: To measure brand personality, Kinesis asks clients to list five adjectives that describe their brand personality. Then we simply ask customers whether each adjective describes the customer experience. We also ask clients to give us five statements that describe their desired brand, and measure the experience with an agreement scale. For example, a client may want their brand to be described by the statement: We are committed to the community. We would then ask respondents the extent to which they agree with the statement: We are committed to the community. These measures of brand adjectives and brand statements give managers of the customer experience a clear benchmark from which to evaluate how well each channel reflects the desired brand personality.
Efficacy of the Experience: Ultimately, the goal of the customer experience is to produce the intended result – results like loyalty, increased wallet share, or lower transaction costs. Kinesis has had success using three measures to evaluate the efficacy of the customer experience:
- Purchase Intent: Purchase intent is an excellent measure of efficacy of the experience. To measure purchase intent we ask respondents how the experience influenced their intention to either open an account or maintain an existing relationship with the financial institution.
- Likelihood of Referral: The use of measures of likelihood of referral, like NPS, as a proxy for customer loyalty is almost universally accepted, and as a result, is often an excellent measure of efficacy of the experience.
- Customer Advocacy: Beyond likelihood of referral, agreement with the statement, My bank cares about me, not just the bottom line, is an excellent predictor of customer loyalty.
Channel Specific Attributes
In addition to consistent cross-channel measurements, it is important to focus on channel specific customer experience attributes. While consistent measures across channels provide a benchmark to brand objectives, measuring specific service attributes provides actionable information about how to improve the customer experience in each specific channel.
In designing channel specific research features, ask yourself what specific service attributes or behaviors do you expect from each channel. The answer to these questions will depend on the channel and your brand objectives. In general, they typically roll up to the following broad dimensions of the customer experience:
Specific Channel Dimensions
| Digital Channels | Personal Channels |
| --- | --- |
| Appeal | Reliability |
| Identity | Responsiveness |
| Navigation | Empathy |
| Content/Presentation | Competence |
| Value | Tangibles |
| Trust | |
For digital channels, the best specific attributes to measure are those associated with appeal, identity, navigation, content/presentation, value, and trust. For personal channels, such as contact centers and branches, we find the best attributes are associated with the dimensions of reliability, responsiveness, empathy, competence, and tangibles.
Not all waypoints in the customer journey are equal. Customer experience researchers need to consider the role of each channel in the customer journey and design measurement tools with both channel-specific observations and consistent measures across all channels.
A New Normal: Implications for Bank Customer Experience Measurement Post Pandemic – Planned Interactions
Part 2: Research Tools to Monitor Planned Interactions through the Customer Lifecycle
As we explored in an earlier post, Three Types of Customer Experiences CX Managers Must Understand, there are three types of customer interactions: Planned, Stabilizing, and Critical.
Planned interactions are intended to increase customer profitability through the customer lifecycle by engaging customers with relevant interactions and content in an integrated omni-channel environment. Planned interactions will continue to grow in importance as the financial services industry shifts to an integrated digital-first model.
These planned interactions are frequently triggered by changes in account usage, financial situation, family profile, etc. CRM analytics combined with Big Data are becoming quite effective at recognizing such opportunities and prompting action toward planned interactions. Customer experience managers should have a process to record and analyze the quality of execution of planned interactions with the objective of evaluating their effectiveness – regardless of the channel.
The key to an effective strategy for planned interactions is relevance. Triggered requests for increased engagement must be made in the context of the customer’s needs and with their permission; otherwise, the requests will come off as clumsy and annoying, and give the impression the bank is not really interested in the customer’s individual needs. By aligning information about execution quality (cause) and customer impressions (effect), customer experience managers can build a more effective and relevant approach to planned interactions.
Research Plan for Planned Interactions
The first step in designing a research plan to test the efficacy of these planned interactions is to define the campaign. Ask yourself: what customer interactions are planned through these layers of integrated channels? Mapping the process will define your research objectives, allowing an informed judgment of what to measure and how to measure it.
For example, assume that after acquisition and onboarding, a bank has a campaign of planned interactions triggered by past engagement. These planned interactions are segmented into the following phases of the customer lifecycle: engagement, growth, and retention.
Often it is instructive to think of customer experience research in terms of the bank-customer interface, employing different research tools to study the customer experience from both sides of this interface.
In our example above, management may measure the effectiveness of planned experiences in the engagement phase with the following research tools:
| Customer Side | Brand Side |
| --- | --- |
| Post-Experience Surveys | Employee Surveys |
| Overall Satisfaction Surveys | Digital Delivery Channel Shopping |
| | Transactional Mystery Shopping |
Post-Experience Surveys

These post-experience surveys are event-driven: a transaction or service interaction determines whether the customer is selected for a survey. They can be performed across all channels (digital, contact center, and in-person). As the name implies, the purpose of this type of survey is to measure the customer's experience with a specific interaction.
Employee Surveys

Ultimately, employees are at the center of the integrated customer experience model. Employee surveys often measure employee satisfaction and engagement; however, there is far more value to be gleaned from employees. We use them to understand what is happening at the customer-employee interface, leveraging employees as a valuable and inexpensive source of customer experience information. They not only provide intelligence into the customer experience, but also evaluate the level of support within the organization and identify perceptual gaps between management and frontline personnel.
Overall Satisfaction Surveys
Overall satisfaction surveys measure customer satisfaction among the general population of customers, regardless of whether or not they recently conducted a transaction. They give managers valuable insight into overall satisfaction, engagement, image and positioning across the entire customer base, not just active customers.
Digital Delivery Channel Shopping
Be it a website or mobile app, digital mystery shopping allows managers of these channels to test ease of use, navigation and the overall customer experience of these digital channels.
Transactional Mystery Shopping
Mystery shopping is about alignment: it is an excellent tool to align the customer experience to the brand. Best-in-class mystery shopping answers the question: is our customer experience consistent with our brand objectives? Historically, mystery shopping has focused on the in-person channel; however, we are seeing increasing use of mystery shopping with contact center agents.
In the growth phase, we measure the effectiveness of planned experiences on both sides of the customer interface with the following research tools:
| Customer Side | Brand Side |
| --- | --- |
| Awareness Surveys | Cross-Sell Mystery Shopping |
| Wallet Share Surveys | |
Awareness Surveys

Awareness of the brand, its products, and its services is central to planned service interactions. Managers need to know how awareness and attitudes change as a result of these planned experiences.
Cross-Sell Mystery Shopping
In these unique mystery shops, mystery shoppers are seeded into the lead/referral process. The sales behaviors and their effectiveness are then evaluated in an outbound sales interaction.
These shops work very well in planned sales interactions within the contact center environment.
Wallet Share Surveys
These surveys are used to evaluate customer engagement with and loyalty to the institution. Specifically, they determine if customers consider the institution their primary provider of financial services, and identify potential road blocks to wallet share growth.
Finally, planned experiences within the retention phase of the customer lifecycle may be monitored with the following tools:
| Customer Side | Brand Side |
| --- | --- |
| Critical Incident Technique (CIT) | Employee Surveys |
| Lost Customer Surveys | Life Cycle Mystery Shopping |
| Comment Tools | |
Critical Incident Technique (CIT)
CIT is a qualitative research methodology designed to uncover the details surrounding a service encounter that a customer found particularly satisfying or dissatisfying. This technique identifies common critical incidents and their impact on the customer experience and customer engagement, giving managers an informed perspective from which to prepare employees to recognize moments of truth and respond in ways that lead to positive outcomes.
Employee Surveys

Employees observe the relationship with the customer firsthand. They are a valuable source of customer experience information and can provide context about the types of poor experiences customers frequently encounter.
Lost Customer Surveys
Closed account surveys identify sources of run-off or churn to provide insight into improving customer retention.
Life Cycle Mystery Shopping
If an integrated channel approach is the objective, one should measure the customer experience in an integrated manner.
In lifecycle shops, shoppers interact with the bank over a period of time, across multiple touch points (digital, contact center and in-person). This lifecycle approach provides broad and deep observations about sales and service alignment to the brand and performance throughout the customer lifecycle across all channels.
Comment Tools

Comment tools are not new, but with modern Internet-based technology they can be a valuable feedback channel for identifying at-risk customers and mitigating the causes of their dissatisfaction.
Call to Action – Make the Most of the Research
For customer experience surveys, we recommend testing the effectiveness of planned interactions by benchmarking three loyalty attitudes:
- Would Recommend: The likelihood of the customer recommending the bank to a friend, relative or colleague.
- Customer Advocacy: The extent to which the customer agrees with the statement, “My bank cares about me, not just the bottom line.”
- Primary Provider: Does the customer consider the institution their primary provider for financial services?
For mystery shopping, we find that linking observations to a dependent variable, such as purchase intent, identifies which sales and service behaviors drive that variable, informing decisions about training and incentives to reinforce the behaviors that matter most.
As the integrated digital first business model accelerates, planned interactions will continue to grow in importance, and managers of the customer experience should build customer experience monitoring tools to evaluate the efficacy of these planned experiences in terms of driving desired customer attitudes and behaviors.
In the next post, we will take a look at stabilizing experiences, and their implications for customer experience research.
Not All Customer Experience Variation is Equal: Common Cause vs. Special Cause Variation
Variability in customer experience scores is common and normal. Be it a survey of customers, mystery shops, social listening or other customer experience measurement, a certain amount of random variation in the data is normal. As a result, managers need a means of interpreting any variation in their customer experience measurement to evaluate if the customer experience is truly changing, or if the variation they are seeing is simply random.
In a previous post, we proposed the use of control charts as a tool to track customer experience measurements within upper and lower quality control limits, giving managers a meaningful way to determine if any variation in their customer experience measurement reflects an actual change in the experience as opposed to random variation or chance.
Now, managers need to understand the causes of variation, specifically common cause and special cause variation. These are Six Sigma concepts; while most commonly used in industrial production, they can be borrowed and applied to the customer experience.
Common Cause Variation: Much like variation in the roll of dice, common cause variation is the natural variation within any system. It is any variation constantly active within the system, and represents statistical “noise.”
Examples of common cause variation in the customer experience are:
- Poorly defined, poorly designed, or inappropriate policies or procedures
- Poor design or maintenance of computer systems
- Inappropriate hiring practices
- Insufficient training
- Measurement error
Special Cause Variation: Unlike the roll of the dice, special cause variation is not probabilistically predictable within the system; as a result, it does not represent statistical “noise” within the system, but rather the signal.
Examples of special cause variation include:
- High demand/high traffic
- Poor adjustment of equipment
- Just having a bad day
When measuring the customer experience, it is helpful to consider everything within the context of the company-customer interface. Every time a sales or service interaction occurs within this interface, the customer learns something from the experience and adjusts their behavior as a result. Managing the customer experience is the practice of managing what customers learn from the experience, and thus managing their behavior in profitable ways.
A key to managing customer behaviors is understanding common cause and special cause variation and their implications. Common cause variation is variation built into the system: policies, procedures, equipment, hiring practices, and training. Special cause variation is more or less how the human element and the system interact.
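To connect this distinction back to control charts, here is a minimal sketch, assuming hypothetical weekly CX scores: control limits are set from a baseline period, and a new observation outside plus or minus three sigma is treated as a special cause signal rather than common cause noise.

```python
import statistics

# Hypothetical baseline of weekly CX scores used to set control limits.
baseline = [4.2, 4.1, 4.3, 4.2, 4.0, 4.2, 4.1, 4.3, 4.2, 4.1]
mean = statistics.mean(baseline)
sigma = statistics.stdev(baseline)

# Classic Shewhart limits: mean plus/minus three sigma.
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma

def classify(score):
    """Within the limits: common cause noise; outside: special cause signal."""
    return "common cause" if lcl <= score <= ucl else "special cause"

print(classify(4.0))  # within limits
print(classify(3.2))  # well below the lower control limit
```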
See earlier post:
Not All Customer Experience Variation is Equal: Use Control Charts to Identify Actual Changes in the Customer Experience
Mystery Shopping Gap Analysis: Identify Service Attributes with Highest Potential for ROI
Research without a call to action may be interesting but, in the end, not very useful.
This is particularly true of customer experience research. It is incumbent on customer experience researchers to give management research tools that identify clear call-to-action items: items in which investments will yield the highest return on investment (ROI) in terms of meeting management's customer experience objectives. This post introduces a simple, intuitive mystery shopping analysis technique that identifies the service behaviors with the highest ROI potential for achieving these objectives.
Mystery shopping gap analysis is a simple three-step analytical technique.
Step 1: Identify the Key Objective of the Customer Experience
The first step is to identify the key objective of the customer experience. Ask yourself, “How do we want the customer to think, feel or act as a result of the customer experience?”
- Do you want the customer to have increased purchase intent?
- Do you want the customer to have increased return intent?
- Do you want the customer to have increased loyalty?
Let’s assume the key objective is increased purchase intent. At the conclusion of the customer experience you want the customer to have increased purchase intent.
Next draft a research question to serve as a dependent variable measuring the customer’s purchase intent. Dependent variables are those which are influenced or dependent on the behaviors measured in the mystery shop.
Step 2: Determine the Strength of the Relationship to the Key Customer Experience Objective
After fielding the mystery shop study and collecting a statistically significant number of shops, the next step is to determine the strength of the relationship between this key customer experience measure (the dependent variable) and each behavior or service attribute measured (the independent variables). There are a number of ways to determine the strength of the relationship; perhaps the easiest is a simple cross-tabulation of the results. Cross-tabulation groups all the shops with positive purchase intent and all the shops with negative purchase intent, and makes comparisons between the two groups. The greater the difference in the frequency of a given behavior or service attribute between shops with positive purchase intent and shops with negative purchase intent, the stronger the relationship to purchase intent.
The result of this cross-tabulation yields a measure of the importance of each behavior or service attribute. Those with stronger relationships to purchase intent are deemed more important than those with weaker relationships to purchase intent.
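This cross-tabulation can be sketched in a few lines; the shop records and behavior names below are hypothetical, and the importance of each behavior is simply the gap in its observed frequency between positive- and negative-intent shops:

```python
# Each shop records whether a behavior was observed and whether the shopper
# reported positive purchase intent. Records and behavior names are illustrative.
shops = [
    {"greeted": True,  "offered_product": True,  "positive_intent": True},
    {"greeted": True,  "offered_product": True,  "positive_intent": True},
    {"greeted": True,  "offered_product": False, "positive_intent": True},
    {"greeted": True,  "offered_product": False, "positive_intent": False},
    {"greeted": False, "offered_product": False, "positive_intent": False},
    {"greeted": True,  "offered_product": False, "positive_intent": False},
]

def importance(behavior):
    """Gap in behavior frequency between positive- and negative-intent shops."""
    pos = [s for s in shops if s["positive_intent"]]
    neg = [s for s in shops if not s["positive_intent"]]
    freq = lambda group: sum(s[behavior] for s in group) / len(group)
    return freq(pos) - freq(neg)

# The larger gap marks the behavior more strongly related to purchase intent.
print(round(importance("offered_product"), 2))  # prints 0.67
print(round(importance("greeted"), 2))          # prints 0.33
```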
Step 3: Plot the Performance of Each Behavior Relative to Its Relationship to the Key Customer Experience Objective
The third and final step in this analysis is to plot the importance of each behavior relative to its performance on a two-dimensional quadrant chart, where one axis is the importance of the behavior and the other is its performance, or the frequency with which it is observed.
Interpreting the results of this quadrant analysis is fairly simple. Behaviors with above average importance and below average performance are the “high potential” behaviors. These are the behaviors with the highest potential for return on investment (ROI) in terms of driving purchase intent. These are the behaviors to prioritize investments in training, incentives and rewards. These are the behaviors which will yield the highest ROI.
The rest of the behaviors are prioritized as follows:
Those with high importance and high performance are the next priority. They are the behaviors to maintain: they are important, and employees perform them frequently, so invest to maintain their performance.
Those with low importance and low performance are areas to address if resources are available.
Finally, behaviors or service attributes with low importance yet high performance are in no need of investment. They are performed with a high degree of frequency but are not very important, and will not yield an ROI in terms of driving purchase intent.
Research without a call to action may be interesting but, in the end, not very useful.
This simple, intuitive gap analysis technique will provide a clear call to action in terms of identifying service behaviors and attributes which will yield the most ROI in terms of achieving your key objective of the customer experience.
Key Driver Analysis: Drive Your Core Customer Experience Objectives
Mystery shopping not in pursuit of an overall customer experience objective may be interesting, and may even succeed in motivating certain service behaviors, but it will ultimately fail to maximize return on investment.
Consider the following proposition:
“Every time a customer interacts with a brand, the customer learns something about the brand, and based on what they learn, adjusts their behavior in either profitable or unprofitable ways.”
These behavioral adjustments could be profitable: positive word of mouth, fewer complaints, use of less expensive channels, increased wallet share, loyalty, or purchase intent. Or these adjustments could be unprofitable: negative word of mouth, more complaints, or decreased wallet share, purchase intent, or loyalty.
There is power in this proposition; understanding it is the key to managing the customer experience profitably. Unlocking this power gives managers a clear objective for the customer experience: what you want the customer to learn from it and how you want them to react. Ultimately, it becomes a guidepost for all aspects of customer experience management, including customer experience measurement.
In designing customer experience measurement tools, ask yourself:
- What is the overall objective of the customer experience?
- How do you want the customer to feel as a result of the experience?
- How do you want the customer to act as a result of the experience?
- Do you want the customer to have increased purchase intent?
- Do you want the customer to have increased return intent?
- Do you want the customer to have increased loyalty?
The answers to these questions become the guideposts for designing a customer experience that will achieve your objectives, and the basis for evaluating the customer experience against them. In research terms, they become the dependent variable(s) of your customer experience research: the variables influenced by, or dependent on, the specific attributes of the customer experience.
For example, let’s assume your objective of the customer experience is increased return intent. As part of a mystery shopping program, ask a question designed to capture return intent – a question like, “Had this been an actual visit, how did the experience during this shop influence your intent to return for another transaction?” This is the dependent variable.
The next step is to determine the relationship between every service behavior or attribute and the dependent variable (return intent). The strength of this relationship is a measure of the importance of each behavior or attribute in terms of driving return intent. It provides a basis from which to make informed decisions as to which behaviors or attributes deserve more investment in terms of training, incentives, and rewards.
This is what Kinesis calls Key Driver Analysis: an analysis technique designed to identify the service behaviors and attributes that are key drivers of your key customer experience objectives. In the end, it provides an informed basis from which to make decisions about investments in the customer experience.
Use the Right Research Tool: Avoid NPS with Mystery Shopping
Net Promoter Score (NPS) burst onto the customer experience scene 15 years ago in a Harvard Business Review article with the confident (some might say overconfident) title “The One Number You Need to Grow.” NPS was introduced as the one survey question you need to ask in a customer survey.
Unfortunately, I’ve seen many customer experience managers include NPS in their mystery shopping programs, which is frankly a poor research practice.
The NPS methodology is relatively simple: ask customers a “would recommend” question (“How likely are you to recommend us to a friend, relative or colleague?”) on an 11-point scale from 0 to 10.
Next, segment respondents according to their responses to this would recommend question. Respondents who answered “9” or “10” are labeled “promoters”, those who answered “7” or “8” are identified as “passive referrers”, and finally, those who answered 0-6 are labeled “detractors”. Once this segmentation is complete, the Net Promoter Score (NPS) is calculated by subtracting the proportion of “detractors” from the proportion of “promoters.” This yields the net promoters, the proportion of promoters after the detractors have been subtracted out.
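The segmentation and calculation described above amount to just a few lines of code; a minimal sketch with illustrative ratings:

```python
def nps(ratings):
    """Net Promoter Score from 0-10 'would recommend' ratings."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    # Passives (7-8) count toward the total but neither add nor subtract.
    return 100 * (promoters - detractors) / len(ratings)

# 4 promoters, 3 passives, 3 detractors: NPS = 40% - 30% = 10
print(nps([10, 9, 9, 10, 8, 7, 8, 6, 3, 5]))  # prints 10.0
```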
The theory behind NPS is simple. It is used as a proxy for customer loyalty. Loyalty is a behavior; surveys best measure attitudes, not behaviors. Therefore, customer experience researchers need a proxy measurement for loyalty. NPS is considered an excellent proxy under the theory that those willing to put their reputation at risk by referring a brand to others are more likely to be loyal to the brand. In contrast, those who are not willing to put their reputation at risk are less likely to be loyal.
Fads in customer experience measurement come and go, but the NPS fad has been particularly stubborn, mostly because the theory behind it is intuitive, it solves the problem of measuring loyalty within a survey, and it is simple. I personally think it was oversold as the “one number you need to grow.” That framing doesn't do justice to the complexities of managing the customer experience, nor does a single NPS number give any direction on how to improve it. An NPS score alone is just not very actionable.
While NPS is an excellent loyalty proxy and has a lot of utility in a customer experience survey, it is not an appropriate tool in a mystery shopping context. Mystery shopping is a snapshot of one experience in time, in which a mystery shopper interacts with a representative of the brand. NPS measures one's likelihood to refer the brand to others, and that likelihood is almost never the result of a single snapshot in time. Rather, it is a holistic measure of the health of the entire relationship with the brand, and as such it does not work well in a mystery shop, where the measurement is of a single interaction. NPS reflects things unrelated to the specific experience measured in the shop: past experiences, overall branding, alignment of the brand to customer expectations, and so on.
Now, I understand the intent of inserting NPS in the mystery shop. It is to identify a dependent variable from which to evaluate the efficacy of the experience. NPS is just the wrong solution for this objective.
There is a better way.
Instead of blindly using NPS in the wrong research context, focus on your business objectives. Ask yourself:
- What are our business objectives with respect to the experience mystery shopped?
- What do we want to accomplish?
- How do we want the customer to feel as a result of the experience?
- What do we want the customer to do as a result of the experience shopped?
Once you have determined what business objectives you want to achieve as a result of the customer experience, design a specific question to measure the influence of the customer experience on this business objective.
For example, assume your objective of the customer experience is purchase intent. You want the customer to be more motivated to purchase after the experience than before. Ask a purchase intent question, designed to capture the shopper’s change in purchase intent as a result of the shop.
Now you have a true dependent variable against which to evaluate the behaviors measured in the mystery shop. This is what we call Key Driver Analysis – identifying the behaviors which are key drivers of the desired business objective. In the example above, we want to identify the key drivers of purchase intent.
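As a sketch of what key driver analysis can look like in practice, the following example correlates each mystery-shopped behavior with the shopper’s reported change in purchase intent. All behavior names and data here are hypothetical; real programs typically use larger samples and more robust statistics (for example, regression):

```python
# Minimal key driver analysis sketch (hypothetical data): correlate each
# observed behavior with the change in purchase intent across shops.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Each row is one shop: 1 = behavior observed, 0 = not observed.
behaviors = {
    "greeted_promptly":   [1, 1, 0, 1, 0, 1, 1, 0],
    "explained_benefits": [1, 0, 0, 1, 0, 1, 1, 0],
    "asked_for_business": [0, 1, 0, 1, 0, 0, 1, 0],
}
# Shopper-reported change in purchase intent (post minus pre, -2..+2).
intent_change = [2, 1, -1, 2, 0, 2, 2, -1]

drivers = {name: pearson(obs, intent_change) for name, obs in behaviors.items()}
for name, r in sorted(drivers.items(), key=lambda kv: -kv[1]):
    print(f"{name}: r = {r:+.2f}")
```

Behaviors with the strongest positive correlation to the change in purchase intent are the candidate key drivers, and the natural focus for coaching.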
I like to think of different question types and analytical techniques as tools in a tool box. Each is important for its specific purpose, but few are universal tools which work in every context. NPS may be a useful tool for customer experience surveys. It is not, however, an appropriate tool for mystery shopping.
Mystery Shop Sample Size and Customer Experience Variation
Mystery shop programs measure human interactions; interactions with other humans and increasingly human interactions with automated machines. Given that humans are on one or both sides of the equation, it is not surprising that variation in the customer experience exists.
When designing a mystery shop program, a central decision is the number of shops to deploy. This decision depends on a number of factors, including desired reliability, number of customer interactions, and the budgetary resources available for the program. However, one additional and very important consideration, which frankly doesn’t get much attention, is the amount of variation expected in the customer experience to be measured.
The level of variation in the customer experience is an important consideration. Consistent customer experience processes require fewer mystery shops than those with a high degree of variation. To illustrate this, consider the following:
Assume a customer experience process is 100% consistent, with zero variation from experience to experience. Such a process would require only one shop to accurately describe the experience as a whole. Now consider a customer experience process with a very high level of variation. Such a process would require far more than one shop; in fact, assuming maximum variance, roughly 400 shops would be required to achieve a margin of error of plus or minus five percent at a 95% confidence level.
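The arithmetic behind that figure can be sketched with the standard sample-size formula for a proportion. The 95% confidence level (z ≈ 1.96) and worst-case variance (p = 0.5) are the usual assumptions behind the commonly quoted “400 shops for ±5%”:

```python
import math

def shops_required(margin_of_error, p=0.5, z=1.96):
    """Shops needed for a given margin of error at confidence level z.

    p = 0.5 is the worst case (maximum variance) for a proportion:
    n = z^2 * p * (1 - p) / E^2
    """
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

print(shops_required(0.05))  # 385, commonly rounded up to 400 in practice
print(shops_required(0.10))  # a ±10% margin needs only 97 shops
```

Note how quickly the requirement falls as the tolerable margin of error widens; this is why the acceptable margin of error is such a large budget lever.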
Obviously, the variation of most customer experience processes resides somewhere between perfect consistency and extreme variation. So how do managers determine the level of variation in their process? The answer will probably be more qualitative than quantitative. Ask yourself:
- Do you have a set of standardized customer experience expectations?
- Are these expectations clearly communicated to employees?
- Other than mystery shopping, do you have any processes in place to monitor the customer experience? If so, are the results of these monitoring tools consistent from month-to-month or quarter-to-quarter?
To make it easy, I always ask new clients for a qualitative estimate of the level of variation in their customer experience: high, medium, or low. The answer to this question is then considered along with the level of statistical reliability desired and the budgetary resources available for the program in determining the appropriate number of shops.
So ask yourself: how much variation can you expect in your customer experience?
Best Practices in Mystery Shop Program Launch: Post-Shop Communication
In a previous post we introduced the importance of proper program launch.
Self-help resources typically take the form of a webpage housed on the mystery shop provider’s website or on an internal resource page. These resources provide a tutorial in the form of either a PowerPoint or video, reinforcing to stakeholders many of the subjects already discussed: definition of the brand, behavioral service expectations, and a copy of the questionnaire.
These self-help resources are also an excellent opportunity to introduce the mystery shop reports and how to read them (both on an individual shop basis and on an analytical level), and introduce concepts designed to identify the relative importance of specific sales and service behaviors which drive desired outcomes like purchase intent and customer loyalty.
Shop Results E-Mail
Upon distribution of the first shop, it is a best practice in launching a mystery shop program to send an e-mail to the supervisor of the employee shopped advising them of a completed shop, and containing either a PDF shop report or access to the shop via an online reporting tool.
The content of this e-mail should depend on the performance of the individuals shopped. If a shop is perfect, the e-mail should congratulate the employees on a perfect shop. If a shop is below expectations, it should inform the employees, in as positive a way as possible, that their performance was below expectations and set the stage for coaching. It should remind employees that it is not the performance of this first shop that counts, but subsequent improvement as a result of the shops.
The timing of shop e-mails varies: some clients prefer each shop to be sent as soon as it clears the provider’s quality control process, while others prefer shops to be held and released en masse at the end of a given shopping period (typically monthly). If the e-mail is sent at the end of a period, this is an excellent opportunity to identify top performers who received perfect shops, both recognizing superior performance and motivating other employees to seek similar achievement.
Finally, this e-mail should reinforce superior shop performance by reminding front-line employees and managers of the rewards earned by successful shop performance.
This e-mail should be modified for all subsequent waves of shopping and be used as a cover letter for distribution of all future shops.
Additional e-mails may be sent to notify employees and their managers of specific events, such as: perfect shops, failed shops, shops within a specific score range, or shops which identify a specific behavior of an employee like a cross-sell effort.
Post Shop Call/ Presentation
Similar to the kickoff presentation, after the first wave of shopping it is a best practice to conduct a post-shop presentation, again by conference call or WebEx. The purpose of this presentation is to present the reports available, discuss how to read them, and – most importantly – take action on the results through coaching and through interpreting the call-to-action elements built into the program, which are designed to identify which behaviors are most important in driving purchase intent or loyalty.
Best Practices in Mystery Shop Program Launch: Pre-Launch Communication
In a previous post we introduced the importance of proper program launch.
There should be no surprises in mystery shopping. A key to keeping all stakeholders informed of the mystery shop process is pre-shop communication.
The first communication tool is the kickoff letter. This letter is most often in the form of an e-mail. Sent prior to shopping, its purpose is to introduce employees to the program, explain its purpose in a positive way, make sure employees are aware of what is expected of them, and link shopping to their best interests, by reinforcing it is designed to make them more successful.
The kickoff e-mail should:
- Define the brand and emphasize that frontline employees are the personification of the brand – its physical embodiment.
- Explain that certain behaviors are expected from them in their role as the physical embodiment of the brand.
- List the specific sales and service behaviors that shoppers are asked to observe. Stress that management wants every representative to score well. Management has no interest in setting employees up for failure. If they perform these behaviors, they will receive a perfect shop score.
- Detail the incentive and reward structures in place as a result of the mystery shop program.
A presentation, conference call, or WebEx is an excellent tool to kick off a mystery shop program. All stakeholders in the process should understand their role and what is expected of them.
As with the kickoff letter or e-mail, the presentation should define the brand, stress that employees are the physical embodiment of the brand, and identify the specific sales and service behaviors expected from employees.
It should identify the internal administrator of the program, communicate the dispute process, discuss incentives and rewards earned through positive mystery shops, as well as introduce the concept of coaching as a result of the shop – making sure that managers and customer-facing personnel understand their role in the coaching process.
Finally, this presentation should introduce employees to self-help resources available for taking positive action as a result of the shop.
In a subsequent post we will discuss the importance of post-launch communication.
Integrated Digital First CX Model: Implications for CX Researchers
This post continues our five-part series on building an integrated digital-first service model.
An integrated delivery model requires an integration of research methodologies to measure the customer experience. Researchers should think in terms of exposure and moments of truth as they monitor each waypoint in the customer experience.
Understanding Exposure & Moments of Truth Risks
Digital waypoints with high exposure risk should be tested thoroughly with usability, focus groups, ethnography and other qualitative research to ensure features meet customer needs and are programmed correctly. Once programmed and tested, they need to be monitored with ongoing audits.
Waypoints with higher moment of truth risk are best monitored with post-transaction surveys, mystery shopping and the occasional focus group.
Integrated Channel CX Measurement
When measuring the customer experience across multiple channels in an integrated manner, it is important to gather both consistent measures across all channels and measures specific to each channel. Each channel has its own specific needs; however, consistent measures across all channels provide context and a point of comparison.
Here is what an integrated omni-channel research plan may look like:
Kinēsis recommends measuring each channel against a set of consistent brand attribute measurements. Brands have personality, and it is incumbent on CX researchers to evaluate each channel against the overall desired brand personality objectives. A channel disconnected from the institution’s brand objectives can do a lot of damage to the institution’s perceived image.
Kinēsis uses brand adjectives and agreement statements to measure customer impressions of the brand. Ask yourself what five or six adjectives you would like customers to use to describe your institution. Then simply take these adjectives and ask customers whether they describe the customer experience.
Next, ask yourself what statements you would like customers to use to describe their perception of the brand as a result of any interaction. Statements such as:
• We are easy to do business with.
• We are knowledgeable.
• We are interested in members as people, and concerned for their individual needs.
• We are committed to the community.
These statements can be incorporated into the research by asking customers the extent to which they agree with each of them.
Again, brands have personality. Brand adjectives and agreement statements are an excellent way to tie disparate research across multiple channels together with consistent measures of perceptions of the brand personality as a result of the experience.
Channel Specific Dimensions
Different channels have different service attributes; therefore, it is important to provide each channel manager with specific research relevant to their channel. Digital channels, for example, may require measures around: appeal, identity, navigation, content, value and trust. Non-digital managers may require measures such as: reliability, responsiveness, competence, empathy and the physical environment.
Efficacy of the Experience
Regardless of channel, all research should contain consistent measures of the efficacy of the experience. The efficacy of the experience is the institution’s ultimate objective of every customer experience. Ask yourself, how do we want the customer to feel or think as a result of the interaction?
Some examples of efficacy measurements include:
• Purchase Intent/ Return Intent: Kinēsis has a long history of using purchase intent as a dependent variable.
• Likelihood of Referral: Likelihood of referral measures (like Net Promoter Score) are generally accepted as a reliable proxy measure for customer loyalty.
• Member/ Customer Advocacy: The extent to which the financial institution is an advocate for the customer is best measured with an agreement scale applied to the statement, “This bank cares about me, not just the bottom line.” Agreement with this statement is also an excellent proxy measurement for loyalty.
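Since likelihood-of-referral measures come up in both the survey discussion above and the earlier NPS critique, here is a sketch of the standard NPS arithmetic (a survey-context tool, per that earlier discussion; the responses shown are hypothetical):

```python
# Standard NPS calculation: on a 0-10 likelihood-to-refer scale,
# promoters score 9-10, passives 7-8, detractors 0-6;
# NPS = %promoters - %detractors, yielding a score from -100 to +100.

def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

responses = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]  # hypothetical survey responses
print(nps(responses))  # 5 promoters, 2 detractors out of 10 -> NPS = 30
```

Note that the passives (7s and 8s) drop out of the numerator entirely; this is one reason a single NPS figure, without the underlying distribution, gives so little direction on its own.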