Tag Archive | mystery shop best practices

Mystery Shopping Gap Analysis: Identify Service Attributes with Highest Potential for ROI

Research without a call to action may be interesting but, in the end, not very useful.

This is particularly true with customer experience research.  It is incumbent on customer experience researchers to give management research tools that identify clear call-to-action items – items in which investments will yield the highest return on investment (ROI) in terms of meeting management's customer experience objectives.  This post introduces a simple, intuitive mystery shopping analysis technique that identifies the service behaviors with the highest potential ROI in terms of achieving these objectives.

Mystery shopping gap analysis is a simple three-step analytical technique.

Step 1: Identify the Key Objective of the Customer Experience

The first step is to identify the key objective of the customer experience.  Ask yourself, “How do we want the customer to think, feel or act as a result of the customer experience?”

For example:

  • Do you want the customer to have increased purchase intent?
  • Do you want the customer to have increased return intent?
  • Do you want the customer to have increased loyalty?

Let's assume the key objective is increased purchase intent: at the conclusion of the customer experience, you want the customer to be more inclined to purchase than before.

Next, draft a research question to serve as a dependent variable measuring the customer's purchase intent.  Dependent variables are those influenced by, or dependent on, the behaviors measured in the mystery shop.

Step 2: Determine the Strength of the Relationship Between Each Behavior and the Key Customer Experience Objective

After fielding the mystery shop study and collecting a sufficient sample of shops, the next step is to determine the strength of the relationship between this key customer experience measure (the dependent variable) and each behavior or service attribute measured (the independent variables).  There are a number of ways to determine the strength of this relationship; perhaps the easiest is a simple cross-tabulation of the results.  Cross-tabulation groups all the shops with positive purchase intent and all the shops with negative purchase intent and compares the two groups.  The greater the difference in the frequency of a given behavior or service attribute between shops with positive purchase intent and shops with negative purchase intent, the stronger that behavior's relationship to purchase intent.

The result of this cross-tabulation yields a measure of the importance of each behavior or service attribute.  Those with stronger relationships to purchase intent are deemed more important than those with weaker relationships to purchase intent.
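
To make the mechanics concrete, here is a minimal sketch of this cross-tabulation in Python using pandas. Everything in it is hypothetical: the column names (purchase_intent, greeted, offered_product, thanked) stand in for your own questionnaire fields, behaviors are coded 1 when observed and 0 when not, and the data are invented purely for illustration.

```python
# A minimal sketch of the cross-tabulation step, using hypothetical fields.
import pandas as pd

# Each row is one completed shop; behaviors are coded 1 = observed, 0 = not observed.
shops = pd.DataFrame({
    "purchase_intent": ["positive", "positive", "negative", "positive", "negative"],
    "greeted":         [1, 1, 0, 1, 0],
    "offered_product": [1, 0, 0, 1, 1],
    "thanked":         [1, 1, 1, 1, 1],
})
behaviors = ["greeted", "offered_product", "thanked"]

# Frequency of each behavior within the positive- and negative-intent groups.
freq = shops.groupby("purchase_intent")[behaviors].mean()

# The gap between the two groups serves as a simple importance measure:
# the larger the gap, the stronger the behavior's relationship to purchase intent.
importance = (freq.loc["positive"] - freq.loc["negative"]).sort_values(ascending=False)
print(importance)
```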

Step 3: Plot the Performance of Each Behavior Relative to Its Relationship to the Key Customer Experience Objective

The third and final step in this analysis is to plot the importance of each behavior against its performance on a two-dimensional quadrant chart, where one axis is the importance of the behavior and the other is its performance – the frequency with which it is observed.

Interpretation

Interpreting the results of this quadrant analysis is fairly simple.  Behaviors with above-average importance and below-average performance are the "high potential" behaviors.  These are the behaviors with the highest potential for return on investment (ROI) in terms of driving purchase intent, and the behaviors to prioritize for investments in training, incentives, and rewards.

The rest of the behaviors are prioritized as follows:

Those with high importance and high performance are the next priority.  They are the behaviors to maintain: they are important and employees perform them frequently, so invest to maintain their performance.

Those with low importance and low performance are areas to address if resources are available.

Finally, behaviors or service attributes with low importance yet high performance are in no need of investment.  They are performed frequently, but they are not very important and will not yield an ROI in terms of driving purchase intent.
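
Continuing the hypothetical sketch above, the quadrant assignment itself reduces to comparing each behavior's importance and performance to the respective averages:

```python
# A sketch of the quadrant assignment, reusing the hypothetical "shops",
# "behaviors", and "importance" objects from the cross-tabulation sketch above.
# Performance here is simply the frequency with which each behavior was observed.
performance = shops[behaviors].mean()

for behavior in behaviors:
    high_importance = importance[behavior] >= importance.mean()
    high_performance = performance[behavior] >= performance.mean()
    if high_importance and not high_performance:
        quadrant = "high potential: prioritize training, incentives and rewards"
    elif high_importance and high_performance:
        quadrant = "maintain: important and performed frequently"
    elif not high_importance and not high_performance:
        quadrant = "address if resources allow"
    else:
        quadrant = "no investment needed"
    print(f"{behavior}: {quadrant}")
```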

Research without a call to action may be interesting but, in the end, not very useful.

This simple, intuitive gap analysis technique provides a clear call to action by identifying the service behaviors and attributes that will yield the most ROI in terms of achieving your key customer experience objective.


Key Driver Analysis: Drive Your Core Customer Experience Objectives

Mystery shopping that is not in pursuit of an overall customer experience objective may be interesting, and it may even succeed in motivating certain service behaviors, but it will ultimately fail to maximize return on investment.

Consider the following proposition:

"Every time a customer interacts with a brand, the customer learns something about the brand, and based on what they learn, adjusts their behavior in either profitable or unprofitable ways."

These behavioral adjustments can be profitable: positive word of mouth, fewer complaints, use of less expensive channels, increased wallet share, loyalty, or purchase intent.  Or these adjustments can be unprofitable: negative word of mouth, more complaints, decreased wallet share, purchase intent, or loyalty.

There is power in this proposition, and understanding it is the key to managing the customer experience in a profitable way.  Unlocking this power gives managers a clear objective for the customer experience in terms of what you want the customer to learn from it and how you want them to react to it.  Ultimately, it becomes a guidepost for all aspects of customer experience management – including customer experience measurement.

In designing customer experience measurement tools, ask yourself:

  • What is the overall objective of the customer experience?
  • How do you want the customer to feel as a result of the experience?
  • How do you want the customer to act as a result of the experience?

For example:

  • Do you want the customer to have increased purchase intent?
  • Do you want the customer to have increased return intent?
  • Do you want the customer to have increased loyalty?

The answers to the above questions become the guideposts for designing a customer experience that will achieve your objectives.

They also serve as the basis for evaluating the customer experience against your objectives.  In research terms, the answers become the dependent variable(s) of your customer experience research – the variables influenced by, or dependent on, the specific attributes of the customer experience.

For example, let's assume the objective of the customer experience is increased return intent.  As part of a mystery shopping program, ask a question designed to capture return intent – a question like, "Had this been an actual visit, how did the experience during this shop influence your intent to return for another transaction?"  This is the dependent variable.

The next step is to determine the relationship between every service behavior or attribute and the dependent variable (return intent).  The strength of this relationship is a measure of the importance of each behavior or attribute in terms of driving return intent.  It provides a basis from which to make informed decisions as to which behaviors or attributes deserve more investment in terms of training, incentives, and rewards.

This is what Kinesis calls Key Driver Analysis, an analysis technique designed to identify the service behaviors and attributes that are key drivers of your key customer experience objectives.  In the end, it provides an informed basis from which to make decisions about investments in the customer experience.

 

Use the Right Research Tool: Avoid NPS with Mystery Shopping

Net Promoter Score (NPS) burst onto the customer experience scene 15 years ago in a Harvard Business Review article with the confident (some might say overconfident) title "The One Number You Need to Grow."  NPS was introduced as the one survey question you need to ask in a customer survey.

Unfortunately, I’ve seen many customer experience managers include NPS in their mystery shopping programs, which is frankly a poor research practice.

The NPS methodology is relatively simple.  Ask customers a "would recommend" question – "How likely are you to recommend us to a friend, relative, or colleague?" – on an 11-point scale from 0 to 10.

[Figure: Net Promoter Score (NPS) calculation]

Next, segment respondents according to their responses to this would-recommend question.  Respondents who answered "9" or "10" are labeled "promoters," those who answered "7" or "8" are identified as "passive referrers," and those who answered 0-6 are labeled "detractors."  Once this segmentation is complete, the Net Promoter Score (NPS) is calculated by subtracting the proportion of "detractors" from the proportion of "promoters."  This yields the net promoters: the proportion of promoters remaining after the detractors have been subtracted out.
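
For reference, the arithmetic is simple enough to sketch in a few lines of Python; the responses below are invented purely to illustrate the calculation.

```python
# A sketch of the NPS calculation: % promoters minus % detractors.
responses = [10, 9, 9, 8, 7, 6, 3, 10, 2, 9]  # hypothetical 0-10 answers

promoters = sum(1 for r in responses if r >= 9)   # answered 9 or 10
detractors = sum(1 for r in responses if r <= 6)  # answered 0 through 6
total = len(responses)

# Typically reported on a -100 to +100 scale.
nps = (promoters / total - detractors / total) * 100
print(f"NPS = {nps:+.0f}")  # NPS = +20 for this sample
```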

The theory behind NPS is simple: it is used as a proxy for customer loyalty.  Loyalty is a behavior, and surveys best measure attitudes, not behaviors; therefore customer experience researchers need a proxy measurement for loyalty.  NPS is considered an excellent proxy for loyalty under the theory that if someone is willing to put their reputation at risk by referring a brand to others, they are more likely to be loyal to the brand.  In contrast, those who are not willing to put their reputation at risk are less likely to be loyal.

Fads in customer experience measurement come and go, but the NPS fad has been particularly stubborn – mostly because the theory behind it is intuitive, it solves the problem of measuring loyalty within a survey, and it is simple.  I personally think it was oversold as the "one number you need to grow."  That framing doesn't do justice to the complexities of managing the customer experience, nor does a single NPS number give any direction in terms of how to improve it.  An NPS score alone is just not very actionable.

While NPS is an excellent loyalty proxy and has a lot of utility in a customer experience survey, it is not an appropriate tool to use in a mystery shopping context.  Mystery shopping is a snapshot of one experience in time, in which a mystery shopper interacts with a representative of the brand.  NPS is a measure of one's likelihood to refer the brand to others, and that likelihood is almost never the result of a single snapshot in time.  Rather, it is a holistic measure of the health of the entire relationship with the brand, and as such it does not work well in a mystery shop context where the measurement is of a single interaction.  In a mystery shop, NPS ends up measuring things unrelated to the specific experience shopped: past experiences, overall branding, alignment of the brand to customer expectations, and so on.

Now, I understand the intent of inserting NPS in the mystery shop.  It is to identify a dependent variable from which to evaluate the efficacy of the experience.  NPS is just the wrong solution for this objective.

There is a better way.

Instead of blindly using NPS in the wrong research context, focus on your business objectives.  Ask yourself:

  • What are our business objectives with respect to the experience mystery shopped?
  • What do we want to accomplish?
  • How do we want the customer to feel as a result of the experience?
  • What do we want the customer to do as a result of the experience shopped?

Once you have determined what business objectives you want to achieve as a result of the customer experience, design a specific question to measure the influence of the customer experience on this business objective.

For example, assume the objective of the customer experience is increased purchase intent: you want the customer to be more motivated to purchase after the experience than before.  Ask a purchase intent question designed to capture the shopper's change in purchase intent as a result of the shop.

Now, you have a true dependent variable from which to evaluate the behaviors measured in the mystery shop.  This is what we call Key Driver Analysis – identifying the behaviors which are key drivers of the desired business objective.  In the example above we want to identify key drivers of purchase intent.

I like to think of different question types and analytical techniques as tools in a tool box.  Each is important for its specific purpose, but few are universal tools which work in every context.  NPS may be a useful tool for customer experience surveys.  It is not, however, an appropriate tool for mystery shopping.
 


 

Mystery Shop Sample Size and Customer Experience Variation

Mystery shop programs measure human interactions: interactions with other humans and, increasingly, interactions with automated machines.  Given that humans are on one or both sides of the equation, it is not surprising that variation exists in the customer experience.

When designing a mystery shop program, a central decision is the number of shops to deploy.  This decision depends on a number of issues, including the desired reliability, the number of customer interactions, and the budgetary resources available for the program.  One additional and very important consideration, which frankly doesn't get much attention, is the amount of variation expected in the customer experience being measured.

The level of variation in the customer experience is an important consideration.  Consistent customer experience processes require fewer mystery shops than those with a high degree of variation.  To illustrate this, consider the following:

Assume a customer experience process is 100% consistent, with zero variation from experience to experience.  Such a process would require only one shop to accurately describe the experience as a whole.  Now consider a customer experience process with maximum variation in the experience.  Such a process would require far more than one shop: assuming maximum variation, roughly 400 shops would be required to achieve a margin of error of plus or minus five percent at a 95% confidence level.
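
The arithmetic behind that figure is the standard sample size formula for a proportion, n = z²·p(1−p)/e², assuming maximum variation (p = 0.5) and a large population. A quick sketch:

```python
# Shops required for a given margin of error, assuming maximum variation (p = 0.5).
# The rule-of-thumb z = 2 yields the 400 shops cited above; the more precise
# z = 1.96 for 95% confidence yields roughly 385.
import math

def shops_required(margin_of_error: float, z: float = 2.0, p: float = 0.5) -> int:
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

print(shops_required(0.05))        # 400 (z = 2)
print(shops_required(0.05, 1.96))  # 385 (z = 1.96)
```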

Obviously, the variation of most customer experience processes resides somewhere between perfect consistency and maximum variation.  So how do managers determine the level of variation in their process?  The answer will probably be more qualitative than quantitative.  Ask yourself:

  • Do you have a set of standardized customer experience expectations?
  • Are these expectations clearly communicated to employees?
  • Other than mystery shopping, do you have any processes in place to monitor the customer experience? If so, are the results of these monitoring tools consistent from month-to-month or quarter-to-quarter?

To make it easy, I always ask new clients to give a qualitative estimate of the level of variation in their customer experience: high, medium, or low.  This answer is then considered, along with the level of statistical reliability desired and the budgetary resources available for the program, in determining the appropriate number of shops.

So, ask yourself: how much variation can we expect in our customer experience?

 


 

Best Practices in Mystery Shop Scoring


Most mystery shopping programs score shops according to some scoring methodology to distill the mystery shop results down into a single number.  Scoring methodologies vary, but the most common methodology is to assign points earned for each behavior measured and divide the total points earned by the total points possible, yielding a percentage of points earned relative to points possible.

Drive Desired Behaviors

Some behaviors are more important than others.  As a result, best in class mystery shop programs weight behaviors by assigning more points possible to those deemed more important.  Best practices in mystery shop weighting assign weights either according to management standards (behaviors deemed more important, such as certain sales or customer education behaviors) or according to the strength of their relationship to a desired outcome such as purchase intent or loyalty.  Service behaviors with stronger relationships to the desired outcome receive greater weight.

One tool to identify behavioral relationships to desired outcomes is Key Driver Analysis.  See the attached post for a discussion of Key Driver Analysis.
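
As a sketch of how weighted scoring might work in practice (the behaviors, weights, and skip-logic handling below are illustrative assumptions, not a prescribed scheme):

```python
# A sketch of weighted shop scoring: each behavior carries its points possible
# (its weight); the shop score is points earned divided by points possible.
weights = {
    "greeted": 5,
    "offered_product": 15,  # deemed more important, so weighted more heavily
    "thanked": 5,
}

def shop_score(observed: dict) -> float:
    """observed maps behavior -> True/False, or None if skipped via skip logic."""
    applicable = {b: w for b, w in weights.items() if observed.get(b) is not None}
    points_possible = sum(applicable.values())
    points_earned = sum(w for b, w in applicable.items() if observed[b])
    return points_earned / points_possible

print(shop_score({"greeted": True, "offered_product": False, "thanked": True}))  # 0.4
```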

Don’t Average Averages

It is a best practice in mystery shopping to calculate the score for each business unit (employee, store, region, division, corporate) independently, rather than averaging business unit scores together (such as calculating a region's score by averaging the individual store or even shop scores for the region).  Averaging averages only yields a mathematically correct score if all shops have exactly the same points possible and all business units have exactly the same number of shops.  If the shop has any skip logic, where some questions are answered only if specific conditions exist, different shops will have different points possible, and it is a mistake to average them together: doing so gives shops with skipped questions disproportionate weight.  Rather, points earned should be divided by points possible for each business unit independently.  Just remember – don't average averages!
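
A small numeric sketch shows why.  Suppose a region has two shops with different points possible because of skip logic; the correct regional score pools points earned and points possible across the shops rather than averaging the per-shop percentages:

```python
# Shop A earned 40 of 50 possible points; Shop B (some questions skipped) earned 10 of 20.
region_shops = [
    {"points_earned": 40, "points_possible": 50},
    {"points_earned": 10, "points_possible": 20},
]

# Wrong: averaging the per-shop percentages gives the shop with fewer
# points possible disproportionate weight.
wrong = sum(s["points_earned"] / s["points_possible"] for s in region_shops) / len(region_shops)

# Right: pool points earned and points possible for the region, then divide.
right = (sum(s["points_earned"] for s in region_shops)
         / sum(s["points_possible"] for s in region_shops))

print(f"average of averages: {wrong:.1%}")  # 65.0%
print(f"pooled region score: {right:.1%}")  # 71.4%
```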

Work Toward a Distribution of Shops

When all is said and done, a best in class mystery shop scoring methodology will produce a distribution of shop scores with meaningful spread, particularly on the low end of the distribution.

[Figure: distribution of mystery shop scores]

Mystery shop programs with tight distributions around the average shop score offer little opportunity to identify areas for improvement: all the shops end up looking very similar, making it difficult to identify problem areas and improve employee behaviors.  Distributions with scores skewed toward the low end make it much easier to identify poor shops and offer opportunities for improvement through employee coaching.  If questionnaire design and scoring produce tightly clustered scores, consider a redesign.

Most mystery shopping programs score shops according to some scoring methodology.  In designing a mystery shop scoring methodology, best in class programs focus on driving desired behaviors, do not average averages, and work toward a distribution of shop scores.


Mystery Shop Key Driver Analysis

Best in class mystery shop programs provide managers a means of applying coaching, training, incentives, and other motivational tools directly to the sales and service behaviors that matter most in terms of driving the desired customer experience outcome.  One tool to identify which sales and service behaviors are most important is Key Driver Analysis.

Key Driver Analysis determines the relationship between specific behaviors and a desired outcome.  For most brands and industries, the desired outcome is purchase intent or return intent (customer loyalty).  This analytical tool helps managers identify and reinforce the sales and service behaviors that drive sales or loyalty – behaviors that matter.

As with all research, it is a best practice to anticipate the analysis when designing a mystery shop program.  In anticipating the analytical needs of Key Driver Analysis, identify the specific outcome you want from the customer as a result of the experience:

  • Do you want the customer to purchase something?
  • Do you want them to return for another purchase?

The answers to these questions anticipate the analysis and build in the mechanisms Key Driver Analysis needs to identify which behaviors are most important in driving this desired outcome – which behaviors matter most.

Next, ask shoppers how, had they been an actual customer, the experience would have influenced their return intent.  Group shops by positive and negative return intent to identify how mystery shops with positive return intent differ from those with negative return intent.  This yields a ranking of the importance of each behavior by the strength of its relationship to return intent.

Additionally, pair the return intent rating with a follow-up question asking why the shopper rated their return intent as they did.  The responses to this question should be classified into similar themes and grouped by the return intent rating described above.  This analysis produces a qualitative determination of which sales and service practices drive return intent.

Finally, Key Driver Analysis provides a means to identify which behaviors have the highest potential for return on investment in terms of driving return intent.  This is achieved by comparing the importance of each behavior (as defined above) with its performance (the frequency with which it is observed).  Mapping this comparison on a quadrant chart provides a means of identifying behaviors with relatively high importance and low performance – behaviors that will yield the highest potential return on investment in terms of driving return intent.

[Figure: importance vs. performance quadrant (gap analysis) chart]

 

Behaviors with the highest potential for return on investment can then be fed back into the mystery shop scoring methodology, informing decisions about how to weight specific mystery shop questions – assigning more weight to these high-potential behaviors.

Employing Key Driver Analysis gives managers a means of focusing training, coaching, incentives, and other motivational tools directly on the sales and service behaviors that will produce the largest return on investment. See the attached post for further discussion of mystery shop scoring.


Mystery Shop Program Launch


Previously we examined best practices in taking action on mystery shop results.

Obtain Buy-In From the Front-Line

When mystery shopping initiatives fail to meet their potential, it is often because the people who are accountable for the results – front-line employees, supervisors, store managers, and regional managers – were never properly introduced to the program. As a result, there may be internal resistance, creating an unnecessary distraction from the achievement of the company's service improvement goals. A mystery shopping best practice is to ensure employees throughout the organization are fully informed and have bought into the mystery shopping program before it is launched. Pre-launch efforts should include communicating the specific behaviors expected of customer-facing employees, sharing a copy of the mystery shop questionnaire, and training on how to read mystery shopping reports, how to use the information effectively, and how to set goals for improvement.

Provide Adequate Internal Administration

A best practice in mystery shop program design is to anticipate the amount of administration necessary to run a successful mystery shopping program. It requires a strong administrator to keep the company focused and engaged, and to make sure that recalcitrant field managers are not able to undermine the program before it stabilizes and begins to realize its potential value.

Provide a Fair & Firm Dispute Process

Disputed shops are part of the process.  Mystery shops are just a snapshot in time, measuring complex service interactions.  As a result, there may be extenuating circumstances that need to be addressed, or questions about the quality of the mystery shopper's performance, that require a process for disputing shop scores that is both fair and firm.  Fairness is critical to employee buy-in and morale.  Firmness is required to keep the number of shop disputes in check and cut down on frivolous score disputes.

The specifics of the dispute process will depend on each brand's culture and values.  Here are some ways a fair and firm, best in class mystery shop dispute process can be designed:

Arbitration: Most brands have a program manager, or group of program managers, acting as arbitrator of disputes, ordering reshops or adjusting points for an individual shop as they see fit.  The arbiter of disputes must be both fair and firm; otherwise, employees and other managers will quickly start gaming the system, bogging the process down with frivolous disputes.

Fixed Number of Challenges: Other brands give each business unit (or store) a fixed number of challenges with which they can request an additional shop.  Managers responsible for that business unit can request a reshop for any reason; however, once the fixed number of challenges is exhausted, they lose the ability to request a reshop.  This approach is fair (each business unit has the same number of challenges), it reduces the administrative burden on a centralized arbiter, and it limits the potential for gaming the system, since the number of disputes is capped.

Click here for the final installment in this series.


Taking Action on Mystery Shop Results


Previously we examined best practices in mystery shop sample planning.

Call to Action Analysis

A best practice in mystery shop design is to build in call to action elements designed to identify key sales and service behaviors which correlate to a desired customer experience outcome.  This Key Driver Analysis determines the relationship between specific behaviors and a desired outcome.  For most brands and industries, the desired outcomes are purchase intent or return intent (customer loyalty).  This approach helps brands identify and reinforce sales and service behaviors which drive sales or loyalty – behaviors that matter.

[Figure: Key Driver Analysis]


Earlier we suggested that anticipating the analysis in questionnaire design is a mystery shop best practice.  Here is how the three main design elements discussed provide input into call-to-action analysis.

How: Shoppers are asked how, had they been an actual customer, the experience would have influenced their return intent.  Cross-tabulating positive and negative return intent identifies how the responses of mystery shoppers who reported a positive influence on return intent differ from those who reported a negative influence.  This yields a ranking of the importance of each behavior by the strength of its relationship to return intent.

Why: In addition, this rating is paired with a follow-up question asking why the shopper rated their return intent as they did.  The responses to this question are classified into similar themes and cross-tabulated by the return intent rating described above.  This analysis produces a qualitative determination of which sales and service practices drive return intent.

What: The final step in the analysis is identifying which behaviors have the highest potential for ROI in terms of driving return intent.  This is achieved by comparing the importance of each behavior (as defined above) with its performance (the frequency with which it is observed).  Mapping this comparison on a quadrant chart, like the one below, provides a means of identifying behaviors with relatively high importance and low performance – behaviors that will yield the highest potential ROI in terms of driving return intent.

[Figure: importance vs. performance quadrant (gap analysis) chart]

 

This analysis helps brands focus training, coaching, incentives, and other motivational tools directly on the sales and service behaviors that will produce the largest return on investment – behaviors that matter.

Taking Action

Part of Balanced Scorecard

A best practice in mystery shopping is to integrate customer experience metrics from both sides of the brand-customer interface into an incentive plan.  The exact nature of the compensation plan should depend on broader company culture and objectives.  In our experience, a best practice is a balanced scorecard approach that incorporates customer experience metrics along with financial metrics, internal business process metrics (cycle time, productivity, employee satisfaction, etc.), and innovation and learning metrics.

Within these four broad categories of measurement, Kinēsis recommends managers select the specific metrics (such as ROI, mystery shop scores, customer satisfaction, and cycle time) that will best measure performance relative to company goals. Discipline should be used, however: too many metrics are difficult to absorb. Rather, a few metrics of key significance to the organization should be collected and tracked in a balanced scorecard.

Coaching

Best in class mystery shop programs identify employees in need of coaching.  Event-triggered reports should identify employees who failed to perform targeted behaviors.  For example, if it is important for a brand to track cross- and up-selling attempts in a mystery shop, a Coaching Report should be designed to flag any employees who failed to cross- or up-sell.  Managers simply consult this report to identify which employees are in need of coaching with respect to these key behaviors – behaviors that matter.
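
A minimal sketch of such an event-triggered flag, assuming shop results sit in a pandas DataFrame with hypothetical employee and behavior columns:

```python
# Flag employees who failed to attempt a cross-sell or up-sell on any shop.
# Column names are hypothetical; map them to your own questionnaire fields.
import pandas as pd

shops = pd.DataFrame({
    "employee": ["Jones", "Smith", "Lee", "Smith"],
    "cross_sell_attempted": [True, False, True, True],
    "up_sell_attempted":    [True, False, True, False],
})

coaching_report = shops[~shops["cross_sell_attempted"] | ~shops["up_sell_attempted"]]
print(coaching_report)
```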

Click here for the next installment in this series: Mystery Shop Program Launch.
