
Taking Action on Mystery Shop Results


Previously we examined best practices in mystery shop sample planning.

Call to Action Analysis

A best practice in mystery shop design is to build in call to action elements designed to identify key sales and service behaviors which correlate to a desired customer experience outcome.  This Key Driver Analysis determines the relationship between specific behaviors and a desired outcome.  For most brands and industries, the desired outcomes are purchase intent or return intent (customer loyalty).  This approach helps brands identify and reinforce sales and service behaviors which drive sales or loyalty – behaviors that matter.

[Figure: Key Driver Analysis]


Earlier we suggested that anticipating the analysis in questionnaire design is a mystery shop best practice.  Here is how the three main design elements discussed provide input into call to action analysis.

How: Shoppers are asked how, had they been an actual customer, the experience would have influenced their return intent.  Cross-tabulating behaviors by positive and negative return intent identifies how the responses of mystery shoppers who reported a positive influence on return intent differ from those who reported a negative influence.  This yields a ranking of the importance of each behavior by the strength of its relationship to return intent.

Why: Paired with this rating is a follow-up question asking why the shopper rated their return intent as they did.  The responses to this question are grouped into similar themes and cross-tabulated by the return intent rating described above.  This analysis produces a qualitative determination of which sales and service practices drive return intent.
What: The final step in the analysis is identifying which behaviors have the highest potential ROI in terms of driving return intent.  This is achieved by comparing the importance of each behavior (as defined above) with its performance (the frequency with which it is observed).  Mapping this comparison in a quadrant chart, like the one below, identifies behaviors with relatively high importance and low performance; these offer the highest potential ROI in terms of driving return intent.

[Figure: Importance/Performance Gap Quadrant Chart]

 

This analysis helps brands focus training, coaching, incentives, and other motivational tools directly on the sales and service behaviors that will produce the largest return on investment – behaviors that matter.
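The quadrant logic described above can be sketched in a few lines.  This is a minimal illustration with entirely hypothetical behaviors and scores, not Kinēsis' actual methodology; for simplicity the quadrant cutoffs here are just the average of each axis.

```python
# Hypothetical key driver results: each behavior has an importance score
# (strength of its relationship to return intent) and a performance score
# (the frequency with which shoppers observed it).
behaviors = {
    "Greeted promptly":          {"importance": 0.72, "performance": 0.91},
    "Asked profiling questions": {"importance": 0.65, "performance": 0.38},
    "Offered cross-sell":        {"importance": 0.58, "performance": 0.22},
    "Thanked the customer":      {"importance": 0.20, "performance": 0.95},
}

# Split the quadrant chart at the average of each axis.
imp_cut = sum(b["importance"] for b in behaviors.values()) / len(behaviors)
perf_cut = sum(b["performance"] for b in behaviors.values()) / len(behaviors)

# High importance + low performance = highest potential ROI from coaching.
high_roi = [name for name, b in behaviors.items()
            if b["importance"] > imp_cut and b["performance"] < perf_cut]
print(high_roi)  # ['Asked profiling questions', 'Offered cross-sell']
```

With these made-up numbers, the two behaviors that matter most but occur least float to the top of the coaching agenda.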

Taking Action

Part of Balanced Scorecard

A best practice in mystery shopping is to integrate customer experience metrics from both sides of the brand-customer interface as part of an incentive plan.  The exact nature of the compensation plan should depend on broader company culture and objectives.  In our experience, a best practice is a balanced scorecard approach which incorporates customer experience metrics along with financial, internal business process (cycle time, productivity, employee satisfaction, etc.), and innovation and learning metrics.

Within these four broad categories of measurement, Kinēsis recommends managers select the specific metrics (such as ROI, mystery shop scores, customer satisfaction, and cycle time) which will best measure performance relative to company goals.  Discipline should be used, however: too many metrics are difficult to absorb.  Rather, a few metrics of key significance to the organization should be collected and tracked in a balanced scorecard.

Coaching

Best in class mystery shop programs identify employees in need of coaching.  Event-triggered reports should identify employees who failed to perform targeted behaviors.  For example, if it is important for a brand to track cross- and up-selling attempts in a mystery shop, a Coaching Report should be designed to flag any employees who failed to cross- or up-sell.  Managers simply consult this report to identify which employees are in need of coaching with respect to these key behaviors – behaviors that matter.
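Mechanically, an event-triggered Coaching Report of this kind is just a filter over shop records.  The sketch below uses hypothetical field names and employees:

```python
# Hypothetical shop records noting whether each targeted behavior occurred.
shops = [
    {"employee": "A. Lee",   "cross_sell": True,  "up_sell": False},
    {"employee": "B. Kim",   "cross_sell": True,  "up_sell": True},
    {"employee": "C. Ortiz", "cross_sell": False, "up_sell": False},
]

# Coaching Report: flag anyone who failed to cross- or up-sell.
coaching_report = [s["employee"] for s in shops
                   if not (s["cross_sell"] and s["up_sell"])]
print(coaching_report)  # ['A. Lee', 'C. Ortiz']
```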

Click here for the next installment in this series: Mystery Shop Program Launch.

Click Here for Mystery Shopping Best Practices


Mystery Shop Sample Plans – How many shops?


Previously we examined best practices in mystery shop scoring.

Decisions regarding the number of shops are primarily driven by budgetary resources available and the level of statistical reliability required.

Reliability at Individual or Store Level

The most appropriate measure of reliability at the individual or store level is maximum possible shop distortion (MPSD).  Given that shops are snapshots of specific moments in time, it is possible for unique events to influence the outcome of any one shop.  It is possible, therefore, that the experience observed by the mystery shopper is not representative of what normally happens.  Consider the following examples: a retail location is shopped hours after it was held up, or a bank teller is shopped on the day after her child was up sick all night, or a server at a restaurant just had an extremely bad day.  In each of these cases, it is possible these external events impacted employee performance and the customer experience.

How do we know if the experience is typical or not?

Maximum possible shop distortion is the maximum influence any one unique event can have on a set of shops of an individual or location.

With one shop of a given location, we do not know whether the experience is typical; we have only one data point, so the MPSD is 100%.  With two shops, the MPSD is 50%: if the two shops disagree, we do not know which is normal and which is the outlier.  With three shops, we have potentially two shops pointing to the outlier (MPSD 33%).  The MPSD continues to decline with each additional shop.

[Figure: Maximum Possible Shop Distortion vs. Number of Shops]

As this graph illustrates, maximum possible shop distortion begins to flatten out relative to the incremental program cost as we approach 3 to 4 shops per store.   This is where ROI in terms of improved reliability is maximized.
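Since a single anomalous shop can account for at most one of n observations of a location, MPSD is simply 1/n.  A minimal sketch:

```python
def mpsd(n_shops: int) -> float:
    """Maximum possible shop distortion for n shops of one location."""
    if n_shops < 1:
        raise ValueError("at least one shop is required")
    return 1.0 / n_shops

# MPSD falls steeply at first, then flattens out around 3-4 shops:
for n in range(1, 7):
    print(f"{n} shop(s): MPSD = {mpsd(n):.0%}")
# 1 shop(s): MPSD = 100%
# 2 shop(s): MPSD = 50%
# 3 shop(s): MPSD = 33% ...
```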

Click here for the next installment in this series: Taking Action on Mystery Shop Results.


Best Practices in Mystery Shop Scoring


Previously we examined the process of mystery shop questionnaire design.

Most mystery shopping programs score shops according to some scoring methodology to distill the mystery shop results down into a single number.

Scoring methodologies vary, but the most common methodology is to assign points earned for each behavior measured and divide the total points earned by the total points possible, yielding a percent of points earned relative to points possible.  It is a best practice in mystery shopping to calculate the score for each business unit independently (employee, store, region, division, corporate).

Not all Behaviors are Equal

Some behaviors are more important than others.  As a result, best in class mystery shop programs weight behaviors by assigning more points possible to those deemed more important.  Best practices in mystery shop weighting begin by assigning weights according to management standards (behaviors deemed more important, such as certain sales or customer education behaviors), or according to their importance to a desired outcome such as purchase intent or loyalty.  Service behaviors with stronger relationships to the desired outcome, identified through Key Driver Analysis, receive greater weight.  (Key Driver Analysis is discussed in more detail later in this series.)

Don’t Average Averages!

It is a mistake to calculate business unit scores by averaging unit scores together (such as calculating a region’s score by averaging the individual store or even shop scores for the region).  This will only yield a mathematically correct score if all shops have exactly the same points possible and all business units have exactly the same number of shops.  However, if the shop has any skip logic, where some questions are answered only if specific conditions exist, different shops will have different points possible, and it is a mistake to average them together.  Averaging them gives shops with skipped questions disproportionate weight.  Rather, points earned should be divided by points possible for each business unit independently.  Just remember – don’t average averages!
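The distortion is easy to demonstrate with hypothetical shop results, where skip logic has left one shop with a different number of points possible:

```python
# Hypothetical region shops as (points_earned, points_possible) pairs;
# skip logic means shops can have different points possible.
region_shops = [(45, 50), (18, 20), (70, 100)]

# Wrong: averaging per-shop percentages overweights shops with fewer
# points possible.
average_of_averages = sum(e / p for e, p in region_shops) / len(region_shops)

# Right: total points earned divided by total points possible, calculated
# once for the business unit.
region_score = sum(e for e, _ in region_shops) / sum(p for _, p in region_shops)

print(f"{average_of_averages:.1%}")  # 83.3%
print(f"{region_score:.1%}")         # 78.2%
```

The 100-point shop carries half the region's points possible, yet the average-of-averages treats it the same as a 20-point shop.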

What Is A Good Score?

This is perhaps the most common question asked by mystery shop clients – one for which there is no simple answer.  It amazes me how many mystery shop providers I’ve heard pull a number out of the air, say 90%, and quote it as the benchmark with no thought given to context.  The fact of the matter is much more complex.  Context is key.  What constitutes a good score varies dramatically from client to client and program to program based on the specifics of the evaluation.  One program may be an easy evaluation, measuring easy behaviors, where a score must be near perfect to be considered “good”; another may be a difficult evaluation measuring more difficult behaviors, in which case a good score will be well below perfect.

The best practice in determining what constitutes a good mystery shop score is to consider the distribution of your shop scores as a whole, determine the percentile rank of each shop (the proportion of shops that fall below a given score), and set an appropriate cutoff point.  For example, if management decides the 60th percentile is an appropriate standard (6 out of 10 shops fall below it), and a shop score of 86% is in the 60th percentile, then a shop score of 86% is a “good” shop score.
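The percentile rank calculation above, sketched with a hypothetical distribution of ten shop scores:

```python
def percentile_rank(scores, score):
    """Proportion of shops that fall below the given score."""
    return sum(1 for s in scores if s < score) / len(scores)

# Hypothetical shop scores (percent of points earned):
scores = [62, 70, 74, 78, 81, 84, 86, 90, 93, 97]

# If management sets the 60th percentile as the standard, a score of 86
# (6 of the 10 shops fall below it) qualifies as a "good" shop.
print(percentile_rank(scores, 86))  # 0.6
```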

Work Toward a Distribution

[Figure: Distribution of Shop Scores]

When all is said and done, a best in class mystery shop scoring methodology will produce a distribution of shop scores, particularly on the low end of the range.  Mystery shop programs with tight distributions around the average shop score offer little opportunity to identify areas for improvement: all the shops end up very similar to each other, making it difficult to identify problem areas and improve employee behaviors.  Distributions with scores skewed to the low end make it much easier to identify poor shops and offer opportunities for improvement via employee coaching.  If questionnaire design and scoring create scores with tight distributions, consider a redesign.

Click here for the next installment in this series: Mystery Shop Sample Plans.


Mystery Shopping Questionnaire Design


Previously we examined the process of defining mystery shopping objectives.

Keep it Simple

Often mystery shopping programs are designed by committee, which can result in an overly complicated and cumbersome program.  Unrealistic scenarios combined with long, overly complex questionnaires frustrate the mystery shopper, the mystery shop provider, and the end client.  In such cases the likelihood of shopper exposure increases and the accuracy of the observations suffers.  Keep it simple – simpler designs work better and provide more value.

Anticipate the Analysis

Finally, identify what specific desired outcome you want from the customer as a result of the experience.  Do you want the customer to purchase something?  Do you want them to return for another purchase?  The answer to this question will anticipate the analysis and build in mechanisms for Key Driver Analysis to identify which behaviors are most important in driving this desired outcome – behaviors that matter.

What, How & Why

A best practice in mystery shop questionnaire design is to include observations of objective behaviors, subjective impressions and comments.  Each of these serves a specific purpose in identifying the service behaviors that matter – behaviors which drive profitability.  Together, these three elements of questionnaire design reveal the “what”, “how” and “why” of the customer experience.

[Figure: What, How & Why]

 

What (Objective Behaviors): Observations of objective behaviors form the backbone of best in class mystery shops.  These observations identify what specific sales and service behaviors were observed.  Mystery shopping is primarily an observational form of research, and as such, a best practice in mystery shopping is to focus on specific, objective, and observable behaviors.  These objective observations serve two purposes: first, they measure and motivate expected sales and service behaviors; second, they serve as a foundation for Key Driver Analysis, where the two subjective elements of the questionnaire are used to determine the relationship between employee behaviors and a desired outcome, such as purchase intent or customer loyalty.

How (Subjective Impressions): Subjective impressions are primarily captured through scientifically designed and strategically selected rating scales.  These questions reveal how the shopper felt about the experience.  They add both a quantitative and a qualitative perspective to the objective behaviors observed, and provide a basis for interpreting individual shops as well as an analytical means to determine the relationship between each service behavior and the desired outcome.  We will explore this in more detail in a discussion of Key Driver Analysis.

Why (Subjective Comments): Beyond measuring what behaviors were observed and how the shopper felt about the experience, open-ended comments capture why shoppers felt the way they did.  While objective behaviors are the backbone of the shop, many of Kinēsis’ clients consider these comments the heart of the shop, providing qualitative texture for understanding specifically how the shopper felt about the experience.  They not only serve as a framework for understanding each shop individually, but also provide raw material for content analysis to determine the key qualitative drivers of the desired outcome (purchase intent and customer loyalty).

Anticipate the Analysis

A best practice in mystery shopping program design is to anticipate the analysis.  Together, these three design elements provide input into Key Driver Analysis techniques to identify key sales and service drivers of purchase intent and loyalty – behaviors that matter.

[Figure: Key Driver Analysis]

Click here for the next installment in this series: Best Practices in Mystery Shop Scoring.


Defining Mystery Shopping Objectives


Previously we examined different types of mystery shopping.

The first step in building a best in class mystery shop program is defining your objectives.  Defining research objectives before making any other decisions about the program will ensure your program starts right, stays on track and on budget, and produces positive results.  The best practice for defining program objectives is a fairly simple process: first, generate a list of specific behavioral expectations you have of your employees.

What Do You Expect?

Ask yourself what sales and service behaviors you expect from employees.  This list of behaviors is going to vary broadly from industry-to-industry, channel-to-channel, and brand-to-brand.  Some of the questions you might ask yourself look like this:

  • What specific service behaviors do we expect?
  • When greeting a customer, what specific behaviors do we expect from staff?
  • When meeting with customers after the greeting, what specific behaviors do we expect?
  • If a phone interaction, what specific hold/transfer procedures do we expect (for example asking to be placed on hold, informing customer of the destination of the transfer)?
  • Are there specific profiling questions we expect to be asked?  If so, what are they?
  • What closing behaviors do we expect?  How do we want employees to ask for the business?
  • At the conclusion of the interaction, how do we want the employee to conclude the conversation or say goodbye?
  • Are there specific follow-up behaviors we expect, such as getting contact information, suggesting another appointment, or offering to call the customer?
  • What other specific behaviors do we expect?

Map Expectations to the Shop Questionnaire

Once you have developed a list of the specific behaviors you expect, the next step is to map each of your behavioral expectations to a question or set of questions on the mystery shop questionnaire.  Remember: these behaviors must be specific, objective, and observable.

Click here for the next installment in this series: Mystery Shopping Questionnaire Design.


Mystery Shopping Best Practices


Introduction

“You can expect what you inspect.”

This management philosophy is as true today as it was 50 years ago when W. Edwards Deming coined it. Managers of the customer experience have several tools available to them to inspect or monitor the customer experience.  However, when it comes to monitoring employee behaviors – service and sales behaviors that drive customer experience success – no tool is better suited for that objective than mystery shopping.

Mystery shopping programs, when administered in accordance with certain mystery shopping best practices, not only test for the presence of service behaviors, but identify which sales and service behaviors matter most.  These behaviors – the ones that matter most – are those which drive either purchase intent or customer loyalty. Mystery shopping provides a vehicle to not only measure but motivate these key behaviors.  Central to the success of any customer experience initiative is understanding and adhering to certain best practices.  This white paper advances several mystery shopping best practices.

Central to monitoring the customer experience is an understanding of the brand-customer interface.  At the center of the customer experience are the various channels which form the interface between the customer and the brand.  Together, these channels define the brand more than any external messaging.  Different research tools have different purposes: some are designed to monitor the customer experience from the customer side of this interface; others, like mystery shopping, monitor it from the brand side.  Best in class mystery shopping programs focus on the behavioral side of the equation, answering the question: are our employees exhibiting appropriate sales and service behaviors, and are these behaviors the ones that matter?

Types of Mystery Shopping

Before discussing best practices in mystery shopping, it is instructive to consider how brands use mystery shopping to measure and motivate the desired customer experience.  Just about any channel in the brand-customer interface can be shopped at any point in the customer journey.

Some of the types of shops include:

In-Person: While distribution channels shift to more self-administered online channels, in many industries the in-person channel continues to be the embodiment of the brand – central to a multichannel strategy.  This role will put new pressures on store personnel as brand advocates.  In-person mystery shopping evaluates and motivates sales and service behaviors as part of this role.

Contact Center: Contact center mystery shopping provides managers a unique opportunity to evaluate the customer experience using predetermined scenarios.  Most contact centers employ call monitoring to evaluate agent performance.  Best in class mystery shopping programs augment call monitoring by giving managers a tool to present specific scenarios to agents to test their performance.

Internal Shops: Internal shops evaluate service provided to internal customers to identify internal bottlenecks which may hinder the ability to provide optimal customer service.

Web/Mobile Shops: Across many industries, self-administered channels are increasingly becoming key to opening and deepening the customer relationship.  Mystery shopping website and mobile channels provides managers tools to test ease of use, navigation, and the overall customer experience of online and mobile channels.

Life Cycle Shops: Life cycle mystery shops are designed to evaluate the customer experience through the entire customer journey, across a variety of delivery channels and a spectrum of transactions, over an extended period of time.

Competitive Shops: Shopping competitors allows customer experience managers to benchmark their brand-customer interface relative to their competitors.

Click here for the next installment in this series: Defining Mystery Shopping Objectives.


Research Tools to Monitor Planned Interactions Through the Customer Life Cycle

As we explored in an earlier post, 3 Types of Customer Interactions Every Customer Experience Manager Must Understand, there are three types of customer interactions: Stabilizing, Critical, and Planned.

The third of these, “planned” interactions, is intended to increase customer profitability through up-selling and cross-selling.

These interactions are frequently triggered by changes in the customer’s purchasing patterns, account usage, financial situation, family profile, etc. CRM analytics combined with Big Data are becoming quite effective at recognizing such opportunities and prompting action from service and sales personnel. Customer experience managers should have a process to record and analyze the quality of execution of planned interactions with the objective of evaluating the performance of the brand at the customer brand interface – regardless of the channel.

The key to an effective strategy for planned interactions is appropriateness. Triggered requests for increased spending must be made in the context of the customer’s needs and permission; otherwise, the requests will come off as clumsy and annoying. By aligning information about execution quality (cause) and customer impressions (effect), customer experience managers can build a more effective and appropriate approach to planned interactions.

Research Plan for Planned Interactions

The first step in designing a research plan to test the efficacy of these planned interactions is to define the campaign. Ask yourself, what customer interactions are planned based on customer behavior? Mapping the process will define your research objectives, allowing an informed judgment of what to measure and how to measure it.

For example, after acquisition and onboarding, assume a brand has a campaign to trigger planned interactions based on triggers from tenure, recency, frequency, share of wallet, and monetary value of transactions. These planned interactions are segmented into the following phases of the customer lifecycle: engagement, growth, and retention.

[Figure: Customer Lifecycle]

 

Engagement Phase

Often it is instructive to think of customer experience research in terms of the brand-customer interface, employing different research tools to study the customer experience from both sides of this interface.

In our example above, management may measure the effectiveness of planned experiences in the engagement phase with the following research tools:

The tools below monitor these planned experiences from the customer side and the brand side of the interface:
Post-Transaction Surveys (Customer Side)

Post-transaction surveys are event-driven, where a transaction or service interaction determines if the customer is selected for a survey, targeting specific customers shortly after a service interaction. As the name implies, the purpose of this type of survey is to measure satisfaction with a specific transaction.

Transactional Mystery Shopping (Brand Side)

Mystery shopping is about alignment.  It is an excellent tool to align sales and service behaviors to the brand. Mystery shopping focuses on the behavioral side of the equation, answering the question: are our employees exhibiting the sales and service behaviors that will engage customers to the brand?

Overall Satisfaction Surveys (Customer Side)

Overall satisfaction surveys measure customer satisfaction among the general population of customers, regardless of whether or not they recently conducted a transaction.  These surveys give managers a feel for satisfaction, engagement, image and positioning across the entire customer base, not just active customers.

Alternative Delivery Channel Shopping (Brand Side)

Website mystery shopping allows managers of these channels to test ease of use, navigation and the overall customer experience of these additional channels.

Employee Surveys (Brand Side)

Employee surveys often measure employee satisfaction and engagement.  However, they can also be employed to understand what is going on at the customer-employee interface by leveraging employees as a valuable and inexpensive source of customer experience information.  They not only provide intelligence into the customer experience, but also evaluate the level of support within the organization and identify perceptual gaps between management and frontline personnel.

 

Growth Phase

In the growth phase, one may measure the effectiveness of planned experiences on both sides of the customer interface with the following research tools:

These tools monitor the growth phase from the customer side and the brand side of the interface:
Awareness Surveys (Customer Side)

Awareness of the brand and its products and services is central to planned service interactions.  Managers need to know how awareness and attitudes change as a result of these planned experiences.

Cross-Sell Mystery Shopping (Brand Side)

In these unique mystery shops, mystery shoppers are seeded into the lead/referral process.  The sales behaviors and their effectiveness are then evaluated in an outbound sales interaction.

Wallet Share Surveys (Customer Side)

These surveys evaluate customer engagement with and loyalty to the brand – specifically, whether customers consider the brand their primary provider – and identify potential roadblocks to wallet share growth.

 

Retention Phase

Finally, planned experiences within the retention phase of the customer lifecycle may be monitored with the following tools:

These tools monitor the retention phase from the customer side and the brand side of the interface:
Lost Customer Surveys (Customer Side)

Lost customer surveys identify sources of run-off or churn to provide insight into improving customer retention.

Life Cycle Mystery Shopping (Brand Side)

Shoppers interact with the company over a period of time, across multiple touch points, providing broad and deep observations about sales and service alignment to the brand and performance throughout the customer lifecycle across multiple channels.

Comment Listening (Customer Side)

Comment tools are not new, but with modern Internet-based technology they can be used as a valuable feedback tool to identify at-risk customers and mitigate the causes of their dissatisfaction.

 

Call to Action – Make the Most of the Research

Research without call to action may be interesting, but not very useful.  Regardless of the research choices you make, be sure to build call to action elements into research design.

For mystery shopping, we find linking observations to a dependent variable, such as purchase intent, identifies which sales and service behaviors drive purchase intent – informing decisions with respect to training and incentives to reinforce the sales activities which drive purchase intent.

For surveys of customers, we recommend testing the effectiveness of the onboarding process by benchmarking three loyalty attitudes:

  • Would Recommend: The likelihood of the customer recommending the brand to a friend, relative, or colleague.
  • Customer Advocacy: The extent to which the customer agrees with the statement, “You care about me, not just the bottom line.”
  • Primary Provider: Does the customer consider the brand their primary provider for similar services?

As you contemplate campaigns to build planned experiences into your customer experience, it doesn’t matter what specific model you use.  The above model is simply for illustrative purposes.  As you build your own model, be sure to design customer experience research into the planned experiences to monitor both the presence and effectiveness of these planned experiences.


 

Click Here For More Information About Kinesis' Research Services

Onboarding Research: Research Techniques to Track Effectiveness of Stabilizing New Customer Relationships

As we explored in an earlier post, 3 Types of Customer Interactions Every Customer Experience Manager Must Understand, there are three types of customer interactions: Stabilizing, Critical, and Planned.

The first of these, “stabilizing” interactions, are service encounters which promote customer retention, particularly in the early stages of the relationship.

[Figure: Stabilizing the Customer Relationship]

New customers are at the highest risk of defection, as they have had less opportunity to confirm the provider meets their expectations.  Turnover by new customers is particularly damaging to profits because many defections occur prior to recouping acquisition costs, resulting in a net loss on the customer relationship.  As a result, customer experience managers should stabilize the customer relationship early to ensure a return on acquisition costs.

Systematic education shapes customer expectations: beyond simply informing customers about additional products and services, it shows new customers how to use services more effectively and efficiently.  Part of this systematic approach to creating stabilizing service encounters is measuring the efficacy of the customer experience at all stages of the stabilizing process.

Onboarding Research

The first step in designing a research plan for the onboarding process is to define the process itself.  Ask yourself: what type of stabilizing customer experiences do we expect at initial purchase and at discrete time periods thereafter (be it 30 days, 90 days, or 1 year)?  Understanding the expectations of the process itself will define your research objectives, allowing an informed judgment of what to measure and how to measure it.

Specific recommendations vary from industry to industry; typically, however, we recommend measuring the onboarding process by auditing the performance of the process and its influence on the customer relationship.

Performance Audits

Mystery shopping is an effective tool to audit the performance of the onboarding process.

First, mystery shop the initial sales process to evaluate the efficacy and effectiveness of the sales process.  Be sure to link the mystery shop observations to a dependent variable, such as purchase intent, to determine which sales behaviors drive purchase intent.  This will inform decisions with respect to training and incentives to reinforce the sales activities which drive purchase intent.

Beyond auditing the initial sales experience, a mystery shop audit of the onboarding process should test the presence and timing of specific onboarding events expected at discrete time periods.  As an example, a retail bank may expect the following onboarding process after a new account is opened:

Period and expected events:

  • At Opening: Internet Banking Presentation, Mobile Banking Presentation, Contact Center Presentation, ATM Presentation, Disclosures
  • 1–10 Days: Welcome Letter, Checks, Debit Card, Internet Banking Password, Overdraft Protection Brochure, Mobile Banking E-Mail
  • 30–45 Days: First Statement, Switch Kit, Credit Card Offer, Auto Loan Brochure, Mortgage/Home Equity Loan Brochure

In this example, the bank’s customer experience managers have designed a process to make customers aware of more convenient, less expensive channels, as well as additional services offered.  An integrated research plan would recruit mystery shoppers for a long-term evaluation to audit the presence, timing, and effectiveness of each event in the onboarding process.

Customer Perspective

In parallel to auditing the presence and timing of onboarding events, research should be conducted to evaluate the effectiveness of the process in stabilizing the customer relationship by surveying new customers at distinct intervals after customer acquisition.  We recommend testing the effectiveness of the onboarding process by benchmarking three loyalty attitudes:

  • Would Recommend: The likelihood of the customer recommending the brand to a friend, relative, or colleague.
  • Customer Advocacy: The extent to which the customer agrees with the statement, “You care about me, not just the bottom line.”
  • Primary Provider: Does the customer consider the branch their primary provider for similar services?

Tracked together throughout the onboarding process, these three measures give managers a gauge of how effectively the relationship is being stabilized.

Again, new customers are at an elevated risk of defection.  Therefore, it is important to stabilize the customer relationship early to ensure a return on acquisition costs.  A well-designed research process gives managers an important audit of both the presence and timing of onboarding events, and tracks customer engagement and loyalty early in the customer’s tenure.

 

Click Here For More Information About Kinesis' Research Services

Measure and Motivate the Right Contact Center Agent Behaviors

Increasingly, banks must operate in a multi-channel environment.  While the changing role of the branch and the growth of automated channels such as online and mobile banking are getting a lot of attention, the contact center retains a key role in delivering an effective customer experience.  Central to this role is designing an experience comprised of the right sales and service behaviors – those which influence customer attitudes and behaviors profitably, yielding the most return on investment.

To provide direction with respect to which sales and service behaviors will yield the most return on investment, Kinesis conducted a series of mystery shops to identify which behaviors have the most influence on purchase intent. In addition to observing specific sales and service behaviors, mystery shoppers were also asked to rate how the call would have influenced their purchase intent if they had been a real customer. This purchase intent rating was then used as a means of calculating the strength of the relationship between each behavior and purchase intent.

To determine the relationship between these service attributes and purchase intent, the data for these different studies was cross-tabulated by the purchase intent rating and subjected to significance testing. [i]

When the percentage of calls in which purchase intent significantly increased is tested against the percentage of calls where purchase intent significantly decreased, nearly all the sales and service attributes are statistically significant at or above a 95% confidence level.
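The comparison described above is a standard two-proportion significance test. The sketch below shows the pooled z-statistic calculation; the sample sizes are assumptions for illustration, since the article does not report them, so the resulting statistic will not exactly match the table.

```python
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Pooled two-proportion z test statistic.

    p1, p2: observed proportions in each group; n1, n2: group sizes.
    """
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)          # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error
    return (p1 - p2) / se

# Illustration with the "product knowledge" percentages from the table
# below (98% vs. 35%); the group sizes of 60 are assumed.
z = two_proportion_z(0.98, 60, 0.35, 60)
print(round(z, 1))
```

Any statistic above 1.96 clears the 95% confidence threshold used in the analysis.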

 

Behavior | Significantly Increased | Significantly Decreased | Test Statistic
Product knowledge | 98% | 35% | 9.6
Explanations easy to understand | 99% | 45% | 9.0
When thanked, respond graciously | 98% | 42% | 8.5
Friendly demeanor / pleasant voice | 100% | 60% | 8.4
Express appreciation for interest / thank you for business | 92% | 20% | 8.3
Listen attentively | 99% | 60% | 7.3
Ask probing questions | 79% | 10% | 6.4
Offer further assistance | 85% | 25% | 6.2
Speak clearly and avoid bank jargon | 98% | 68% | 5.8
Listen attentively to your needs | 80% | 25% | 5.3
Mention other bank product | 99% | 75% | 5.3
Clear greeting | 95% | 60% | 5.1
Invite you to visit branch | 64% | 10% | 4.6
Explain the value of banking with bank | 57% | 5% | 4.4
Offer to mail material / mention website | 66% | 20% | 4.3
Ask your name | 68% | 25% | 3.8
Ask for your business / close the sale | 57% | 21% | 2.9
Avoid interrupting | 100% | 95% | 2.9
If no one available to assist you, offered options | 100% | 0% | 2.2
Professional greeting | 98% | 89% | 1.9

 

The differences between the highest and lowest purchase intent ratings are most significant for product knowledge and ease of understanding explanations, while a professional greeting is the least significant.

Dividing these behaviors into rough quartiles and comparing them side by side reveals some interesting observations:

 

 

Quartile I

Product knowledge

Explanations easy to understand

When thanked, respond graciously

Friendly demeanor / pleasant voice

Express appreciation for interest / thank you for business

 

Quartile II

Listen attentively

Ask probing questions

Offer further assistance

Speak clearly and avoid bank jargon

Listen attentively to your needs

 

Quartile III

Mention other bank product

Clear greeting

Invite you to visit branch

Explain the value of banking with bank

Offer to mail material / mention website
 

Quartile IV

Ask your name

Ask for your business / close the sale

Avoid interrupting

If no one available to assist you, offered options

Professional greeting
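
The quartile grouping above can be reproduced mechanically: rank the behaviors by test statistic and split the ranking into four equal groups. A minimal sketch using the figures from the table:

```python
# Behaviors paired with their test statistics from the table above.
behaviors = [
    ("Product knowledge", 9.6),
    ("Explanations easy to understand", 9.0),
    ("When thanked, respond graciously", 8.5),
    ("Friendly demeanor / pleasant voice", 8.4),
    ("Express appreciation for interest", 8.3),
    ("Listen attentively", 7.3),
    ("Ask probing questions", 6.4),
    ("Offer further assistance", 6.2),
    ("Speak clearly and avoid bank jargon", 5.8),
    ("Listen attentively to your needs", 5.3),
    ("Mention other bank product", 5.3),
    ("Clear greeting", 5.1),
    ("Invite you to visit branch", 4.6),
    ("Explain the value of banking with bank", 4.4),
    ("Offer to mail material / mention website", 4.3),
    ("Ask your name", 3.8),
    ("Ask for your business / close the sale", 2.9),
    ("Avoid interrupting", 2.9),
    ("If no one available, offered options", 2.2),
    ("Professional greeting", 1.9),
]

# Sort descending by test statistic, then split into four equal groups.
ranked = sorted(behaviors, key=lambda b: b[1], reverse=True)
size = len(ranked) // 4
quartiles = [ranked[i * size:(i + 1) * size] for i in range(4)]
for i, q in enumerate(quartiles, start=1):
    print(f"Quartile {i}: {[name for name, _ in q]}")
```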

 

The attributes with the most significant differences between high and low purchase intent ratings appear to be those associated with reliability and empathy.  Mystery shoppers appear to have valued “core” attributes such as product knowledge and genuine interest in and enthusiasm for the customer.  They seem less concerned with more peripheral service attributes, such as being asked for their name.  Influencing purchase intent is not as simple as merely using the customer’s name or answering the phone within a short period of time.  Rather, it is the much more challenging undertaking of being competent in your job and having the customer’s best interests at heart.

[i] Significance testing determines whether differences observed are the result of actual differences in the populations measured rather than normal variation.  Without getting into too much detail, significance testing produces a test statistic used to determine the probability that the differences observed are statistically significant.  A test statistic above 1.96 equates to a 95% confidence level, meaning there is a 95% chance the differences observed reflect actual differences in the populations measured rather than normal variation.  For all practical purposes, a test statistic over 3.1 means the differences observed are almost certainly statistically significant (the probability approaches, but never reaches, 100%).  Finally, in interpreting this analysis, it is important to note that test statistics are not linear: a test statistic of two is not twice as significant as a test statistic of one, and each increment adds less to the significance level as the test statistic grows.  The test statistic does, however, allow us to rank the service attributes by their statistical significance.


Click Here For More Information About Kinesis' Bank Mystery Shopping


Click Here For More Information About Kinesis' Contact Center CX Research


Click Here for Mystery Shopping Best Practices

Best Practices in Bank Customer Experience Measurement Design

The question was simple enough…  If you owned customer experience measurement for one of your bank clients, what would you do?

Through the years, I have developed a point of view on how best to measure the customer experience and shared it with a number of clients; however, I never put it down in writing.

So here it is…

Best practices in customer experience measurement use multiple inputs in a coordinated fashion to give managers a 360-degree view of the customer experience.  Just like tools in a toolbox, different research methodologies serve different needs.  It is not a best practice to use a hammer to drive a screw, nor the butt end of a screwdriver to pound a nail.  Each tool is designed for a specific purpose, but used in concert they can build a house.  The same is true for research tools: individually they are designed for specific purposes, but used in concert they can build a more complete picture of the customer experience.

Generally, Kinesis believes in measuring the customer experience with three broad classifications of research methodologies, each providing a unique perspective:

  1. Customer Feedback – Using customer surveys and other less “scientific” feedback tools (such as comment tools and social media monitoring), managers collect valuable input into customer expectations and impressions of the customer experience.
  2. Observation Research – Using performance audits and monitoring tools such as mystery shopping and call monitoring, managers gather observations of employee sales and service behaviors.
  3. Employee Feedback – Frontline employees are the single most underutilized asset in terms of understanding the customer experience. Frontline employees spend the majority of their time in the company-customer interface and as a result have a unique perspective on the customer experience.  They have a good idea about what customers want, how the institution compares to competitors, and how policies, procedures and internal service influence the customer experience.

These research methodologies are employed in concert to build a 360-degree view of the customer experience.

360-degree bank customer experience measurement

The key to building a 360-degree view of the customer experience is to understand the bank-customer interface.  At the center of the customer experience are the various channels which form the interface between the customer and institution.  Together these channels define the brand more than any external messaging.  Best in class customer experience research programs monitor this interface from multiple directions across all channels to form a comprehensive view of the customer experience.

Customers and frontline employees are the two stakeholders who interact most often with each other at the customer-institution interface.  As a result, a best practice in understanding this interface is to monitor it directly from each direction.

Tools to measure the experience from the customer side of the interface include:

Post-Transaction Surveys: Post-transaction surveys provide intelligence from the customer side of the customer-employee interface.  These surveys are targeted and event-driven, collecting feedback from customers about specific service encounters soon after the interaction occurs.  They provide valuable insight into customer impressions of the customer experience and, if properly designed, into customer expectations.  This creates a learning feedback loop, where customer expectations can be used to inform the service standards measured through mystery shopping.  Thus two different research tools can be used to inform each other.  Click here for a broader discussion of post-transaction surveys.

Customer Comments:  Beyond surveying customers who have recently conducted a service interaction, a best practice is to provide an avenue for customers who want to comment on the experience.  Comment tools are not new (in the past they were the good old fashioned comment card), but with modern Internet-based technology they can be used as a valuable feedback tool to identify at risk customers and mitigate the causes of their dissatisfaction.  Additionally, comment tools can be used to inform the post transaction surveys.  If common themes develop in customer comments, they can be added to the post-transaction surveys for a more scientific measurement of the issue.  Click here for a broader discussion of comment tools.

Social Monitoring:  Increasingly, social media is “the media”; prospective customers assign far more weight to social media than to any external messaging.  A social listening system that analyzes and responds to indirect social feedback is becoming essential.  As with comment tools, social listening can be used to inform post-transaction surveys.  Click here for a broader discussion of social listening tools.

Directing our attention to the other direction, tools to measure the experience from the bank side of the bank-customer interface include:

Mystery Shopping:  In today’s increasingly connected world, one bad experience can be shared hundreds, if not thousands, of times over.  As in-person delivery models shift to a universal associate model, with the branch serving as more of a sales center, monitoring and motivating selling skills is becoming increasingly essential.  Mystery shopping is an excellent tool to align sales and service behaviors with the brand. Unlike the various customer feedback tools designed to inform managers about how customers feel about the bank, mystery shopping focuses on the behavioral side of the equation, answering the question: are our employees exhibiting appropriate sales and service behaviors?  Click here for a broader discussion of mystery shopping tools.

Employee Surveys:  Employee surveys often measure employee satisfaction and engagement.  However, in terms of understanding the customer experience, a best practice is to move employee surveys beyond engagement and leverage employees as a valuable and inexpensive source of customer experience information.  This information comes directly from one side of the customer-employee interface, and not only provides intelligence into the customer experience, but also evaluates the level of support within the organization, solicits recommendations, and compares perceptions by position (frontline vs. management) to identify the perceptual gaps which typically exist within organizations.  Click here for a broader discussion of employee surveys.



Click Here For More Information About Kinesis' Bank CX Research Services