These days, post-transaction surveys are ubiquitous. Brands large and small take advantage of internet-based survey technology to evaluate the customer experience at almost every touch point. Similarly, loyalty proxy methodologies such as Net Promoter (NPS) are very much in vogue. However, many NPS surveys are fielded in a post-transaction context (potentially exposing the research to sampling bias, since only customers who have recently conducted a transaction are heard from), and are not designed in a manner that gives managers appropriate information upon which to act.
At their core, loyalty proxies are brand perception research, not transactional research. We believe it is a best practice to define the sample frame as the entire customer base, as opposed to only customers who have recently interacted with the brand: ultimately, these surveys measure the brand's image and perception across the entire customer base.
Happily, this perception research offers an excellent opportunity to gather customer perceptions of the brand, compare them to your desired brand image, and measure engagement or wallet share. An excellent instrument for accomplishing this is a survey divided into three parts:
- Loyalty Proxy: consisting of the NPS rating (or some other appropriate measure) and 1 or 2 follow-up questions to explore why the customer gave the rating they did.
- Image Perception: consisting of 3 or 4 questions to determine how customers perceive the brand.
- Engagement/Wallet Share: consisting of 3 or 4 questions to determine whether the customer considers the brand their primary provider, and to gauge share of wallet of various financial products & services across the brand and its competitors.
This research plan will not only yield an NPS; it will provide insight into why customers assigned the ratings they did, evaluate the extent to which the entire customer base’s impressions of the brand match your desired brand image, and identify how the brand is perceived by promoters and detractors. This plan will also yield valuable insight into share of wallet, and how wallet share differs between promoters and detractors.
Such a survey need not be long; the above objectives can be accomplished with 10 – 12 questions and will probably take less than 5 minutes for the customer to complete.
In subsequent posts, we will explore each of these three parts of the survey in more detail.
Call to Action Analysis
A best practice in mystery shop design is to build in call to action elements designed to identify key sales and service behaviors which correlate to a desired customer experience outcome. This Key Driver Analysis determines the relationship between specific behaviors and a desired outcome. For most brands and industries, the desired outcomes are purchase intent or return intent (customer loyalty). This approach helps brands identify and reinforce sales and service behaviors which drive sales or loyalty – behaviors that matter.
Earlier we suggested that anticipating the analysis in questionnaire design is a mystery shop best practice. Here is how the three main design elements discussed provide input into call to action analysis.
Shoppers are asked how, had they been an actual customer, the experience would have influenced their return intent. Cross-tabulating responses by positive and negative return intent identifies how the responses of mystery shoppers who reported a positive influence vary from those who reported a negative influence. This yields a ranking of the importance of each behavior by the strength of its relationship to return intent.
In addition, this rating is paired with a follow-up question asking why the shopper rated their return intent as they did. The responses to this question are grouped and classified into similar themes, and cross-tabulated by the return intent rating described above. This analysis produces a qualitative determination of which sales and service practices drive return intent.
The final step in the analysis is identifying which behaviors have the highest potential ROI in terms of driving return intent. This is achieved by comparing the importance of each behavior (as defined above) with its performance (the frequency with which it is observed). Mapping this comparison in a quadrant chart, like the one below, provides a means of identifying behaviors with relatively high importance and low performance, which yield the highest potential ROI in terms of driving return intent.
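To make the mechanics concrete, here is a minimal sketch of this importance/performance quadrant analysis in Python. It is illustrative only, not Kinesis's actual methodology; the behavior flags, the intent coding, and the median split are assumptions for the example.

```python
# Illustrative sketch: classify mystery shop behaviors into an
# importance/performance quadrant chart.
# Each shop records which behaviors were observed (1/0) and the shopper's
# return-intent rating (1 = positive influence, 0 = negative influence).

def quadrant_analysis(shops, behaviors):
    """shops: list of dicts with behavior flags and a 'return_intent' key."""
    positives = [s for s in shops if s["return_intent"] == 1]
    negatives = [s for s in shops if s["return_intent"] == 0]
    results = {}
    for b in behaviors:
        # Performance: how often the behavior is observed overall.
        performance = sum(s[b] for s in shops) / len(shops)
        # Importance: gap in observation rate between positive- and
        # negative-intent shops (a simple cross-tab proxy for driver strength).
        rate_pos = sum(s[b] for s in positives) / len(positives)
        rate_neg = sum(s[b] for s in negatives) / len(negatives)
        results[b] = (rate_pos - rate_neg, performance)
    # Split each axis at its median to form the four quadrants.
    imp_med = sorted(v[0] for v in results.values())[len(results) // 2]
    perf_med = sorted(v[1] for v in results.values())[len(results) // 2]
    quadrants = {}
    for b, (imp, perf) in results.items():
        if imp >= imp_med and perf < perf_med:
            quadrants[b] = "high importance / low performance (highest ROI)"
        elif imp >= imp_med:
            quadrants[b] = "high importance / high performance (maintain)"
        elif perf < perf_med:
            quadrants[b] = "low importance / low performance (monitor)"
        else:
            quadrants[b] = "low importance / high performance (possible overkill)"
    return quadrants
```

Behaviors that land in the high importance / low performance quadrant are the ones with the highest coaching ROI — the behaviors that matter.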
This analysis helps brands focus training, coaching, incentives, and other motivational tools directly on the sales and service behaviors that will produce the largest return on investment – behaviors that matter.
Part of Balanced Scorecard
A best practice in mystery shopping is to integrate customer experience metrics from both sides of the brand-customer interface as part of an incentive plan. The exact nature of the compensation plan should depend on broader company culture and objectives. In our experience, a best practice is a balanced scorecard approach which incorporates customer experience metrics along with financial, internal business process (cycle time, productivity, employee satisfaction, etc.), and innovation and learning metrics.
Within these four broad categories of measurement, Kinēsis recommends managers select the specific metrics (such as ROI, mystery shop scores, customer satisfaction, and cycle time) which will best measure performance relative to company goals. Discipline should be used, however; too many metrics can be difficult to absorb. Rather, a few metrics of key significance to the organization should be collected and tracked in a balanced scorecard.
Best in class mystery shop programs identify employees in need of coaching. Event-triggered reports should identify employees who failed to perform targeted behaviors. For example, if it is important for a brand to track cross- and up-selling attempts in a mystery shop, a Coaching Report should be designed to flag any employees who failed to cross- or up-sell. Managers simply consult this report to identify which employees are in need of coaching with respect to these key behaviors – behaviors that matter.
As we explored in an earlier post, 3 Types of Customer Interactions Every Customer Experience Manager Must Understand, there are three types of customer interactions: Stabilizing, Critical, and Planned.
The third of these, “planned” interactions, are intended to increase customer profitability through up-selling and cross-selling.
These interactions are frequently triggered by changes in the customer’s purchasing patterns, account usage, financial situation, family profile, etc. CRM analytics combined with Big Data are becoming quite effective at recognizing such opportunities and prompting action from service and sales personnel. Customer experience managers should have a process to record and analyze the quality of execution of planned interactions with the objective of evaluating the performance of the brand at the customer brand interface – regardless of the channel.
The key to an effective strategy for planned interactions is appropriateness. Triggered requests for increased spending must be made in the context of the customer’s needs and permission; otherwise, the requests will come off as clumsy and annoying. By aligning information about execution quality (cause) and customer impressions (effect), customer experience managers can build a more effective and appropriate approach to planned interactions.
Research Plan for Planned Interactions
The first step in designing a research plan to test the efficacy of these planned interactions is to define the campaign. Ask yourself, what customer interactions are planned based on customer behavior? Mapping the process will define your research objectives, allowing an informed judgment of what to measure and how to measure it.
For example, after acquisition and onboarding, assume a brand has a campaign to trigger planned interactions based on triggers from tenure, recency, frequency, share of wallet, and monetary value of transactions. These planned interactions are segmented into the following phases of the customer lifecycle: engagement, growth, and retention.
Often it is instructive to think of customer experience research in terms of the brand-customer interface, employing different research tools to study the customer experience from both sides of this interface.
In our example above, management may measure the effectiveness of planned experiences in the engagement phase with the following research tools:
Post-Transaction Surveys
Post-transaction surveys are event-driven, where a transaction or service interaction determines if the customer is selected for a survey, targeting specific customers shortly after a service interaction. As the name implies, the purpose of this type of survey is to measure satisfaction with a specific transaction.
Transactional Mystery Shopping
Mystery shopping is about alignment. It is an excellent tool to align sales and service behaviors to the brand. Mystery shopping focuses on the behavioral side of the equation, answering the question: are our employees exhibiting the sales and service behaviors that will engage customers to the brand?
Overall Satisfaction Surveys
Overall satisfaction surveys measure customer satisfaction among the general population of customers, regardless of whether or not they recently conducted a transaction. These surveys give managers a feel for satisfaction, engagement, image and positioning across the entire customer base, not just active customers.
Alternative Delivery Channel Shopping
Website mystery shopping allows managers of these channels to test ease of use, navigation and the overall customer experience of these additional channels.
Employee Surveys
Employee surveys often measure employee satisfaction and engagement. However, they can also be employed to understand what is going on at the customer-employee interface by leveraging employees as a valuable and inexpensive source of customer experience information. They not only provide intelligence into the customer experience, but also evaluate the level of support within the organization and identify perceptual gaps between management and frontline personnel.
In the growth phase, one may measure the effectiveness of planned experiences on both sides of the customer interface with the following research tools:
Awareness of the brand, its products and services, is central to planned service interactions. Managers need to know how awareness and attitudes change as a result of these planned experiences.
Cross-Sell Mystery Shopping
In these unique mystery shops, mystery shoppers are seeded into the lead/referral process. The sales behaviors and their effectiveness are then evaluated in an outbound sales interaction.
Wallet Share Surveys
These surveys are used to evaluate customer engagement with and loyalty to the brand; specifically, to determine if customers consider the brand their primary provider, and to identify potential roadblocks to wallet share growth.
Finally, planned experiences within the retention phase of the customer lifecycle may be monitored with the following tools:
Lost Customer Surveys
Lost customer surveys identify sources of run-off or churn to provide insight into improving customer retention.
Life Cycle Mystery Shopping
Shoppers interact with the company over a period of time, across multiple touch points, providing broad and deep observations about sales and service alignment to the brand and performance throughout the customer lifecycle across multiple channels.
Comment Tools
Comment tools are not new, but with modern internet-based technology they can be used as a valuable feedback tool to identify at-risk customers and mitigate the causes of their dissatisfaction.
Call to Action – Make the Most of the Research
Research without call to action may be interesting, but not very useful. Regardless of the research choices you make, be sure to build call to action elements into research design.
For mystery shopping, we find that linking observations to a dependent variable, such as purchase intent, identifies which sales and service behaviors drive that outcome, informing decisions about the training and incentives that will reinforce those behaviors.
For surveys of customers, we recommend testing the effectiveness of the onboarding process by benchmarking three loyalty attitudes:
- Would Recommend: The likelihood of the customer recommending the brand to a friend, relative, or colleague.
- Customer Advocacy: The extent to which the customer agrees with the statement, “You care about me, not just the bottom line.”
- Primary Provider: Does the customer consider the brand their primary provider for similar services?
As you contemplate campaigns to build planned experiences into your customer experience, it doesn’t matter what specific model you use. The above model is simply for illustrative purposes. As you build your own model, be sure to design customer experience research into the planned experiences to monitor both the presence and effectiveness of these planned experiences.
Every time a customer interacts with a brand, they learn something about that brand, and adjust their behavior based on what they learn. They will adjust their behavior in ways that are either profitable or unprofitable for the brand. The implication of this proposition is that the customer experience can be managed in such a way to influence customer behavior in profitable ways.
In order to understand how to drive customer behaviors via the customer experience, it is first important to define the customer behaviors you wish to influence, and then to align marketing messages, performance standards, training content, employee incentives, and measurement systems to encourage those behaviors.
It is impossible, of course, to plan every customer experience or to ensure that every experience occurs exactly as intended. However, companies can identify the types of experiences that impart the right kind of information to customers at the right times. It is useful to group these experiences into three categories of company/customer interaction: Stabilizing, Critical, and Planned.
Stabilizing interactions promote customer retention, particularly in the early stages of the relationship.
New customers are at the highest risk of defection. As customers become more familiar with a brand they adjust their expectations accordingly, however new customers are more likely to experience disappointment, and thus more likely to defect. Turnover by new customers is particularly hard on profits because many defections occur prior to break-even, resulting in a net loss for the company. Thus, experiences that stabilize the customer relationship early on ensure that a higher proportion of customers will reach positive profitability.
The keys to an effective stabilizing strategy are education, competence and consistency.
Education influences expectations, helping customers develop realistic expectations. It goes beyond simply informing customers about the products and services offered by the company. It systematically informs new customers how to use the brand’s services more effectively and efficiently, how to obtain assistance, how to complain, and what to expect as the relationship progresses. In addition to influencing expectations, systematic education leads to greater efficiency in the way customers interact with the company, thus driving down the cost of customer service and support.
Critical interactions are service encounters that lead to memorable customer experiences. While most service is routine, from time to time a situation arises that is out of the ordinary: a complaint, a question, a special request, a chance for an employee to go the extra mile. The outcomes of these critical incidents can be either positive or negative, depending upon the way the company responds to them; however, they are seldom neutral. The longer a customer remains with a company, the greater the likelihood that one or more critical interactions will have occurred.
Because they are memorable and unusual, critical interactions tend to have a powerful effect on the customer relationship. We often think of these as “moments of truth,” where the brand has an opportunity to solidify the relationship, earning a loyal customer, or risk the customer’s defection. Positive outcomes lead to “customer delight” and word-of-mouth endorsements, while negative outcomes lead to customer defections, diminished share of wallet, and unfavorable word-of-mouth.
The key to an effective critical interaction strategy is opportunity. Systems and processes must be in a position to react to these critical moments of truth.
An effective customer experience strategy should include systems for recording critical interactions, analyzing trends and patterns, and feeding that information back to the organization. Employees can then be trained to recognize critical opportunities, and empowered to respond to them in such a way that they will lead to positive outcomes and desired customer behaviors.
Planned interactions are intended to increase customer profitability through up-selling and cross-selling. These interactions are frequently triggered by changes in the customers’ purchasing patterns, account usage, financial situation, family profile, etc. CRM analytics combined with Big Data are becoming quite effective at recognizing such opportunities and prompting action from service and sales personnel. Customer experience managers should have a process to record and analyze the quality of execution of planned interactions with the objective of evaluating the performance of the brand at the customer-brand interface – regardless of the channel.
The key to an effective strategy for planned interactions is appropriateness. Triggered requests for increased spending must be made in the context of the customers’ needs and permission; otherwise, the requests will come off as clumsy and annoying. By aligning information about execution quality (cause) and customer impressions (effect), customer experience managers can build a more effective and appropriate approach to planned interactions.
For additional perspectives on research techniques to monitor the customer experience in the stabilizing phase of the relationship, see the post: Onboarding Research: Research Techniques to Track Effectiveness of Stabilizing New Customer Relationships.
For additional perspectives on a research methodology to investigate “Critical” experiences, see the post: Critical Incident Technique: A Tool to Identify and Prepare for Your Moments of Truth.
For additional perspectives on research methodologies to investigate “Planned” experiences throughout the customer life cycle, see the post: Research Tools to Monitor Planned Interactions Through the Customer Life Cycle.
What if I told you that after all your efforts with marketing (product, positioning and price), there is a one-in-ten chance the branch representatives will undermine the sale?
Now more than ever, it is critical for banks to establish themselves as the primary provider of financial services, not only for deposit accounts but across a variety of financial products and services. Increasing the average products per customer will require a strategic approach to both product design and marketing. However, at the end of this strategic marketing process, there is the human element, where prospective customers must interact with bank employees to complete the sales process.
As part of our services to our clients, Kinesis tracks purchase intent as a result of in-branch sales presentations. According to our research, 10% of in-branch sales presentations observed by mystery shoppers result in negative purchase intent.
What do these failed sales presentations look like?
Here are some quotes describing the experience:
“There was no personal attention. The banker did not seem to care if I was there or not. At the teller line, there was only one teller that seemed to care that there were several people waiting. No one moved with a sense of urgency. There was no communication materials provided.”
Here’s another example…
“It was painfully obvious that the banker was lacking basic knowledge of the accounts.”
“Brian did not give the impression that he wanted my business. He did not stand up and shake my hand when I went over to his desk. He very rarely made eye contact. I felt like he was just going through the motions. He did not ask for my name or address me by my name. He told me about checking account products but failed to inquire about my situation or determine what needs I have or might have in the future. He did not wrap up the recommendation by going over everything nor did he ask for my business. He did not thank me for coming in.”
In contrast, here is what the shops with positive intent look like:
“The appearance of the bank was comfortable and very busy in a good way. The customers were getting tended to and the associates had the customers’ best interests in mind. The response time was amazing and I felt as if the associate was sincere about wanting me as a customer, but he was not pushy or demanding about it.”
Now…after all the effort and expense of a strategic cross-sell strategy, which of the above experiences do you want your customers to encounter?
Would it be acceptable to you, as a marketer, to have 10% of the sales presentations at the end of a strategic marketing campaign undermine its success?
These are rhetorical questions.
Time and time again, in study after study, we consistently observe that purchase intent is driven by two dimensions of the customer experience: reliability and empathy. Customers want bankers who care about them and their needs and have the ability to satisfy those needs. Specifically, our research suggests the following behaviors are strongly related to purchase intent:
- Greeting/Stand to Greet/Acknowledge Wait
- Interest in Helping/Offer Assistance
- Discuss Benefits/Solutions
- Promised Services Get Done
- Express Appreciation/Gracious
- Personalized Comment (such as, How are you?)
- Listen Attentively/Undivided Attention
As part of any strategic marketing campaign to both bring in new customers as well as increase wallet share of existing customers, it is incumbent on the institution to install appropriate customer experience training, sales and service monitoring, linked with incentives and rewards structures to motivate sales and service behaviors which drive purchase intent.
by CHRIS ARLEN
Not All Dimensions Are Equal
All dimensions are important to customers, but some more than others.
Service providers need to know which are which to avoid majoring in minors. At the same time they can’t focus on only one dimension and let the others suffer.
SERVQUAL research gauged the dimensions’ relative importance by asking customers to assign 100 points across all five dimensions.*
Here’s their importance to customers.
The 5 Service Dimensions Customers Care About
What does this mean for service providers?
#1 Just Do It
RELIABILITY: Do what you say you’re going to do when you said you were going to do it.
Customers want to count on their providers. They value that reliability. Don’t providers yearn to find out what customers value? This is it. It’s three times more important to be reliable than to have shiny new equipment or flashy uniforms.
That doesn’t mean you can have ragged uniforms as long as you’re reliable. Service providers have to do both. But providers’ first and best efforts are better spent making service reliable, whether that means periodics on schedule, on-site response within Service Level Agreements (SLAs), or Work Orders completed on time.
#2 Do It Now
RESPONSIVENESS: Respond quickly, promptly, rapidly, immediately, instantly.
Waiting a day to return a call or email doesn’t make it. Even if customers are chronically slow in getting back to providers, responsiveness is more than 1/5th of their service quality assessment.
Service providers benefit by establishing internal SLAs for things like returning phone calls, emails and responding on-site. Whether it’s 30 minutes, 4 hours, or 24 hours, it’s important customers feel providers are responsive to their requests. Not just emergencies, but everyday responses too.
Call centers typically track caller wait times. Service providers can track response times. And their attainment of SLAs or other Key Performance Indicators (KPIs) of responsiveness. This is great performance data to present to customers in Departmental Performance Reviews.
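As a simple illustration of tracking response-time SLA attainment, the sketch below computes the share of responses that met a target. The response times and the 4-hour SLA in the example are hypothetical.

```python
# Hypothetical sketch: compute SLA attainment for response times.
from datetime import timedelta

def sla_attainment(response_times, sla):
    """Return the fraction of responses that met the SLA target.

    response_times: list of timedelta objects (time taken to respond)
    sla: timedelta target (e.g. 4 hours)
    """
    met = sum(1 for t in response_times if t <= sla)
    return met / len(response_times)

# Example: two of three responses land within a 4-hour SLA.
times = [timedelta(minutes=30), timedelta(hours=2), timedelta(hours=5)]
attainment = sla_attainment(times, timedelta(hours=4))
```

An attainment figure like this is the kind of KPI that can be presented to customers in Departmental Performance Reviews.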
#3 Know What You’re Doing
ASSURANCE: Service providers are expected to be the experts of the service they’re delivering. It’s a given.
SERVQUAL research showed it’s important to communicate that expertise to customers. If a service provider is highly skilled, but customers don’t see that, their confidence in that provider will be lower. And their assessment of that provider’s service quality will be lower.
RAISE CUSTOMER AWARENESS OF YOUR COMPETENCIES
Service providers must communicate their expertise and competencies – before they do the work. This can be done in many ways that are repeatedly seen by customers, such as:
- Display industry certifications on patches, badges or buttons worn by employees
- Include certification logos on emails, letters & reports
- Put certifications into posters, newsletters & handouts
By communicating competencies, providers can help manage customer expectations. And influence their service quality assessment in advance.
#4 Care about Customers as much as the Service
EMPATHY: Services can be performed completely to specifications. Yet customers may not feel provider employees care about them during delivery. And this hurts customers’ assessments of providers’ service quality.
For example, a day porter efficiently cleans up a spill in a lobby. However, during the cleanup the porter doesn’t smile, make eye contact, or ask the customer if there is anything else they could do for them. In this hypothetical, the provider’s service was performed fully, but the customer didn’t feel the provider employee cared. And it’s not necessarily the employee’s fault. They may not know how they’re being judged. They may be overwhelmed, inadequately trained, or disinterested.
SERVICE DELIVERY MATTERS
How providers deliver a service can be as important as what was done. Provider employees should be trained in how to interact with customers and their end-users. Even a brief session during initial orientation helps. Anything to help them understand their impact on customers’ assessment of service quality.
#5 Look Sharp
TANGIBLES: Even though this is the least important dimension, appearance matters. Just not as much as the other dimensions.
Service providers will still want to make certain their employees’ appearance, uniforms, equipment, and on-site work areas (closets, service offices, etc.) look good. The danger is for providers to make everything look sharp, and then fall short on RELIABILITY or RESPONSIVENESS.
At the End of the Day
Customers’ assessments include expectations and perceptions across all five SERVQUAL dimensions. Service providers need to work on all five, but emphasize them in order of importance. If sacrifices must be made, use these dimensions as a guide for which ones to rework.
Also, providers can use SERVQUAL dimensions in determining specific customer and site needs. By asking questions around these dimensions, providers can learn how they play out at a particular location/bid opportunity. What dimensions are you in?
* For a description of the SERVQUAL methodology, see the following post: SERVQUAL Model: A Multi-Item Tool for Comparing Customer Perceptions vs. Expectations
Looking for a tried and true model to understand your service quality?
The SERVQUAL model is an empirical model that has been around for nearly 30 years. While not new, it is a foundation of many of the service quality and customer experience concepts in use today. It is a gap model designed to measure gaps between customer perceptions and customer expectations.
SERVQUAL describes the customer experience in terms of five dimensions:
1. TANGIBLES – Appearance of physical facilities, equipment, personnel, and communication materials
2. RELIABILITY – Ability to perform the promised service dependably and accurately
3. RESPONSIVENESS – Willingness to help customers and provide prompt service
4. ASSURANCE – Knowledge and courtesy of employees and their ability to convey trust and confidence
5. EMPATHY – Caring, individualized attention the firm provides its customers
Each of these five dimensions is measured using a survey instrument consisting of individual attributes which roll up into each dimension.
For example, the five dimensions may consist of the following individual attributes:
Tangibles
• Appearance/cleanliness of physical facilities
• Appearance/cleanliness of personnel
• Appearance/cleanliness of communication/marketing materials
• Appearance/cleanliness of equipment
Reliability
• Perform services as promised/right the first time
• Perform services on time
• Follow customer’s instructions
• Show interest in solving problems
Responsiveness
• Telephone calls/other inquiries answered promptly
• Willingness to help/answer questions
• Problems resolved quickly
Assurance
• Knowledgeable employees/job knowledge
• Employees instill confidence in customer
• Employee efficiency
• Employee recommendations
Empathy
• Questioning to understand needs
• Interest in helping
• Individualized/personal attention
• Ease of understanding/use understandable terms
• Understand my needs/recommending products to best fit my needs
• The employees have my best interests at heart
Call to Action
Research without a call to action may be informative, but not very useful. By measuring both customer perceptions and expectations, SERVQUAL gives managers the ability to prioritize investments in the customer experience based not only on their performance, but performance relative to customer expectations.
The first step in taking action on SERVQUAL results is to calculate a Gap Score by simply subtracting the expectation rating from the perception rating for each attribute (Gap Score = Perception – Expectation). This step alone will give you a basis for ranking each attribute based on its gap between customer perceptions and expectations.
Service Quality Score
In addition to ranking service attributes, the Gap Score can be used to calculate both an Unweighted and a Weighted Service Quality Score, the latter based on the relative importance assigned by customers to each of the five service quality dimensions.
The first step in calculating a Service Quality Score is to average the Gap Score of each attribute within each dimension. This will give you the Gap Score for each dimension (GSD). Averaging the dimension Gap Scores will yield an Unweighted Service Quality Score.
From here, it is a three-step process to calculate a Weighted Service Quality Score.
First, determine importance weights by asking customers to allocate a fixed number of points (typically 100) across each of the five dimensions based on how important the dimension is to them. This point allocation will yield a weight for each dimension based on its importance.
The second step is to multiply the Gap Score for each dimension (GSD) by its importance weight. The final step is to simply sum this product across all five dimensions; this will yield a Weighted Service Quality Score.
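The full calculation described above can be sketched in a few lines of Python. The ratings, attribute names, and dimension assignments below are hypothetical examples, not prescribed SERVQUAL items.

```python
# Illustrative sketch of the SERVQUAL scoring steps described above.

def servqual_scores(perceptions, expectations, dimension_map, weights):
    """
    perceptions/expectations: {attribute: mean rating}
    dimension_map: {dimension: [attributes]}
    weights: {dimension: points out of 100 allocated by customers}
    """
    # Step 1: Gap Score per attribute = Perception - Expectation.
    gaps = {a: perceptions[a] - expectations[a] for a in perceptions}
    # Step 2: average the attribute gaps within each dimension (GSD).
    gsd = {d: sum(gaps[a] for a in attrs) / len(attrs)
           for d, attrs in dimension_map.items()}
    # Unweighted Service Quality Score: mean of the dimension gap scores.
    unweighted = sum(gsd.values()) / len(gsd)
    # Weighted score: multiply each GSD by its importance weight, then sum.
    weighted = sum(gsd[d] * (weights[d] / 100) for d in gsd)
    return gaps, gsd, unweighted, weighted
```

A negative weighted score indicates that, on balance, perceptions fall short of expectations in the dimensions customers care most about.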
What does all this mean? See the following post for discussion of the implications of SERVQUAL for customer experience managers: The 5 Service Dimensions All Customers Care About.