As we explored in an earlier post, 3 Types of Customer Interactions Every Customer Experience Manager Must Understand, there are three types of customer interactions: Stabilizing, Critical, and Planned.
The third of these, “planned” interactions, is intended to increase customer profitability through up-selling and cross-selling.
These interactions are frequently triggered by changes in the customer’s purchasing patterns, account usage, financial situation, family profile, etc. CRM analytics combined with Big Data are becoming quite effective at recognizing such opportunities and prompting action from service and sales personnel. Customer experience managers should have a process to record and analyze the quality of execution of planned interactions, with the objective of evaluating the performance of the brand at the customer-brand interface, regardless of channel.
The key to an effective strategy for planned interactions is appropriateness. Triggered requests for increased spending must be made in the context of the customer’s needs and permission; otherwise, the requests will come off as clumsy and annoying. By aligning information about execution quality (cause) and customer impressions (effect), customer experience managers can build a more effective and appropriate approach to planned interactions.
Research Plan for Planned Interactions
The first step in designing a research plan to test the efficacy of these planned interactions is to define the campaign. Ask yourself, what customer interactions are planned based on customer behavior? Mapping the process will define your research objectives, allowing an informed judgment of what to measure and how to measure it.
For example, after acquisition and onboarding, assume a brand has a campaign to trigger planned interactions based on triggers from tenure, recency, frequency, share of wallet, and monetary value of transactions. These planned interactions are segmented into the following phases of the customer lifecycle: engagement, growth, and retention.
Often it is instructive to think of customer experience research in terms of the brand-customer interface, employing different research tools to study the customer experience from both sides of this interface.
In our example above, management may measure the effectiveness of planned experiences in the engagement phase with the following research tools:
Customer Side:

- Post-Transaction Surveys: Post-transaction surveys are event-driven: a transaction or service interaction determines whether the customer is selected for a survey, targeting specific customers shortly after a service interaction. As the name implies, the purpose of this type of survey is to measure satisfaction with a specific transaction.

- Overall Satisfaction Surveys: Overall satisfaction surveys measure customer satisfaction among the general population of customers, regardless of whether they recently conducted a transaction. These surveys give managers a feel for satisfaction, engagement, image, and positioning across the entire customer base, not just active customers.

Brand Side:

- Transactional Mystery Shopping: Mystery shopping is about alignment. It is an excellent tool for aligning sales and service behaviors to the brand. Mystery shopping focuses on the behavioral side of the equation, answering the question: are our employees exhibiting the sales and service behaviors that will engage customers with the brand?

- Alternative Delivery Channel Shopping: Website mystery shopping allows managers of these channels to test ease of use, navigation, and the overall customer experience of these additional channels.

- Employee Surveys: Employee surveys often measure employee satisfaction and engagement. However, they can also be used to understand what is happening at the customer-employee interface by leveraging employees as a valuable and inexpensive source of customer experience information. They not only provide intelligence into the customer experience, but also evaluate the level of support within the organization and identify perceptual gaps between management and frontline personnel.
In the growth phase, one may measure the effectiveness of planned experiences on both sides of the customer interface with the following research tools:
Customer Side:

- Awareness and Attitude Surveys: Awareness of the brand, its products, and its services is central to planned service interactions. Managers need to know how awareness and attitudes change as a result of these planned experiences.

- Wallet Share Surveys: These surveys evaluate customer engagement with and loyalty to the brand: specifically, whether customers consider the brand their primary provider, and what potential roadblocks stand in the way of wallet share growth.

Brand Side:

- Cross-Sell Mystery Shopping: In these unique mystery shops, shoppers are seeded into the lead/referral process. The sales behaviors and their effectiveness are then evaluated in an outbound sales interaction.
Finally, planned experiences within the retention phase of the customer lifecycle may be monitored with the following tools:
Customer Side:

- Lost Customer Surveys: Lost customer surveys identify sources of run-off or churn to provide insight into improving customer retention.

- Comment Tools: Comment tools are not new, but with modern Internet-based technology they can serve as a valuable feedback channel to identify at-risk customers and mitigate the causes of their dissatisfaction.

Brand Side:

- Life Cycle Mystery Shopping: Shoppers interact with the company over a period of time and across multiple touch points, providing broad and deep observations about sales and service alignment to the brand, and about performance throughout the customer lifecycle across multiple channels.
Call to Action – Make the Most of the Research
Research without a call to action may be interesting, but it is not very useful. Regardless of the research choices you make, be sure to build call-to-action elements into the research design.
For mystery shopping, we find that linking observations to a dependent variable, such as purchase intent, identifies which sales and service behaviors drive that intent, informing decisions about the training and incentives used to reinforce those behaviors.
For surveys of customers, we recommend testing the effectiveness of the onboarding process by benchmarking three loyalty attitudes:
- Would Recommend: The likelihood of the customer recommending the brand to a friend, relative, or colleague.
- Customer Advocacy: The extent to which the customer agrees with the statement, “You care about me, not just the bottom line.”
- Primary Provider: Does the customer consider the brand their primary provider for similar services?
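As a sketch of how these benchmarks might be tracked (the response data, scale, and attribute names below are purely hypothetical), each attitude is often summarized as a top-two-box score: the share of respondents rating 4 or 5 on a 5-point agreement scale.

```python
# Hypothetical onboarding-survey responses on a 1-5 agreement scale,
# one list per loyalty attitude. Illustrative data only.
responses = {
    "would_recommend": [5, 4, 3, 5, 2, 4],
    "customer_advocacy": [4, 4, 2, 5, 3, 3],
    "primary_provider": [5, 3, 3, 4, 2, 5],
}

def top_two_box(scores):
    """Share of respondents scoring 4 or 5 (top-two-box)."""
    return sum(1 for s in scores if s >= 4) / len(scores)

# Benchmark each attitude; compare these against later survey waves
# to test whether onboarding is moving the needle.
benchmarks = {attr: top_two_box(scores) for attr, scores in responses.items()}
```

Comparing these percentages wave over wave, rather than inspecting raw means, keeps the benchmark easy to explain to frontline managers.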
As you contemplate campaigns to build planned experiences into your customer experience, it doesn’t matter what specific model you use. The above model is simply for illustrative purposes. As you build your own model, be sure to design customer experience research into the planned experiences to monitor both the presence and effectiveness of these planned experiences.
Although many businesses spend thousands of dollars annually on such research, an astonishing number of mystery shopping programs fail outright or limp along year after year, perennially under-performing against expectations.
Companies recite a litany of complaints, including:
- disputed findings by employees and managers;
- questioning of mystery shoppers’ skills and credibility;
- more internal administration than planned;
- flat trend lines and undifferentiated scores;
- little or no correlation between mystery shopping results and customer satisfaction ratings;
- lack of timeliness and responsiveness from mystery shopping vendors; and,
- difficulty demonstrating return on investment.
There is nothing inherently faulty about the mystery shopping methodology, which is simply a type of observational research. It can and does provide tremendous value when it is designed and executed well.
Here are some best practices to help you avoid these common pitfalls:
Define clear objectives. Considering the high price tag that comes with mystery shopping research, it’s incumbent upon company managers to define their goals in specific and measurable terms.
Keep it simple. In the interest of internal consensus, mystery shopping programs are often designed by committee, which can lead to the program becoming hopelessly complicated and cumbersome. Unrealistic scenarios and long, complex questionnaires are common, creating great frustration for mystery shoppers and program administrators. In such cases the likelihood of shopper exposure is increased and the accuracy of the observations suffers. Simpler designs work better and provide more value.
Hire a vendor that can be a partner. Large companies often employ an excruciating bidding process that rarely identifies the best vendor for their needs. They issue lengthy RFPs for mystery shopping that are meant to weed out the weakest contenders, but by asking bidders to commit to overly detailed and inappropriate specifications they effectively eliminate more sophisticated companies at the same time. The typical RFP process creates an environment in which mystery shopping vendors over-promise in order to make the first cut, thus setting themselves up for failure if they win the account. In addition, it treats mystery shopping research as a commodity, regarding it as a bulk purchase of data rather than a high-value quality improvement tool. Companies have more success when they research the market carefully and identify the companies that have the knowledge and commitment to help them build a truly valuable program.
Obtain buy-in from the front-line. When mystery shopping initiatives fail to meet their potential it is often because the people who are accountable for the results — front-line employees, supervisors, store managers, and regional managers — were never properly introduced to the program. As a result there may be internal resistance, creating an unnecessary distraction from the achievement of the company’s service improvement goals. To ensure success, employees throughout the organization must be fully informed and bought into the mystery shopping program before it is launched. Pre-launch efforts should include training on how to read mystery shopping reports, how to use the information effectively, and how to set goals for improvement.
Provide adequate internal administration. Few companies anticipate the amount of administration necessary to run a successful mystery shopping program. It requires a strong administrator to keep the company focused and on board, and to make sure that recalcitrant field managers are not able to undermine the program before it stabilizes and begins to realize its potential value.
Plan for change. Even well-designed and administered mystery shopping research requires periodic adjustment. Performance scores eventually flatten out or cluster together, diminishing the value of the program as a tool for rewarding top performers and continuously improving quality. Periodic reviews should be worked into the program design so it can be kept relevant and useful, and so the bar can be repeatedly raised on service quality and employee performance.
Meeting the Demand
The consumer demand for better service is growing all the time. Companies struggle to meet this demand in the face of high employee turnover, shrinking profit margins, and increasing competition. At the same time the business landscape is becoming more and more complex, with 24-hour, multi-channel service now a basic consumer expectation.
Mystery shopping is among the more powerful tools available to companies seeking to improve their service quality. By providing objective data about service execution across locations and delivery channels, it allows managers to identify specific areas for improvement and to reward employees in a consistent, relevant manner.
Probably the most common problem facing the customer experience researcher is the actionability or usefulness of the research. All too often, while there may be lots of data available, managers lack methodologies to transition research into action, and identify clear paths to maximize return on investments in the customer experience.
This is particularly true with mystery shopping. When done correctly, mystery shopping can be a valuable tool. However, managers often collect data about the service behaviors of their employees but lack a clear means of identifying which behaviors to focus improvement efforts on, or which service attributes have the most potential for ROI.
What managers need is a tool to help them prioritize the service behaviors on which to focus improvement efforts. One such tool is an analytical technique called Gap Analysis.
Gap Analysis compares performance of individual service attributes relative to their importance, providing a frame of reference for prioritizing which areas require attention and resources.
To perform Gap Analysis, each service attribute measured is plotted across two axes. The first axis is the performance axis. On this axis the performance of each attribute is plotted. The second axis is the importance axis. Each attribute is assigned an importance rating based on its correlation to purchase intent. Service attributes with strong correlations to purchase intent are deemed more important and service attributes with low correlations to purchase intent are deemed less important.
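The two axes can be sketched in code. In this minimal example (the shop records, behavior names, and scales are all hypothetical assumptions, not from the original), performance is the share of shops in which a behavior was observed, and importance is the Pearson correlation between that behavior and the shopper's purchase-intent rating.

```python
from math import sqrt

# Hypothetical mystery-shop records: each behavior is coded 0/1
# (observed or not), and "intent" is a 1-5 purchase-intent rating
# supplied by the shopper. Illustrative data only.
shops = [
    {"greet": 1, "probe_needs": 1, "offer_more": 0, "intent": 5},
    {"greet": 1, "probe_needs": 0, "offer_more": 0, "intent": 3},
    {"greet": 0, "probe_needs": 0, "offer_more": 0, "intent": 1},
    {"greet": 1, "probe_needs": 1, "offer_more": 1, "intent": 5},
    {"greet": 1, "probe_needs": 0, "offer_more": 1, "intent": 2},
]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

attributes = ["greet", "probe_needs", "offer_more"]
intent = [s["intent"] for s in shops]

# Performance axis: how often each behavior was observed.
performance = {a: sum(s[a] for s in shops) / len(shops) for a in attributes}
# Importance axis: how strongly each behavior tracks purchase intent.
importance = {a: pearson([s[a] for s in shops], intent) for a in attributes}
```

Behaviors with high correlation to intent but low observed frequency are the natural first targets for coaching.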
This two-axis plot creates four quadrants:
- Quadrant 1: Areas of high importance and low performance (where there is high potential of realizing return on investments in improving performance).
- Quadrant 2: Areas of high importance and high performance. These are service attributes to maintain.
- Quadrant 3: Areas of low importance and low performance. These are service attributes to address if resources are available.
- Quadrant 4: Areas of low importance and high performance. These are service attributes that require no real attention, as their performance exceeds their importance.
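The quadrant assignment above can be sketched as a small function. This is a simplified illustration, assuming performance is expressed as a percentage and importance on the rating-style scale used in the example; the "cross-hairs" are supplied as mid-points of each axis.

```python
def quadrant(performance, importance, perf_mid, imp_mid):
    """Return the Gap Analysis quadrant (1-4) for one service attribute.

    performance / perf_mid: performance score and axis mid-point.
    importance / imp_mid: importance score and axis mid-point.
    """
    high_imp = importance >= imp_mid
    high_perf = performance >= perf_mid
    if high_imp and not high_perf:
        return 1  # high importance, low performance: fix first (best ROI)
    if high_imp and high_perf:
        return 2  # high importance, high performance: maintain
    if not high_imp and not high_perf:
        return 3  # low importance, low performance: address if resources allow
    return 4      # low importance, high performance: no real attention needed
```

With the example's mid-points (74% performance, 2.9 importance), an attribute scoring 60% performance with importance 3.5 would land in Quadrant 1.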
To illustrate this concept, consider the following example quadrant chart where seven service quality attributes are plotted according to their performance and importance. The “cross-hairs” defining the quadrants are the mid-point (or average) of both the importance and performance measures. In this case the mid-point of the performance measures is 74%, and the mid-point of the importance axis is 2.9.
According to this example, two service attributes reside in the first quadrant (high importance and low performance): “introduce product or service by using targeted questions” and “mention any other product or service.” These two attributes, therefore, should be focused on first, as improvements in them should yield the most ROI in terms of improving purchase intent.
No attributes fall in the second quadrant (high importance and high performance), and one attribute, “offer further assistance,” resides in quadrant three (an area to address if resources are available). The remaining four attributes reside in the fourth quadrant, where performance exceeds importance, and therefore do not require any immediate attention.
In this example, the manager now has a valuable indicator of which service attributes to focus improvement efforts on. Directing attention to the attributes in Quadrant 1 should have the highest likelihood of realizing ROI in terms of improving purchase intent.