Tag Archive | Actionable Research Design

Loyalty: The Foundation of Brand Perception

Customer loyalty is the business attribute with the strongest correlation to profitability. Loyalty lowers sales and acquisition costs per customer by amortizing these costs across a longer lifecycle, leading to extraordinary financial results. A 5% increase in customer loyalty can translate, depending on the industry, into a 25% to 85% increase in profits.


Many customer experience managers want to include a measure of loyalty in their customer experience research.  Indeed, loyalty, and how brand perception drives loyalty, is the foundation of any brand perception research.  However, loyalty is a behavior measured longitudinally over time, while surveys best measure customer attitudes.  As a result, researchers typically use attitudinal proxies for customer loyalty.  The two most common proxies are a “would recommend” question and a “customer advocacy” question.

  1. Would Recommend: A “would recommend” question is typically Net Promoter (NPS) or some other measure of the customer’s likelihood of referring the brand to a friend, relative or colleague.  It stands to reason that customers who are willing to refer others to a brand will remain loyal themselves.  Promoters’ willingness to put their reputations on the line is founded on a feeling of loyalty and trust.
  2. Customer Advocacy: A customer advocacy question asks whether the customer agrees with the statement, “the brand cares about me, not just the bottom line.”  The concept of trust is perhaps more evident in customer advocacy.  Customers who agree with this statement trust the brand to do right by them, and not to subordinate their best interests to profits.  Customers who trust the brand to do the right thing are more likely to remain loyal.

We’ve seen some loyalty surveys (particularly those employing the NPS methodology) which ask only the loyalty proxy, with little or no other investigation.  We believe this is a bad practice for a number of reasons:

  1. Customer Experience: Customers who have taken the affirmative action of clicking on the survey want to give you their opinion (they want to participate), and based on their experience they expect a multiple-question survey.  Presenting them with a single rating scale risks alienating them: they may feel they did not get an appropriate opportunity to share their opinion, and ultimately that participating was not worth their time.  Some customers may even conclude the survey system is broken because it presented only one question, resulting in confusion.
  2. Actionable Research Results: A survey consisting of a single NPS rating will not yield any information from which to draw conclusions about how customers feel about the brand.  It will produce an average rating and the frequency of promoters and detractors, but no context in which to interpret the results.
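To make the “average rating plus promoter/detractor frequencies” point concrete, here is a minimal sketch of how an NPS is computed from raw 0–10 likelihood-to-recommend ratings (the ratings below are invented for illustration):

```python
def nps(ratings):
    """Net Promoter Score from 0-10 likelihood-to-recommend ratings.

    Promoters rate 9-10, detractors rate 0-6; NPS is the percentage
    of promoters minus the percentage of detractors (-100 to +100).
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# 5 promoters, 3 passives (7-8), and 2 detractors out of 10 responses
sample = [10, 9, 9, 10, 9, 8, 7, 8, 5, 3]
print(nps(sample))  # -> 30.0
```

Note that this single number is exactly the limitation described above: without companion questions, there is no context for interpreting why the score is what it is.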

Establishing and measuring loyalty proxies is an important first step in evaluating brand perception.  Additional areas of investigation should include identifying and comparing customer impressions of the brand to your desired brand personality, and evaluating customer engagement or wallet share.


In subsequent posts, we will address ways to measure brand personality and ways to measure engagement/wallet share.


Click Here For More Information About Kinesis' Research Services

Build Call to Action into Your Brand Perception Research


These days, post-transaction surveys are ubiquitous.  Brands large and small take advantage of internet-based survey technology to evaluate the customer experience at almost every touch point.  Similarly, loyalty proxy methodologies such as Net Promoter (NPS) are very much in vogue.  However, many NPS surveys are fielded in a post-transaction context (potentially exposing the research to sampling bias, as only customers who have recently conducted a transaction are heard from), and are not designed in a manner that gives managers appropriate information upon which to act.

At their core, loyalty proxies are brand perception research – not transactional.  We believe it is a best practice to define the sample frame as the entire customer base, as opposed to customers who have recently interacted with the brand.  Ultimately, these surveys are image and perception research of the brand across the entire customer base.

Happily, this research offers an excellent opportunity to gather customer perceptions of the brand, compare them to your desired brand image, and measure engagement or wallet share.  An excellent way to accomplish this is a survey instrument divided into three parts:

  • Loyalty Proxy: consisting of the NPS rating (or some other appropriate measure) and one or two follow-up questions to explore why the customer gave the rating they did.
  • Image Perception: consisting of three or four questions to determine how customers perceive the brand.
  • Engagement/Wallet Share: consisting of three or four questions to determine if the customer considers the brand their primary provider, and to gauge share of wallet of various financial products and services across the brand and its competitors.

This research plan will not only yield an NPS; it will provide insight into why customers assigned the ratings they did, evaluate the extent to which the customer base’s impressions of the brand match your desired brand image, and identify how the brand is perceived by promoters and detractors. It will also yield valuable insight into share of wallet, and how wallet share differs between promoters and detractors.

Such a survey need not be long: the above objectives can be accomplished with 10 to 12 questions, and the survey will probably take less than five minutes for the customer to complete.

In subsequent posts, we will explore each of these three parts of the survey in more detail.


Click Here For More Information About Kinesis' Research Services

Taking Action on Mystery Shop Results


Previously we examined best practices in mystery shop sample planning.

Call to Action Analysis

A best practice in mystery shop design is to build in call to action elements designed to identify key sales and service behaviors which correlate to a desired customer experience outcome.  This Key Driver Analysis determines the relationship between specific behaviors and a desired outcome.  For most brands and industries, the desired outcomes are purchase intent or return intent (customer loyalty).  This approach helps brands identify and reinforce sales and service behaviors which drive sales or loyalty – behaviors that matter.


Earlier we suggested anticipating the analysis in questionnaire design as a mystery shop best practice.  Here is how the three main design elements discussed provide input into call to action analysis.

How: Shoppers are asked how, had they been an actual customer, the experience would have influenced their return intent.  Cross-tabulating responses by positive and negative return intent identifies how the responses of mystery shoppers who reported a positive influence on return intent vary from those who reported a negative influence.  This yields a ranking of the importance of each behavior by the strength of its relationship to return intent.

Why: Paired with this rating is a follow-up question asking why the shopper rated their return intent as they did.  The responses to this question are grouped into similar themes and cross-tabulated by the return intent rating described above.  This analysis produces a qualitative determination of which sales and service practices drive return intent.

What: The final step in the analysis is identifying which behaviors have the highest potential for ROI in terms of driving return intent.  This is achieved by comparing the importance of each behavior (as defined above) to its performance (the frequency with which it is observed).  Mapping this comparison in a quadrant chart identifies behaviors with relatively high importance and low performance, which yield the highest potential for ROI in terms of driving return intent.
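The importance/performance comparison can be sketched as follows. All shop records and behavior names here are hypothetical, and importance is approximated as the gap in each behavior's observed frequency between shops reporting positive versus negative return intent:

```python
# Each record: (behaviors the shopper observed, positive return intent?)
shops = [
    ({"greeted", "asked_needs", "cross_sold"}, True),
    ({"greeted", "asked_needs"}, True),
    ({"greeted"}, False),
    ({"greeted", "cross_sold"}, True),
    (set(), False),
]
behaviors = ["greeted", "asked_needs", "cross_sold"]

def freq(behavior, subset):
    """Share of shops in `subset` where `behavior` was observed."""
    return sum(behavior in obs for obs, _ in subset) / len(subset)

positive = [s for s in shops if s[1]]
negative = [s for s in shops if not s[1]]

results = {}
for b in behaviors:
    performance = freq(b, shops)                        # how often it happens
    importance = freq(b, positive) - freq(b, negative)  # link to return intent
    results[b] = (importance, performance)
    quadrant = ("high importance / low performance -> best ROI potential"
                if importance >= 0.5 and performance < 0.75 else "other")
    print(f"{b}: importance={importance:.2f}, "
          f"performance={performance:.2f} ({quadrant})")
```

With these invented data, "asked_needs" and "cross_sold" land in the high-importance/low-performance quadrant: strongly linked to return intent but rarely performed, and therefore the best coaching targets.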



This analysis helps brands focus training, coaching, incentives, and other motivational tools directly on the sales and service behaviors that will produce the largest return on investment – behaviors that matter.

Taking Action

Part of Balanced Scorecard

A best practice in mystery shopping is to integrate customer experience metrics from both sides of the brand-customer interface as part of an incentive plan.  The exact nature of the compensation plan should depend on broader company culture and objectives.  In our experience, a best practice is a balanced scorecard approach which incorporates customer experience metrics along with financial, internal business process (cycle time, productivity, employee satisfaction, etc.), and innovation and learning metrics.

Within these four broad categories of measurement, Kinēsis recommends managers select the specific metrics (such as ROI, mystery shop scores, customer satisfaction, and cycle time) which will best measure performance relative to company goals. Discipline should be used, however: too many metrics can be difficult to absorb. Rather, a few metrics of key significance to the organization should be collected and tracked in a balanced scorecard.


Best in class mystery shop programs identify employees in need of coaching.  Event-triggered reports should identify employees who failed to perform targeted behaviors.  For example, if it is important for a brand to track cross- and up-selling attempts in a mystery shop, a Coaching Report should be designed to flag any employees who failed to cross- or up-sell.  Managers simply consult this report to identify which employees are in need of coaching with respect to these key behaviors – behaviors that matter.

Click here for the next installment in this series: Mystery Shop Program Launch.

Click Here for Mystery Shopping Best Practices


Best Practices in Bank Customer Experience Measurement Design: Customer Surveys

Post Transaction Surveys

Many banks conduct periodic customer satisfaction research to assess the opinions and experiences of their customer base. While this information can be useful, it tends to be very broad in scope, offering little practical information to the front line.  A best practice is a more targeted, event-driven approach: collecting feedback from customers about specific service encounters soon after the interaction occurs.

These surveys can be fielded using a variety of data collection methodologies, including e-mail, phone, point-of-sale invitation, web intercept, in-person intercept and even US mail.  E-mail, with its immediacy and relatively low cost, offers the most potential for return on investment.  Historically, there have been legitimate concerns about the representativeness of samples selected via e-mail.  However, as banks collect e-mail addresses for a growing share of their customers, there is less concern about sample selection bias.

The process for fielding such surveys is fairly simple.  On a daily basis, a data file (in research parlance, “sample”) is generated containing the customers who have completed a service interaction in any channel.  This data file should be deduped, cleaned against a do-not-contact list, and cleaned against customers who have been surveyed recently (typically within three months, depending on the channel).  If survey invitations were then sent to everyone in the file, the bank would quickly exhaust the sample, potentially running out of eligible customers for future surveys.  To avoid this, a target number of completed surveys should be set per business unit, and a random selection process employed to select just enough customers to reach this target without surveying every customer. [1]
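A minimal sketch of such a daily selection step follows. The function, its parameters, and the simple pacing rule are illustrative assumptions only; the algorithm described in the footnote also factors in observed response rates and completions to date.

```python
import random

def select_daily_sample(transactions, do_not_contact, recently_surveyed,
                        remaining_quota, expected_response_rate,
                        remaining_days):
    """Illustrative daily sample selection for a post-transaction survey.

    1. Dedupe the day's transacting customers.
    2. Drop do-not-contact and recently surveyed customers.
    3. Randomly invite just enough customers to stay on pace for the
       quota, given the expected response rate.
    """
    eligible = [c for c in dict.fromkeys(transactions)  # dedupe, keep order
                if c not in do_not_contact and c not in recently_surveyed]
    completes_needed_today = remaining_quota / max(remaining_days, 1)
    invites = min(len(eligible),
                  round(completes_needed_today / expected_response_rate))
    return random.sample(eligible, invites)

# 100 completes still needed over 20 days at a 25% response rate
# -> invite about 20 eligible customers today
batch = select_daily_sample(
    transactions=[f"cust{i}" for i in range(500)],
    do_not_contact={"cust3"},
    recently_surveyed={"cust7", "cust11"},
    remaining_quota=100,
    expected_response_rate=0.25,
    remaining_days=20)
print(len(batch))  # -> 20
```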

So what are some of the purposes for which banks use these surveys?  Generally, they fall into a few broad categories:

Post-Transaction: Teller & Contact Center: Post-transaction surveys are event-driven, where a transaction or service interaction determines if the customer is selected for a survey, targeting specific customers shortly after a service interaction.  As the name implies, the purpose of this type of survey is to measure satisfaction with a specific transaction.

New Account & On-Boarding:  New account surveys measure satisfaction with the account opening process, as well as determine the reasons behind new customers’ selection of the bank for a new deposit account or loan – providing valuable insight into new customer identification and acquisition.

Closed Account Surveys:  Closed account surveys identify sources of run-off or churn to provide insight into improving customer retention.

Call to Action

Research without a call to action may be informative, but it is not very useful.  Call to action elements should be built into the research design, providing a road map to maximize the ROI of customer experience measurement.

Finally, post-transaction surveys support other behavioral research tools.  Properly designed surveys yield insight into customer expectations, creating a learning feedback loop: customer expectations inform service standards, which are in turn measured through observational research such as mystery shopping.



[1] Kinesis uses an algorithm which factors in the targeted quota, response rate, remaining days in the month and number of surveys completed to select just enough customers to reach the quota without exhausting the sample.

Click Here For More Information About Kinesis' Bank CX Research Services

The 5 Service Dimensions All Customers Care About

Reprinted with permission from Chris Arlen, of Service Performance.


Not All Dimensions Are Equal

All dimensions are important to customers, but some more than others.

Service providers need to know which are which to avoid majoring in minors. At the same time they can’t focus on only one dimension and let the others suffer.

SERVQUAL research showed dimensions’ importance to each other by asking customers to assign 100 points across all five dimensions.*

Here’s their relative importance to customers, from the original SERVQUAL research: reliability 32 points, responsiveness 22, assurance 19, empathy 16, and tangibles 11 (out of 100).


What’s this mean for service providers?

#1 Just Do It

RELIABILITY: Do what you say you’re going to do when you said you were going to do it.

Customers want to count on their providers. They value that reliability. Don’t providers yearn to find out what customers value? This is it. It’s three times more important to be reliable than to have shiny new equipment or flashy uniforms.

That doesn’t mean you can have ragged uniforms as long as you’re reliable. Service providers have to do both. But providers’ first and best efforts are better spent making service reliable.

Whether it’s periodics on schedule, on-site response within Service Level Agreements (SLAs), or Work Orders completed on time.

#2 Do It Now

RESPONSIVENESS: Respond quickly, promptly, rapidly, immediately, instantly.

Waiting a day to return a call or email doesn’t make it. Even if customers are chronically slow in getting back to providers, responsiveness is more than 1/5th of their service quality assessment.

Service providers benefit by establishing internal SLAs for things like returning phone calls, emails and responding on-site. Whether it’s 30 minutes, 4 hours, or 24 hours, it’s important customers feel providers are responsive to their requests. Not just emergencies, but everyday responses too.


Call centers typically track caller wait times. Service providers can track response times. And their attainment of SLAs or other Key Performance Indicators (KPIs) of responsiveness. This is great performance data to present to customers in Departmental Performance Reviews.

#3 Know What You’re Doing

ASSURANCE: Service providers are expected to be the experts of the service they’re delivering. It’s a given.

SERVQUAL research showed it’s important to communicate that expertise to customers. If a service provider is highly skilled, but customers don’t see that, their confidence in that provider will be lower. And their assessment of that provider’s service quality will be lower.


Service providers must communicate their expertise and competencies – before they do the work. This can be done in many ways that are repeatedly seen by customers, such as:

  • Display industry certifications on patches, badges or buttons worn by employees
  • Include certification logos on emails, letters & reports
  • Put certifications into posters, newsletters & handouts

By communicating competencies, providers can help manage customer expectations. And influence their service quality assessment in advance.

#4 Care about Customers as much as the Service

EMPATHY: Services can be performed completely to specifications. Yet customers may not feel provider employees care about them during delivery. And this hurts customers’ assessments of providers’ service quality.

For example, a day porter efficiently cleans up a spill in a lobby. However, during the clean-up the porter doesn’t smile, make eye contact, or ask the customer if there is anything else they can do. In this hypothetical the provider’s service was performed fully. But the customer didn’t feel the provider employee cared. And it’s not necessarily the employee’s fault. They may not know how they’re being judged. They may be overwhelmed, inadequately trained, or disinterested.


How providers deliver service can be as important as what they deliver. Provider employees should be trained in how to interact with customers and their end-users. Even a brief session during initial orientation helps.  Anything to help them understand their impact on customers’ assessment of service quality.

#5 Look Sharp

TANGIBLES: Even though this is the least important dimension, appearance matters. Just not as much as the other dimensions.

Service providers will still want to make certain their employees’ appearance, uniforms, equipment, and on-site work areas (closets, service offices, etc.) look good. The danger is for providers to make everything look sharp, and then fall short on RELIABILITY or RESPONSIVENESS.

At the End of the Day

Customers’ assessments include expectations and perceptions across all five SERVQUAL dimensions. Service providers need to work on all five, but emphasize them in order of importance. If sacrifices must be made, use these dimensions as a guide for which ones to rework.

Also, providers can use the SERVQUAL dimensions to determine specific customer and site needs. By asking questions around these dimensions, providers can learn how they play out at a particular location or bid opportunity. Which dimensions are you majoring in?

* For a description of the SERVQUAL methodology, see the following post: SERVQUAL Model: A Multi-Item Tool for Comparing Customer Perceptions vs. Expectations

Click Here For More Information About Kinesis' Research Services

SERVQUAL Model: A Multi-Item Tool for Comparing Customer Perceptions vs. Expectations


Looking for a tried and true model to understand your service quality?

The SERVQUAL model is an empirical model that has been around for nearly 30 years. While not new, it is a foundation of many of the service quality and customer experience concepts in use today. It is a gap model, designed to measure the gaps between customer perceptions and customer expectations.

SERVQUAL describes the customer experience in terms of five dimensions:

1. TANGIBLES – Appearance of physical facilities, equipment, personnel, and communication materials
2. RELIABILITY – Ability to perform the promised service dependably and accurately
3. RESPONSIVENESS – Willingness to help customers and provide prompt service
4. ASSURANCE – Knowledge and courtesy of employees and their ability to convey trust and confidence
5. EMPATHY – Caring, individualized attention the firm provides its customers

Each of these five dimensions is measured using a survey instrument consisting of individual attributes which roll up into each dimension.

For example, each of the five dimensions may consist of the following individual attributes:

TANGIBLES
• Appearance/cleanliness of physical facilities
• Appearance/cleanliness of personnel
• Appearance/cleanliness of communication/marketing materials
• Appearance/cleanliness of equipment

RELIABILITY
• Perform services as promised/right the first time
• Perform services on time
• Follow customer’s instructions
• Show interest in solving problems

RESPONSIVENESS
• Telephone calls/other inquiries answered promptly
• Willingness to help/answer questions
• Problems resolved quickly

ASSURANCE
• Knowledgeable employees/job knowledge
• Employees instill confidence in customer
• Employee efficiency
• Employee recommendations
• Questioning to understand needs

EMPATHY
• Interest in helping
• Individualized/personal attention
• Ease of understanding/use understandable terms
• Understand my needs/recommending products to best fit my needs
• The employees have my best interests at heart

Call to Action

Research without a call to action may be informative, but not very useful. By measuring both customer perceptions and expectations, SERVQUAL gives managers the ability to prioritize investments in the customer experience based not only on their performance, but performance relative to customer expectations.

The first step in taking action on SERVQUAL results is to calculate a Gap Score by simply subtracting the expectation rating from the perception rating for each attribute (Gap Score = Perception – Expectation). This step alone will give you a basis for ranking each attribute based on its gap between customer perceptions and expectations.
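For example (attribute names and ratings below are invented, on an assumed 7-point scale):

```python
# Gap Score per attribute: perception rating minus expectation rating.
perceptions  = {"performs services on time": 5.9,
                "prompt service": 6.3,
                "individual attention": 5.1}
expectations = {"performs services on time": 6.5,
                "prompt service": 6.1,
                "individual attention": 6.0}

gap_scores = {attr: perceptions[attr] - expectations[attr]
              for attr in perceptions}

# Rank attributes from largest shortfall (most negative gap) upward
for attr, gap in sorted(gap_scores.items(), key=lambda kv: kv[1]):
    print(f"{attr}: {gap:+.1f}")
```

With these invented ratings, "individual attention" (-0.9) ranks as the largest gap between perceptions and expectations, while "prompt service" (+0.2) actually exceeds expectations.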

Service Quality Score

In addition to ranking service attributes, the Gap Scores can be used to calculate a Service Quality Score based on the relative importance assigned by customers to each of the five service quality dimensions.

The first step in calculating a Service Quality Score is to average the Gap Score of each attribute within each dimension. This will give you the Gap Score for each dimension (GSD). Averaging the dimension Gap Scores will yield an Unweighted Service Quality Score.

From this unweighted score it is a three step process to calculate a Weighted Service Quality Score.

First, determine importance weights by asking customers to allocate a fixed number of points (typically 100) across each of the five dimensions based on how important the dimension is to them. This point allocation will yield a weight for each dimension based on its importance.

The second step is to multiply the Gap Score for each dimension (GSD) by its importance weight. The final step is to simply sum this product across all five dimensions; this will yield a Weighted Service Quality Score.
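Putting these steps together (all dimension Gap Scores and importance weights below are invented; the weights simply represent a 100-point allocation across the five dimensions):

```python
# Dimension Gap Scores (GSD): each is the average of its attributes' gaps.
dimension_gaps = {"tangibles": -0.2, "reliability": -0.9,
                  "responsiveness": -0.5, "assurance": -0.3,
                  "empathy": -0.6}

# Importance weights from the customers' 100-point allocation
weights = {"tangibles": 11, "reliability": 32, "responsiveness": 22,
           "assurance": 19, "empathy": 16}
assert sum(weights.values()) == 100

# Unweighted score: simple average of the dimension Gap Scores
unweighted = sum(dimension_gaps.values()) / len(dimension_gaps)

# Weighted score: multiply each GSD by its weight, then sum
weighted = sum(dimension_gaps[d] * weights[d] / 100 for d in dimension_gaps)

print(f"Unweighted Service Quality Score: {unweighted:.2f}")  # -> -0.50
print(f"Weighted Service Quality Score:   {weighted:.2f}")    # -> -0.57
```

Because Gap Scores are typically negative (perceptions fall short of expectations), a less negative weighted score indicates better service quality; here the weighted score is worse than the unweighted one because the heavily weighted reliability dimension has the largest gap.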

Click here for a more detailed step-by-step description of score calculation.

What does all this mean?  See the following post for discussion of the implications of SERVQUAL for customer experience managers: The 5 Service Dimensions All Customers Care About.

Keys to Customer Experience Research Success: The Professional Art of Questionnaire Design

In a previous post we discussed the importance of research objectives in program design. A natural progression of this subject is using research objectives to design a successful questionnaire.

All too often, I find clients who have gone online, found a questionnaire, and implemented it into a survey process, in effect handing research design over to an anonymous author on the Internet who has given no consideration to their specific needs. Inexperience with both the art and science of questionnaire design causes them to miss out on building a research tool customized to their specific needs.

While questionnaire design is a professional skill fraught with many perils for the inexperienced, the following process will eliminate some common mistakes.

First, define research objectives. Do not skip this step. Defining research objectives prior to making any other decisions about the program is by far the most effective way to make sure your program stays on track, on budget, and produces results that drive business success. See the previous post regarding research objectives. Once a set of objectives has been defined, questionnaire design naturally falls out of the process: simply write a survey question for each objective.

For example, consider the following objective set:

1. Determine the level of customer satisfaction and provide a reference point for other satisfaction-based analysis.
2. Identify which service attributes drive satisfaction and which investments yield the greatest improvement in customer satisfaction.
3. Identify moments of truth where the danger of customer attrition is highest.
4. Track changes in customer satisfaction over time.

For each objective, write a survey question. For the first objective (overall satisfaction), write an overall satisfaction question. For objective #2 (attribute satisfaction), develop a list of service attributes and measure satisfaction relative to each. Continue the process for each objective for which a survey question can be written.

Question order is important and the placement of every question should be considered to avoid introducing bias into the survey as a result of question order. Generally, we like to place overall satisfaction questions early in the survey to avoid biasing the results with later attribute questions.

Similarly, question phrasing needs to be carefully considered to avoid biasing the responses. Keep phrasing neutral to avoid swaying respondents one way or the other. Sure, there is a temptation to use overly positive language with your customers, but this really is a bad practice.

Finally, anticipate the analysis. As you write the questionnaire, consider how the results will be reported and analyzed. Anticipating the analysis will make sure the survey instrument captures the data needed for the desired analysis.

Research design is a professional art. If you are not sure what you are doing, seek a professional to help you rather than field poor research with a do-it-yourself tool.

Click Here For More Information About Kinesis' Research Services

Keys to Customer Experience Research Success – Start with the Objectives

How do you make research actionable?

With the advent of do-it-yourself survey tools, there is a trend away from professional research design processes. One can search online for a questionnaire, grab it off the internet, and field it on the cheap with a do-it-yourself survey tool, with no consideration of the research needs at hand. This, in effect, hands research design over to an anonymous author on the Internet who has given no consideration to your specific needs.

Defining research objectives prior to making any other decisions about the program is by far the most effective way to make sure your program stays on track, on budget, and produces results that drive business success. It sounds very simple, and for the most part it is. However, I’m always surprised, when I ask potential clients what their research objectives are, by how many cannot list anything other than the most general of objectives.

Defining research objectives is a fairly simple process. First, generate a list of everything you want to know as a result of the research.

For example, you may come up with the following list:

  • How satisfied are our customers?
  • Which key factors drive satisfaction among our customers?
  • What are the causes of customer dissatisfaction?
  • How can we measure customer satisfaction over time?
  • Which business processes can most improve customer satisfaction and increase our financial returns?
  • How can we measure the relationships between customer satisfaction, profitability and purchase or retention behavior?
  • How can we evaluate our customers’ referral activity?
  • How can we measure the value of our customers’ purchasing behavior?
  • How can we identify changes in our customers’ purchasing or referral behaviors over time?

Note, these are not survey questions; they are questions to which you want answers. This is what you want to know.

Once you have developed a list of what you want to know as a result of the research, the next step is to map each of your questions to a specific research objective. For each question, write a clear objective starting with a verb such as determine, identify, track, link, or measure. Starting with verbs is an excellent way to make sure you can take action on the results.

So, continuing with the example, the above list of questions may map into the following set of research objectives:

  • How satisfied are our customers?
    Objective: Determine the level of customer satisfaction and provide a reference point for other satisfaction-based analysis.
  • Which key factors drive satisfaction among our customers?
    Objective: Identify which service attributes drive satisfaction and which investments yield the greatest improvement in customer satisfaction.
  • What are the causes of customer dissatisfaction?
    Objective: Identify moments of truth where the danger of customer attrition is highest.
  • How can we measure customer satisfaction over time?
    Objectives: Track changes in customer satisfaction over time. Determine if changes in satisfaction are significant.
  • Which business processes can most improve customer satisfaction and increase our financial returns?
    Objectives: Link key service attributes to specific business processes. Identify which processes maximize ROI.
  • How can we measure the relationships between customer satisfaction, profitability and purchase or retention behavior?
    Objective: Identify the relationship between customer satisfaction and customer behaviors such as retention, purchase behavior, and likelihood of referral, which drive profitability.
  • How can we evaluate our customers’ referral activity?
    Objective: Conduct loyalty-based customer satisfaction analysis, using net promoters and customer advocacy as measurements for customer loyalty.
  • How can we measure the value of our customers’ purchasing behavior?
    Objectives: Determine the relationship between customer satisfaction and purchase behavior. Identify the ROI of satisfaction-based management. Make a financial case to all stakeholders (management, employees and shareholders) that the customer experience impacts financial performance.
  • How can we identify changes in our customers’ purchasing or referral behaviors over time?
    Objectives: Continue to track the relationship between satisfaction and purchase behavior. Analyze satisfaction by customer segments and the financial value of each individual segment.

Once a clear set of research objectives is defined, you now have a road map to inform all subsequent decisions about sample frame, data collection, survey instrument, and analysis plan. Each of these issues deserves more attention than can be addressed in what is intended to be a brief blog post. In future posts, we will look into each of these issues individually.

Click Here For More Information About Kinesis' Research Services