Archive by Author | Eric Larse

Two Questions… Lots of Insight: Turn Customer Experience Observations into Valuable Insight

Customer experience researchers are constantly looking for ways to make their observations relevant – to turn observations into insight. Observing a behavior or service attribute is one thing; linking observations to insights that will maximize return on customer experience investments is another. One way to make that link is to explore the influence of customer experience attributes on key business outcomes such as loyalty and wallet share.

The first step is to gather impressions of a broad array of customer experience attributes, such as: accuracy, cycle time, willingness to help, etc. Make this list as long as you reasonably can without making the survey instrument too long.

For additional thoughts on survey length and research design, see the following blog posts:

Click Here: Maximizing Response Rates: Get Respondents to Complete the Survey

Click Here: Keys to Customer Experience Research Success – Start with the Objectives

The next step is to explore the relationship of these service attributes to loyalty and share of wallet.

Two Questions – Lots of Insight

In our experience, two questions – a “would recommend” question and a primary provider question – yield valuable insight into the relative importance of specific service attributes. Together, these two questions form the foundation of a two-dimensional analytical framework for determining the relative importance of specific service attributes in driving loyalty and wallet share.

Loyalty Question

Research has determined the business attribute with the highest correlation to profitability is customer loyalty. Customer loyalty lowers sales and acquisition costs per customer by amortizing these costs across a longer lifetime – leading to some extraordinary financial results.

Measuring customer loyalty in the context of a survey is difficult. Surveys best measure attitudes and perceptions; loyalty is a behavior, not an attitude. Survey researchers therefore need a proxy measurement for customer loyalty. A researcher might measure customer tenure under the assumption that length of relationship predicts loyalty. However, tenure is a poor proxy: a long-tenured customer may still leave, and a new customer may be very satisfied and highly loyal.

A likelihood-of-referral question measures the customer’s willingness to refer the brand to a friend, relative or colleague. It stands to reason that customers willing to refer others will remain loyal themselves: promoters of a brand are putting their reputation on the line, and that willingness is founded on a feeling of loyalty and trust.

Any likelihood-of-referral question can be used, depending on the specifics of your objectives. Kinesis has had success with both a “yes/no” question – “Would you refer us to a friend, relative or colleague?” – and the Net Promoter methodology. The Net Promoter methodology asks for a rating of the likelihood of referral to a friend, relative or colleague on an 11-point (0-10) scale. Customers who assign a rating of 0-6 are labeled “detractors,” those who assign 7 or 8 are identified as “passive referrers,” and those who assign 9 or 10 are labeled “promoters.”
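As a sketch, the Net Promoter classification and score described above can be expressed in a few lines of Python (function names are ours, for illustration):

```python
def classify_nps(rating: int) -> str:
    """Classify a 0-10 likelihood-to-refer rating per the Net Promoter convention."""
    if not 0 <= rating <= 10:
        raise ValueError("rating must be on the 0-10 scale")
    if rating <= 6:
        return "detractor"
    if rating <= 8:
        return "passive referrer"
    return "promoter"

def net_promoter_score(ratings) -> int:
    """NPS = (% promoters - % detractors), expressed as a whole number."""
    labels = [classify_nps(r) for r in ratings]
    n = len(labels)
    promoters = labels.count("promoter") / n
    detractors = labels.count("detractor") / n
    return round(100 * (promoters - detractors))
```

For example, `net_promoter_score([9, 10, 7, 3, 8, 10])` has three promoters and one detractor among six respondents, yielding an NPS of 33.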

In our experience, asking the “yes/no” question (“Would you refer us to a friend, relative or colleague?”) produces starker differences in this two-dimensional analysis, making it easier to identify which service attributes have a stronger relationship to both loyalty and engagement.

Engagement Question

Similar to loyalty, customer engagement, or wallet share, can lead to some extraordinary financial results. Wallet share is the percentage of a customer’s spending in a category that goes to a given brand over a specific period of time.

Also similar to loyalty, measuring engagement or wallet share in a survey is difficult. There are several approaches. One is to use a formula such as the Wallet Allocation Rule, which uses customer responses to rank brands in the same product category and employs that rank to estimate wallet share. Another is to use a simple yes/no primary provider question.
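For reference, the published Wallet Allocation Rule converts a brand’s rank among the brands a customer uses into an estimated share of wallet. A minimal sketch:

```python
def wallet_allocation(rank: int, num_brands: int) -> float:
    """Wallet Allocation Rule: estimated share of wallet for a brand
    ranked `rank` (1 = most preferred) among the `num_brands` brands
    a customer uses in the category."""
    return (1 - rank / (num_brands + 1)) * (2 / num_brands)
```

A useful property of the rule is that the estimates across all ranks sum to 100% of the customer’s category spending; for a customer using two brands, the first-ranked brand is estimated at two-thirds of wallet and the second at one-third.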

Methodology

Using these loyalty and engagement measures together, we can now cross tabulate the array of service attribute ratings by these two measures. This cross tabulation groups responses into four segments: 1) Engaged & Loyal, 2) Disengaged yet Loyal, 3) Engaged yet Disloyal, and 4) Disengaged & Disloyal. Comparing responses across these four segments yields insight into how each segment experiences its relationship with the brand.

These four segments represent: the ideal, opportunity, recovery and attrition.
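The cross tabulation can be sketched as follows (the field names are hypothetical, for illustration):

```python
from collections import defaultdict

def segment(loyal: bool, engaged: bool) -> str:
    """Place a respondent into one of the four loyalty/engagement segments."""
    if loyal and engaged:
        return "Engaged & Loyal"       # ideal
    if loyal:
        return "Disengaged yet Loyal"  # opportunity
    if engaged:
        return "Engaged yet Disloyal"  # recovery
    return "Disengaged & Disloyal"     # attrition

def mean_ratings_by_segment(responses):
    """Average each service-attribute rating within each segment.

    Each response is a dict with hypothetical keys: 'would_recommend' (bool),
    'primary_provider' (bool), and 'ratings' (attribute name -> score).
    """
    totals = defaultdict(lambda: defaultdict(lambda: [0.0, 0]))
    for r in responses:
        seg = segment(r["would_recommend"], r["primary_provider"])
        for attr, score in r["ratings"].items():
            totals[seg][attr][0] += score
            totals[seg][attr][1] += 1
    return {seg: {a: s / n for a, (s, n) in attrs.items()}
            for seg, attrs in totals.items()}
```

Comparing the per-segment averages returned here is the basis for the segment comparisons described below.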


Ideal – Engaged Promoters: This is the ideal customer segment. These customers rely on the brand for the majority of their in-category purchases and represent lower attrition risk. In short, they are perfectly positioned to provide the financial benefits of customer loyalty. Comparing attribute ratings for customers in this segment to the others will identify areas of strength, and at the same time identify attributes that are less important in driving this ideal state, informing future decisions on investment in these attributes.

Opportunity – Disengaged Promoter: This customer segment represents an opportunity. These customers like the brand and are willing to put their reputation at risk for it. However, there is an opportunity for cross-sell to improve share of wallet. Comparing attribute ratings of the opportunity segment to the ideal will identify service attributes with the highest potential for ROI in terms of driving wallet share.

Recovery – Engaged Detractor: This segment represents significant risk. The combination of above-average share of wallet and low willingness to put their reputation on the line is flat out dangerous, as it puts profitable share of wallet at risk. Comparing attribute ratings of customers in the recovery segment to both the ideal and the opportunity segments will identify the service attributes with the highest potential for ROI in terms of improving loyalty.

Attrition – Disengaged Detractor: This segment represents the greatest risk of attrition. With no willingness to put their reputation on the line, and little commitment to placing share of wallet with the brand, retention strategies may come too late for these customers. Additionally, they are most likely unprofitable. Comparing the service attribute ratings of customers in this segment to the others will identify elements of the customer experience that drive attrition and may warrant increased investment, as well as elements that do not appear to matter much in driving runoff and may not warrant investment.

By making comparisons across each of these segments, researchers give managers a basis to make informed decisions about which service attributes have the strongest relationship to loyalty and engagement – thus identifying which behaviors have the highest potential for ROI in terms of driving customer loyalty and engagement. This two-dimensional analysis is one way to turn customer experience observations into insight.

Click Here For More Information About Kinesis' Research Services

It’s Personal: Drivers of Member Purchase Intent as a Result of the Branch Experience

What do potential members want as a result of a visit to your branch?  Or, perhaps more importantly, what drives potential members to want to open an account as a result of a visit to your branch?

To answer these questions, Kinesis conducted research into the efficacy of the branch sales process and identified several service and sales attributes that drive member purchase intent. In our observational research of 100 credit union new account presentations, mystery shoppers were asked to describe what impressed them positively as a result of the visit to the credit union. Excluding the branch atmosphere, the most common themes contained in these open-ended comments were:

  • Interest in Helping/Personalized Service/Attention to Needs,
  • Professional/Courteous/Not Pushy,
  • Friendly Employees, and
  • Product Knowledge of/Confidence in the Representative.

To understand the relative importance of these behaviors with respect to purchase intent, shoppers were asked to rate their purchase intent as a result of the presentation. Kinesis used this rating to split the shops into two groups (positive and negative purchase intent) and compared the results of the two groups. Of these positive impressions, three have strong relationships to purchase intent: they are present with greater frequency in shops with positive purchase intent than in those with negative purchase intent.

 

Reason for Positive Purchase Intent (Relative Frequency, Positive to Negative Purchase Intent)

  • Product Knowledge of/Confidence in the Representative: 2.7
  • Interest in Helping/Personalized Service/Attention to Needs: 2.5
  • Friendly Employee: 2.3

The representative’s product knowledge was cited 2.7 times more frequently in shops with positive purchase intent compared to shops with negative purchase intent.  Similarly, attention to needs and personalized service was present 2.5 times more frequently in shops with positive purchase intent compared to those with negative purchase intent.  Finally, shoppers were 2.3 times more likely to cite the friendliness of branch personnel in shops with positive purchase intent relative to negative.
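The relative-frequency comparison behind these figures can be sketched like this (the theme coding below is a toy stand-in for the actual coded comments):

```python
def relative_frequency(theme, positive_shops, negative_shops):
    """Ratio of a theme's mention rate in positive-intent shops to its
    mention rate in negative-intent shops. Each shop is represented as
    the set of themes coded from its open-ended comments."""
    pos_rate = sum(theme in shop for shop in positive_shops) / len(positive_shops)
    neg_rate = sum(theme in shop for shop in negative_shops) / len(negative_shops)
    return pos_rate / neg_rate
```

A ratio of 2.7, for example, means the theme appears 2.7 times more often among positive-intent shops than among negative-intent shops.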

Member experiences that focus on personal attention, interest in helping, personalized service, and professional, courteous and friendly encounters drive purchase intent as a result of a visit to a credit union.

Click here for more information on Kinesis' Credit Union Member Experience Research

Maximizing Response Rates: Get Respondents to Complete the Survey

Previously we discussed ways researchers can increase the likelihood of respondents opening an email survey invitation. Additionally, in a subsequent post we discussed how to get respondents to actually click on the survey link and participate in the survey.

This post is a discussion of ways to keep respondents motivated to complete the entire survey once they have entered it.


At its core, the key to completion rates is an easy-to-complete, credible survey that delivers on all promises made in the invitation email.

Survey Length

From time to time various service providers of mine send me a survey invite, and I’m often surprised how many of them impose upon me, their customer, a 30- or 40-minute survey.  First of all, they never disclose the survey length in advance, which communicates a complete lack of respect for my time.  Beyond being an imposition, it is also bad research practice.  Ten minutes into the survey I’m either pressed for time, frustrated, or just plain bored, and I either exit the survey or frivolously complete the remaining questions without any real consideration of my opinions – completely undermining the reliability of my responses.

We recommend keeping surveys short: no more than 10 to 12 minutes – and in some cases, such as a post-transaction survey, 5 minutes.

If research objectives require a long survey, rather than imposing a ridiculously long survey on your customers and producing frivolous results, break a 30-40 minute survey into two, or better yet three, parts, fielding each part to a portion of your targeted sample frame.

Additionally, skip logic should be employed to avoid asking questions that are not applicable to a given respondent, thus decreasing the volume of questions you present to the end customer.

Finally, include a progress bar to keep respondents informed of how far along they are on the survey.

Ease of Completion

The last thing you want respondents feeling as they complete your survey is frustration.  First of all, if the sample frame is made up of your customers, the primary thing you are accomplishing is upsetting your customers and damaging your brand.  You are also creating bad research results, because frustrated respondents are not in the proper mindset to give you well-considered answers.

Frustration can come from awkward design, question wording, poor programming, and insufficient response choices.  Survey wording and vocabulary should be simple and jargon-free, response choices should be comprehensive, and of course the survey programming should be thoroughly proofed and pretested.

Pretesting is a process in which the survey is prefielded to a portion of the sample frame to test how respondents react to it.  Significant portions of the questionnaire left unanswered, or a high volume of “other” or “none of the above” responses, could signal trouble with the survey design.

Convenience

Survey completion should be easy.  Survey entry should work across a variety of platforms, browsers and devices.

Additionally, respondents should be allowed to take the survey on their own time – even leaving the survey, saving their answers to date, and reentering when it is more convenient for them.

Click Here For More Information About Kinesis' Research Services

Maximizing Response Rates: Get Respondents to Start the Survey


It is incumbent on researchers fielding self-administered surveys to maximize response rates.  This reduces the potential for response bias, where the survey results may not accurately reflect the opinions of the entire population of targeted respondents. Previously we discussed ways researchers can increase the likelihood of respondents opening an email survey invitation.  This post addresses how to get respondents to actually click on the survey link and participate in the survey.

Make the Invite Easy to Read

Don’t bury the lead.   The opening sentence must capture the respondent’s attention and motivate the investment of effort to read the invitation.   Keep in mind most people skim emails.  Keep the text of the invitation short, paying close attention to paragraph length.  The email should be easy to skim.

Give a Reward

Offering respondents a reward for participation is an excellent way to motivate participation.  Tangible incentives like a drawing, coupon, or gift card, if appropriate and within the budget, are excellent tools to maximize response rates.   However, rewards do not necessarily need to be tangible.  Intangible rewards can also prove to be excellent motivators.  People, particularly customers who have a relationship with the brand, want to be helpful.  Expressing the importance of their opinion, and communicating how the brand will use the survey to improve its offering to customers like the respondent, is an excellent way to leverage intangible rewards to motivate participation.

Survey Length

Intangible rewards are often sufficient if the respondent’s cost to participate in the survey is minimal.  Perhaps the largest cost to a potential respondent is the time required to complete the survey.  Give them an accurate estimate of the time it takes to complete the survey – and keep it short.  We recommend no more than 10 minutes, preferably five to six.   If the research objectives require a longer survey instrument, break the survey into two or three shorter surveys and deliver them separately to different targeted respondents.  Do not field excessively long surveys or misquote the estimated time to complete the survey – it is rude to impose on your respondents, disastrous to your participation rates, and unethical to misstate the survey length.  As with getting participants to open the email, credibility plays a critical role in getting them to click on the survey.

Credibility

One of the best ways to garner credibility with the survey invite is to assure participants of confidentiality.  This is particularly important for customer surveys, where customers commonly interact with employees.  For example, a community bank, where customers may interact with bank employees not only in the context of banking but broadly in the community, must assure customers that their survey responses will be kept strictly confidential.

Personalizing the survey with appropriate merge fields is also an excellent way to garner credibility.

Survey Entry

Make it as easy as possible for the participant to enter the survey.  Program a link to the survey, and make sure it is both visible and presented early in the email.  Again, most people skim the contents of emails, so place the link in the top third of the email and make it clear that it is the link to enter the survey.

In designing survey invitations, remember to write short, concise, easy-to-read emails that both appeal to respondents’ reward centers (tangible or intangible) and credibly estimate the short time required to complete the survey.  This approach will help maximize response rates and avoid some of the pitfalls of response bias. Click here for the next post in this series, on prompting respondents to complete the survey.

Click Here For More Information About Kinesis' Research Services

Maximizing Response Rates: Get Respondents to Open the Email


In fielding surveys researchers must be aware of the concepts of error and bias and how they can creep into a survey, potentially making the survey unreliable in ways that cannot be predicted.  For example, one source of error is statistical error, where not enough respondents are surveyed to make the results statistically reliable.  Another source of error, or bias, is response bias caused by not having a random sample of the targeted population.

A key concept of survey research is randomness of sample selection – in essence, giving each member of the targeted survey population an equal chance of being surveyed.  Response rates are important in self-administered surveys (such as email surveys), because it is possible that non-responders (people who for some reason choose not to complete the survey) have different opinions than those who choose to participate.  As a result, the sample is not purely random.  If non-responders differ somehow from responders, the survey results will reflect that difference – thus biasing the research.   It is therefore incumbent on researchers to maximize the survey response rate.

Say for example, a bank wants to survey customers after they have completed an online transaction.  If customers who love the bank’s online capabilities are more likely to participate in the survey than those who do not like the bank’s online capabilities, the survey results will be biased in favor of a positive view of the bank’s online offering because it is not a representative sample – it is skewed toward customers with the positive view.

It is, again, incumbent on researchers to maximize the response rate as much as possible in self-administered email surveys.

Pre-Survey Awareness Campaign

One strategy to maximize response rates (particularly in a customer survey context) is a pre-survey awareness campaign to make customers aware of the coming survey and encourage participation.  Such a campaign can take many forms, such as:

  1. A letter on company letterhead, signed by a high-profile senior executive
  2. Statement or billing inserts
  3. An email in advance of the survey

Each of these is an excellent way to introduce the survey to respondents and maximize response rates.

Email Content

The next steps in maximizing response rates in email surveys are passing SPAM filter tests and prompting the recipient to open the email.  The core concept here is credibility – make the email appear as credible as possible.

The first step to maintaining credibility is to avoid getting caught in SPAM filters; the email content should avoid the following:

  • Words common in SPAM, like “win” or “free”
  • The use of ALL CAPS
  • Excessive punctuation
  • Special characters

Additionally, do not spoof emails.  Spoofing is the forgery of an email header to make it appear it originated from a source other than the actual source.   Send emails from your server.  (Sometimes Kinesis has clients who want the email to appear to originate from their server.  In such cases, we receive the sample from the client, append a unique identifier and send it back to the client to actually be mailed from their servers.)

Perhaps the best strategy to maintain the credibility of the email invite is to conform to Marketing Research Association (MRA) guidelines.  These guidelines include:

  • Clearly identify the researcher, including phone number, mailing address, and email
  • Post privacy policies online and include a link to these policies
  • Include a link to opt out of future emails

From and Subject Lines

Both the FROM and SUBJECT lines are critical in getting the respondent to open the email.

The FROM line has to be as credible and recognizable as possible, avoiding vague or generic terms like “feedback”.  For surveys of customers, the company name or the name of a recognizable representative of the company should be used.

The SUBJECT line must communicate the subject of the email in a credible way that will make the respondent want to open the email.  Keep it brief (50 characters or less), clear, concise and credible.

Survey Timing

Not only is the content of the email important, but the timing of delivery plays a role in response rates.  In our experience sending the survey invitation in the middle of the week (Tuesday – Thursday) during daytime hours increases the likelihood that the email will be noticed by the respondent.

Reminder Emails

After an appropriate amount of time (typically, for our clients, 5 days), send reminder emails politely reminding the respondent of the previous invitation and highlighting the importance of their opinion.  One, perhaps two, reminder emails are appropriate, but do not send more than two.

To maximize the probability that respondents will receive and open the email, focus on sending a credible email mid-week – one that will pass SPAM filter tests and carries an accurate, credible and compelling SUBJECT and FROM line – and send polite reminder emails to non-responders.

But opening the email is just the first step.  The actual objective is to get respondents to enter and complete the survey. Click here for the next post in this series, on prompting respondents to participate in the survey.

 

Click Here For More Information About Kinesis' Research Services

What is a Good Mystery Shop Score?

This is perhaps the most common question I’m asked by clients, old and new alike.  There seems to be a common misconception among both clients and providers that any one number, say 90%, is a “good” mystery shop score.  Beware of anyone who glibly throws out a specific number without any consideration of the context.  Like most things in life, the answer to this question is much more complex.

Most mystery shopping programs score shops according to some scoring methodology to distill the mystery shop results down into a single number.  Scoring methodologies vary, but the most common methodology is to assign points earned for each behavior measured and divide the total points earned by the total points possible, yielding a percent of points earned relative to points possible.

It amazes me how many mystery shop providers I’ve heard pull a number out of the air – again, say 90% – and quote it as the benchmark with no thought given to the context of the question.  The reality is much more complex.   Context is key.  What constitutes a good score varies dramatically from client to client and program to program, and is based on the specifics of the evaluation.  One program may be an easy evaluation, measuring easy behaviors, where a score must be near perfect to be considered “good”; others may be difficult evaluations measuring more difficult behaviors, in which case a good score will be well below perfect.  The best practice in determining what constitutes a good mystery shop score is to consider the distribution of your shop scores as a whole, determine the percentile rank of each shop (the proportion of shops that fall below a given score), and set an appropriate cutoff point.   For example, if management decides the 60th percentile is an appropriate standard (6 out of 10 shops fall below it), and a shop score of 86% is in the 60th percentile, then a shop score of 86% is a “good” shop score.
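The percentile-rank approach in the example can be sketched in a few lines (function names are ours, for illustration):

```python
def percentile_rank(score, all_scores):
    """Percent of shops scoring strictly below the given score."""
    return 100 * sum(s < score for s in all_scores) / len(all_scores)

def is_good_score(score, all_scores, standard=60):
    """A score is 'good' if it meets or beats the chosen percentile standard."""
    return percentile_rank(score, all_scores) >= standard
```

With ten shop scores of which six fall below 86%, `percentile_rank(86, scores)` returns 60.0, so 86% clears a 60th-percentile standard.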


Again, context is key.  What constitutes a good score varies dramatically from client-to-client, program-to-program and is based on the specifics of the evaluation.  Discount the advice of anyone in the industry who glibly throws out a number stating it’s a good score, without considering the context.

Click Here for Mystery Shopping Best Practices

 

 


 

 


Best Practices in Mystery Shop Scoring


Most mystery shopping programs score shops according to some scoring methodology to distill the mystery shop results down into a single number.  Scoring methodologies vary, but the most common methodology is to assign points earned for each behavior measured and divide the total points earned by the total points possible, yielding a percentage of points earned relative to points possible.

Drive Desired Behaviors

Some behaviors are more important than others.  As a result, best-in-class mystery shop programs weight behaviors by assigning more points possible to those deemed more important.  Best practices in mystery shop weighting begin by assigning weights according to management standards (behaviors deemed more important, such as certain sales or customer education behaviors), or according to the strength of their relationship to a desired outcome such as purchase intent or loyalty.  Service behaviors with stronger relationships to the desired outcome receive more weight.
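A weighted scoring scheme along these lines might look like the following sketch (the behavior names and point values are hypothetical):

```python
def weighted_shop_score(observed, weights):
    """Shop score = weighted points earned / weighted points possible.

    `observed` maps each measured behavior to True/False (was it observed?);
    `weights` maps each behavior to its points possible.
    """
    earned = sum(weights[b] for b, seen in observed.items() if seen)
    possible = sum(weights[b] for b in observed)
    return earned / possible
```

For example, if a hypothetical "close" behavior is weighted 20 points against 5 for the greeting, missing the close costs far more of the score than missing the greeting.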

One tool to identify behavioral relationships to desired outcomes is Key Driver Analysis.  See the attached post for a discussion of Key Driver Analysis.

Don’t Average Averages

It is a best practice in mystery shopping to calculate the score for each business unit independently (employee, store, region, division, corporate), rather than averaging business unit scores together (such as calculating a region’s score by averaging the individual store or shop scores for the region).  Averaging averages yields a mathematically correct score only if all shops have exactly the same points possible and all business units have exactly the same number of shops.  If the shop has any skip logic, where some questions are answered only if specific conditions exist, different shops will have different points possible, and it is a mistake to average them together: doing so gives shops with skipped questions disproportionate weight.  Rather, points earned should be divided by points possible for each business unit independently.   Just remember – don’t average averages!
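A small numeric sketch makes the pitfall concrete (the shop data are made up for illustration):

```python
def pooled_score(shops):
    """Correct: pool points earned and points possible before dividing."""
    return sum(s["earned"] for s in shops) / sum(s["possible"] for s in shops)

def average_of_averages(shops):
    """Incorrect when points possible varies across shops (shown for contrast)."""
    return sum(s["earned"] / s["possible"] for s in shops) / len(shops)

# Shop B hit skip logic, so it has only 10 points possible.
shops = [{"earned": 90, "possible": 100}, {"earned": 5, "possible": 10}]
```

Here the pooled score is 95/110, about 86%, while the average of averages is (90% + 50%) / 2 = 70%: the 10-point shop has pulled the result down with weight far out of proportion to its points possible.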

Work Toward a Distribution of Shops

When all is said and done, the product of a best in class mystery shop scoring methodology will produce a distribution of shop scores, particularly on the low end of the distribution.


Mystery shop programs with tight distributions around the average shop score offer little opportunity to identify areas for improvement.  All the shops end up being very similar to each other, making it difficult to identify problem areas and improve employee behaviors.  Distributions with scores skewed to the low end make it much easier to identify poor shops and offer opportunities for improvement via employee coaching.  If questionnaire design and scoring create scores with tight distributions, consider a redesign.

Most mystery shopping programs score shops according to some scoring methodology.  In designing a mystery shop score methodology best in class programs focus on driving desired behaviors, do not average averages and work toward a distribution of shops.


 

 

Click Here for Mystery Shopping Best Practices

 

 


 

 


Mystery Shop Key Driver Analysis

Best in class mystery shop programs provide managers a means of applying coaching, training, incentives, and other motivational tools directly on the sales and service behaviors that matter most in terms of driving the desired customer experience outcome.  One tool to identify which sales and service behaviors are most important is Key Driver Analysis.

Key Driver Analysis determines the relationship between specific behaviors and a desired outcome.  For most brands and industries, the desired outcomes are purchase intent or return intent (customer loyalty).  This analytical tool helps managers identify and reinforce sales and service behaviors which drive sales or loyalty – behaviors that matter.

As with all research, it is a best practice to anticipate the analysis when designing a mystery shop program.  In anticipating the analytical needs of Key Driver Analysis identify what specific desired outcome you want from the customer as a result of the experience.

  • Do you want the customer to purchase something?
  • Do you want them to return for another purchase?

The answers to these questions anticipate the analysis and build in the mechanisms for Key Driver Analysis to identify which behaviors are most important in driving the desired outcome – which behaviors matter most.

Next, ask shoppers how the experience would have influenced their return intent had they been actual customers.  Group shops by positive and negative return intent to identify how mystery shops with positive return intent differ from those with negative return intent.  This yields a ranking of the importance of each behavior by the strength of its relationship to return intent.

Additionally, pair the return intent rating with a follow-up question asking why the shopper rated their return intent as they did.  The responses to this question should be classified into similar themes and grouped by the return intent rating described above.  This analysis produces a qualitative determination of which sales and service practices drive return intent.

Finally, Key Driver Analysis provides a means to identify which behaviors have the highest potential for return on investment in terms of driving return intent.  This is achieved by comparing the importance of each behavior (as defined above) with its performance (the frequency with which it is observed).  Mapping this comparison in a quadrant chart provides a means of identifying behaviors with relatively high importance and low performance – the behaviors with the highest potential for return on investment in terms of driving return intent.
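The quadrant mapping can be sketched as follows (the cutoffs are up to the analyst; the medians of importance and performance are a common choice):

```python
def quadrant(importance, performance, imp_cutoff, perf_cutoff):
    """Classify a behavior on the importance/performance quadrant chart."""
    if importance >= imp_cutoff and performance < perf_cutoff:
        return "high importance / low performance"   # highest ROI potential
    if importance >= imp_cutoff:
        return "high importance / high performance"  # maintain
    if performance < perf_cutoff:
        return "low importance / low performance"    # low priority
    return "low importance / high performance"       # possible over-investment
```

Behaviors landing in the high-importance/low-performance quadrant are the ones with the highest potential for return on investment.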


 

Behaviors with the highest potential for return on investment can then be inserted into a feedback loop into the mystery shop scoring methodology by informing decisions with respect to weighting specific mystery shop questions, assigning more weight to behaviors with the highest potential for return on investment.

Employing Key Driver Analysis gives managers a means of focusing training, coaching, incentives, and other motivational tools directly on the sales and service behaviors that will produce the largest return on investment. See the attached post for further discussion of mystery shop scoring.

Click Here for Mystery Shopping Best Practices

 

 


Guest Return Intent Drivers in the Restaurant Experience


The business attribute with the highest correlation to profitability is loyalty.  Loyalty lowers sales and acquisition costs per guest by amortizing these costs across a longer lifetime – leading to some extraordinary financial results.  However, the question remains, what service attributes drive guest loyalty?

To answer this question from a behavioral standpoint, Kinesis conducted 400 restaurant mystery shops to determine which service attributes and behaviors drive guest return intent.  Forty-six service attributes were observed across five dimensions of the guest experience: environment, food & beverage quality, greeting, personal attention, and timing of food and beverage delivery.

The measured attributes group into these five dimensions as follows:

Environment

  • Table maintained appropriately throughout the meal
  • Dining room clean, organized and well maintained
  • Exterior building, parking lot, walkways and planters clean
  • Silverware, china, glassware and your table clean
  • Men’s restroom clean and stocked with supplies
  • Lighting fixtures clean and working
  • Lobby area clean and organized
  • Menus clean and in good condition
  • Women’s restroom clean and stocked with supplies
  • Bar clean, organized and well maintained
  • Room temperature level comfortable

Food & Beverage Quality

  • Entrees presented attractively, and tasted good
  • Appetizer presented attractively, and tasted good
  • Drinks attractively presented, and tasted good
  • Dessert presented attractively, and tasted good

Greeting

  • Greeting made feel welcome
  • Prompt greeting
  • Staff members greet with a friendly smile as being seated
  • Thanked and encouraged to visit again
  • Ask specific questions about your experience upon leaving

Service: Personal Attention

  • Server attentive and prompt throughout the meal
  • Server discuss the beverage menu, suggest an item or ask about your preferences
  • Server discuss the appetizer menu, suggest an item or ask about your preferences
  • Server promote daily specials
  • Host carry on a conversation as being seated
  • Server discuss the beverage menu or ask about preferences
  • Receive appetizer in a timely manner
  • Manager engage guests in conversation
  • Server smiling and enjoying time with all the guests
  • Acknowledged by a server in a timely manner
  • Attentive to needs while in the bar area
  • Server discuss the dessert menu, suggest an item or ask about preferences
  • Server knowledgeable and confident when responding to questions
  • Manager present
  • Server try and entice you to order their favorite appetizer(s)
  • Resolve any service, food or beverage issues

Service: Timing

  • Food and beverage service timed well
  • Receive entrees in a timely manner
  • Receive starter soup/ salad in a timely manner
  • Receive appetizer in a timely manner
  • Manager engage guests in conversation
  • Receive drink orders in timely manner
  • Receive dessert in a timely manner
  • Cashed out in a timely manner
  • Acknowledge and get order in a timely manner
  • Drinks arrive in a timely manner

 

To determine the relationship of these attributes to return intent, Kinesis asked mystery shoppers whether, based on the guest experience, they intended to return to the restaurant.  This return-intent response was then used as the grouping variable for a cross-tabulation to determine the frequency with which each behavior was observed in shops with positive return intent versus shops with negative return intent.

The results of this cross-tabulation are as follows:

Environment Positive Return Intent Negative Return Intent
Table maintained appropriately throughout the meal 96% 73%
Dining room clean, organized and well maintained 100% 90%
Exterior building, parking lot, walkways and planters clean 100% 94%
Silverware, china, glassware and your table clean 98% 94%
Men’s restroom clean and stocked with supplies 96% 91%
Lighting fixtures clean and working 98% 95%
Lobby area clean and organized 100% 98%
Menus clean and in good condition 99% 97%
Women’s restroom clean and stocked with supplies 93% 92%
Bar clean, organized and well maintained 99% 98%
Room temperature level comfortable 95% 94%

 

Food & Beverage Quality Positive Return Intent Negative Return Intent
Entrees presented attractively, and tasted good 98% 58%
Appetizer presented attractively, and tasted good 97% 88%
Drinks attractively presented, and tasted good 97% 88%
Dessert presented attractively, and tasted good 97% 97%

 

Greeting Positive Return Intent Negative Return Intent
Thanked and encouraged to visit again 95% 63%
Ask specific questions about your experience upon leaving 35% 8%
Greeting made feel welcome 93% 70%
Prompt greeting 93% 76%
Staff members greet with a friendly smile as being seated 60% 44%

 

Service: Personal Attention Positive Return Intent Negative Return Intent
Server attentive and prompt throughout the meal 93% 45%
Server discuss the beverage menu, suggest an item or ask about your preferences 80% 43%
Server discuss the appetizer menu, suggest an item or ask about your preferences 68% 33%
Server promote daily specials 64% 33%
Host carry on a conversation as being seated 70% 41%
Server discuss the beverage menu or ask about preferences 63% 35%
Manager engage guests in conversation 73% 47%
Server smiling and enjoying time with all the guests 97% 73%
Acknowledged by a server in a timely manner 96% 73%
Attentive to needs while in the bar area 92% 72%
Server discuss the dessert menu, suggest an item or ask about preferences 81% 65%
Acknowledge and get order in a timely manner 94% 80%
Server knowledgeable and confident when responding to questions 98% 86%
Manager present 43% 31%
Server try and entice you to order their favorite appetizer(s) 64% 57%
Resolve any service, food or beverage issues 53% 67%

 

Service: Timing Positive Return Intent Negative Return Intent
Food and beverage service timed well 92% 51%
Receive entrees in a timely manner 92% 59%
Server promote daily specials 64% 33%
Receive starter soup/ salad in a timely manner 91% 60%
Receive appetizer in a timely manner 93% 65%
Receive drink orders in timely manner 96% 73%
Receive dessert in a timely manner 95% 77%
Cashed out in a timely manner 97% 81%
Acknowledge and get order in a timely manner 94% 80%
Drinks arrive in a timely manner 98% 85%

 

Putting all this together, the ten attributes with the largest difference between shops with positive and negative return intent are:

Top 10 Attributes
Dimension Attributes Difference
Service: Personal Attention Server attentive and prompt throughout the meal 48%
Service: Timing Food and beverage service timed well 41%
Food & Beverage Quality Entrees presented attractively, and tasted good 40%
Service: Personal Attention Server discuss the beverage menu, suggest an item or ask about your preferences 37%
Service: Personal Attention Server discuss the appetizer menu, suggest an item or ask about your preferences 35%
Service: Timing Receive entrees in a timely manner 33%
Service: Personal Attention Server promote daily specials 31%
Greeting Thanked and encouraged to visit again 31%
Service: Timing Receive starter soup/ salad in a timely manner 30%
Service: Personal Attention Host carry on a conversation as being seated 29%

 

Of the ten attributes with the strongest relationship to return intent, five belong to the personal attention dimension and three to the timing dimension; the food & beverage quality and greeting dimensions round out the top ten with one attribute each.
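The difference column is simple arithmetic over the cross-tabulation: subtract the negative-intent frequency from the positive-intent frequency and sort. A minimal sketch using three rows taken from the tables above:

```python
# (positive %, negative %) pairs taken from the cross-tabulation tables.
rates = {
    "Server attentive and prompt throughout the meal": (93, 45),
    "Food and beverage service timed well":            (92, 51),
    "Entrees presented attractively, and tasted good": (98, 58),
}

# Difference = positive-intent frequency minus negative-intent frequency.
differences = {attr: pos - neg for attr, (pos, neg) in rates.items()}
ranked = sorted(differences.items(), key=lambda kv: kv[1], reverse=True)
# The largest difference (48 points) matches the top of the table above.
```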

Turning our attention from specific attributes to broader dimensions, the following chart shows the average difference between shops with positive return intent and shops with negative return intent:

[Chart: Return Intent Gaps]

After the timing of food and beverage delivery, the dimensions of the guest experience with the strongest relationship to return intent are the greeting and personal attention, followed by food & beverage quality and the physical environment.
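The dimension-level comparison is the average of the attribute gaps within each dimension. Using the greeting and environment gaps implied by the cross-tabulation tables above (positive % minus negative %):

```python
# Attribute-level gaps (positive % minus negative %) grouped by dimension,
# computed from the cross-tabulation tables above.
gaps_by_dimension = {
    "Greeting":    [32, 27, 23, 17, 16],
    "Environment": [23, 10, 6, 4, 5, 3, 2, 2, 1, 1, 1],
}

# Average gap per dimension; larger averages indicate dimensions with a
# stronger overall relationship to return intent.
avg_gap = {dim: sum(g) / len(g) for dim, g in gaps_by_dimension.items()}
```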

Click Here For More Information About Kinesis' Research Services

Beyond Loyalty: Engagement/Wallet Share

In two earlier posts we discussed 1) including a loyalty proxy as part of your brand perception research and 2) determining the extent to which your desired brand image is reflected in how customers actually perceive the brand.

Now, we expand the research plan to move beyond loyalty and brand perception, and investigate customer engagement, or the extent to which customers are engaged with the brand through share of wallet.

Wallet Share

Comparison to Competitors

The first step in measuring customer engagement is capturing top-of-mind comparisons of your brand to competitors.  There are many ways to achieve this research objective; perhaps the simplest is to present the respondent with a list of statements regarding the 4 Ps of marketing (product, promotion, place, and price) and ask customers to rate your performance relative to your competitors.

The statements you present to customers should be customized around your industry and business objectives, but they may look something like the following:

  • Their products and services are competitive
  • They are more customer-centric
  • They have lower fees
  • They have better service
  • They offer better technology
  • They are more nimble and flexible
  • They are more innovative

Similar to the brand perception statements discussed in the previous post, these competitor comparison statements can be used to determine which service attributes have the most ROI potential in terms of driving loyalty, again by cross-tabulating responses against the customer loyalty proxy.
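The same gap logic from the mystery shop analysis applies here: cross-tabulate agreement with each comparison statement against the loyalty proxy and look for statements that separate loyal from non-loyal customers. The respondents and statement names below are hypothetical:

```python
# Hypothetical survey responses: loyalty-proxy segment plus 1/0
# agreement with each competitor-comparison statement.
respondents = [
    {"loyal": True,  "better_service": 1, "lower_fees": 0},
    {"loyal": True,  "better_service": 1, "lower_fees": 1},
    {"loyal": False, "better_service": 0, "lower_fees": 1},
    {"loyal": False, "better_service": 0, "lower_fees": 0},
]

STATEMENTS = ["better_service", "lower_fees"]

def agreement_rate(loyal, statement):
    """Share of the loyalty segment agreeing with the statement."""
    group = [r for r in respondents if r["loyal"] is loyal]
    return sum(r[statement] for r in group) / len(group)

# A large loyal-vs-non-loyal gap flags the statement as a likely
# loyalty driver with high ROI potential.
gap = {s: agreement_rate(True, s) - agreement_rate(False, s)
       for s in STATEMENTS}
```

In this toy data, "better_service" separates the segments completely while "lower_fees" does not, so service quality would be the attribute to prioritize.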

Primary Provider

The next step in researching customer engagement is to determine whether the customer considers you or another brand their primary provider.  This is easily achieved by presenting the customer with a list of providers, including yourself, and asking which of these they consider their primary provider.

Finally, we can tie industry comparisons to primary provider by asking customers why they consider their selection their primary provider.  This is best accomplished by presenting the same list of competitor comparison statements above and asking which of these statements are the reasons for their choice of primary provider.

As before, cross-tabulating responses to these statements against the loyalty segments reveals which of these attributes have the most ROI potential in terms of driving loyalty.

 

Click Here For More Information About Kinesis' Research Services