This post considers the implications of cross-channel consistency for customer experience researchers. The first implication is that researchers must investigate service delivery consistency at its source.
The range of choices available to customers in the 21st century is incredible. Gone are the Henry Ford days when you, as he put it, “could have any color you want as long as it’s black.” Modern customers have an array of choices available to them, not only in brands but in delivery channels. Modern brands must serve customers in the channel of their choice, be it online, mobile, contact center, or in person. As customer choice expands, cross-channel consistency becomes more and more important.
The problem for customer experience researchers is that this channel expansion requires a broad toolbox of research techniques, as different channels require unique systems and processes appropriate to the channel. Systems and processes for online channels are different from those for in-person channels. These differences often lead to the siloing of channels, which may make individual channels more efficient but runs the risk of inconsistencies in the customer experience from one channel to another.
Customers, however, don’t look at a brand as a collection of siloed channels. Customers do not care about organizational charts. They expect a consistent customer experience regardless of channel. Customers expect cross-channel consistency.
If senior management has defined the customer experience organization-wide, the researcher’s role in coordinating research tools is much easier. If management has not defined the customer experience organization-wide, the researcher’s role is nearly impossible.
The first step in defining the customer experience organization-wide is writing a clear customer experience mission statement that communicates how customers should experience the brand and how management wants customers to feel as a result of the experience. Next, the customer experience should be defined in terms of the broad dimensions and specific attributes which constitute the desired customer experience and emotional reaction to the brand.
For illustration, let’s consider the following example:
A bank may define its customer experience with four broad dimensions:
- Relationship Building
- Sales Process
- Product Knowledge
- Customer Knowledge
Next, the customer experience leadership of this bank must define each of these broad dimensions in terms of specific attributes which combine to make up the dimensions. For example, each of the above four dimensions may be defined by the following attributes:
| Dimension | Attributes |
| --- | --- |
| Relationship Building | Establish trust; commitment to customer needs; perceived as trusted advisor |
| Sales Process | Referral to appropriate partner |
| Product Knowledge | Understanding of a range of products; understand features and benefits; explain benefits in ways that are meaningful to customers |
| Customer Knowledge | Needs analysis |
Once each of the above dimensions has been defined in terms of specific attributes, the next step in translating the customer experience definition to action is to define a set of empirical behaviors which support each attribute.
For example, establishing trust is an attribute of relationship building.
Relationship Building –> Establish Trust
Under this example, a set of behaviors is defined which are designed to establish trust. For example, these behaviors may be:
- Maintain eye contact
- Speak clearly
- Maintain smile
- Thank for business
- Ask “What else may we assist you with today?”
- Encourage future business
Now, each of these six behaviors is mapped across each channel. So, for example, this bank may map these behaviors across channels as follows:
Behaviors Which Support Establishing Trust:
| New Accounts | Teller | Contact Center |
| --- | --- | --- |
| Maintain eye contact | Maintain eye contact | — |
| Speak clearly | Speak clearly | Speak clearly |
| Maintain smile | Maintain smile | Sound as if smiling through the phone |
| Thank for business | Thank for business | Thank for business |
| Ask “What else may we assist you with today?” | Ask “What else may we assist you with today?” | Ask “What else may we assist you with today?” |
| Encourage future business | Encourage future business | Encourage future business |
Repeating this process of mapping behaviors to each of the attributes will produce a complete list of employee behaviors appropriate to each channel in support of management’s broader customer experience objectives.
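The behavior-to-channel mapping above can be represented as a simple data structure. The sketch below is purely illustrative: the channel and behavior names are taken from the bank example, and the function name is an assumption, not part of any established tool.

```python
# Illustrative sketch: one attribute ("Establish Trust") mapped across channels,
# using the bank example's channel and behavior names.
CHANNEL_BEHAVIORS = {
    "Establish Trust": {
        "New Accounts": ["Maintain eye contact", "Speak clearly", "Maintain smile",
                         "Thank for business", "Encourage future business"],
        "Teller": ["Maintain eye contact", "Speak clearly", "Maintain smile",
                   "Thank for business", "Encourage future business"],
        "Contact Center": ["Speak clearly", "Sound as if smiling through the phone",
                           "Thank for business", "Encourage future business"],
    },
}

def behaviors_for_channel(channel):
    """Collect every behavior mapped to a channel, across all attributes."""
    return sorted({behavior
                   for channels in CHANNEL_BEHAVIORS.values()
                   for behavior in channels.get(channel, [])})

print(behaviors_for_channel("Contact Center"))
```

Repeating the structure for each attribute yields the complete channel-appropriate behavior list described above.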
Customer experience researchers are constantly looking for ways to make their observations relevant, to turn observations into insight. Observing a behavior or service attribute is one thing; linking observations to insight that will maximize return on customer experience investments is another. One way to link customer experience observations to insights that will drive ROI is to explore the influence of customer experience attributes on key business outcomes such as loyalty and wallet share.
The first step is to gather impressions of a broad array of customer experience attributes, such as accuracy, cycle time, and willingness to help. Make this list as long as you reasonably can without making the survey instrument too long.
For additional thoughts on survey length and research design, see our earlier posts on those topics.
The next step is to explore the relationship of these service attributes to loyalty and share of wallet.
Two Questions – Lots of Insight
In our experience, two questions, a “would recommend” question and a “primary provider” question, yield valuable insight into the relative importance of specific service attributes. Together, these two questions form the foundation of a two-dimensional analytical framework for determining the relative importance of specific service attributes in driving loyalty and wallet share.
Research has determined that the business attribute with the highest correlation to profitability is customer loyalty. Customer loyalty lowers sales and acquisition costs per customer by amortizing these costs across a longer lifetime – leading to some extraordinary financial results.
Measuring customer loyalty in the context of a survey is difficult. Surveys best measure attitudes and perceptions; loyalty is a behavior, not an attitude. Survey researchers therefore need a proxy measurement for customer loyalty. A researcher might measure customer tenure under the assumption that length of relationship predicts loyalty. However, tenure is a poor proxy: a long-tenured customer may still leave, and a new customer may be very satisfied and highly loyal.
Likelihood of referral measures the customer’s likelihood of referring a brand to a friend, relative, or colleague. It stands to reason that if customers are going to refer others to a brand, they will remain loyal as well, because customers who promote a brand are putting their reputation on the line. This willingness is founded on a feeling of loyalty and trust.
Any likelihood-of-referral question can be used, depending on the specifics of your objectives. Kinesis has had success with both a yes/no question, “Would you refer us to a friend, relative or colleague?” and the Net Promoter methodology. The Net Promoter methodology asks for a rating of the likelihood of referral to a friend, relative or colleague on an 11-point (0-10) scale. Customers with a rating of 0-6 are labeled “detractors,” those with ratings of 7 or 8 are identified as “passive referrers,” and those who assign a rating of 9 or 10 are labeled “promoters.”
In our experience, asking the yes/no question “Would you refer us to a friend, relative or colleague?” produces starker differences in this two-dimensional analysis, making it easier to identify which service attributes have a stronger relationship to both loyalty and engagement.
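The NPS thresholds described above reduce to a short classification rule. The sketch below is a minimal illustration of that rule and of the standard Net Promoter Score calculation (percent promoters minus percent detractors); the function names are our own.

```python
def nps_segment(rating):
    """Classify a 0-10 likelihood-to-recommend rating per the NPS thresholds:
    0-6 detractor, 7-8 passive, 9-10 promoter."""
    if not 0 <= rating <= 10:
        raise ValueError("NPS ratings run 0-10")
    if rating <= 6:
        return "detractor"
    if rating <= 8:
        return "passive"
    return "promoter"

def nps_score(ratings):
    """Net Promoter Score: % promoters minus % detractors."""
    segments = [nps_segment(r) for r in ratings]
    n = len(segments)
    return 100 * (segments.count("promoter") - segments.count("detractor")) / n

# Two promoters and two detractors out of five respondents -> score of 0.0
print(nps_score([10, 9, 8, 6, 3]))
```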
Similar to loyalty, customer engagement or wallet share can lead to some extraordinary financial results. Wallet share is the percentage of what a customer spends with a given brand over a specific period of time.
Also similar to loyalty, measuring engagement or wallet share in a survey is difficult. There are several approaches: one is a formula such as the Wallet Allocation Rule, which asks customers to rank the brands they use in a product category and uses each brand’s rank to estimate wallet share; another is a simple yes/no primary provider question.
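As a rough sketch of the rank-based approach, the Wallet Allocation Rule as commonly published estimates a brand’s share of wallet from its rank among the brands a customer uses. The formula and example values below reflect that published rule as we understand it, not a methodology specific to this post.

```python
def wallet_allocation(rank, num_brands):
    """Estimate share of wallet from a brand's preference rank, using the
    Wallet Allocation Rule as commonly stated: (1 - rank/(n+1)) * (2/n),
    where n is the number of brands the customer uses in the category."""
    if not 1 <= rank <= num_brands:
        raise ValueError("rank must be between 1 and num_brands")
    return (1 - rank / (num_brands + 1)) * (2 / num_brands)

# Illustrative: a customer ranks three banks; estimated shares sum to 100%.
shares = [wallet_allocation(r, 3) for r in (1, 2, 3)]
print([round(s, 3) for s in shares])
```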
Using these loyalty and engagement measures together, we can now cross-tabulate the array of service attribute ratings by these two measures. This cross-tabulation groups the responses into four segments: 1) Engaged & Loyal, 2) Disengaged yet Loyal, 3) Engaged yet Disloyal, 4) Disengaged & Disloyal. We can now compare the responses of these four segments to gain insight into how each segment experiences its relationship with the brand.
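A minimal sketch of this two-dimensional segmentation, assuming each respondent’s answers are reduced to yes/no flags for the primary-provider (engagement) and would-recommend (loyalty) questions; the function names are illustrative:

```python
# Map the two yes/no answers onto the four segments described above.
SEGMENTS = {
    (True, True): "Engaged & Loyal",
    (True, False): "Engaged yet Disloyal",
    (False, True): "Disengaged yet Loyal",
    (False, False): "Disengaged & Disloyal",
}

def segment(primary_provider, would_recommend):
    """Assign a respondent to one of the four loyalty/engagement segments."""
    return SEGMENTS[(primary_provider, would_recommend)]

def crosstab(responses):
    """Count respondents per segment.
    `responses` is an iterable of (primary_provider, would_recommend) booleans."""
    counts = {name: 0 for name in SEGMENTS.values()}
    for provider, recommend in responses:
        counts[segment(provider, recommend)] += 1
    return counts
```

Attribute ratings can then be compared across these segment labels, as described below.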
These four segments represent: the ideal, opportunity, recovery and attrition.
Ideal – Engaged Promoters: This is the ideal customer segment. These customers rely on the brand for the majority of their in-category purchases and represent a lower attrition risk. In short, they are perfectly positioned to provide the financial benefits of customer loyalty. Comparing attribute ratings for customers in this segment to the others will identify areas of strength, as well as attributes which are less important in driving this ideal state, informing future decisions on investment in these attributes.
Opportunity – Disengaged Promoters: This customer segment represents an opportunity. These customers like the brand and are willing to put their reputation on the line for it; however, there is an opportunity to cross-sell and improve share of wallet. Comparing attribute ratings of the opportunity segment to the ideal segment will identify service attributes with the highest ROI potential in terms of driving wallet share.
Recovery – Engaged Detractors: This segment represents significant risk. The combination of above-average share of wallet and unwillingness to put their reputation on the line is flat-out dangerous, as it puts profitable share of wallet at risk. Comparing attribute ratings of customers in the recovery segment to both the ideal and opportunity segments will identify the service attributes with the highest ROI potential in terms of improving loyalty.
Attrition – Disengaged Detractors: This segment represents the greatest risk of attrition. With no willingness to put their reputation on the line and little commitment to placing share of wallet with the brand, retention strategies may come too late for them; additionally, they are most likely unprofitable. Comparing the service attribute ratings of customers in this segment to the others will identify elements of the customer experience which drive attrition and may warrant increased investment, as well as elements that do not appear to matter much in driving runoff and may not warrant investment.
By making comparisons across each of these segments, researchers give managers a basis for informed decisions about which service attributes have the strongest relationship to loyalty and engagement, thus identifying which behaviors have the highest ROI potential in terms of driving customer loyalty and engagement. This two-dimensional analysis is one way to turn customer experience observations into insight.
Previously we discussed ways researchers can increase the likelihood of respondents opening an email survey invitation. Additionally, in a subsequent post we discussed how to get respondents to actually click on the survey link and participate in the survey.
This post is a discussion of ways to keep respondents motivated to complete the entire survey once they have entered it.
At its core, the key to completion rates is an easy-to-complete and credible survey that delivers on all promises made in the invitation email.
From time to time various service providers of mine send me a survey invite, and I’m often surprised how many of them impose upon me, their customer, a 30- or 40-minute survey. They never disclose the survey length in advance, which communicates a complete lack of respect for my time. In addition to being an imposition, it is also bad research practice: ten minutes into the survey I’m either pressed for time, frustrated, or just plain bored, and I either exit the survey or frivolously complete the remaining questions without any real consideration of my opinions, completely undermining the reliability of my responses.
We recommend keeping survey length short: no more than 10 to 12 minutes, and in some cases, such as a post-transaction survey, 5 minutes.
If research objectives require a long survey, rather than impose a ridiculously long survey on your customers and produce frivolous results, break a 30- to 40-minute survey into two, or better yet three, parts, fielding each part to a portion of your targeted sample frame.
Additionally, skip logic should be employed to avoid asking questions that are not applicable to a given respondent, thus decreasing the volume of questions you present to the end customer.
Finally, include a progress bar to keep respondents informed of how far along they are on the survey.
Ease of Completion
The last thing you want respondents feeling when they complete your survey is frustration. If the sample frame is made up of your customers, the primary thing you are accomplishing is upsetting your customers and damaging your brand. You are also creating bad research results, because frustrated respondents are not in the proper mindset to give well-considered answers.
Frustration can come from awkward design, question wording, poor programming, and insufficient response choices. Survey wording and vocabulary should be simple and jargon free, response choices should be comprehensive, and of course the survey programming should be thoroughly proofed and pretested.
Pretesting is a process where the survey is pre-fielded to a portion of the sample frame to test how respondents react to it. Significant portions of the questionnaire left unanswered, or a high volume of “other” or “none of the above” responses, can signal trouble with the survey design.
Survey completion should be easy. Survey entry should work across a variety of platforms, browsers, and devices.
Additionally, respondents should be allowed to take the survey on their own time, even leaving the survey while saving their answers to date and allowing reentry when it is more convenient for them.
It is incumbent on researchers fielding self-administered surveys to maximize response rates. This reduces the potential for response bias, where the survey results may not accurately reflect the opinions of the entire population of targeted respondents. Previously we discussed ways researchers can increase the likelihood of respondents opening an email survey invitation. This post addresses how to get respondents to actually click on the survey link and participate in the survey.
Make the Invite Easy to Read
Don’t bury the lead. The opening sentence must capture respondents’ attention and motivate the investment of effort required to read the invitation. Keep in mind most people skim emails. Keep the text of the invitation short, paying close attention to paragraph length. The email should be easy to skim.
Give a Reward
Offering respondents a reward for participation is an excellent way to motivate participation. Tangible incentives like a drawing, coupon, or gift card, if appropriate and within budget, are excellent tools to maximize response rates. However, rewards do not necessarily need to be tangible; intangible rewards can also prove to be excellent motivators. People, particularly customers who have a relationship with the brand, want to be helpful. Expressing the importance of their opinion, and communicating how the brand will use the survey to improve its offering to customers like the respondent, is an excellent way to leverage intangible rewards to motivate participation.
Intangible rewards are often sufficient if the respondent’s cost to participate in the survey is minimal. Perhaps the largest cost to a potential respondent is the time required to complete the survey. Give them an accurate estimate of that time, and keep it short. We recommend no more than 10 minutes, preferably five to six. If the research objectives require a longer survey instrument, break the survey into two or three shorter surveys and deliver them separately to different targeted respondents. Do not field excessively long surveys or misquote the estimated time to complete the survey: it is rude to impose on your respondents, disastrous to your participation rates, and unethical. As with getting participants to open the email, credibility plays a critical role in getting them to click on the survey.
One of the best ways to garner credibility with the survey invite is to assure the participant of confidentiality. This is particularly important for customer surveys, where customers commonly interact with employees. For example, a community bank, where customers may interact with bank employees not only in the context of banking but broadly in the community, must assure customers that their survey responses will be kept strictly confidential.
Personalizing the survey with appropriate merge fields is also an excellent way to garner credibility.
Make it as easy as possible for the participant to enter the survey. Program a link to the survey, and make sure it is both visible and presented early in the email. Again, most people skim the contents of emails, so place the link in the top third of the email and make it clear that it is a link to enter the survey.
In designing survey invitations, remember to write short, concise, easy-to-read emails that both leverage respondents’ reward centers (tangible or intangible) and credibly estimate the short time required to complete the survey. This approach will help maximize response rates and avoid some of the pitfalls of response bias. Click here for the next post in this series, on prompting respondents to complete the survey.
In fielding surveys researchers must be aware of the concepts of error and bias and how they can creep into a survey, potentially making the survey unreliable in ways that cannot be predicted. For example, one source of error is statistical error, where not enough respondents are surveyed to make the results statistically reliable. Another source of error, or bias, is response bias caused by not having a random sample of the targeted population.
A key concept of survey research is randomness of sample selection: in essence, giving each member of the targeted survey population an equal chance of being surveyed. Response rates are important in self-administered surveys (such as email surveys) because it is possible that non-responders (people who for some reason choose not to complete the survey) have different opinions than those who choose to participate. As a result, the survey is not purely random. If non-responders are somehow different from responders, the survey results will reflect that difference – thus biasing the research. It is therefore incumbent on researchers to maximize the survey response rate.
Say, for example, a bank wants to survey customers after they have completed an online transaction. If customers who love the bank’s online capabilities are more likely to participate in the survey than those who do not, the survey results will be biased in favor of a positive view of the bank’s online offering, because the sample is not representative – it is skewed toward customers with the positive view.
It is, again, incumbent on researchers to maximize the response rate as much as possible in self-administered email surveys.
Pre-Survey Awareness Campaign
One strategy to maximize response rates (particularly in a customer survey context) is a pre-survey awareness campaign to make customers aware of the coming survey and encourage participation. Such a campaign can take many forms, such as:
- Letter on company letterhead, signed by a high profile senior executive.
- Statement or billing inserts
- Email in advance of the survey
Each of these is an excellent way to introduce the survey to respondents and maximize response rates.
The next steps in maximizing response rates in email surveys are passing SPAM filters and prompting the recipient to open the email. The core concept here is credibility: make the email appear as credible as possible.
The first step to maintaining credibility is to avoid getting caught in SPAM filters. The email content should avoid the following:
- Words common in SPAM, like “win” or “free”
- The use of ALL CAPS
- Excessive punctuation
- Special characters
Additionally, do not spoof emails. Spoofing is the forgery of an email header to make it appear it originated from a source other than the actual source. Send emails from your server. (Sometimes Kinesis has clients who want the email to appear to originate from their server. In such cases, we receive the sample from the client, append a unique identifier and send it back to the client to actually be mailed from their servers.)
Perhaps the best strategy to maintain the credibility of the email invite is to conform to Marketing Research Association (MRA) guidelines. These guidelines include:
- Clearly identify the researcher, including phone number, mailing address, and email
- Post privacy policies online and include a link to these policies
- Include a link to opt out of future emails
From and Subject Lines
Both the FROM and SUBJECT lines are critical in getting the respondent to open the email.
The FROM line has to be as credible and recognizable as possible, avoiding vague or generic terms like “feedback.” For surveys of customers, use the company name or the name of a recognizable representative of the company.
The SUBJECT line must communicate the subject of the email in a credible way that will make the respondent want to open the email. Keep it brief (50 characters or less), clear, concise and credible.
Not only is the content of the email important, but the timing of delivery plays a role in response rates. In our experience sending the survey invitation in the middle of the week (Tuesday – Thursday) during daytime hours increases the likelihood that the email will be noticed by the respondent.
After an appropriate amount of time (typically five days for our clients), reminder emails should be sent, politely reminding the respondent of the previous invitation and highlighting the importance of their opinion. One, perhaps two, reminder emails are appropriate, but do not send more than two.
To maximize the probability that respondents will receive and open the email, focus on sending a credible email mid-week, one which will pass SPAM filters and contains accurate, credible, and compelling SUBJECT and FROM lines, and send polite reminder emails to non-responders.
But opening the email is just the first step. The actual objective is to get respondents to open and complete the survey. Click here for the next post in this series, on prompting respondents to participate in the survey.
Establishing and measuring loyalty proxies is important, but your brand perception research should not end there. Brand perception research should produce insight beyond loyalty. It should determine the extent to which customers’ impressions of the brand are aligned with your desired brand image. Additionally, perceptions of the brand among the most loyal and engaged customers should be compared to those of less loyal or engaged customers, to identify opportunities to improve perceptions of the brand among customers who are at risk of defection or not fully engaged.
In a subsequent post, we will address ways to measure engagement/wallet share.
The first step in measuring your brand perception is to define your desired brand. Ask yourself: if your brand were a person, what personality characteristics would you like your customers to describe you with? What adjectives would you want used to describe your brand?
In addition to describing your brand personality with adjectives, come up with a list of statements that describe your desired personality. For example, you may include statements such as:
- We are easy to do business with.
- We are knowledgeable.
- We are like a trusted friend.
- We are interested in customers as people, not just the bottom line.
- We are committed to the community.
We have now defined the brand in terms of personality adjectives and statements. Both will be used in designing the survey instrument.
The Survey Instrument
Unaided Top-of-Mind
The first step in the survey instrument is asking customers for their unaided top-of-mind perceptions of the brand. This will uncover the first thing that comes to customers’ minds about your brand, prior to any bias introduced by the research instrument itself. There are many ways to capture unaided top-of-mind impressions. We like a simple approach: ask the customer for the one word they would use to describe the company. This question will yield a list of adjectives that can be quantified by frequency and used to determine the extent to which customers’ top-of-mind impressions match the desired brand image.
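Quantifying the one-word responses by frequency is straightforward; a minimal sketch, in which the example words and function name are hypothetical:

```python
from collections import Counter

def top_of_mind_frequencies(responses):
    """Normalize one-word top-of-mind responses and count them by frequency."""
    words = [r.strip().lower() for r in responses if r.strip()]
    return Counter(words)

# Hypothetical responses; case and whitespace are normalized before counting.
freqs = top_of_mind_frequencies(["Trusted", "friendly", "trusted", "Slow"])
print(freqs.most_common(3))
```

The resulting frequency table can then be compared against the desired brand adjectives.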
After we have captured top-of-mind impressions of the brand, we recommend comparing brand perception to the desired brand identified in the definition exercise described above. This is a fairly simple process of presenting customers with your list of brand personality adjectives and asking which of these adjectives they would use to describe the company.
The next step in comparing the reality of brand perception to your branding goals is to ask customers to what extent they agree with each of the brand personality statements described above. As with the list of adjectives, this holds a mirror up to your desired image and measures the extent to which customers agree that you are perceived in the manner you want to be.
Identifying Attributes with the Most ROI Potential
The value of these brand perception statements goes beyond just evaluating if you live up to your brand. Used in conjunction with the loyalty proxies discussed in the previous post, they become tools to determine which of these brand personality attributes will yield the most ROI in terms of improving customer loyalty. This is achieved with a simple cross-tabulation of agreement with these statements by customer loyalty segment. For example, if NPS is used as the loyalty proxy, then we simply compare agreement to these statements from promoters to detractors to determine which attributes have the largest gaps between promoters and detractors. Those with the largest gaps have the most ROI potential in terms of customer loyalty.
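The cross-tabulation described above reduces to a simple gap calculation; here is a sketch, with hypothetical attributes and agreement rates:

```python
def attribute_gaps(promoter_agree, detractor_agree):
    """Rank brand-personality attributes by the gap in agreement rates
    between promoters and detractors; larger gaps suggest more ROI potential."""
    gaps = {attr: promoter_agree[attr] - detractor_agree[attr]
            for attr in promoter_agree}
    return sorted(gaps.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical agreement rates with brand personality statements.
promoters = {"easy to do business with": 0.92, "knowledgeable": 0.88,
             "trusted friend": 0.81}
detractors = {"easy to do business with": 0.55, "knowledgeable": 0.74,
              "trusted friend": 0.40}

# The attribute with the largest promoter-detractor gap comes first.
print(attribute_gaps(promoters, detractors)[0][0])
```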
Customer loyalty is the business attribute with the strongest correlation to profitability. Loyalty lowers sales and acquisition costs per customer by amortizing these costs across a longer lifecycle, leading to extraordinary financial results. A 5% increase in customer loyalty can translate, depending on the industry, into a 25% to 85% increase in profits.
Many customer experience managers want to include a measure of loyalty in their customer experience research. Indeed, loyalty, and how brand perception drives loyalty, is the foundation of any brand perception research. However, loyalty is a behavior measured longitudinally over time, while surveys best measure customer attitudes. As a result, researchers typically use attitudinal proxies for customer loyalty. Generally, the two most common proxies are a “would recommend” question and a “customer advocacy” question.
- Would Recommend: A “would recommend” question is typically Net Promoter (NPS) or some other measure of the customer’s likelihood of referring the brand to a friend, relative or colleague. It stands to reason that if customers are going to refer others to a brand, they will remain loyal as well. Promoters’ willingness to put their reputation on the line is founded on a feeling of loyalty and trust.
- Customer Advocacy: A customer advocacy question asks if the customer agrees with the following statement, “the brand cares about me, not just the bottom line.” The concept of trust is perhaps more evident in customer advocacy. Customers who agree with this statement trust the brand to do right by them, and not subjugate their best interests to profits. Customers who trust the brand to do the right thing are more likely to remain loyal.
We’ve seen some loyalty surveys (particularly those employing the NPS methodology) which ask only the loyalty proxy, with little or no other investigation. We believe this is a bad practice for a number of reasons:
- Customer Experience: Customers who have affirmatively clicked on the survey link want to give you their opinion, and based on their experience they expect a multiple-question survey. Presenting them with just one rating scale risks alienating them: they may feel they didn’t get an appropriate opportunity to share their opinion, and ultimately that it was not worth their time to participate. Some customers may even conclude the survey system is broken because it presented only one question, resulting in confusion.
- Actionable Research Results: A survey consisting of one NPS rating is not going to yield any information from which to draw conclusions about how customers feel about the brand. It will produce an average rating and frequency of promoters and detractors, but no context in which to interpret the results.
Establishing and measuring loyalty proxies is an important first step in evaluating brand perception. Additional areas of investigation should include identifying and comparing customer impressions of the brand to your desired brand personality, and evaluating customer engagement or wallet share.