A New Normal: Implications for Bank Customer Experience Measurement Post Pandemic – Stabilizing Relationships
Part 3: Onboarding Research: Research Techniques to Track Effectiveness of Stabilizing New Customer Relationships
As we explored in an earlier post, Three Types of Customer Experiences CX Managers Must Understand, there are three types of customer interactions: Planned, Stabilizing, and Critical.
Stabilizing interactions are service encounters which promote customer retention, particularly in the early stages of the relationship. It is incumbent on an integrated digital-first banking model to stabilize new customers, without relying on the local branch to build the relationship. It is important, therefore, to get the onboarding process right in a systematic way.
New customers are at the highest risk of defection, as they have had less opportunity to confirm the provider meets their expectations. Turnover by new customers is particularly damaging to profits because many defections occur prior to recouping acquisition costs, resulting in a net loss on the customer relationship. As a result, customer experience managers should stabilize the customer relationship early to ensure a return on acquisition costs.
Systematic education shapes customer expectations, going beyond simply informing customers about additional products and services; it also shows new customers how to use services more effectively and efficiently – which is critical in a digital-first integrated strategy. Customers need to know how to navigate these channels effectively.
The first step in designing a research plan for the onboarding process is to define the process itself. Ask yourself, what type of stabilizing customer experiences do we expect at both the initial account opening and at discrete time periods thereafter (be it 30 days, 90 days, 1-year)? Understanding the expectations of the onboarding process will define your research objectives, allowing an informed judgment of what to measure and how to measure it.
Kinesis recommends measuring the onboarding process by auditing the performance of the process and its influence on the customer relationship from the bank and customer perspective.
Bank Perspective: Performance Audits
Performance audits are a type of mystery shop, and an effective tool to audit the performance of the onboarding process.
First, mystery shop the initial account opening (across all channels: digital, contact center, and branch) to evaluate its efficacy and effectiveness. Be sure to link these observations to a dependent variable, such as purchase intent, to determine which service attributes drive purchase intent. This will inform decisions with respect to training and incentives to reinforce the sales activities which drive purchase intent.
Beyond auditing the initial account opening experience, a performance audit of the onboarding process should test the presence and timing of specific onboarding events expected at discrete time periods. As an example, you may expect the following onboarding process after a new account is opened:
- At Opening: Internet Banking Presentation, Mobile Banking Presentation, Contact Center Presentation
- 1-10 Days: Welcome Letter, Internet Banking Password, Overdraft Protection Brochure, Mobile Banking E-Mail
- 30-45 Days: First Statement, Credit Card Offer, Auto Loan Brochure, Mortgage/Home Equity Loan Brochure
In this example, the bank’s customer experience managers have designed a process to increase awareness of digital channels, introduce the integrated layered service concept, and introduce additional services offered. An integrated research plan would recruit mystery shoppers for a long-term evaluation of the presence, timing, and effectiveness of each event in the onboarding process.
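A presence-and-timing audit like this is straightforward to automate. Below is a minimal Python sketch, assuming each long-term mystery shopper logs the day (relative to account opening) on which each event arrived; the event names and windows are illustrative, mirroring the example timeline above:

```python
# Hypothetical expected onboarding timeline: event -> (earliest day, latest day),
# measured in days after account opening.
EXPECTED_EVENTS = {
    "Welcome Letter": (1, 10),
    "Internet Banking Password": (1, 10),
    "First Statement": (30, 45),
    "Credit Card Offer": (30, 45),
}

def audit_onboarding(observed, expected=EXPECTED_EVENTS):
    """Compare the events a mystery shopper actually received
    (event -> day received) against the expected timeline."""
    results = {}
    for event, (earliest, latest) in expected.items():
        day = observed.get(event)
        if day is None:
            results[event] = "missing"
        elif day < earliest:
            results[event] = "early"
        elif day > latest:
            results[event] = "late"
        else:
            results[event] = "on time"
    return results
```

Aggregating these per-shopper results across the panel then shows which onboarding events are systematically missing or mistimed.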
In parallel to auditing the presence and timing of onboarding events, research should be conducted to evaluate the effectiveness of the process in stabilizing the customer relationship by surveying new customers at distinct intervals after customer acquisition. We recommend testing the effectiveness of the onboarding process by benchmarking three loyalty attitudes:
- Would Recommend: The likelihood of the customer recommending the brand to a friend, relative or colleague.
- Customer Advocacy: The extent to which the customer agrees with the statement, “You care about me, not just the bottom line.”
- Primary Provider: Does the customer consider you their primary provider for financial services?
These three measures, tracked together throughout the onboarding process, will give managers a measure of the effectiveness of stabilizing the relationship.
Again, new customers are at an elevated risk of defection. Therefore, it is important to stabilize the customer relationship early on to ensure ROI on acquisition costs. A well-designed research process will give managers an important audit of both the presence and timing of onboarding events, as well as track customer engagement and loyalty early in their tenure.
In the next post, we will explore the third type of experience – experiences with a significant amount of influence on the customer relationship – critical experiences.
Mystery shopping conducted without an overall customer experience objective may be interesting, and it may even motivate certain service behaviors, but it will ultimately fail to maximize return on investment.
Consider the following proposition:
“Every time a customer interacts with a brand, the customer learns something about the brand, and based on what they learn, adjusts their behavior in either profitable or unprofitable ways.”
These behavioral adjustments can be profitable: positive word of mouth, fewer complaints, use of less expensive channels, and increased wallet share, loyalty, or purchase intent. Or they can be unprofitable: negative word of mouth, more complaints, and decreased wallet share, purchase intent, or loyalty.
There is power in this proposition. Understanding it is the key to managing the customer experience in a profitable way. Unlocking this power gives managers a clear objective for the customer experience in terms of what you want the customer to learn from it and how you want them to react to it. Ultimately, it becomes a guidepost for all aspects of customer experience management – including customer experience measurement.
In designing customer experience measurement tools, ask yourself:
- What is the overall objective of the customer experience?
- How do you want the customer to feel as a result of the experience?
- How do you want the customer to act as a result of the experience?
- Do you want the customer to have increased purchase intent?
- Do you want the customer to have increased return intent?
- Do you want the customer to have increased loyalty?
The answers to these questions become the guideposts for designing a customer experience that achieves your objectives, and the basis for evaluating the customer experience against them. In research terms, they become the dependent variable(s) of your customer experience research – the variables influenced by, or dependent on, the specific attributes of the customer experience.
For example, let’s assume your objective of the customer experience is increased return intent. As part of a mystery shopping program, ask a question designed to capture return intent – a question like, “Had this been an actual visit, how did the experience during this shop influence your intent to return for another transaction?” This is the dependent variable.
The next step is to determine the relationship between every service behavior or attribute and the dependent variable (return intent). The strength of this relationship is a measure of the importance of each behavior or attribute in terms of driving return intent. It provides a basis from which to make informed decisions as to which behaviors or attributes deserve more investment in terms of training, incentives, and rewards.
This is what Kinesis calls Key Driver Analysis, an analysis technique designed to identify the service behaviors and attributes which are key drivers of your customer experience objectives. In the end, it provides an informed basis for making decisions about investments in the customer experience.
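As a sketch of the idea, the ranking step of Key Driver Analysis can be approximated with a simple correlation between each attribute's ratings and the dependent variable. The attribute names and data shape here are hypothetical, and a production analysis would likely use regression rather than raw correlations:

```python
def pearson(xs, ys):
    """Plain Pearson correlation, kept dependency-free."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def key_driver_ranking(shops, attributes, outcome="return_intent"):
    """Rank service attributes by the strength of their relationship to the
    dependent variable. `shops` is a list of dicts, one per mystery shop,
    each holding attribute ratings plus the outcome rating."""
    outcomes = [s[outcome] for s in shops]
    drivers = {a: pearson([s[a] for s in shops], outcomes) for a in attributes}
    return sorted(drivers.items(), key=lambda kv: kv[1], reverse=True)
```

The attributes at the top of the ranking are the candidates for increased investment in training, incentives, and rewards.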
Customer experience researchers are constantly looking for ways to make their observations relevant – to turn observations into insight. Observing a behavior or service attribute is one thing; linking observations to insight that will maximize return on customer experience investments is another. One way to link customer experience observations to insights that drive ROI is to explore the influence of customer experience attributes on key business outcomes such as loyalty and wallet share.
The first step is to gather impressions of a broad array of customer experience attributes, such as: accuracy, cycle time, willingness to help, etc. Make this list as long as you reasonably can without making the survey instrument too long.
The next step is to explore the relationship of these service attributes to loyalty and share of wallet.
Two Questions – Lots of Insight
In our experience, two questions – a “would recommend” question and a primary-provider question – yield valuable insight into the relative importance of specific service attributes. Together, they form the foundation of a two-dimensional analytical framework for determining which attributes drive loyalty and wallet share.
Research has determined the business attribute with the highest correlation to profitability is customer loyalty. Customer loyalty lowers sales and acquisition costs per customer by amortizing these costs across a longer lifetime – leading to some extraordinary financial results.
Measuring customer loyalty in the context of a survey is difficult. Surveys best measure attitudes and perceptions, and loyalty is a behavior, not an attitude. Survey researchers therefore need a proxy measurement for customer loyalty. A researcher might measure customer tenure under the assumption that length of relationship predicts loyalty; however, tenure is a poor proxy. A customer with a long tenure may still leave, while a new customer may be very satisfied and highly loyal.
A likelihood-of-referral question measures the customer’s willingness to refer a brand to a friend, relative, or colleague. It stands to reason that someone willing to refer others will remain loyal as well: customers who promote a brand are putting their reputation on the line, and that willingness is founded on a feeling of loyalty and trust.
Any likelihood-of-referral question can be used, depending on the specifics of your objectives. Kinesis has had success with both a yes/no question, “Would you refer us to a friend, relative or colleague?” and the Net Promoter methodology. The Net Promoter methodology asks for a rating of the likelihood of referral to a friend, relative or colleague on an 11-point (0-10) scale. Customers with a likelihood of 0-6 are labeled “detractors,” those with ratings of 7 or 8 are identified as “passive referrers,” while those who assign a rating of 9 or 10 are labeled “promoters.”
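The categorization rule above is mechanical and easy to express in code. A minimal sketch, using the segment labels from this post:

```python
def nps_segment(rating):
    """Classify a 0-10 likelihood-to-recommend rating per the
    Net Promoter convention described above."""
    if not 0 <= rating <= 10:
        raise ValueError("NPS ratings run 0-10")
    if rating <= 6:
        return "detractor"
    if rating <= 8:
        return "passive referrer"
    return "promoter"

def net_promoter_score(ratings):
    """NPS = % promoters minus % detractors, from -100 to +100."""
    segments = [nps_segment(r) for r in ratings]
    n = len(segments)
    promoters = segments.count("promoter") / n
    detractors = segments.count("detractor") / n
    return round(100 * (promoters - detractors))
```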
In our experience, asking the yes/no question “Would you refer us to a friend, relative or colleague?” produces starker differences in this two-dimensional analysis, making it easier to identify which service attributes have a stronger relationship to both loyalty and engagement.
Similar to loyalty, customer engagement or wallet share can lead to some extraordinary financial results. Wallet share is the percentage of a customer’s spending in a category that goes to a given brand over a specific period of time.
Also similar to loyalty, measuring engagement or wallet share in a survey is difficult. There are several ways to do it: one methodology uses a formula such as the Wallet Allocation Rule, which has customers rank the brands they use in the same product category and employs that rank to estimate wallet share; another uses a simple yes/no primary-provider question.
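For illustration, the Wallet Allocation Rule reduces to a one-line formula once a customer has ranked the brands they use. This sketch assumes untied ranks; the brand names are hypothetical:

```python
def wallet_allocation(ranks):
    """Estimate wallet shares from a customer's ranking of the brands they
    use in a category, per the Wallet Allocation Rule:
        share = (1 - rank / (n + 1)) * (2 / n)
    `ranks` maps brand -> rank (1 = most preferred); shares sum to 1.
    (Tied ranks need extra handling not shown here.)"""
    n = len(ranks)
    return {brand: (1 - rank / (n + 1)) * (2 / n) for brand, rank in ranks.items()}
```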
Using these loyalty and engagement measures together, we can now cross tabulate the array of service attribute ratings by these two measures. This cross tabulation groups the responses into four segments: 1) Engaged & Loyal, 2) Disengaged yet Loyal, 3) Engaged yet Disloyal, 4) Disengaged & Disloyal. We can now make comparisons of the responses by these four segments to gain insight into how each of these four segments experience their relationship with the brand.
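The cross tabulation itself is simple. A minimal sketch, assuming yes/no answers to the would-recommend and primary-provider questions and a flat list of survey responses with hypothetical attribute fields:

```python
from collections import defaultdict

def segment(would_recommend, primary_provider):
    """Place a respondent into one of the four segments from yes/no answers
    to the would-recommend and primary-provider questions."""
    if would_recommend and primary_provider:
        return "Engaged & Loyal"        # the ideal
    if would_recommend:
        return "Disengaged yet Loyal"   # opportunity
    if primary_provider:
        return "Engaged yet Disloyal"   # recovery
    return "Disengaged & Disloyal"      # attrition

def attribute_means_by_segment(responses, attribute):
    """Mean rating of one service attribute within each segment, for
    side-by-side comparison across the four groups."""
    buckets = defaultdict(list)
    for r in responses:
        key = segment(r["would_recommend"], r["primary_provider"])
        buckets[key].append(r[attribute])
    return {seg: sum(vals) / len(vals) for seg, vals in buckets.items()}
```

Running `attribute_means_by_segment` for each attribute in the array produces the segment-by-segment comparison discussed below.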
These four segments represent: the ideal, opportunity, recovery and attrition.
Ideal – Engaged Promoters: This is the ideal customer segment. These customers rely on the brand for the majority of their in-category purchases and represent lower attrition risk. In short, they are perfectly positioned to provide the financial benefits of customer loyalty. Comparing attribute ratings for customers in this segment to the others will identify areas of strength, and at the same time reveal attributes which are less important in driving this ideal state, informing future decisions on investment in these attributes.
Opportunity – Disengaged Promoter: This customer segment represents an opportunity. These customers like the brand and are willing to put their reputation at risk for it. However, there is an opportunity for cross-sell to improve share of wallet. Comparing attribute ratings of the opportunity segment to the ideal will identify service attributes with the highest potential for ROI in terms of driving wallet share.
Recovery – Engaged Detractor: This segment represents significant risk. The combination of above-average share of wallet and low willingness to put their reputation on the line is flat-out dangerous, as it puts profitable share of wallet at risk. Comparing attribute ratings of customers in the recovery segment to both the ideal and the opportunity segments will identify the service attributes with the highest potential for ROI in terms of improving loyalty.
Attrition – Disengaged Detractor: This segment represents the greatest risk of attrition. With no willingness to put their reputation on the line, and little commitment to placing share of wallet with the brand, retention strategies may come too late for these customers. Additionally, they are most likely unprofitable. Comparing the service attributes of customers in this segment to the others will identify elements of the customer experience which drive attrition and may warrant increased investment, as well as elements that do not appear to matter much in terms of driving runoff and may not warrant investment.
By making comparisons across each of these segments, researchers give managers a basis to make informed decisions about which service attributes have the strongest relationship to loyalty and engagement – and thus which behaviors have the highest potential for ROI in terms of driving them. This two-dimensional analysis is one way to turn customer experience observations into insight.
Previously we discussed ways researchers can increase the likelihood of respondents opening an email survey invitation. Additionally, in a subsequent post we discussed how to get respondents to actually click on the survey link and participate in the survey.
This post is a discussion of ways to keep respondents motivated to complete the entire survey once they have entered it.
At its core, the key to completion rates is an easy to complete and credible survey that delivers on all promises offered in the invitation email.
From time to time various service providers of mine send me a survey invite, and I’m often surprised how many of them impose upon me, their customer, to complete a 30- or 40-minute survey. They never disclose the survey length in advance, which communicates a complete lack of respect for my time. Beyond being an imposition, it is also bad research practice: ten minutes into the survey I’m either pressed for time, frustrated, or just plain bored, and I either exit the survey or frivolously complete the remaining questions without any real consideration of my opinions – completely undermining the reliability of my responses.
We recommend keeping survey length short, no more than 10 to 12 minutes – in some cases such as a post-transaction survey – 5 minutes.
If research objectives require a long survey, rather than impose a ridiculously long survey on your customers and produce frivolous results, break a 30-40 minute survey into two, or better yet, three parts, fielding each part to a portion of your targeted sample frame.
Additionally, skip logic should be employed to avoid asking questions that are not applicable to a given respondent, thus decreasing the volume of questions you present to the end customer.
Finally, include a progress bar to keep respondents informed of how far along they are on the survey.
Ease of Completion
The last thing you want respondents feeling as they complete your survey is frustration. If the sample frame is made up of your customers, the primary thing you accomplish is upsetting your customers and damaging your brand. You also create bad research results, because frustrated respondents are not in the proper mindset to give you well-considered answers.
Frustration can come from awkward design, question wording, poor programming, and insufficient response choices. Survey wording and vocabulary should be simple and jargon free, response choices should be comprehensive, and of course the survey programming should be thoroughly proofed and pretested.
Pretesting is a process where the survey is fielded in advance to a portion of the sample frame to test how they respond to it. Significant portions of the questionnaire left unanswered, or a high volume of “other” or “none of the above” responses, could signal trouble with the survey design.
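Those warning signs can be checked programmatically once pretest data is in hand. A rough sketch; the field names and 25% thresholds are illustrative choices, not an industry standard:

```python
def pretest_flags(responses, questions, skip_threshold=0.25, other_threshold=0.25):
    """Scan pretest responses for warning signs: a high share of skipped
    answers (None) or 'other' selections on any question."""
    n = len(responses)
    flags = {}
    for q in questions:
        answers = [r.get(q) for r in responses]
        skip_rate = answers.count(None) / n
        other_rate = answers.count("other") / n
        if skip_rate > skip_threshold or other_rate > other_threshold:
            flags[q] = {"skip_rate": skip_rate, "other_rate": other_rate}
    return flags
```

Any flagged question is a candidate for rewording or for an expanded set of response choices before the full field.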
Survey completion should be easy. Survey entry should work across a variety of platforms, browsers and devices.
Additionally, respondents should be allowed to take the survey on their own time, even leaving the survey while saving their answers to date and allowing reentry when it is more convenient for them.
It is incumbent on researchers fielding self-administered surveys to maximize response rates. This reduces the potential for response bias, where the survey results may not accurately reflect the opinions of the entire population of targeted respondents. Previously we discussed ways researchers can increase the likelihood of respondents opening an email survey invitation. This post addresses how to get respondents to actually click on the survey link and participate in the survey.
Make the Invite Easy to Read
Don’t bury the lead. The opening sentence must capture the respondent’s attention and motivate them to invest the effort to read the invitation. Keep in mind most people skim emails: keep the text of the invitation short, pay close attention to paragraph length, and make the email easy to skim.
Give a Reward
Offering respondents a reward is an excellent way to motivate participation. Tangible incentives like a drawing, coupon, or gift card, if appropriate and within budget, are excellent tools to maximize response rates. However, rewards do not necessarily need to be tangible; intangible rewards can also prove excellent motivators. People, particularly customers who have a relationship with the brand, want to be helpful. Expressing the importance of their opinion, and communicating how the brand will use the survey to improve its offering for customers like the respondent, is an excellent avenue to leverage intangible rewards to motivate participation.
Intangible rewards are often sufficient if the respondent’s cost to participate in the survey is minimal. Perhaps the largest cost to a potential respondent is the time required to complete the survey. Give them an accurate estimate of that time – and keep it short. We recommend no more than 10 minutes, preferably five to six. If the research objectives require a longer survey instrument, break the survey into two or three shorter surveys and deliver them separately to different targeted respondents. Do not field excessively long surveys or misquote the estimated completion time – it is rude to impose on your respondents, disastrous to your participation rates, and unethical. As with getting participants to open the email, credibility plays a critical role in getting them to click on the survey.
One of the best ways to garner credibility with the survey invite is to assure the participant confidentiality. This is particularly important for customer surveys where customers commonly interact with employees. For example, a community bank, where customers may interact with bank employees not only in the context of banking but broadly in the community, must assure customers that their survey responses will be kept strictly confidential.
Personalizing the survey with appropriate merge fields is also an excellent way to garner credibility.
Make it as easy as possible for the participant to enter the survey. Program a link to the survey, and make sure it is both visible and presented early in the email. Again, most people skim the contents of emails, so place the link in the top third of the email and make it clear that it is a link to enter the survey.
In designing survey invitations, remember to write short, concise, easy-to-read emails that both leverage the respondent’s reward centers (tangible or intangible) and credibly estimate the short time required to complete the survey. This approach will help maximize response rates and avoid some of the pitfalls of response bias. The next post in this series covers prompting respondents to complete the survey.
In fielding surveys, researchers must be aware of the concepts of error and bias and how they can creep into a survey, potentially making it unreliable in ways that cannot be predicted. For example, one source of error is statistical error, where not enough respondents are surveyed to make the results statistically reliable. Another is response bias, caused by not having a random sample of the targeted population.
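The statistical-error side is the easier of the two to quantify. As a quick sketch, the standard margin-of-error arithmetic for a proportion at roughly 95% confidence, using the conservative p = 0.5 assumption:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a proportion at ~95% confidence (z = 1.96),
    using the conservative p = 0.5 assumption."""
    return z * math.sqrt(p * (1 - p) / n)

def required_sample(margin=0.05, p=0.5, z=1.96):
    """Smallest sample size whose margin of error is within `margin`."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)
```

For example, a ±5% margin at 95% confidence calls for roughly 385 completed surveys, which is why low response rates can make even a large mailing statistically thin.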
A key concept of survey research is randomness of sample selection – in essence, giving each member of the targeted survey population an equal chance of being surveyed. Response rates are important in self-administered surveys (such as email surveys) because it is possible non-responders (people who for some reason choose not to complete the survey) have different opinions than those who choose to participate. As a result, the survey is not purely random. If non-responders are somehow different than responders, the survey results will reflect that difference – thus biasing the research. It is therefore incumbent on researchers to maximize the survey response rate.
Say for example, a bank wants to survey customers after they have completed an online transaction. If customers who love the bank’s online capabilities are more likely to participate in the survey than those who do not like the bank’s online capabilities, the survey results will be biased in favor of a positive view of the bank’s online offering because it is not a representative sample – it is skewed toward customers with the positive view.
It is, again, incumbent on researchers to maximize the response rate as much as possible in self-administered email surveys.
Pre-Survey Awareness Campaign
One strategy to maximize response rates (particularly in a customer survey context) is a pre-survey awareness campaign to make customers aware of the coming survey and encourage participation. Such a campaign can take many forms, such as:
- Letter on company letterhead, signed by a high profile senior executive.
- Statement or billing inserts
- Email in advance of the survey
Each of these three are excellent ways to introduce the survey to respondents and maximize response rates.
The next steps in maximizing response rates in email surveys are passing SPAM filter tests and prompting the recipient to open the email. The core concept here is credibility – make the email appear as credible as possible.
The first step in maintaining credibility is to avoid getting caught in SPAM filters. The email content should avoid the following:
- Words common in SPAM, like “win” or “free”
- The use of ALL CAPS
- Excessive punctuation
- Special characters
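Checks like these can be run against an invitation before it is sent. A rough sketch; the trigger-word list and rules below are illustrative, not any actual filter's rule set (the 50-character subject check follows the guideline discussed later in this post):

```python
import re

# Illustrative trigger-word list; real filters use far larger, evolving rules.
SPAM_WORDS = {"free", "win", "winner", "cash", "prize"}

def invite_red_flags(subject, body):
    """Flag common spam-filter triggers in a survey invitation: spammy
    words, ALL-CAPS words, repeated punctuation, and a long subject line."""
    text = f"{subject} {body}"
    words = re.findall(r"[A-Za-z']+", text)
    flags = []
    if any(w.lower() in SPAM_WORDS for w in words):
        flags.append("spam word")
    if any(w.isupper() and len(w) > 2 for w in words):
        flags.append("all caps")
    if re.search(r"[!?]{2,}", text):
        flags.append("excessive punctuation")
    if len(subject) > 50:
        flags.append("long subject line")
    return flags
```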
Additionally, do not spoof emails. Spoofing is the forgery of an email header to make it appear it originated from a source other than the actual source. Send emails from your server. (Sometimes Kinesis has clients who want the email to appear to originate from their server. In such cases, we receive the sample from the client, append a unique identifier and send it back to the client to actually be mailed from their servers.)
Perhaps the best strategy to maintain the credibility of the email invite is to conform to Marketing Research Association (MRA) guidelines. These guidelines include:
- Clearly identify the researcher, including phone number, mailing address, and email
- Post privacy policies online and include a link to these policies
- Include a link to opt out of future emails
From and Subject Lines
Both the FROM and SUBJECT lines are critical in getting the respondent to open the email.
The FROM line has to be as credible and recognizable as possible, avoiding vague or generic terms like “feedback”. For surveys of customers, the company name or the name of a recognizable representative of the company should be used.
The SUBJECT line must communicate the subject of the email in a credible way that will make the respondent want to open the email. Keep it brief (50 characters or less), clear, concise and credible.
Not only is the content of the email important, but the timing of delivery plays a role in response rates. In our experience sending the survey invitation in the middle of the week (Tuesday – Thursday) during daytime hours increases the likelihood that the email will be noticed by the respondent.
After an appropriate amount of time (typically for our clients 5 days), reminder emails should be sent, politely reminding the respondent of the previous invitation, and highlighting the importance of their opinion. One, perhaps two, reminder emails are appropriate, but do not send more than two.
To maximize the probability that respondents will receive and open the email focus on sending a credible email mid-week, one which will pass SPAM filter tests, contain accurate credible and compelling SUBJECT and FROM lines, and send polite reminder emails to non-responders.
But opening the email is just the first step. The actual objective is to get respondents to open and complete the survey. The next post in this series covers prompting respondents to participate in the survey.
Best in class mystery shop programs provide managers a means of applying coaching, training, incentives, and other motivational tools directly on the sales and service behaviors that matter most in terms of driving the desired customer experience outcome. One tool to identify which sales and service behaviors are most important is Key Driver Analysis.
Key Driver Analysis determines the relationship between specific behaviors and a desired outcome. For most brands and industries, the desired outcome is purchase intent or return intent (customer loyalty). This analytical tool helps managers identify and reinforce the sales and service behaviors which drive sales or loyalty – the behaviors that matter.
As with all research, it is a best practice to anticipate the analysis when designing a mystery shop program. In anticipating the analytical needs of Key Driver Analysis identify what specific desired outcome you want from the customer as a result of the experience.
- Do you want the customer to purchase something?
- Do you want them to return for another purchase?
The answers to these questions anticipate the analysis, allowing you to build in the mechanisms Key Driver Analysis needs to identify which behaviors are most important in driving the desired outcome – which behaviors matter most.
Next, ask shoppers if they had been an actual customer, how the experience influenced their return intent. Group shops by positive and negative return intent to identify how mystery shops with positive return intent differ from those with negative. This yields a ranking of the importance of each behavior by the strength of its relationship to return intent.
Additionally, pair the return intent rating with a follow-up question asking why the shopper rated their return intent as they did. The responses should be classified into similar themes and grouped by the return intent rating described above. This analysis produces a qualitative determination of which sales and service practices drive return intent.
Finally, Key Driver Analysis produces a means to identify which behaviors have the highest potential for return on investment in terms of driving return intent. This is achieved by comparing the importance of each behavior (as defined above) against its performance (the frequency with which it is observed). Mapping this comparison in a quadrant chart provides a means of identifying behaviors with relatively high importance and low performance – the behaviors which will yield the highest potential return on investment in terms of driving return intent.
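The quadrant mapping can be sketched in a few lines, splitting each axis at its median; the behavior names, scores, and quadrant labels below are hypothetical:

```python
from statistics import median

def quadrant_map(behaviors):
    """Classify each behavior by importance (strength of its relationship
    to return intent) vs performance (how often it is observed), splitting
    each axis at its median. High-importance/low-performance behaviors are
    the candidates with the highest ROI potential."""
    imp_cut = median(b["importance"] for b in behaviors.values())
    perf_cut = median(b["performance"] for b in behaviors.values())
    out = {}
    for name, b in behaviors.items():
        hi_imp = b["importance"] >= imp_cut
        hi_perf = b["performance"] >= perf_cut
        if hi_imp and not hi_perf:
            out[name] = "invest"        # important but rarely observed
        elif hi_imp:
            out[name] = "maintain"      # important and frequently observed
        elif hi_perf:
            out[name] = "de-emphasize"  # observed often, matters little
        else:
            out[name] = "monitor"
    return out
```

The "invest" quadrant is then the natural input to the scoring-weight feedback loop described next.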
Behaviors with the highest potential for return on investment can then be inserted into a feedback loop into the mystery shop scoring methodology by informing decisions with respect to weighting specific mystery shop questions, assigning more weight to behaviors with the highest potential for return on investment.
Employing Key Driver Analysis gives managers a means of focusing training, coaching, incentives, and other motivational tools directly on the sales and service behaviors that will produce the largest return on investment. See the attached post for further discussion of mystery shop scoring.
These days, post-transaction surveys are ubiquitous. Brands large and small take advantage of internet-based survey technology to evaluate the customer experience at almost every touch point. Similarly, loyalty proxy methodologies such as Net Promoter (NPS) are very much in vogue. However, many NPS surveys are fielded in a post-transaction context (potentially exposing the research to sampling bias as a result of only hearing from customers who have recently conducted a transaction), and are not designed in a manner that will give managers appropriate information upon which to take action on the research.
At their core, loyalty proxies are brand perception research – not transactional. We believe it is a best practice to define the sample frame as the entire customer base, as opposed to customers who have recently interacted with the brand. Ultimately, these surveys are image and perception research of the brand across the entire customer base.
Happily, this perception research offers an excellent opportunity to gather customer perceptions of the brand, compare them to your desired brand image, as well as measure engagement or wallet share. An excellent survey instrument to accomplish this is a survey divided into three parts:
- Loyalty Proxy: consisting of the NPS rating (or some other appropriate measure) and one or two follow-up questions to explore why the customer gave the rating they did.
- Image Perception: consisting of 3 or 4 questions to determine how customers perceive the brand.
- Engagement/Wallet Share: consisting of 3 or 4 questions to determine whether the customer considers the brand their primary provider, and to gauge share of wallet of various financial products and services across the brand and its competitors.
This research plan will not only yield an NPS; it will also provide insight into why customers assigned the ratings they did, evaluate the extent to which the customer base's impressions of the brand match your desired brand image, and identify how the brand is perceived by promoters and detractors. The plan will also yield valuable insight into share of wallet, and how wallet share differs between promoters and detractors.
Such a survey need not be long: the above objectives can be accomplished with 10–12 questions, and the survey will likely take less than 5 minutes for the customer to complete.
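As a minimal illustration of the loyalty proxy portion, the sketch below computes an NPS from a set of hypothetical 0–10 "likelihood to recommend" ratings, using the standard promoter (9–10) and detractor (0–6) cutoffs.

```python
# Hypothetical 0-10 "likelihood to recommend" ratings.
ratings = [10, 9, 9, 8, 7, 6, 10, 3, 9, 8]

promoters  = sum(1 for r in ratings if r >= 9)   # ratings of 9-10
detractors = sum(1 for r in ratings if r <= 6)   # ratings of 0-6
nps = 100 * (promoters - detractors) / len(ratings)

print(f"NPS = {nps:.0f}")  # prints "NPS = 30" for this sample
```

Note that passives (7–8) count toward the total but toward neither promoters nor detractors, which is why NPS can move even when no one becomes a detractor.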
In subsequent posts, we will explore each of these three parts of the survey in more detail.
Looking for a tried and true model to understand your service quality?
The SERVQUAL model is an empirical model that has been around for nearly 30 years. While not new, it is a foundation of many of the service quality and customer experience concepts in use today. It is a gap model, designed to measure the gaps between customer perceptions and customer expectations.
SERVQUAL describes the customer experience in terms of five dimensions:
1. TANGIBLES – Appearance of physical facilities, equipment, personnel, and communication materials
2. RELIABILITY – Ability to perform the promised service dependably and accurately
3. RESPONSIVENESS – Willingness to help customers and provide prompt service
4. ASSURANCE – Knowledge and courtesy of employees and their ability to convey trust and confidence
5. EMPATHY – Caring, individualized attention the firm provides its customers
Each of these five dimensions is measured using a survey instrument consisting of individual attributes which roll up into each dimension.
For example, the five dimensions may consist of the following individual attributes:
• Appearance/cleanliness of physical facilities
• Appearance/cleanliness of personnel
• Appearance/cleanliness of communication/marketing materials
• Appearance/cleanliness of equipment
• Perform services as promised/right the first time
• Perform services on time
• Follow customer’s instructions
• Show interest in solving problems
• Telephone calls/other inquiries answered promptly
• Willingness to help/answer questions
• Problems resolved quickly
• Knowledgeable employees/job knowledge
• Employees instill confidence in customer
• Employee efficiency
• Employee recommendations
• Questioning to understand needs
• Interest in helping
• Individualized/personal attention
• Ease of understanding/use understandable terms
• Understand my needs/recommending products to best fit my needs
• The employees have my best interests at heart
Call to Action
Research without a call to action may be informative, but not very useful. By measuring both customer perceptions and expectations, SERVQUAL gives managers the ability to prioritize investments in the customer experience based not only on performance, but on performance relative to customer expectations.
The first step in taking action on SERVQUAL results is to calculate a Gap Score by simply subtracting the expectation rating from the perception rating for each attribute (Gap Score = Perception – Expectation). This step alone will give you a basis for ranking each attribute based on its gap between customer perceptions and expectations.
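The Gap Score calculation can be sketched as follows. The attribute names are drawn from the list above, but the ratings (on a hypothetical 1–7 scale) are invented for illustration.

```python
# Hypothetical perception and expectation ratings on a 1-7 scale.
attributes = {
    "Perform services as promised": {"perception": 5.8, "expectation": 6.5},
    "Problems resolved quickly":    {"perception": 5.2, "expectation": 6.4},
    "Appearance of facilities":     {"perception": 6.1, "expectation": 5.5},
}

# Gap Score = Perception - Expectation for each attribute.
gap_scores = {name: a["perception"] - a["expectation"]
              for name, a in attributes.items()}

# Rank attributes: the most negative gap is the biggest shortfall
# against customer expectations.
for name, gap in sorted(gap_scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: {gap:+.1f}")
```

In this toy data, "Problems resolved quickly" shows the largest negative gap, so it would rank first for attention, while the positive gap on facilities suggests expectations are already exceeded there.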
Service Quality Score
In addition to ranking service attributes, the Gap Score can be used to calculate a Service Quality Score, either unweighted or weighted by the relative importance customers assign to each of the five service quality dimensions.
The first step in calculating a Service Quality Score is to average the Gap Score of each attribute within each dimension. This will give you the Gap Score for each dimension (GSD). Averaging the dimension Gap Scores will yield an Unweighted Service Quality Score.
From this unweighted score, it is a three-step process to calculate a Weighted Service Quality Score.
First, determine importance weights by asking customers to allocate a fixed number of points (typically 100) across each of the five dimensions based on how important the dimension is to them. This point allocation will yield a weight for each dimension based on its importance.
The second step is to multiply the Gap Score for each dimension (GSD) by its importance weight. The final step is to simply sum this product across all five dimensions; this will yield a Weighted Service Quality Score.
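The full calculation, from dimension Gap Scores (GSD) through the unweighted and weighted Service Quality Scores, can be sketched as follows. The GSD values and the 100-point importance allocation are hypothetical.

```python
# Hypothetical Gap Score for each dimension (GSD): the average of the
# attribute Gap Scores within that dimension.
gsd = {
    "Tangibles":      +0.2,
    "Reliability":    -0.9,
    "Responsiveness": -0.6,
    "Assurance":      -0.3,
    "Empathy":        -0.5,
}

# Hypothetical importance weights: customers allocate 100 points
# across the five dimensions.
weights = {
    "Tangibles": 10, "Reliability": 30, "Responsiveness": 25,
    "Assurance": 20, "Empathy": 15,
}

# Unweighted score: simple average of the dimension Gap Scores.
unweighted_sq = sum(gsd.values()) / len(gsd)

# Weighted score: multiply each GSD by its importance proportion,
# then sum across all five dimensions.
weighted_sq = sum(gsd[d] * weights[d] / 100 for d in gsd)

print(f"Unweighted SQ: {unweighted_sq:+.2f}")
print(f"Weighted SQ:   {weighted_sq:+.2f}")
```

Because this customer base weights Reliability and Responsiveness heavily, the weighted score comes out worse than the unweighted one, which is exactly the signal the weighting is meant to surface.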
What does all this mean? See the following post for discussion of the implications of SERVQUAL for customer experience managers: The 5 Service Dimensions All Customers Care About.
In a previous post we discussed the importance of research objectives in program design. A natural progression of this subject is using research objectives to design a successful questionnaire.
All too often, I find clients who have gone online, found a questionnaire, and implemented it in their survey process, in effect handing research design over to an anonymous author on the Internet who has given no consideration to their specific needs. Inexperience with both the art and science of questionnaire design causes them to miss out on building a research tool customized to their specific needs.
While questionnaire design is a professional skill fraught with many perils for the inexperienced, the following process will eliminate some common mistakes.
First, define research objectives. Do not skip this step. Defining research objectives prior to making any other decisions about the program is by far the most effective way to make sure your program stays on track, on budget, and produces results that drive business success. See the previous post regarding research objectives. Once a set of objectives has been defined, questionnaire design naturally falls out of the process: simply write a survey question for each objective.
For example, consider the following objective set:
1. Determine the level of customer satisfaction and provide a reference point for other satisfaction-based analysis.
2. Identify which service attributes drive satisfaction and which investments yield the greatest improvement in customer satisfaction.
3. Identify moments of truth where the danger of customer attrition is highest.
4. Track changes in customer satisfaction over time.
For each objective, write a survey question. For the first objective (overall satisfaction), write an overall satisfaction question. For objective #2 (attribute satisfaction), develop a list of service attributes and measure satisfaction relative to each. Continue the process for each objective for which a survey question can be written.
Question order is important and the placement of every question should be considered to avoid introducing bias into the survey as a result of question order. Generally, we like to place overall satisfaction questions early in the survey to avoid biasing the results with later attribute questions.
Similarly, question phrasing needs to be carefully considered to avoid biasing the responses. Keep phrasing neutral to avoid nudging respondents one way or the other. Sure, there is a temptation to use overly positive language with your customers, but this really is a bad practice.
Finally, anticipate the analysis. As you write the questionnaire, consider how the results will be reported and analyzed. Anticipating the analysis will make sure the survey instrument captures the data needed for the desired analysis.
Research design is a professional art. If you are not sure what you are doing, seek a professional to help you rather than field poor research with a do-it-yourself tool.