Every time a customer interacts with a brand, they learn something about that brand and adjust their behavior based on what they learn, in ways that are either profitable or unprofitable for the brand. The implication of this proposition is that the customer experience can be managed in such a way as to influence customer behavior in profitable ways.
To understand how to drive customer behaviors via the customer experience, it is first important to define the customer behaviors you wish to influence, and then to align marketing messages, performance standards, training content, employee incentives, and measurement systems to encourage those behaviors.
It is impossible, of course, to plan every customer experience or to ensure that every experience occurs exactly as intended. However, companies can identify the types of experiences that impart the right kind of information to customers at the right times. It is useful to group these experiences into three categories of company/customer interaction: Stabilizing, Critical, and Planned.
Stabilizing interactions promote customer retention, particularly in the early stages of the relationship.
New customers are at the highest risk of defection. As customers become more familiar with a brand, they adjust their expectations accordingly; new customers, by contrast, are more likely to experience disappointment, and thus more likely to defect. Turnover by new customers is particularly hard on profits because many defections occur prior to break-even, resulting in a net loss for the company. Thus, experiences that stabilize the customer relationship early on ensure that a higher proportion of customers will reach positive profitability.
The keys to an effective stabilizing strategy are education, competence and consistency.
Education influences expectations, helping customers develop realistic expectations. It goes beyond simply informing customers about the products and services offered by the company. It systematically informs new customers how to use the brand’s services more effectively and efficiently, how to obtain assistance, how to complain, and what to expect as the relationship progresses. In addition to influencing expectations, systematic education leads to greater efficiency in the way customers interact with the company, thus driving down the cost of customer service and support.
Critical interactions are service encounters that lead to memorable customer experiences. While most service is routine, from time to time a situation arises that is out of the ordinary: a complaint, a question, a special request, a chance for an employee to go the extra mile. The outcomes of these critical incidents can be either positive or negative, depending upon the way the company responds to them; however, they are seldom neutral. The longer a customer remains with a company, the greater the likelihood that one or more critical interactions will have occurred.
Because they are memorable and unusual, critical interactions tend to have a powerful effect on the customer relationship. We often think of these as “moments of truth,” where the brand has an opportunity to solidify the relationship and earn a loyal customer, or risk the customer’s defection. Positive outcomes lead to “customer delight” and word-of-mouth endorsements, while negative outcomes lead to customer defections, diminished share of wallet, and unfavorable word-of-mouth.
The key to an effective critical interaction strategy is opportunity: systems and processes must put employees in a position to react to these critical moments of truth.
An effective customer experience strategy should include systems for recording critical interactions, analyzing trends and patterns, and feeding that information back to the organization. Employees can then be trained to recognize critical opportunities, and empowered to respond to them in such a way that they will lead to positive outcomes and desired customer behaviors.
Planned interactions are intended to increase customer profitability through up-selling and cross-selling. These interactions are frequently triggered by changes in the customers’ purchasing patterns, account usage, financial situation, family profile, etc. CRM analytics combined with Big Data are becoming quite effective at recognizing such opportunities and prompting action from service and sales personnel. Customer experience managers should have a process to record and analyze the quality of execution of planned interactions, with the objective of evaluating the brand’s performance at the customer-brand interface, regardless of the channel.
The key to an effective strategy for planned interactions is appropriateness. Triggered requests for increased spending must be made in the context of the customers’ needs and permission; otherwise the requests will come off as clumsy and annoying. By aligning information about execution quality (cause) and customer impressions (effect), customer experience managers can build a more effective and appropriate approach to planned interactions.
For additional perspectives on research techniques to monitor the customer experience in the stabilizing phase of the relationship, see the post: Onboarding Research: Research Techniques to Track Effectiveness of Stabilizing New Customer Relationships.
For additional perspectives on a research methodology to investigate “Critical” experiences, see the post: Critical Incident Technique: A Tool to Identify and Prepare for Your Moments of Truth.
For additional perspectives on research methodologies to investigate “Planned” experiences throughout the customer life cycle, see the post: Research Tools to Monitor Planned Interactions Through the Customer Life Cycle.
What if I told you that after all your efforts with marketing (product, positioning and price), there is a one-in-ten chance the branch representatives will undermine the sale?
Now more than ever, it is critical for banks to establish themselves as the primary provider of financial services, not only for deposit accounts but across a variety of financial products and services. Increasing the average products per customer will require a strategic approach to both product design and marketing. However, at the end of this strategic marketing process, there is the human element, where prospective customers must interact with bank employees to complete the sales process.
As part of our services to our clients, Kinesis tracks purchase intent as a result of in-branch sales presentations. According to our research, 10% of in-branch sales presentations observed by mystery shoppers result in negative purchase intent.
What do these failed sales presentations look like?
Here are some quotes describing the experience:
“There was no personal attention. The banker did not seem to care if I was there or not. At the teller line, there was only one teller that seemed to care that there were several people waiting. No one moved with a sense of urgency. There was no communication materials provided.”
Here’s another example…
“It was painfully obvious that the banker was lacking basic knowledge of the accounts.”
“Brian did not give the impression that he wanted my business. He did not stand up and shake my hand when I went over to his desk. He very rarely made eye contact. I felt like he was just going through the motions. He did not ask for my name or address me by my name. He told me about checking account products but failed to inquire about my situation or determine what needs I have or might have in the future. He did not wrap up the recommendation by going over everything nor did he ask for my business. He did not thank me for coming in.”
In contrast, here is what the shops with positive intent look like:
“The appearance of the bank was comfortable and very busy in a good way. The customers were getting tended to and the associates had the customers’ best interests in mind. The response time was amazing and I felt as if the associate was sincere about wanting me as a customer, but he was not pushy or demanding about it.”
Now…after all the effort and expense of a strategic cross-sell strategy, which of the above experiences do you want your customers to encounter?
Would it be acceptable to you, as a marketer, to have 10% of the sales presentations undermine the success of a strategic marketing campaign?
These are rhetorical questions.
Time and time again, in study after study, we consistently observe that purchase intent is driven by two dimensions of the customer experience: reliability and empathy. Customers want bankers who care about them and their needs and have the ability to satisfy those needs. Specifically, our research suggests the following behaviors are strongly related to purchase intent:
- Greeting/Stand to Greet/Acknowledge Wait
- Interest in Helping/Offer Assistance
- Discuss Benefits/Solutions
- Promised Services Get Done
- Express Appreciation/Gracious
- Personalized Comment (such as, How are you?)
- Listen Attentively/Undivided Attention
As part of any strategic marketing campaign, both to bring in new customers and to increase wallet share among existing customers, it is incumbent on the institution to put in place appropriate customer experience training and sales and service monitoring, linked with incentive and reward structures, to motivate the sales and service behaviors that drive purchase intent.
by CHRIS ARLEN
Not All Dimensions Are Equal
All dimensions are important to customers, but some more than others.
Service providers need to know which are which to avoid majoring in minors. At the same time they can’t focus on only one dimension and let the others suffer.
SERVQUAL research gauged the dimensions’ relative importance by asking customers to allocate 100 points across all five dimensions.*
Here’s their importance to customers.
The 5 Service Dimensions Customers Care About
What does this mean for service providers?
#1 Just Do It
RELIABILITY: Do what you say you’re going to do when you said you were going to do it.
Customers want to count on their providers. They value that reliability. Don’t providers yearn to find out what customers value? This is it. It’s three times more important to be reliable than to have shiny new equipment or flashy uniforms.
That doesn’t mean you can have ragged uniforms as long as you’re reliable; service providers have to do both. But providers’ first and best efforts are better spent making service reliable, whether that’s periodics on schedule, on-site response within Service Level Agreements (SLAs), or Work Orders completed on time.
#2 Do It Now
RESPONSIVENESS: Respond quickly, promptly, rapidly, immediately, instantly.
Waiting a day to return a call or email doesn’t make it. Even if customers are chronically slow in getting back to providers, responsiveness is more than 1/5th of their service quality assessment.
Service providers benefit by establishing internal SLAs for things like returning phone calls, emails and responding on-site. Whether it’s 30 minutes, 4 hours, or 24 hours, it’s important customers feel providers are responsive to their requests. Not just emergencies, but everyday responses too.
Call centers typically track caller wait times. Service providers can track response times. And their attainment of SLAs or other Key Performance Indicators (KPIs) of responsiveness. This is great performance data to present to customers in Departmental Performance Reviews.
#3 Know What You’re Doing
ASSURANCE: Service providers are expected to be the experts of the service they’re delivering. It’s a given.
SERVQUAL research showed it’s important to communicate that expertise to customers. If a service provider is highly skilled, but customers don’t see that, their confidence in that provider will be lower. And their assessment of that provider’s service quality will be lower.
RAISE CUSTOMER AWARENESS OF YOUR COMPETENCIES
Service providers must communicate their expertise and competencies – before they do the work. This can be done in many ways that are repeatedly seen by customers, such as:
- Display industry certifications on patches, badges or buttons worn by employees
- Include certification logos on emails, letters & reports
- Put certifications into posters, newsletters & handouts
By communicating competencies, providers can help manage customer expectations. And influence their service quality assessment in advance.
#4 Care about Customers as much as the Service
EMPATHY: Services can be performed completely to specifications. Yet customers may not feel provider employees care about them during delivery. And this hurts customers’ assessments of providers’ service quality.
For example, a day porter efficiently cleans up a spill in a lobby. However, during the clean-up the porter doesn’t smile, make eye contact, or ask the customer if there is anything else they could do. In this hypothetical, the provider’s service was performed fully, but the customer didn’t feel the provider employee cared. And it’s not necessarily the employee’s fault. They may not know how they’re being judged. They may be overwhelmed, inadequately trained, or disinterested.
SERVICE DELIVERY MATTERS
How providers deliver service can be as important as what was done. Provider employees should be trained in how to interact with customers and their end-users. Even a brief session during initial orientation helps. Anything to help them understand their impact on customers’ assessment of service quality.
#5 Look Sharp
TANGIBLES: Even though this is the least important dimension, appearance matters. Just not as much as the other dimensions.
Service providers will still want to make certain their employees’ appearance, uniforms, equipment, and on-site work areas (closets, service offices, etc.) look good. The danger is for providers to make everything look sharp, and then fall short on RELIABILITY or RESPONSIVENESS.
At the End of the Day
Customers’ assessments include expectations and perceptions across all five SERVQUAL dimensions. Service providers need to work on all five, but emphasize them in order of importance. If sacrifices must be made, use these dimensions as a guide for which ones to rework.
Also, providers can use SERVQUAL dimensions in determining specific customer and site needs. By asking questions around these dimensions, providers can learn how they play out at a particular location or bid opportunity. Which dimensions are you strongest in?
* For a description of the SERVQUAL methodology, see the following post: SERVQUAL Model: A Multi-Item Tool for Comparing Customer Perceptions vs. Expectations
Looking for a tried and true model to understand your service quality?
The SERVQUAL model is an empirical model that has been around for nearly 30 years. While not new, it is a foundation of many of the service quality and customer experience concepts in use today. It is a gap model, designed to measure the gap between customer perceptions and customer expectations.
SERVQUAL describes the customer experience in terms of five dimensions:
1. TANGIBLES – Appearance of physical facilities, equipment, personnel, and communication materials
2. RELIABILITY – Ability to perform the promised service dependably and accurately
3. RESPONSIVENESS – Willingness to help customers and provide prompt service
4. ASSURANCE – Knowledge and courtesy of employees and their ability to convey trust and confidence
5. EMPATHY – Caring, individualized attention the firm provides its customers
Each of these five dimensions is measured using a survey instrument consisting of individual attributes which roll up into each dimension.
For example, the five dimensions may comprise individual attributes such as the following:
• Appearance/cleanliness of physical facilities
• Appearance/cleanliness of personnel
• Appearance/cleanliness of communication/marketing materials
• Appearance/cleanliness of equipment
• Perform services as promised/right the first time
• Perform services on time
• Follow customer’s instructions
• Show interest in solving problems
• Telephone calls/other inquiries answered promptly
• Willingness to help/answer questions
• Problems resolved quickly
• Knowledgeable employees/job knowledge
• Employees instill confidence in customer
• Employee efficiency
• Employee recommendations
• Questioning to understand needs
• Interest in helping
• Individualized/personal attention
• Ease of understanding/use understandable terms
• Understand my needs/recommending products to best fit my needs
• The employees have my best interests at heart
Call to Action
Research without a call to action may be informative, but not very useful. By measuring both customer perceptions and expectations, SERVQUAL gives managers the ability to prioritize investments in the customer experience based not only on their performance, but performance relative to customer expectations.
The first step in taking action on SERVQUAL results is to calculate a Gap Score by simply subtracting the expectation rating from the perception rating for each attribute (Gap Score = Perception – Expectation). This step alone will give you a basis for ranking each attribute based on its gap between customer perceptions and expectations.
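The Gap Score calculation can be sketched in a few lines of Python. The attribute names and ratings below are hypothetical illustrations, not real survey data:

```python
# Gap Score = Perception - Expectation, computed per attribute.
# Attribute names and ratings are hypothetical (7-point scale assumed).
perceptions  = {"Perform services on time": 5.8, "Problems resolved quickly": 5.1}
expectations = {"Perform services on time": 6.4, "Problems resolved quickly": 6.2}

gap_scores = {attr: perceptions[attr] - expectations[attr] for attr in perceptions}

# Rank attributes by gap: the most negative gap is the largest shortfall
# relative to customer expectations.
ranked = sorted(gap_scores.items(), key=lambda kv: kv[1])
```

Sorting by Gap Score immediately surfaces the attributes where performance falls furthest below expectations.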
Service Quality Score
In addition to ranking service attributes, the Gap Scores can be used to calculate an overall Service Quality Score, weighted by the relative importance customers assign to each of the five service quality dimensions.
The first step in calculating a Service Quality Score is to average the Gap Score of each attribute within each dimension. This will give you the Gap Score for each dimension (GSD). Averaging the dimension Gap Scores will yield an Unweighted Service Quality Score.
From this unweighted score it is a three step process to calculate a Weighted Service Quality Score.
First, determine importance weights by asking customers to allocate a fixed number of points (typically 100) across each of the five dimensions based on how important the dimension is to them. This point allocation will yield a weight for each dimension based on its importance.
The second step is to multiply the Gap Score for each dimension (GSD) by its importance weight. The final step is to simply sum this product across all five dimensions; this will yield a Weighted Service Quality Score.
What does all this mean? See the following post for discussion of the implications of SERVQUAL for customer experience managers: The 5 Service Dimensions All Customers Care About.
In a previous post we discussed the importance of research objectives in program design. A natural progression of this subject is using research objectives to design a successful questionnaire.
All too often, I find clients who have gone online, found a questionnaire, and implemented it into a survey process, in effect handing research design over to an anonymous author on the Internet who has given no consideration to their specific needs. Inexperience with both the art and science of questionnaire design causes them to miss out on building a research tool customized to their specific needs.
While questionnaire design is a professional skill fraught with many perils for the inexperienced, the following process will eliminate some common mistakes.
First, define research objectives. Do not skip this step. Defining research objectives prior to making any other decisions about the program is by far the most effective way to make sure your program stays on track, on budget, and produces results that drive business success. See the previous post regarding research objectives. Once a set of objectives has been defined, questionnaire design naturally falls out of the process: simply write a survey question for each objective.
For example, consider the following objective set:
1. Determine the level of customer satisfaction and provide a reference point for other satisfaction-based analysis.
2. Identify which service attributes drive satisfaction and which investments yield the greatest improvement in customer satisfaction.
3. Identify moments of truth where the danger of customer attrition is highest.
4. Track changes in customer satisfaction over time.
For each objective, write a survey question. For the first objective (overall satisfaction), write an overall satisfaction question. For objective #2 (attribute satisfaction), develop a list of service attributes and measure satisfaction relative to each. Continue the process for each objective for which a survey question can be written.
Question order is important and the placement of every question should be considered to avoid introducing bias into the survey as a result of question order. Generally, we like to place overall satisfaction questions early in the survey to avoid biasing the results with later attribute questions.
Similarly, question phrasing needs to be carefully considered to avoid biasing the responses. Keep phrasing neutral to avoid biasing the respondents one way or the other. Sure there is a temptation to use overly positive language with your customers, but this really is a bad practice.
Finally, anticipate the analysis. As you write the questionnaire, consider how the results will be reported and analyzed. Anticipating the analysis will make sure the survey instrument captures the data needed for the desired analysis.
Research design is a professional art. If you are not sure what you are doing, seek a professional to help you rather than field poor research with a do-it-yourself tool.
How do you make research actionable?
With the advent of do-it-yourself survey tools, there is a trend away from professional research design processes. One can search on line for a questionnaire, grab it off the internet, and field it on the cheap with a do-it-yourself survey tool with no consideration of the research needs at hand. This, in effect, hands research design over to an anonymous author on the Internet who has given no consideration to your specific needs.
Defining research objectives prior to making any other decisions about the program is by far the most effective way to make sure your program stays on track, on budget, and produces results that drive business success. It sounds very simple, and for the most part it is; however, when I ask potential clients what their research objectives are, I am always surprised by how many cannot list anything other than the most general of objectives.
Defining research objectives is a fairly simple process. First, generate a list of everything you want to know as a result of the research.
For example, you may come up with the following list:
- How satisfied are our customers?
- Which key factors drive satisfaction among our customers?
- What are the causes of customer dissatisfaction?
- How can we measure customer satisfaction over time?
- Which business processes can most improve customer satisfaction and increase our financial returns?
- How can we measure the relationships between customer satisfaction, profitability and purchase or retention behavior?
- How can we evaluate our customers’ referral activity?
- How can we measure the value of our customers’ purchasing behavior?
- How can we identify changes in our customers’ purchasing or referral behaviors over time?
Note, these are not survey questions; they are questions to which you want answers. This is what you want to know.
Once you have developed a list of what you want to know as a result of the research, the next step is to map each of your questions to a specific research objective. For each question you should write a clear objective starting with a verb such as: determine, identify, track, link, measure, etc. Starting with verbs is an excellent way to make sure you can take action on the results.
So, continuing with the example, the above list of questions may map into the following set of research objectives:
| What do you want to know? | Objectives |
| --- | --- |
| How satisfied are our customers? | Determine the level of customer satisfaction and provide a reference point for other satisfaction-based analysis. |
| Which key factors drive satisfaction among our customers? | Identify which service attributes drive satisfaction and which investments yield the greatest improvement in customer satisfaction. |
| What are the causes of customer dissatisfaction? | Identify moments of truth where the danger of customer attrition is highest. |
| How can we measure customer satisfaction over time? | Track changes in customer satisfaction over time. Determine if changes in satisfaction are significant. |
| Which business processes can most improve customer satisfaction and increase our financial returns? | Link key service attributes to specific business processes. Identify which processes maximize ROI. |
| How can we measure the relationships between customer satisfaction, profitability and purchase or retention behavior? | Identify the relationship between customer satisfaction and customer behaviors such as retention, purchase behavior, and likelihood of referral, which drive profitability. |
| How can we evaluate our customers’ referral activity? | Conduct loyalty-based customer satisfaction analysis, using net promoters and customer advocacy as a measurement for customer loyalty. |
| How can we measure the value of our customers’ purchasing behavior? | Determine the relationship between customer satisfaction and purchase behavior. Identify the ROI of satisfaction-based management. Make a financial case to all stakeholders (management, employees and shareholders) that the customer experience impacts financial performance. |
| How can we identify changes in our customers’ purchasing or referral behaviors over time? | Continue to track the relationship between satisfaction and purchase behavior. Analyze satisfaction by customer segments and the financial value of each individual segment. |
Once a clear set of research objectives is defined, you now have a road map to inform all subsequent decisions about sample frame, data collection, survey instrument, and analysis plan. Each of these issues deserves more attention than can be addressed in what is intended to be a brief blog post. In future posts, we will look into each of these issues individually.
Frontline customer-facing employees are a vastly underutilized resource for understanding the customer experience. They spend the majority of their time at the company-customer interface, and as a result tend to be unrecognized experts in the customer experience. Conversely, the further management is removed from the customer interface, the less they truly understand about what is going on.
One tool to both leverage frontline experience and identify any perceptual gaps between management and the frontline is to survey all levels of the organization to gather impressions of the customer experience.
Typically, we start by asking employees to put themselves in the customers’ shoes and to ask how customers would rate their satisfaction with the customer experience, including specific dimensions and attributes of the experience. A key call-to-action element of these surveys tends to be a question asking employees what they think customers would most like or dislike about the service delivery.
Next we focus employees on their own experience, asking the extent to which they believe they have the tools, training, processes, policies, customer information, coaching, staffing levels, empowerment, and support from both their immediate supervisor and senior management to deliver on the company’s service promise. Call-to-action elements can be designed into this portion of the research by asking what, in their experience, leads to customer frustration or disappointment, and by soliciting suggestions for improvement. Perhaps most interesting, we ask what strategies the employee uses to make customers happy; this is an excellent source for identifying best practices and potential coaches.
Finally, comparing results across the organization identifies any perceptual gaps between the frontline and management. This can be a very illuminating activity.
And why not leverage employees as a resource for understanding the customer experience? They spend most of their time at the company-customer interface, and are therefore experts in what is actually going on. What’s more, employees and customers generally want the same things:
| Customers want… | Employees want… |
| --- | --- |
| To get what they are promised | The tools/systems/policies to do their job |
| Their problems resolved | Empowerment to solve problems |
| Their needs listened to/understood | More/better feedback |
| Knowledgeable employees; adequate information | More training; more/better feedback |
| Employees to take the initiative, take responsibility, represent the company | Empowerment; clear priorities; inclusion in the company’s big picture |
| The company to value their business | Clear priorities; the tools/systems/policies to do their job |
The dominant notion of customer advocacy is not very customer-centric. Its focus is on what the customer can do for the bank by referring friends, relatives, and colleagues for their banking needs. A more customer-centric notion, with perhaps a stronger relationship to customer loyalty, turns this dominant notion on its head, making the bank an advocate on behalf of the customer. Customers who trust their bank to do the right thing are more likely to remain loyal.
Measuring customer advocacy is both simple and useful; just ask your customers if they agree with the following statement: “My bank cares about me, not just the bottom line.” I call this the customer advocacy statement. Research has demonstrated a positive relationship between agreement with this statement and loyalty to a financial institution. This makes intuitive sense; customers who agree trust the bank to do right by them and will remain loyal.
Here is how we ask the question: as part of a broader survey, we ask our clients’ customers to rate, on an agreement scale, the extent to which they agree with the above statement.
Research without clear call to action elements may be interesting, but not very useful. How can a manager put this question to use?
The answer to this is twofold:
First, the response to this question can be correlated to a battery of service attributes. This will yield a means of judging the relative importance of each attribute in terms of the strength of its relationship to loyalty. Managers now have a basis to make informed decisions as to which investments will yield the greatest ROI in terms of improving customer loyalty.
Second, investigate all cases where agreement to this question is low. These are customers at risk. A researcher can drill into the survey responses of these customers to determine what caused the low rating. Tracking the causes will inform management of potential causes of runoff that require attention.
Research without a call to action may be informative, but not very useful. One way to build a call to action element into your customer experience research is to add a measure of customer loyalty. Loyalty can serve as a basis for evaluating which elements of the service mix are most important in terms of driving customer loyalty, and as result, have more potential ROI.
Measuring customer loyalty, however, in the context of a survey is difficult. Surveys best measure attitudes and perceptions. Loyalty is a behavior. Kinesis has had success with a model for estimating customer loyalty based on two measurements:
- Promoter: This is measured as the likelihood of referral to a friend, relative, or colleague, using a numeric scale.
- Trust: This is measured by capturing agreement with the statement, “the company cares about me, not just the bottom line,” again answered on a numeric scale.
These two measures are combined to calculate a loyalty index, which visually is the linear distance between the plot of these two measurements and the highest possible value on each scale (the point where both promoter and trust receive the highest possible rating).
Mathematically, this index can be calculated as the distance from the ideal point:

Loyalty Index = √( (ST − T)² + (SP − P)² )

where:
- T = Trust rating
- P = Promoter rating
- ST = Number of points on the Trust scale
- SP = Number of points on the Promoter scale

Note this index measures the distance from the ideal or most loyal state, so lower values estimate stronger loyalty.
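Under one direct reading of those definitions, the index can be sketched in Python. The scale sizes used as defaults here (a 5-point Trust scale and a 10-point Promoter scale) are hypothetical; substitute the number of points on your own survey scales:

```python
import math

def loyalty_index(trust, promoter, trust_scale=5, promoter_scale=10):
    """Distance from the ideal point (top rating on both scales).

    trust_scale / promoter_scale are hypothetical defaults, not a
    prescribed survey design. Lower values indicate stronger loyalty.
    """
    return math.sqrt((trust_scale - trust) ** 2
                     + (promoter_scale - promoter) ** 2)

# A customer at the top of both scales sits at the ideal point (index 0).
ideal = loyalty_index(5, 10)    # 0.0
# A customer one point short on trust and two points short on promoter:
typical = loyalty_index(4, 8)   # sqrt(1 + 4) ≈ 2.236
```

Because the index is a distance from the ideal state, smaller numbers are better, which is worth flagging clearly in any report.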
Calculating a loyalty index has value, but limited utility on its own; the index alone does not give management much direction upon which to take action. One strategy to increase the actionability of the research is to use this index as a means to identify the service attributes that drive customer loyalty. Not all service attributes are equal; some play a larger role than others in driving customer loyalty.
So…how does the research determine an attribute’s role or relationship to customer loyalty? One tool is to capture satisfaction ratings of specific service attributes and determine their correlation to the loyalty statistic. The Pearson correlation coefficient is a measure of the strength of a linear association between two variables.
The following table contains a hypothetical list of service attributes and their correlation to the loyalty index. Note that lower values of the loyalty index indicate stronger loyalty, so the Pearson correlations to the attribute satisfaction ratings are negative. The closer the correlation is to -1, the stronger the relationship to loyalty.
| Service Attribute | Pearson Correlation to Loyalty Index |
| --- | --- |
| Perform services as promised/right the first time | -0.62 |
| Show interest in solving problems | -0.61 |
| Problems resolved quickly | -0.56 |
| Willingness to help/answer questions | -0.55 |
| Perform services on time | -0.54 |
| Employees instill confidence in customer | -0.52 |
| Questioning to understand needs | -0.45 |
| Appearance/cleanliness of personnel | -0.42 |
| Knowledgeable employees/job knowledge | -0.41 |
| Appearance/cleanliness of physical facilities | -0.37 |
As this table illustrates, the service attributes with the strongest correlation to the loyalty index are: perform services as promised/right the first time (-0.62), show interest in solving problems (-0.61), and problems resolved quickly (-0.56). Under this hypothetical example, managers can conclude that, of the attributes measured, these three are the strongest drivers of customer loyalty. They can now use this research to make informed judgments as to where investments in the service mix will yield the greatest ROI.
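The correlation calculation itself is straightforward. A minimal sketch in Python, using hypothetical respondent-level data (a real analysis would use your survey records):

```python
import math

# Hypothetical respondent-level data: satisfaction ratings for one attribute
# and each respondent's loyalty index (lower index = stronger loyalty).
satisfaction = [7, 6, 7, 5, 4, 6, 3, 5]
loyalty_idx  = [0.5, 1.2, 0.8, 2.0, 3.1, 1.5, 4.2, 2.4]

n = len(satisfaction)
mean_s = sum(satisfaction) / n
mean_l = sum(loyalty_idx) / n

cov   = sum((s - mean_s) * (l - mean_l)
            for s, l in zip(satisfaction, loyalty_idx))
var_s = sum((s - mean_s) ** 2 for s in satisfaction)
var_l = sum((l - mean_l) ** 2 for l in loyalty_idx)

# Pearson r: as satisfaction rises the loyalty index falls, so r comes
# out strongly negative for this attribute.
pearson_r = cov / math.sqrt(var_s * var_l)
```

In practice a statistics library (for example, `scipy.stats.pearsonr`) would also report significance, which matters at typical survey sample sizes.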
Correlating service attributes to loyalty is not the end of the analysis; the next step is to further put this research to action by layering in the overall performance of each attribute relative to its relationship to loyalty.