
Research Tools to Monitor Planned Interactions Through the Customer Life Cycle

As we explored in an earlier post, 3 Types of Customer Interactions Every Customer Experience Manager Must Understand, there are three types of customer interactions: Stabilizing, Critical, and Planned.

The third of these, “planned” interactions, is intended to increase customer profitability through up-selling and cross-selling.

These interactions are frequently triggered by changes in the customer’s purchasing patterns, account usage, financial situation, family profile, etc. CRM analytics combined with Big Data are becoming quite effective at recognizing such opportunities and prompting action from service and sales personnel. Customer experience managers should have a process to record and analyze the quality of execution of planned interactions with the objective of evaluating the performance of the brand at the customer-brand interface – regardless of the channel.

The key to an effective strategy for planned interactions is appropriateness. Triggered requests for increased spending must be made in the context of the customer’s needs and permission; otherwise, the requests will come off as clumsy and annoying. By aligning information about execution quality (cause) and customer impressions (effect), customer experience managers can build a more effective and appropriate approach to planned interactions.

Research Plan for Planned Interactions

The first step in designing a research plan to test the efficacy of these planned interactions is to define the campaign. Ask yourself, what customer interactions are planned based on customer behavior? Mapping the process will define your research objectives, allowing an informed judgment of what to measure and how to measure it.

For example, assume that after acquisition and onboarding, a brand has a campaign of planned interactions triggered by tenure, recency, frequency, share of wallet, and monetary value of transactions. These planned interactions are segmented into the following phases of the customer lifecycle: engagement, growth, and retention.

[Figure: planned interactions mapped across the customer life cycle – engagement, growth, and retention phases]

 

Engagement Phase

Often it is instructive to think of customer experience research in terms of the brand-customer interface, employing different research tools to study the customer experience from both sides of this interface.

In our example above, management may measure the effectiveness of planned experiences in the engagement phase with the following research tools:

Post-Transaction Surveys

Post-transaction surveys are event-driven: a transaction or service interaction determines whether the customer is selected for a survey, targeting that customer shortly after the interaction. As the name implies, the purpose of this type of survey is to measure satisfaction with a specific transaction.

Transactional Mystery Shopping

Mystery shopping is about alignment. It is an excellent tool to align sales and service behaviors to the brand. Mystery shopping focuses on the behavioral side of the equation, answering the question: are our employees exhibiting the sales and service behaviors that will engage customers with the brand?

Overall Satisfaction Surveys

Overall satisfaction surveys measure customer satisfaction among the general population of customers, regardless of whether or not they recently conducted a transaction.  These surveys give managers a feel for satisfaction, engagement, image and positioning across the entire customer base, not just active customers.

Alternative Delivery Channel Shopping

Mystery shopping of alternative delivery channels, such as the website, allows the managers of these channels to test ease of use, navigation and the overall customer experience.

Employee Surveys

Employee surveys often measure employee satisfaction and engagement. However, they can also be employed to understand what is going on at the customer-employee interface by leveraging employees as a valuable and inexpensive source of customer experience information. They not only provide intelligence into the customer experience, but also evaluate the level of support within the organization and identify perceptual gaps between management and frontline personnel.

 

Growth Phase

In the growth phase, one may measure the effectiveness of planned experiences on both sides of the customer interface with the following research tools:

Awareness Surveys

Awareness of the brand, its products and services, is central to planned service interactions. Managers need to know how awareness and attitudes change as a result of these planned experiences.

Cross-Sell Mystery Shopping

In these unique mystery shops, mystery shoppers are seeded into the lead/referral process.  The sales behaviors and their effectiveness are then evaluated in an outbound sales interaction.

Wallet Share Surveys

These surveys are used to evaluate customer engagement with and loyalty to the brand. Specifically, they determine whether customers consider the brand their primary provider and identify potential roadblocks to wallet-share growth.

 

Retention Phase

Finally, planned experiences within the retention phase of the customer lifecycle may be monitored with the following tools:

Lost Customer Surveys

Lost customer surveys identify sources of run-off or churn to provide insight into improving customer retention.

Life Cycle Mystery Shopping

Shoppers interact with the company over a period of time, across multiple touch points, providing broad and deep observations about sales and service alignment to the brand and performance throughout the customer lifecycle across multiple channels.

Comment Listening

Comment tools are not new, but with modern Internet-based technology they can be used as a valuable feedback tool to identify at-risk customers and mitigate the causes of their dissatisfaction.

 

Call to Action – Make the Most of the Research

Research without a call to action may be interesting, but not very useful. Regardless of the research choices you make, be sure to build call-to-action elements into the research design.

For mystery shopping, we find that linking observations to a dependent variable, such as purchase intent, identifies which sales and service behaviors drive that intent, informing decisions with respect to training and incentives that reinforce those behaviors.
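As a rough illustration, a driver analysis of this kind can be sketched in a few lines of Python. The shop records, behavior names, and intent ratings below are hypothetical, not Kinesis data:

```python
# Hypothetical driver analysis: compare mean purchase intent (1-5 scale)
# when a sales behavior was observed (1) vs. not observed (0).
from statistics import mean

shops = [
    {"greeted": 1, "discussed_benefits": 1, "intent": 5},
    {"greeted": 1, "discussed_benefits": 0, "intent": 3},
    {"greeted": 0, "discussed_benefits": 0, "intent": 2},
    {"greeted": 1, "discussed_benefits": 1, "intent": 4},
    {"greeted": 0, "discussed_benefits": 1, "intent": 3},
]

def mean_intent_lift(shops, behavior):
    """Difference in mean purchase intent when a behavior is vs. isn't observed."""
    present = [s["intent"] for s in shops if s[behavior] == 1]
    absent = [s["intent"] for s in shops if s[behavior] == 0]
    return mean(present) - mean(absent)

for behavior in ("greeted", "discussed_benefits"):
    print(behavior, round(mean_intent_lift(shops, behavior), 2))
```

In practice one would use a larger sample and a proper model (e.g., regression) to control for overlapping behaviors, but the idea is the same: rank behaviors by their association with the dependent variable.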

For surveys of customers, we recommend testing the effectiveness of the onboarding process by benchmarking three loyalty attitudes:

  • Would Recommend: The likelihood of the customer recommending the brand to a friend, relative, or colleague.
  • Customer Advocacy: The extent to which the customer agrees with the statement, “You care about me, not just the bottom line.”
  • Primary Provider: Does the customer consider the brand their primary provider for similar services?

As you contemplate campaigns to build planned experiences into your customer experience, it doesn’t matter what specific model you use.  The above model is simply for illustrative purposes.  As you build your own model, be sure to design customer experience research into the planned experiences to monitor both the presence and effectiveness of these planned experiences.


 

Click Here For More Information About Kinesis' Research Services

3 Types of Customer Interactions Every Customer Experience Manager Must Understand


Every time a customer interacts with a brand, they learn something about that brand, and adjust their behavior based on what they learn.  They will adjust their behavior in ways that are either profitable or unprofitable for the brand.  The implication of this proposition is that the customer experience can be managed in such a way to influence customer behavior in profitable ways.

In order to understand how to drive customer behaviors via the customer experience, it is important first to define the customer behaviors you wish to influence, and to align marketing message, performance standards, training content, employee incentives and measurement systems to encourage those behaviors.

It is impossible, of course, to plan every customer experience or to ensure that every experience occurs exactly as intended. However, companies can identify the types of experiences that impart the right kind of information to customers at the right times. It is useful to group these experiences into three categories of company/customer interaction:  Stabilizing, Critical, and Planned.

Stabilizing

Stabilizing interactions promote customer retention, particularly in the early stages of the relationship.

New customers are at the highest risk of defection.  As customers become more familiar with a brand they adjust their expectations accordingly, however new customers are more likely to experience disappointment, and thus more likely to defect. Turnover by new customers is particularly hard on profits because many defections occur prior to break-even, resulting in a net loss for the company. Thus, experiences that stabilize the customer relationship early on ensure that a higher proportion of customers will reach positive profitability.

The keys to an effective stabilizing strategy are education, competence and consistency.

Education influences expectations, helping customers develop realistic expectations. It goes beyond simply informing customers about the products and services offered by the company. It systematically informs new customers how to use the brand’s services more effectively and efficiently, how to obtain assistance, how to complain, and what to expect as the relationship progresses. In addition to influencing expectations, systematic education leads to greater efficiency in the way customers interact with the company, thus driving down the cost of customer service and support.

Critical

Critical interactions are service encounters that lead to memorable customer experiences.  While most service is routine, from time to time a situation arises that is out of the ordinary: a complaint, a question, a special request, a chance for an employee to go the extra mile. The outcomes of these critical incidents can be either positive or negative, depending upon the way the company responds to them; however, they are seldom neutral. The longer a customer remains with a company, the greater the likelihood that one or more critical interactions will have occurred.

Because they are memorable and unusual, critical interactions tend to have a powerful effect on the customer relationship. We often think of these as “moments of truth,” where the brand has an opportunity to solidify the relationship, earning a loyal customer, or risks the customer’s defection. Positive outcomes lead to “customer delight” and word-of-mouth endorsements, while negative outcomes lead to customer defections, diminished share of wallet and unfavorable word-of-mouth.

The key to an effective critical interaction strategy is opportunity. Systems and processes must be in a position to react to these critical moments of truth.

An effective customer experience strategy should include systems for recording critical interactions, analyzing trends and patterns, and feeding that information back to the organization. Employees can then be trained to recognize critical opportunities, and empowered to respond to them in such a way that they will lead to positive outcomes and desired customer behaviors.

Planned

Planned interactions are intended to increase customer profitability through up-selling and cross-selling. These interactions are frequently triggered by changes in the customers’ purchasing patterns, account usage, financial situation, family profile, etc. CRM analytics combined with Big Data are becoming quite effective at recognizing such opportunities and prompting action from service and sales personnel. Customer experience managers should have a process to record and analyze the quality of execution of planned interactions with the objective of evaluating the performance of the brand at the customer-brand interface – regardless of the channel.

The key to an effective strategy for planned interactions is appropriateness. Triggered requests for increased spending must be made in the context of the customers’ needs and permission; otherwise, the requests will come off as clumsy and annoying. By aligning information about execution quality (cause) and customer impressions (effect), customer experience managers can build a more effective and appropriate approach to planned interactions.

 

For additional perspectives on research techniques to monitor the customer experience in the stabilizing phase of the relationship, see the post: Onboarding Research: Research Techniques to Track Effectiveness of Stabilizing New Customer Relationships.

For additional perspectives on a research methodology to investigate “Critical” experiences, see the post: Critical Incident Technique: A Tool to Identify and Prepare for Your Moments of Truth.

For additional perspectives on research methodologies to investigate “Planned” experiences throughout the customer life cycle, see the post: Research Tools to Monitor Planned Interactions Through the Customer Life Cycle.
 


The Human Element: Sales and Service, Bank’s Last Link in the Marketing Chain

What if I told you that after all your efforts with marketing (product, positioning and price), there is a one-in-ten chance the branch representatives will undermine the sale?

Now more than ever, it is critical for banks to establish themselves as the primary provider of financial services, not only for deposit accounts but across a variety of financial products and services.  Increasing the average products per customer will require a strategic approach to both product design and marketing.  However, at the end of this strategic marketing process, there is the human element, where prospective customers must interact with bank employees to complete the sales process.

[Image: bank teller waiting on a customer]

As part of our services to our clients, Kinesis tracks purchase intent as a result of in-branch sales presentations. According to our research, 10% of in-branch sales presentations observed by mystery shoppers result in negative purchase intent.

What do these failed sales presentations look like?

Here are some quotes describing the experience:

“There was no personal attention.  The banker did not seem to care if I was there or not.  At the teller line, there was only one teller that seemed to care that there were several people waiting.  No one moved with a sense of urgency.  There was no communication materials provided.”

Here’s another example…

“It was painfully obvious that the banker was lacking basic knowledge of the accounts.”

Yet another…

“Brian did not give the impression that he wanted my business.  He did not stand up and shake my hand when I went over to his desk.  He very rarely made eye contact.  I felt like he was just going through the motions. He did not ask for my name or address me by my name. He told me about checking account products but failed to inquire about my situation or determine what needs I have or might have in the future. He did not wrap up the recommendation by going over everything nor did he ask for my business. He did not thank me for coming in.”

In contrast, here is what the shops with positive intent look like:

“The appearance of the bank was comfortable and very busy in a good way. The customers were getting tended to and the associates had the customers’ best interests in mind. The response time was amazing and I felt as if the associate was sincere about wanting me as a customer, but he was not pushy or demanding about it.”

Now…after all the effort and expense of a strategic cross-sell strategy, which of the above experiences do you want your customers to encounter?

Would it be acceptable to you as a marketer, at the end of a strategic marketing campaign, to have 10% of the sales presentations undermine its success?

These are rhetorical questions.

Time and time again, in study after study, we consistently observe that purchase intent is driven by two dimensions of the customer experience: reliability and empathy.  Customers want bankers who care about them and their needs and have the ability to satisfy those needs. Specifically, our research suggests the following behaviors are strongly related to purchase intent:

  • Friendly/Smile/Courteous
  • Greeting/Stand to Greet/Acknowledge Wait
  • Interest in Helping/Offer Assistance
  • Discuss Benefits/Solutions
  • Promised Services Get Done
  • Accuracy
  • Professionalism
  • Express Appreciation/Gracious
  • Personalized Comment (such as, How are you?)
  • Listen Attentively/Undivided Attention

As part of any strategic marketing campaign to both bring in new customers as well as increase wallet share of existing customers, it is incumbent on the institution to install appropriate customer experience training, sales and service monitoring, linked with incentives and rewards structures to motivate sales and service behaviors which drive purchase intent.




Click Here For More Information About Kinesis' Bank CX Research Services

The 5 Service Dimensions All Customers Care About

Reprinted with permission from Chris Arlen, of Service Performance.

by CHRIS ARLEN

Not All Dimensions Are Equal

All dimensions are important to customers, but some more than others.

Service providers need to know which are which to avoid majoring in minors. At the same time they can’t focus on only one dimension and let the others suffer.

SERVQUAL research showed dimensions’ importance to each other by asking customers to assign 100 points across all five dimensions.*

Here’s their importance to customers.

The 5 Service Dimensions Customers Care About

[Figure: the five SERVQUAL dimensions ranked by the importance customers assign to them]

What’s this mean for service providers?

#1 Just Do It

RELIABILITY: Do what you say you’re going to do when you said you were going to do it.

Customers want to count on their providers. They value that reliability. Don’t providers yearn to find out what customers value? This is it. It’s three times more important to be reliable than to have shiny new equipment or flashy uniforms.

That doesn’t mean you can have ragged uniforms as long as you’re reliable; service providers have to do both. But providers’ first and best efforts are better spent making service reliable.

Whether it’s periodics on schedule, on-site response within Service Level Agreements (SLAs), or Work Orders completed on time.

#2 Do It Now

RESPONSIVENESS: Respond quickly, promptly, rapidly, immediately, instantly.

Waiting a day to return a call or email doesn’t make it. Even if customers are chronically slow in getting back to providers, responsiveness is more than 1/5th of their service quality assessment.

Service providers benefit by establishing internal SLAs for things like returning phone calls, emails and responding on-site. Whether it’s 30 minutes, 4 hours, or 24 hours, it’s important customers feel providers are responsive to their requests. Not just emergencies, but everyday responses too.

REPORTING RESPONSIVENESS

Call centers typically track caller wait times. Service providers can track response times. And their attainment of SLAs or other Key Performance Indicators (KPIs) of responsiveness. This is great performance data to present to customers in Departmental Performance Reviews.

#3 Know What You’re Doing

ASSURANCE: Service providers are expected to be the experts of the service they’re delivering. It’s a given.

SERVQUAL research showed it’s important to communicate that expertise to customers. If a service provider is highly skilled, but customers don’t see that, their confidence in that provider will be lower. And their assessment of that provider’s service quality will be lower.

RAISE CUSTOMER AWARENESS OF YOUR COMPETENCIES

Service providers must communicate their expertise and competencies – before they do the work. This can be done in many ways that are repeatedly seen by customers, such as:

  • Display industry certifications on patches, badges or buttons worn by employees
  • Include certification logos on emails, letters & reports
  • Put certifications into posters, newsletters & handouts

By communicating competencies, providers can help manage customer expectations. And influence their service quality assessment in advance.

#4 Care about Customers as much as the Service

EMPATHY: Services can be performed completely to specifications. Yet customers may not feel provider employees care about them during delivery. And this hurts customers’ assessments of providers’ service quality.

For example, a day porter efficiently cleans up a spill in a lobby but, during the cleanup, doesn’t smile, make eye contact, or ask the customer if there is anything else they could do. In this hypothetical, the provider’s service was performed fully, but the customer didn’t feel the provider employee cared. And it’s not necessarily the employee’s fault. They may not know how they’re being judged. They may be overwhelmed, inadequately trained, or disinterested.

SERVICE DELIVERY MATTERS

How providers deliver service can be as important as what was done. Provider employees should be trained in how to interact with customers and their end-users. Even a brief session during initial orientation helps. Anything to help them understand their impact on customers’ assessment of service quality.

#5 Look Sharp

TANGIBLES: Even though this is the least important dimension, appearance matters. Just not as much as the other dimensions.

Service providers will still want to make certain their employees’ appearance, uniforms, equipment, and on-site work areas (closets, service offices, etc.) look good. The danger is for providers to make everything look sharp, and then fall short on RELIABILITY or RESPONSIVENESS.

At the End of the Day

Customers’ assessments include expectations and perceptions across all five SERVQUAL dimensions. Service providers need to work on all five, but emphasize them in order of importance. If sacrifices must be made, use these dimensions as a guide for which ones to rework.

Also, providers can use SERVQUAL dimensions in determining specific customer and site needs. By asking questions around these dimensions, providers can learn how they play out at a particular location/bid opportunity. Which dimensions are you majoring in?

* For a description of the SERVQUAL methodology, see the following post: SERVQUAL Model: A Multi-Item Tool for Comparing Customer Perceptions vs. Expectations



SERVQUAL Model: A Multi-Item Tool for Comparing Customer Perceptions vs. Expectations


Looking for a tried and true model to understand your service quality?

The SERVQUAL model is an empirical model that has been around for nearly 30 years. While not new, it is a foundation of many of the service quality and customer experience concepts in use today. It is a gap model designed to measure gaps between customer perceptions and customer expectations.

SERVQUAL describes the customer experience in terms of five dimensions:

1. TANGIBLES – Appearance of physical facilities, equipment, personnel, and communication materials
2. RELIABILITY – Ability to perform the promised service dependably and accurately
3. RESPONSIVENESS – Willingness to help customers and provide prompt service
4. ASSURANCE – Knowledge and courtesy of employees and their ability to convey trust and confidence
5. EMPATHY – Caring, individualized attention the firm provides its customers

Each of these five dimensions is measured using a survey instrument consisting of individual attributes which roll up into each dimension.

For example, each of the five dimensions may consist of the following individual attributes:

Tangibles
• Appearance/cleanliness of physical facilities
• Appearance/cleanliness of personnel
• Appearance/cleanliness of communication/marketing materials
• Appearance/cleanliness of equipment

Reliability
• Perform services as promised/right the first time
• Perform services on time
• Follow customer’s instructions
• Show interest in solving problems

Responsiveness
• Telephone calls/other inquiries answered promptly
• Willingness to help/answer questions
• Problems resolved quickly

Assurance
• Knowledgeable employees/job knowledge
• Employees instill confidence in customer
• Employee efficiency
• Employee recommendations
• Questioning to understand needs

Empathy
• Interest in helping
• Individualized/personal attention
• Ease of understanding/use understandable terms
• Understand my needs/recommending products to best fit my needs
• The employees have my best interests at heart

Call to Action

Research without a call to action may be informative, but not very useful. By measuring both customer perceptions and expectations, SERVQUAL gives managers the ability to prioritize investments in the customer experience based not only on their performance, but performance relative to customer expectations.

The first step in taking action on SERVQUAL results is to calculate a Gap Score by simply subtracting the expectation rating from the perception rating for each attribute (Gap Score = Perception – Expectation). This step alone will give you a basis for ranking each attribute based on its gap between customer perceptions and expectations.
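As a rough sketch in Python (the attribute names and ratings below are hypothetical, not taken from an actual study), the gap calculation and ranking might look like:

```python
# Hypothetical SERVQUAL-style ratings on a 7-point scale.
# Gap Score = Perception - Expectation for each attribute.
ratings = {
    "Perform services on time":  {"perception": 5.2, "expectation": 6.1},
    "Problems resolved quickly": {"perception": 4.8, "expectation": 6.4},
    "Individualized attention":  {"perception": 5.5, "expectation": 5.0},
}

gap_scores = {
    attr: round(r["perception"] - r["expectation"], 2)
    for attr, r in ratings.items()
}

# Rank attributes from the largest shortfall (most negative gap) upward.
for attr, gap in sorted(gap_scores.items(), key=lambda kv: kv[1]):
    print(f"{attr}: {gap:+.2f}")
```

A negative gap flags an attribute where perceptions fall short of expectations; a positive gap suggests the attribute may already be over-served.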

Service Quality Score

In addition to ranking service attributes, the Gap Score can be used to calculate both unweighted and weighted Service Quality Scores, the latter based on the relative importance assigned by customers to each of the five service quality dimensions.

The first step in calculating a Service Quality Score is to average the Gap Score of each attribute within each dimension. This will give you the Gap Score for each dimension (GSD). Averaging the dimension Gap Scores will yield an Unweighted Service Quality Score.

From this unweighted score it is a three step process to calculate a Weighted Service Quality Score.

First, determine importance weights by asking customers to allocate a fixed number of points (typically 100) across each of the five dimensions based on how important the dimension is to them. This point allocation will yield a weight for each dimension based on its importance.

The second step is to multiply the Gap Score for each dimension (GSD) by its importance weight. The final step is to simply sum this product across all five dimensions; this will yield a Weighted Service Quality Score.
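A minimal worked example of the whole calculation, with hypothetical dimension Gap Scores and importance weights (the weights here are illustrative, expressed out of the 100 allocated points and applied as fractions):

```python
# Hypothetical Gap Scores per dimension (GSD): the average attribute gap
# within each of the five SERVQUAL dimensions.
gsd = {
    "tangibles": -0.2, "reliability": -1.1, "responsiveness": -0.8,
    "assurance": -0.5, "empathy": -0.6,
}

# Importance weights from a 100-point allocation across the dimensions.
weights = {
    "tangibles": 10, "reliability": 32, "responsiveness": 23,
    "assurance": 19, "empathy": 16,
}

# Unweighted score: simple average of the dimension Gap Scores.
unweighted = sum(gsd.values()) / len(gsd)

# Weighted score: multiply each GSD by its weight (as a fraction of the
# 100 points) and sum across the five dimensions.
weighted = sum(gsd[d] * weights[d] for d in gsd) / 100

print(f"Unweighted Service Quality Score: {unweighted:.2f}")
print(f"Weighted Service Quality Score:   {weighted:.3f}")
```

With these numbers the weighted score sits below the unweighted one because the worst-performing dimension (reliability) also carries the most weight, which is exactly the insight the weighting is meant to surface.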

Click here for a more detailed step by step description of score calculation.

What does all this mean?  See the following post for discussion of the implications of SERVQUAL for customer experience managers: The 5 Service Dimensions All Customers Care About.

Keys to Customer Experience Research Success: The Professional Art of Questionnaire Design

In a previous post we discussed the importance of research objectives in program design. A natural progression of this subject is using research objectives to design a successful questionnaire.

All too often, I find clients who have gone online, found a questionnaire and implemented it into a survey process, in effect handing research design over to an anonymous author on the Internet who has given no consideration to their specific needs. Inexperience with both the art and science of questionnaire design causes them to miss out on building a research tool customized to their specific needs.

While questionnaire design is a professional skill fraught with many perils for the inexperienced, the following process will eliminate some common mistakes.

First, define research objectives. Do not skip this step. Defining research objectives prior to making any other decisions about the program is by far the most effective way to make sure your program stays on track, on budget, and produces results that drive business success. See the previous post regarding research objectives. Once a set of objectives has been defined questionnaire design naturally falls out of the process; simply write a survey question for each objective.

For example, consider the following objective set:

1. Determine the level of customer satisfaction and provide a reference point for other satisfaction-based analysis.
2. Identify which service attributes drive satisfaction and which investments yield the greatest improvement in customer satisfaction.
3. Identify moments of truth where the danger of customer attrition is highest.
4. Track changes in customer satisfaction over time.

For each objective write a survey question. For the first objective, (overall satisfaction) write an overall satisfaction question. For objective #2 (attribute satisfaction) develop a list of service attributes and measure satisfaction relative to each. Continue the process for each objective for which a survey question can be written.

Question order is important and the placement of every question should be considered to avoid introducing bias into the survey as a result of question order. Generally, we like to place overall satisfaction questions early in the survey to avoid biasing the results with later attribute questions.

Similarly, question phrasing needs to be carefully considered to avoid biasing the responses. Keep phrasing neutral to avoid biasing the respondents one way or the other. Sure there is a temptation to use overly positive language with your customers, but this really is a bad practice.

Finally, anticipate the analysis. As you write the questionnaire, consider how the results will be reported and analyzed. Anticipating the analysis will make sure the survey instrument captures the data needed for the desired analysis.

Research design is a professional art. If you are not sure what you are doing, seek a professional to help you rather than field poor research with a do-it-yourself tool.



Keys to Customer Experience Research Success – Start with the Objectives

How do you make research actionable?

With the advent of do-it-yourself survey tools, there is a trend away from professional research design processes. One can search online for a questionnaire, grab it off the Internet, and field it on the cheap with a do-it-yourself survey tool with no consideration of the research needs at hand. This, in effect, hands research design over to an anonymous author on the Internet who has given no consideration to your specific needs.

Defining research objectives prior to making any other decisions about the program is by far the most effective way to make sure your program stays on track, on budget, and produces results that drive business success. It sounds very simple, and for the most part it is; however, I’m always surprised, when I ask potential clients what their research objectives are, by how many cannot list anything other than the most general of objectives.

Defining research objectives is a fairly simple process. First, generate a list of everything you want to know as a result of the research.

For example, you may come up with the following list:

  • How satisfied are our customers?
  • Which key factors drive satisfaction among our customers?
  • What are the causes of customer dissatisfaction?
  • How can we measure customer satisfaction over time?
  • Which business processes can most improve customer satisfaction and increase our financial returns?
  • How can we measure the relationships between customer satisfaction, profitability and purchase or retention behavior?
  • How can we evaluate our customers’ referral activity?
  • How can we measure the value of our customers’ purchasing behavior?
  • How can we identify changes in our customers’ purchasing or referral behaviors over time?

Note, these are not survey questions; they are questions to which you want answers. This is what you want to know.

Once you have developed a list of what you want to know as a result of the research, the next step is to map each of your questions to a specific research objective. For each question, write a clear objective starting with a verb such as: determine, identify, track, link, measure, etc. Starting with verbs is an excellent way to make sure you can take action on the results.

So, continuing with the example, the above list of questions may map into the following set of research objectives:

What do you want to know? → Research objectives

  • How satisfied are our customers? → Determine the level of customer satisfaction and provide a reference point for other satisfaction-based analysis.
  • Which key factors drive satisfaction among our customers? → Identify which service attributes drive satisfaction and which investments yield the greatest improvement in customer satisfaction.
  • What are the causes of customer dissatisfaction? → Identify moments of truth where the danger of customer attrition is highest.
  • How can we measure customer satisfaction over time? → Track changes in customer satisfaction over time. Determine if changes in satisfaction are significant.
  • Which business processes can most improve customer satisfaction and increase our financial returns? → Link key service attributes to specific business processes. Identify which processes maximize ROI.
  • How can we measure the relationships between customer satisfaction, profitability and purchase or retention behavior? → Identify the relationship between customer satisfaction and customer behaviors such as retention, purchase behavior, and likelihood of referral, which drive profitability.
  • How can we evaluate our customers’ referral activity? → Conduct loyalty-based customer satisfaction analysis, using net promoters and customer advocacy as measurements of customer loyalty.
  • How can we measure the value of our customers’ purchasing behavior? → Determine the relationship between customer satisfaction and purchase behavior. Identify the ROI of satisfaction-based management. Make a financial case to all stakeholders (management, employees and shareholders) that the customer experience impacts financial performance.
  • How can we identify changes in our customers’ purchasing or referral behaviors over time? → Continue to track the relationship between satisfaction and purchase behavior. Analyze satisfaction by customer segments and the financial value of each individual segment.

Once a clear set of research objectives is defined, you now have a road map to inform all subsequent decisions about sample frame, data collection, survey instrument, and analysis plan. Each of these issues deserves more attention than can be addressed in what is intended to be a brief blog post. In future posts, we will look into each of these issues individually.


Click Here For More Information About Kinesis' Research Services

Does Your Frontline Understand the Customer Experience Better than the CEO?

Frontline, customer-facing employees are a vastly underutilized resource in terms of understanding the customer experience. They spend the majority of their time in the company-customer interface, and as a result tend to be unrecognized experts in the customer experience. Conversely, the further management is removed from the customer interface, the less they truly understand the details of what is going on.

One tool to both leverage frontline experience and identify any perceptual gaps between management and the frontline is to survey all levels of the organization to gather impressions of the customer experience.

Typically, we start by asking employees to put themselves in the customers’ shoes and consider how customers would rate their satisfaction with the customer experience, including specific dimensions and attributes of the experience. A key call-to-action element of these surveys tends to be a question asking employees what they think customers would most like or dislike about the service delivery.

Next, we focus employees on their own experience, asking the extent to which they believe they have all the tools, training, processes, policies, customer information, coaching, staff levels, empowerment, and support of both their immediate supervisor and senior management to deliver on the company’s service promise. Call-to-action elements can be designed into this portion of the research by asking what, in their experience, leads to customer frustration or disappointment, and soliciting suggestions for improvement. Perhaps most interesting, we ask what strategies the employee uses to make customers happy – this is an excellent source for identifying best practices and potential coaches.

Finally, comparing results across the organization identifies any perceptual gaps between the frontline and management. This can be a very illuminating activity.

And why not leverage employees as a resource for understanding the customer experience? They spend most of their time in the company-customer interface, and are therefore experts on what is actually going on. Secondly, employees and customers generally want the same things.

Customers want… → Employees want…

  • To get what they are promised → The tools/systems/policies to do their job
  • Their problems resolved → Empowerment to solve problems
  • Their needs listened to/understood → More/better feedback
  • Knowledgeable employees; adequate information → More training; more/better feedback
  • Employees to take the initiative, take responsibility, represent the company → Empowerment; clear priorities; inclusion in the company’s big picture
  • The company to value their business → Clear priorities; the tools/systems/policies to do their job

 


Click Here For More Information About Kinesis' Employee Engagement Research

Turning Customer Advocacy on Its Head

The dominant notion of customer advocacy is not very customer-centric. Its focus is on what the customer can do for the bank by referring friends, relatives, and colleagues for their banking needs. A more customer-centric notion, with perhaps a stronger relationship to customer loyalty, turns this dominant notion on its head – making the bank an advocate on behalf of the customer. Customers who trust their bank to do the right thing are more likely to remain loyal.


Measuring customer advocacy is both simple and useful; just ask your customers if they agree with the following statement: “My bank cares about me, not just the bottom line.” I call this the customer advocacy statement. Research has demonstrated a positive relationship between agreement with this statement and loyalty to a financial institution. This makes intuitive sense; customers who agree trust the bank to do right by them and will remain loyal.

Here is how we ask the question. As part of a broader survey, we ask our clients’ customers to rate, on an agreement scale, to what extent they agree with the above statement.

Research without clear call to action elements may be interesting, but not very useful. How can a manager put this question to use?

The answer to this is twofold:

First, the response to this question can be correlated with a battery of service attributes. This yields a means of judging the relative importance of each attribute in terms of the strength of its relationship to loyalty. Managers then have a basis to make informed decisions as to which investments will yield the greatest ROI in terms of improving customer loyalty.

Second, investigate all cases where agreement with this statement is low. These are customers at risk. A researcher can drill into the survey responses of these customers to determine what caused the low rating. Tracking the causes will inform management of potential sources of runoff that require attention.


Click Here For More Information About Kinesis' Research Services

4 Ways to Understand & Monitor Moments of Truth

Every time a customer interacts with a provider, they learn something either positive or negative, and adjust their behavior accordingly.

The customer value equation is an ongoing process by which the customer keeps a running total of all the benefits of a product or service (both tangible and intangible) and subtracts the sum of all the costs associated with the product or service (tangible and intangible). If the result of this equation is positive, they will start or maintain a relationship with the provider.
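The value equation described above can be sketched directly. This is a minimal illustration, not a real scoring model; the benefit and cost figures are made up, and in practice the perceived weights of intangibles would be far harder to quantify.

```python
# Hypothetical sketch of the customer value equation:
# net value = sum of perceived benefits - sum of perceived costs.

def customer_net_value(benefits, costs):
    """Return net perceived value; a positive result suggests the customer
    will start or maintain the relationship."""
    return sum(benefits) - sum(costs)

# Made-up tangible and intangible figures for illustration only
benefits = [50.0, 20.0, 10.0]   # e.g., product utility, convenience, trust
costs = [40.0, 15.0]            # e.g., price, time spent on hold

net = customer_net_value(benefits, costs)
maintains_relationship = net > 0
```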

But is this a continuous process? Or do many customers travel through the customer journey in a state of inertia until they reach critical points, where the knowledge gained is fed into the customer value equation?

The fact of the matter is not all points along the customer journey are equal. In every customer journey there are specific “moments of truth” where customers form or change their opinion of the provider, either positively or negatively, based on their experience. Moments of truth can be quite varied and occur in a skilled sales presentation, when a shop owner stays open late to help a dad buy the perfect gift, or when a hold time is particularly long.

In designing tools to monitor the customer experience, managers must be aware of potential moments of truth and design tools to monitor these critical points in the customer journey. Some of these tools include:

Mystery Shopping: Mystery shopping allows managers to test their service experience in a controlled manner. Do you have a concern about how your employees respond to specific customer complaints or problems? – Send in a mystery shopper with that specific problem and evaluate the response. Are you concerned about cross-sell skills? – Send in a mystery shopper with an obvious cross-sell need and evaluate how it is handled. With mystery shoppers managers can design controlled tests to evaluate how employees react when presented with specific moments of truth.

Customer Comments: Historically, comment tools have taken the form of cards; however, increasingly these tools are migrating onto online and mobile platforms. The self-administered nature of comment tools makes them very poor solutions for a customer survey, as we tend to hear from an unrepresentative sample of customers who are either extremely happy or extremely unhappy.

However, this self-administered nature makes comment tools perfect for monitoring moments of truth. Customers at either extreme of the scale probably are at a moment of truth in the journey. In designing comment tools, be sure to limit the number of categorized questions and rating scales; rather, give the customer plenty of “white space” to tell you exactly what is on their mind. Over time, an analysis of these comments will give you insight into the nature and causes of moments of truth.

Social Media: Similar to collecting comments from customers, social media can be an excellent tool for identifying common causes of moments of truth. Customers who take to social media to mention a product or service are likely to be highly motivated – again, at the extreme ends of the satisfaction spectrum.

Survey Tracking: Finally, ongoing satisfaction tracking of all customers can be a source of intelligence regarding moments of truth. To turn a satisfaction tracking study into a moment of truth monitor, focus your attention on the bottom of the satisfaction curve. If a customer assigns a satisfaction rating of “1” or “2” on a 5-point scale, drill into that customer’s responses on a case-by-case basis to determine what caused the low rating – this will most likely reveal a moment of truth.
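The bottom-box drill-down described above amounts to a simple filter over tracking-study records. The records, field names, and comments below are made up for illustration; a real tracking study would pull these from the survey platform’s export.

```python
# Minimal sketch: isolate respondents scoring 1 or 2 on a 5-point
# satisfaction scale so each case can be reviewed individually.

responses = [  # hypothetical survey records
    {"id": 101, "satisfaction": 5, "comment": ""},
    {"id": 102, "satisfaction": 2, "comment": "45-minute hold time"},
    {"id": 103, "satisfaction": 1, "comment": "error on my statement"},
    {"id": 104, "satisfaction": 4, "comment": ""},
]

# Bottom-box filter: these cases most likely reveal a moment of truth
low_raters = [r for r in responses if r["satisfaction"] <= 2]
```

Reviewing the open-ended comments attached to each flagged case, rather than just the scores, is what turns the filter into a moment-of-truth monitor.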

Here are four ideas to identify and monitor moments of truth.

How do you monitor your moments of truth?


Click Here for Mystery Shopping Best Practices


Click Here For More Information About Kinesis' Research Services