
A New Normal: Implications for Bank Customer Experience Measurement Post Pandemic – Planned Interactions

Part 2: Research Tools to Monitor Planned Interactions through the Customer Lifecycle

As we explored in an earlier post, Three Types of Customer Experiences CX Managers Must Understand, there are three types of customer interactions: Planned, Stabilizing, and Critical.

Planned interactions are intended to increase customer profitability across the customer lifecycle by engaging customers with relevant interactions and content in an integrated omni-channel environment.  Planned interactions will continue to grow in importance as the financial services industry shifts to an integrated digital-first model.

These planned interactions are frequently triggered by changes in account usage, financial situation, family profile, etc.  CRM analytics combined with Big Data are becoming quite effective at recognizing such opportunities and prompting action toward planned interactions.  Customer experience managers should have a process to record and analyze the quality of execution of planned interactions with the objective of evaluating their effectiveness – regardless of the channel.
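
To make this concrete, here is a minimal, purely illustrative sketch of the kind of rule-based trigger such analytics might feed. The customer fields and interaction names are hypothetical assumptions, not a description of any particular CRM platform.

```python
# Illustrative sketch only: a simple rule-based trigger for a planned interaction.
# Field names (avg_monthly_deposits, new_dependent_reported, etc.) are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CustomerSnapshot:
    customer_id: str
    avg_monthly_deposits: float      # trailing 12-month average
    recent_monthly_deposits: float   # most recent month
    has_mortgage: bool
    new_dependent_reported: bool

def planned_interaction_trigger(c: CustomerSnapshot) -> Optional[str]:
    """Return the name of a planned interaction to queue, or None."""
    # A sustained jump in deposits may signal a new job or windfall.
    if c.recent_monthly_deposits > 1.5 * c.avg_monthly_deposits:
        return "savings_and_investment_outreach"
    # A change in family profile may make a college-savings conversation relevant.
    if c.new_dependent_reported:
        return "college_savings_outreach"
    # Customers without a mortgage and rising balances may warrant a home-lending conversation.
    if not c.has_mortgage and c.avg_monthly_deposits > 8000:
        return "home_lending_outreach"
    return None

print(planned_interaction_trigger(CustomerSnapshot("C-001", 4000, 7500, False, False)))
```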

The key to an effective strategy for planned interactions is relevance. Triggered requests for increased engagement must be made in the context of the customer’s needs and with their permission; otherwise, the requests will come off as clumsy and annoying, and give the impression the bank is not really interested in the customer’s individual needs.  By aligning information about execution quality (cause) and customer impressions (effect), customer experience managers can build a more effective and relevant approach to planned interactions.

Research Plan for Planned Interactions

The first step in designing a research plan to test the efficacy of these planned interactions is to define the campaign.  Ask yourself: what customer interactions are planned through these layers of integrated channels?  Mapping the process will define your research objectives, allowing an informed judgment of what to measure and how to measure it.

For example, after acquisition and onboarding, assume a bank has a campaign of planned interactions based on triggers from past engagement.  These planned interactions are segmented into the following phases of the customer lifecycle: engagement, growth, and retention.

Engagement Phase

Often it is instructive to think of customer experience research in terms of the bank-customer interface, employing different research tools to study the customer experience from both sides of this interface.

In our example above, management may measure the effectiveness of planned experiences in the engagement phase with the following research tools:

Customer Side

Post-Event Surveys: These post-experience surveys are event-driven; a transaction or service interaction determines whether the customer is selected for a survey.  They can be performed across all channels: digital, contact center and in-person.  As the name implies, the purpose of this type of survey is to measure the customer's impressions of a specific experience.

Overall Satisfaction Surveys: Overall satisfaction surveys measure customer satisfaction among the general population of customers, regardless of whether or not they recently conducted a transaction.  They give managers valuable insight into overall satisfaction, engagement, image and positioning across the entire customer base, not just active customers.

Brand Side

Employee Surveys: Ultimately, employees are at the center of the integrated customer experience model.  Employee surveys often measure employee satisfaction and engagement; however, there is far more value to be gleaned from employees.  We employ them to understand what is going on at the customer-employee interface by leveraging employees as a valuable and inexpensive source of customer experience information.  They not only provide intelligence into the customer experience, but also evaluate the level of support within the organization and identify perceptual gaps between management and frontline personnel.

Digital Delivery Channel Shopping: Be it a website or mobile app, digital mystery shopping allows managers of these channels to test ease of use, navigation and the overall customer experience of these digital channels.

Transactional Mystery Shopping: Mystery shopping is about alignment.  It is an excellent tool to align the customer experience to the brand.  Best-in-class mystery shopping answers the question: is our customer experience consistent with our brand objectives?  Historically, mystery shopping has focused on the in-person channel; however, we are seeing increasing use of mystery shopping with contact center agents.

Growth Phase

In the growth phase, we measure the effectiveness of planned experiences on both sides of the customer interface with the following research tools:

Customer Side

Awareness Surveys: Awareness of the brand, its products and services, is central to planned service interactions.  Managers need to know how awareness and attitudes change as a result of these planned experiences.

Wallet Share Surveys: These surveys are used to evaluate customer engagement with and loyalty to the institution.  Specifically, they determine if customers consider the institution their primary provider of financial services, and identify potential roadblocks to wallet share growth.

Brand Side

Cross-Sell Mystery Shopping: In these unique mystery shops, mystery shoppers are seeded into the lead/referral process.  The sales behaviors and their effectiveness are then evaluated in an outbound sales interaction.  These shops work very well in planned sales interactions within the contact center environment.

Retention Phase

Finally, planned experiences within the retention phase of the customer lifecycle may be monitored with the following tools:

Customer Side

Critical Incident Technique (CIT): CIT is a qualitative research methodology designed to uncover details surrounding a service encounter that a customer found particularly satisfying or dissatisfying.  This technique identifies common critical incidents and their impact on the customer experience and customer engagement, giving managers an informed perspective from which to prepare employees to recognize moments of truth and respond in ways that will lead to positive outcomes.

Lost Customer Surveys: Closed-account surveys identify sources of run-off or churn to provide insight into improving customer retention.

Comment Listening: Comment tools are not new, but with modern Internet-based technology they can be used as a valuable feedback tool to identify at-risk customers and mitigate the causes of their dissatisfaction.

Brand Side

Employee Surveys: Employees observe the relationship with the customer firsthand.  They are a valuable source of customer experience information, and can provide a great deal of context about the types of negative experiences customers frequently encounter.

Life Cycle Mystery Shopping: If an integrated channel approach is the objective, one should measure the customer experience in an integrated manner.  In lifecycle shops, shoppers interact with the bank over a period of time, across multiple touch points (digital, contact center and in-person).  This lifecycle approach provides broad and deep observations about sales and service alignment to the brand and performance throughout the customer lifecycle across all channels.

Call to Action – Make the Most of the Research

For customer experience surveys, we recommend testing the effectiveness of planned interactions by benchmarking three loyalty attitudes:

  • Would Recommend: The likelihood of the customer recommending the bank to a friend, relative or colleague.
  • Customer Advocacy: The extent to which the customer agrees with the statement, “My bank cares about me, not just the bottom line.”
  • Primary Provider: Does the customer consider the institution their primary provider for financial services?
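
As a minimal illustration of how these benchmarks might be computed from survey data (the column names and the five-point scales below are assumptions, not a prescribed questionnaire format):

```python
# Hedged sketch: top-two-box benchmarking of the three loyalty attitudes above.
# Response data and column names are invented for illustration.
import pandas as pd

responses = pd.DataFrame({
    "would_recommend": [5, 4, 5, 3, 5, 2],    # 1-5 likelihood to recommend
    "customer_advocacy": [4, 4, 5, 3, 5, 2],  # 1-5 agreement: "My bank cares about me..."
    "primary_provider": [1, 1, 1, 0, 1, 0],   # 1 = considers the bank its primary provider
})

benchmark = pd.Series({
    # Top-two-box share for the two attitude scales...
    "would_recommend_top2": (responses["would_recommend"] >= 4).mean(),
    "customer_advocacy_top2": (responses["customer_advocacy"] >= 4).mean(),
    # ...and the share claiming the bank as primary provider.
    "primary_provider_rate": responses["primary_provider"].mean(),
})
print(benchmark.round(2))
```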

For mystery shopping, we find that linking observations to a dependent variable, such as purchase intent, identifies which sales and service behaviors drive purchase intent – informing decisions about training and incentives to reinforce those behaviors.
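
One common way to operationalize this kind of key-driver analysis is a simple logistic regression of purchase intent on observed behaviors. The sketch below is illustrative only; the behavior names and data are invented, and this is not a description of Kinesis' actual modeling approach.

```python
# Hedged sketch: link mystery shop observations (0/1 behaviors) to purchase intent
# with a logistic regression; larger positive coefficients suggest stronger drivers.
import pandas as pd
from sklearn.linear_model import LogisticRegression

shops = pd.DataFrame({
    "greeted_promptly":   [1, 1, 0, 1, 0, 1, 0, 1],
    "needs_assessed":     [1, 0, 0, 1, 0, 1, 1, 1],
    "benefits_explained": [1, 0, 0, 1, 0, 0, 1, 1],
    "asked_for_business": [1, 0, 0, 1, 0, 0, 0, 1],
    "purchase_intent":    [1, 0, 0, 1, 0, 1, 1, 1],  # dependent variable
})

X = shops.drop(columns="purchase_intent")
y = shops["purchase_intent"]

model = LogisticRegression().fit(X, y)

# Behaviors with the largest positive coefficients are candidates for training
# and incentive emphasis.
drivers = pd.Series(model.coef_[0], index=X.columns).sort_values(ascending=False)
print(drivers)
```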

As the integrated digital-first business model accelerates, planned interactions will continue to grow in importance, and managers of the customer experience should build monitoring tools to evaluate the efficacy of these planned experiences in terms of driving desired customer attitudes and behaviors.

In the next post, we will take a look at stabilizing experiences, and their implications for customer experience research.

 

 

Click Here For More Information About Kinesis' Bank CX Research Services

Not All Customer Experience Variation is Equal: Common Cause vs. Special Cause Variation

Variability in customer experience scores is common and normal.  Be it a survey of customers, mystery shops, social listening or other customer experience measurement, a certain amount of random variation in the data is normal.  As a result, managers need a means of interpreting any variation in their customer experience measurement to evaluate if the customer experience is truly changing, or if the variation they are seeing is simply random.

In a previous post, we proposed the use of control charts as a tool to track customer experience measurements within upper and lower quality control limits, giving managers a meaningful way to determine if any variation in their customer experience measurement reflects an actual change in the experience as opposed to random variation or chance.

Now, managers need to understand the causes of variation, specifically common cause and special cause variation.  These are Six Sigma concepts; while most commonly used in industrial production, they can be borrowed and applied to the customer experience.

Common Cause Variation:  Much like variation in the roll of dice, common cause variation is natural variation within any system.  Common cause variation is any variation constantly active within a system, and represents statistical “noise” within the system.

Examples of common cause variation in the customer experience are:

  • Poorly defined, poorly designed, inappropriate policies or procedures
  • Poor design or maintenance of computer systems
  • Inappropriate hiring practices
  • Insufficient training
  • Measurement error

Special Cause Variation: Unlike the roll of the dice, special cause variation is not probabilistically predictable within the system; as a result, it does not represent statistical “noise” within the system – it is the signal within the system.

Examples of special cause variation include:

  • High demand/ high traffic
  • Poor adjustment of equipment
  • Just having a bad day

When measuring the customer experience, it is helpful to consider everything within the context of the company-customer interface.  Every time a sales or service interaction occurs within this interface, the customer learns something from the experience and adjusts their behavior as a result.  Managing the customer experience is the practice of managing what customers learn from the experience and thus managing their behavior in profitable ways.

A key to managing customer behaviors is understanding common cause and special cause variation and their implications.  Common cause variation is variation built into the system: policies, procedures, equipment, hiring practices, and training.  Special cause variation is more or less how the human element and the system interact.
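
For readers who want to see the mechanics, here is a deliberately simplified control-chart sketch in Python. The weekly scores are invented, and production control charts typically estimate sigma from subgroups or moving ranges rather than from the raw series as done here.

```python
# Simplified Shewhart-style control chart: flag points outside mean +/- 3 sigma
# as possible special cause variation; everything inside is treated as noise.
import statistics

weekly_scores = [4.31, 4.28, 4.35, 4.30, 4.27, 4.33, 4.02, 4.29]  # illustrative data

center = statistics.mean(weekly_scores)
sigma = statistics.stdev(weekly_scores)
ucl, lcl = center + 3 * sigma, center - 3 * sigma

print(f"center line {center:.2f}, control limits [{lcl:.2f}, {ucl:.2f}]")
for week, score in enumerate(weekly_scores, start=1):
    # Points inside the limits are treated as common cause (random) variation;
    # points outside the limits signal possible special cause variation to investigate.
    status = "special cause?" if (score > ucl or score < lcl) else "common cause"
    print(f"week {week}: {score:.2f} ({status})")
```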

See earlier post:

Not All Customer Experience Variation is Equal: Use Control Charts to Identify Actual Changes in the Customer Experience

 

Click Here For More Information About Kinesis' Research Services

Customer Experience Measurement in the Coronavirus Age

Earlier in this three-part series we discussed the mechanism and risk of SARS-CoV-2 infection, and the implications of the pandemic on the customer experience.

Perhaps the most important way brands can respond to the moment of truth presented by this crisis is showing true care for customers, employees, and the community.

Additionally, it is imperative that customers feel safe.  Based on current science, in-person interactions can be relatively safe if conducted within CDC and public health guidance, including risk mitigation efforts such as physical distancing, masks, ventilation, limits on the length of exposure, and hand washing and sanitizer.

Using these previous posts as a foundation, we can now address the implications of the pandemic on customer experience measurement.

So… what does all this mean in terms of customer experience measurement?

I like to think of customer experience measurement in terms of the brand-customer interface, where customers interact with the brand.  At the center of the customer experience are the various channels which form the interface between the customer and institution. Together, these channels define the brand more than any external messaging. Best-in-class customer experience research programs monitor this interface from multiple directions across all channels to form a comprehensive view of the customer experience.

Customers and front-line employees are the two stakeholders who interact most commonly with each other in the customer-institution interface. As a result, a best practice in understanding this interface is to monitor it directly from each direction: surveying customers from one side, gathering observations from employees on the brand side, and testing for the presence and timing of customer experience attributes through observational research such as mystery shopping.

Measure Customer Comfort and Confidence

First, fundamentally, the American economy is a consumer confidence driven economy.   Consumers need to feel confident in public spaces to participate in public commerce.  Customer experience researchers would be well served by testing for consumer confidence with respect to safety and mitigation strategies.  These mitigation strategies are quickly becoming consumer requirements in terms of confidence in public commerce.

Along the same lines, given the centrality of consumer confidence in our economy, measuring how customers feel about the mitigation strategies put in place by the brand is extremely important.  Such measurements would include measures of appropriateness, effectiveness, and confidence in the mitigation strategies employed.  We recommend two measurements: how customers feel about the safety of the brand’s in-person channel in general, and how they feel about the safety relative to other brands they interact with during the pandemic.  The first is an absolute measure of comfort, the other attempts to isolate the variable of the pandemic, just measuring the brand’s response.

The pandemic is changing consumer behavior; this much is clear.  As such, customer experience researchers should endeavor to identify and understand how consumer behavior is changing so they can adjust the customer experience delivery mix to align with these changes.

Testing Mitigation Strategies

Drilling down from broader research issues to mystery shopping specifically, there are several research design issues that should be considered in response to the COVID-19 pandemic.

Measure Customer Confidence in Post-Transaction Surveys with Alerts to Failures:  First, as economic activity waxes and wanes through this coronavirus mitigation effort, consumer confidence will drive economic activity both on a macro and micro-economic level.  Broadly, consumers as a whole will not participate in the in-person economy until they are confident the risk of infection is contained.  Pointedly, at the individual business level, customers will not return to a business if they feel unsafe.  Therefore, market researchers should build measures of comfort or confidence into the post-transaction surveys to measure how the customer felt as a result of the experience.   This will alert managers to potential unsafe practices which must be addressed.  It will also serve as a means of directly measuring the return on investment (ROI) of customer confidence and safety initiatives in terms of the customer experience.
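
A minimal sketch of the alert logic described above might look like the following. The survey fields, the threshold, and the notify() stub are hypothetical placeholders rather than any specific survey platform's API.

```python
# Hedged sketch: flag post-transaction surveys whose comfort/safety rating falls
# below a threshold and route an alert to the responsible manager.
LOW_COMFORT_THRESHOLD = 3  # assumed 1-5 comfort scale

def notify(manager_email: str, message: str) -> None:
    # Placeholder for an e-mail or case-management integration.
    print(f"ALERT to {manager_email}: {message}")

def screen_survey(response: dict) -> None:
    comfort = response.get("comfort_rating")
    if comfort is not None and comfort < LOW_COMFORT_THRESHOLD:
        notify(
            response["branch_manager_email"],
            f"Low safety/comfort rating ({comfort}) for transaction "
            f"{response['transaction_id']}: {response.get('comment', '')!r}",
        )

screen_survey({
    "transaction_id": "T-1029",
    "comfort_rating": 2,
    "comment": "Staff were not wearing masks.",
    "branch_manager_email": "branch.manager@example.com",
})
```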

Measure Customer Perception of Mitigation Strategies:  Coronavirus mitigation strategies will become typical attributes of the customer experience.   Beyond simply testing for the presence of these mitigation strategies, customer experience managers should determine customer perceptions of their appropriateness, efficacy, and perhaps most importantly, their confidence in these mitigation strategies.

Gather Employee Observations of Mitigation Strategies:  Frontline employees spend nearly all their time in the brand-customer interface.  As such, they have always been a wealth of information about the customer experience, and they can be surveyed very efficiently.  The post-pandemic customer experience is no exception.

First, as we discussed previously, employees have the same personal safety concerns as customers.   Surveys of employees should endeavor to evaluate employees’ confidence in and comfort with coronavirus mitigation strategies. 

Secondly, frontline employees, being placed in the middle of the brand-customer interface, are in a perfect position to give feedback regarding the efficacy of mitigation strategies and the extent to which they fit into the desired customer experience – providing managers with valuable insight into adjustments that may make mitigation strategies fit more precisely into the overall customer experience objectives.

Independently Test for the Presence of Mitigation Strategies:  All in-person channels across all industries will require the adoption of coronavirus mitigation strategies.  Mystery shopping is the perfect tool to test for the presence of mitigation strategies – evaluating such strategies as: designed physical distancing, physical barriers between POS personnel and customers, mask compliance, sanitization, and duration of contact.

Alternative Research Sources for Behavioral Observations:  Some customer experience managers may not want unnecessary people within their in-person channel.  So the question arises, how can employee behaviors be measured without the use of mystery shoppers?  One solution is to solicit behavioral observations directly from actual customers shortly after the in-person service interaction.  Customers can be recruited onsite to provide their observations through the use of QR codes, or in certain industries after the event via e-mail.  The purpose of these surveys is behavioral – asking the customers to recall if a specific behavior or service attribute was present during the encounter.  From a research design standpoint, this practice is a little suspect, as asking people to recall the specifics about an event after the fact, without prior knowledge, is problematic.  Customers are not prepared or prompted to look for and recall specific events.  However, given the unique nature of the circumstances we are under, in some cases there is an argument that the benefits of this approach outweigh the research limitations.

Test Channel Performance and Alignment

The instantaneous need for alternative delivery channels has significantly raised the stakes in cross-channel alignment.  As sales volume shifts to these alternative channels, customer experience researchers need to monitor the customer experience within all channels to measure the efficacy of the experience, as well as alignment of each channel to both each other and the overall brand objectives.

Finally, as more customers migrate away from in-person channels, customer experience researchers should endeavor to measure the customer experience within each channel.  As more late adopters are forced by the pandemic to migrate to these channels, they may bring with them a completely different set of expectations relative to early adopters; managers would therefore be well served to understand the expectations of these newcomers to alternative channels so they can adjust the customer experience to meet these new customers’ expectations.

As commerce migrates away from conventional in-person channels to alternative delivery channels, the importance of these channels will increase.  As a result, the quality and consistency of delivery in these channels will need to be measured through the use of mystery shoppers.  Some industries are going to be problematic, as their economics do not currently support alternative delivery.  With time, however, economic models will evolve to support alternative channels.

Conclusion

This is a difficult time.  It will be the defining event of our generation.

The pandemic, and our reaction to it, is dramatically changing how humans interact with each other, and the customer experience is no exception.  There is reason to suggest this difficult time could become a new normal.  Managers of the customer experience need to understand the implications for the customer experience in the post-COVID environment, as the effects of the pandemic may never fully subside.  Customer experience managers must consider the implications of this new normal, not only for the customer experience, but for customer experience measurement.

Click Here For More Information About Kinesis' Research Services

Customer Experience Measurement in the Coronavirus Age: The Mechanism and Risk of Infection

Introduction

From Zoom happy hours and canceled events to concerns over how best to educate our children, economic disruption, and caring for the victims, the SARS-CoV-2 pandemic and the resulting public health requirements are changing our lives in ways both big and small, superficial and tragic.  The customer experience is certainly no exception.  Writing about the effects of the pandemic while it unfolds is a unique challenge, as we are learning more about the virus, its health effects, mitigation strategies, and overall effects on society in real time.  Things change daily, and we are all learning on the fly here.  This series of blog posts is an early attempt to discuss the effects of the pandemic on customer experience research.

Before we begin, let me stress one thing.  I am a market researcher who specializes in evaluating the customer experience.  I am not an epidemiologist or doctor, and I have no training or experience in public health.  As a result, I will refrain from expressing scientific or medical theories or opinions of my own.  Any virus related conclusions or opinions expressed in this series of posts will be from credible sources and cited in footnotes.  If at any point it appears I am drawing medical or scientific conclusions of my own, it is unintentional, and should not be regarded as such. 

The need for managers of the customer experience to understand the implications of the post-SARS-CoV-2 environment will most likely survive the immediate pandemic.  Changes in customer experience management will probably assume a more permanent nature.  First, this novel coronavirus may never go completely away, but rather become endemic in our society, meaning it could be a constant presence.[1] Second, recent history suggests SARS-CoV-2 is not the only novel coronavirus we are going to face in the coming decades.  Currently there are seven known coronaviruses that infect humans – prior to 2003 there were only four.  In a relatively short period of time, three new coronaviruses have jumped from animals to humans.[2]  The number of known coronaviruses to which humans are susceptible has nearly doubled in 17 years, so it does not require a great leap of the imagination to conclude this is not the last novel virus we will need to deal with.

This pandemic and its predicted aftermath represent a moment of truth for customers and their relationship to the brand.  In an uncertain and risky environment, customers will be even more likely to build relationships with brands they trust.  Forward thinking managers of the customer experience will respond by building more mechanisms to monitor customer perceptions of safety within the in-person channel and fulfillment via expanded alternative channels.

Mechanism of Infection

What we know now is that the virus appears to spread primarily through person-to-person contact between people in close proximity, and to a lesser extent through secondary transfer from contaminated surfaces.

SARS-CoV-2 survives on most surfaces.  Touching a contaminated surface and then touching your eyes, nose or mouth presents a risk of infection by transfer.[3]  Recent guidance from the CDC, however, suggests surface transfer is not a significant mode of transmission.[4]  That being said, high-touch surfaces such as door handles, elevator buttons, POS machines, and bathroom surfaces should still be considered a potential risk for transfer infection.  The main mechanism of infection remains close personal contact.

When an infected person coughs, sneezes, talks or performs any other activity that exhales air, respiratory droplets are produced.  These droplets can land on the mouths or noses of people nearby, or in some circumstances, hang in the air in an aerosol form and be inhaled into the lungs.[5]  Current evidence suggests most individuals with mild to moderate symptoms can be infectious up to 10 days after symptom onset.  Further complicating this picture, it appears individuals without symptoms can be infectious without knowing they are infected themselves.[6]

In order for customer experience managers to make informed choices about the customer experience in the post-Covid age, it is important to understand the mechanism of infection.  The infectious dose of a virus is the amount of virus a person needs to be exposed to in order to establish an infection.  The infectious dose varies depending on the virus  (the flu can cause infection after exposure to as few as 10 virus particles, others require exposure to thousands of particles to establish an infection).  Currently, the infectious dose of SARS-CoV-2 is not understood with any precision; however, some experts estimate it at a few hundred to a few thousand virus particles.[7] 

Just as fire needs three things to burn (oxygen, fuel and heat), in layman's terms three factors dictate COVID-19 transmission: activity, duration and proximity.

Different activities release different amounts of virus particles into the environment.  On the far end of the spectrum, a cough or sneeze releases about 200 million virus particles.  Furthermore, the force of a cough or sneeze can aerosolize these particles (allowing them to hang in the air for a long time) or propel them across a room in an instant.  On the other end of the spectrum, breathing normally releases about 20 virus particles per minute, but with less force than a cough or sneeze; as a result, these particles tend to be expelled at a slower speed and travel a shorter distance.  Speaking releases about 200 virus particles per minute.[8]

These rates of exposure are important in terms of understanding the time required to exceed the infectious dose threshold.  Consider the following formula:

The time required to be infected, assuming close proximity with no precautions, is the infectious dose divided by the rate at which virus particles are expelled:

time to infection = infectious dose ÷ rate of particle emission

Assuming an infectious dose of 1,000 virus particles, very close proximity to someone speaking (close enough to inhale all the particles released by the speaker) would require 5 minutes to exceed the infectious dose:

1,000 particles ÷ 200 particles per minute = 5 minutes

Similarly, very close proximity to someone breathing normally would require a ten-fold increase in exposure (50 minutes):

1,000 particles ÷ 20 particles per minute = 50 minutes

Obviously, a single cough or sneeze releasing 200 million virus particles will instantly exceed the 1,000-particle threshold.

Again, currently, we do not know the infectious dose – estimates range from a few hundred to a few thousand virus particles.  Therefore, the data is insufficient to determine the exact duration of time to acquire an infection.  However, public health authorities do provide guidance.

Risk of Infection

The Centers for Disease Control and Prevention (CDC) advises that, for close contact with an individual in a non-healthcare setting, 15 minutes can be used as a threshold for the time to acquire an infectious dose (note: subsequent to the date of this blog post, the CDC’s guidance was updated from 15 consecutive minutes to 15 non-consecutive minutes in total over a 24-hour period).[9]

Since we do not currently know SARS-CoV-2’s infectious dose, the key takeaway is that an individual is not going to be infected by a single virus particle.  However, we are not free from risk.  We, as a society, are going to need to weigh the risks.  This will take the form of everyday people making everyday decisions about the risks they are willing to accept – both to themselves personally and to society as a whole.  “Nothing is without risk, but you can weigh the risks. . . . It’s going to be a series of judgment calls people will make every day,” as Dr. William Petri, a professor of infectious disease at the University of Virginia Medical School, told the Washington Post.[10]

Forward-thinking customer experience brands will consider how individuals and society as a whole weigh these risks and build customer experiences around both customer expectations and responsible civic commitment.  The pandemic represents a moment of truth between brands and their customers.    Building responsible and safe customer experiences will become a core driver of trust in the brand.

Some factors individual consumers and customer experience managers will need to consider as we weigh these risks include: [11]

Distance:  At a minimum the environment and activity should allow for 6 feet separation to be maintained.

Duration:  The duration of the activity should be short enough to minimize infection risk, considering the specific activity (breathing, talking, singing, etc) and other mitigation efforts (distance, masks, ventilation, etc).

Ventilation:  Indoor venues should be well ventilated.  Outdoor venues are naturally well ventilated and, therefore, safer.

Masks:  Mask wearing by individuals will inhibit the spread of virus particles in the air. The CDC recommends wearing cloth face coverings in public settings where other social distancing measures are difficult to maintain (e.g., grocery stores and pharmacies).  Masks are less a filter to protect the wearer than a means of inhibiting the spread of virus droplets by the wearer – masks protect others.[12]

Transfer Risk:  Customers and employees should avoid unnecessary contact with high-touch objects or surfaces, and should disinfect surfaces and clean hands with hand sanitizer.

In the next post, we expand on this discussion of infection risk and mitigation strategies and look at the implications of these on the customer experience and customer experience management.

Click Here For More Information About Kinesis' Research Services

[1] “Nothing Like SARS: Researchers Warn The Coronavirus Will Not Fade Away Anytime Soon.”  npr.org, June 11, 2020.  Web.  August 13, 2020.

[2] Fred Hutchinson Cancer Research Center. Dr. Amitabha Gupta “Fred Hutch and Covid-19.” August 4, 2020. Video, 10:15. https://www.youtube.com/watch?v=iaa40DflvOk&feature=youtu.be.

[3] Skinner, Michael.  “expert reaction to questions about COVID-19 and viral load”  sciencemediacentre.org, March 26, 2020.  Web. May 13, 2020.

[4] “How COVID-19 Spreads.”  CDC.gov, May 21, 2020.  Web.  May 21, 2020.

[5] “How COVID-19 Spreads.”  CDC.gov, May 21, 2020.  Web.  May 21, 2020.

[6] “Transmission of SARS-CoV-2: implications for infection prevention precautions.”  who.int, July 9, 2020.  Web.  August 13, 2020.

[7] Geddes, Linda. “Does a high viral load or infectious dose make covid-19 worse?”  newscientist.com, March 27, 2020.  Web May 14, 2020.

[8] Bromage, Erin.  “The Risks – Know Them – Avoid Them.”  Erinbromage.com, May 6, 2020.  Web.  May 13, 2020.

[9] “Public Health Recommendations for Community-Related Exposure.”  CDC.gov, March 30, 2020.  Web.  May 15 2020.

[10] Shaver, Katherine.  “Wondering what’s safe as states start to reopen? Here’s what some public health experts say.”  Washingtonpost.com, May 15, 2020.  Web. May 15, 2020.

[11] Shaver, Katherine.  “Wondering what’s safe as states start to reopen? Here’s what some public health experts say.”  Washingtonpost.com, May 15, 2020.  Web. May 15, 2020.

[12] “About Masks.”  CDC.gov, August 6, 2020.  Web.  August 14 2020.

Research Tools to Monitor Planned Interactions Through the Customer Life Cycle

As we explored in an earlier post, 3 Types of Customer Interactions Every Customer Experience Manager Must Understand, there are three types of customer interactions: Stabilizing, Critical, and Planned.

The third of these, “planned” interactions, are intended to increase customer profitability through up-selling and cross-selling.

These interactions are frequently triggered by changes in the customer’s purchasing patterns, account usage, financial situation, family profile, etc. CRM analytics combined with Big Data are becoming quite effective at recognizing such opportunities and prompting action from service and sales personnel. Customer experience managers should have a process to record and analyze the quality of execution of planned interactions with the objective of evaluating the performance of the brand at the customer-brand interface – regardless of the channel.

The key to an effective strategy for planned interactions is appropriateness. Triggered requests for increased spending must be made in the context of the customer’s needs and permission; otherwise, the requests will come off as clumsy and annoying. By aligning information about execution quality (cause) and customer impressions (effect), customer experience managers can build a more effective and appropriate approach to planned interactions.

Research Plan for Planned Interactions

The first step in designing a research plan to test the efficacy of these planned interactions is to define the campaign. Ask yourself, what customer interactions are planned based on customer behavior? Mapping the process will define your research objectives, allowing an informed judgment of what to measure and how to measure it.

For example, after acquisition and onboarding, assume a brand has a campaign to trigger planned interactions based on triggers from tenure, recency, frequency, share of wallet, and monetary value of transactions. These planned interactions are segmented into the following phases of the customer lifecycle: engagement, growth, and retention.
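
As a purely illustrative sketch of how such triggers might assign customers to lifecycle phases, consider the following; the thresholds and field names are assumptions, not a recommended segmentation scheme.

```python
# Hedged sketch: assign a customer to a lifecycle phase from tenure, recency,
# frequency, and share of wallet. All thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class CustomerProfile:
    tenure_months: int        # time since acquisition
    recency_days: int         # days since last interaction
    monthly_frequency: float  # interactions per month
    share_of_wallet: float    # 0.0 - 1.0

def lifecycle_phase(c: CustomerProfile) -> str:
    if c.tenure_months < 6:
        return "engagement"   # newly onboarded: build usage habits
    if c.recency_days > 90 or c.monthly_frequency < 1:
        return "retention"    # dormant or lapsing: win-back interactions
    if c.share_of_wallet < 0.5:
        return "growth"       # active but under-penetrated: cross-sell
    return "retention"        # engaged, high-share customers: protect the relationship

print(lifecycle_phase(CustomerProfile(tenure_months=14, recency_days=20,
                                      monthly_frequency=3.0, share_of_wallet=0.3)))
```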


Engagement Phase

Often it is instructive to think of customer experience research in terms of the brand-customer interface, employing different research tools to study the customer experience from both sides of this interface.

In our example above, management may measure the effectiveness of planned experiences in the engagement phase with the following research tools:

Customer Side

Post-Transaction Surveys: Post-transaction surveys are event-driven; a transaction or service interaction determines whether the customer is selected for a survey, targeting specific customers shortly after a service interaction. As the name implies, the purpose of this type of survey is to measure satisfaction with a specific transaction.

Overall Satisfaction Surveys: Overall satisfaction surveys measure customer satisfaction among the general population of customers, regardless of whether or not they recently conducted a transaction.  These surveys give managers a feel for satisfaction, engagement, image and positioning across the entire customer base, not just active customers.

Brand Side

Transactional Mystery Shopping: Mystery shopping is about alignment.  It is an excellent tool to align sales and service behaviors to the brand. Mystery shopping focuses on the behavioral side of the equation, answering the question: are our employees exhibiting the sales and service behaviors that will engage customers with the brand?

Alternative Delivery Channel Shopping: Website mystery shopping allows managers of these channels to test ease of use, navigation and the overall customer experience of these additional channels.

Employee Surveys: Employee surveys often measure employee satisfaction and engagement. However, they can also be employed to understand what is going on at the customer-employee interface by leveraging employees as a valuable and inexpensive source of customer experience information.  They not only provide intelligence into the customer experience, but also evaluate the level of support within the organization and identify perceptual gaps between management and frontline personnel.

 

Growth Phase

In the growth phase, one may measure the effectiveness of planned experiences on both sides of the customer interface with the following research tools:

Customer Side

Awareness Surveys: Awareness of the brand, its products and services, is central to planned service interactions.  Managers need to know how awareness and attitudes change as a result of these planned experiences.

Wallet Share Surveys: These surveys are used to evaluate customer engagement with and loyalty to the brand.  Specifically, they determine if customers consider the brand their primary provider, and identify potential roadblocks to wallet share growth.

Brand Side

Cross-Sell Mystery Shopping: In these unique mystery shops, mystery shoppers are seeded into the lead/referral process.  The sales behaviors and their effectiveness are then evaluated in an outbound sales interaction.

 

Retention Phase

Finally, planned experiences within the retention phase of the customer lifecycle may be monitored with the following tools:

Customer Side

Lost Customer Surveys: Lost customer surveys identify sources of run-off or churn to provide insight into improving customer retention.

Comment Listening: Comment tools are not new, but with modern Internet-based technology they can be used as a valuable feedback tool to identify at-risk customers and mitigate the causes of their dissatisfaction.

Brand Side

Life Cycle Mystery Shopping: Shoppers interact with the company over a period of time, across multiple touch points, providing broad and deep observations about sales and service alignment to the brand and performance throughout the customer lifecycle across multiple channels.

 

Call to Action – Make the Most of the Research

Research without call to action may be interesting, but not very useful.  Regardless of the research choices you make, be sure to build call to action elements into research design.

For mystery shopping, we find linking observations to a dependent variable, such as purchase intent, identifies which sales and service behaviors drive purchase intent – informing decisions with respect to training and incentives to reinforce the sales activities which drive purchase intent.

For surveys of customers, we recommend testing the effectiveness of the onboarding process by benchmarking three loyalty attitudes:

  • Would Recommend: The likelihood of the customer recommending the brand to a friend, relative or colleague.
  • Customer Advocacy: The extent to which the customer agrees with the statement, “You care about me, not just the bottom line.”
  • Primary Provider: Does the customer consider the brand their primary provider for similar services?

As you contemplate campaigns to build planned experiences into your customer experience, it doesn’t matter what specific model you use.  The above model is simply for illustrative purposes.  As you build your own model, be sure to design customer experience research into the planned experiences to monitor both the presence and effectiveness of these planned experiences.


 

Click Here For More Information About Kinesis' Research Services

Best Practices in Bank Customer Experience Measurement Design

The question was simple enough…  If you owned customer experience measurement for one of your bank clients, what would you do?

Through the years, I have developed a point of view on how best to measure the customer experience and shared it with a number of clients; however, I never put it down in writing.

So here it is…

Best practices in customer experience measurement use multiple inputs in a coordinated fashion to give managers a 360-degree view of the customer experience.  Just like tools in a toolbox, different research methodologies have different uses for specific needs.  It is not a best practice to use a hammer to drive a screw, nor the butt end of a screwdriver to pound a nail.  Each tool is designed for a specific purpose, but used in concert they can build a house.  The same is true for research tools: individually they are designed for specific purposes, but used in concert they can help build a more complete and complex structure.

Generally, Kinesis believes in measuring the customer experience with three broad classifications of research methodologies, each providing a unique perspective:

  1. Customer Feedback – Using customer surveys and other less “scientific” feedback tools (such as comment tools and social media monitoring), managers collect valuable input into customer expectations and impressions of the customer experience.
  2. Observation Research – Using performance audits and monitoring tools such as mystery shopping and call monitoring, managers gather observations of employee sales and service behaviors.
  3. Employee Feedback – Frontline employees are the single most underutilized asset in terms of understanding the customer experience. Frontline employees spend the majority of their time in the company-customer interface and as a result have a unique perspective on the customer experience.  They have a good idea about what customers want, how the institution compares to competitors, and how policies, procedures and internal service influence the customer experience.

These research methodologies are employed in concert to build a 360-degree view of the customer experience.

360-degree bank customer experience measurement

The key to building a 360-degree view of the customer experience is to understand the bank-customer interface.  At the center of the customer experience are the various channels which form the interface between the customer and institution.  Together these channels define the brand more than any external messaging.  Best in class customer experience research programs monitor this interface from multiple directions across all channels to form a comprehensive view of the customer experience.

Customers and front-line employees are the two stakeholders who interact most commonly with each other in the customer-institution interface.  As a result, a best practice in understanding this interface is to monitor it directly from each direction.

Tools to measure the experience from the customer side of interface include:

Post-Transaction Surveys: Post-transaction surveys provide intelligence from the other side of customer-employee interface.  These surveys are targeted, event-driven, collecting feedback from customers about specific service encounters soon after the interaction occurs.  They provide valuable insight into both customer impressions of the customer experience, and if properly designed, insight into customer expectations.  This creates a learning feedback loop, where customer expectations can be used to inform service standards measured through mystery shopping.  Thus two different research tools can be used to inform each other.  Click here for a broader discussion of post-transaction surveys.
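
As an illustration of the event-driven selection logic described above, the following sketch samples a share of recent transactions while suppressing customers who were surveyed recently; the sampling rate, suppression window, and field names are arbitrary assumptions rather than a recommended design.

```python
# Hedged sketch: event-driven post-transaction survey sampling with a
# re-contact suppression window.
import random
from datetime import datetime, timedelta
from typing import Dict

SAMPLE_RATE = 0.10                       # survey roughly 10% of eligible transactions
SUPPRESSION_WINDOW = timedelta(days=90)  # don't re-survey the same customer within 90 days

last_surveyed: Dict[str, datetime] = {}  # customer_id -> date last invited to a survey

def select_for_survey(customer_id: str, transaction_time: datetime) -> bool:
    """Return True if this transaction should trigger a survey invitation."""
    prior = last_surveyed.get(customer_id)
    if prior is not None and transaction_time - prior < SUPPRESSION_WINDOW:
        return False                     # surveyed recently; suppress
    if random.random() > SAMPLE_RATE:
        return False                     # not sampled this time
    last_surveyed[customer_id] = transaction_time
    return True

print(select_for_survey("C-001", datetime(2020, 9, 1, 14, 30)))
```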

Customer Comments:  Beyond surveying customers who have recently conducted a service interaction, a best practice is to provide an avenue for customers who want to comment on the experience.  Comment tools are not new (in the past they were the good old fashioned comment card), but with modern Internet-based technology they can be used as a valuable feedback tool to identify at risk customers and mitigate the causes of their dissatisfaction.  Additionally, comment tools can be used to inform the post transaction surveys.  If common themes develop in customer comments, they can be added to the post-transaction surveys for a more scientific measurement of the issue.  Click here for a broader discussion of comment tools.

Social Monitoring:  Increasingly, social media is “the media”; prospective customers assign far more weight to social media than to any external messaging.  A social listening system that analyzes and responds to indirect social feedback is increasingly becoming essential.  As with comment tools, social listening can be used to inform the post-transaction surveys.  Click here for a broader discussion of social listening tools.

Directing our attention to the bank side of the interface, tools to measure the experience from the bank side of bank-customer interface include:

Mystery Shopping:  In today’s increasingly connected world, one bad experience could be shared hundreds if not thousands of times over.  As in-person delivery models shift to a universal associate model, with the branch serving as more of a sales center, monitoring and motivating selling skills is becoming increasingly essential.  Mystery shopping is an excellent tool to align sales and service behaviors to the brand. Unlike the various customer feedback tools designed to inform managers about how customers feel about the bank, mystery shopping focuses on the behavioral side of the equation, answering the question: are our employees exhibiting appropriate sales and service behaviors?  Click here for a broader discussion of mystery shopping tools.

Employee Surveys:  Employee surveys often measure employee satisfaction and engagement.  However, in terms of understanding the customer experience, a best practice is to move employee surveys beyond employee engagement and to understand what is going on at the customer-employee interface by leveraging employees as a valuable and inexpensive source of customer experience information.  This information comes directly out of one side of the customer-employee interface, and not only provides intelligence into the customer experience, but also evaluates the level of support within the organization, solicits recommendations, and compares perceptions by position (frontline vs. management) to identify perceptual gaps which typically exist within organizations.  Click here for a broader discussion of employee surveys.



Click Here For More Information About Kinesis' Bank CX Research Services

Leverage Unrecognized Experts in the Customer Experience: Best Practices in Bank Customer Experience Measurement Design – Employee Surveys

Bank Employee Surveys

Frontline customer facing employees (tellers, platform, and contact center agents) are a vastly underutilized resource in terms of understanding the customer experience.  They spend the majority of their time in the customer-bank interface, and as a result tend to be unrecognized experts in the customer experience.

An excellent tool to both leverage this frontline experience and identify any perceptual gaps between management and the frontline is to survey all levels of the organization to gather impressions of the customer experience.  These surveys can be fielded very efficiently online.

Typically, we start by asking employees to put themselves in the customers’ shoes and to ask how customers would rate their satisfaction with the customer experience, including specific dimensions and attributes of the experience.  A key call-to-action element of these surveys tends to be a question asking employees what they think customers most like or dislike about the service delivery.

Next we focus employees on their own experience, asking the extent to which they believe they have all the tools, training, processes, policies, customer information, coaching, staff levels, empowerment, and support of both their immediate supervisor and senior management to deliver on the company’s service promise.  Call-to-action elements can be designed into this portion of the research by asking what, in their experience, leads to customer frustration or disappointment, and soliciting suggestions for improvement.   Perhaps most interesting, we ask what are some of the strategies the employee uses to make customers happy.   This is an excellent source for identifying best practices and potential coaches.

Finally, comparing results across the organization identifies any perceptual gaps between the frontline and management.  This can be a very illuminating activity.



Click Here For More Information About Kinesis' Employee Engagement Research

Internal Customer Survey Design Tips

It is a commonly accepted principle that internal customer service – the service all employees provide each other – has a significant influence on employee satisfaction and turnover, as well as on the customer experience, customer loyalty, retention and wallet share. Key to managing any process, whether it is internal customer service or not, is information. This post outlines a survey tool to measure internal customer service in a manner that not only reveals what is going on, but also informs management decisions with respect to the internal service environment. Kinesis has had success measuring the internal environment using a fairly simple internal customer service survey process with key call-to-action elements built into the program.


Data collection for such internal customer service surveys is typically performed via an online survey instrument. Online surveys from an external provider allow for a promise of anonymity, letting employees be completely candid in their input, with the additional benefit of being extremely cost efficient.

Generally, internal customer service surveys start with a screener asking what departments the employee has interacted with within a specific time period (typically the past three or six months). From this screener, survey logic determines what departments the employee will be asked to evaluate.
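
A minimal sketch of that screener-driven skip logic might look like the following; the department list and question wording are placeholders, not a recommended questionnaire.

```python
# Hedged sketch: the departments an employee reports interacting with determine
# which departmental evaluation blocks they are asked to complete.
ALL_DEPARTMENTS = ["IT", "HR", "Operations", "Finance", "Marketing"]
ATTRIBUTE_QUESTIONS = ["Integrity", "Respect", "Quality", "Teamwork"]

def build_survey(departments_interacted_with):
    """Return the list of questions this respondent should be asked."""
    questions = []
    for dept in departments_interacted_with:
        if dept not in ALL_DEPARTMENTS:
            continue  # ignore write-ins not on the screener list
        for attribute in ATTRIBUTE_QUESTIONS:
            questions.append(f"Rate {dept} on {attribute} (1-5).")
    # Call-to-action element appended for every respondent.
    questions.append("What would you most like to change about the way our departments serve each other?")
    return questions

for q in build_survey(["IT", "Finance"]):
    print(q)
```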

Departmental Evaluations

The next step in the survey process is to ask the employee to evaluate each department they have interacted with recently. We typically present the employee with a series of internal customer service survey questions based on specific service attributes. Among some of the service attributes we have had success measuring are:

  • Integrity
  • Respect
  • Quality
  • Teamwork

As an option, you may want to consider using a battery of adjectives to evaluate each department. This technique presents employees with a list of adjectives and asks which, if any, of the adjectives describe each department. Some of the adjectives Kinesis has had success using include:

  • Attentive
  • Efficient
  • Experts
  • Empathetic
  • Respectful
  • Knowledgeable
  • Accountable
  • Reliable
  • Responsive
  • Courteous

Overall Company Evaluation

In addition to evaluating individual departments, it can also be instructive to evaluate the company as a whole. There are a number of ways to do this. One technique is to ask yourself what your brand personality is, and evaluate each department against this benchmark. Another is to ask employees the extent to which they agree with the following statements:

  • We are the preferred provider in our industry.
  • Our employees build customer relationships to deliver exceptional service.
  • Our employees are empowered to serve customers.
  • Our strategy is forward thinking.
  • Our strategy ensures efficient and cost-effective operations.

Call to Action

Finally, research without call to action elements may be interesting, but not very useful. We always build several “call to action” elements into our internal customer service surveys, which are designed to identify ways clients can act on the research. Perhaps the simplest of these techniques is to solicit ideas for improvement with an open-ended question, such as:

What would you most like to change about the way our departments serve each other?

Or conversely,

What do you like best about the way our departments serve each other?

The answers to these questions will point managers in the right direction in terms of improving internal service.

Again, information is critical to managing internal customer service. The benefits of managing internal customer service and integrating it into your culture include improved collaboration, workflows, communication and productivity, while at the same time reducing costs – with the ultimate benefit chain including stronger employee and customer satisfaction, lower turnover, and increased customer loyalty.


Click Here For More Information About Kinesis' Research Services

Does Your Frontline Understand the Customer Experience Better than the CEO?

Frontline customer-facing employees are a vastly underutilized resource in terms of understanding the customer experience. They spend the majority of their time in the company-customer interface, and as a result tend to be unrecognized experts in the customer experience. Conversely, the further management is removed from the customer interface, the less it truly understands some of the details of what is going on.

One tool to both leverage frontline experience and identify any perceptual gaps between management and the frontline is to survey all levels of the organization to gather impressions of the customer experience.

Typically, we start by asking employees to put themselves in the customers’ shoes and to ask how customers would rate their satisfaction with the customer experience, including specific dimensions and attributes of the experience. A key call-to-action element of these surveys tends to be a question asking employees what they think customers would most like or dislike about the service delivery.

Next we focus employees on their own experience, asking the extent to which they believe they have all the tools, training, processes, policies, customer information, coaching, staff levels, empowerment, and support of both their immediate supervisor and senior management to deliver on the company’s service promise. Call-to-action elements can be designed into this portion of the research by asking what, in their experience, leads to customer frustration or disappointment, and soliciting suggestions for improvement. Perhaps most interesting we ask what are some of the strategies the employee uses to make customers happy – this is an excellent source for identifying best practices and potential coaches.

Finally, comparing results across the organization identifies any perceptual gaps between the frontline and management. This can be a very illuminating activity.

And why not leverage employees as a resource for understanding the customer experience? They spend most of their time in the company-customer interface, and are therefore experts in what is actually going on. Secondly, employees and customers generally want the same things.

  • Customers want to get what they are promised; employees want the tools, systems, and policies to do their job.
  • Customers want their problems resolved; employees want empowerment to solve problems.
  • Customers want their needs listened to and understood; employees want more and better feedback.
  • Customers want knowledgeable employees and adequate information; employees want more training and more/better feedback.
  • Customers want employees to take the initiative, take responsibility, and represent the company; employees want empowerment, clear priorities, and inclusion in the company’s big picture.
  • Customers want the company to value their business; employees want clear priorities and the tools/systems/policies to do their job.

 


Click Here For More Information About Kinesis' Employee Engagement Research

People: The Fifth-P of Marketing

We are all familiar with the Four-P’s of marketing: Product, Price, Promotion and Place – but maybe there is a Fifth-P. Recent research by Kinesis has determined the importance bank representatives play in driving purchase intent. The Fifth-P is People.

In the first quarter of 2004, Kinesis conducted a survey of people who had recently visited a bank branch to inquire about a retail banking product or service. The purpose of this research was to determine the key drivers of purchase intent after a branch visit. Six hundred ninety-four people (n=694) were asked to rate how the experience at the branch influenced their intention to purchase a product or service from the bank on a five-point rating scale anchored at -2 and +2, where -2 was defined as “significantly decreased purchase intent,” +2 was defined as “significantly increased purchase intent,” and zero (the mid-point) was defined as “had no influence on purchase intent.” Paired with this purchase intent rating, respondents were asked why the experience influenced their purchase intent as it did. The responses to this open-ended question were grouped according to common themes and cross-tabulated against the purchase intent rating to determine what elements of the branch experience drive purchase intent. The results of this cross-tabulation are presented in the accompanying chart.
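
For readers interested in the mechanics, a cross-tabulation of this kind can be reproduced with a few lines of code; the data below are invented for illustration and do not reproduce the study's actual responses.

```python
# Hedged sketch: tabulate coded open-ended reason themes against the -2..+2
# purchase intent rating, and compute each theme's share of mentions.
import pandas as pd

coded = pd.DataFrame({
    "purchase_intent": [2, 2, 1, 0, -1, 2, 1, 2],
    "reason_theme": [
        "Friendly service", "Product knowledge", "Friendly service", "Product reasons",
        "Pushy sales approach", "Attentive to needs", "Branch atmosphere", "Friendly service",
    ],
})

# Rows are themes, columns are purchase intent ratings; cells count mentions.
crosstab = pd.crosstab(coded["reason_theme"], coded["purchase_intent"])
print(crosstab)

# Share of mentions per theme (analogous to the percentages reported below).
print((coded["reason_theme"].value_counts(normalize=True) * 100).round(0))
```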

Once a prospective customer enters the branch, the platform representative clearly drives purchase intent. Over two-thirds (69%) of the reasons given for positive purchase intent were the result of branch personnel; fewer than one-in-five (18%) were product related, while 8% were due to the branch atmosphere.

The branch personnel driven elements include: generally positive friendly service (26%), product knowledge/informative/confidence in the representative (16%), attentive to needs/ interest in helping/ personalized service (14%), and professional/ respectful/ not pushy employees (10%). Prospective customers want confidence and trust not just in the institution, but also in the people who are the human face of the institution. Product knowledge, attention to personal needs, and a respectful professional sales approach are all very important in establishing the trust necessary to earn the right to play a role in the household’s financial business.

Reasons Behind Purchase Intent

  • Friendly Employees / General Positive Customer Service: 26%
  • Product Reasons: 18%
  • Rep. Product Knowledge / Informative / Confidence in Rep.: 16%
  • Attentive to Needs / Interest in Helping / Personalized Service: 14%
  • Professional / Respectful Employees / Not Pushy: 10%
  • Branch Atmosphere: 8%
  • Felt Valued as a Customer: 3%

In a follow-up study, Kinesis wanted to investigate the key drivers of purchase intent as a result of the sales presentation for more complex financial services. Three hundred fourteen people (n=314) who had recently participated in a sales presentation for bank-marketed investment products (annuities, mutual funds, and other securities from Series 6 and Series 7 licensed sales people) were given the same survey. If trust and confidence in the platform personnel is important for retail banking, it is even more important for investment products.

Over nine-out-of-ten (91%) respondents mentioned an employee-related reason as driving purchase intent; only a handful mentioned product (3%) or branch atmosphere (2%) as drivers of purchase intent. The themes of trust and confidence are even more evident in investment presentations. Interest in helping/attentive to needs (43%), product knowledge/informative/confidence in the representative (28%), and professional/respectful employees (13%) were the top three reasons for positive purchase intent. Furthermore, the strength of these attributes’ influence is striking: 78% of the respondents who mentioned these attributes assigned the highest purchase intent rating (+2). Not only are they mentioned with significant frequency, but the strength of their influence on purchase intent is significant as well.

As banks continue to expand their offerings to more sophisticated products, the professionalism and personalization of the sales process is critical to garnering the trust necessary to earn the household’s business. Not only is this due to the sophistication of these products, but also because of their risk profile. Customers understand this increased risk and therefore require even more personalized and professional attention to feel comfortable giving their business to an institution, even if the institution is a bank with all the trust usually associated with bank brands.

Once a prospective client is in the branch, the people in the sales process drive purchase intent. Product, Price, Promotion and Place all play a role in getting prospective clients to the branch, but it is the people involved in the sales process who clearly drive purchase intent from that point on. Perhaps People really doesn’t deserve elevation to the status of Fifth-P; in fact, the people in the sales process are part of the Fourth-P, Place. However, consider this: financial institutions invest heavily in marketing efforts for both core bank products and increasingly sophisticated products. Yet all this marketing investment is wasted if it is not supported by the people involved in the sales process. It is these people who connect with prospective customers on a personal level and, as a result, influence the sale and maximize return on marketing investments. That’s worth repeating – the people in the sales process maximize ROI on marketing dollars.


Click Here For More Information About Kinesis' Employee Engagement Research
