Archive | Acting On Research

Critical Incident Technique: A Tool to Identify and Prepare for Your Moments of Truth

As we explored in an earlier post, 3 Types of Customer Interactions Every Customer Experience Manager Must Understand, there are three types of customer interactions: Stabilizing, Critical, and Planned.

The second of these, “critical” interactions, are service encounters that are out of the ordinary (a complaint, a question, a special request, an employee going the extra mile).  The outcomes of these critical incidents can be either positive or negative, depending on how they are handled; they are rarely neutral.  Because they are memorable and unusual, critical interactions tend to have a powerful effect on the customer relationship.  They are “moments of truth,” where the brand has an opportunity to solidify the relationship or risk defection.

Customer experience strategies need to include systems for identifying common or potential moments of truth, analyzing trends and patterns, and feeding that information back to the organization. Employees can then be trained to recognize critical opportunities, and empowered to respond to them in such a way that they will lead to positive outcomes and desired customer behaviors.  One way to identify potential moments of truth and gauge the efficacy of service recovery strategies is a research technique called Critical Incident Technique (CIT).

Critical Incident Technique

CIT is a qualitative research methodology designed to uncover details surrounding a service encounter that a customer found particularly satisfying or dissatisfying. There is plenty of room for freedom in study design, but basically what we are trying to find out is what happened, what the customer did in response to the incident (positive or negative), what recovery strategy was used for negative incidents, and how effective that recovery strategy was.

Again, there is a lot of freedom here, but a typical study design looks roughly like this:

First, ask the research participant to recall a recent experience in your industry that was particularly satisfying or dissatisfying.  Now, ask open-ended probing questions to gather the who, what, when, why and how surrounding that experience, questions like:

  • When did the incident happen?
  • What caused the incident? What are the specific circumstances that led to the incident or situation?
  • Why did you feel the incident was particularly satisfying or dissatisfying?
  • How did the provider respond to the incident? How did they correct it?
  • What action(s) did you take as a result of the incident?

The analysis of CIT interviews consists of classifying these incidents into well defined, mutually exclusive categories and sub-categories of increasing specificity.  For example, the researcher may classify incidents into the following categories:

  • Service Delivery System Failures
    1. Unavailable Service
    2. Unreasonably Slow Service
    3. Other Core Service Failures
  • Customer Needs and Requests
    1. Special Customer Needs
    2. Customer Preferences
  • Unprompted and Unsolicited Actions
    1. Attention Paid to Customer
    2. Truly Out of the Ordinary Employee Behavior/Performance
    3. Holistic Evaluation
    4. Performance Under Adverse Circumstances
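To make the analysis step concrete, the tallying of classified incidents can be sketched in a few lines of Python. The incident records below are hypothetical examples, coded into the categories listed above; a real study would draw them from interview transcripts.

```python
from collections import Counter

# Hypothetical coded incidents from CIT interviews:
# (category, sub-category, outcome). The labels mirror the
# classification scheme described above.
incidents = [
    ("Service Delivery System Failures", "Unreasonably Slow Service", "dissatisfying"),
    ("Unprompted and Unsolicited Actions", "Attention Paid to Customer", "satisfying"),
    ("Customer Needs and Requests", "Special Customer Needs", "satisfying"),
    ("Service Delivery System Failures", "Unavailable Service", "dissatisfying"),
    ("Service Delivery System Failures", "Unreasonably Slow Service", "dissatisfying"),
]

# Tally incidents by category to surface the most common moments of truth,
# and by outcome to gauge the balance of satisfying vs. dissatisfying events.
by_category = Counter(category for category, _, _ in incidents)
by_outcome = Counter(outcome for _, _, outcome in incidents)

for category, count in by_category.most_common():
    print(f"{category}: {count}")
```

The same tallying approach extends naturally to recovery strategies and customer actions: add those as additional fields on each incident record and count them the same way.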

A similar classification technique should be used to group both recovery strategies and their effectiveness, as well as to classify the attitudinal and behavioral results for the customer: identifying the ways the customer changed their behavior toward, or relationship with, the brand based on the incident. For example: did they purchase more or less, tell others about the experience directly or via social media, call for support more or less often, use different channels, or change providers?

The end result of this analysis is a list of common moments of truth within your industry, an understanding of how customers change their behavior, in profitable or unprofitable ways, as a result of each moment of truth, and an evaluation of the effectiveness of recovery strategies. This gives managers an informed perspective from which to prepare employees to recognize moments of truth and respond in ways that lead to positive outcomes.

 

For additional perspectives on moments of truth, see the post: 4 Ways to Understand & Manage Moments of Truth.

Click Here For More Information About Kinesis' Research Services

Translate Research to Action with a VOC Table

Ask any group of satisfaction researchers and consumers of satisfaction research about the largest problem facing the research industry, and the lack of actionability (or usefulness) of the research will most likely be the most common concern raised.  All too often, research is conducted, and reports are produced and bound into professional-looking binders that end up gathering dust on a shelf someplace or, if you’re like me, providing excellent use as a doorstop.

What is missing is a strategy to transition research into action, and bring the various stakeholders into the research process.

Managers and researchers alike are faced with the difficult task of determining where to make investments, and predicting the relative return on such investments.  One such tool for transforming research into action is the Voice of the Customer (VOC) table.

A VOC Table is an excellent tool to match key satisfaction dimensions and attributes with business processes, and allow managers to make informed judgments regarding which business process will have the most return in terms of satisfaction improvement.

A VOC Table supports this transition by listing the key survey elements on the vertical axis, sorting each attribute by an importance rating.  On the horizontal axis, a complete list of business functions is listed.  The researcher and manager then match business processes/functions with key survey elements and make judgments regarding the extent to which each business function influences each key survey element (in the enclosed example, a dark filled-in square represents a strong influence, an unfilled square a moderate influence, and a triangle a slight influence).  A numeric value is assigned to each influence (typically, a value of four for a strong influence, two for a medium influence, and one for a weak influence).  For each cell in the table, a value is calculated by multiplying the strength of the influence by the importance rating of the survey element.  Finally, the cell values are summed for each column (business function) to determine which business functions have the most influence on customer satisfaction.

Consider the enclosed example of a VOC table.  In this example, a retail mortgage-lending firm has conducted a wave of customer satisfaction research and intends to link this research to process improvement initiatives using the attached VOC Table.  The satisfaction attributes and their relative importance, as determined in the survey, are listed in the far left column.  Specific business processes from loan origination to closing are listed across the top of the table.  For each cell, where a satisfaction attribute and a business process intersect, the researchers have made a judgment of the strength of the business process’s influence on the satisfaction attribute.  For example, the researchers have determined proper document collection to have a strong influence on the firm’s ability to perform services right the first time, and a weak influence on willingness to provide service.  For each cell, the strength of the influence is multiplied by the importance.  The sum of the values of each cell in each column determines the relative importance of each business process in influencing overall customer satisfaction.

In the example, the loan quote process and clearance of underwriting exemptions are the two parts of the lending process with the greatest influence on customer satisfaction, followed closely by the explanation of the loan process.  The other three aspects of the loan process of significance are document collection, application, and preliminary approval.  The least important are document recording and credit and title report ordering.  The managers of this hypothetical lending institution now know what parts of the lending process to focus on to improve customer satisfaction.  Furthermore, in addition to knowing which specific events to focus on, they also know, generally speaking, that improvements in the loan origination process will yield more return in terms of customer satisfaction than improvements in processing, underwriting, and closing, as all the loan origination elements have a comparatively strong influence on satisfaction.
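The VOC-table arithmetic described above can be sketched in a few lines of Python. The importance ratings and influence judgments below are hypothetical, illustrative values, not the actual figures from the mortgage-lending example.

```python
# Sketch of the VOC-table calculation with hypothetical values.
# Influence strengths follow the scheme described above:
# strong = 4, medium = 2, weak = 1.
importance = {                       # survey attributes -> importance rating
    "Perform service right first time": 9,
    "Willingness to provide service": 7,
}

# influence[attribute][process] = strength of that process's influence
influence = {
    "Perform service right first time": {"Document collection": 4, "Loan quote": 4},
    "Willingness to provide service":   {"Document collection": 1, "Loan quote": 2},
}

# Each cell value = influence strength x attribute importance;
# summing each column ranks the business processes.
process_scores = {}
for attribute, processes in influence.items():
    for process, strength in processes.items():
        process_scores[process] = (
            process_scores.get(process, 0) + strength * importance[attribute]
        )

# Rank business processes by total influence on satisfaction.
for process, score in sorted(process_scores.items(), key=lambda kv: -kv[1]):
    print(process, score)
```

With these illustrative numbers, the loan quote column sums to 50 (4×9 + 2×7) and document collection to 43 (4×9 + 1×7), so the loan quote process would be the higher-leverage target for improvement.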

The VOC table is an excellent tool to transition customer satisfaction research into action, include various stakeholders in the research process, and generally increase the actionability and return on research investment both in terms of increased satisfaction and financial ROI.



Not All Customer Experience Variation is Equal: Use Control Charts to Identify Actual Changes in the Customer Experience

Variability in customer experience scores is common and normal. Be it a survey of customers, mystery shops, social listening or other customer experience measurement, a certain amount of random variation in the data is normal. As a result, managers need a means of interpreting any variation in their customer experience measurement to evaluate if the customer experience is truly changing, or if the variation they are seeing is simply random.

One solution to this need is control charts. Control charts are a statistical tool commonly used in Six Sigma programs to measure variation. They track customer experience measurements within upper and lower quality control limits. When measurements fall outside either limit, the trend indicates an actual change in the customer experience rather than just random variation.

To illustrate this concept, consider the following example of mystery shop results:

Mystery Shop Scores

In this example the general trend of the mystery shop scores is up; however, from month to month there is a bit of variation.  Managers of this customer experience need to know whether July was a particularly bad month and, conversely, whether the improved performance in October and November is something to be excited about.  Does it represent a true change in the customer experience?

To answer these questions, there are two more pieces of information we need to know beyond the average mystery shop scores: the sample size or count of shops for each month and the standard deviation in shop scores for each month.

The following table adds these two additional pieces of information into our example:

Month      Count of Mystery Shops   Average Mystery Shop Score   Standard Deviation of Shop Scores
May        510                      83%                          18%
June       496                      84%                          18%
July       495                      82%                          20%
Aug        513                      83%                          15%
Sept       504                      83%                          15%
Oct        489                      85%                          14%
Nov        494                      85%                          15%
Averages   500                      83.6%                        16.4%

Now, in order to determine whether the variation in shop scores is significant or not, we need to calculate upper and lower quality control limits, where any variation above or below these limits is significant, reflecting an actual change in the customer experience.

The upper and lower quality control limits (UCL and LCL, respectively), at a 95% confidence level, are calculated according to the following formulas:

UCL = x + 1.96 × (SD / √n)

LCL = x - 1.96 × (SD / √n)

Where:

x = Grand Mean of the score

n = Mean sample size (number of shops)

SD = Mean standard deviation

Applying these equations to the data in the table above produces the following control chart, where the upper and lower quality control limits are depicted in red.

Control Chart

This control chart tells us not only that the general trend of the mystery shop scores is positive and that November’s performance has improved above the upper control limit, but also that something unusual happened in July, when performance slipped below the lower control limit. Perhaps employee turnover caused the decrease, or something external such as a weather event was the cause, but we know with 95% confidence that the attributes measured in July were less present relative to the other months.  All other variation outside of November and July is not large enough to be considered statistically significant.
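The limit calculation can be reproduced with a short Python sketch using the monthly values from the table above:

```python
from math import sqrt

# Monthly mystery shop data from the table above.
months = ["May", "June", "July", "Aug", "Sept", "Oct", "Nov"]
counts = [510, 496, 495, 513, 504, 489, 494]
scores = [0.83, 0.84, 0.82, 0.83, 0.83, 0.85, 0.85]
sds    = [0.18, 0.18, 0.20, 0.15, 0.15, 0.14, 0.15]

# Grand mean score, mean sample size, and mean standard deviation.
grand_mean = sum(scores) / len(scores)
mean_n = sum(counts) / len(counts)
mean_sd = sum(sds) / len(sds)

# 95% control limits: grand mean +/- 1.96 standard errors.
ucl = grand_mean + 1.96 * mean_sd / sqrt(mean_n)
lcl = grand_mean - 1.96 * mean_sd / sqrt(mean_n)

# Flag months whose average scores fall outside the limits.
outside = [m for m, s in zip(months, scores) if s > ucl or s < lcl]
print(f"LCL={lcl:.3f}, UCL={ucl:.3f}, outside: {outside}")
```

With these rounded inputs the limits come out to roughly 82.1% and 85.0%, so July’s 82% falls clearly below the lower limit, while November’s 85% sits essentially at the upper limit; whether it crosses depends on rounding in the underlying data, which is why the chart built from unrounded scores is the authoritative view.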

So…what this control chart gives managers is a meaningful way to determine if any variation in their customer experience measurement reflects an actual change in the experience as opposed to random variation or chance.

In the next post, we will look to the causes of this variation.

Next post:

Not All Customer Experience Variation is Equal: Common Cause vs. Special Cause Variation

 


Not All Customer Experience Variation is Equal: Common Cause vs. Special Cause Variation

Variability in customer experience scores is common and normal.  Be it a survey of customers, mystery shops, social listening or other customer experience measurement, a certain amount of random variation in the data is normal.  As a result, managers need a means of interpreting any variation in their customer experience measurement to evaluate if the customer experience is truly changing, or if the variation they are seeing is simply random.

In a previous post, we proposed the use of control charts as a tool to track customer experience measurements within upper and lower quality control limits, giving managers a meaningful way to determine if any variation in their customer experience measurement reflects an actual change in the experience as opposed to random variation or chance.

Now, managers need to understand the causes of variation, specifically common cause and special cause variation.  Common and special cause variation are Six Sigma concepts; while most commonly used in industrial production, they can be borrowed and applied to the customer experience.

Common Cause Variation:  Much like variation in the roll of dice, common cause variation is natural variation within any system.  Common cause variation is any variation constantly active within a system, and represents statistical “noise” within the system.

Examples of common cause variation in the customer experience are:

  • Poorly defined, poorly designed, inappropriate policies or procedures
  • Poor design or maintenance of computer systems
  • Inappropriate hiring practices
  • Insufficient training
  • Measurement error

Special Cause Variation: Unlike the roll of the dice, special cause variation is not probabilistically predictable within the system.  As a result, it does not represent statistical “noise” within the system; it is the signal within the system.

Examples of special cause variation include:

  • High demand/ high traffic
  • Poor adjustment of equipment
  • Just having a bad day

When measuring the customer experience, it is helpful to consider everything within the context of the company-customer interface.  Every time a sales or service interaction occurs within this interface, the customer learns something from the experience and adjusts their behavior as a result.  Managing the customer experience is the practice of managing what customers learn from the experience and thus managing their behavior in profitable ways.

A key to managing customer behaviors is understanding common cause and special cause variation and their implications.  Common cause variation is variation built into the system: policies, procedures, equipment, hiring practices, and training.  Special cause variation is more or less how the human element and the system interact.

See earlier post:

Not All Customer Experience Variation is Equal: Use Control Charts to Identify Actual Changes in the Customer Experience

 


Customer Experience Measurement in the Coronavirus Age

Earlier in this three-part series we discussed the mechanism and risk of SARS-CoV-2 infection, and the implications of the pandemic on the customer experience.

For many brands, this pandemic represents a moment of truth with their customers.  Moments of truth are specific experiences of high importance, where a customer either forms or changes their opinion of a brand in meaningful and lasting ways.  Customers are stressed.  They feel uncertainty, fear and, frankly, exhaustion.  This uncertainty and fear drive customers to seek shelter in resources they trust.  Brands which become a trusted resource, which provide comfort, true comfort, in the face of this crisis have an opportunity not only to do the right thing, but to cement their customers’ relationship with the brand.  On the other hand, brands which fail to do so risk the destruction of their customer relationships.

Perhaps the most important way brands can respond to the moment of truth presented by this crisis is showing true care for: customers, employees, and the community.

Additionally, it is imperative that customers feel safe.  Based on current science, in-person interactions can be relatively safe if conducted within CDC and public health guidance, including risk mitigation efforts such as physical distancing, masks, ventilation, limited length of exposure, and hand washing and sanitizer.

Using these previous posts as a foundation, we can now address the implications of the pandemic on customer experience measurement.

So… what does all this mean in terms of customer experience measurement?

First, I like to think of the customer experience measurement in terms of the brand-customer interface where customers interact with the brand.  At the center of the customer experience are the various channels which form the interface between the customer and institution. Together, these channels define the brand more than any external messaging. Best-in-class customer experience research programs monitor this interface from multiple directions across all channels to form a comprehensive view of the customer experience.

Customers and front-line employees are the two stakeholders who interact most commonly with each other in the customer-institution interface. As a result, a best practice in understanding this interface is to monitor it directly from each direction: surveying customers from one side, gathering observations from employees on the brand side, and testing for the presence and timing of customer experience attributes through observational research such as mystery shopping.

Measure Customer Comfort and Confidence

First, fundamentally, the American economy is a consumer-confidence-driven economy.  Consumers need to feel confident in public spaces to participate in public commerce.  Customer experience researchers would be well served by testing for consumer confidence with respect to safety and mitigation strategies.  These mitigation strategies are quickly becoming consumer requirements for confidence in public commerce.

The American economy is driven by consumer confidence.

Along the same lines, given the centrality of consumer confidence in our economy, measuring how customers feel about the mitigation strategies put in place by the brand is extremely important.  Such measurements would include measures of appropriateness, effectiveness, and confidence in the mitigation strategies employed.  We recommend two measurements: how customers feel about the safety of the brand’s in-person channel in general, and how they feel about its safety relative to other brands they interact with during the pandemic.  The first is an absolute measure of comfort; the second attempts to isolate the variable of the pandemic, measuring just the brand’s response.

The pandemic is changing consumer behavior. This much is clear.  As such, customer experience researchers should endeavor to identify and understand how consumer behavior is changing so they can adjust the customer experience delivery mix to align with these changes.

Testing Mitigation Strategies

Drilling down from broader research issues to mystery shopping specifically, there are several research design issues that should be considered in response to the COVID-19 pandemic.

Measure Customer Confidence in Post-Transaction Surveys with Alerts to Failures:  First, as economic activity waxes and wanes through this coronavirus mitigation effort, consumer confidence will drive economic activity on both a macro- and micro-economic level.  Broadly, consumers as a whole will not participate in the in-person economy until they are confident the risk of infection is contained.  Pointedly, at the individual business level, customers will not return to a business if they feel unsafe.  Therefore, market researchers should build measures of comfort or confidence into post-transaction surveys to measure how the customer felt as a result of the experience.  This will alert managers to potentially unsafe practices that must be addressed.  It will also serve as a means of directly measuring the return on investment (ROI) of customer confidence and safety initiatives in terms of the customer experience.

Measure Customer Perception of Mitigation Strategies:  Coronavirus mitigation strategies will become typical attributes of the customer experience.   Beyond simply testing for the presence of these mitigation strategies, customer experience managers should determine customer perceptions of their appropriateness, efficacy, and perhaps most importantly, their confidence in these mitigation strategies.

Gather Employee Observations of Mitigation Strategies:  Frontline employees spend nearly all their time in the brand-customer interface.  As such, they have always been a wealth of information about the customer experience, and they can be surveyed very efficiently.  The post-pandemic customer experience is no exception.

First, as we discussed previously, employees have the same personal safety concerns as customers.   Surveys of employees should endeavor to evaluate employees’ confidence in and comfort with coronavirus mitigation strategies. 

Secondly, frontline employees, being placed in the middle of the brand-customer interface, are in a perfect position to give feedback regarding the efficacy of mitigation strategies and the extent to which they fit into the desired customer experience, providing managers with valuable insight into adjustments that may make mitigation strategies fit more precisely into the overall customer experience objectives.

Independently Test for the Presence of Mitigation Strategies:  All in-person channels across all industries will require the adoption of coronavirus mitigation strategies.  Mystery shopping is the perfect tool to test for the presence of mitigation strategies – evaluating such strategies as: designed physical distancing, physical barriers between POS personnel and customers, mask compliance, sanitization, and duration of contact.

Alternative Research Sources for Behavioral Observations:  Some customer experience managers may not want unnecessary people within their in-person channel.  So the question arises, how can employee behaviors be measured without the use of mystery shoppers?  One solution is to solicit behavioral observations directly from actual customers shortly after the in-person service interaction.  Customers can be recruited onsite to provide their observations through the use of QR codes, or in certain industries after the event via e-mail.  The purpose of these surveys is behavioral – asking the customers to recall if a specific behavior or service attribute was present during the encounter.  From a research design standpoint, this practice is a little suspect, as asking people to recall the specifics about an event after the fact, without prior knowledge, is problematic.  Customers are not prepared or prompted to look for and recall specific events.  However, given the unique nature of the circumstances we are under, in some cases there is an argument that the benefits of this approach outweigh the research limitations.

Test Channel Performance and Alignment

The instantaneous need for alternative delivery channels has significantly raised the stakes in cross-channel alignment.  As sales volume shifts to these alternative channels, customer experience researchers need to monitor the customer experience within all channels to measure the efficacy of the experience, as well as alignment of each channel to both each other and the overall brand objectives.

Finally, as more customers migrate away from in-person channels, customer experience researchers should endeavor to measure the customer experience within each channel.  As more late adopters are forced by the pandemic to migrate to these channels, they may bring with them a completely different set of expectations relative to early adopters.  Managers would therefore be well served to understand the expectations of these newcomers to the alternative channels so they can adjust the customer experience to meet these new customers’ expectations.

As commerce migrates away from conventional in-person channels to alternative delivery channels, the importance of these channels will increase.  As a result, the quality and consistency of delivery in these channels will need to be measured through the use of mystery shoppers.  Some industries are going to be problematic, as their economics do not currently support alternative delivery.  With time, however, economic models will evolve to support alternative channels.

Conclusion

This is a difficult time.  It will be the defining event of our generation.

The pandemic, and our reaction to it, is dramatically changing how humans interact with each other, and the customer experience is no exception.  There is reason to suggest this difficult time could become a new normal.  Managers of the customer experience need to understand the implications of the post-COVID environment for the customer experience, as the effects of the pandemic may never fully subside.  Customer experience managers must consider the implications of this new normal, not only for the customer experience, but for customer experience measurement.

Customer Experience Measurement in the Coronavirus Age: Implications for Customer Experience

Earlier in this three-part series we discussed the mechanism of infection and risk of SARS-CoV-2 infection. 

In summary, the most common means of spread is believed to be airborne: inhaling virus particles exhaled into the environment.  The infectious dose of a virus is the amount of virus a person needs to be exposed to in order to establish an infection.  We currently do not know the infectious dose for SARS-CoV-2; estimates range from a few hundred to a few thousand virus particles.[1]  One virus particle will not cause an infection.  To become infected, one must exceed the infectious dose, either through direct exposure to a cough or a sneeze or, absent coughs and sneezes, through exposure to the virus over time during normal activity.

This post draws upon the foundation of the first to discuss the implications of the pandemic on the customer experience.

Modern-day customer experiences exist in a finely tuned ecosystem, where the dramatic changes resulting from the pandemic have upset the delicate balance, causing problems ranging from supply chain disruptions to an immediate shift away from in-person channels.

Furthermore, the pandemic represents what I call a moment of truth regarding the relationship with customers.  Moments of truth are specific experiences of high importance, where a customer either forms or changes their opinion of a brand in meaningful and lasting ways.  How brands respond to moments of truth, particularly in this time of global crisis, will strengthen or weaken the customers’ relationship to the brand.

Moments of truth are specific experiences of high importance, where a customer either forms or changes their opinion of a brand in meaningful or lasting ways.

Customers are stressed.  They feel uncertainty, fear and, frankly, exhaustion.  Ongoing concern for personal safety, the education of children, and the well-being of loved ones is exhausting.  This uncertainty and fear drive customers to seek shelter in resources they trust.  Brands which become a trusted resource, which provide comfort, true comfort, in the face of this crisis have an opportunity not only to do the right thing, but to cement their customers’ relationship with the brand.  On the other hand, brands which fail to do so risk the destruction of their customer relationships.

Care for all Stakeholders

Perhaps the most important way brands can respond to the moment of truth presented by this crisis is showing true care for stakeholders in the brand: customers, employees, and the community.

Care for Customers

Brands must communicate care for customers.  Drawing on a personal example, March of 2020 was a particularly worrisome time for me.  At that time, the Seattle area was considered one of the epicenters of the outbreak, and mandatory stay-at-home orders were being introduced.  Fear ruled; fear driven by uncertainty: uncertainty with respect to the safety of myself and loved ones, uncertainty with respect to the financial future, uncertainty with respect to the state of the entire globe.

Amidst all this uncertainty and fear, I received an email from Citigroup entitled “Covid-19.  Let us know if we can help.”  It communicated personal care for me, encouraged alternative channel use (online, mobile, and 24/7 contact center assistance), and contained links to CDC guidance.

A week later the campaign continued with an update on the actions Citigroup was implementing in response to the pandemic, again educating me about the digital tools available and offering personal assistance if needed.

Two and a half months later, in June, I received an email expressing “heartfelt thanks” for adapting to changes and remaining loyal.  It described ways Citigroup was assisting with a variety of COVID-19 relief efforts, specifically introducing a partnership with celebrity chef José Andrés’ World Central Kitchen campaign, distributing meals in low-income neighborhoods in big cities like New York and monitoring the globe for food shortages elsewhere.  This not only demonstrated care for me personally, but care for the community.

Care for Communities

Citigroup’s donations to the World Central Kitchen campaign are one example of care for our communities.  There are countless examples of brands offering community support.

  • The brewery BrewDog shifted production from beer to hand sanitizer.
  • A Spanish sports retailer donated scuba masks to hospitals.
  • eBay offered free services to small businesses forced to switch from brick-and-mortar to ecommerce to keep their businesses afloat, pledging $100 million in support of this endeavor.

Care for Employees

Employees are important.  They animate the brand and drive customer loyalty, particularly in moments of truth like these.  Research has determined that in many retail and service environments there is a positive correlation between employee satisfaction and both employee retention and customer loyalty.  Employees are not immune to the fear and stress of this crisis.  Additionally, frontline employees spend all their time in the brand-customer interface; they are the personal representatives of the brand.

Additionally, given these front-line employees spend the majority of their time in the brand-customer interface, they tend to have a level of understanding about the customer experience that management often misses.

As a result, it is incumbent on brands to attend to the stresses employees are under, demonstrate concern, and develop communication channels for employees to feed customer experience intelligence to management.

Delivery Channels

I’ve always been an advocate of meeting customers in their preferred channel: meeting them where they are and delivering a seamless experience.  Over recent decades there has been a migration from in-person channels to increasingly self-directed, alternative channels, and the pandemic has abruptly accelerated this shift.  Be it telehealth, online banking, in-home instruction of our children, or a restaurant delivering through UberEats, providers of all types now face increasing pressure to bring their business to their customers’ homes.

Emotional Well Being

As observed earlier, this pandemic is a moment of truth between many brands and their customers.  In our experience, customers primarily want three things from a provider: 1) empathy, 2) care and concern for their needs, and 3) competence.  Customers want to do business with brands that empathize with them, care about their needs, and are capable of satisfying those needs competently.  Brands that attend to the emotional needs of their customers during this moment of truth will earn their loyalty and positive word of mouth.

In-Person Precautions and Mitigation Strategies

While the pandemic has accelerated an ongoing transition to alternative channels, some industries require an in-person experience.  Based on current science, in-person interactions can be relatively safe if conducted within the CDC and public health guidance outlined in the first part of this series:

  • Physical Distancing:  Estimates of exposure time all assume close personal contact.  Physical distancing decreases the likelihood of receiving an infectious dose by putting space between ourselves and others – current recommendations are 6 feet.

Furthermore, many in-person transactions can now be done touch-free.  I recently rented a car and was pleased to be met outside by a rental attendant holding a tablet.  The attendant took down all my information; I never had to touch or sign anything.  In a different transaction requiring a signature, I was offered a single-use pen to keep.

  • Masks:  Masks are a core mitigation tool that complements physical distancing. Masks do not primarily act as a filter for the wearer; rather, they suppress the amount of droplets an infected person can spread into the space around them. This reduces the risk that others will receive an infectious dose of the virus.
  • Ventilation:  Well-ventilated areas disperse virus particles, making it less likely a dose exceeds the infectious limit.  Like my car rental agency, brands should endeavor to provide well-ventilated spaces for employees and customers to interact, protecting employees as well as customers.
  • Length of Exposure:  Finally, brands should design service encounters to be as time-efficient as possible.  Again, the CDC advises a 15-minute exposure limit for close personal contact.  Physical distance, masks, and ventilation should extend this safe exposure limit; however, strategies should still be implemented to make service encounters as brief as possible.  For example, if you require information from your customers as part of the service interaction, collect it online or over the phone prior to the appointment.  This helps keep customers and employees safer and more comfortable.
  • Hand Washing & Sanitizer:  Hand washing and sanitizing are the primary defense against infection transmitted by touch.

Putting it All Together

Putting all this together, let’s look at the industry where Kinesis has the most experience: banking and financial services, Kinesis’ largest practice.  Recently, the American Bankers Association (ABA) released the results of an industry survey regarding publicly announced responses of US banks to the pandemic. [2]

Many banks are applying some of the concepts discussed above in creative ways.  A review of a random selection of banks reveals the following responses ranked from most common to least common:

  1. Enhanced deep cleaning and disinfecting of work spaces;
  2. Implementing social distancing in work spaces, including branches;
  3. Encouraging use of alternative delivery channels, such as mobile and internet banking;
  4. Personalized assistance to customers negatively impacted by the pandemic;
  5. Increased donations to charity/ partnering with the local community to mitigate the effects of the pandemic;
  6. Allowing employees to work remotely if possible;
  7. Limiting access to branches (closing branch lobbies, limiting hours, appointment only banking);
  8. Paid time off for employees to self-quarantine or to care for school-age children;
  9. Rotating schedules of customer-facing staff to reduce risk (one institution has applied a 10 days on 10 days off policy); and
  10. Educating customers about pandemic-related fraud and scams.

In the next post, we will build off the foundation of the previous two posts to address the implications of the pandemic on customer experience measurement.


[1] Geddes, Linda. “Does a high viral load or infectious dose make covid-19 worse?”  newscientist.com, March 27, 2020.  Web. May 14, 2020.

[2] “America’s Banks Are Here to Help: The Industry Responds to the Coronavirus.”  ABA.com, April 29, 2020.  Web. May 19, 2020.

Implications of CX Consistency for Researchers – Part 3 – Common Cause v Special Cause Variation

Previously, we discussed the implications of intra-channel consistency for researchers.

This post considers two types of variation in the customer experience: common and special cause variation, and their implications for customer researchers.

The concepts of common and special cause variation are derived from the process management discipline Six Sigma.

Common cause variation is normal, random variation within the system: statistical noise.  Examples of common cause variation in the customer experience include:

  • Poorly defined, poorly designed, inappropriate policies or procedures
  • Poor design or maintenance of computer systems
  • Inappropriate hiring practices
  • Insufficient training
  • Measurement error

Special cause variation, on the other hand, is not random and does not conform to the laws of probability.  It is the signal within the system.  Examples of special cause variation include:

  • High demand/ high traffic
  • Poor adjustment of equipment
  • Just having a bad day

What are the implications of common and special cause variation for customer experience researchers?

Given the differences between common cause and special cause variation, researchers need a tool to distinguish between the two: a means of determining whether observed variation in the customer experience is statistical noise or a signal within the system.  Control charts are a statistical tool for making that determination.

Control charts track measurements within upper and lower quality control limits.  These quality control limits define statistically significant variation over time (typically at 95% confidence), which means there is a 95% probability that variation outside them results from an actual change in the customer experience (special cause variation), not just normal common cause variation.  Observed variation within these quality control limits is common cause variation; variation which migrates outside them is special cause variation.

To illustrate this concept, consider the following example of mystery shop results:

Mystery Shop Scores

This chart depicts a set of mystery shop scores which both vary from month to month and generally appear to trend upward.

Customer experience researchers need to provide managers a means of determining if the month to month variation is statistical noise or some meaningful signal within the system.  Turning this chart into a control chart by adding statistically defined upper and lower quality control limits will determine if the monthly variation is common or special cause.

To define quality control limits, the customer experience researcher needs the count of shops for each month, the monthly mean and standard deviation, and the averages of each across all months.

The following table adds these three additional pieces of information into our example:

 

| Month | Count of Mystery Shops | Average Mystery Shop Score | Standard Deviation of Scores |
|----------|-----|-------|-------|
| May      | 510 | 83%   | 18%   |
| June     | 496 | 84%   | 18%   |
| July     | 495 | 82%   | 20%   |
| Aug      | 513 | 83%   | 15%   |
| Sept     | 504 | 83%   | 15%   |
| Oct      | 489 | 85%   | 14%   |
| Nov      | 494 | 85%   | 15%   |
| Averages | 500 | 83.6% | 16.4% |

To define the upper and lower quality control limits (UCL and LCL, respectively), apply the following formulas, where 1.96 is the z-value for 95% confidence:

UCL = x̄ + 1.96 × (SD / √n)

LCL = x̄ - 1.96 × (SD / √n)

Where:

x̄ = Grand mean of the score

n = Mean sample size (number of shops)

SD = Mean standard deviation

These equations yield quality control limits at 95% confidence, which means there is a 95% probability that any variation observed outside these limits is special cause variation, rather than normal common cause variation within the system.
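As a sketch, these limits can be computed directly from the example table above; the grand mean (83.6%), mean standard deviation (16.4%), and mean shop count (500) are taken from the Averages row, and 1.96 is the z-value for 95% confidence.

```python
import math

def control_limits(grand_mean, mean_sd, mean_n, z=1.96):
    """Return (LCL, UCL) quality control limits at 95% confidence."""
    margin = z * mean_sd / math.sqrt(mean_n)
    return grand_mean - margin, grand_mean + margin

# Values from the example table: grand mean 83.6%, mean SD 16.4%, mean n = 500
lcl, ucl = control_limits(0.836, 0.164, 500)

# Monthly scores; anything outside (LCL, UCL) signals special cause variation
scores = {"May": 0.83, "June": 0.84, "July": 0.82, "Aug": 0.83,
          "Sept": 0.83, "Oct": 0.85, "Nov": 0.85}
special = [month for month, s in scores.items() if s < lcl or s > ucl]
print(f"LCL={lcl:.1%}, UCL={ucl:.1%}, special cause months: {special}")
```

With these rounded table values, July's 82% falls just below the LCL and is flagged as special cause; a production control chart would typically use each month's own count and standard deviation, so exact limits may differ slightly.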

Calculating these quality control limits and applying them to the above chart produces the following control chart, with upper and lower quality control limits depicted in red:

Control Chart

This control chart now answers the question of which variation is common cause and which is special cause.  The general upward trend appears to be statistically significant, with the most recent month above the upper quality control limit.  Additionally, this control chart identifies a period of special cause variation in July: with 95% confidence, we know some special cause drove the scores below the lower control limit.  Perhaps this special cause was employee turnover, perhaps a new system rollout, or perhaps a weather event that impacted the customer experience.


Implications of CX Consistency for Researchers – Part 2 – Intra-Channel Consistency

Previously, we discussed the implications of inter-channel consistency for researchers, and introduced a process for management to define a set of employee behaviors which will support the organization’s customer experience goals across multiple channels.

This post considers the implications of intra-channel consistency for customer experience researchers.

As with cross-channel consistency, intra-channel consistency (consistency within individual channels) requires the researcher to identify the causes of variation in the customer experience.  The causes of intra-channel variation are more often than not found at the local level: the individual stores, branches, employees, etc.  For example, a bank branch with large variation in customer traffic is more likely to experience variation in the customer experience.

Regardless of the source, consistency equals quality.

In our own research, Kinēsis conducted a mystery shop study of six national institutions to evaluate the customer experience at the branch level.  In this research, we observed a similar relationship between consistency and quality.  The branches in the top quartile in terms of consistency delivered customer satisfaction scores 15% higher than branches in the bottom quartile.  But customer satisfaction is a means to an end, not an end goal in and of itself.  In terms of an end business objective, such as loyalty or purchase intent, branches in the top quartile of consistency delivered purchase intent ratings 20% higher than branches in the bottom quartile.

Satisfaction and purchase intent by customer experience consistency

Purchase intent and satisfaction with the experience were both measured on a 5-point scale.

Again, it is incumbent on customer experience researchers to identify the causes of inconsistency.  A search for the root cause of variation in customer journeys must consider process-cause variation.

One tool to measure process-cause variation is a Voice of the Customer (VOC) Table. VOC Tables have a two-fold purpose: first, to identify specific business processes which can cause customer experience variation; and second, to identify which business processes will yield the largest ROI in terms of improving the customer experience.

VOC Tables provide a clear road map to identify action steps using a vertical and horizontal grid.  On the vertical axis, each customer experience attribute within a given channel is listed.  For each attribute, a judgment is made about its relative importance, expressed as a numeric value.  On the horizontal axis is an exhaustive list of business processes the customer is likely to encounter, both directly and indirectly, in the customer journey.

This grid matches each business process on the horizontal axis to each service attribute on the vertical axis.  Each cell in the grid contains a value representing the strength of the influence of that business process on that customer experience attribute.

Finally, a value is calculated at the bottom of each column which sums the strength-of-influence values multiplied by the importance of each customer experience attribute.  This yields the cumulative strength of influence of each business process on the customer experience, weighted by relative importance.

Consider the following example in a retail mortgage lending environment.

VOC Table

In this example, the relative importance of each customer experience attribute was determined by correlating these attributes to a “would recommend” question, which served as a loyalty proxy.  This yields an estimate of importance based on each attribute’s strength of relationship to customer loyalty, and populates the far-left column.  Specific business processes for the mortgage process are listed across the top of the table.  Within each cell, an informed judgment has been made regarding the relative strength of the business process’s influence on the customer experience attribute, assigned a value of 1 – 3.  Each influence value is multiplied by the importance measure of its customer experience attribute and summed into a weighted strength of influence for each business process.

In this example, the business processes which will yield the highest ROI in terms of driving the customer experience are quote of loan terms (weighted strength of influence 23.9), clearance of exemptions (22.0), explanation of loan terms (20.2), loan application (18.9) and document collection (16.3).
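The weighted strength-of-influence calculation can be sketched as follows; the attribute names, importance weights, and influence scores below are hypothetical, not the actual values from the mortgage example.

```python
# Hypothetical importance weights for each customer experience attribute
importance = {"Accuracy": 4.5, "Timeliness": 3.2, "Courtesy": 2.1}

# Hypothetical strength of influence (1-3) of each business process
# on each customer experience attribute
influence = {
    "Loan application":    {"Accuracy": 3, "Timeliness": 2, "Courtesy": 1},
    "Document collection": {"Accuracy": 2, "Timeliness": 3, "Courtesy": 1},
}

# Weighted strength of influence: sum of (importance x influence) per process
weighted = {
    process: sum(importance[attr] * strength for attr, strength in cell.items())
    for process, cell in influence.items()
}
for process, score in sorted(weighted.items(), key=lambda kv: -kv[1]):
    print(f"{process}: {score:.1f}")
```

Ranking the column totals in descending order reproduces the prioritization step: the processes at the top of the list are those expected to yield the highest customer experience ROI.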

Next, we will look into the concepts of common and special cause variation, and a research tool designed to identify areas for attention: control charts.

Mystery Shopping Gap Analysis: Identify Service Attributes with Highest Potential for ROI

Research without call to action may be interesting, but in the end, not very useful.

This is particularly true with customer experience research.  It is incumbent on customer experience researchers to give management research tools which identify clear call-to-action items: items in which investments will yield the highest return on investment (ROI) in terms of meeting management’s customer experience objectives.  This post introduces a simple, intuitive mystery shopping analysis technique that identifies the service behaviors with the highest potential ROI in achieving these objectives.

Mystery shopping gap analysis is a simple three-step analytical technique.

Step 1: Identify the Key Objective of the Customer Experience

The first step is to identify the key objective of the customer experience.  Ask yourself, “How do we want the customer to think, feel or act as a result of the customer experience?”

For example:

  • Do you want the customer to have increased purchase intent?
  • Do you want the customer to have increased return intent?
  • Do you want the customer to have increased loyalty?

Let’s assume the key objective is increased purchase intent: at the conclusion of the customer experience, you want the customer to be more likely to buy.

Next, draft a research question to serve as a dependent variable measuring the customer’s purchase intent.  Dependent variables are those which are influenced by, or dependent on, the behaviors measured in the mystery shop.

Step 2: Determine the Strength of the Relationship to the Key Customer Experience Objective

After fielding the mystery shop study and collecting a sufficient number of shops, the next step is to determine the strength of the relationship between this key customer experience measure (the dependent variable) and each behavior or service attribute measured (the independent variables).  There are a number of ways to determine the strength of this relationship; perhaps the easiest is a simple cross-tabulation of the results.  Cross-tabulation groups all the shops with positive purchase intent and all the shops with negative purchase intent and compares the two groups.  The greater the difference in the frequency of a given behavior or service attribute between the two groups, the stronger its relationship to purchase intent.

The result of this cross-tabulation yields a measure of the importance of each behavior or service attribute.  Those with stronger relationships to purchase intent are deemed more important than those with weaker relationships to purchase intent.
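The cross-tabulation step can be sketched as follows; the shop records and behavior names are hypothetical.

```python
# Each record: which behaviors were observed (1/0) and whether the shopper
# reported positive purchase intent (hypothetical data)
shops = [
    {"greeted": 1, "offered_help": 1, "intent": True},
    {"greeted": 1, "offered_help": 1, "intent": True},
    {"greeted": 0, "offered_help": 0, "intent": False},
    {"greeted": 1, "offered_help": 0, "intent": False},
]

def importance(behavior):
    """Gap in behavior frequency between positive- and negative-intent shops."""
    pos = [s[behavior] for s in shops if s["intent"]]
    neg = [s[behavior] for s in shops if not s["intent"]]
    return sum(pos) / len(pos) - sum(neg) / len(neg)

for behavior in ("greeted", "offered_help"):
    print(f"{behavior}: importance {importance(behavior):+.2f}")
```

In this toy data, “offered_help” shows a larger frequency gap between positive- and negative-intent shops than “greeted,” so it would be deemed the more important behavior.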

Step 3: Plot the Performance of Each Behavior Relative to Its Relationship to the Key Customer Experience Objective

The third and final step in this analysis is to plot the importance of each behavior relative to its performance on a two-dimensional quadrant chart, where one axis is the importance of the behavior and the other is its performance, i.e. the frequency with which it is observed.

Interpretation

Interpreting the results of this quadrant analysis is fairly simple.  Behaviors with above-average importance and below-average performance are the “high potential” behaviors: those with the highest potential return on investment (ROI) in terms of driving purchase intent.  These are the behaviors to prioritize for investments in training, incentives, and rewards.

The rest of the behaviors are prioritized as follows:

Those with high importance and high performance are the next priority.  They are the behaviors to maintain: they are important and employees perform them frequently, so invest to maintain their performance.

Those with low importance and low performance are areas to address if resources are available.

Finally, behaviors or service attributes with low importance yet high performance need no investment.  They are performed frequently but are not very important, and will not yield an ROI in terms of driving purchase intent.
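The quadrant prioritization above can be sketched as a simple classifier; the inputs and the average thresholds are illustrative, not part of the original analysis.

```python
def quadrant(importance, performance, avg_importance, avg_performance):
    """Classify a behavior by its position on the importance/performance grid."""
    high_imp = importance >= avg_importance
    high_perf = performance >= avg_performance
    if high_imp and not high_perf:
        return "high potential"             # prioritize investment here
    if high_imp and high_perf:
        return "maintain"                   # invest to keep performance up
    if not high_imp and not high_perf:
        return "address if resources allow"
    return "no investment needed"           # low importance, high performance

# A behavior well above average importance but below average performance:
print(quadrant(0.9, 0.4, 0.5, 0.6))  # -> high potential
```

Running every measured behavior through this classifier reproduces the four-quadrant chart as a prioritized list.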

Research without call to action may be interesting, but in the end, not very useful.

This simple, intuitive gap analysis technique provides a clear call to action by identifying the service behaviors and attributes which will yield the most ROI in achieving your key customer experience objective.


Two Questions….Lots of Insights: Turn Customer Experience Observations into Valuable Insight

Customer experience researchers are constantly looking for ways to make their observations relevant, to turn observations into insight. Observing a behavior or service attribute is one thing, linking observations to insight that will maximize return on customer experience investments is another. One way to link customer experience observations to insights that will drive ROI is to explore the influence of customer experience attributes to key business outcomes such as loyalty and wallet share.

The first step is to gather impressions of a broad array of customer experience attributes, such as: accuracy, cycle time, willingness to help, etc. Make this list as long as you reasonably can without making the survey instrument too long.

For additional thoughts on survey length and research design, see the following blog posts:

Click Here: Maximizing Response Rates: Get Respondents to Complete the Survey

Click Here: Keys to Customer Experience Research Success – Start with the Objectives

The next step is to explore the relationship of these service attributes to loyalty and share of wallet.

Two Questions – Lots of Insight

In our experience, two questions, a “would recommend” question and a primary provider question, yield valuable insight into the relative importance of specific service attributes. Together, these two questions form the foundation of a two-dimensional analytical framework for determining the relative importance of specific service attributes in driving loyalty and wallet share.

Loyalty Question

Research has determined that the business attribute with the highest correlation to profitability is customer loyalty. Customer loyalty lowers sales and acquisition costs per customer by amortizing these costs across a longer customer lifetime, leading to some extraordinary financial results.

Measuring customer loyalty in the context of a survey is difficult. Surveys best measure attitudes and perceptions, and loyalty is a behavior, not an attitude. Survey researchers therefore need a proxy measurement for customer loyalty. A researcher might measure customer tenure under the assumption that length of relationship predicts loyalty; however, tenure is a poor proxy. A long-tenured customer may still leave, while a new customer may be very satisfied and highly loyal.

Likelihood of referral captures a measurement of the customer’s likelihood to refer a brand to a friend, relative or colleague. It stands to reason that if customers are willing to refer others to a brand, they will remain loyal as well, because customers who promote a brand are putting their reputation on the line. That willingness is founded on a feeling of loyalty and trust.

Any likelihood of referral question can be used, depending on the specifics of your objectives. Kinesis has had success with both a “yes/no” question, “Would you refer us to a friend, relative or colleague?” and the Net Promoter methodology. The Net Promoter methodology asks for a rating of the likelihood of referral to a friend, relative or colleague on an 11-point (0-10) scale. Customers with a likelihood of 0-6 are labeled “detractors,” those with ratings of 7 and 8 are identified as “passive referrers,” while those who assign a rating of 9 or 10 are labeled “promoters.”
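The Net Promoter grouping described above can be sketched as:

```python
def nps_category(score: int) -> str:
    """Classify a 0-10 likelihood-of-referral rating per Net Promoter rules."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive referrer"
    return "detractor"

ratings = [10, 9, 8, 6, 3]
print([nps_category(r) for r in ratings])
# -> ['promoter', 'promoter', 'passive referrer', 'detractor', 'detractor']
```

The overall Net Promoter Score is then the percentage of promoters minus the percentage of detractors.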

In our experience, asking the “yes/no” question “Would you refer us to a friend, relative or colleague?” produces starker differences in this two-dimensional analysis, making it easier to identify which service attributes have a stronger relationship to both loyalty and engagement.

Engagement Question

Similar to loyalty, customer engagement or wallet share can lead to some extraordinary financial results. Wallet share is the percentage of what a customer spends with a given brand over a specific period of time.

Also similar to loyalty, measuring engagement or wallet share in a survey is difficult. There are several ways to measure engagement: one methodology uses a formula such as the Wallet Allocation Rule, which uses customer rankings of brands in the same product category to estimate wallet share; another uses a simple yes/no primary provider question.
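As a sketch, the published Wallet Allocation Rule converts a brand’s rank among the N brands a customer uses into an estimated wallet share; the formula below is the standard form of the rule, applied here to a hypothetical three-brand customer.

```python
def wallet_share(rank: int, num_brands: int) -> float:
    """Wallet Allocation Rule: estimate share of wallet from a brand's
    rank (1 = most preferred) among the num_brands brands a customer uses."""
    return (1 - rank / (num_brands + 1)) * (2 / num_brands)

# A customer who uses three brands, ranked 1st, 2nd, and 3rd
shares = [wallet_share(r, 3) for r in (1, 2, 3)]
print([round(s, 3) for s in shares])  # -> [0.5, 0.333, 0.167]
```

A useful property of the rule is that the estimated shares across all of a customer’s brands always sum to 100% of their category spend.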

Methodology

Using these loyalty and engagement measures together, we can now cross-tabulate the array of service attribute ratings by these two measures. This cross-tabulation groups the responses into four segments: 1) Engaged & Loyal, 2) Disengaged yet Loyal, 3) Engaged yet Disloyal, and 4) Disengaged & Disloyal. We can now compare responses across these four segments to gain insight into how each segment experiences its relationship with the brand.

These four segments represent the ideal, opportunity, recovery, and attrition.
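Using the yes/no versions of the two questions, the segmentation can be sketched as a simple mapping:

```python
def segment(would_recommend: bool, primary_provider: bool) -> str:
    """Map the two survey answers to one of the four loyalty/engagement segments."""
    if would_recommend and primary_provider:
        return "Ideal - Engaged Promoter"
    if would_recommend:
        return "Opportunity - Disengaged Promoter"
    if primary_provider:
        return "Recovery - Engaged Detractor"
    return "Attrition - Disengaged Detractor"

# A customer who would recommend the brand but uses another primary provider
print(segment(True, False))  # -> Opportunity - Disengaged Promoter
```

Tagging each respondent with a segment in this way makes the attribute-rating comparisons described below a straightforward group-by operation.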

Loyalty/Engagement Segments

Ideal – Engaged Promoters: This is the ideal customer segment. These customers rely on the brand for the majority of their in-category purchases and represent lower attrition risk. In short, they are perfectly positioned to provide the financial benefits of customer loyalty. Comparing attribute ratings for customers in this segment to the others will identify areas of strength and, at the same time, attributes which are less important in driving this ideal state, informing future decisions on investment in those attributes.

Opportunity – Disengaged Promoter: This customer segment represents an opportunity. These customers like the brand and are willing to put their reputation at risk for it. However, there is an opportunity for cross-sell to improve share of wallet. Comparing attribute ratings of the opportunity segment to the ideal will identify service attributes with the highest potential for ROI in terms of driving wallet share.

Recovery – Engaged Detractor: This segment represents significant risk. The combination of above-average share of wallet and low willingness to put their reputation on the line is dangerous, as it puts profitable share of wallet at risk. Comparing attribute ratings of customers in the recovery segment to both the ideal and the opportunity segments will identify the service attributes with the highest potential for ROI in terms of improving loyalty.

Attrition – Disengaged Detractor: This segment represents the greatest risk of attrition. With no willingness to put their reputation on the line and little commitment to placing share of wallet with the brand, retention strategies may be too late for them. Additionally, they are most likely unprofitable. Comparing the service attribute ratings of customers in this segment to the others will identify elements of the customer experience which drive attrition and may warrant increased investment, as well as elements that do not appear to matter much in driving runoff and may not warrant investment.

By making comparisons across these segments, researchers give managers a basis for informed decisions about which service attributes have the strongest relationship to loyalty and engagement, thus identifying which behaviors have the highest potential ROI in driving customer loyalty and engagement. This two-dimensional analysis is one way to turn customer experience observations into insight.
