Tag Archive | Voice of the Customer

Maximizing Response Rates: Get Respondents to Complete the Survey

Previously we discussed ways researchers can increase the likelihood of respondents opening an email survey invitation. Additionally, in a subsequent post we discussed how to get respondents to actually click on the survey link and participate in the survey.

This post is a discussion of ways to keep respondents motivated to complete the entire survey once they have entered it.


At its core, the key to completion rates is an easy-to-complete, credible survey that delivers on all promises offered in the invitation email.

Survey Length

From time to time various service providers of mine send me a survey invitation, and I’m often surprised how many of them impose upon me, their customer, a 30- or 40-minute survey.  First, they never disclose the survey length in advance, which communicates a complete lack of respect for my time.  Beyond being an imposition, it is also bad research practice.  Ten minutes into the survey I’m either pressed for time, frustrated, or just plain bored, and I either exit the survey or frivolously complete the remaining questions without any real consideration of my opinions – completely undermining the reliability of my responses.

We recommend keeping surveys short: no more than 10 to 12 minutes, and in some cases, such as a post-transaction survey, no more than 5 minutes.

If research objectives require a long survey, rather than imposing a ridiculously long instrument on your customers and producing frivolous results, break the 30- to 40-minute survey into two, or better yet three, parts, fielding each part to a separate portion of your targeted sample frame, as sketched below.
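As a concrete illustration, splitting the sample frame into thirds is a simple random assignment. Below is a minimal sketch in Python; the file and column names are hypothetical.

    import pandas as pd

    # Hypothetical sample frame of targeted customers.
    frame = pd.read_csv("sample_frame.csv")

    # Shuffle, then split into three parts; each customer is invited to
    # only one of the three shorter survey modules.
    shuffled = frame.sample(frac=1, random_state=42).reset_index(drop=True)
    third = len(shuffled) // 3
    parts = [shuffled.iloc[:third],
             shuffled.iloc[third:2 * third],
             shuffled.iloc[2 * third:]]
    for i, part in enumerate(parts, start=1):
        part.to_csv(f"survey_part_{i}_invites.csv", index=False)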

Additionally, skip logic should be employed to avoid asking questions that do not apply to a given respondent, decreasing the volume of questions you present to the end customer (see the sketch below).
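Skip logic is normally configured in the survey platform itself, but the underlying idea is just conditional branching. A minimal sketch, with hypothetical question IDs and answers:

    # Each rule routes the respondent past questions that do not apply,
    # based on an earlier answer. Question IDs and answers are hypothetical.
    SKIP_RULES = {
        # (question_id, answer) -> next question to show
        ("q3_used_mobile_app", "No"): "q8_branch_experience",
        ("q8_branch_experience", "Never visit"): "q12_overall",
    }

    def next_question(current_id: str, answer: str, default_next: str) -> str:
        """Return the next question to show, honoring any skip rule."""
        return SKIP_RULES.get((current_id, answer), default_next)

    # A respondent who never uses the mobile app skips the app questions.
    print(next_question("q3_used_mobile_app", "No", "q4_app_satisfaction"))
    # -> q8_branch_experience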

Finally, include a progress bar to keep respondents informed of how far along they are on the survey.

Ease of Completion

The last thing you want respondents to feel while completing your survey is frustration.  If the sample frame is made up of your customers, a frustrating survey mainly accomplishes two things: upsetting your customers and damaging your brand.  It also produces bad research results, because frustrated respondents are not in the proper mindset to give well-considered answers.

Frustration can come from awkward design, confusing question wording, poor programming, or insufficient response choices.  Survey wording and vocabulary should be simple and jargon-free, response choices should be comprehensive, and of course the survey programming should be thoroughly proofed and pretested.

Pretesting is a process in which the survey is fielded in advance to a portion of the sample frame to test how respondents react to it.  Significant portions of the questionnaire left unanswered, or a high volume of “other” or “none of the above” responses, can signal trouble with the survey design.
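Two quick pretest diagnostics make these warning signs easy to spot: the item nonresponse rate per question and the share of catch-all answers. A minimal sketch, assuming the pretest responses sit in a CSV with one column per question (file name and thresholds are illustrative):

    import pandas as pd

    # Hypothetical pretest export: one row per respondent, one column per question.
    responses = pd.read_csv("pretest_responses.csv")

    # Questions many respondents left unanswered may be confusing or misplaced.
    nonresponse_rate = responses.isna().mean()
    print("High item nonresponse (>20%):")
    print(nonresponse_rate[nonresponse_rate > 0.20])

    # Questions dominated by catch-all answers suggest the response
    # choices are not comprehensive.
    catch_all = {"Other", "None of the above"}
    catch_all_rate = responses.apply(lambda col: col.isin(catch_all).mean())
    print("High catch-all rate (>15%):")
    print(catch_all_rate[catch_all_rate > 0.15])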

Convenience

Survey completion should be easy.  Survey entry should work across a variety of platforms, browsers and devices.

Additionally, respondents should be allowed to take the survey on their own time, including leaving the survey with their answers saved to date and re-entering when it is more convenient for them.

Click Here For More Information About Kinesis' Research Services

Maximizing Response Rates: Get Respondents to Start the Survey


It is incumbent on researchers fielding self-administered surveys to maximize response rates.  This reduces the potential for response bias, where the survey results may not accurately reflect the opinions of the entire population of targeted respondents. Previously we discussed ways researchers can increase the likelihood of respondents opening an email survey invitation.  This post addresses how to get respondents to actually click on the survey link and participate in the survey.

Make the Invite Easy to Read

Don’t bury the lead.  The opening sentence must capture respondents’ attention and motivate them to invest the effort to read the invitation.  Keep in mind most people skim emails.  Keep the text of the invitation short, paying close attention to paragraph length.  The email should be easy to skim.

Give a Reward

Offering respondents a reward for participation is an excellent way to motivate participation.  Tangible incentives like a drawing, coupon, or gift card, if appropriate and within the budget, are excellent tools to maximize response rates.  However, rewards do not necessarily need to be tangible.  Intangible rewards can also prove to be excellent motivators.  People, particularly customers who have a relationship with the brand, want to be helpful.  Expressing the importance of their opinion, and communicating how the brand will use the survey to improve its offering to customers like them, is an excellent avenue for leveraging intangible rewards to motivate participation.

Survey Length

Intangible rewards are often sufficient if the respondent’s cost to participate in the survey is minimal.  Perhaps the largest cost to a potential respondent is the time required to complete the survey.  Give them an accurate estimate of that time – and keep it short.  We recommend no more than 10 minutes, preferably five to six.  If the research objectives require a longer instrument, break the survey into two or three shorter surveys and deliver them separately to different targeted respondents.  Do not field excessively long surveys or misstate the estimated completion time – it is rude to impose on your respondents, disastrous to your participation rates, and unethical.  As with getting participants to open the email, credibility plays a critical role in getting them to click on the survey link.

Credibility

One of the best ways to garner credibility with the survey invite is to assure participants of confidentiality.  This is particularly important for customer surveys, where customers commonly interact with employees.  For example, a community bank, where customers may interact with bank employees not only in the context of banking but broadly in the community, must assure customers that their survey responses will be kept strictly confidential.

Personalizing the survey with appropriate merge fields is also an excellent way to garner credibility.

Survey Entry

Make it as easy as possible for the participant to enter the survey.  Program a link to the survey, and make sure it is both visible and presented early in the email.  Again, most people skim the contents of emails, so place the link in the top third of the email and make it clear that it is the link to enter the survey.

In designing survey invitations, remember to write short, concise, easy-to-read emails that both leverage respondents’ reward centers (tangible or intangible) and credibly estimate the short time required to complete the survey.  This approach will help maximize response rates and avoid some of the pitfalls of response bias. Click here for the next post in this series, on prompting respondents to complete the survey.

Click Here For More Information About Kinesis' Research Services

Maximizing Response Rates: Get Respondents to Open the Email


In fielding surveys, researchers must be aware of the concepts of error and bias and how they can creep into a survey, potentially making it unreliable in ways that cannot be predicted.  For example, one source of error is statistical error, where not enough respondents are surveyed to make the results statistically reliable.  Another source of error, or bias, is response bias, caused by not having a random sample of the targeted population.

A key concept of survey research is randomness of sample selection: in essence, giving each member of the targeted survey population an equal chance of being surveyed.  Response rates are important in self-administered surveys (such as email surveys) because it is possible that non-responders (people who for some reason choose not to complete the survey) have different opinions than those who choose to participate.  As a result, the survey is not purely random.  If non-responders are somehow different from responders, the survey results will reflect that difference – thus biasing the research.  It is therefore incumbent on researchers to maximize the survey response rate.

Say for example, a bank wants to survey customers after they have completed an online transaction.  If customers who love the bank’s online capabilities are more likely to participate in the survey than those who do not like the bank’s online capabilities, the survey results will be biased in favor of a positive view of the bank’s online offering because it is not a representative sample – it is skewed toward customers with the positive view.

It is, again, incumbent on researchers to maximize the response rate as much as possible in self-administered email surveys.

Pre-Survey Awareness Campaign

One strategy to maximize response rates (particularly in a customer survey context) is a pre-survey awareness campaign to make customers aware of the coming survey and encourage participation.  Such a campaign can take many forms, such as:

  1. A letter on company letterhead, signed by a high-profile senior executive
  2. Statement or billing inserts
  3. An email in advance of the survey

Each of these is an excellent way to introduce the survey to respondents and maximize response rates.

Email Content

The next steps in maximizing response rates in email surveys are passing SPAM filter tests and prompting the recipient to open the email.  The core concept here is credibility – make the email appear as credible as possible.

The first step to maintaining credibility is to avoid getting caught in SPAM filters.  The email content should avoid the following (a simple screening sketch follows the list):

  • Words common in SPAM, like “win” or “free”
  • The use of ALL CAPS
  • Excessive punctuation
  • Special characters
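Invitation copy can be screened for these red flags before fielding. A minimal sketch; the trigger-word list is illustrative, not any real filter’s rule set:

    import re

    # Illustrative red flags only; real SPAM filters use far richer rule sets.
    TRIGGER_WORDS = {"win", "free", "winner", "prize", "cash"}

    def screen_invitation(text: str) -> list[str]:
        """Return warnings about SPAM-filter red flags in the invitation copy."""
        warnings = []
        words = re.findall(r"[A-Za-z']+", text)
        if any(w.lower() in TRIGGER_WORDS for w in words):
            warnings.append("contains common SPAM trigger words")
        if any(w.isupper() and len(w) > 3 for w in words):
            warnings.append("uses ALL CAPS words")
        if re.search(r"[!?]{2,}", text):
            warnings.append("excessive punctuation")
        return warnings

    print(screen_invitation("Take our FREE survey and win a prize!!!"))
    # -> all three warnings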

Additionally, do not spoof emails.  Spoofing is the forgery of an email header to make it appear the message originated from a source other than the actual source.  Send emails from your own server.  (Sometimes Kinesis has clients who want the email to appear to originate from their server.  In such cases, we receive the sample from the client, append a unique identifier, and send it back to the client to be mailed from their servers.)
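In that workflow, the processing on the research side can be as simple as appending a unique survey token to each record before returning the file. A minimal sketch with hypothetical file and column names:

    import uuid
    import pandas as pd

    # Hypothetical sample file received from the client.
    sample = pd.read_csv("client_sample.csv")

    # Append a unique identifier that ties each completed survey back to a
    # record without exposing customer details in the survey link.
    sample["survey_token"] = [uuid.uuid4().hex for _ in range(len(sample))]
    sample.to_csv("client_sample_with_tokens.csv", index=False)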

Perhaps the best strategy to maintain the credibility of the email invite is to conform to Marketing Research Association (MRA) guidelines.  These guidelines include:

  • Clearly identify the researcher, including phone number, mailing address, and email
  • Post privacy policies online and include a link to these policies
  • Include a link to opt out of future emails

From and Subject Lines

Both the FROM and SUBJECT lines are critical in getting the respondent to open the email.

The FROM line has to be as credible and recognizable as possible, avoiding vague or generic terms like “feedback”.  For surveys of customers, the company name or the name of a recognizable representative of the company should be used.

The SUBJECT line must communicate the subject of the email in a credible way that will make the respondent want to open the email.  Keep it brief (50 characters or less), clear, concise and credible.

Survey Timing

Not only is the content of the email important, but the timing of delivery also plays a role in response rates.  In our experience, sending the survey invitation in the middle of the week (Tuesday – Thursday) during daytime hours increases the likelihood that the email will be noticed by the respondent.

Reminder Emails

After an appropriate amount of time (typically five days for our clients), reminder emails should be sent, politely reminding the respondent of the previous invitation and highlighting the importance of their opinion.  One, perhaps two, reminder emails are appropriate, but do not send more than two (see the sketch below).
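The reminder rule itself is simple to express: non-responders become due for a reminder five days after the invitation, capped at two reminders. A minimal sketch, with hypothetical dates and field names:

    from datetime import date, timedelta

    # Hypothetical invitation log.
    invites = [
        {"email": "a@example.com", "invited": date(2024, 5, 7), "reminders": 0, "responded": False},
        {"email": "b@example.com", "invited": date(2024, 5, 7), "reminders": 2, "responded": False},
        {"email": "c@example.com", "invited": date(2024, 5, 7), "reminders": 0, "responded": True},
    ]

    def due_for_reminder(rec, today, wait_days=5, max_reminders=2):
        """Non-responders get a polite reminder after wait_days, at most twice."""
        return (not rec["responded"]
                and rec["reminders"] < max_reminders
                and today >= rec["invited"] + timedelta(days=wait_days))

    today = date(2024, 5, 13)
    for rec in invites:
        if due_for_reminder(rec, today):
            print("send reminder to", rec["email"])  # only a@example.com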

To maximize the probability that respondents will receive and open the email, focus on sending a credible email mid-week – one that will pass SPAM filter tests and carries accurate, credible and compelling SUBJECT and FROM lines – and send polite reminder emails to non-responders.

But opening the email is just the first step.  The actual objective is to get respondents to open and complete the survey. Click here for the next post in this series, on prompting respondents to participate in the survey.

 

Click Here For More Information About Kinesis' Research Services

Not All Customer Experience Variation is Equal: Common Cause vs. Special Cause Variation

Variability in customer experience scores is common and normal.  Be it a survey of customers, mystery shops, social listening or other customer experience measurement, a certain amount of random variation in the data is normal.  As a result, managers need a means of interpreting any variation in their customer experience measurement to evaluate if the customer experience is truly changing, or if the variation they are seeing is simply random.

In a previous post, we proposed the use of control charts as a tool to track customer experience measurements within upper and lower quality control limits, giving managers a meaningful way to determine if any variation in their customer experience measurement reflects an actual change in the experience as opposed to random variation or chance.
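For reference, a basic individuals control chart places the control limits three standard deviations above and below the mean of a baseline period; a new measurement outside those limits signals an actual change rather than random variation. A minimal sketch with made-up monthly satisfaction scores (a simplification; practitioners often derive limits from moving ranges instead):

    import statistics

    # Hypothetical monthly satisfaction scores (0-100): nine baseline months,
    # then a new month to evaluate.
    baseline = [82, 84, 81, 83, 85, 82, 80, 84, 83]
    new_score = 72

    mean = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    ucl, lcl = mean + 3 * sd, mean - 3 * sd  # upper/lower control limits

    print(f"mean={mean:.1f}  UCL={ucl:.1f}  LCL={lcl:.1f}")
    if lcl <= new_score <= ucl:
        print(f"{new_score}: within limits -> likely random variation")
    else:
        print(f"{new_score}: outside limits -> investigate an actual change")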

Now, managers need to understand the causes of variation, specifically common and special cause variation.  Common and special cause variation are Six Sigma concepts; while most commonly used in industrial production, they can be borrowed and applied to the customer experience.

Common Cause Variation:  Much like variation in the roll of dice, common cause variation is natural variation within any system.  Common cause variation is any variation constantly active within a system, and represents statistical “noise” within the system.

Examples of common cause variation in the customer experience are:

  • Poorly defined, poorly designed, or inappropriate policies or procedures
  • Poor design or maintenance of computer systems
  • Inappropriate hiring practices
  • Insufficient training
  • Measurement error

Special Cause Variation: Unlike the roll of the dice, special cause variation is not probabilistically predictable within the system.  As a result, it does not represent statistical “noise” within the system; it is the signal within the system.

Examples of special cause variation include:

  • High demand/ high traffic
  • Poor adjustment of equipment
  • Just having a bad day

When measuring the customer experience it is helpful to consider everything within the context of the company-customer interface.  Every time a sales or service interaction occurs within this interface, the customer learns something from the experience and adjusts their behavior as a result.  Managing the customer experience is the practice of managing what customers learn from the experience and thus managing their behavior in profitable ways.

A key to managing customer behaviors is understanding common cause and special cause variation and their implications.  Common cause variation is variation built into the system: policies, procedures, equipment, hiring practices, and training.  Special cause variation is more or less how the human element and the system interact.

See earlier post:

Not All Customer Experience Variation is Equal: Use Control Charts to Identify Actual Changes in the Customer Experience

 

Click Here For More Information About Kinesis' Research Services

Best Practices in Bank Customer Experience Measurement Design

The question was simple enough…  If you owned customer experience measurement for one of your bank clients, what would you do?

Through the years, I have developed a point of view on how best to measure the customer experience and shared it with a number of clients; however, I never put it down in writing.

So here it is…

Best practices in customer experience measurement use multiple inputs in a coordinated fashion to give managers a 360-degree view of the customer experience.  Just like tools in a tool box, different research methodologies have different uses for specific needs.  It is not a best practice to use a hammer to drive a screw, nor the butt end of a screwdriver to pound a nail.  Each tool is designed for a specific purpose, but used in concert can build a house. The same is true for research tools.  Individually they are designed for specific purposes, but used in concert they can help build a more whole and complex structure.

Generally, Kinesis believes in measuring the customer experience with three broad classifications of research methodologies, each providing a unique perspective:

  1. Customer Feedback – Using customer surveys and other less “scientific” feedback tools (such as comment tools and social media monitoring), managers collect valuable input into customer expectations and impressions of the customer experience.
  2. Observation Research – Using performance audits and monitoring tools such as mystery shopping and call monitoring, managers gather observations of employee sales and service behaviors.
  3. Employee Feedback – Frontline employees are the single most underutilized asset in terms of understanding the customer experience. Frontline employees spend the majority of their time in the company-customer interface and as a result have a unique perspective on the customer experience.  They have a good idea about what customers want, how the institution compares to competitors, and how policies, procedures and internal service influence the customer experience.

These research methodologies are employed in concert to build a 360-degree view of the customer experience.

360-degree bank customer experience measurement

The key to building a 360-degree view of the customer experience is to understand the bank-customer interface.  At the center of the customer experience are the various channels which form the interface between the customer and institution.  Together these channels define the brand more than any external messaging.  Best in class customer experience research programs monitor this interface from multiple directions across all channels to form a comprehensive view of the customer experience.

Customers and front-line employees are the two stakeholders who interact most frequently with each other in the customer-institution interface.  As a result, a best practice in understanding this interface is to monitor it directly from each direction.

Tools to measure the experience from the customer side of the interface include:

Post-Transaction Surveys: Post-transaction surveys provide intelligence from the customer side of the customer-employee interface.  These surveys are targeted and event-driven, collecting feedback from customers about specific service encounters soon after the interaction occurs.  They provide valuable insight into customer impressions of the experience and, if properly designed, into customer expectations.  This creates a learning feedback loop, where customer expectations can be used to inform service standards measured through mystery shopping.  Thus two different research tools can be used to inform each other.  Click here for a broader discussion of post-transaction surveys.

Customer Comments:  Beyond surveying customers who have recently completed a service interaction, a best practice is to provide an avenue for customers who want to comment on the experience.  Comment tools are not new (in the past they were the good old-fashioned comment card), but with modern Internet-based technology they can be a valuable feedback tool to identify at-risk customers and mitigate the causes of their dissatisfaction.  Additionally, comment tools can be used to inform the post-transaction surveys: if common themes develop in customer comments, they can be added to the post-transaction surveys for a more scientific measurement of the issue.  Click here for a broader discussion of comment tools.

Social Monitoring:  Increasingly, social media is “the media”; prospective customers assign far more weight to social media than to any external messaging.  A social listening system that analyzes and responds to indirect social feedback is increasingly becoming essential.  As with comment tools, social listening can be used to inform the post-transaction surveys.  Click here for a broader discussion of social listening tools.

Directing our attention to the bank side of the bank-customer interface, tools to measure the experience from this side include:

Mystery Shopping:  In today’s increasingly connected world, one bad experience can be shared hundreds if not thousands of times over.  As in-person delivery models shift to a universal associate model, with the branch serving as more of a sales center, monitoring and motivating selling skills is becoming increasingly essential.  Mystery shopping is an excellent tool to align sales and service behaviors with the brand. Unlike the various customer feedback tools designed to inform managers about how customers feel about the bank, mystery shopping focuses on the behavioral side of the equation, answering the question: are our employees exhibiting appropriate sales and service behaviors?  Click here for a broader discussion of mystery shopping tools.

Employee Surveys:  Employee surveys often measure employee satisfaction and engagement.  However, in terms of understanding the customer experience, a best practice is to move employee surveys beyond engagement and to understand what is going on at the customer-employee interface, leveraging employees as a valuable and inexpensive source of customer experience information.  This information comes directly from one side of the customer-employee interface, and it not only provides intelligence on the customer experience but also evaluates the level of support within the organization, solicits recommendations, and compares perceptions by position (frontline vs. management) to identify the perceptual gaps that typically exist within organizations.  Click here for a broader discussion of employee surveys.

For more posts in this series, click on the following links:


Click Here For More Information About Kinesis' Bank CX Research Services

Best Practices in Bank Customer Experience Measurement Design: Customer Surveys

Post Transaction Surveys

Many banks conduct periodic customer satisfaction research to assess the opinions and experiences of their customer base. While this information can be useful, it tends to be very broad in scope, offering little practical information to the front-line.  A best practice is a more targeted, event-driven approach collecting feedback from customers about specific service encounters soon after the interaction occurs.

These surveys can be performed using a variety of data collection methodologies, including e-mail, phone, point-of-sale invite, web intercept, in-person intercept and even US mail.  Fielding surveys via e-mail, with its immediacy and relatively low cost, offers the most potential for return on investment.  Historically, there have been legitimate concerns about the representativeness of sample selection using email.  However, as banks collect email addresses for a growing share of their customers, there is less concern about sample selection bias.

The process for fielding such surveys is fairly simple.  On a daily basis, a data file (in research parlance, “sample”) is generated containing the customers who have completed a service interaction in any channel.  This data file should be deduped, cleaned against a do-not-contact list, and cleaned against customers who have been surveyed recently (typically within three months, depending on the channel).  At this point, if survey invitations were sent to every remaining customer, the bank would quickly exhaust the sample, potentially running out of eligible customers for future surveys.  To avoid this, a target number of completed surveys should be set per business unit, and a random selection process employed to select just enough customers to reach this target without surveying every customer. [1]
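A minimal sketch of that daily preparation step in Python; the file, list, and field names are hypothetical:

    import pandas as pd

    # Hypothetical daily extract of customers who completed a service interaction.
    sample = pd.read_csv("daily_interactions.csv")
    do_not_contact = set(pd.read_csv("do_not_contact.csv")["email"])
    recently_surveyed = set(pd.read_csv("surveyed_last_90_days.csv")["email"])

    # Dedupe, then clean against the do-not-contact and recently-surveyed lists.
    sample = sample.drop_duplicates(subset="email")
    sample = sample[~sample["email"].isin(do_not_contact)]
    sample = sample[~sample["email"].isin(recently_surveyed)]

    # Randomly select just enough customers to stay on pace for the quota
    # rather than exhausting the eligible sample (see footnote 1).
    daily_target = 40  # derived from the quota; see the sketch in the footnote
    invites = sample.sample(n=min(daily_target, len(sample)))
    invites.to_csv("todays_invites.csv", index=False)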

So what are some of the purposes banks use these surveys for?   Generally, they fall into a number of broad categories:

Post-Transaction: Teller & Contact Center: Post-transaction surveys are event-driven, where a transaction or service interaction determines if the customer is selected for a survey, targeting specific customers shortly after a service interaction.  As the name implies, the purpose of this type of survey is to measure satisfaction with a specific transaction.

New Account & On-Boarding:  New account surveys measure satisfaction with the account opening process, as well as determine the reasons behind new customers’ selection of the bank for a new deposit account or loan – providing valuable insight into new customer identification and acquisition.

Closed Account Surveys:  Closed account surveys identify sources of run-off or churn to provide insight into improving customer retention.

Call to Action

Research without a call to action may be informative, but it is not very useful.  Call-to-action elements should be built into the research design, providing a road map for clients to maximize the ROI of customer experience measurement.

Finally, post-transaction surveys support other behavioral research tools.  Properly designed surveys yield insight into customer expectations, which provide an opportunity for a learning feedback loop to support observational research, such as mystery shopping, where customer expectations are used to inform service standards which are in turn measured through mystery shopping.

For more posts in this series, click on the following links:

 

[1] Kinesis uses an algorithm which factors in the targeted quota, response rate, remaining days in the month and number of surveys completed to select just enough customers to reach the quota without exhausting the sample.
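The exact algorithm is not spelled out here, but a plausible minimal sketch consistent with that description computes today’s invitation count from the remaining quota, the expected response rate, and the days left in the month:

    def daily_invite_target(quota: int, completed: int,
                            response_rate: float, days_remaining: int) -> int:
        """Invitations to send today to stay on pace for the monthly quota.

        A plausible sketch only; the production algorithm may weight
        these inputs differently.
        """
        remaining_completes = max(quota - completed, 0)
        needed_per_day = remaining_completes / max(days_remaining, 1)
        return round(needed_per_day / response_rate)

    # Example: 200-complete quota, 120 done, 15% response rate, 8 days left.
    print(daily_invite_target(200, 120, 0.15, 8))  # ~67 invitations today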


Click Here For More Information About Kinesis' Bank CX Research Services

Customer Experience Measurement Implications of Changing Branch Networks

The branch network is evolving based on banking’s changing economic model as well as changing customer expectations and behaviors. As the branch network evolves measurement of the customer experience within the branch channel will need to evolve as well to fit both the changing economic model and customer behaviors.

Deb Stewart’s recent article “The Branch Shrinks” in the June 2014 edition of ABA Bank Marketing and Sales used the experience of Sweden as an example of how the branch operating model in the US may evolve in response to these changes. Ms. Stewart describes Sweden’s branch operating model’s evolution in four primary ways:

  • Branches will be less monolithic, with branches tailored to location and market;
  • Branches will be much smaller and more flexible;
  • Customer facing technology will be more prevalent; and
  • Branch staffing will both decline and change, with increased use of “universal” associates who perform a wider range of functions, transforming tellers into sellers.

The article goes on to describe five case studies for innovative branch design in the United States.

Most commentary suggests branch networks will be redefined in three primary ways:

  • Flagship Branches: Hubs in a hub-and-spoke model, offering education and advice, and serving as sales centers.
  • Community Centers: Branches smaller in scope focused on community outreach driving loyalty.
  • Expanded ATMs: These will serve as transaction centers at in-store or other high traffic sites.

In short, there will be a variety of branch types, many staffed with fewer employees, each with a unique role, presenting three customer experience challenges:

  1. Consistently delivering on the brand promises despite disparate branch types – Does the customer experience reinforce the overall brand promise?
  2. Fidelity to each branch’s unique role within network – Does the customer experience fit the specific role and objectives of the branch?
  3. The huge challenge of transforming teller skills into universal associate skills – How do we manage a massive transition of tellers into financial advisors, fluent in all bank products, and manage these associates with fewer employees on site?

Flagship Branches
The customer experience at flagship branches will best be measured much like it is at traditional branches today, with a mix of customer satisfaction surveys and mystery shopping. A random sampling across all interaction types will ensure that all of the services offered at these education and sales centers are evaluated. Mystery shopping should focus on sales scenarios across all retail product lines, evaluating sales effectiveness, quality of experience and compliance.

Community Centers
Community Center branches present the greatest need to refine customer experience measurement, and the greatest opportunity to use it as a management tool. Universal associates, with broad skill requirements and working in lightly staffed branches, mandate that the customer experience be monitored closely. Post-transaction surveys across all interaction types should be used to evaluate employee skill level, appropriate resolution of inquiries, and consistency of service with the brand promise. An automated email or mobile survey will provide managers with a near real-time view of the customer experience at a fraction of the cost of other data collection methods. Mystery shopping across a broad range of scenarios will evaluate employee skill level and appropriate referral practices for mortgage and investment services to Flagship branches or Video Bankers. Fewer employees will allow for better tracking of the customer experience at the employee level, which will be a necessity given the increased expectations on these employees with less onsite management.

Expanded ATMs
As with the other branch types, a random sampling of all interaction types will yield a valid sample of the transactions these branches perform. Here too, automated email or mobile surveys will provide a near real-time view of the experience. Mystery shopping may be used to evaluate service interactions with video tellers, investment advisors or tellers.

Evolution of the branch network, particularly with changes in the staffing model, will require changes in how the customer experience is monitored. The good news is survey technology is evolving as well, and will give managers the opportunity to gather intelligence on the customer experience in a highly efficient and productive manner.

For more posts in this series, click on the following links:


Click Here For More Information About Kinesis' Bank CX Research Services