Companies and leaders engaging in Win-Loss research often assume that Loss-focused data will yield the most useful information in a project. The thinking goes that since they already know what Wins have to say, speaking with Losses will surface what they don’t know. And if the account manager says the relationship is good, then that’s good enough, right? Not so fast.
We were recently engaged by a customer to reach out to their renewal list or, more accurately, their non-renewal list. The folks on the interview list had, for whatever reason, chosen to end the business relationship and we were tasked with finding out why. Strap yourself in, friends. Let’s go for a ride.
A Dire Picture
Over time, the Win-Loss calls painted a dire picture. The product was seen as lacklustre and missing elements that the competition offers. Worse, new products didn’t work as promised and service didn’t live up to expectations. Ouch. The team’s worst fears were confirmed by the calls. These were real and significant problems, and the team’s natural reaction was to focus resources on fixing the issues raised in the calls, even if it meant pulling resources away from current customers.
But Wait… What’s Wrong with This Picture?
This data is all rather negative now, isn’t it? It’s big and it’s scary and it requires action. But wait… where are the customers who think the offering is adequate, or even quite good? Let’s be real here: our client is a going concern, and they steadily onboard new clients as others churn out. Yes, there’s a bit of a downward trend in sales year-over-year, but it’s not so bad that layoffs are imminent. In short, they could be doing better, but they aren’t doing as badly as the calls make things seem.
Zooming Out – The Importance of Strategic Sampling
Since this is a post on the problem with Loss-focused research, you can probably see where we’re going with this. The point at issue here is a classic example of sampling bias. Yep, that simple. Sampling bias skews your results because you are sampling from a subset of people who all think the same way: in this case, those dissatisfied with the product or the business. Sampling bias can be a problem for any research effort, since it is practically impossible to ensure perfectly random sampling. It’s a particular problem for research methods with small sample sizes: if you’re not careful, you can unknowingly end up with a set of folks who all feel the same way, even when that opinion is not the popular one. But that’s okay.
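To see how dramatic this effect can be, here’s a minimal simulation sketch. The numbers are entirely made up for illustration (a 70% satisfied customer base, with dissatisfied customers far more likely to churn); the point is only to show how a churn-only interview list can report a very different satisfaction rate than a random sample of the same customer base.

```python
import random

def simulate(seed=42, n=10_000):
    """Compare a random customer sample against a churn-only (Loss-focused)
    sample drawn from the same simulated customer base."""
    rng = random.Random(seed)
    customers = []
    for _ in range(n):
        satisfied = rng.random() < 0.70           # assume 70% are satisfied
        # Assume dissatisfied customers churn far more often than satisfied ones.
        churn_probability = 0.05 if satisfied else 0.60
        churned = rng.random() < churn_probability
        customers.append((satisfied, churned))

    def satisfaction_rate(group):
        return sum(sat for sat, _ in group) / len(group)

    random_sample = rng.sample(customers, 200)    # strategic: sample everyone
    churn_sample = [c for c in customers if c[1]]  # Loss-focused: churned only
    return satisfaction_rate(random_sample), satisfaction_rate(churn_sample)

rand_rate, churn_rate = simulate()
print(f"random sample satisfaction:     {rand_rate:.0%}")
print(f"churn-only sample satisfaction: {churn_rate:.0%}")
```

With these assumed numbers, the random sample hovers near the true 70% satisfaction rate, while the churn-only sample sits far below it, even though both are drawn from the exact same customer base. That gap is sampling bias doing its work.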
The Real Issue Here
So here’s the thing. Sampling bias isn’t necessarily a bad thing, when you know what you’re getting into. If you’re going to focus on Losses, of course you’re going to get negative responses. The real issue here is that, despite advice to the contrary, our otherwise-savvy client took the series of negative calls at face value, without considering that many customers might still think highly of their work.
Net-Net: Take Loss-focused research with a grain of salt. Your customers will tell you important things, and you should definitely pay attention to what they say. Dissenting opinions are our friends. But take dissenters’ feedback and filter it through what you already know about your business. Compare your responses to information you know well, like your sales figures and trends. Loss-focused research has a place, but since no business can be all things to all people, make sure that Loss-focused research is in the right place.
Or you could talk to Wins, too. Wins are often overlooked by companies doing Win-Loss research, and that’s too bad. Learning from your existing customers in a structured way can mitigate everything we talked about above while also providing a welcome feedback channel for your customers. They want to talk. You just have to want to listen.