2026-04-20 · Greg Armstrong

Why NPS is probably lying to you

NPS was introduced in a Harvard Business Review article in 2003. The idea was simple: ask customers one question -- "How likely are you to recommend us to a friend or colleague?" -- on a scale of 0 to 10, split respondents into Promoters (9-10), Passives (7-8), and Detractors (0-6), subtract the percentage of Detractors from the percentage of Promoters, and you have a single number that supposedly predicts business growth.

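To make the arithmetic concrete, here's a minimal sketch in Python -- the function name and the sample scores are mine for illustration, not from any official NPS tooling:

```python
def nps(scores):
    """Compute Net Promoter Score from a list of 0-10 survey responses."""
    if not scores:
        raise ValueError("need at least one response")
    promoters = sum(1 for s in scores if s >= 9)   # 9-10
    detractors = sum(1 for s in scores if s <= 6)  # 0-6; 7-8 are Passives
    # Percentage of Promoters minus percentage of Detractors, so the
    # result ranges from -100 (all detractors) to +100 (all promoters).
    return 100 * (promoters - detractors) / len(scores)

# Ten hypothetical responses: 4 promoters, 3 passives, 3 detractors
print(nps([10, 9, 9, 10, 8, 7, 8, 6, 5, 3]))  # 10.0
```

Notice that Passives drop out of the result entirely: 40% promoters with 30% detractors and 70% promoters with 60% detractors both come out to 10, even though they describe very different customer bases. That's worth keeping in mind for everything that follows.
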
Thousands of companies adopted it. It became the default customer satisfaction metric. "What's our NPS?" became a boardroom question. Whole consulting practices were built around interpreting it.

I get why. A single number is easy to track, easy to report, easy to argue about in a meeting. The problem is that in practice the number often tells you very little.

Think about what the question actually asks. Not whether you solved the customer's problem. Not whether they'd come back. It asks whether they'd recommend you -- which is a social act, not a satisfaction rating. It depends on who they know, what those people need, whether the subject ever comes up in conversation. A customer can genuinely like your product and never recommend it because their friends just don't need it. A lukewarm customer might still score you a 9 because you asked and they couldn't name anything better off the top of their head.

Then there's the benchmarking problem. An NPS of 40 sounds meaningful until you realise it means completely different things in different sectors. A software company with 40 might be doing badly. A healthcare provider with the same score might be exceptional. The number looks precise. The comparison is murky.

And timing. Most companies send NPS at a fixed trigger -- after a purchase, after a support call, quarterly. But sentiment shifts with every interaction. A score taken right after a frustrating billing experience is measuring something very different from one taken after a product update that worked well. You're capturing a moment and treating it as a relationship.

I'm not saying it's useless. Tracked consistently over time, treated as a rough directional signal, it tells you something about whether things are broadly improving or declining. The trouble is that most companies treat it as a verdict rather than a starting point. Support teams close tickets faster to move the number. Marketing sends the survey at the moment most likely to produce a high result. Everyone ends up optimising the number rather than the thing the number was supposed to measure.

And while that's happening, the questions that actually matter don't get asked. What's working? Where are people getting stuck? What keeps breaking? You only get real answers to those by asking directly, about specific things, tied to actual moments in the experience -- not on a quarterly schedule.

A score of 47 doesn't tell you onboarding is confusing, or that support is slow on Friday afternoons, or that three customers this month flagged the same missing feature in the open text box. Those are the things that change how you run the product.

NPS got popular because it made something complicated into a number you could put on a slide. That's not nothing. But a number that tells you something changed is only useful if you're also asking why, and most NPS implementations skip that part entirely.