2026-04-29 · Greg Armstrong

What a 4.2 doesn't tell you

You check your reviews and you're sitting at 4.2. Not bad. Not great, but not a crisis. You've had dips before and things levelled out. Business is ticking along. Customers seem happy enough.

Then you start to notice that some regulars have gone quiet. People who used to come in every couple of weeks. You tell yourself it's probably nothing. Life gets busy, routines change. You move on.

Six months later the score is still 4.2. The regulars still haven't come back.

A 4.2 is a compressed signal. It takes every experience your customers have had, filters them through whoever bothered to leave a review (already a self-selected group, skewed toward the very satisfied and the very annoyed), and collapses everything into one number. That number tells you roughly where you land in the minds of people who felt strongly enough to say something. Which isn't nothing. But what it doesn't tell you is which customers are quietly leaving, or why, or what part of the experience tipped them.

The number is a lagging indicator. By the time your score shifts, the underlying problem has usually been playing out for weeks or months. You're reading a snapshot of what already happened, averaged across a population whose opinions you only partially captured.

The averaging is where meaning disappears. Your 4.2 might be a tight cluster, mostly fours with a few fives: customers who had a decent experience and went home mostly satisfied. Or it might be a bimodal spread, a lot of fives from loyal regulars alongside a cluster of twos and threes from first-time visitors who saw nothing that would bring them back. Both produce the same score. They are not the same business.
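
To make that concrete, here is a minimal sketch with invented ratings: two histories that are nothing alike, both landing on exactly 4.2.

```python
# Two very different rating histories with identical averages.
# The numbers are invented for illustration.
from statistics import mean
from collections import Counter

# Tight cluster: mostly fours, a few fives.
tight = [4] * 8 + [5] * 2

# Bimodal: loyal regulars leaving fives, first-timers leaving
# twos and threes.
bimodal = [5] * 7 + [2, 2, 3]

print(mean(tight), mean(bimodal))  # 4.2 4.2 -- same score
print(Counter(tight))              # Counter({4: 8, 5: 2})
print(Counter(bimodal))            # Counter({5: 7, 2: 2, 3: 1})
```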

And even if you could see the distribution, you would still be missing the thing that actually lets you act. Which part of the experience is driving the lower scores? The wait? Something about how the visit ended? The aggregate score doesn't carry that information. It was never designed to.

The instinctive response to a dip is usually to generate more reviews. Push for volume, hope the distribution averages out. This works occasionally, but it's treating the symptom. If the thing driving your lower scores is a real pattern in the experience, more reviews don't change the pattern. They just create more data about it.
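
A toy simulation shows why. Draw reviews from a fixed, hypothetical distribution of experiences whose true average is 4.2, and more volume doesn't move the score; it just pins it in place.

```python
# More reviews from an unchanged experience converge on the same
# score. The distribution below is invented; its true mean is 4.2.
import random
random.seed(1)

stars = [1, 2, 3, 4, 5]
weights = [0.05, 0.10, 0.10, 0.10, 0.65]  # hypothetical mix of experiences

for n in (50, 500, 5000):
    sample = random.choices(stars, weights=weights, k=n)
    print(n, round(sum(sample) / n, 2))   # tends toward 4.2 as n grows
```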

A less obvious response is to ask what the low scores have in common. Which customers gave them. When they visited. What they experienced. Whether there's something addressable or just noise. That question is hard to answer from an aggregate score, and close to impossible from a pile of unstructured text reviews.

The businesses that actually improve tend to be the ones that can say something specific. "New customers rate us a full point lower than returning ones." "Our scores on value for money have been dropping for two months while everything else is flat." Those sentences point somewhere. A 4.2 gives you something to worry about but not somewhere to start.

A score becomes useful when you can break it down. When you can say: first-time visitors are rating us lower than returning customers, and it's specifically the value-for-money question where they diverge. That's a sentence you can act on. "We have a 4.2" is not.
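
Here's a sketch of the kind of breakdown that sentence implies, using pandas over a toy table of structured responses. The column names (segment, question, score) and the data are assumptions for illustration, not any particular product's schema.

```python
# Per-question scores split by customer segment, from a toy table.
import pandas as pd

responses = pd.DataFrame({
    "segment":  ["new", "new", "new",
                 "returning", "returning", "returning"],
    "question": ["value_for_money", "service", "value_for_money",
                 "value_for_money", "service", "value_for_money"],
    "score":    [3, 4, 2, 5, 5, 4],
})

# Average score per question per segment. A gap on one question is
# what "first-timers rate us lower, specifically on value" looks like.
breakdown = (responses
             .groupby(["question", "segment"])["score"]
             .mean()
             .unstack("segment"))
print(breakdown)
```

The tooling is beside the point; what matters is that per-question, per-segment scores exist to group by at all.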

A post-visit prompt to leave a Google review gives you a star and, if you're lucky, a sentence or two of text to read and interpret yourself. It doesn't tell you whether the problem was at the front of house or the back. It doesn't tell you whether the person who gave you a 3 would have come back if one thing had gone differently. It's a signal without a location.

Structured feedback changes that. Qria collects per-question scores with every response, so you can track specific dimensions of the experience over time and see how different customer segments are scoring you. The overall number is still there. But you can see what it's made of.

The 4.2 from a customer who has been coming for three years and who sent their sister last month is the same number as the 4.2 from someone who left quietly and will not be back. They land in the same average.

What you need to know is who is behind it, and what they experienced, and whether the pattern is getting better or worse among the customers who are still deciding whether to stay. The score alone can't tell you that. You have to have asked.