2026-04-30 · Greg Armstrong

What to do with feedback you can't act on yet

There's a Notion page somewhere in your workspace. Maybe it's called "User feedback" or "Feature requests" or just "Stuff to look at later." It has sixty entries. Some of them have the user's name attached. Most don't. The oldest ones are eight months old. Nobody knows which of those users have since churned, whether the thing they asked for is still the thing they'd want today, or whether a different feature you shipped last quarter quietly solved it for half of them.

This is the default state for a lot of SaaS products. Feedback gets collected, somewhere. It just doesn't go anywhere.

The problem isn't that the feedback isn't useful. Some of it probably is. The problem is that without context, you can't tell which parts. A feature request from a user on a free trial eight months ago is a different signal than the same request from a paying customer who's been using your product for a year. The words might be identical. The weight is not.

Why context decays

A request that made sense at the time it was submitted can become uninterpretable within a few months. The user who asked for X may have churned because they didn't get it, which makes their request a churn signal as much as a feature opportunity. Or they found a workaround and it's less urgent now than it seemed. Or your product has changed in ways that make the original request tangentially relevant at best.

None of this is knowable from a row in a spreadsheet.

The other thing that happens over time: you can no longer reconstruct what you were thinking when you added the entry. "Export to CSV" could mean five different things depending on what product area the user was working in, which plan they were on, and whether they were trying to do something you already support. Without that context, you end up having to re-investigate the original problem from scratch, which defeats the purpose of having collected the feedback at all.

What to store

The fix isn't a more elaborate feedback system. It's a small amount of metadata captured at the time the feedback comes in.

Tag by theme, and be specific. Not "reporting" as a category, but "export limitations" or "sharing with external users" or "chart types in the dashboard." Make the tags specific enough that when you search for them in three months, you know exactly what people were asking for. The more precise the tag, the less work it takes to reconstruct intent later.

Next to the tag, note enough about the user to reconstruct who they were. Which plan they were on. How long they'd been a customer at the time. Whether they were still active the last time you checked. A request from someone on your highest plan who had been with you for eighteen months carries different weight than the same request from someone who signed up the week before. You don't need to build a CRM entry. A sentence of context is usually enough.
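To make this concrete, here's a minimal sketch of what one entry could look like if you kept it as structured data instead of a free-text row. The shape and field names are illustrative, not a prescription; the point is that each field answers a question you'd otherwise have to re-investigate later.

```typescript
// A minimal sketch of a feedback entry with context attached at intake.
// Field names are illustrative; adapt them to whatever you already track.
interface FeedbackEntry {
  themeTag: string;         // specific, e.g. "export-limitations", not "reporting"
  note: string;             // one sentence of context in your own words
  submittedAt: Date;
  user: {
    plan: string;           // plan at the time of submission, e.g. "business"
    tenureMonths: number;   // how long they'd been a customer when they asked
    activeAsOf?: Date;      // last time you confirmed they were still a customer
  };
}

const entry: FeedbackEntry = {
  themeTag: "export-limitations",
  note: "Wants scheduled CSV exports of the usage dashboard; hit the row cap.",
  submittedAt: new Date("2025-09-12"),
  user: { plan: "business", tenureMonths: 18, activeAsOf: new Date("2026-04-01") },
};
```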

Then set a trigger instead of a regular meeting. The trigger should be specific: "look at this when we start the sprint for improving the reporting area." Regular feedback review meetings fail because most of what's in the backlog isn't relevant to what the team is currently working on. So the meeting becomes perfunctory and eventually gets cancelled. A trigger means the feedback surfaces when it's actually useful.
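If each entry carries a trigger condition, say an area tag like "reporting", surfacing the right entries at the right moment becomes a filter rather than a standing meeting. A sketch, assuming we extend the hypothetical entry type above with an optional trigger field:

```typescript
// Hypothetical extension: each entry names the work area that should surface it.
interface TriggeredEntry extends FeedbackEntry {
  trigger?: string; // e.g. "reporting" — check this when a sprint touches that area
}

// Pull everything relevant only when you actually start work on an area.
function entriesForSprint(entries: TriggeredEntry[], area: string): TriggeredEntry[] {
  return entries.filter((e) => e.trigger === area);
}

// At sprint planning for the reporting area:
// const relevant = entriesForSprint(backlog, "reporting");
```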

Link back to the original response too, if you have one. When you come back to something months later, seeing what else that user said is usually more useful than the extracted note.

Where structured collection helps

Unstructured feedback, where someone sends an email or types something into a support chat, doesn't carry context unless you add it manually. That's manageable when it happens occasionally, but it isn't a system.

When feedback comes in through a form with specific questions, each response carries information about who submitted it and what they were responding to. Qria attaches customer segment information to responses by default, which means when you return to a six-month-old feature request, you can see whether it came from the kind of user whose opinion should actually influence your roadmap.


Why visible follow-up matters

One thing that makes backlogs harder over time: collecting feedback without any follow-up mechanism. When customers submit feedback and never hear anything back, they stop submitting. If someone tells you your reporting feature is limited and then watches you ship three other things over the next quarter without touching reporting, their next piece of feedback on that topic will be more muted. Or there won't be one.

This doesn't mean you need to address every request or send a personal response to everyone. But when you ship something that addresses a theme people raised, acknowledging it in-product or in a release note has a disproportionate effect on how willing people are to keep telling you things. That's a separate topic from the backlog problem. The point is just that collecting feedback and doing nothing visible with it erodes the signal over time.

Your backlog will always be bigger than your capacity to address it. That's not a problem you can solve. What you can control is whether the stuff in it is still legible when you eventually come back to it. A feature request with a user segment tag and a trigger condition attached is something you can still use in eight months. A feature request with nothing but a name and a vague sentence is effectively gone.