For about two years, our daily standup started the same way: someone pulled up the DAU chart. Green meant a good day. Red meant we'd spend the next hour trying to figure out what happened. The problem wasn't that the number was wrong. The problem was that it was meaningless.

DAU is the comfort food of product metrics. It feels important. It goes up when things are "working." It's easy to explain to anyone. And it tells you almost nothing about the health of your product.

The flatline that changed everything

In Q3 2026, our DAU for one product was flat for 8 weeks. Completely flat. The team was anxious. But when we decomposed the number, we found something unexpected: acquisition was up 40%, and churn had increased by roughly the same absolute amount. Two completely different dynamics — one good, one terrible — canceling each other out into a single line that looked like "stable."

fig. 1 — DAU flatline masking divergent acquisition and churn trends (weekly series for DAU, acquisition, and churn, W1–W7)
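
To make "decomposed" concrete, here's a minimal sketch of the breakdown, assuming a flat table of (user_id, date) activity rows. The column names are hypothetical, not our actual schema; the point is that the day-over-day DAU delta is exactly inflow minus outflow, so a flat line can hide both numbers growing.

```python
import pandas as pd

def decompose_dau(events: pd.DataFrame) -> pd.DataFrame:
    """Split each day-over-day DAU delta into inflow (active today,
    not yesterday) and outflow (active yesterday, not today).

    events: DataFrame with columns ['user_id', 'date'].
    """
    active = {day: set(g["user_id"]) for day, g in events.groupby("date")}
    days = sorted(active)
    rows = []
    for prev, curr in zip(days, days[1:]):
        inflow = len(active[curr] - active[prev])   # new + resurrected
        outflow = len(active[prev] - active[curr])  # lapsed
        rows.append({
            "date": curr,
            "dau": len(active[curr]),
            "inflow": inflow,
            "outflow": outflow,
            "delta": inflow - outflow,  # always equals the DAU change
        })
    return pd.DataFrame(rows)
```

A flat `dau` column with `inflow` and `outflow` both climbing is exactly the Q3 pattern above: two divergent trends, one line.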

This is not an edge case. This is the default behavior of aggregate metrics. They compress multiple signals into one number and destroy the information you actually need to make decisions.

What we replaced it with

We moved to a decomposed dashboard. No single top-line number. Instead, we track four things independently, and we never combine them:

1. New user activation rate

Not signups. Not downloads. The percentage of new users who complete the core action within 48 hours. For us, this is the single best predictor of whether a user will still be around in 30 days. We found that the correlation between "completed core action in 48h" and "retained at D30" is 0.74 across our products. The correlation between "opened the app on day 1" and "retained at D30" is 0.12. That's the difference between signal and noise.
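
For the curious, here's roughly how that gets computed. This is a sketch with hypothetical table and column names (`signup_at`, `retained_d30`, and so on), not our production pipeline; the correlation is a plain Pearson coefficient between the binary activation flag and the binary retention flag.

```python
import pandas as pd

def activation_signal(users: pd.DataFrame,
                      core_actions: pd.DataFrame) -> dict:
    """Activation = share of new users whose first core action lands
    within 48h of signup; also reports how well that flag predicts
    D30 retention.

    users: ['user_id', 'signup_at', 'retained_d30' (bool)]
    core_actions: ['user_id', 'occurred_at']
    """
    first = (core_actions.groupby("user_id")["occurred_at"].min()
             .rename("first_action_at").reset_index())
    df = users.merge(first, on="user_id", how="left")
    # NaT (user never did the core action) compares as False here.
    df["activated"] = (df["first_action_at"] - df["signup_at"]
                       ) <= pd.Timedelta(hours=48)
    return {
        "activation_rate": df["activated"].mean(),
        "corr_with_d30": df["activated"].astype(float)
                           .corr(df["retained_d30"].astype(float)),
    }
```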

2. Qualified daily actives (QDA)

We invented this metric because DAU counts zombies. A user who opens the app, sees a loading screen, and leaves is a daily active user. A bot that pings your API is a daily active user. QDA counts users who performed at least one meaningful action — defined differently per product, but always tied to the value loop.

fig. 2 — DAU vs QDA filter (DAU: 100% of "active" users; opened app and stayed >10s: ~68%; QDA, completed core action: ~31%)

When we switched from DAU to QDA, the number dropped by 69%. That's not a bad thing. That's clarity. We were finally measuring users who were actually using the product, not users who happened to open it.
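
The filter itself is simple. A sketch, assuming an events table with a hypothetical `event_type` column and a per-product allowlist of core actions (the event names below are invented):

```python
import pandas as pd

# Hypothetical: which event types count as "meaningful" for this
# product. In practice this is defined per value loop.
CORE_EVENTS = {"project_created", "document_exported"}

def qualified_daily_actives(events: pd.DataFrame) -> pd.DataFrame:
    """Count DAU and QDA per day. A user is a qualified daily active
    only if at least one of their events that day is a core action;
    merely opening the app does not qualify.

    events: ['user_id', 'date', 'event_type']
    """
    dau = events.groupby("date")["user_id"].nunique().rename("dau")
    qda = (events[events["event_type"].isin(CORE_EVENTS)]
           .groupby("date")["user_id"].nunique().rename("qda"))
    out = pd.concat([dau, qda], axis=1).fillna(0).astype(int)
    out["qda_share"] = out["qda"] / out["dau"]
    return out
```

The ~31% in fig. 2 is just `qda / dau` on a day like the one pictured.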

3. Revenue per retained user

Not ARPU. ARPU divides revenue by all users, including free users who will never convert. Revenue per retained user tells you how much value you're extracting from the users who actually stick around. When this number goes up, you're either getting better at monetization or attracting higher-quality users. When it goes down, you're either underpricing or diluting your user base. Both are actionable. ARPU going down could mean anything.
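
The arithmetic difference is one denominator, but it's worth spelling out. A sketch with hypothetical columns: `revenue` per user for the period, and a `retained` flag under whatever retention definition the product uses.

```python
import pandas as pd

def revenue_metrics(users: pd.DataFrame) -> dict:
    """Compare ARPU to revenue per retained user.

    users: ['user_id', 'revenue', 'retained' (bool)]
    Assumes at least one retained user in the period.
    """
    retained = users[users["retained"]]
    return {
        "arpu": users["revenue"].sum() / len(users),
        "revenue_per_retained": retained["revenue"].sum() / len(retained),
    }
```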

4. Cohort decay curves

Not a number. A shape. We look at the retention curve for each weekly cohort and compare the shape, not the level. A cohort that starts at 40% D1 retention but flattens at 15% D30 is infinitely more valuable than a cohort that starts at 60% D1 but decays to 3% D30. The first curve has a floor. The second doesn't.

fig. 3 — cohort A (low start, floor ≈ 15%) vs cohort B (high start, no floor); retention plotted from D1 to D30
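
To compare shapes rather than levels, we summarize each cohort's curve by its floor and by how much of D1 survives to D30. A sketch, assuming a retention matrix with weekly cohorts as rows and days-since-signup as columns; the intermediate values in the example are invented, but the endpoints match fig. 3.

```python
import pandas as pd

def cohort_shape(retention: pd.DataFrame) -> pd.DataFrame:
    """Summarize the shape of each cohort's decay curve.

    retention: rows = weekly cohorts, columns = days since signup
    (e.g. 1, 7, 14, 21, 30), values = fraction retained.
    """
    d1 = retention[1]
    floor = retention[30]  # late retention is our proxy for the floor
    return pd.DataFrame({
        "d1": d1,
        "d30_floor": floor,
        # Share of the D1 cohort surviving to D30: cohort A
        # (0.40 -> 0.15) scores 0.375, cohort B (0.60 -> 0.03)
        # scores 0.05, despite B's higher start.
        "survival_ratio": floor / d1,
    })

# Illustrative values: D7-D21 are made up, endpoints match fig. 3.
curves = pd.DataFrame(
    {1: [0.40, 0.60], 7: [0.22, 0.20], 14: [0.17, 0.08],
     21: [0.16, 0.05], 30: [0.15, 0.03]},
    index=["cohort_A", "cohort_B"],
)
print(cohort_shape(curves))
```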

The organizational consequence

The most underrated effect of changing your metrics is that it changes your conversations. When DAU was the headline number, every team meeting devolved into "what can we do to get more people to open the app." That's an engagement question, and the answers are always the same: more push notifications, more emails, more re-engagement campaigns. It's a treadmill.

When QDA became the headline number, the question shifted to "what can we do to make more people complete the core action." That's a product question. And the answers are fundamentally different: simplify the flow, reduce time-to-value, remove steps.

The metric you put on the wall determines the questions your team asks. The questions determine the solutions. Pick the wrong metric and you'll build the wrong product while feeling good about it.

Why this matters if you're small

You might think this is a big-company problem. It's the opposite. If you have 500,000 DAU, a 5% measurement error is noise. If you have 1,500 DAU, a 5% measurement error is 75 users — which might be the difference between a growing product and a dying one. Small sample sizes amplify metric choice. You literally cannot afford to track the wrong thing.

We run products with DAU in the low thousands. At that scale, every user is visible. Every behavior pattern is identifiable. And every wrong metric is a magnified distortion. DAU was giving us a smoothed-out story that felt safe and was completely wrong.


We didn't replace one dashboard with another. We replaced a reflex — the reflex to check one number — with a practice: decompose, compare shapes, look for divergence. It takes more time. It's less satisfying. It doesn't fit in a Slack message. But it's the difference between piloting by instruments and piloting by the color of the sky.