Most teams treat pricing like a deployment: do it once, hope it works, and don't touch it again for a year. We decided to treat it like a feature — something you test, iterate on, and ship changes to weekly. Over 60 days, we ran 40 distinct pricing experiments across user cohorts. Here's the operating system we built and what it revealed.

The setup

First, the constraint: we don't have millions of users. We run consumer products in the low thousands of DAU. At that scale, a traditional fixed-sample A/B test at 95% confidence takes weeks per test. We can't afford that. So we built a testing framework around three principles:

Big deltas only. We never test a $4.99 price against $5.49. We test $4.99 against $9.99 or $4.99 against free-with-ads. If the difference isn't large enough to detect with a small sample, the difference isn't large enough to matter.

Sequential testing. Instead of fixing a sample size upfront, we use a Bayesian sequential framework that lets us stop early when evidence is strong. This cut our average test duration from 18 days to 6.

Multi-arm bandits for exploration. For tests with more than two variants, we allocate traffic dynamically using Thompson Sampling. The winning variant gets more traffic over time, which both accelerates convergence and limits revenue loss from bad variants. A sketch of the sequential stop rule and the bandit allocation follows.
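To make the framework concrete, here's a minimal sketch of both mechanisms, assuming a Beta-Bernoulli model of conversion. The class, function names, and stop thresholds are illustrative, not our production code:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

class BetaArm:
    """Beta-Bernoulli posterior over one variant's conversion rate."""
    def __init__(self):
        self.a, self.b = 1.0, 1.0           # uniform Beta(1, 1) prior

    def update(self, converted: bool) -> None:
        self.a += converted                  # conversion bumps alpha
        self.b += not converted              # non-conversion bumps beta

    def sample(self, n: int = 1) -> np.ndarray:
        return rng.beta(self.a, self.b, size=n)

def prob_a_beats_b(a: BetaArm, b: BetaArm, draws: int = 100_000) -> float:
    """Monte Carlo estimate of P(rate_A > rate_B) under the posteriors."""
    return float(np.mean(a.sample(draws) > b.sample(draws)))

def sequential_decision(a: BetaArm, b: BetaArm, hi=0.95, lo=0.05) -> str:
    """Peek after every batch; stop as soon as evidence is strong.
    hi/lo are illustrative thresholds, tuned to your risk tolerance."""
    p = prob_a_beats_b(a, b)
    if p > hi:
        return "ship A"
    if p < lo:
        return "ship B"
    return "keep testing"

def thompson_choose(arms: list[BetaArm]) -> int:
    """Thompson Sampling: draw one posterior sample per arm and route the
    next user to the best draw; winners soak up traffic over time."""
    return int(np.argmax([arm.sample(1)[0] for arm in arms]))
```

The same posterior drives both decisions: the stop rule compares two posteriors head to head, and the bandit samples from all of them to allocate the next user.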

[fig. 1 — experiment framework + bandit allocation converging on winner: hypothesis (big deltas only) → sequential Bayesian test with early stopping → ship or kill (≤6 days avg), with learnings feeding the next hypothesis; Thompson Sampling shifts traffic allocation over time.]

The 40 experiments

We didn't just test price points. Pricing is a surface, not a number. Here's how the 40 experiments broke down:

| Category | Tests | Biggest winner | Δ conversion |
| --- | --- | --- | --- |
| Price point | 8 | $6.99 vs $9.99 | +18% |
| Framing (daily vs monthly) | 4 | $0.23/day vs $6.99/mo | +34% |
| Trial length | 6 | 3-day vs 7-day trial | +22% |
| Feature gating | 5 | Hard gate vs soft gate | -11% |
| Annual vs monthly | 4 | Annual-first display | +27% |
| Paywall screen design | 7 | Minimalist vs feature list | +15% |
| Currency anchoring | 3 | Local currency rounding | +8% |
| Social proof on paywall | 3 | Review count display | +12% |

Finding 1: Daily framing is the single biggest lever

Showing "$0.23/day" instead of "$6.99/month" increased conversion by 34%. This wasn't surprising in hindsight — every SaaS pricing page does this for annual plans. What was surprising was that almost no consumer app does it, and the effect is even stronger in consumer because the absolute numbers are smaller.

The psychology is straightforward but underappreciated: $6.99/month triggers a subscription evaluation — "is this worth adding to my monthly expenses?" $0.23/day triggers a comparison — "this costs less than a candy bar." The user isn't doing different math. They're in a different mental frame entirely.
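The reframing itself is one line of arithmetic; only the display changes. A hypothetical formatter (the 30.44-day average month is our assumption; any consistent divisor works):

```python
def daily_framing(monthly_price: float, days_per_month: float = 30.44) -> str:
    """Reframe a monthly price as a per-day figure for the paywall.
    30.44 is the average Gregorian month length."""
    return f"${monthly_price / days_per_month:.2f}/day"

assert daily_framing(6.99) == "$0.23/day"
```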

Finding 2: Shorter trials convert better

This was counterintuitive. The 3-day trial converted 22% better than the 7-day trial. The conventional wisdom is that longer trials let users "discover more value." In practice, longer trials let users procrastinate on the conversion decision. The 3-day trial creates urgency without being aggressive — users know they need to evaluate the product now, not "someday this week."

There's a second-order effect: 3-day trials also produce better-quality subscribers. Their D30 retention after conversion was 14% higher than 7-day trial converts. Our theory: the 3-day window self-selects for users who are genuinely interested enough to engage quickly.

A 7-day trial is a polite way of saying "we're not confident enough in our product to ask you to decide in 3 days."

Finding 3: Hard feature gates destroy more value than they capture

We tested hard gates (feature completely locked behind paywall) vs soft gates (feature available with limits — e.g., 3 uses per day free, unlimited paid). Hard gates reduced conversion by 11%. But worse, they reduced D7 retention of free users by 28%. Users who hit a hard gate and don't convert don't come back. They haven't built enough habit to justify paying, and the gate prevents them from ever building it.

Soft gates let the user experience enough of the value to build a habit, then create friction at the point where the habit is strong enough to justify paying. The conversion moment shifts from "I want to try this" to "I need more of this." That's a fundamentally stronger buying signal.

[fig. 2 — hard gate vs soft gate user journey: hard gate (signup → try free → locked wall: 72% leave, 28% pay) vs soft gate (signup → use 3/day → build habit → want more: 39% pay, better LTV).]
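Mechanically, a soft gate is just a per-day quota check at the feature's entry point. A minimal sketch, assuming an in-memory usage map (swap in your real store); the names are ours, and the limit constant matches the 3-uses-per-day example above:

```python
from dataclasses import dataclass
from datetime import date

FREE_DAILY_LIMIT = 3  # the "3 uses per day free" tier from the text

@dataclass
class User:
    id: int
    is_subscribed: bool

def can_use_feature(user: User, usage: dict) -> bool:
    """Soft gate: subscribers always pass; free users get
    FREE_DAILY_LIMIT uses per day. `usage` maps (user_id, date) -> count."""
    if user.is_subscribed:
        return True
    return usage.get((user.id, date.today()), 0) < FREE_DAILY_LIMIT

def record_use(user: User, usage: dict) -> None:
    key = (user.id, date.today())
    usage[key] = usage.get(key, 0) + 1
```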

Finding 4: The paywall screen matters more than the price

We ran 7 variations of the paywall screen. Same price, same feature set, different presentation. The gap between the best- and worst-performing screens was 41%. Forty-one percent. The price was identical. Only the layout, copy, and visual hierarchy changed.

The winning screen was the simplest: product name, one sentence of value prop, the daily price in large text, and a single CTA. No feature comparison tables. No "most popular" badges. No testimonials. The losing screen had all of those things. More information led to more deliberation, which led to more exits.

Finding 5: Local currency rounding is free money

When we converted USD prices to local currencies, we initially used exact exchange rates: €6.43/month. This looks wrong to users — it feels like an afterthought. When we rounded to €6.49 or €5.99, conversion increased by 8% in European markets. "Clean" numbers in the local currency signal that the product was designed for this market, not just translated into it.
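A sketch of the rounding rule we mean, working in integer cents to avoid float drift. The .49/.99 endings come from the examples above; note that app stores typically constrain you to preset price points anyway:

```python
def charm_round(exact_price: float, endings=(49, 99)) -> float:
    """Snap an exchange-rate-converted price to the nearest 'clean'
    local ending: 6.43 -> 6.49, 6.80 -> 6.99."""
    cents = round(exact_price * 100)
    base = (cents // 100) * 100
    candidates = [base - 100 + e for e in endings] + [base + e for e in endings]
    best = min((c for c in candidates if c > 0), key=lambda c: abs(c - cents))
    return best / 100

assert charm_round(6.43) == 6.49
```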

The meta-learning

After 40 experiments, the compound effect was a 2.3x increase in revenue per user. Not from raising prices — the final price point was actually lower than where we started. The revenue increase came entirely from conversion rate improvements and better retention of converted users.
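For intuition on why stacked wins compound, here's purely illustrative arithmetic — hypothetical lifts, not a decomposition of our actual 2.3x:

```python
# Hypothetical independent lifts; NOT the breakdown of the 2.3x above.
lifts = [0.34, 0.22, 0.15, 0.08]
multiplier = 1.0
for lift in lifts:
    multiplier *= 1 + lift
print(f"{multiplier:.2f}x")   # -> 2.03x: four stacked wins roughly double RPU
```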

The real learning isn't any individual finding. It's that pricing is a continuous optimization surface, not a one-time decision. Every week we don't run a pricing experiment is a week of revenue we're leaving on the table. The setup cost is minimal once you have the framework. The opportunity cost of not testing is enormous.


If you're running a consumer product and you haven't changed your pricing in the last 90 days, you're almost certainly leaving money on the table. Not because your price is wrong — but because you don't know if it's right, and you're not running the experiments that would tell you.