The conventional wisdom says you need a dedicated team per product. A PM, 2-3 engineers, a designer, maybe a data person. That's 4-6 people per product. For 4 products, that's 16-24 people. We run 4 products with 3 people. Not because we're martyrs. Because the math only works this way.
The portfolio thesis
Most consumer products fail. Not most startups — most products. The base rate of a single consumer product reaching sustainable revenue is somewhere around 5-10%. If you're a team of 15 people all working on one product, you have a 5-10% chance of succeeding and a 90-95% chance of spending 2 years on something that doesn't work.
If you're a team of 3 running 4 roughly independent products with modest resource allocation, you have about a 19-34% chance that at least one of them works: 1 - (1 - p)^4 for p between 5% and 10%. Same total effort, dramatically different expected outcome. The math is straightforward — it's the same logic as portfolio diversification in finance, applied to product bets.
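The arithmetic is easy to check directly. A minimal sketch, assuming the bets are independent (the 5-10% base rate and the 4-product count come from the text; the function name is mine):

```python
def p_at_least_one(p_single: float, n_products: int) -> float:
    """Probability that at least one of n independent bets succeeds."""
    return 1 - (1 - p_single) ** n_products

# One team, one product: the base rate is all you get.
print(round(p_at_least_one(0.05, 1), 2))  # 0.05
# Same team, four products: failure has to happen four times in a row.
print(round(p_at_least_one(0.05, 4), 2))  # 0.19
print(round(p_at_least_one(0.10, 4), 2))  # 0.34
```

The independence assumption is generous — products sharing a team and a stack will have correlated outcomes — so the real number sits somewhere between the single-product base rate and these figures.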
How it actually works day-to-day
The key insight is that 4 products don't require 4x the work. They share infrastructure, they share patterns, and most importantly, they share the most expensive resource: decision-making attention.
Our time allocation isn't equal across products. At any given time, one product gets 60% of our attention and the others run on maintenance. We rotate which product is in "active mode" based on a simple heuristic: which product currently has the strongest signal that more effort will produce more results?
A product that's growing organically and has a clear retention issue gets active mode — because the ROI on retention work is high when acquisition is free. A product that's stable and generating revenue stays in maintenance — there's no urgent signal. A product that's not getting traction despite effort gets either a major pivot or deprioritized entirely.
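The rotation heuristic above can be made concrete as a scoring function. This is an illustrative sketch — the signal names, weights, and thresholds are mine, not the team's actual scoring:

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    organic_growth: float          # weekly organic user growth rate (illustrative)
    revenue_stable: bool           # already generating steady revenue
    months_without_traction: int   # time since launch with no organic signal

def effort_signal(p: Product) -> float:
    """Higher score = more evidence that extra effort will produce results."""
    if p.months_without_traction >= 6:
        return 0.0               # candidate for a major pivot or deprioritization
    score = p.organic_growth     # free acquisition makes retention work high-ROI
    if p.revenue_stable:
        score -= 0.05            # stable earners stay in maintenance mode
    return max(score, 0.0)

products = [
    Product("A", organic_growth=0.08, revenue_stable=False, months_without_traction=1),
    Product("B", organic_growth=0.01, revenue_stable=True, months_without_traction=0),
    Product("C", organic_growth=0.00, revenue_stable=False, months_without_traction=7),
]
active = max(products, key=effort_signal)
print(active.name)  # A goes into "active mode"; B and C run on maintenance
```

The point of the sketch is the shape of the decision, not the numbers: growth with a fixable retention issue wins attention, stable revenue doesn't, and a stalled product scores zero.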
What "maintenance" means
A product in maintenance mode still works. Users still use it. Revenue still comes in. We just don't ship new features. The only work that happens is: fix critical bugs (same day), respond to support (within 24h), and monitor metrics (daily, automated). This takes roughly 30 minutes per product per day.
The surprising thing: some products improve their metrics during maintenance mode. When you stop shipping features and let users settle into the current product, you see the natural retention curve without interference. Sometimes the best thing you can do for a product is stop touching it and let the data stabilize.
The shared infrastructure advantage
Running 4 products created pressure to standardize. We use the same tech stack, the same deployment pipeline, the same analytics setup, the same payment integration across all products. This means a fix in one product's payment flow benefits all of them. A new analytics event pattern can be replicated across products in minutes.
More importantly, patterns we discover in one product often apply to others. The 3-interaction rule we found in one product's freemium conversion applied directly to another product with minimal adaptation. The pricing experiment framework we built is product-agnostic. Every learning compounds across the portfolio.
When to kill a product
We have a simple rule: if a product hasn't shown organic traction signal within 6 months of launch, we stop investing in it. "Organic traction signal" means users who arrive without paid acquisition and retain without intervention. If you have to pay for every user and nudge them to come back, you don't have product-market fit — you have a marketing-subsidized usage pattern that will collapse the moment you stop spending.
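The kill rule can be expressed as a simple predicate. A sketch under assumed thresholds — the 30% organic share and 20% week-4 retention cutoffs are illustrative, not numbers from the essay:

```python
def has_organic_traction(organic_signups: int, paid_signups: int,
                         organic_week4_retention: float,
                         min_organic_share: float = 0.3,
                         min_retention: float = 0.2) -> bool:
    """Kill-rule check: users must arrive without paid acquisition
    and retain without intervention. Thresholds are illustrative."""
    total = organic_signups + paid_signups
    if total == 0:
        return False
    organic_share = organic_signups / total
    return organic_share >= min_organic_share and organic_week4_retention >= min_retention

print(has_organic_traction(400, 600, 0.25))  # True: real organic pull
print(has_organic_traction(50, 950, 0.25))   # False: marketing-subsidized usage
```

Whatever the exact thresholds, the check has two parts for a reason: paid signups can mask the absence of pull, and retention without nudging is the only evidence that the product itself is doing the work.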
Killing a product is easier when you have others. There's no existential pressure to "make this one work." You can be honest about what the data is telling you, because the team's survival doesn't depend on this single bet.
The uncomfortable truth about headcount
Every person you add to a team adds communication overhead. A team of n people has n(n-1)/2 pairwise channels: with 3 people, there are 3 communication channels; with 6 people, 15; with 15 people, 105. The coordination cost grows quadratically while the output grows linearly at best.
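The channel counts follow directly from the handshake formula, n choose 2:

```python
def channels(n: int) -> int:
    """Pairwise communication channels in a team of n people: n*(n-1)/2."""
    return n * (n - 1) // 2

for n in (3, 6, 15):
    print(n, channels(n))  # 3 -> 3, 6 -> 15, 15 -> 105
```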
At 3 people, every person is in every conversation. No one is blocked waiting for information. No one is in a meeting that doesn't involve them. No one is doing "alignment work" — the work of making sure everyone agrees on what to do. You just do it.
We will add people eventually. But only when a specific product has proven product-market fit and the bottleneck is clearly "we need more hands doing this specific thing," not "we need more people in general." Headcount is the last lever we pull, not the first.
The multi-product model with a tiny team is not for everyone. It requires comfort with ambiguity, willingness to let products sit untouched for weeks, and the discipline to kill bets that aren't working. But for early-stage consumer products where the base rate of success is low, it's the highest expected-value strategy we've found.