The MVP gospel goes something like this: build the smallest thing possible, ship it fast, learn from real users, iterate. Don’t over-invest before validation. Don’t pile assumptions on top of assumptions. Conserve resources until you know you’re building something people want. This is good advice. I’ve given it myself, many times. But there’s a failure mode that doesn’t get discussed enough: the MVP that’s too small.
The product so minimal it can’t actually test the hypothesis it was built to test. The version so incomplete that the data it generates is useless—or worse, actively misleading.
Sometimes minimum viable isn’t minimum valuable. And when that happens, you’re not de-risking. You’re wasting time.
Why We Build MVPs
Let’s start with why the MVP concept exists in the first place.
The traditional approach to building products was: plan extensively, build completely, then launch. This worked poorly for software because the cost of being wrong was enormous. By the time you discovered users didn’t want what you built, you’d already spent months or years building it.
The MVP approach inverts this. Instead of building then learning, you try to learn while building. Ship something small, see how people respond, adjust based on reality rather than assumptions.
The goals of MVP development
- De-risk: Don't invest heavily in something that might not work
- Learn: Get real user feedback as early as possible
- Conserve resources: Save your time and money for the version that works
- Avoid assumption stacking: Test core hypotheses before building on top of them
All of this is sound. The problem is in the execution.
The Too-Small Trap
Here’s what I see happen:
A founder takes the MVP advice to heart. They scope ruthlessly. They cut features until only the absolute core remains. They build the smallest possible version and ship it.
Then they get data that’s impossible to interpret.
Users don’t engage—but is that because the product doesn’t solve a real problem, or because the product is too incomplete to be useful? People sign up but don’t convert—but is that because the value proposition is wrong, or because the stripped-down version doesn’t deliver enough value to justify switching? Early adopters churn quickly—but is that because the idea doesn’t work, or because the MVP is missing the features that would make it sticky?
⚠ The Core Problem
The MVP was supposed to generate signal. Instead it generated noise.
Minimum Viable vs. Minimum Valuable
Here’s the distinction I’ve started making: MVP should stand for minimum valuable product, not just minimum viable product.
Key Takeaway
Viable means the product technically works. It runs. It does something. Valuable means the product actually solves a problem well enough that someone would pay for it—or at least, choose to use it when they have other options. These aren’t the same thing.
You can build something viable that isn’t valuable. It works, but it doesn’t work well enough to matter. It solves part of a problem but leaves the user needing to go elsewhere to finish the job. It functions, but it doesn’t deliver enough benefit to justify the friction of adoption.
When you’re validating a product idea, you’re not asking “can I build something?” You’re asking “will people choose this over their alternatives?” And to answer that question, you need something valuable enough to be a real choice.
The Completeness Problem
Some problems can’t be solved halfway.
If you’re building a project management tool, a version that handles task creation but not task assignment isn’t testing whether people want project management. It’s testing whether people want a to-do list. Those are different products with different competitive dynamics.
If you’re building a marketplace, a version with sellers but no buyers (or vice versa) isn’t testing whether your marketplace concept works. It’s testing whether people will sign up for something that doesn’t work yet. The answer to that question doesn’t tell you much.
If you’re building a workflow automation tool, a version that automates one step but requires manual work for everything else isn’t testing whether your automation approach is valuable. It’s testing whether people will adopt a partial solution—which they usually won’t, because partial solutions create more complexity than they remove.
The MVP concept works beautifully when the core value proposition can be isolated and tested independently. It breaks down when the value only emerges from a complete-enough solution.
Switching Costs Are Real
This matters especially in markets where you’re competing with existing solutions.
If someone is already using something—even something they’re not happy with—they face real costs to switch: learning curve, data migration, workflow disruption, retraining teams. These costs are paid upfront, and the benefits of the new solution are realized later.
For someone to make that switch, the new solution needs to be compelling enough to justify those costs. Not marginally better. Significantly better.
A minimal MVP often can’t clear this bar. It might be better in one dimension but worse in ten others. It might solve the core problem but miss all the surrounding features that make the current solution tolerable. It might be promising but not proven.
Users in this situation will often say “interesting, let me know when it’s ready”—and never come back. You’ll record this as a validation failure, but what you actually got was an MVP too incomplete to test the real hypothesis.
What Signal Are You Actually After?
Before scoping your MVP, get clear on what you’re actually trying to learn.
“Do people want this?” is too vague. What does “this” mean? The core problem you’re solving? The specific approach you’re taking? The feature set you’re planning?
Different questions require different levels of completeness to answer.
Matching scope to learning goals
- Testing if a problem exists → You might not need a product at all. Conversations, landing pages, and manual service delivery can all generate signal.
- Testing if your solution approach works → You need enough implemented to actually test the approach, not just the concept.
- Testing if people will pay → You need enough value delivered to justify payment. People don't pay for potential.
The right MVP scope depends on what you’re trying to learn. “As small as possible” isn’t the right answer. “Small enough to learn what I need to learn” is.
The Alternative: Scope to the Hypothesis
Instead of asking “what’s the minimum we can build?”, ask “what’s the minimum that tests our actual hypothesis?”
Sometimes those are the same. Often they’re not.
Your hypothesis isn’t “can we build software?” Your hypothesis is something like:
- “Professionals in X industry will pay for a tool that does Y because it saves them Z hours per week”
- “Consumers will switch from the incumbent product because our approach to [specific thing] is 10x better”
- “Small businesses need [specific capability] but current solutions are too expensive or complex”
Now look at that hypothesis and ask: what would I need to show someone to actually test this? What would they need to experience to have a meaningful reaction?
When Small Is Actually Right
ℹ Small Still Has Its Place
There are plenty of situations where the lean MVP approach is exactly right: testing a novel problem, isolated value propositions, technical feasibility questions, or genuine resource constraints. The point isn’t that MVPs should be big—it’s that MVP size should be determined by learning goals, not by a blanket directive to minimize.
Getting the Scope Right
Scoping to learn, not to minimize
1. Define the hypothesis precisely. What exactly are you trying to learn? Write it down in specific, testable terms.
2. Identify what would falsify it. What result would tell you the hypothesis is wrong? This clarifies what data you actually need.
3. Work backward to requirements. What would users need to experience to generate that data? What features are essential for that experience?
4. Consider the competitive context. If you're entering a market with existing solutions, what do you need to deliver to be a real alternative?
5. Be honest about completeness. Does your scoped version actually test your hypothesis, or does it test something easier?
The result might be a smaller product than you originally imagined. It might also be bigger than pure MVP thinking would suggest. Either is fine, as long as it’s calibrated to learning something meaningful.
Minimum Valuable Product
Build the smallest thing that can deliver real value to real users. Not the smallest thing that can run. Not the smallest thing that demonstrates a concept. The smallest thing that someone would actually choose to use.
That’s your MVP—and it might be more than you expected.
The lean startup movement gave us valuable tools for building in uncertainty. But like any framework, MVP thinking can be misapplied. The goal isn’t small for small’s sake—it’s learning efficiently. Sometimes that means building less. Sometimes it means building more. The question isn’t “how minimal can we go?” but “what do we actually need to learn, and what will it take to learn it?”
Thinking Through Your MVP Scope?
Getting the scope right is one of the hardest parts of building a new product. Let's talk through what you're trying to learn and what it would take to learn it.