The MVP That Actually Works
Most MVPs fail not because they are too small, but because they validate the wrong thing. A working MVP is not the smallest possible version of your idea. It is the smallest version that can answer the one question that matters.
The term MVP has been stretched to cover everything from a landing page with an email signup to a fully functional product with paying customers. Neither extreme is useful. What matters is what you are trying to learn and whether the thing you built can actually teach you that.
Define the hypothesis first
Before you can build a useful MVP, you need to know what it is trying to prove. Not “does this product work” but something specific: “will restaurant owners pay for a booking management tool if it reduces no-shows by 20%?” or “do freelancers need a dedicated invoicing tool or do they use spreadsheets because spreadsheets are actually good enough?”
That specific question shapes what you build. If the hypothesis is about willingness to pay, you need a checkout flow. If the hypothesis is about daily usage, you need the core workflow, not the billing. The MVP is only minimal relative to answering that specific question.
What makes an MVP fail
The most common failure mode is building something that cannot produce a clear answer. You launch, you get some usage, and you cannot tell whether the product worked or not. This usually happens when the MVP covers too many things at once, so you cannot isolate which part is working.
The second failure mode is building a version so stripped down that no one takes it seriously. If the product looks unfinished or behaves unreliably, users do not give you real feedback. They disengage, and you learn nothing about the product itself, only that people do not want half-built software.
The third failure mode is building a version that your most charitable friends love but whose reception tells you nothing about real user behavior. An MVP tested only with people who want you to succeed is not a test at all.
Ship ugly where it does not matter
Not everything in a product deserves equal attention. For the Pulse demo, the settings page is functional but sparse. The onboarding flow has one step where a real product would have three. The email notifications are plain text. None of this matters, because the hypothesis we were testing was about the dashboard itself.
When you identify the part of the product that the hypothesis depends on, spend the attention there. Make that part work well and look credible. Let everything else be a placeholder.
Users are remarkably good at ignoring roughness in areas they do not care about, and remarkably sensitive to roughness in areas they do. A user trying to understand their data will forgive a rough settings page. They will not forgive a dashboard that loads slowly or shows data they cannot interpret.
Polish where it does
There are two areas where polish is not optional, even in an MVP: trust moments and core value moments.
Trust moments are where users decide whether to commit: signing up, entering a payment method, sharing personal information. If these feel unpolished or insecure, users stop. The product never gets a chance to prove its value.
Core value moments are the specific interactions that deliver the thing the product is supposed to deliver. For a data product, that is probably a chart or a table. For a booking tool, it is the confirmation screen. For a writing tool, it is the editor. These moments need to feel right. Not perfect, but credible and functional.
The right users for an MVP
An MVP should be tested with people who have the problem you are solving, not with people who are interested in your project. The feedback from those two groups is completely different.
People who have the problem will tell you whether it feels like a solution. People interested in your project will tell you what features they would add. The first feedback is useful. The second will expand your scope indefinitely and teach you nothing about whether the core works.
If you cannot find users with the actual problem during the MVP phase, that is important information. It means distribution is harder than expected, or the problem is less widespread than assumed. Either way, you need to know this before building more.
What a successful MVP looks like
A successful MVP produces a clear answer to its hypothesis and leaves you with a defined next step. It does not need many users, revenue, or positive press. It needs to have answered the question it was built to answer.
The best outcome is: the hypothesis was confirmed, you know what version two needs to be, and you have early users who are already asking for it. The second-best outcome is: the hypothesis was wrong in a specific way, and you now know what to build instead. Both are wins. The only loss is building something that cannot produce either answer.