The story I hear more often than any other goes like this. A founder comes to me with an MVP that’s about 80% done. It mostly works. It mostly looks like what they asked for. But there are bugs that the developer can’t seem to fix — or won’t fix. There are features that were in the original scope that somehow never got built, or that the developer refused to build without additional payment. The timeline has slipped. Communication has degraded.
And at some point, the developer either ghosted entirely, started demanding significantly more money to finish the project, or became suddenly and conveniently unavailable.
What the founder is left with is something that looks close to done. Close enough that it feels like all they need is someone to carry it across the finish line. Just fix these bugs. Just add these missing features. Just get it working the way it was supposed to.
I have to tell them something they don’t want to hear: what they think is 80% done is almost certainly not what they think it is.
The 80% Illusion
If you’ve never built software before, there’s something you wouldn’t have any reason to know: the first 80% is the easy part. Getting a working demo, screens that look right, basic functionality that runs in a browser — that’s the straightforward work. The hard part is everything underneath. Error handling. Security. Performance under load. Edge cases. The things that make software actually work in the real world instead of just in a demo.
There’s a running joke in the vibe coding communities right now: it’s 20% vibe coding and 80% vibe debugging. And that joke gets at something real. Whether you’re using AI tools, offshore developers, or any other shortcut to get software built without deeply understanding how it works — you still have to understand how software works to produce software that works. You have to understand the problem you’re solving. You have to understand how to diagnose and fix bugs. You have to understand how the system performs under real conditions and how it delivers the features your users actually need.
⚠ The Math Gets Ugly
What looks like an application that’s 80% complete and just needs a little push to get across the finish line is, in most of the cases I’ve seen, an application that’s just getting started. The demo layer is done. The foundation — the part that makes it reliable, secure, maintainable, and ready for real users — was never built. The cost to fix this isn’t the remaining 20%. It’s often multiples of what you’ve already spent.
Because now someone has to untangle decisions they didn’t make, in a codebase they’ve never seen, with no documentation explaining why anything was built the way it was.
The Pitch That’s Hard to Pass Up
I understand why founders go offshore in the first place. The numbers are seductive. Someone quotes you $40,000 domestically, and an offshore team says they can do it for $8,000. The portfolios look great. The communication starts strong. The Upwork reviews are five stars across the board.
And here’s the thing — I’m not going to tell you it never works. It can work. But the conditions under which it works are very specific, and most first-time founders don’t meet them. More on that in a minute.
What happens far more often is the pattern I described above. You get something that looks like what you asked for, and you don’t realize what’s wrong underneath until it’s too late.
What “Done” Actually Looks Like
When I look under the hood of these “finished” MVPs, I find the same problems over and over:
What I Find Inside Most Offshore MVPs
- Code reused from other projects — sometimes entire modules copy-pasted, complete with variable names from a different client's application
- No documentation — not a README, not inline comments, not a single explanation of how anything works or why
- No logging or monitoring — when something breaks, no one knows. When something fails silently, no one knows that either
- No security hygiene — no input validation, passwords not hashed, API keys committed directly to the repository
- Hardcoded credentials — database passwords and third-party service keys embedded in source code, sometimes shared across clients
- No test suite — not a single automated test. No way to verify that changes don't break existing functionality
- Architecture that works at demo scale — handles 10 users fine, falls apart at 100
None of these are exotic problems. Every experienced developer would put them on their list of basics — the foundation you lay before writing a single line of business logic. But they’re invisible to someone who doesn’t know what to look for. And that’s the whole problem.
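To make a couple of those basics concrete: salting and hashing passwords, and keeping secrets out of source code, amount to a handful of lines of standard-library Python. This is a minimal sketch, not a drop-in implementation — the function names and the `DATABASE_URL` variable are illustrative, and a real codebase would typically lean on a maintained library or framework for this:

```python
import hashlib
import hmac
import os
import secrets

def hash_password(password: str, iterations: int = 600_000) -> str:
    """Derive a salted PBKDF2 hash; store this string, never the raw password."""
    salt = secrets.token_hex(16)
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt), iterations
    ).hex()
    return f"pbkdf2_sha256${iterations}${salt}${digest}"

def verify_password(password: str, stored: str) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    _, iterations, salt, digest = stored.split("$")
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt), int(iterations)
    ).hex()
    return hmac.compare_digest(candidate, digest)

# Credentials come from the environment (or a secrets manager) — never the repo.
DATABASE_URL = os.environ.get("DATABASE_URL", "")
```

None of this is exotic, which is exactly the point: when an audit turns up plain-text passwords and keys committed to the repository, it isn’t because the fix was hard.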
The Black Box Problem
If you don’t understand software development, you can’t evaluate what you received. The knowledge gap that made offshore development seem like the smart financial play is the same gap that prevents you from recognizing what went wrong.
⚠ Questions You Can't Answer About Your Own App
Does your application have backdoors? Are your users’ passwords stored as plain text? Is someone else’s API key sitting in your codebase? Are your server credentials committed to a public repository? Is your database exposed to the internet without authentication? How would you know?
Someone who builds software professionally doesn’t worry about these things because they handle them instinctively. Security, documentation, logging, architecture — these aren’t premium features. They’re where you start. They’re the difference between a professional codebase and a prototype that happens to run.
What the Internet Does to Unprotected Software
Here’s something people don’t know about unless they’ve done it: there is an incredible amount of automated traffic on the internet, and your software is going to be exposed to every bit of it the moment it goes live.
Your application might work beautifully in a sandbox. It might be flawless on a staging server. But putting something on the real internet is a fundamentally different experience. Within hours of deploying a new application, bots are probing it. Automated scanners are looking for common vulnerabilities. Scripts are trying default credentials. This isn’t a hypothetical — it’s constant, it’s relentless, and it happens to every application regardless of how small or unknown it is.
And here’s the double-edged sword of actually doing the hard work of building a business: the more successful you are at promotion — backlinks, content marketing, social media, all the things you’re supposed to do — the more visible your application becomes to everyone and everything. That includes competitors, scrapers, and bad actors.
Especially in this age of AI, cloning software has never been easier. People will sign up for your free trial specifically to reverse-engineer what you’re doing. Bots will hit your API endpoints thousands of times looking for data they can extract. Automated tools will test your authentication system for weaknesses — not because someone is personally targeting you, but because these tools run against millions of targets simultaneously.
This isn’t something to be paranoid about. It’s just the state of the world. And it’s one of the clearest reasons why having a professional who knows how to secure a system matters. Someone who knows how to read logs. Who builds usage monitoring into the application from the start. Who implements session limits, rate limiting, sensible firewall rules — the basics that keep your application running for the people it’s meant to serve instead of buckling under automated abuse.
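For a sense of scale, rate limiting is not enterprise machinery. A per-client token bucket — one common way to implement it — fits in a few lines of Python. This is an in-memory toy under simplifying assumptions (single process, no persistence); production systems usually back this with a shared store and sit it behind the web server:

```python
import time
from collections import defaultdict

class TokenBucket:
    """Allow bursts up to `capacity` requests, refilling at `rate` tokens/sec."""

    def __init__(self, capacity: float = 10.0, rate: float = 1.0):
        self.capacity = capacity
        self.rate = rate
        # Per-client state: (tokens remaining, timestamp of last update).
        self.buckets = defaultdict(lambda: (capacity, time.monotonic()))

    def allow(self, client_id: str) -> bool:
        tokens, last = self.buckets[client_id]
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        if tokens < 1:
            self.buckets[client_id] = (tokens, now)
            return False  # Over the limit: respond 429 and log the client.
        self.buckets[client_id] = (tokens - 1, now)
        return True
```

In a request handler, a call like `limiter.allow(request_ip)` before any real work is the difference between an application that sheds automated abuse and one that buckles under it.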
More than anything, someone who is just paying attention to what’s happening. Most of the worst outcomes I’ve seen weren’t sophisticated attacks. They were simple, automated probes that found an open door because nobody was watching.
I learned this the hard way. In 1999, when I started building websites, I thought: why am I paying for web hosting when I could stand up my own server? It seemed brilliant. Buy a server for cheap, put as many websites on it as I wanted, keep all the money. Except I deployed that server without a firewall. Within a week, someone had deleted my login executable — meaning I could no longer log into my own server. Period. And I didn’t have backups. I didn’t have the slightest clue what I was doing, and being exposed to the internet punished me for it.
The same lesson applies now, only everything is more automated. And frankly, this is the dark side of AI: the same tools that make it easier to build your MVP also make it easier for someone to build the tool that attacks your MVP. It’s the same technology, pointed in a different direction.
Exposure to the internet without doing your due diligence on security and hardening will punish you. Not might. Will. Every single time.
What Are You Actually Measuring?
Here’s the part that doesn’t get talked about enough.
An MVP is a measurement instrument. If your instrument is broken, your measurements are worthless.
You’re deploying an MVP to answer specific questions: Is there a market for this? Does my approach resonate? Will people pay? Is my pricing right? Can I acquire customers at a cost that makes the business viable?
When users bounce because the application is slow, did they reject your idea or your implementation? When the checkout flow crashes for 30% of mobile users, did you learn that nobody wants to pay — or that the payment integration wasn’t tested on Safari? When the app goes down for two days and you lose your early adopters, what exactly did that teach you about product-market fit?
Bad code doesn’t just cost money to fix. It corrupts your entire experiment. You can’t distinguish signal from noise when your instrument is generating its own noise. And you can’t rerun the experiment easily — those early users who had a terrible experience aren’t coming back to try version two.
I wrote about this dynamic in What You Actually Learn from Launching — the gap between what you expect launch to teach you and what it actually teaches you. A poorly built MVP widens that gap into a chasm. And if your MVP is too small to generate real signal on top of being poorly built, you’ve spent months learning nothing at all.
The Rebuild Trap
Here’s where the real cost shows up. You’ve spent months. You’ve burned through your initial development budget. You’ve tried to launch, hit problems, gone back to the offshore team for fixes — if they’re still responding. The fixes are slow. They introduce new bugs. Communication degrades. You start to realize something is fundamentally wrong.
Finally, you reach out to domestic developers. You ask them to take a look. The answer is almost always the same:
“It would cost more to understand and fix this than to rebuild it from scratch.”
I know how that sounds. When someone comes to me with an application they’ve spent months and thousands of dollars on, and I tell them I think they need to start over — I understand the reaction. It looks like I’m trying to rip them off. Or it looks like I’m being precious about my own code, my own way of doing things. Like I’m too proud to work with something someone else built.
That’s not what’s happening.
Here’s the reality that anyone who builds software for a living understands: no developer can tell you from a Zoom call whether your existing codebase is salvageable. It’s not possible. You’d need to go through it systematically — read the code, trace the architecture, check for security issues, understand the data model, test the assumptions. That alone could take weeks. And at the end of that assessment, the answer might still be “this is fundamentally flawed.”
That’s the scenario nobody wants to think about. You could spend weeks and significant money having a qualified developer audit, debug, document, and fix an existing codebase — only to discover that the core architecture can’t actually deliver what you need it to. That the database design doesn’t support the queries your application requires at scale. That the authentication system was built wrong in a way that can’t be patched, only replaced. That you have to start over anyway, except now you’ve spent even more time and money getting to the same conclusion.
No developer wants to inherit someone else’s technical debt. That’s not elitism — it’s self-preservation. Debugging code you didn’t write, with no documentation, no tests, and no one available to explain the decisions — that’s the hardest, most frustrating work in software development. The developer isn’t being difficult when they suggest starting fresh. They’re being honest about the fastest path to something that actually works.
I’ve written about what MVPs actually cost and the tradeoffs between different development paths. The offshore path looks cheapest on paper. But when you add the rebuild — and the months spent discovering you need one — it’s often the most expensive option by a wide margin.
When Offshore Actually Works
I want to be fair here. Offshore development can work. I’ve seen it work well. But it works under very specific conditions:
When Offshore Development Succeeds
- You understand software development well enough to review code, enforce standards, and catch problems early
- You're managing the engagement intensely — daily standups, code review on every pull request, hands-on oversight
- You own the infrastructure: the repository, the CI/CD pipeline, the cloud accounts, the deployment credentials
- You have quality gates: automated tests, security scanning, performance benchmarks, and someone qualified to verify them
- You've scoped the work precisely — detailed technical specifications with acceptance criteria, not 'build my app'
In other words: offshore development works when you could almost build it yourself. The offshore team is extra hands, not a replacement for technical leadership.
If you’re going offshore because you don’t have technical capability on your side, you’re going offshore for exactly the wrong reason.
The Toxic Cocktail
Here’s what makes this pattern so damaging: offshore development combines the lowest possible cost of entry with the highest possible penalty for ignorance.
You don’t know what you don’t know. Everything seems fine right up until it isn’t. The demo works. The screenshots look right. The team sends progress updates with green checkmarks. You feel like things are moving.
And then you try to launch. Or you try to hire a US developer to add a feature. Or you get your first security audit. And the floor falls out.
This isn’t even about bad actors. Most offshore teams are doing their honest best within the constraints of the engagement. The problem is structural. When a client asks for a car and doesn’t specify brakes, some shops include brakes because they’re professionals who take ownership of the outcome. Others build exactly what was specified — because the scope is the scope, and adding things outside it costs money they weren’t paid.
Both responses are rational. But if you’re the client who didn’t think to specify brakes, only one of them ends well for you.
This is exactly why I talk about not needing a technical cofounder — but absolutely needing someone technical in your corner. Not someone who works for the offshore team. Someone whose job is to protect your interests and make sure what gets built is something you can actually use, actually measure with, and actually grow from.
What a Real Partnership Looks Like
What I offer is intentionally different. I’m not competing on price. If lowest cost is your primary criterion, I’m not the right fit, and I’ll tell you that upfront.
What I bring is partnership. I’ve built seven ventures. I’ve been the founder figuring out what to build, who to build it with, and what it should cost — except I’m also the one who builds it. I think like a founder because I am one.
That means:
- Security, documentation, and architecture come first — not as afterthoughts, not as phase-two enhancements, but as the foundation everything else is built on
- Every decision gets explained — you understand what you have and why it was built that way, in language that makes sense to you
- Everything is documented — not because it’s busywork, but because you need to be able to hand this codebase to any qualified developer and have them understand it in hours, not weeks
- You own everything — the code, the infrastructure, the credentials, the documentation. From day one. No exceptions.
- I’m invested in your experiment succeeding — because building things that work is my reputation, and my reputation is my business
Your MVP should give you real answers to real questions. It should be a tool you trust. When you validate your idea, the data should mean something.
The Real Question
Key Takeaway
Before you engage anyone to build your MVP — offshore, domestic, freelancer, agency — ask yourself this: do I understand enough about what I’m buying to know if I got what I paid for? If the answer is no, that’s not a reason to go with the cheapest option and hope for the best. That’s the strongest possible argument for finding someone who will make sure you understand every piece of what gets built.
This isn’t about securing the lowest bid. It’s about building something real. Something you understand. Something you can measure with, grow from, and build on.
If your MVP experiment is a success, the last thing you should be doing 12 months from now is starting over.
Skip the Rebuild. Build It Right the First Time.
Let's talk about what you're trying to validate and how to build something you actually understand — and can actually grow.
Founder, 1123Interactive
Seven bootstrapped ventures. A consumer electronics company scaled to $5M. Production SaaS shipped in weeks. I've sat where you're sitting—figuring out what to build, who to build it with, and what it should cost.
Continue Reading
What You Actually Learn from Launching
Product-market fit gets all the attention, but launching teaches you far more: how people actually use your product and why your assumptions were wrong.
The DIY Trade-off: When Building More Makes Sense
Standard MVP advice assumes you're paying developers. When you're building yourself, the calculus changes—and sometimes building more is the smarter move.
When Your MVP Is Too Small
The standard MVP advice can backfire. Sometimes minimum viable isn't minimum valuable—and a too-small product generates noise, not signal.