What Makes a Database Migration Succeed (Or Fail)
The 70% Number Is Real — But It's Not "Failure" That Kills Most Migrations
Every consulting deck cites the same statistic: 70–80% of database migrations fail or over-run. After running 500+ migrations across Oracle, MySQL, SQL Server, and MongoDB into PostgreSQL, I can confirm the number is roughly right — and that "fail" almost never means what executives think it means.
Migrations rarely blow up loudly. They limp. The cutover slips a quarter. The PL/SQL conversion gets re-scoped. A "non-critical" module is left running on the legacy database "temporarily" — and three years later it's still there. The original ROI projection is now an awkward conversation. That is what 70% looks like in practice.
The patterns that separate the projects that ship clean from the ones that limp are predictable, and they're not what most teams expect. Here's what we've seen.
The Five Failure Modes We See Repeatedly
These show up in roughly the same order in roughly every project. The fix is almost always cheaper than the failure — but only if you do it before kickoff.
| Failure mode | Root cause | What actually works |
|---|---|---|
| Underestimating PL/SQL complexity | 80% of the code converts cleanly — teams plan for 100%. The remaining 20% is packages, autonomous transactions, %ROWTYPE, and bulk collect. | Run a full inventory before quoting timeline. Multiply estimated PL/SQL effort by 3× until proven otherwise. |
| Skipping validation until cutover week | Row counts match early, so validation feels solved. Numeric precision loss, NULL semantics, and timezone shifts only surface during user testing. | Multi-layer validation from day one: row counts, checksums, statistical distributions, constraint checks. Not optional. |
| No executive air cover | The first delay arrives. The sponsor is mid-org. The next budget cycle starts. The project gets re-scoped or quietly paused. | A VP-level owner who has publicly committed to the migration and will defend a 3-week slip when it lands. |
| Schema drift between assessment and cutover | Source schema changes during the 6-month project. The migration plan was built against a snapshot. | Lock source DDL changes from kick-off, OR run continuous diff-detection. Treat any drift as a P1 blocker. |
| Row counts as final validation | COUNT(*) matches on both sides. Migration declared complete. Two months later, finance reports come back wrong. | No migration is signed off without per-row checksum reconciliation across every column of every table. |
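The checksum reconciliation in the last row is simpler than it sounds. The sketch below is illustrative, not our production implementation: it uses sqlite3 as a stand-in for both databases and MD5 row hashes keyed by primary key, and the `invoices` table in the usage example is hypothetical. A real migration would first canonicalise types (numeric precision, date formats, character sets) on both sides before hashing.

```python
import hashlib
import sqlite3

def table_checksums(conn, table, key, columns):
    """Return {primary_key: md5} for every row, hashing all columns in a fixed order."""
    col_list = ", ".join([key] + columns)
    sums = {}
    for row in conn.execute(f"SELECT {col_list} FROM {table} ORDER BY {key}"):
        pk, values = row[0], row[1:]
        # repr() gives a stable textual form for the demo; real pipelines
        # must canonicalise each type explicitly before hashing.
        sums[pk] = hashlib.md5(repr(values).encode("utf-8")).hexdigest()
    return sums

def reconcile(source, target, table, key, columns):
    """Return primary keys whose row content differs (or is missing) on the target."""
    src = table_checksums(source, table, key, columns)
    tgt = table_checksums(target, table, key, columns)
    mismatched = {pk for pk in src if tgt.get(pk) != src[pk]}
    return mismatched | (set(src) - set(tgt))
```

The point of the sketch: a target row with `99.0` where the source has `99.99` passes `COUNT(*)` on both sides but fails the checksum diff immediately.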
The Five Success Predictors
Mirror image of the failure list. None of these is a feature you can buy — they're decisions made before code is touched.
- A real assessment with risk levels — not a vendor demo against a clean schema. The assessment must run against the actual production DDL, surface partitioned tables, XMLTYPE columns, circular FKs, and PL/SQL packages, and assign a difficulty to each.
- Cutover plan written before code is touched — including rollback criteria. If you can't describe how you'll back out in writing on day one, you will not back out cleanly on cutover night.
- CDC bridge instead of a downtime window — run source and target in sync for weeks. Cutover becomes a DNS flip, not a 36-hour outage. Even if your business technically allows a downtime window, CDC dramatically lowers the consequence of any last-minute issue.
- Per-row checksum validation, not COUNT(*) — covered in depth in our data validation article. A migration that passes `COUNT(*)` can still have every numeric column silently truncated.
- A migration owner with VP-level reach — someone who can defend a 3-week slip in front of finance and product. The technical work is solvable; the political work is what kills timelines.
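The statistical-distribution layer of validation can start as something very small: a per-column fingerprint (row count, NULL count, min, max, sum) compared between source and target. A minimal sketch, again using sqlite3 purely for illustration; column and table names are assumptions for the demo:

```python
import sqlite3

def column_profile(conn, table, column):
    """One statistical fingerprint per column: rows, NULLs, min, max, sum."""
    row = conn.execute(
        f"SELECT COUNT(*), SUM(CASE WHEN {column} IS NULL THEN 1 ELSE 0 END), "
        f"MIN({column}), MAX({column}), SUM({column}) FROM {table}"
    ).fetchone()
    return {"rows": row[0], "nulls": row[1], "min": row[2], "max": row[3], "sum": row[4]}

def distributions_match(source, target, table, columns):
    """True only if every column's fingerprint agrees between source and target."""
    return all(
        column_profile(source, table, c) == column_profile(target, table, c)
        for c in columns
    )
```

This layer is cheap (one aggregate query per column) and catches exactly the silent-truncation and NULL-drift classes of error that row counts miss.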
Where the Time Actually Goes
The gap between planned and actual time spent per phase is where projects lose their schedule. Below is the average across the migrations we've run end-to-end — your numbers will vary, but the shape is consistent.
| Phase | Typically planned | Typically actual | Pattern |
|---|---|---|---|
| Assessment | 5% | 2% | Skipped — blows up later |
| Schema convert | 15% | 8% | Easier than expected |
| Data load | 25% | 12% | Parallel copy is fast |
| PL/SQL convert | 20% | 38% | Always underestimated |
| Validation | 15% | 10% | Always rushed |
| Cutover plan | 10% | 20% | Always over-engineered |
| Post-cutover | 10% | 10% | On target |
The teams that ship on time spend more on assessment up front. The teams that slip spend more on cutover planning at the end — usually because the validation phase surfaces problems they thought were already solved.
The "Looks Done But Isn't" Trap
Post-cutover surprises are where reputations are made or broken. The data is loaded, the application connects, the smoke tests pass. Then over the following weeks:
- Sequence gaps — new inserts collide with existing primary keys because the sequence wasn't advanced past the max source value.
- FK orphan rows — parent and child loaded in parallel, one finished first, validation ran before the second completed.
- NULL semantics drift — Oracle treats `''` as NULL; PostgreSQL doesn't. Reports that filtered `WHERE name IS NOT NULL` now return rows they didn't before.
- NLS_DATE_FORMAT dependencies — application code that relied on Oracle's session-level date formatting suddenly returns ISO strings the UI doesn't parse.
- Character set drift — WE8MSWIN1252 source loaded into UTF-8 target — works for ASCII, breaks on the first customer name with an umlaut.
Each one is fixable in minutes once identified — and each one looks identical to a passing cutover until a real user hits it. Multi-layer validation (row counts plus checksums plus statistical distribution plus constraint verification) catches all of these before users do.
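The sequence-gap fix, for example, is one `setval` per sequence on the PostgreSQL side. A minimal sketch that generates those statements from a list of (table, primary key, sequence) triples; the names in the usage example are illustrative, and executing the statements against a live connection is left out:

```python
def sequence_fix_sql(tables):
    """Build one setval statement per (table, pk, sequence) so that new inserts
    start above the highest migrated key instead of colliding with it."""
    statements = []
    for table, pk, sequence in tables:
        statements.append(
            f"SELECT setval('{sequence}', (SELECT COALESCE(MAX({pk}), 1) FROM {table}));"
        )
    return statements
```

Running this for every serial column immediately after the data load, rather than after the first duplicate-key error in production, is the difference between a checklist item and an incident.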
How AI Changes the Math (Honestly)
AI assistance — the kind built into DBMigrateAIPro — genuinely changes the economics of three phases: schema risk assessment, PL/SQL transpilation, and error-pattern recognition during cutover. Schema work that used to take a senior DBA two weeks now takes an afternoon. PL/SQL conversion that ran 60–70% automated three years ago now clears 95%+ on most enterprise codebases.
Where AI doesn't help, and where teams still underestimate the cost:
- Business-logic edge cases — the financial month-end procedure that nobody can describe but everyone depends on. AI can transpile the syntax; it can't tell you whether the result is still correct.
- Application-side coupling — the ORM that emits Oracle-specific SQL, the reporting tool that depends on `CONNECT BY`, the integration that assumes `ROWID`.
- Organisational complexity — AI doesn't sit in the status meeting and defend a slip to the CFO. That work is still human.
If You're 0–6 Months From Migration, Do This Week
- Run a real assessment against your production DDL. Not a sample. The whole thing. Most assessment tools (including ours) are free for this step — there is no reason not to.
- Identify your VP-level owner in writing. If you can't name them today, fix that before scoping.
- Lock source schema changes — or commit to continuous drift detection. Pick one.
- Draft your rollback plan — the actual steps, the named owner, the decision criteria. One page is fine. Zero pages is not.
- Decide CDC or downtime — and budget the answer honestly. If the answer is downtime, your cutover plan needs to be twice as detailed.
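The continuous drift detection in step 3 can start as a scheduled job that fingerprints the source DDL and alerts when the hash changes. A minimal sketch of the idea, using sqlite3's `sqlite_master` catalog as a stand-in for the real source dictionary (on Oracle you would read `DBA_OBJECTS`/`DBA_TAB_COLUMNS` instead, which is an assumption about your setup):

```python
import hashlib
import sqlite3

def schema_fingerprint(conn):
    """Hash the normalised DDL of every table; any drift changes the fingerprint."""
    ddl = conn.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table' ORDER BY name"
    ).fetchall()
    # Collapse whitespace so cosmetic reformatting doesn't trigger a false alarm.
    canonical = "\n".join(" ".join(s[0].split()) for s in ddl if s[0])
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Store the fingerprint taken at kick-off; any later mismatch is the P1 blocker described in the failure table, caught the day it happens instead of at cutover.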
The Honest Verdict
Migration projects don't fail because the technology is hard. They fail because the organisational scaffolding around the technology is brittle. The 30% that succeed cleanly do three things the other 70% don't: they assess against reality, they validate at every layer, and they have a sponsor who will stay in the room when something slips.
Everything else — type mappings, PL/SQL conversion, CDC tuning — is solvable with tooling. The hard part is human, and no AI is going to fix it for you. But if you get the human part right, the technical part is mostly automatable in 2026 — which is exactly the gap our tool is built to close.
Ready to start your assessment?
DBMigrateAIPro is free for Year 1. Run a full risk assessment against your real schema and get a per-table difficulty rating plus a per-PL/SQL-package conversion estimate.
- 🔗 Try the free schema converter: medaxai.com/tools/schema-converter
- 🔗 Download the desktop tool: medaxai.com
- 🔗 Read the migration guide: medaxai.com/blog/oracle-to-postgresql-complete-guide