Oracle → PostgreSQL Migration Guide
A complete step-by-step guide covering schema conversion, bulk data migration, PL/SQL transpilation, and zero-downtime cutover via CDC.
Prerequisites
- Oracle 11g Release 2 or later (11.2.0.4+)
- Oracle LogMiner option enabled on the database
- DBA or SELECT ANY TABLE privileges on the source schema
- Python packages: `pip install oracledb psycopg2-binary loguru`
- PostgreSQL 13+ target database with a dedicated migration user (CREATEDB, CREATEROLE)
- Network connectivity between the DBMigrateAIPro host and both databases
Enable Supplemental Logging
Supplemental logging is required for Oracle LogMiner to capture before/after column values for CDC. Enable it at the database level, then per-table for all tables you want to capture:
```sql
-- Enable at database level (run as SYSDBA)
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Enable per-table (run for each table in the migration)
ALTER TABLE schema_name.table_name ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
```

Verify with `SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;`, which should return `YES`.
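With many tables, it is easier to generate the per-table statements than to type them. A minimal sketch (the function name and schema/table names are illustrative, not part of DBMigrateAIPro):

```python
def supplemental_logging_ddl(schema: str, tables: list[str]) -> list[str]:
    """Build one per-table ALTER statement for supplemental logging."""
    return [
        f"ALTER TABLE {schema}.{t} ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;"
        for t in tables
    ]

for stmt in supplemental_logging_ddl("HR", ["EMPLOYEES", "DEPARTMENTS"]):
    print(stmt)
```

Feed the printed statements to SQL*Plus or any Oracle client running with sufficient privileges.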
Capture Start SCN
Before migrating any data, record the current System Change Number (SCN). This is your CDC start point — changes made to Oracle after this SCN will be replayed into PostgreSQL during cutover.
```sql
SELECT CURRENT_SCN FROM V$DATABASE;
```

Save this number: it is your start_scn in the CDC settings. Archive logs must be retained from this SCN until CDC completes.
Schema Assessment
Click Assess Schema in the Actions tab. The report in reports/assessment_report.html covers:
- XMLTYPE columns: plan the JSONB conversion; export as CLOB first.
- CONNECT BY queries: replace with WITH RECURSIVE CTEs in views/procedures.
- Foreign-key dependencies: enable defer_fks_during_load in settings.
- Migration time estimate: roughly 50k rows/sec per worker at default settings.
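The throughput figure above gives a quick back-of-envelope estimate of bulk-load duration. A small sketch, assuming the documented defaults (4 workers, ~50k rows/sec each):

```python
def estimate_migration_seconds(total_rows: int, workers: int = 4,
                               rows_per_sec_per_worker: int = 50_000) -> float:
    """Estimated bulk-load wall time at the documented default throughput."""
    return total_rows / (workers * rows_per_sec_per_worker)

# 500M rows with 4 workers at ~50k rows/sec each:
print(f"{estimate_migration_seconds(500_000_000) / 3600:.1f} hours")  # → 0.7 hours
```

Treat the result as a lower bound; wide rows, LOBs, and index rebuilds all slow the real load.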
DDL Conversion
DBMigrateAIPro reads Oracle DDL via DBMS_METADATA and converts each object to PostgreSQL syntax. Tables, sequences, indexes, and foreign keys are all handled. Converted DDL scripts are saved to ddl/.
Data Type Mapping
| Oracle Type | PostgreSQL Type |
|---|---|
| NUMBER(p, 0) where p ≤ 9 | INTEGER |
| NUMBER(p, 0) where 10 ≤ p ≤ 18 | BIGINT |
| NUMBER(p, s) | NUMERIC(p, s) |
| NUMBER | NUMERIC |
| VARCHAR2(n) | VARCHAR(n) |
| CHAR(n) | CHAR(n) |
| CLOB | TEXT |
| BLOB | BYTEA |
| DATE | TIMESTAMP |
| TIMESTAMP | TIMESTAMP |
| TIMESTAMP WITH TIME ZONE | TIMESTAMPTZ |
| XMLTYPE | JSONB (or TEXT) |
| RAW(n) | BYTEA |
| FLOAT | DOUBLE PRECISION |
Data Migration
DBMigrateAIPro uses parallel workers (default: 4) to stream data from Oracle and load it into PostgreSQL using the native COPY protocol for maximum throughput.
- FK deferral: foreign keys are dropped before the load and re-created afterward to avoid constraint violations
- Trigger disabling: PostgreSQL triggers on target tables are disabled during the bulk load
- Sequence sync: after the load, each sequence is reset to MAX(id) + 1 of the loaded data
- Progress: live rows/sec and ETA are shown per worker in the Perf tab
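The sequence-sync step amounts to one `setval()` call per sequence. A minimal sketch of generating that SQL (sequence, table, and column names are illustrative):

```python
def sequence_sync_sql(seq: str, table: str, id_col: str = "id") -> str:
    """SQL to reset a PostgreSQL sequence after a bulk load.

    With is_called=false, the next nextval() returns exactly the value set,
    so MAX(id) + 1 is handed out next. COALESCE guards empty tables.
    """
    return (f"SELECT setval('{seq}', "
            f"COALESCE((SELECT MAX({id_col}) FROM {table}), 0) + 1, false);")

print(sequence_sync_sql("orders_id_seq", "orders"))
```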
PL/SQL Conversion
Stored procedures, functions, and package bodies are transpiled to PL/pgSQL. Key transformations applied automatically:
| Oracle PL/SQL | PostgreSQL PL/pgSQL |
|---|---|
| Package body | Functions prefixed schema.pkg_proc_name() |
| ROWNUM | LIMIT n (in subquery context) |
| NVL(a, b) | COALESCE(a, b) |
| SYSDATE | NOW() |
| CONNECT BY | WITH RECURSIVE cte AS (...) |
| BULK COLLECT INTO | SELECT INTO array / RETURN QUERY |
Converted PL/SQL scripts are saved to sql/ with inline comments marking each transformation for review.
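The simplest of these rewrites are purely textual. A toy sketch of the NVL and SYSDATE rules (real transpilers, including presumably DBMigrateAIPro's, parse PL/SQL rather than pattern-match; this only handles flat expressions):

```python
import re

def transpile_expr(plsql: str) -> str:
    """Apply two textual Oracle → PL/pgSQL rewrites from the table above."""
    rules = [
        (r"\bNVL\s*\(", "COALESCE("),   # NVL(a, b) → COALESCE(a, b)
        (r"\bSYSDATE\b", "NOW()"),      # SYSDATE → NOW()
    ]
    out = plsql
    for pattern, repl in rules:
        out = re.sub(pattern, repl, out, flags=re.IGNORECASE)
    return out

print(transpile_expr("SELECT NVL(hire_date, SYSDATE) FROM emp"))
# → SELECT COALESCE(hire_date, NOW()) FROM emp
```

CONNECT BY and BULK COLLECT need structural rewrites and cannot be handled this way, which is why the converted scripts deserve manual review.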
Validation
Validation runs automatically after data load. Open reports/validation_report.html to review:
- Row count comparison: source vs. target for every table
- Optional per-row MD5 checksums: enable with run_checksum_check: true
- Sequence current values: compared and synced
- Index counts: verified in the target schema
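A per-row checksum only works if both sides serialize values identically. A minimal sketch of the idea (the canonical form shown is an assumption, not DBMigrateAIPro's actual scheme):

```python
import hashlib

def row_checksum(row: tuple) -> str:
    """Stable MD5 over a canonical textual form of a row.

    NULL handling, date formatting, and trailing spaces must match on both
    sides, or checksums diverge even when the data is logically equal.
    """
    canonical = "|".join("" if v is None else str(v) for v in row)
    return hashlib.md5(canonical.encode("utf-8")).hexdigest()

# The same logical row hashed on source and target must agree:
assert row_checksum((42, "Alice", None)) == row_checksum((42, "Alice", None))
```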
Start CDC for Cutover
Using the SCN captured in Step 2, start the LogMiner stream. DBMigrateAIPro replays all INSERT / UPDATE / DELETE events that occurred on Oracle while bulk migration was running.
```yaml
# In DBMigrateAIPro CDC tab, set:
source_type: oracle
start_scn: <SCN from Step 2>
tables: [SCHEMA.TABLE1, SCHEMA.TABLE2]
# Click "Start CDC"
```

Monitor lag (seconds behind source) in the CDC panel. Target a lag under 5 seconds before cutover.
Cutover
1. Stop application writes to Oracle (maintenance window or connection drain)
2. Wait for CDC lag to reach 0, meaning all changes have been replayed into PostgreSQL
3. Stop CDC in DBMigrateAIPro
4. Re-enable FK constraints and triggers on the PostgreSQL target
5. Update the application connection string to point to PostgreSQL
6. Run smoke tests against PostgreSQL and validate critical queries
7. Re-enable application traffic
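Step 2 is the one worth scripting so the maintenance window is not spent watching a dashboard. A sketch of a wait-for-zero-lag helper; `get_lag_seconds` stands in for however you read lag (the DBMigrateAIPro panel or a hypothetical API, neither shown here):

```python
import time

def wait_for_zero_lag(get_lag_seconds, timeout: float = 600.0,
                      poll_interval: float = 1.0) -> bool:
    """Poll until CDC lag reaches 0, or return False after timeout seconds.

    get_lag_seconds is any callable returning current lag in seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_lag_seconds() <= 0:
            return True
        time.sleep(poll_interval)
    return False
```

Gate the connection-string switch (step 5) on this returning True, and abort the cutover window if it times out.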
Troubleshooting
Supplemental logging not enabled
Run ALTER DATABASE ADD SUPPLEMENTAL LOG DATA; as SYSDBA before starting CDC.
LogMiner option not available
CDC requires the Oracle LogMiner option; DBMigrateAIPro v2.0+ handles this check automatically.
Type cast errors on NUMBER columns
Check the type mapping table. Override with explicit cast in config/column_overrides.json.
PostgreSQL sequence starts at 1
DBMigrateAIPro auto-syncs sequences. If a gap appears, run: `SELECT setval('seq', (SELECT MAX(id) FROM table));`
DATE columns shifted by timezone offset
Oracle DATE has no timezone. Set timezone = 'UTC' in PostgreSQL and ensure the application layer treats the values consistently.
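On the application side, the safest pattern is to tag the naive datetimes that come back from Oracle DATE columns as UTC explicitly, rather than letting the driver or server session guess. A minimal sketch:

```python
from datetime import datetime, timezone

def as_utc(naive: datetime) -> datetime:
    """Tag a timezone-naive datetime (e.g. from an Oracle DATE) as UTC."""
    if naive.tzinfo is not None:
        raise ValueError("expected a naive datetime")
    return naive.replace(tzinfo=timezone.utc)

print(as_utc(datetime(2024, 1, 15, 9, 30)).isoformat())
# → 2024-01-15T09:30:00+00:00
```

This only works if the source data really was stored in UTC; if Oracle stored local time, convert accordingly before tagging.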