Cross-platform migration - large Oracle 9i db from Solaris 64bit to Linux 32bit

Friday, November 27, 2009

The question for this post is: what is the best way to migrate a >500GB Oracle database from a Solaris 64-bit to a Linux operating system?

As imp/exp (or expdp/impdp) would probably be a very slow option, the better choice is a cross-platform migration.

In this case we first have to upgrade the database from 9i to 10g and then migrate the tablespaces using cross-platform tablespace conversion with the transportable tablespace feature.

Why upgrade to 10g?
Because RMAN cross-platform tablespace conversion is not available in Oracle 9i.

This time I will walk through a simple demo case showing how to migrate an Oracle database from Solaris 64-bit to Linux 32-bit. Similar actions are performed when Linux 64-bit is the target platform instead of Linux 32-bit.


1. First create a new 10g installation using OUI on the Solaris machine.

2. Run the utlu102i.sql script (from the 10g ORACLE_HOME) in the Oracle 9i environment while the Oracle 9i database is running.

SQL> spool /tmp/preupgrade.log
SQL> @utlu102i.sql
SQL> spool off

Check in the log file which actions you have to perform before upgrading.

3. Create a pfile from the current spfile of the Oracle 9i database and copy that parameter file to the 10g ORACLE_HOME/dbs directory.

SQL> create pfile from spfile;

File created.

cp initdbtest.ora /oradata1/oratest/product/1020/dbs/

Edit parameter file according to recommendations from pre-upgrade tool.

I've set these parameter values for this little demo:

pga_aggregate_target = 25M
shared_pool_size = 200M
streams_pool_size = 64M
session_max_open_files = 20

4. A few more steps on the 9i database.

- Check for invalid objects and recompile:
SQL> SELECT UNIQUE object_name, object_type, owner
FROM dba_objects WHERE status='INVALID';
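The query above only lists the invalid objects; to recompile them in one pass you can run Oracle's standard utlrp.sql script:

```sql
-- Recompiles all invalid PL/SQL objects in dependency order
SQL> @?/rdbms/admin/utlrp.sql
```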


- Set SYSTEM as the default tablespace for the SYS and SYSTEM users.

SQL> select username, default_tablespace from dba_users
where username in ('SYS','SYSTEM');

To modify use:

SQL> alter user sys default tablespace SYSTEM;
SQL> alter user system default tablespace SYSTEM;

5. Shutdown database and stop listener.

6. Very IMPORTANT step - create a backup!

7. Edit the "/var/opt/oracle/oratab" file and change the entry from the old 9i ORACLE_HOME to the 10g ORACLE_HOME.
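For this demo the edited entry would look something like this (SID and home taken from the pfile copied in step 3; the trailing :N flag is just an example):

```text
# /var/opt/oracle/oratab - 9i home replaced with the 10g home
dbtest:/oradata1/oratest/product/1020:N
```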

8. Log in using the 10g environment and execute the following statements:

-bash-3.00$ sqlplus "/as sysdba"

SQL*Plus: Release - Production on Thu Nov 26 21:57:46 2009

Copyright (c) 1982, 2007, Oracle.  All Rights Reserved.

Connected to an idle instance.

SQL> startup upgrade pfile='$ORACLE_HOME/dbs/initdbtest.ora';
ORACLE instance started.

Total System Global Area  473956352 bytes
Fixed Size                  2041464 bytes
Variable Size             432019848 bytes
Database Buffers           33554432 bytes
Redo Buffers                6340608 bytes
Database mounted.
Database opened.

Before running the upgrade script, create the SYSAUX tablespace required by 10g (the sizing clause below is just an example):

SQL> CREATE TABLESPACE sysaux
  2  DATAFILE '/oradata1/oradata/sysaux01.dbf' SIZE 500M
  3  EXTENT MANAGEMENT LOCAL
  4  SEGMENT SPACE MANAGEMENT AUTO
  5  ONLINE;

9. Start the upgrade process and check the log file for errors:

SQL> set echo on
SQL> SPOOL /tmp/upgrade.log
SQL> @?/rdbms/admin/catupgrd.sql
SQL> spool off

10. Set the COMPATIBLE parameter to the new release value, create an spfile from the pfile, restart the database and recompile invalid objects.
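A minimal sketch of this step (the COMPATIBLE value shown is an assumption - use the value matching your 10.2 patch level):

```sql
-- Switch to an spfile, raise COMPATIBLE for the new release,
-- then restart and recompile invalid objects
SQL> create spfile from pfile;
SQL> shutdown immediate
SQL> startup
SQL> alter system set compatible = '10.2.0' scope=spfile;
SQL> shutdown immediate
SQL> startup
SQL> @?/rdbms/admin/utlrp.sql
```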

11. Run post-upgrade status tool - utlu102s.sql:


Check status and version of components.
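To check the components you can also query DBA_REGISTRY directly - after a successful upgrade every component should report the new version and a VALID status:

```sql
-- Component name, version and status after the upgrade
SQL> select comp_name, version, status
  2  from dba_registry;
```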

12. Enable the listener from the new 10g home.

13. I will skip gathering statistics for this demo.


Now that the database is upgraded to 10g, we can start with the cross-platform migration.

1. List the internal names of all platforms that support cross-platform data transport:

SQL> set lines 200
SQL> set pages 999
SQL> select * from V$transportable_platform;

PLATFORM_ID PLATFORM_NAME                                   ENDIAN_FORMAT
----------- ----------------------------------------------- --------------
          1 Solaris[tm] OE (32-bit)                         Big
          2 Solaris[tm] OE (64-bit)                         Big
          7 Microsoft Windows IA (32-bit)                   Little
         10 Linux IA (32-bit)                               Little
          6 AIX-Based Systems (64-bit)                      Big
          3 HP-UX (64-bit)                                  Big
          5 HP Tru64 UNIX                                   Little
          4 HP-UX IA (64-bit)                               Big
         11 Linux IA (64-bit)                               Little
         15 HP Open VMS                                     Little
          8 Microsoft Windows IA (64-bit)                   Little
          9 IBM zSeries Based Linux                         Big
         13 Linux x86 64-bit                                Little
         16 Apple Mac OS                                    Big
         12 Microsoft Windows x86 64-bit                    Little
         17 Solaris Operating System (x86)                  Little
         18 IBM Power Based Linux                           Big
         20 Solaris Operating System (x86-64)               Little
         19 HP IA Open VMS                                  Little

19 rows selected.

In this case:
source platform = Solaris[tm] OE (64-bit) - ENDIAN_FORMAT = Big
target platform = Linux IA (32-bit) - ENDIAN_FORMAT = Little

The endian format of the source platform differs from that of the destination platform, so conversion is required: the endian format of the datafiles in the transportable tablespace set must be converted to match the destination platform.
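If you are unsure what your database reports as its platform, from 10g on you can join V$DATABASE (which now has a PLATFORM_NAME column) to V$TRANSPORTABLE_PLATFORM:

```sql
-- Endian format of the platform this database runs on
SQL> select d.platform_name, tp.endian_format
  2  from v$database d, v$transportable_platform tp
  3  where d.platform_name = tp.platform_name;
```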

2. Check that the tablespace is self-contained:
SQL> begin
2     dbms_tts.transport_set_check('TBS_NAME',TRUE);
3  end;
4  /

PL/SQL procedure successfully completed.


SQL> select * from transport_set_violations;

no rows selected

If the query returns any rows, the tablespace has dependencies outside the transport set, so you must include the referenced tablespaces in the migration as well.

3. Put the tablespaces chosen for migration in read only mode. This query generates the required statements:
SQL> select 'alter tablespace '||tablespace_name||' read only;'
2  from dba_tablespaces
3  where tablespace_name not in ('SYSTEM','UNDOTBS1','TEMP','SYSAUX','DRSYS',
4                                'XDB','ODM','CWMLITE','USERS');

alter tablespace EXAMPLE read only;
alter tablespace INDX read only;
alter tablespace TOOLS read only;
alter tablespace DATA1 read only;

4 rows selected.

For this test case I've chosen the tablespaces listed above.

4. Execute the generated statements to actually put those tablespaces in read only mode.

5. Use RMAN to convert the datafiles into the endian format of the destination host:

RMAN> convert tablespace 'DATA1'
2> to platform="Linux IA (32-bit)"
3> db_file_name_convert='/oradata1/oradata','/oradata1/rman_convert';

Starting backup at 27-NOV-09
using channel ORA_DISK_1
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00012 name=/oradata1/oradata/data01.dbf
converted datafile=/oradata1/rman_convert/data01.dbf
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:00:01
Finished backup at 27-NOV-09

Execute this for all tablespaces you would like to migrate.
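If there are many tablespaces, a small shell helper can generate the RMAN commands instead of typing them by hand. This is just a sketch (the tablespace list, paths and output file name are assumptions, not from the original run):

```shell
#!/bin/sh
# Generate one RMAN "convert tablespace" command per tablespace and
# collect them into a command file, runnable as: rman target / @convert_all.rman
TABLESPACES="EXAMPLE INDX TOOLS DATA1"
OUT=convert_all.rman

: > "$OUT"                       # truncate/create the command file
for TS in $TABLESPACES; do
  {
    echo "convert tablespace '$TS'"
    echo 'to platform="Linux IA (32-bit)"'
    echo "db_file_name_convert='/oradata1/oradata','/oradata1/rman_convert';"
  } >> "$OUT"
done

cat "$OUT"
```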

6. Export the transportable tablespace metadata:

-bash-3.00$ exp userid=\'/ as sysdba\' transport_tablespace=y
tablespaces=DATA1 file=data1.dmp statistics=none

Export: Release - Production on Fri Nov 27 14:41:28 2009

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in UTF8 character set and UTF8 NCHAR character set
server uses AL32UTF8 character set (possible charset conversion)
Note: table data (rows) will not be exported
About to export transportable tablespace metadata...
For tablespace DATA1 ...
. exporting cluster definitions
. exporting table definitions
. . exporting table                     TEST_TABLE
. exporting referential integrity constraints
. exporting triggers
. end transportable tablespace metadata export
Export terminated successfully without warnings.

Export metadata of all tablespaces you would like to migrate.
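Note that exp also accepts a comma-separated tablespace list, so all chosen tablespaces can go into a single dump file. A sketch of building that list (the tablespace names and dump file name are just this demo's examples):

```shell
#!/bin/sh
# Build a comma-separated tablespace list for a single exp invocation
TABLESPACES="EXAMPLE INDX TOOLS DATA1"
TS_LIST=$(echo "$TABLESPACES" | tr ' ' ',')

# The exp command itself (printed here, not executed):
CMD="exp userid=\\'/ as sysdba\\' transport_tablespace=y tablespaces=$TS_LIST file=tts_all.dmp statistics=none"
echo "$CMD"
```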

7. Create database on target Linux machine.

Check database character set on source and target database.
Create the owning users on the target database if you are using the imp/exp tools.

8. Copy the converted datafiles and the export dump file to the target machine.

$ scp rman_convert/* oracle@

9. Import the transportable tablespace metadata into the target database (file= names the export dump, datafiles= the copied datafile locations):

$ imp userid=\'/ as sysdba\' transport_tablespace=y file=data1.dmp datafiles='data01.dbf'

Import: Release - Production on Fri Nov 27 15:02:28 2009

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

Export file created by EXPORT:V10.02.01 via conventional path
About to import transportable tablespace(s) metadata...
import done in UTF8 character set and UTF8 NCHAR character set
import server uses AL32UTF8 character set (possible charset conversion)
. importing SYS's objects into SYS
. importing SYS's objects into SYS
. importing MSUTIC's objects into MSUTIC
. . importing table                   "TEST_TABLE"
. importing SYS's objects into SYS
Import terminated successfully without warnings.

Use the same command to import the rest of the transportable tablespaces.

10. Excerpt from alert log:

Fri Nov 27 15:02:28 2009
Plug in tablespace DATA1 with datafile
Plug in tablespace TOOLS with datafile

11. Put imported tablespaces in read/write mode:
SQL> alter tablespace DATA1 read write;

Tablespace altered.

SQL> alter tablespace TOOLS read write;

Tablespace altered.


12. Small check on target database:

SQL> conn msutic/pwd
SQL> select count(*) from test_table;
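Besides the row count, a quick way to confirm that all segments arrived in the plugged-in tablespace is to query DBA_SEGMENTS (tablespace name from this demo):

```sql
-- List segments that came in with the transported tablespace
SQL> select owner, segment_name, segment_type
  2  from dba_segments
  3  where tablespace_name = 'DATA1';
```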





  • Anand

    Hi Marko,

    It was nice reading the article. I think the post title should be
    "Cross-platform migration - large Oracle 9i db from Solaris 64 to Linux 32"

    Linux 64 should be changed to Linux 32 as the target database is on Linux 32bit.


  • Marko Sutic

    Hi Anand,

    you are correct :)

    I have a Solaris 64 -> Linux 64 situation in my production environment, so I originally used the title from the case I'm doing in production.

    Thanks for the comment and I'm glad that you liked the post. I made this little correction in the title just now ;)


  • Hi Marko,

    How can I convert a backup set with big endian format to little endian on the target platform?


  • Marko Sutic

    Hi NashRus,

    first you must export the metadata...

    Then, after putting the tablespace in read only mode, you can copy the datafiles to the target host.
    There you should use the RMAN utility for the conversion.

    You cannot use "convert tablespace" on the target host - you should use the "convert datafile" command instead.

    For example:
    RMAN> convert datafile '/oradata1/oradata/data01.dbf'
    2> to platform="Linux IA (32-bit)"
    3> from platform="Solaris[tm] OE (64-bit)"
    4> db_file_name_convert='/oradata1/oradata','/oradata1/rman_convert';

    Then import metadata...

    I think this should work for you.


  • Marko Sutic

    Eh, you mentioned backup set not datafile... (ignore my last comment)

    Haven't tried this yet so I cannot provide correct answer.


    Thanks, I tried that here, but got nowhere.
    It is impossible even to restore the spfile from the available backup set.
    And yet it is there.


  • Marko Sutic

    Hm... you should be able to get the spfile if you have a valid backup of it.
    Have you tried to restore the spfile from autobackup?


    On the target platform (x86) RMAN says the autobackup copy is not found, while on the source platform (Sparc) the same file restores fine.
    The problem is that the source platform (Sparc) is not accessible to me, and there is a lot of data - 2 TB.
    Does RMAN on x86 not understand files created by RMAN on SPARC Solaris? Is it possible to convert them somehow, perhaps with third-party tools?


  • Marko Sutic

    What platform is this (x86)?
    What endian format?

    If you take a backup on a big endian platform and want to restore it on a little endian platform you will receive an error.


  • gbitero

    Great write-up, mate. I have been looking around for a note like this before embarking on a similar project, and you have nailed the point.

  • gbitero

    I have got a question here:
    If I want to migrate a database cross-platform from 9i (Solaris) to 11g (Red Hat), for instance, would it be best to first upgrade the 9i database on the source (i.e. Solaris) to 11g instead of 10g as above, and then follow your other steps?

  • Marko Sutic

    Hi gbitero,

    thanks for your comment.

    I think there shouldn't be any problems if you upgrade 9i directly to 11g and continue with the other steps.

    If you experience any problems please contact me.


  • Marko -

    I've got to migrate 2TB of data from HP-UX (64-bit, big endian) to Linux 64-bit (little endian).

    It's all partitioned tables and I'm worried about the time. How long did it take for 500GB? Please advise.


  • Marko Sutic

    Hi Sam,
    I noted the >500GB size as an example of big databases where cross-platform tablespace conversion with the transportable tablespace feature comes in handy.
    So I cannot tell you how long it takes to migrate a 500GB db - it depends on various factors.

    You should test the migration in a test environment with a smaller version of the database in question. Document the whole process and measure the time before migrating the production database.

    I wish you luck with your migration and don't worry too much ;)


    Hi, great post. I have a question, though... What about importing into Oracle RAC with ASM, which doesn't keep datafiles on a regular filesystem..?

    many thanks,


  • Marko Sutic

    Hello Kevin,
    thank you for commenting and sorry for my late response.

    I haven't tested this scenario so I might be wrong, but with ASM I think you could use these steps:
    1. Put tablespaces in read only mode
    2. Convert tablespaces using RMAN convert command
    3. Export metadata
    4. "RMAN backup as copy" to copy datafiles from non-ASM to ASM
    5. Plug in datafiles using import command and for datafiles choose new ASM file locations

    Currently I don't have an environment to test this out, but it would be a nice subject for a blog post. I will add a note to my to-do list and try this out soon.

