How to Migrate Oracle Database from Solaris to Linux

Blog | Feb 23, 2016


In this blog, I will discuss the steps to migrate an Oracle database from a Solaris system to a Linux system.

Before we get into the details, I would like you to go through the following points, which highlight the reasons for migrating from traditional Solaris boxes to Linux systems:

Proprietary UNIX servers depreciate rapidly: machines that cost hundreds of thousands of dollars are often worth only a fraction of their original value after just a few years of use. Another driver is the increasing power of so-called commodity chips from Intel Corp and low-cost operating systems like Linux.

Where are IT shops going?

Let me share a few quotes and thoughts from some of the players in this space:

Intel Hardware – “If you want the world’s fastest processors, then you will be forced to pay less” – Larry Ellison, CEO, Oracle Corporation

Linux – Large-scale uptake, but hindered by non-open source costs (Red Hat) and lackluster support.

Windows – Increasing in popularity but suffering from an unreliable past.

What are we seeing in the press about future issues with Sun Microsystems? Several of the articles are clear that Intel threatens proprietary UNIX:

 eWeek - “In a study conducted in April surveying 16,000 Unix systems users, Unisys found that 35 percent of businesses running Sun Microsystems Inc.'s SPARC/Solaris environments were interested in migrating to another platform”


Image Source: Red Hat

Migration Steps:

1 ==> On the source database (the Solaris server, in this action plan), confirm via SQL that the target platform supports conversion:

SQL> SELECT platform_id, platform_name, endian_format
       FROM V$TRANSPORTABLE_PLATFORM
      WHERE UPPER(platform_name) LIKE 'LINUX%';
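It also helps to confirm the source platform's endian format, since Solaris SPARC is big-endian while Linux x86 is little-endian; this endian difference is exactly why the RMAN CONVERT in step 14 is needed. A query along these lines (joining V$TRANSPORTABLE_PLATFORM against V$DATABASE) shows the current platform:

```sql
SQL> SELECT tp.platform_name, tp.endian_format
       FROM v$transportable_platform tp, v$database d
      WHERE tp.platform_name = d.platform_name;
```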

2 ==> Prepare and collect information on the Source database.

The related tablespaces which need to be converted are:

SQL> select tablespace_name from dba_tablespaces
      where tablespace_name not in ('SYSTEM', 'SYSAUX')
        and contents not in ('UNDO', 'TEMPORARY');

3 ==> On the source database (Solaris, in this action plan), run the following check to see if the set of tablespaces being transported violates any of the self-contained rules:

SQL> set serveroutput on size 1000000
SQL> declare
  cursor c_transport_set_violations is
    select violations from transport_set_violations;
  tablespace_names varchar2(4096);
  type tslist is table of dba_tablespaces.tablespace_name%type;
  ts tslist;
  type cursor_ref is ref cursor;
  ts_cur cursor_ref;
begin
  dbms_output.put_line('Starting to check tablespaces as specified');

  open ts_cur for 'select tablespace_name from dba_tablespaces
                   where tablespace_name not in (''SYSTEM'', ''SYSAUX'')
                   and contents not in (''UNDO'', ''TEMPORARY'')';
  fetch ts_cur bulk collect into ts;
  close ts_cur;

  tablespace_names := '';
  for i in ts.first .. ts.last
  loop
    if (i = ts.first)
    then
      tablespace_names := ts(i);
    else
      tablespace_names := tablespace_names || ', ' || ts(i);
    end if;
  end loop;

  dbms_output.put_line(tablespace_names);
  DBMS_TTS.TRANSPORT_SET_CHECK(tablespace_names, TRUE, TRUE);

  for c_cur in c_transport_set_violations loop
    dbms_output.put_line(c_cur.violations);
  end loop;

  dbms_output.put_line('If there is no output after '||chr(39)||'Starting to check ...'||chr(39));
  dbms_output.put_line('then the check went fine and there are no issues to resolve.');
end;
/

Now, see if Oracle detected any violations:

Note: errors raised by the above procedure for CHEM or any extensible indexes can be ignored, as the CHEM objects/schema can be re-created by the client in the destination database.

4 ==> Generate a script to create the related users; run the following on the source:

sqlplus "/ as sysdba"

SQL> set serveroutput on size 1000000
declare
  type userlist is table of dba_users.username%type;
  users userlist;
  type cursor_ref is ref cursor;
  c_cur cursor_ref;
begin
  open c_cur for 'select distinct owner from dba_segments
                  where tablespace_name in (select tablespace_name from dba_tablespaces
                  where tablespace_name not in (''SYSTEM'', ''SYSAUX'')
                  and contents not in (''UNDO'', ''TEMPORARY''))';
  fetch c_cur bulk collect into users;
  close c_cur;
  for i in users.first .. users.last
  loop
    dbms_output.put_line('create user '||users(i)||' identified by '||users(i)||';');
  end loop;
end;
/
set feedback off
spool tts_exp_users_create.sql
/
spool off;
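The spooled tts_exp_users_create.sql then contains one line per schema owner along these lines (APPUSER is a hypothetical name; note the generated script sets each password equal to the user name, so passwords should be reset after the migration):

```sql
-- Illustrative output line (APPUSER is a hypothetical schema name):
create user APPUSER identified by APPUSER;
```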

5 ==> Generate a script with the user default tablespaces; run the following on the source:

sqlplus "/ as sysdba"

SQL> REM tts_ts_users
set serveroutput on size 1000000
declare
  type userlist is table of dba_users.username%type;
  users userlist;
  type cursor_ref is ref cursor;
  c_cur cursor_ref;
  def_ts dba_users.default_tablespace%type;
  temp_ts dba_users.temporary_tablespace%type;
begin
  open c_cur for 'select distinct owner from dba_segments
                  where tablespace_name in (select tablespace_name from dba_tablespaces
                  where tablespace_name not in (''SYSTEM'', ''SYSAUX'')
                  and contents not in (''UNDO'', ''TEMPORARY''))';
  fetch c_cur bulk collect into users;
  close c_cur;
  for i in users.first .. users.last
  loop
    select default_tablespace, temporary_tablespace into def_ts, temp_ts
      from dba_users where username = users(i);
    dbms_output.put_line('alter user '||users(i)||' default tablespace '||def_ts||' temporary tablespace '||temp_ts||';');
  end loop;
end;
/
set feedback off
spool tts_exp_users_alter.sql
/
spool off

6 ==> Export the data

Make a metadata-only export (no rows), in order to recreate all objects/schemas/grants etc. which are not covered by the TTS export:

$ mkdir /app/datapump/testdb

SQL> create directory dpdir as '/app/datapump/testdb';

testdb_expdp_full_norows.par:

userid="/ as sysdba"

directory=dpdir

dumpfile=testdb_expdp_full_norows.dmp

logfile=testdb_expdp_full_norows.log

full=y

content=metadata_only

$ expdp parfile=testdb_expdp_full_norows.par

7 ==> Place the source tablespaces in read-only mode:
sqlplus "/ as sysdba"

SQL> REM tts_readonly
set serveroutput on size 1000000
declare
  ts_fail integer := 0;
  type tablespacetyp is table of dba_tablespaces%rowtype;
  tslist tablespacetyp;
  type cursor_ref is ref cursor;
  c_cur cursor_ref;
begin
  /*
  First check whether any of the tablespaces is already in read-only mode. If so, the
  procedure fails, because we cannot tell whether the read-only state is left over from
  an earlier run of this script or the tablespace is genuinely read-only.
  */
  open c_cur for 'select * from dba_tablespaces
                  where tablespace_name not in (''SYSTEM'', ''SYSAUX'')
                  and contents not in (''UNDO'', ''TEMPORARY'')';
  fetch c_cur bulk collect into tslist;
  close c_cur;
  for i in tslist.first .. tslist.last
  loop
    if tslist(i).status != 'ONLINE'
    then
      dbms_output.put_line('Tablespace: '||tslist(i).tablespace_name||
                           ' can NOT be put in read only mode, current status '||
                           tslist(i).status);
      ts_fail := ts_fail + 1;
    end if;
  end loop;
  if ts_fail != 0
  then
    dbms_output.put_line('Errors have been found while checking if tablespace(s) can be put in read only mode');
    return;
  end if;
  for i in tslist.first .. tslist.last
  loop
    execute immediate 'alter tablespace '||tslist(i).tablespace_name||' read only';
    dbms_output.put_line('Tablespace '||tslist(i).tablespace_name||' read only');
  end loop;
end;
/
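For a small number of tablespaces, the same effect can be had by generating the ALTER statements directly and running them by hand, a simpler sketch of the same idea:

```sql
select 'alter tablespace '||tablespace_name||' read only;'
  from dba_tablespaces
 where tablespace_name not in ('SYSTEM', 'SYSAUX')
   and contents not in ('UNDO', 'TEMPORARY');
```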

8 ==> Get the related datafiles for the tablespaces; run the following on the source:

SQL> REM tts_show_datafiles
set serveroutput on size 1000000
declare
  type datafiletyp is table of dba_data_files%rowtype;
  filelist datafiletyp;
  type cursor_ref is ref cursor;
  c_cur cursor_ref;
begin
  open c_cur for 'select * from dba_data_files
                  where tablespace_name in (select tablespace_name from dba_tablespaces
                  where tablespace_name not in (''SYSTEM'', ''SYSAUX'')
                  and contents not in (''UNDO'', ''TEMPORARY''))
                  order by tablespace_name, file_id';
  fetch c_cur bulk collect into filelist;
  close c_cur;
  for i in filelist.first .. filelist.last
  loop
    dbms_output.put_line('Tablespace: '||filelist(i).tablespace_name||' File: '||filelist(i).file_name);
  end loop;
end;
/
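Since no procedural logic is really needed for this step, a plain query produces the same listing:

```sql
select tablespace_name, file_name
  from dba_data_files
 where tablespace_name in (select tablespace_name from dba_tablespaces
                            where tablespace_name not in ('SYSTEM', 'SYSAUX')
                              and contents not in ('UNDO', 'TEMPORARY'))
 order by tablespace_name, file_id;
```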

9 ==> Export the tablespaces using the Transportable Tablespace feature; run the following on the source:

testdb_expdp_tts.par :

userid="/ as sysdba"

directory=dpdir

dumpfile=testdb_expdp_tts.dmp

logfile=testdb_expdp_tts.log

transport_full_check=y

transport_tablespaces=IDBS_EWB_CORE_TS,IDBS_EWB_DICT_TS,IDBS_EWB_WIDGET_TS,IDBS_EWB_SEC_TS,IDBS_EWB_INDX_TS,

IDBS_EWB_WIDGETS_INDX_TS,IDBS_EWB_CDC_SUB_INDX_TS,IDBS_EWB_LOB_INDX_TS,IDBS_EWB_LOB_DATA_TS,

IDBS_EWB_SEC_INDX_TS,IDBS_EWB_CORE_INDX_TS,IDBS_EWB_DICT_INDX_TS,IDBS_EWB_CDC_PUB_TS,IDBS_EWB_CDC_SUB_TS,

IDBS_EWB_DW_DATA_TS,IDBS_DW_LOB_INDX_TS,IDBS_DW_INDX_TS,IDBS_EWB_SA_DATA_TS,IDBS_SA_LOB_INDX_TS,

IDBS_SA_INDX_TS,IDBS_CATALOG_HUB_TS,IDBS_FHR_HUB_TS,IDBS_FHR_INDX_TS,IDBS_LSCAPE_HUB_TS,

IDBS_LSCAPE_INDX_TS,IDBS_LSCAPE_LOB_INDX_TS,IDBS_LSCAPE_LOB_DATA_TS,IDBS_EWB_CXTA_TS 

$ expdp parfile=testdb_expdp_tts.par

10 ==> Create the new database on the destination host

The new database can be created using SQL*Plus or DBCA.

The character set of the new database needs to be the same as that of the source database, or a superset of the source character set.

The new database only needs SYSTEM, SYSAUX, UNDO and TEMP tablespaces. Any other tablespace might block the import, as an imported tablespace cannot already exist in the new database.

11 ==> On the destination Linux server, create a directory (if it does not exist) to place the dump files in:

$ mkdir /data/datapump/testdb

12 ==> On the destination Linux server, create a directory object that points to the directory holding the Data Pump metadata:

SQL> create directory dpdir as '/data/datapump/testdb';

13 ==> Copy the datafiles, export dumps and SQL scripts to the destination server

Copy the datafiles and export dumps to the destination server, using an OS utility like 'ftp', 'sftp' or 'scp'.

13 a. Copy testdb_expdp_tts.dmp & testdb_expdp_full_norows.dmp from the source server to the destination server.

$ scp oracle@sourcemac:/app/datapump/testdb/testdb_expdp_tts.dmp /data/datapump/testdb

$ scp oracle@sourcemac:/app/datapump/testdb/testdb_expdp_full_norows.dmp /data/datapump/testdb

13 b. Copy the database files from the source server to the destination server.

Note: since the datafiles are on ASM on the source, we first need to copy them to the local file system and then scp them to the destination server.

mkdir /app/dbfs/testdb  ==> run this on the source if it does not exist (make sure there is ample space here, around 30 GB)

Be sure to copy each datafile shown in the output of the 'tts_show_datafiles' procedure from step 8.

NOTE : "Crosscheck for duplicate file names".
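Rather than typing each cp command by hand, the copy commands can be generated from the datafile list. A sketch (datafiles.txt is a hypothetical file holding one ASM path per line, e.g. the paths printed in step 8):

```shell
# Generate one asmcmd-style cp command per datafile (paths are illustrative).
mkdir -p /tmp/tts_demo
cat > /tmp/tts_demo/datafiles.txt <<'EOF'
+DATADG_1/testdb/idbs_ewb_core_ts.dbs
+DATADG_1/testdb/idbs_ewb_dict_ts.dbs
EOF
while read -r f; do
  echo "cp '$f' /app/dbfs/testdb/$(basename "$f")"
done < /tmp/tts_demo/datafiles.txt
```

The generated lines can then be reviewed and pasted into an asmcmd session.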

Using ASMCMD Utility :-

================================ 

cp '+DATADG_1/testdb/idbs_catalog_hub_ts.dbs' /app/dbfs/testdb/idbs_catalog_hub_ts.dbs

cp '+DATADG_1/testdb/idbs_dw_indx_ts.dbs' /app/dbfs/testdb/idbs_dw_indx_ts.dbs

cp '+DATADG_1/testdb/idbs_dw_lob_indx_ts.dbs' /app/dbfs/testdb/idbs_dw_lob_indx_ts.dbs

cp '+DATADG_1/testdb/idbs_ewb_cdc_pub_ts.dbs' /app/dbfs/testdb/idbs_ewb_cdc_pub_ts.dbs

cp '+DATADG_1/testdb/idbs_ewb_cdc_sub_indx_ts.dbs' /app/dbfs/testdb/idbs_ewb_cdc_sub_indx_ts.dbs

cp '+DATADG_1/testdb/idbs_ewb_cdc_sub_ts.dbs' /app/dbfs/testdb/idbs_ewb_cdc_sub_ts.dbs

cp '+DATADG_1/testdb/idbs_ewb_core_indx_ts.dbs' /app/dbfs/testdb/idbs_ewb_core_indx_ts.dbs

cp '+DATADG_1/testdb/idbs_ewb_core_ts.dbs' /app/dbfs/testdb/idbs_ewb_core_ts.dbs

cp '+DATADG_1/testdb/idbs_ewb_cxta_ts.dbs' /app/dbfs/testdb/idbs_ewb_cxta_ts.dbs

cp '+DATADG_1/testdb/idbs_ewb_dict_indx_ts.dbs' /app/dbfs/testdb/idbs_ewb_dict_indx_ts.dbs

cp '+DATADG_1/testdb/idbs_ewb_dict_ts.dbs' /app/dbfs/testdb/idbs_ewb_dict_ts.dbs

cp '+DATADG_1/testdb/idbs_ewb_dw_data_ts.dbs' /app/dbfs/testdb/idbs_ewb_dw_data_ts.dbs

cp '+DATADG_1/testdb/idbs_ewb_indx_ts.dbs' /app/dbfs/testdb/idbs_ewb_indx_ts.dbs

cp '+DATADG_1/testdb/idbs_ewb_lob_data_ts.dbs' /app/dbfs/testdb/idbs_ewb_lob_data_ts.dbs

cp '+DATADG_1/testdb/idbs_ewb_lob_indx_ts.dbs' /app/dbfs/testdb/idbs_ewb_lob_indx_ts.dbs

cp '+DATADG_1/testdb/idbs_ewb_sa_data_ts.dbs' /app/dbfs/testdb/idbs_ewb_sa_data_ts.dbs

cp '+DATADG_1/testdb/idbs_ewb_sec_indx_ts.dbs' /app/dbfs/testdb/idbs_ewb_sec_indx_ts.dbs

cp '+DATADG_1/testdb/idbs_ewb_sec_ts.dbs' /app/dbfs/testdb/idbs_ewb_sec_ts.dbs

cp '+DATADG_1/testdb/idbs_ewb_widgets_indx_ts.dbs' /app/dbfs/testdb/idbs_ewb_widgets_indx_ts.dbs

cp '+DATADG_1/testdb/idbs_ewb_widget_ts.dbs' /app/dbfs/testdb/idbs_ewb_widget_ts.dbs

cp '+DATADG_1/testdb/idbs_fhr_hub_ts.dbs' /app/dbfs/testdb/idbs_fhr_hub_ts.dbs

cp '+DATADG_1/testdb/idbs_fhr_indx_ts.dbs' /app/dbfs/testdb/idbs_fhr_indx_ts.dbs

cp '+DATADG_1/testdb/idbs_lscape_hub_ts.dbs' /app/dbfs/testdb/idbs_lscape_hub_ts.dbs

cp '+DATADG_1/testdb/idbs_lscape_indx_ts.dbs' /app/dbfs/testdb/idbs_lscape_indx_ts.dbs

cp '+DATADG_1/testdb/idbs_lscape_lob_data_ts.dbs' /app/dbfs/testdb/idbs_lscape_lob_data_ts.dbs

cp '+DATADG_1/testdb/idbs_lscape_lob_indx_ts.dbs' /app/dbfs/testdb/idbs_lscape_lob_indx_ts.dbs

cp '+DATADG_1/testdb/idbs_sa_indx_ts.dbs' /app/dbfs/testdb/idbs_sa_indx_ts.dbs

cp '+DATADG_1/testdb/idbs_sa_lob_indx_ts.dbs' /app/dbfs/testdb/idbs_sa_lob_indx_ts.dbs

cp '+DATADG_1/testdb/users01.dbf' /app/dbfs/testdb/users01.dbf

Now copy the datafiles from the source to the destination.

mkdir /data/dbfs/testdb ==> execute this on the destination server if it does not exist (make sure there is ample space here)

$ scp oracle@sourcemac:/app/dbfs/testdb/* /data/datapump/testdb

14 ==> On the destination Linux server, connect to RMAN and run the CONVERT DATAFILE command:

rman_convert.sh :-

===========================

export ORACLE_SID=TESTDB

export ORACLE_HOME=/data/app/oracle/product/11.2.0.3/dbhome_1

export PATH=$PATH:$ORACLE_HOME/bin

export LOG="/data/dbfs/testdb/rman_convert_testdb.log"

echo " rman convert started at :`date`" >$LOG

$ORACLE_HOME/bin/rman target / <<EOF>>$LOG

run

{

CONVERT DATAFILE

'/data/datapump/testdb/idbs_ewb_core_ts.dbs',

'/data/datapump/testdb/idbs_ewb_dict_ts.dbs',

'/data/datapump/testdb/idbs_ewb_widget_ts.dbs',

'/data/datapump/testdb/idbs_ewb_sec_ts.dbs',

'/data/datapump/testdb/idbs_ewb_indx_ts.dbs',

'/data/datapump/testdb/idbs_ewb_widgets_indx_ts.dbs',

'/data/datapump/testdb/idbs_ewb_cdc_sub_indx_ts.dbs',

'/data/datapump/testdb/idbs_ewb_lob_indx_ts.dbs',

'/data/datapump/testdb/idbs_ewb_lob_data_ts.dbs',

'/data/datapump/testdb/idbs_ewb_sec_indx_ts.dbs',

'/data/datapump/testdb/idbs_ewb_core_indx_ts.dbs',

'/data/datapump/testdb/idbs_ewb_dict_indx_ts.dbs',

'/data/datapump/testdb/idbs_ewb_cdc_pub_ts.dbs',

'/data/datapump/testdb/idbs_ewb_cdc_sub_ts.dbs',

'/data/datapump/testdb/idbs_ewb_dw_data_ts.dbs',

'/data/datapump/testdb/idbs_dw_lob_indx_ts.dbs',

'/data/datapump/testdb/idbs_dw_indx_ts.dbs',

'/data/datapump/testdb/idbs_ewb_sa_data_ts.dbs',

'/data/datapump/testdb/idbs_sa_lob_indx_ts.dbs',

'/data/datapump/testdb/idbs_sa_indx_ts.dbs',

'/data/datapump/testdb/idbs_catalog_hub_ts.dbs',

'/data/datapump/testdb/idbs_fhr_hub_ts.dbs',

'/data/datapump/testdb/idbs_fhr_indx_ts.dbs',

'/data/datapump/testdb/idbs_lscape_hub_ts.dbs',

'/data/datapump/testdb/idbs_lscape_indx_ts.dbs',

'/data/datapump/testdb/idbs_lscape_lob_indx_ts.dbs',

'/data/datapump/testdb/idbs_lscape_lob_data_ts.dbs',

'/data/datapump/testdb/idbs_ewb_cxta_ts.dbs'

DB_FILE_NAME_CONVERT

'/data/datapump/testdb',

'/data/oradata/TESTDB/datafile'

FROM PLATFORM 'Solaris[tm] OE (64-bit)';

}

exit

EOF

echo "rman convert Completed at :`date`" >>$LOG 

15 ==> Pre-create the users on the destination host, using the generated script tts_exp_users_create.sql from step 4:

run script : tts_exp_users_create.sql

16 ==> Import the datafiles

Import the datafiles and metadata using the Transportable Tablespace feature. The datafiles here are the converted datafiles.

testdb_impdp_tts.par :

userid="/ as sysdba"

directory=dpdir

dumpfile=testdb_expdp_tts.dmp

logfile=testdb_impdp_tts.log

transport_datafiles='/data/datapump/testdb/idbs_ewb_core_ts.dbs',

'/data/datapump/testdb/idbs_ewb_dict_ts.dbs',

'/data/datapump/testdb/idbs_ewb_widget_ts.dbs',

'/data/datapump/testdb/idbs_ewb_sec_ts.dbs',

'/data/datapump/testdb/idbs_ewb_indx_ts.dbs',

'/data/datapump/testdb/idbs_ewb_widgets_indx_ts.dbs',

'/data/datapump/testdb/idbs_ewb_cdc_sub_indx_ts.dbs',

'/data/datapump/testdb/idbs_ewb_lob_indx_ts.dbs',

'/data/datapump/testdb/idbs_ewb_lob_data_ts.dbs',

'/data/datapump/testdb/idbs_ewb_sec_indx_ts.dbs',

'/data/datapump/testdb/idbs_ewb_core_indx_ts.dbs',

'/data/datapump/testdb/idbs_ewb_dict_indx_ts.dbs',

'/data/datapump/testdb/idbs_ewb_cdc_pub_ts.dbs',

'/data/datapump/testdb/idbs_ewb_cdc_sub_ts.dbs',

'/data/datapump/testdb/idbs_ewb_dw_data_ts.dbs',

'/data/datapump/testdb/idbs_dw_lob_indx_ts.dbs',

'/data/datapump/testdb/idbs_dw_indx_ts.dbs',

'/data/datapump/testdb/idbs_ewb_sa_data_ts.dbs',

'/data/datapump/testdb/idbs_sa_lob_indx_ts.dbs',

'/data/datapump/testdb/idbs_sa_indx_ts.dbs',

'/data/datapump/testdb/idbs_catalog_hub_ts.dbs',

'/data/datapump/testdb/idbs_fhr_hub_ts.dbs',

'/data/datapump/testdb/idbs_fhr_indx_ts.dbs',

'/data/datapump/testdb/idbs_lscape_hub_ts.dbs',

'/data/datapump/testdb/idbs_lscape_indx_ts.dbs',

'/data/datapump/testdb/idbs_lscape_lob_indx_ts.dbs',

'/data/datapump/testdb/idbs_lscape_lob_data_ts.dbs',

'/data/datapump/testdb/idbs_ewb_cxta_ts.dbs'


Make sure to include all the converted datafiles.

$ impdp parfile=testdb_impdp_tts.par

17 ==> Post-import steps

Alter the related users to have the correct default and temporary tablespaces again, using the generated script from step 5.

run script : tts_exp_users_alter.sql

18 ==> Put the tablespaces in READ WRITE mode again, on both destination & source:

SQL> set serveroutput on size 1000000
declare
  ts_fail integer := 0;
  type tablespacetyp is table of dba_tablespaces%rowtype;
  tslist tablespacetyp;
  type cursor_ref is ref cursor;
  c_cur cursor_ref;
begin
  open c_cur for 'select * from dba_tablespaces
                  where tablespace_name not in (''SYSTEM'', ''SYSAUX'')
                  and contents not in (''UNDO'', ''TEMPORARY'')';
  fetch c_cur bulk collect into tslist;
  close c_cur;
  for i in tslist.first .. tslist.last
  loop
    if tslist(i).status != 'READ ONLY'
    then
      dbms_output.put_line('Tablespace: '||tslist(i).tablespace_name||
                           ' can NOT be put in read write mode, current status '||tslist(i).status);
      ts_fail := ts_fail + 1;
    end if;
  end loop;
  if ts_fail != 0
  then
    dbms_output.put_line('Errors have been found while checking if tablespace(s) can be put in read write mode');
    return;
  end if;
  for i in tslist.first .. tslist.last
  loop
    execute immediate 'alter tablespace '||tslist(i).tablespace_name||' read write';
    dbms_output.put_line('Tablespace: '||tslist(i).tablespace_name||' put in read write mode');
  end loop;
end;
/

19 ==> Import all the related objects which were not imported by the TTS import, like grants etc.

testdb_impdp_full_norows.par:

userid="/ as sysdba"

directory=dpdir

dumpfile=testdb_expdp_full_norows.dmp

logfile=testdb_impdp_full_norows.log

full=y

content=metadata_only

table_exists_action=skip 

$ impdp parfile=testdb_impdp_full_norows.par

20 ==> Recompile all invalid objects

SQL> connect / as sysdba

     @?/rdbms/admin/utlrp.sql
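After utlrp.sql finishes, it is worth confirming that no objects are left invalid:

```sql
SQL> select owner, object_name, object_type
       from dba_objects
      where status = 'INVALID';
```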

Conclusion: The above steps complete the migration activity from Solaris to Linux. For any questions on the topic, click below:

Ask Ravi