Migrating to EXADATA with ZERO downtime

Recently I was involved in migrating a multi-terabyte database to EXADATA with ZERO downtime. Let me make one thing clear: it was not literally ZERO downtime, but close to zero.

The requirement was to migrate the database to EXADATA and upgrade it from 11g to 12c. There were some other requirements that needed more downtime, but for the purpose of this article I will keep it simple.

Now if you go with the Data Guard-only approach, you still need downtime to upgrade the database, so we went with the Data Guard plus GoldenGate route. Before we get into technical details, I will explain the high-level plan.

Migration plan overview

  1. Create a Data Guard physical standby of the source database on the target host (EXADATA) and keep it in sync.
  2. Create another empty 12c database on the target host (EXADATA) and install GoldenGate. We will call it the OGG database for the purpose of this article.
  3. Make sure supplemental logging is enabled in the source database.
  4. Configure redo transport between the source and OGG databases, and confirm log files are being shipped from the source database.
  5. Create a downstream-capture integrated Extract using the 12c OGG database and confirm the integrated Extract is working (trail files are being generated).
  6. Activate the 11g standby as a primary on EXADATA and record the SCN and timestamp at which this occurs (STANDBY_BECAME_PRIMARY_SCN from V$DATABASE; see the query after this list). This SCN is critical for starting Replicat.
  7. Now Data Guard is disconnected from the primary, but all changes from the primary are captured by GoldenGate and written to the trail files. Upgrade the target database to 12c on EXADATA.
  8. After the upgrade to 12c, create an integrated Replicat on the target database and start it applying from the SCN value recorded in step 6 using START REPLICAT ... AFTERCSN <SCN>.
  9. Wait until the integrated Replicat catches up before moving on. At this point, your target database is 12c and is being kept in sync with the source via GoldenGate.
  10. Disconnect users from the 11g primary and wait until all data has been extracted and applied on the 12c target. Stop GoldenGate and allow users to connect to the new 12c target database.
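To capture the cutover SCN from step 6, run this on the activated standby immediately after activation:

SQL> SELECT STANDBY_BECAME_PRIMARY_SCN FROM V$DATABASE;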
There you go: you have migrated the database to EXADATA and upgraded it to 12c with close to ZERO downtime :)


Environment details

Source user#server:database –> orpvboxo@groucho:PVBOXO

Mining user#server:database –> orpvboxo@harpo:OGG

Target user#server:database –> orpvboxo@harpo:PVBOXO

User creation

SQL>create user ggsource identified by ggsource default tablespace users;

SQL>grant dba to ggsource;

SQL>create user ggmining identified by ggmining default tablespace users;

SQL>grant dba to ggmining;

SQL>exec DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE (grantee=>'ggsource', privilege_type=>'capture', grant_select_privileges=>TRUE, do_grants=>TRUE);

SQL>exec DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE (grantee=>'ggmining', privilege_type=>'capture', grant_select_privileges=>TRUE, do_grants=>TRUE);
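You can sanity-check the grants via the DBA_GOLDENGATE_PRIVILEGES view:

SQL>select * from dba_goldengate_privileges;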

 Adding supplemental logging

On Source

SQL>alter database add supplemental log data;

SQL>alter database force logging;
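Confirm both settings before moving on:

SQL>select supplemental_log_data_min, force_logging from v$database;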

Configure Redo Transport

On Source

Add TNS entry of mining database to source tnsnames.ora

sqlplus "/as sysdba"

SQL>ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(PVBOXO,PVBOXO_EXA,OGG)';

SQL>ALTER SYSTEM SET LOG_ARCHIVE_DEST_3='SERVICE=PVBOXO_GG ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=OGG';

SQL>ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_3=ENABLE;

exit

On OGG database

 Add TNS entry of source database to mining tnsnames.ora

sqlplus "/as sysdba"

SQL>STARTUP MOUNT;

SQL>ALTER DATABASE ARCHIVELOG;

SQL>ALTER DATABASE OPEN;

Create standby redo log in mining database.

SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('+RECO_PVBOXO', '+DATA_PVBOXO') SIZE 200M;

SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 ('+RECO_PVBOXO', '+DATA_PVBOXO') SIZE 200M;

SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 ('+RECO_PVBOXO', '+DATA_PVBOXO') SIZE 200M;

SQL>ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)';

SQL>ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_1=ENABLE;

SQL>ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='LOCATION=+DATA_PVBOXO VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)';

SQL>ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;

Exit;

Make sure redo transport has started.
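One way to confirm on the source is to check the new destination for errors:

SQL> SELECT DEST_ID, STATUS, ERROR FROM V$ARCHIVE_DEST_STATUS WHERE DEST_ID=3;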

Create downstream capture integrated Extract

On OGG database

Create a new credential store

$ggsci

GGSCI>add credentialstore

GGSCI>ALTER CREDENTIALSTORE ADD USER ggsource@pvboxo PASSWORD ggsource ALIAS ggsource

GGSCI>ALTER CREDENTIALSTORE ADD USER ggmining@ogg PASSWORD ggmining ALIAS ggmining

GGSCI>ALTER CREDENTIALSTORE ADD USER ggsource@pvboxo_exa PASSWORD ggsource ALIAS ggtarget

GGSCI>dblogin useridalias  ggsource

GGSCI>ADD SCHEMATRANDATA sh

Exit;

$ggsci

GGSCI>DBLOGIN USERIDalias ggsource

GGSCI>MININGDBLOGIN USERIDalias ggmining

GGSCI>REGISTER EXTRACT exthrsao DATABASE

GGSCI>ADD EXTRACT exthrsao, INTEGRATED TRANLOG, BEGIN NOW

GGSCI>ADD EXTTRAIL ./dirdat/lt, EXTRACT exthrsao, MEGABYTES 200

GGSCI>EDIT PARAMS exthrsao

extract exthrsao

useridalias ggsource

tranlogoptions mininguseralias ggmining

TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)

EXTTRAIL /pvboxo/oracle/product/GG/dirdat/lt

TABLE SH.GG_POC;

GGSCI>START EXTRACT exthrsao

GGSCI > info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING

EXTRACT     RUNNING     EXTHRSAO    00:00:50      00:00:07

Insert some data into the SH.GG_POC table:

sqlplus sh/sh

SQL>insert into gg_poc values ('b',sysdate);

SQL>insert into gg_poc values ('b',sysdate);

SQL>insert into gg_poc values ('b',sysdate);

SQL>insert into gg_poc values ('b',sysdate);

SQL>commit;

Exit;

Check the status of the Extract.

$ggsci

GGSCI> stats extract exthrsao

Sending STATS request to EXTRACT EXTHRSAO …

Start of Statistics at 2015-08-12 17:31:12.

Output to /pvboxo/oracle/product/GG/dirdat/lt:

Extracting from SH.GG_POC to SH.GG_POC:

*** Total statistics since 2015-08-12 17:30:32 ***

                Total inserts                            20475.00

                Total updates                                0.00

                Total deletes                                0.00

                Total discards                               0.00

                Total operations                         20475.00

Exit;

Activate the 11g standby as a primary

On target database

sqlplus "/as sysdba"

SQL> SELECT OPEN_MODE,PROTECTION_MODE,DATABASE_ROLE FROM V$DATABASE;

OPEN_MODE            PROTECTION_MODE      DATABASE_ROLE

-------------------- -------------------- ----------------

READ ONLY WITH APPLY MAXIMUM PERFORMANCE  PHYSICAL STANDBY

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;

Database altered.

SQL> ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE;

Database altered.

SQL> SELECT OPEN_MODE,PROTECTION_MODE,DATABASE_ROLE,CURRENT_SCN FROM V$DATABASE;

OPEN_MODE            PROTECTION_MODE      DATABASE_ROLE    CURRENT_SCN

-------------------- -------------------- ---------------- -----------

MOUNTED              MAXIMUM PERFORMANCE  PRIMARY              7100032

SQL> select name from v$database;

NAME

---------

PVBOXO

SQL> ALTER DATABASE OPEN;

Upgrade the Target database to 12c.

Follow the standard process to upgrade the database.

Create integrated Replicat

Get the row count of the GG_POC table on the target:

SQL> select count(*) from sh.gg_poc;

  COUNT(*)

----------

    163912

$ggsci

GGSCI> ADD REPLICAT rephrsao, INTEGRATED, EXTTRAIL /pvboxo/oracle/product/GG/dirdat/lt

GGSCI>edit params rephrsao

REPLICAT rephrsao

USERIDalias ggtarget

ASSUMETARGETDEFS

discardfile ./dirrpt/repdown.dsc, purge

MAP sh.gg_poc, TARGET sh.gg_poc;

GGSCI> START REPLICAT rephrsao, AFTERCSN 7100032

GGSCI > info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING

EXTRACT     RUNNING     EXTHRSAO    00:00:08      00:00:01

REPLICAT    RUNNING     REPHRSAO    00:00:00      00:00:01

Exit;

Insert rows into GG_POC on the source and take the count. After some time, check the count on the target; it should match.
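You can also watch the Replicat catch up from GGSCI:

GGSCI> LAG REPLICAT rephrsao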

Once there is no lag, plan to switch users over to the target database on EXADATA during a low-activity period.

Enjoy:)

Migrating Oracle Database to EXADATA

Recently I migrated a few multi-terabyte databases from HP-UX to Exadata X2-8 and X2-2 full racks. The source databases were 10g and the target Exadata was 11g. I used the XTTS (cross-platform transportable tablespace) method due to the endian format difference between source and target. The entire migration process is somewhat complicated, but it can be divided into six steps.

  • Migration Approach

This depends upon your environment. The bottom line is that you need to present (NFS mount) the source database file system to Exadata. If your source is a normal cooked file system, this is very easy. But if your source is on ASM or raw devices, those cannot be mounted on the Exadata server, so you have to create an intermediate copy of the database on storage which can be mounted on Exadata. The most obvious option is a NAS file system. There are other options like GoldenGate, but that is an expensive option.
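For illustration, a read-only NFS mount of the source file system on an Exadata compute node might look like this (the host, export, and mount point are hypothetical; use the NFS mount options Oracle recommends for your platform):

# mount -t nfs -o ro,hard,rsize=32768,wsize=32768,tcp,vers=3 nas-host:/oradata /mnt/src_oradata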

  • Target and Source instance preparation

Follow MOS Doc ID 1366265.1 (Using Transportable Tablespaces for EBS 11i Using Database 11gR2), check your source instance, and apply whatever fixes you need. You need to run the script auxttspre.sql and check its log file to see whether XTTS is possible in your environment. Mostly you end up moving some objects from the TOOLS/USERS/SYSTEM tablespaces to another tablespace (see the sketch after the table below); the idea is to make all the objects reside within the list of tablespaces you are migrating. This script will also generate a bunch of other scripts which will be used later for Data Pump and other activities.

File             Description
---------------  --------------------------------------------------------------------------
auxttsread.sql   Puts the transportable tablespace set in read-only mode in the source database.
auxttswrite.sql  Converts the transportable tablespace set in the source database back to read-write.
auexpxtts.dat    Exports the transportable tablespace set.
auexpglb.dat     Exports global temporary tables.
auexpsys.dat     Exports the SYSTEM tablespace schemas.
auexpmsc.dat     Exports the other objects.
aurman.dat       Script for endian conversion using RMAN.
auimpxtts.dat    Imports the transportable tablespace set.
auimpmsc1.dat    Imports the other objects that do not need tables (first pass).
auimpmsc2.dat    Imports the other objects that do not need tables (second pass).
auimpglb.dat     Imports global temporary tables.
auimpsys.dat     Imports the SYSTEM tablespace schemas.
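As mentioned above, you typically end up relocating segments so that everything lives inside the transportable set. A minimal sketch, with hypothetical object and tablespace names:

SQL>alter table scott.some_table move tablespace apps_ts_data;

SQL>alter index scott.some_index rebuild tablespace apps_ts_data;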

Sometimes you need to apply database patches for Data Pump; you will sort this out in the first build. There is not much to prepare on the target apart from creating an empty database and changing init.ora parameters as per your environment.

  • Export metadata

Once your source is ready, you need to put the tablespaces in read-only mode by running the script auxttsread.sql. To save time, you can run all of the following metadata export scripts in parallel (see the sketch after this list).

auexpxtts.dat

auexpglb.dat

auexpsys.dat

auexpmsc.dat
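Assuming these are Data Pump parameter files (the preparation step generates them for Data Pump, but verify in your build), running the exports in parallel from the shell could look like this sketch:

$ nohup expdp system PARFILE=auexpxtts.dat > auexpxtts.log 2>&1 &

$ nohup expdp system PARFILE=auexpglb.dat > auexpglb.log 2>&1 &

$ nohup expdp system PARFILE=auexpsys.dat > auexpsys.log 2>&1 &

$ nohup expdp system PARFILE=auexpmsc.dat > auexpmsc.log 2>&1 &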

  • Endian Conversion

While these exports are running, you can NFS mount the source database file system on Exadata and start the source database in mount state. Now you can start the endian conversion using RMAN on the Exadata server. The script aurman.dat copies the database from the NFS file system to Exadata ASM and performs the endian conversion (a sketch of this follows). Verify the log file to make sure no errors are reported.
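The conversion aurman.dat performs is essentially RMAN's cross-platform convert; a sketch with hypothetical file name, platform string, and disk group:

RMAN> CONVERT DATAFILE '/mnt/src_oradata/users01.dbf' FROM PLATFORM 'HP-UX IA (64-bit)' FORMAT '+DATA';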

  • Import metadata

Once the metadata exports and the endian conversion have completed, you can start the imports. Unlike the exports, the imports have to be run serially and in a particular order. Before you start the imports, take a full backup of the target database; in case there are errors, you can just restore the database from backup instead of recreating it from scratch.

The first import script to be run is auimpmsc1.dat. This imports the objects that do not need tables. Verify the log file; you will of course see lots of errors related to object compilation, which you can safely ignore, but check for other errors and fix them. If needed, drop and recreate the database, apply those fixes, and run the import again.

The second import script to be run is auimpxtts.dat. This script attaches all the datafiles to their tablespaces. After this import completes successfully, you can see all the datafiles in DBA_DATA_FILES. At this point, compare the tablespaces between source and target; they should match. Now run the script auxttswrite.sql to put all the tablespaces back in read-write mode.
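Under the hood this step is a Data Pump transportable import; conceptually it looks like this sketch (the directory object, dump file name, and datafile path are hypothetical):

$ impdp system DIRECTORY=dp_dir DUMPFILE=auexpxtts.dmp TRANSPORT_DATAFILES='+DATA/pvboxo/datafile/users01.dbf'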

Now you can run auimpglb.dat, auimpmsc2.dat, and auimpsys.dat, in that particular order.

  • Post import

The main post-import work is to compile invalid objects. Gather the SYS schema statistics by running adstats.sql, then collect schema statistics. Depending upon the environment, you may have other tasks such as recreating queues, etc. Last but not least, compare object counts: get the object count grouped by schema and object type from both target and source and compare them.
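A minimal sketch of these post-import steps (utlrp.sql is the standard recompilation script shipped with the database; the schema name here is hypothetical):

SQL>@?/rdbms/admin/utlrp.sql

SQL>exec dbms_stats.gather_schema_stats(ownname=>'APPS');

SQL>select owner, object_type, count(*) from dba_objects group by owner, object_type order by owner, object_type;

Run the same count query on both source and target and diff the results.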

The purpose of this article is to provide an overview of the entire database migration process. I am sure that when you do a migration you will hit lots of issues, because each environment is different. But if you know the fundamentals and understand the whole process, the task becomes easy. I hope this helps you.

Hello World!

First of all, I would like to give my sincere thanks to the whole web community, from which I have learned and am still learning so much. I believe it is never too late for anything. There is a saying in my native language, Hindi: "Gyan batane se badhta hai", which means "your knowledge increases if you share it". So I have finally decided to write this blog about whatever little knowledge I have about Oracle. I hope that I will keep updating this site.