Monday, July 21, 2008

ORACLE 10g RAC INSTALLATION

You would have seen lots and lots of RAC install documents these days; this is one of the installations I performed for one of my US-based clients. It is a 10g R2 base release installation on a Sun Fire V490 box.

Download link :
http://rapidshare.com/files/131310423/RAC_INSTALL.pdf.html



Below is another installation which I did for an HP-UX client.

http://rapidshare.com/files/131316003/HPUXRAC.doc.html

Friday, July 11, 2008

Create Oracle 10g Logical Standby Database

1. Back up a copy of the spfile from production, change the db_unique_name parameter to the new logical standby name, and start the database in NOMOUNT state on the logical standby.
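
A minimal sketch of step 1 (paths and names here are illustrative, not from the original install):

-- On production:
SQL> create pfile='/tmp/initPWRPPRD.ora' from spfile;

-- Edit /tmp/initPWRPPRD.ora, set *.db_unique_name='PWRPPRD',
-- copy it to the standby host, then on the standby:
SQL> startup nomount pfile='/tmp/initPWRPPRD.ora'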


2. Duplicate the target database for standby using RMAN:
connected to target database: PWPRD (DBID=505738321)
connected to recovery catalog database
connected to auxiliary database: PWPRD (not mounted)

RMAN> run {
2> resync catalog;
3> allocate auxiliary channel t1 type 'SBT_TAPE' MAXOPENFILES 6 MAXPIECESIZE 8G
4> parms 'SBT_LIBRARY=/lib/libnwora.so,ENV=(NSR_SERVER=192.168.200.166,NSR_DATA_VOLUME_POOL=Database,NSR_GROUP=Oracle,NSR_DEBUG_
5> FILE=/tmp/debug.out)';
6> allocate auxiliary channel t2 type 'SBT_TAPE' MAXOPENFILES 6 MAXPIECESIZE 8G
7> parms 'SBT_LIBRARY=/lib/libnwora.so,ENV=(NSR_SERVER=192.168.200.166,NSR_DATA_VOLUME_POOL=Database,NSR_GROUP=Oracle,NSR_DEBUG_
8> FILE=/tmp/debug.out)';
9> duplicate target database for standby dorecover;
10> release channel t1;
11> release channel t2;
12> }


Please perform the restore as above.

Note: After the restore, create an spfile for your database and bounce it.


3. Add the following parameters to your database spfile:

*.log_archive_config='dg_config=(PWPRD,PWPRDDR,PWRPPRD)'
*.log_archive_dest_1='LOCATION="+ARCHIVES", valid_for=(ONLINE_LOGFILE,ALL_ROLES)'
*.log_archive_dest_2='LOCATION=+ARCHIVES/PWRPPRD/STANDBY/ VALID_FOR=(STANDBY_LOGFILE,STANDBY_ROLE) DB_UNIQUE_NAME=PWRPPRD'

Note: Please make sure you create the directory +ARCHIVES/PWRPPRD/STANDBY in your logical standby ASM manually.


4. Make the below change in the production database:


alter system set log_archive_dest_3='service="PWRPPRD", LGWR ASYNC NOAFFIRM delay=0 OPTIONAL max_failure=10 max_connections=1 reopen=15 db_unique_name="PWRPPRD" register net_timeout=180 valid_for=(online_logfile,primary_role)' scope=both sid='*';

Note: Start shipping the logs from production and start applying them.

Conversion of physical standby to Logical

5. Cancel managed recovery at the DR database (the one being converted to a logical standby).
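
On a physical standby this is the usual command; a minimal sketch:

SQL> alter database recover managed standby database cancel;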

6. Execute the below in the production database:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG
DATA (PRIMARY KEY, UNIQUE INDEX) COLUMNS;

Database altered.

SQL> execute dbms_logstdby.build;

PL/SQL procedure successfully completed.

7. In the new logical standby, convert the database to a single instance:


SQL> alter system set cluster_database=FALSE scope=spfile sid='*';
System altered.

SQL> shut immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.
SQL> startup nomount
ORACLE instance started.

Total System Global Area 4294967296 bytes
Fixed Size 2133936 bytes
Variable Size 810938448 bytes
Database Buffers 3456106496 bytes
Redo Buffers 25788416 bytes
SQL> alter database mount standby database;

Database altered.


8. Now issue the command to switch the physical DR to a logical DR:

SQL> alter database recover to logical standby PWRPPRD;

Note:

Please run the below script on the DR side:

select process,status,client_process,sequence#,block#,active_agents,known_agents
from v$managed_standby;

After running the logical conversion command, the above script will show you the logfile the DR is waiting for. Go to the production database, back up the archived log the script shows, scp it to the new DR, and register it on this side. Repeat this a couple of times until the conversion command completes with "Database altered".

The reason is that the conversion command requires every log up to the point where we ran the logical build on production in order to recover and convert the database.
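
Registering a shipped log on the DR side looks like this; a sketch, with an illustrative path:

SQL> alter database register logfile '/d02/arch/PWPRD_1_8045.arc';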


Note: At this point the conversion command will change the DB name and update it in the spfile. In the alert log you will find messages like the below:


Converting standby mount to primary mount.
Tue Mar 4 02:41:00 2008
ACTIVATE STANDBY: Complete - Database mounted as primary (PWRPPRD1)
*** DBNEWID utility started ***
DBID will be changed from 505738321 to new DBID of 2131768892 for database PWPRD
DBNAME will be changed from PWPRD to new DBNAME of PWRPPRD
Starting datafile conversion
Setting recovery target incarnation to 1
Datafile conversion complete
Database name changed to PWRPPRD.
Modify parameter file and generate a new password file before restarting.
Database ID for database PWRPPRD changed to 2131768892.
All previous backups and archived redo logs for this database are unusable.
Database has been shutdown, open with RESETLOGS option.
Succesfully changed database name and ID.
*** DBNEWID utility finished succesfully ***
Tue Mar 4 02:41:01 2008
SUCCESS: diskgroup RPT_DATA was dismounted
Tue Mar 4 02:41:01 2008
Completed: alter database recover to logical standby PWRPPRD

9. Before you shut down, make sure you set the cluster_database_instances parameter back to 2 and cluster_database back to TRUE (a sketch of the second setting follows the command below).


alter system set cluster_database=TRUE scope=spfile sid='*';
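
And the instance-count parameter mentioned in step 9; a sketch assuming a two-node RAC:

alter system set cluster_database_instances=2 scope=spfile sid='*';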

10. Bounce the logical standby and open it in RESETLOGS mode.

11. Start shipping the logs from the production side and start logical real-time apply.

Note: Please add standby logfiles on the logical standby side to enable real-time apply; a sketch follows.
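
A sketch of both pieces (group numbers and sizes here are illustrative):

SQL> alter database add standby logfile thread 1 group 10 ('+ARCHIVES') size 512M;
SQL> alter database start logical standby apply immediate;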

Transporting a Transactional Tablespace

From db01-lisprd

Create directory object for Data Pump

SQL> create directory pwprd_dir as '/d01/app/pwprd';

Directory created.

SQL> grant read, write on directory pwprd_dir to system;

Grant succeeded.

Bring the HLPR tablespace to read-only mode to facilitate the transfer.

SQL> alter tablespace hlpr read only;

Tablespace altered.

Export the tablespace metadata with Data Pump. We also choose the transport_full_check option so that Data Pump verifies the tablespace set is self-contained. This check was already performed via SQL*Plus but is repeated here just as a precaution (see the sketch below).
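
The SQL*Plus check referred to above, as a sketch:

SQL> exec dbms_tts.transport_set_check('HLPR', TRUE);
SQL> select * from transport_set_violations;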

-bash-3.00$ expdp system/p1pathway dumpfile=hlpr.dmp directory=pwprd_dir transport_tablespaces=hlpr transport_full_check=y

Export: Release 10.2.0.1.0 - 64bit Production on Thursday, 28 September, 2006 11:25:38

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
Starting "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01": system/******** dumpfile=hlpr.dmp directory=pwprd_dir transport_tablespaces=hlpr transport_full_check=y
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TRANSPORTABLE_01 is:
/d01/app/pwprd/hlpr.dmp
Job "SYSTEM"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 11:26:22

Launch RMAN and use the CONVERT command to produce platform-converted copies of the HLPR datafiles.

-bash-3.00$ rman target /

Recovery Manager: Release 10.2.0.1.0 - Production on Thu Sep 28 11:26:49 2006

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: PWHLPR (DBID=691297521)

RMAN> convert tablespace hlpr format '/d01/app/pwprd/hlpr_%U';

Starting backup at 28-SEP-06
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=510 instance=PWHLPR1 devtype=DISK
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00006 name=+HLPR_DATA/pwhlpr/datafile/hlpr01.ora
converted datafile=/d01/app/pwprd/hlpr_data_D-PWHLPR_I-691297521_TS-HLPR_FNO-6_09hudqct
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:05:36
channel ORA_DISK_1: starting datafile conversion
input datafile fno=00009 name=+HLPR_DATA/pwhlpr/datafile/hlpr.266.602275983
converted datafile=/d01/app/pwprd/hlpr_data_D-PWHLPR_I-691297521_TS-HLPR_FNO-9_0ahudqnd
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:01:35
Finished backup at 28-SEP-06

Move all files to rptdb01-lisprd.ftw


From rptdb01-lisprd

-bash-3.00$ ls -lrt
total 68248184
drwxrwx--- 3 pwrpprd pdba 512 Sep 21 11:18 product
-rw-r----- 1 pwrpprd pdba 385024 Sep 28 12:43 hlpr.dmp
-rw-r----- 1 pwrpprd pdba 26214408192 Sep 28 13:12 hlpr_data_D-PWHLPR_I-691297521_TS-HLPR_FNO-6_09hudqct
-rw-r----- 1 pwrpprd pdba 8560648192 Sep 28 13:21 hlpr_data_D-PWHLPR_I-691297521_TS-HLPR_FNO-9_0ahudqnd
-rw-r----- 1 pwrpprd pdba 150499328 Sep 28 13:22 pathway.dmp
-rw-rw-r-- 1 pwrpprd pdba 26767 Sep 28 13:38 import.log
-bash-3.00$
-bash-3.00$ rman target /

Recovery Manager: Release 10.2.0.2.0 - Production on Thu Sep 28 13:39:23 2006

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: PWHLPR (DBID=693046759)

RMAN> convert datafile '/d01/app/pwrpprd/hlpr_data_D-PWHLPR_I-691297521_TS-HLPR_FNO-6_09hudqct',
2> '/d01/app/pwrpprd/hlpr_data_D-PWHLPR_I-691297521_TS-HLPR_FNO-9_0ahudqnd'
3> format '+HLPR_DATA';

Starting backup at 28-SEP-06
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=54 instance=PWHLPR1 devtype=DISK
channel ORA_DISK_1: starting datafile conversion
input filename=/d01/app/pwrpprd/hlpr_data_D-PWHLPR_I-691297521_TS-HLPR_FNO-6_09hudqct
converted datafile=+HLPR_DATA/pwhlpr/datafile/hlpr.262.602343695
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:09:45
channel ORA_DISK_1: starting datafile conversion
input filename=/d01/app/pwrpprd/hlpr_data_D-PWHLPR_I-691297521_TS-HLPR_FNO-9_0ahudqnd
converted datafile=+HLPR_DATA/pwhlpr/datafile/hlpr.263.602344281
channel ORA_DISK_1: datafile conversion complete, elapsed time: 00:02:56
Finished backup at 28-SEP-06

-bash-3.00$ impdp system/pathway dumpfile=hlpr.dmp directory=pwprd_dir transport_datafiles='+HLPR_DATA/pwhlpr/datafile/hlpr.262.602343695','+HLPR_DATA/pwhlpr/datafile/hlpr.263.602344281'

Import: Release 10.2.0.2.0 - 64bit Production on Thursday, 28 September, 2006 14:05:30

Copyright (c) 2003, 2005, Oracle. All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
Master table "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01": system/******** dumpfile=hlpr.dmp directory=pwprd_dir transport_datafiles=+HLPR_DATA/pwhlpr/datafile/hlpr.262.602343695,+HLPR_DATA/pwhlpr/datafile/hlpr.263.602344281
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/CONSTRAINT/REF_CONSTRAINT
Processing object type TRANSPORTABLE_EXPORT/TABLE_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYSTEM"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at 14:05:55

SQL> select * from v$tablespace;

       TS# NAME                           INC BIG FLA ENC
---------- ------------------------------ --- --- --- ---
         0 SYSTEM                         YES NO  YES
         1 UNDOTBS1                       YES NO  YES
         2 SYSAUX                         YES NO  YES
         4 USERS                          YES NO  YES
         3 TEMP                           NO  NO  YES
         5 UNDOTBS2                       YES NO  YES
         6 HLPR                           YES NO  YES

7 rows selected.

SQL> alter tablespace hlpr read write;

Tablespace altered.

Thursday, June 19, 2008

Create ASM environment for testing purposes (dummy, testing only)

This is only a test case and not for production use..!!!
We don't need any ASMLib drivers to be installed for this method...!!


For running a single-instance ASM, we need the CSS daemon to be running before we create ASM. It will be started in a step below, before the ASM instance is created.


Now, as the root user, execute all these steps.

1. Create a directory /u01/asm_disks and cd into it:

cd /u01/asm_disks

Start creating the disk image files inside the directory (these zero-filled files will act as our dummy ASM disks):

[root@testserver]dd if=/dev/zero of=asm_disk1 bs=1024k count=1024
1024+0 records in
1024+0 records out

[root@testserver]dd if=/dev/zero of=asm_disk2 bs=1024k count=1024
1024+0 records in
1024+0 records out

[root@testserver]dd if=/dev/zero of=asm_disk3 bs=1024k count=1024
1024+0 records in
1024+0 records out

[root@testserver]dd if=/dev/zero of=asm_disk4 bs=1024k count=1024
1024+0 records in
1024+0 records out

and then bind the files to loop devices, and the loop devices to raw devices:

[root@testserver]losetup /dev/loop1 asm_disk1
[root@testserver]losetup /dev/loop2 asm_disk2
[root@testserver]losetup /dev/loop3 asm_disk3
[root@testserver]losetup /dev/loop4 asm_disk4
[root@testserver]raw -qa

[root@testserver]raw /dev/raw/raw1 /dev/loop1
/dev/raw/raw1: bound to major 7, minor 1

[root@testserver]raw /dev/raw/raw2 /dev/loop2
/dev/raw/raw2: bound to major 7, minor 2

[root@testserver]raw /dev/raw/raw3 /dev/loop3
/dev/raw/raw3: bound to major 7, minor 3

[root@testserver]raw /dev/raw/raw4 /dev/loop4
/dev/raw/raw4: bound to major 7, minor 4

[root@testserver]raw -qa
/dev/raw/raw1: bound to major 7, minor 1
/dev/raw/raw2: bound to major 7, minor 2
/dev/raw/raw3: bound to major 7, minor 3
/dev/raw/raw4: bound to major 7, minor 4

Assign permissions to the oracle user:


[root@testserver]chmod 660 asm_disk*

[root@testserver]ls
asm_disk1 asm_disk2 asm_disk3 asm_disk4



[root@testserver]chown oracle:oinstall /dev/raw/raw1
[root@testserver]chown oracle:oinstall /dev/raw/raw2
[root@testserver]chown oracle:oinstall /dev/raw/raw3
[root@testserver]chown oracle:oinstall /dev/raw/raw4

Note: Make sure you install the Oracle Database binaries (software only) before you proceed.

Now, as the root user, run the below command to start the CSS daemon.

[root@testserver]/u01/oracle/product/10.2.0/db_1/bin/localconfig add
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Configuration for local CSS has been initialized

Adding to inittab
Startup will be queued to init within 30 seconds.
Checking the status of new Oracle init process...
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
testserver
CSS is active on all nodes.
Oracle CSS service is installed and running under init(1M)

*******************************************************************

[root@testserver]ls /etc/oratab
/etc/oratab
[root@testserver]ls -ltr /etc/oratab
-rwxrw-r-- 1 oragrid root 881 Apr 29 02:40 /etc/oratab
[root@testserver]chown oracle:oinstall /etc/oratab


As the oracle user, invoke DBCA and create the ASM instance (the DBCA screenshots that were here are omitted). Once DBCA completes, you should see the ASM background processes running:
[root@testserver]ps -ef|grep ASM
oracle 8202 1 0 12:34 ? 00:00:00 asm_pmon_+ASM
oracle 8204 1 0 12:34 ? 00:00:00 asm_psp0_+ASM
oracle 8206 1 0 12:34 ? 00:00:00 asm_mman_+ASM
oracle 8208 1 0 12:34 ? 00:00:00 asm_dbw0_+ASM
oracle 8210 1 0 12:34 ? 00:00:00 asm_lgwr_+ASM
oracle 8212 1 0 12:34 ? 00:00:00 asm_ckpt_+ASM
oracle 8214 1 0 12:34 ? 00:00:00 asm_smon_+ASM
oracle 8216 1 0 12:34 ? 00:00:00 asm_rbal_+ASM
oracle 8218 1 0 12:34 ? 00:00:00 asm_gmon_+ASM
root 8240 7175 0 12:35 pts/2 00:00:00 grep ASM


Now you can go ahead and use DBCA to create a database; on step 6 of 13, choose Automatic Storage Management as your storage mechanism.
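
If DBCA does not discover the loop-backed disks, the discovery string may need to point at the raw devices (an assumption; the default discovery path varies by platform). Once the +ASM instance is up, a quick sanity check from SQL*Plus:

SQL> -- connect as sysdba to the ASM instance (ORACLE_SID=+ASM)
SQL> alter system set asm_diskstring='/dev/raw/raw*';
SQL> select path, header_status, state from v$asm_disk;
SQL> select name, state, total_mb, free_mb from v$asm_diskgroup;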

Export and Import using Unix Pipe And Compress

This has helped me save space in some activities, and also to run an export and import in parallel.

Scenario 1 :

You are asked to import a schema from your production server into your test server. You expect the export to take 5 hours and the import to take even longer. In this case, you can use this option, which saves the time spent exporting and shipping the file to the other server: the export and import run in parallel on the same box.

1. On your test server, add the production database TNS entry to your tnsnames.ora
2. Create a unix pipe

mknod <pipe_name> p

ex : mknod ram_pipe p

3. Start your export :

exp system/password@prod file=ram_pipe owner=erpds statistics=none log=/u01/export.log &

Remember to use & at the end to start the export in the background. You will be able to see from the logfile whether the pipe broke due to some error in the export command or the export is progressing...!!

4. Now from the same terminal start the import..!!

imp system/XXXXX@testserver file=ram_pipe fromuser=erpds touser=testds log=/u01/import.log &

Simple..!! You will notice one job will be exporting while the other is importing...!! The compress trick from the title works the same way; see the sketch below.
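
For the compress part of the title, the same pipe trick lets exp write a compressed dump without ever holding the uncompressed file on disk; a minimal sketch (file names are illustrative):

mknod exp_pipe p
compress < exp_pipe > /u01/expdat.dmp.Z &
exp system/password file=exp_pipe owner=erpds statistics=none log=/u01/exp.log

# And on the way back in:
mknod imp_pipe p
uncompress < /u01/expdat.dmp.Z > imp_pipe &
imp system/password file=imp_pipe fromuser=erpds touser=testds log=/u01/imp.log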

Oracle 10g features

Even though we work on 10g databases all 365 days, sometimes when you try to recollect all the 10g features you won't be able to recall them all. For my own reference I am adding this post..!!



1.Flashback Versions Query
2.Rollback Monitoring
3.Tablespace Management
4.Oracle Data Pump
5.Flashback Table
6.Automatic Workload Repository
7.Automatic Storage Management
8.RMAN
9.Auditing
10.Automatic Segment Management
11.Transportable Tablespaces
12.Automatic Shared Memory Management
13.ADDM and SQL Tuning Advisor


Oracle 10g R2 Features :

· ASM Command Line Tool
· Drop Empty Datafiles
· Direct SGA Access for Hung/Slow Systems
· Redefine a Partition Online
· Faster Startup
· Manage Multiple Objects in Oracle Enterprise Manager
· Automatic Segment Advisor
· Oracle Secure Backup
· Dynamic RMAN Views for Past and Current Jobs
· Dynamic Channel Allocation for Oracle RAC Clusters
· Tempfiles Recovery via RMAN
· Flashback Database/Query Through RESETLOGS
· Flashback Database Restore Points
· Flash Recovery Area View

I have got all these details from Oracle.com


From my experience, the following features have been very useful in our daily life:

1.AWR
2.ASH
3. ADDM
4. Improved RMAN features
5. ASM
6. Memory management
7. DBMS utilities
8. Datapump
9. Flashback (Enhancement)
10. Improved Auditing
11. Improved SQL apply techniques - Useful in Logical Standby
12. Dataguard Enhancements
13. SQL Tuning Advisor
14. Grid Control


I will be adding things whenever it strikes me..!!



Tuesday, May 27, 2008

Move Controlfile from Filesystem to Diskgroup

1. Check the current controlfile location:

SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
/home/pwprd/ctl.ctl

2. Shutdown the database and start the instance:
SQL> shutdown normal
SQL> startup nomount

3. Use RMAN to move the controlfile to ASM :
$ rman nocatalog
RMAN>connect target
RMAN>restore controlfile to '+DG NAME ' from 'FILE PATH';

e.g.

RMAN> restore controlfile to '+PWPRD_DATA' from '/home/pwprd/ctl.ctl';
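
Note: After the restore, the control_files parameter in the spfile still points at the old filesystem copy, so point it at the new ASM copy and bounce. A sketch, where the OMF name below is illustrative and should be taken from the RMAN output or asmcmd:

SQL> alter system set control_files='+PWPRD_DATA/pwprd/controlfile/current.260.655467231' scope=spfile;
SQL> shutdown immediate
SQL> startup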

Monday, May 26, 2008

Creating an Oracle 10g database manually on ASM

Just for the understanding of the junior members in my present team..!!

Before proceeding with manual database creation, make sure the ASM instance is running, the diskgroup is mounted, and sufficient space is available for creating the database.

/home/oracle $ ps -ef | grep +ASM
oracle 25792 1 0 May25 ? 00:00:00 asm_pmon_+ASM

SQL> select group_number,name,state,total_mb/1024 TOTAL_GB,free_mb/1024 FREE_GB from v$asm_diskgroup;

GROUP_NUMBER NAME STATE TOTAL_GB FREE_GB
------------ ------------------------------ ----------- ---------- ----------
1 DATA MOUNTED 4 2.5





Specifying the Instance's SID

The SID can be set through the ORACLE_SID environment variable.

export ORACLE_SID=ORCL

Creating the initialization parameter file

*.audit_file_dest='/u01/oracle/product/10.2.0/db_1/admin/orcl/adump'
*.background_dump_dest='/u01/oracle/product/10.2.0/db_1/admin/orcl/bdump'
*.compatible='10.2.0.1.0'
*.control_files='+DATA'
*.core_dump_dest='/u01/oracle/product/10.2.0/db_1/admin/orcl/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='orcl'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
*.job_queue_processes=10
*.open_cursors=300
*.pga_aggregate_target=200278016
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=536870912
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/oracle/product/10.2.0/db_1/admin/orcl/udump'
db_create_file_dest='+DATA'
log_archive_dest_1='LOCATION=+DATA'
db_create_online_log_dest_1='+DATA'

Note: Make sure you create the dump file directories (adump, udump, bdump, cdump).
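
A quick way to create them, matching the paths in the pfile above:

mkdir -p /u01/oracle/product/10.2.0/db_1/admin/orcl/adump
mkdir -p /u01/oracle/product/10.2.0/db_1/admin/orcl/bdump
mkdir -p /u01/oracle/product/10.2.0/db_1/admin/orcl/cdump
mkdir -p /u01/oracle/product/10.2.0/db_1/admin/orcl/udump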

Create Password File

Make sure you create the password file before you proceed with the instance startup.
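
A sketch using orapwd (the password below is just a placeholder):

-bash-3.00$ orapwd file=$ORACLE_HOME/dbs/orapworcl password=change_me entries=5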

Starting the instance

SQL*Plus: Release 10.2.0.1.0 - Production on Sat May 24 16:05:15 2008
 Copyright (c) 1982, 2004, Oracle.  All rights reserved.
SQL> connect sys/password as sysdba
Connected to an idle instance.

SQL*Plus tells us that we're connected to an idle instance. That means that it is not yet started. So, let's start the instance. We have to start the instance without mounting (nomount) as there is no database we could mount at the moment.

SQL> startup nomount pfile='/home/oracle/pfile.ora'
ORACLE instance started.
Total System Global Area 113246208 bytes
Fixed Size 787708 bytes
Variable Size 61864708 bytes
Database Buffers 50331648 bytes
Redo Buffers 262144 bytes

This created the SGA (System Global Area) and the background processes.

Creating the database

SQL>create database orcl
logfile group 1 ('+DATA') size 10M,
group 2 ('+DATA') size 10M,
group 3 ('+DATA') size 10M
character set WE8ISO8859P1
national character set utf8
datafile '+DATA' size 50M autoextend on next 10M maxsize unlimited extent management local
sysaux datafile '+DATA' size 10M autoextend on next 10M maxsize unlimited
undo tablespace undotbs1 datafile '+DATA' size 10M
default temporary tablespace temp
tempfile '+DATA' size 10M;

If something goes wrong with the creation, Oracle will write an error into the alert log.

Completing the DB creation

In order to complete the db creation, the following scripts must be run as sys

SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql

SQL> connect system/manager
SQL> @?/sqlplus/admin/pupbld

After creating the database, set your ASM environment and you can see all these files from asmcmd.

ASMCMD> ls
ARCHIVELOG/
CONTROLFILE/
DATAFILE/
ASMCMD> pwd
+DATA/ORCL
ASMCMD>


Suggestions and Corrections are welcome..!!!

Saturday, May 24, 2008

Oracle Streams

  • Oracle Streams is a replication methodology whereby a database can be replicated across different operating systems and databases.

  • Oracle Streams consists of three stages: capture, staging, and apply.

  • Changes to the database are written to the online redo log files; Oracle Streams extracts the changes from the log as they are written.
  • Changes are formatted as Logical Change Records (LCRs).
  • Streams publishes captured events into a staging area.
  • All events captured at the capture site are sent as a queue.
  • The events stay in the queue until they are consumed by the subscribers.
  • All the propagated messages are consumed by the apply process and applied at the destination site.
Below are the steps I followed to implement Oracle Streams to replicate a database across two operating systems (between Compaq Tru64 and HP-UX).

1. The following steps have to be followed at both sites (source and destination):

a) create tablespace streams_tbs
datafile '/u01/datafiles/streams.dbf' size 1024m;

b) create user stradm identified by stradm
default tablespace streams_tbs
temporary tablespace temp
quota unlimited on streams_tbs;

c) grant connect,resource,dba to stradm;

d) execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRADM');
2. The following init parameters have to be modified at both sites:
a) COMPATIBLE = 10.1.0 or higher
b) GLOBAL_NAMES = TRUE
c) JOB_QUEUE_PROCESSES = 6
d) PARALLEL_MAX_SERVERS = 10
e) SHARED_POOL_SIZE = increase the existing shared_pool_size by about 10%
f) OPEN_LINKS = 10
g) PROCESSES = DEFAULT
h) SESSIONS = DEFAULT
i) SGA_MAX_SIZE = DEFAULT
j) TIMED_STATISTICS = TRUE
k) STREAMS_POOL_SIZE > 200M
l) UNDO_RETENTION > 900
m) The capture site must run in ARCHIVELOG mode
n) _job_queue_interval = 1
o) alter system reset aq_tm_processes scope=spfile sid='*';
3. The following steps have to be executed at the source site (capture site):

$sqlplus "sys as sysdba"

SQL> alter database add supplemental log data (all) columns;

4. The following steps have to be executed at the source site to create the queue, capture, and propagation processes:

conn stradm/stradm@source

To create queue,

begin
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'STREAMS_QUEUE_TABLE',
queue_name => 'STREAMS_QUEUE',
queue_user => 'STRADM');
end;
/

To create capture process,

begin
dbms_streams_adm.add_schema_rules(
schema_name => 'SCHEMA NAME',
streams_type => 'capture',
streams_name => 'capture',
queue_name => 'streams_queue',
include_dml => true,
include_ddl => true,
source_database => 'SOURCE DB NAME');
end;
/

To create a propagation process, we first have to create a database link which connects to the Streams administrator user at the destination site:

create database link alh1 connect to stradm identified by stradm using 'ALH1';

To create propagation process,

begin
dbms_streams_adm.add_schema_propagation_rules(
schema_name => 'DEVLP',
streams_name => 'STREAMS_PROPAGATE',
source_queue_name => 'STREAMS_QUEUE',
destination_queue_name => 'STREAMS_QUEUE@DBLINK',
include_dml => true,
include_ddl => true,
source_database => 'SOURCE DB NAME');
end;
/

Once the capture and propagation processes are created at the source site, do the following:

SQL> conn stradm/stradm

SQL> select start_scn from dba_capture;

Note down the SCN value returned by the query. This SCN has to be used for the instantiation process at the destination site.

5. The following steps have to be followed at the destination site to create the apply process:

conn stradm/stradm@destination

To create queue,

begin
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'STREAMS_QUEUE_TABLE',
queue_name => 'STREAMS_QUEUE',
queue_user => 'STRADM');
end;
/

To create apply process,

begin
dbms_streams_adm.add_schema_rules(
schema_name => 'SCHEMA NAME',
streams_type => 'apply',
streams_name => 'apply',
queue_name => 'streams_queue',
include_dml => true,
include_ddl => true,
source_database => 'SOURCE DB NAME');
end;
/

begin
dbms_apply_adm.alter_apply(
apply_name => 'apply',
apply_user => 'SCHEMA NAME');
end;
/

6. An export of the whole schema has to be taken at the source site using the following parameters:

exp system/manager file=/u01/devlp.dmp owner=devlp consistent=y statistics=none

7. Once the export is over, take the file to the destination site and import it using the following parameters:

imp system/manager file=/u02/devlp.dmp fromuser=devlp touser=devlp streams_instantiation=y ignore=y

8. Once the apply process is created, execute the following at the destination site to set the schema instantiation SCN, using the START_SCN value noted earlier from DBA_CAPTURE at the source:

Conn stradm/stradm

begin
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
source_schema_name => 'SCHEMA NAME',
source_database_name => 'SOURCE DB NAME',
instantiation_scn => &start_scn );
end;
/

9. After setting the above instantiation SCN, execute the following statement to start the apply process at the destination site:

begin
dbms_apply_adm.start_apply(
apply_name => 'apply');
end;
/

10. After starting the apply process at the destination site, start the capture process at the source site by executing:

Conn stradm/stradm

begin
dbms_capture_adm.start_capture(
capture_name => 'capture');
end;
/

Useful Views :

a. dba_apply
b. dba_queue_tables
c. v$streams_apply_coordinator
d. v$propagation_receiver
e. dba_apply_progress
f. v$streams_apply_server
g. dba_apply_error
h. gv$streams_apply_reader
i. dba_apply_key_columns
j. dba_apply_dml_handlers
k. v$buffered_subscribers

select status, error_number, error_message from dba_apply;

Thursday, May 22, 2008

Resync tables between primary and logical standby

Whenever there is a table row difference in the logical standby due to some skip rule or a skipped transaction, follow the below procedure to resync the tables.

-- On primary
SQL> grant logstdby_administrator to system;

-- On logical standby
SQL> grant logstdby_administrator to system ;

SQL> Create public database link prod CONNECT TO system IDENTIFIED BY password USING 'TNSNAME'; ( basically a DB link to the primary )

SQL> alter database stop logical standby apply;

SQL> exec dbms_logstdby.instantiate_table('USERNAME','TABLE_NAME','DB_LINK_NAME');

SQL> alter database start logical standby apply;

Note: The above operation basically does an export and import; my understanding is that the table is completely truncated and imported back. A quick check is shown below.
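
A hedged way to verify the resync once apply has caught up is to compare row counts across the DB link (names here are placeholders):

SQL> select count(*) from username.table_name;
SQL> select count(*) from username.table_name@prod;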

Suggestions and Corrections are Welcome....!!!!

Wednesday, May 21, 2008

Oracle 10g Logminer

LogMiner lets us find out what Oracle is writing to the redo log files. The redo log files store all the data needed to perform a database recovery and record all changes (INSERT, UPDATE and DELETE) made to the database.

Below are the steps to set up LogMiner and mine an archived log:

1. Get the list of archives generated for the day using the below command, and from the output identify the archived logs you are going to mine with LogMiner:

alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
select thread#, sequence#, completion_time from v$archived_log order by completion_time desc;


2. Now build a LogMiner dictionary; make sure the following folder exists and your unix account has access to it (see the note after the command):

execute dbms_logmnr_d.build('dictionary.ora','/d01/oracle');
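
One assumption worth stating: the flat-file dictionary build writes through UTL_FILE, so the target directory normally has to be listed in utl_file_dir (a static parameter, so changing it needs a bounce):

SQL> show parameter utl_file_dir
SQL> alter system set utl_file_dir='/d01/oracle' scope=spfile;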

3. Copy all the archived logs to a path and register them as shown below:


execute dbms_logmnr.add_logfile('/d02/archivelogs/backup/arch/8045.arc',dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile('/d02/archivelogs/backup/arch/8046.arc',dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile('/d02/archivelogs/backup/arch/8047.arc',dbms_logmnr.addfile);

4. Now, from the below view, make sure all the registered logs are available for mining:

select log_id, filename from v$logmnr_logs;

5. Using the below view, find the low and high SCN (and times) to mine from the registered logs:


alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
select low_time,high_time,low_scn,next_scn from v$logmnr_logs;

6. From the above output, gather the details and pass them to the LogMiner session:


execute dbms_logmnr.start_logmnr(dictfilename =>'/d01/oracle/dictionary.ora',starttime => to_date('2008-04-21 10:00:14', 'yyyy-mm-dd hh24:mi:ss'), endtime => to_date('2008-04-21 12:11:07', 'yyyy-mm-dd hh24:mi:ss'));

7. As v$logmnr_contents is a session-private view, once you disconnect your session you won't be able to see the contents, so make sure you create a table from the contents of the view.


Note: If you mine a bunch of logs, the CREATE TABLE will take longer to complete.

create table logmnr_table_1 as select * from v$logmnr_contents;
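
Once the table is created you can mine it at leisure and then close the LogMiner session; a small sketch, where EMP is just an illustrative segment name:

select scn, timestamp, username, operation, sql_redo
from logmnr_table_1
where seg_name = 'EMP';

execute dbms_logmnr.end_logmnr;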



Let me know if you have any questions ...!!!!! Please post your comments..!!!