Tuesday, May 27, 2008

Move Controlfile from Filesystem to Diskgroup

1. Identify the current controlfile location:
SQL> select name from v$controlfile;

NAME
--------------------------------------------------------------------------------
/home/pwprd/ctl.ctl

2. Shut down the database and start the instance in nomount mode:
SQL> shutdown normal
SQL> startup nomount

3. Use RMAN to move the controlfile to ASM:
$ rman nocatalog
RMAN> connect target
RMAN> restore controlfile to '+DG NAME' from 'FILE PATH';

e.g.

RMAN> restore controlfile to '+PWPRD_DATA' from '/home/pwprd/ctl.ctl';
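RMAN reports the fully qualified name of the restored controlfile in the diskgroup. One step the post leaves implicit is pointing the instance at the new copy and bringing the database back up. A minimal sketch, assuming the instance uses an spfile and that RMAN reported the hypothetical OMF name shown here:

SQL> alter system set control_files='+PWPRD_DATA/pwprd/controlfile/current.260.655229345' scope=spfile;
SQL> shutdown immediate
SQL> startup

If the instance was started from a pfile instead, edit the CONTROL_FILES parameter in the pfile before restarting.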

Monday, May 26, 2008

Creating an Oracle 10g database manually on ASM

Just for the understanding of the junior members in my present team..!!

Before proceeding with manual database creation, make sure the ASM instance is running, the diskgroup is mounted, and sufficient space is available for creating the database.

/home/oracle $ ps -ef | grep +ASM
oracle 25792 1 0 May25 ? 00:00:00 asm_pmon_+ASM

SQL> select group_number, name, state, total_mb/1024 TOTAL_GB, free_mb/1024 FREE_GB from v$asm_diskgroup;

GROUP_NUMBER NAME STATE TOTAL_GB FREE_GB
------------ ------------------------------ ----------- ---------- ----------
1 DATA MOUNTED 4 2.5

Specifying the Instance's SID

The SID can be set through the ORACLE_SID environment variable.

export ORACLE_SID=orcl

Creating the initialization parameter file

*.audit_file_dest='/u01/oracle/product/10.2.0/db_1/admin/orcl/adump'
*.background_dump_dest='/u01/oracle/product/10.2.0/db_1/admin/orcl/bdump'
*.compatible='10.2.0.1.0'
*.control_files='+DATA'
*.core_dump_dest='/u01/oracle/product/10.2.0/db_1/admin/orcl/cdump'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='orcl'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=orclXDB)'
*.job_queue_processes=10
*.open_cursors=300
*.pga_aggregate_target=200278016
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=536870912
*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/u01/oracle/product/10.2.0/db_1/admin/orcl/udump'
db_create_file_dest='+DATA'
log_archive_dest_1='LOCATION=+DATA'
db_create_online_log_dest_1='+DATA'

Note: make sure you create the dump file directories (adump, bdump, cdump, udump) before starting the instance, as shown below.
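For example, using the paths from the pfile above (assuming a bash-style shell with brace expansion):

$ mkdir -p /u01/oracle/product/10.2.0/db_1/admin/orcl/{adump,bdump,cdump,udump}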

Create Password File

Make sure you create the password file before you proceed with the instance startup.
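A minimal sketch using the orapwd utility (the password value and entries count here are placeholders; the file name follows the usual $ORACLE_HOME/dbs/orapw<SID> convention):

$ orapwd file=$ORACLE_HOME/dbs/orapworcl password=oracle entries=5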

Starting the instance

SQL*Plus: Release 10.2.0.1.0 - Production on Sat May 24 16:05:15 2008
 Copyright (c) 1982, 2004, Oracle.  All rights reserved.
SQL> connect sys/password as sysdba
Connected to an idle instance.

SQL*Plus tells us that we're connected to an idle instance, which means it is not yet started. So let's start the instance. We have to start it without mounting (NOMOUNT) because there is no database we could mount at the moment.

SQL> startup nomount pfile='/home/oracle/pfile.ora'
ORACLE instance started.
Total System Global Area 113246208 bytes
Fixed Size 787708 bytes
Variable Size 61864708 bytes
Database Buffers 50331648 bytes
Redo Buffers 262144 bytes

This created the SGA (System Global Area) and the background processes.

Creating the database

SQL> create database orcl
logfile group 1 ('+DATA') size 10M,
        group 2 ('+DATA') size 10M,
        group 3 ('+DATA') size 10M
character set WE8ISO8859P1
national character set UTF8
datafile '+DATA' size 50M autoextend on next 10M maxsize unlimited
extent management local
sysaux datafile '+DATA' size 10M autoextend on next 10M maxsize unlimited
undo tablespace undotbs1 datafile '+DATA' size 10M
default temporary tablespace temp tempfile '+DATA' size 10M;

If something goes wrong with the creation, Oracle will write an error into the alert log.
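With the pfile above, the alert log lands under BACKGROUND_DUMP_DEST, so errors can be watched for with, for example (file name assumes the SID orcl set earlier):

$ tail -f /u01/oracle/product/10.2.0/db_1/admin/orcl/bdump/alert_orcl.log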

Completing the DB creation

In order to complete the database creation, the following scripts must be run as SYS:

SQL> @?/rdbms/admin/catalog.sql
SQL> @?/rdbms/admin/catproc.sql

SQL> connect system/manager
SQL> @?/sqlplus/admin/pupbld

After the creation completes, set your environment to the ASM instance and you can see all these files from asmcmd.

ASMCMD> ls
ARCHIVELOG/
CONTROLFILE/
DATAFILE/
ASMCMD> pwd
+DATA/ORCL
ASMCMD>


Suggestions and Corrections are welcome..!!!

Saturday, May 24, 2008

Oracle Streams

  • Oracle Streams is a replication technology with which a database can be replicated across different operating systems.

  • Oracle Streams consists of three stages: capture, staging, and apply.

  • Changes to the database are written to the online redo log files; Oracle Streams extracts the changes from the log as it is written.
  • Changes are formatted as Logical Change Records (LCRs).
  • Streams publishes captured events into a staging area.
  • All events captured at the capture site are enqueued.
  • The events stay in the queue until they are consumed by the subscribers.
  • All propagated messages are consumed by the apply process and applied at the destination site.
Below are the steps I followed to implement Oracle Streams replication of a database across two operating systems (between Compaq Tru64 and HP-UX).

  1. The following steps have to be performed at both sites (source and destination):

a) create tablespace streams_tbs
datafile '/u01/datafiles/streams.dbf' size 1024m;

b) create user stradm identified by stradm
default tablespace streams_tbs
temporary tablespace temp
quota unlimited on streams_tbs;

c) grant connect,resource,dba to stradm;

d) execute DBMS_STREAMS_AUTH.GRANT_ADMIN_PRIVILEGE('STRADM');
  2. The following init parameters have to be set at both sites (a sketch for setting them follows the list):
a) COMPATIBLE = 10.1.0 or higher
b) GLOBAL_NAMES = TRUE
c) JOB_QUEUE_PROCESSES = 6
d) PARALLEL_MAX_SERVERS = 10
e) SHARED_POOL_SIZE = sized so that Streams can use up to 10% of the shared pool (when STREAMS_POOL_SIZE is not set)
f) OPEN_LINKS = 10
g) PROCESSES = default
h) SESSIONS = default
i) SGA_MAX_SIZE = default
j) TIMED_STATISTICS = TRUE
k) STREAMS_POOL_SIZE > 200M
l) UNDO_RETENTION > 900
m) The capture site database must run in ARCHIVELOG mode
n) _job_queue_interval = 1
o) alter system reset aq_tm_processes scope=spfile sid='*';
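As a sketch, the Streams-specific parameters could be set like this (values taken from the list above; scope=spfile is used because some of them are static, so a restart may be needed):

SQL> alter system set global_names=true scope=spfile sid='*';
SQL> alter system set streams_pool_size=200M scope=spfile sid='*';
SQL> alter system set parallel_max_servers=10 scope=spfile sid='*';
SQL> alter system set job_queue_processes=6 scope=spfile sid='*';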
  3. The following step has to be executed at the source site (capture site):

$sqlplus "sys as sysdba"

SQL> alter database add supplemental log data (all) columns;

  4. The following steps have to be executed at the source site to create the queue, capture, and propagation processes:
conn stradm/stradm@source

To create queue,

begin
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'STREAMS_QUEUE_TABLE',
queue_name => 'STREAMS_QUEUE',
queue_user => 'STRADM');
end;
/

To create capture process,

begin
dbms_streams_adm.add_schema_rules(
schema_name => 'SCHEMA NAME',
streams_type => 'capture',
streams_name => 'capture',
queue_name => 'streams_queue',
include_dml => true,
include_ddl => true,
source_database => 'SOURCE DB NAME');
end;
/

To create a propagation process, we first have to create a database link that connects to the Streams administrator user at the destination site:

create database link alh1 connect to stradm identified by stradm using 'ALH1';

To create propagation process,

begin
dbms_streams_adm.add_schema_propagation_rules(
schema_name => 'DEVLP',
streams_name => 'STREAMS_PROPAGATE',
source_queue_name => 'STREAMS_QUEUE',
destination_queue_name => 'STREAMS_QUEUE@DBLINK',
include_dml => true,
include_ddl => true,
source_database => 'SOURCE DB NAME');
end;
/

Once the capture and propagation processes have been created at the source site, do the following:

SQL> conn stradm/stradm

SQL> select start_scn from dba_capture;

Note down the SCN value returned by the query. This SCN has to be used for the instantiation process at the destination site.

  5. The following steps have to be performed at the destination site to create the apply process:

conn stradm/stradm@destination

To create queue,

begin
DBMS_STREAMS_ADM.SET_UP_QUEUE(
queue_table => 'STREAMS_QUEUE_TABLE',
queue_name => 'STREAMS_QUEUE',
queue_user => 'STRADM');
end;
/

To create apply process,

begin
dbms_streams_adm.add_schema_rules(
schema_name => 'SCHEMA NAME',
streams_type => 'apply',
streams_name => 'apply',
queue_name => 'streams_queue',
include_dml => true,
include_ddl => true,
source_database => 'SOURCE DB NAME');
end;
/

To set the user under which the apply process applies changes at the destination:

begin
dbms_apply_adm.alter_apply(
apply_name => 'apply',
apply_user => 'SCHEMA NAME');
end;
/

  6. An export of the whole schema has to be taken at the source site using the following parameters:

exp system/manager file=/u01/devlp.dmp owner=devlp consistent=y statistics=none

  7. Once the export is over, take the file to the destination site and import it using the following parameters:

imp system/manager file=/u02/devlp.dmp fromuser=devlp touser=devlp streams_instantiation=y ignore=y

  8. Once the apply process is created, execute the block below to set the schema instantiation SCN at the destination site, using the START_SCN value noted earlier at the source:

conn stradm/stradm

begin
DBMS_APPLY_ADM.SET_SCHEMA_INSTANTIATION_SCN(
source_schema_name => 'SCHEMA NAME',
source_database_name => 'SOURCE DB NAME',
instantiation_scn => START_SCN);
end;
/

  9. After setting the instantiation SCN, execute the following statement to start the apply process at the destination site:

begin
dbms_apply_adm.start_apply(
apply_name => 'apply');
end;
/


  10. After starting the apply process at the destination site, start the capture process at the source site by executing:

conn stradm/stradm

begin
dbms_capture_adm.start_capture(
capture_name => 'capture');
end;
/

Useful views:

a. dba_apply
b. dba_queue_tables
c. v$streams_apply_coordinator
d. v$propagation_receiver
e. dba_apply_progress
f. v$streams_apply_server
g. dba_apply_error
h. gv$streams_apply_reader
i. dba_apply_key_columns
j. dba_apply_dml_handlers
k. v$buffered_subscribers

select status, error_number, error_message from dba_apply;
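For example, to drill into individual apply errors (these are standard dba_apply_error columns):

select apply_name, local_transaction_id, error_number, error_message
from dba_apply_error;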

Thursday, May 22, 2008

Resync tables between primary and logical standby

Whenever there is a table row difference on the logical standby due to a skip rule or a skipped transaction, follow the procedure below to resync the tables.

-- On primary
SQL> grant logstdby_administrator to system;

-- On logical standby
SQL> grant logstdby_administrator to system ;

SQL> create public database link prod connect to system identified by password using 'TNSNAME'; (basically a DB link back to the primary)

SQL> alter database stop logical standby apply;

SQL> exec dbms_logstdby.instantiate_table('USERNAME','TABLE_NAME','DB_LINK_NAME');

SQL> alter database start logical standby apply;
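A worked example, with a hypothetical schema SCOTT and table EMP, using the PROD link created above:

SQL> alter database stop logical standby apply;
SQL> exec dbms_logstdby.instantiate_table('SCOTT','EMP','PROD');
SQL> alter database start logical standby apply;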

Note: this operation basically does an export and import; my general understanding is that the table is completely truncated and imported back from the primary.

Suggestions and Corrections are Welcome....!!!!

Wednesday, May 21, 2008

Oracle 10g Logminer

LogMiner is the feature that lets you find out what Oracle is writing to the redo log files. The redo log files store all the data needed to perform database recovery and record all changes (INSERT, UPDATE, and DELETE) made to the database.

Below are the steps showing how to set up LogMiner and mine an archived log:

1. Get the list of archived logs generated for the day using the command below. From the output, identify the archived logs you are going to mine with LogMiner.

alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
select thread#, sequence#, completion_time from v$archived_log order by completion_time desc;


2. Now build a LogMiner dictionary; make sure the following directory exists and your Unix account has access to it (for a flat-file dictionary build like this, the directory must also appear in the UTL_FILE_DIR parameter):

execute dbms_logmnr_d.build('dictionary.ora','/d01/oracle');
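If UTL_FILE_DIR does not already include the directory, it can be set like this (it is a static parameter, so an instance restart is needed to pick it up):

SQL> alter system set utl_file_dir='/d01/oracle' scope=spfile;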

3. Copy all the archived logs to a path and register each log as shown below:


execute dbms_logmnr.add_logfile('/d02/archivelogs/backup/arch/8045.arc',dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile('/d02/archivelogs/backup/arch/8046.arc',dbms_logmnr.addfile);
execute dbms_logmnr.add_logfile('/d02/archivelogs/backup/arch/8047.arc',dbms_logmnr.addfile);

4. Now, using the view below, make sure all the registered logs are available for mining.

select log_id, filename from v$logmnr_logs;

5. Using the view below, find the low SCN and high SCN to mine from the registered logs.


alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';
select low_time,high_time,low_scn,next_scn from v$logmnr_logs;

6. From the above output, gather the details and pass them to the LogMiner session below:


execute dbms_logmnr.start_logmnr(dictfilename =>'/d01/oracle/dictionary.ora',starttime => to_date('2008-04-21 10:00:14', 'yyyy-mm-dd hh24:mi:ss'), endtime => to_date('2008-04-21 12:11:07', 'yyyy-mm-dd hh24:mi:ss'));

7. v$logmnr_contents is visible only within the current mining session; once you disconnect, you won't be able to see the contents, so make sure you create a table from the contents of the view.


Note: if you mine a large batch of logs, the CREATE TABLE will take more time to complete.

create table logmnr_table_1 as select * from v$logmnr_contents;
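A typical query against the saved table (these are standard v$logmnr_contents columns; the owner and table filter values are hypothetical):

select scn, timestamp, operation, seg_owner, table_name, sql_redo
from logmnr_table_1
where seg_owner = 'SCOTT' and table_name = 'EMP'
order by scn;

When you are finished mining, release the session's resources with:

execute dbms_logmnr.end_logmnr;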



Let me know if you have any questions ...!!!!! Please post your comments..!!!