Wednesday, July 27, 2011

Analyzing AWR/StatsPack top 5 events

From Oracle documentation:
“Wait events are statistics that are incremented by a server process or thread to indicate that it had to wait for an event to complete before being able to continue processing.”

For example, wait events can be related to I/O, locks, memory allocation, etc.

The top events tell us what the server processes wait for the most. Eliminating these waits will definitely reduce execution time, since server processes will no longer have to spend that time waiting for other operations to complete.

There are many wait events in Oracle databases; this is a list of some important and common ones:
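As a quick approximation of the AWR/StatsPack top-events section, the non-idle events with the most accumulated wait time since instance startup can be pulled from V$SYSTEM_EVENT. This is a sketch; the WAIT_CLASS column is available from 10g onwards:

```sql
SELECT *
  FROM (SELECT event, total_waits, time_waited
          FROM v$system_event
         WHERE wait_class <> 'Idle'
         ORDER BY time_waited DESC)
 WHERE ROWNUM <= 5;
```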

Thursday, July 21, 2011

Cloning a database using SnapView/clones on the same server

Using SnapView/clones for database cloning:

Prerequisites:
cloning on the same server
11gR2
No ASM [Automatic Storage Management]
No FRA [Flash Recovery Area]

Steps:
1. Configure the network: add a new service to the listener.ora and tnsnames.ora files so that there is connectivity to the cloned database.

Add a new entry to the listener.ora file:
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /u01/app/oracle/product/11.1.0/db_1)
(PROGRAM = extproc)
)

(SID_DESC =
(SID_NAME = kloned)
(ORACLE_HOME = /u01/app/oracle/product/11.1.0/db_1)
)
)

Add a new entry to the tnsnames.ora file:
KLONED =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = oracle1.ehugox.com)(PORT = 1522))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = kloned.ehugox.com)
)
)

... and check that the new service can be pinged:

$ tnsping kloned

2. Create a password file for the new database (under $ORACLE_HOME/dbs):
$ orapwd file=orapwkloned password=


3. Create an init.ora file from the source database.
Create the pfile with CREATE PFILE FROM SPFILE on the source database, then
edit it, changing all occurrences of the old database name to the new name.
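A minimal sketch of this rename, using a stand-in pfile (in practice the file would come from CREATE PFILE FROM SPFILE; the parameter values here are hypothetical):

```shell
# Stand-in pfile for demonstration; in practice this is the pfile
# generated from the source database's spfile.
cat > /tmp/initkloned.ora <<'EOF'
db_name='dbsingle1'
control_files='/u01/app/oracle/oradata/dbsingle1/control01.ctl'
EOF

# Replace every occurrence of the old database name with the new one
sed -i "s/dbsingle1/kloned/g" /tmp/initkloned.ora
```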

4. Create the new admin directory structure for the new database.

5. Start replication using SnapView clone/snapshot. Start a consistent session on the LUNs holding the database files. It is not necessary to ALTER DATABASE BEGIN BACKUP because this EMC solution provides storage-based consistent replication.

6. Stop replication using Snapview clone consistent fracture once all the LUNs involved are in a Sync and/or Consistent state.

7. Initiate a single Snapview consistent clone fracture.

8. Archive all unarchived redo logs:
sqlplus /nolog
SQL> connect / as sysdba
SQL> alter system archive log current;

9. Set the environment to point to the new instance:
$ export ORACLE_SID=kloned

10. Start the instance without mounting it:
SQL> startup nomount
11. Create new control file:
create controlfile set database "kloned" resetlogs
logfile
group 1 '/u01/app/oracle/oradata/kdbsingle1/redo01.log' size 50M,
group 2 '/u01/app/oracle/oradata/kdbsingle1/redo02.log' size 50M,
group 3 '/u01/app/oracle/oradata/kdbsingle1/redo03.log' size 50M
datafile
'/u01/app/oracle/oradata/kdbsingle1/example01.dbf',
'/u01/app/oracle/oradata/kdbsingle1/sysaux01.dbf',
'/u01/app/oracle/oradata/kdbsingle1/system01.dbf',
'/u01/app/oracle/oradata/kdbsingle1/users01.dbf',
'/u01/app/oracle/oradata/kdbsingle1/undotbs01.dbf';
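The logfile and datafile lists for the CREATE CONTROLFILE statement above can be generated on the source database rather than typed by hand (a sketch):

```sql
-- Online redo log members (for the LOGFILE clause)
SELECT member FROM v$logfile;

-- Datafiles (for the DATAFILE clause)
SELECT name FROM v$datafile;
```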

12. Recover database and if everything goes fine open it:
SQL> recover database until cancel using backup controlfile;
ORA-00279: change 1197392 generated at 07/20/2011 18:02:22 needed for thread 1
ORA-00289: suggestion :
/u01/app/oracle/flash_recovery_area/DBSINGLE1/onlinelog/1_4_753801762.dbf
ORA-00280: change 1197392 for thread 1 is in sequence #4


Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
cancel
Media recovery cancelled.

SQL> alter database open resetlogs;

and that's all.....






Why Control File Waits Occur
The performance around reads and writes against control files is often an indication of misplaced control files that share the same I/O access path or are on devices that are heavily used. It is interesting to note that Oracle has always defaulted the creation of control files in a single directory. You can check where your control files reside on disk with this simple query.

SELECT name FROM v$controlfile;

If you wanted to check the total system waits that have occurred for control file reads and writes, you could do so by querying the V$SYSTEM_EVENT view. This will give you the total number of waits, timeouts, and accumulated time for this event.

SELECT *
FROM v$system_event
WHERE event IN (SELECT name
FROM V$EVENT_NAME
WHERE name LIKE '%control%');

Likewise you could query the V$SESSION_WAIT view to see which sessions are experiencing control file waits in real time.

SELECT event, wait_time, p1, p2, p3
FROM v$session_wait
WHERE event LIKE '%control%';

Here WAIT_TIME is the elapsed time for reads or writes. P1, P2, and P3 are file#, block#, and blocks for 'control file sequential read' and 'control file single write', but files, blocks, and requests for 'control file parallel write'.

Since there are no Oracle internal views for looking at control file layout as there are for 'normal' data and temporary files, you can only get a look inside the control files by generating a dump. You can do this through the following ALTER SESSION command, where <level> is typically from 1 to 3 and represents dumping the file header, database & checkpoint records, and circular reuse record types. It is in the resulting trace file, generated in user_dump_dest, that you can see that control file(s) have a file# of 0.

ALTER SESSION SET EVENTS 'immediate trace name controlf level <level>';

Making Improvements
So how can I reduce the time spent for control file reads and writes? There are two distinct answers to this problem.

First, you can ensure that you have placed your control files on disks that are not under excessively heavy loads. When deciding how many control files to have, keep in mind that the more control files you have, the more I/O and time will be needed to write all those copies. It is often better to have the O/S mirror the control files and reduce Oracle overhead.
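Relocating a control file to a quieter disk can be sketched like this (the target paths are hypothetical; the files themselves must be copied at the O/S level while the database is down):

```sql
-- Point the instance at the new locations (takes effect on next startup)
ALTER SYSTEM SET control_files =
  '/u02/oradata/db1/control01.ctl',
  '/u03/oradata/db1/control02.ctl'
  SCOPE = SPFILE;

-- Then: SHUTDOWN IMMEDIATE, copy the files to the new paths, STARTUP
```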

Second, since the number of reads and writes are dictated by the type of activity within the database, it is often a good idea to revisit application logic and processing steps to ensure that excessive activities are not causing excessive reads and writes to the control files. For instance, I have seen code that produces excessive commits and even log switches. Since DBA activity is typically concentrated on modifying structures in the database, we need to be careful when performing batch runs of administrative scripts that could conflict with normal application processing. So be on the lookout for high activity levels such as log switching, checkpointing, backups taking place, and structural changes.
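One way to spot the high activity levels mentioned above is to count log switches per hour from V$LOG_HISTORY (a sketch; an unusually high hourly count points at excessive redo generation):

```sql
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS log_switches
  FROM v$log_history
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY 1;
```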

Summary
The control files are vital to database operations. Since they hold such vital information about the database, it is imperative that they are safeguarded against disk I/O contention while also being protected against corruption or loss. Balancing these two is often difficult, but common practice is to keep multiple copies of the control files on separate physical disks, avoiding heavily accessed disks. Also consider using operating system mirroring instead of Oracle multiplexing of the control files, taking and verifying your control file backups daily, and creating a new backup control file whenever a structural change is made. By taking control of your control files you can eliminate I/O contention and steer clear of becoming another war story on the net about yet another lost control file and unrecoverable database system.