Sunday, October 28, 2007

OWI... wan-kenobi?

Surely you remember the Star Wars saga... and Obi-Wan Kenobi as Luke's first instructor in the Force. Well, my point here is how to use the Force with Oracle Server... through OWI, or the Oracle Wait Interface, as its long name stands.

It is not an 'interface' in the sense of passing values to the DB kernel; instead, you get information about waits generated in the system and wait events within sessions... yes, prepare your lightsaber to fight all those annoying performance issues: your mind will broaden its perception.

v$session_event (9i)
Lists all wait events for all sessions, with number of occurrences and timing statistics.
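For example, a quick look at the heaviest waiters per session (a minimal sketch; the idle-event filter is only illustrative):

```sql
-- Top non-idle wait events per session, worst first
-- (time_waited is in centiseconds on 9i/10g)
SELECT sid,
       event,
       total_waits,
       time_waited / 100 AS seconds_waited
FROM   v$session_event
WHERE  event NOT LIKE 'SQL*Net%'
ORDER  BY time_waited DESC;
```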

v$session_wait (9i)
Shows the current wait event for each session.
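A minimal query to see what sessions are waiting on right now (again, the idle-event filter is just an example):

```sql
-- Current wait event for each session, longest waits first
SELECT sid,
       event,
       state,
       seconds_in_wait
FROM   v$session_wait
WHERE  event NOT LIKE '%message%'
ORDER  BY seconds_in_wait DESC;
```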

This is one of my favourite scripts and shows an example of OWI usage. I've been using it to analyze a performance issue with PeopleSoft: disk contention.

SELECT substr(a.client_info,1,40) AS "ClientInfo",
       ((sysdate - a.logon_time)*24*60) AS "Sess_TIME",
       b."TimeWaited_Minutes"
FROM v$session a,
     (SELECT c.sid,
             TO_CHAR((c.time_waited/100)/60,'9,990.999') AS "TimeWaited_Minutes"
      FROM v$session_event c
      WHERE c.EVENT = 'db file sequential read') b
WHERE a.username = '{PS owner username}'
AND a.status = 'ACTIVE'
AND a.SID = b.sid

v$event_histogram (10g)
Shows instance-wide histograms for every wait event, organized in time-interval buckets that grow as 2^n milliseconds. For instance:

EVENT#  EVENT            WAIT_TIME_MILLI  WAIT_COUNT
    11  Log archive I/O                1       29069
    11  Log archive I/O                2         115
    11  Log archive I/O                4         211
    11  Log archive I/O                8         391
    11  Log archive I/O               16      140973
    11  Log archive I/O               32      103846
    11  Log archive I/O               64       21678
    11  Log archive I/O              128        4953
    11  Log archive I/O              256         822
    11  Log archive I/O              512          31
It's important to mention that the initialization parameter TIMED_STATISTICS must be set to TRUE, and that these statistics are lost (reset) whenever the database is shut down and restarted.
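To pull the histogram for a single event, something like this sketch works (the event name is just an example):

```sql
-- Bucket distribution for single-block reads since instance startup;
-- each bucket counts waits that took up to WAIT_TIME_MILLI milliseconds
SELECT event,
       wait_time_milli,
       wait_count
FROM   v$event_histogram
WHERE  event = 'db file sequential read'
ORDER  BY wait_time_milli;
```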

v$eventmetric (10g)
With this view you'll be able to see the last 60 seconds of wait-event metrics, which gives a more recent time frame than v$event_histogram. This is a small sample of its content (the columns are BEGIN_TIME, END_TIME, INTSIZE_CSEC, EVENT#, EVENT_ID, NUM_SESS_WAITING, TIME_WAITED and WAIT_COUNT):

29/02/2008 11:49:48 a.m. 29/02/2008 11:50:47 a.m. 5960 0 2516578839 0 0 0
29/02/2008 11:49:48 a.m. 29/02/2008 11:50:47 a.m. 5960 1 3539483025 1 5564.8417 32
29/02/2008 11:49:48 a.m. 29/02/2008 11:50:47 a.m. 5960 2 3934444552 0 0 0
29/02/2008 11:49:48 a.m. 29/02/2008 11:50:47 a.m. 5960 3 866018717 10 50716.6575 9871
29/02/2008 11:49:48 a.m. 29/02/2008 11:50:47 a.m. 5960 4 3083157888 0 0 0
29/02/2008 11:49:48 a.m. 29/02/2008 11:50:47 a.m. 5960 5 2324796046 0 0 0

You'll get more meaningful output if you use (or adapt) the following script:

SELECT to_char(a.BEGIN_TIME,'DD-MON-YYYY') as BeginDay,
       to_char(a.BEGIN_TIME,'HH24:MI:SS') as BeginTime,
       to_char(a.END_TIME, 'HH24:MI:SS' ) as EndTime,
       b.name as EventName,
       a.time_waited
FROM v$eventmetric a, v$event_name b
WHERE a.event_id = b.event_id
AND a.time_waited > 0
ORDER BY a.time_waited DESC

Master these tables and you'll be invincible when fighting the Dark Side.

Subscribe to Oracle Database Disected by Email

Sunday, August 19, 2007

The squeezed dump


Sometimes you'll need to move data from one database to another, or between platforms. If you use the old export/import duo, there are workarounds to split big dump files into smaller pieces... but what if, even in smaller pieces, the file is still unmanageable?

There is a workaround on Unix and Linux platforms: pipes and I/O redirection.

These simple scripts allow you to compress and decompress dump files on the fly.


# mknod exp.pipe p
# gzip < ./exp.pipe > /backups/export.dmp.gz &
# exp user/password full=y file=exp.pipe log=export.lis statistics=none direct=y consistent=y


# mknod imp.pipe p
# gunzip < /backups/export.dmp.gz > imp.pipe &
# imp file=imp.pipe fromuser=dbuser touser=dbuser log=import.lis commit=y

Important: every program must be on your PATH environment variable; otherwise, find where mknod, gzip/gunzip and exp/imp are located and modify these scripts to use absolute references.
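A quick way to check is the small sketch below (exp/imp normally live under $ORACLE_HOME/bin, so they are left out of the loop):

```shell
# Print the absolute path of each utility the pipe scripts rely on;
# anything missing from PATH is reported instead
for cmd in mknod gzip gunzip; do
  command -v "$cmd" || echo "$cmd: not found in PATH"
done
```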

I've taken statistics on the resulting file sizes: compressed dumps come out at between 10% and 20% of the original size.




Saturday, July 21, 2007

Compression ... expensive or cheap?

Given the current super-availability of CPU power and hard-disk terabytes, answering that question is not an easy task.

We may take the point of view of the information explosion: it seems never-ending, or at least that's the loud complaint the businesses and companies we work for make every few years, when they need to double or triple their SAN (read: any-storage-device-you-own), spend some million bucks... and save up for the next loop. But it's a fact to be faced: information is being generated almost compulsively, even by devices that a couple of years ago were merely information 'mutes'.

The scenario seems to get more complicated for years to come...

What if we look at the recent direction microprocessor development has taken: multicore processors, devices with 2, 4 or n independent processing units in one package. Visionary projects like Intel Terascale or the IBM-Sony-Toshiba Cell will provide tremendous processing power.

And this huge leap forward puts processing growth well ahead of storage growth. Such powerhouses may help mitigate the information avalanche we live in... what do you think?

Before 11g

Yes, we may talk about compression before 11g. Of course there isn't much noise about it, because it's an incomplete feature in 10g, with restrictions that make it almost useless for regular OLTP applications: it only works when data is inserted in direct-path mode, that is, bulk insert and Import Data Pump (sorry, regular import doesn't have it).

But not everything is lost; there are some scenarios where it can still be useful, very useful I must say:

1) Cut significantly storage for Historic/Archive and Testing/Development environments
2) Reduce IO (and increase performance) on Datawarehousing/BI applications

I've done comparative tests showing that 10g compression gives a very good ratio, reducing storage usage by 45%-60%. That means you may double your SAN's "space-life" or slice the storage invoice, especially in scenario (1), where you fight fierce battles for a budget that won't support the business' direct operations.
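For scenario (1), the idea looks roughly like this (the table names are hypothetical; remember that in 10g only direct-path-inserted rows end up compressed):

```sql
-- Rebuild a historic table with compression (hypothetical table name)
ALTER TABLE hist_orders MOVE COMPRESS;

-- Only direct-path (APPEND) inserts keep the compressed format in 10g
INSERT /*+ APPEND */ INTO hist_orders
SELECT * FROM orders
WHERE  order_date < ADD_MONTHS(SYSDATE, -24);
COMMIT;
```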

With 11g

Oracle 11g offers data compression, but what are the benefits? What do we have to pay?

I've repeated the test under Oracle for Linux and got an 80% compression rate, using regular import first and Data Pump later; remember that regular import does not use direct-path insert, which is why you should alter your tables for compression under any operation.
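Enabling compression for conventional DML in 11gR1 is a one-liner; this sketch uses a hypothetical table name (in 11gR2 the clause was renamed COMPRESS FOR OLTP):

```sql
-- 11gR1: compress blocks for all operations, not just direct-path loads
ALTER TABLE hist_orders MOVE COMPRESS FOR ALL OPERATIONS;
```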

My test bed is a Dell 2650 with 2 Xeon processors running at 2.4GHz, 512KB L2 cache, 4GB RAM and a 400GB Ultra SCSI disk; the OS selected is SuSE 10 SP1 for i386 (no need for more than 4GB of memory space... unavailable anyway!).

I used a copy of a test-environment database with more than 27,800 tables, ranging from millions of rows to zero rows... you guessed it: BaaN. After table creation, space used was 1.83GB for both 10gR2 and 11gR1. The following table shows the results of this test, trying all basic scenarios.

Additional CPU overhead? Apparently not; it just takes more time to load your data. See the following graph:

...To be continued: Are you sure it takes longer? Look through these multifocals.
