Tuesday, November 23, 2010

From 3TB to 1.5TB Using Standard COMPRESS and PCTFREE


I've been working on a migration project with very aggressive dates and a couple of requirements that raise the difficulty: switch from filesystem to ASM, using only 50% of new storage... yes, you got the picture right: we don't have a full set of disks for ASM to fit all the data, so I need to move data in a way that releases complete datafiles, freeing those logical volumes or LUNs so they can later be added to ASM.

Anyway, I realized that since this is a DSS or data warehouse, there is some margin to increase "data density" per block; in other words, to reduce the amount of space wasted within each data block by lowering PCTFREE.

Another opportunity to reduce storage usage came from the fact that the customer is upgrading from 10.2.0.4 to 11.1.0.7, so I can use the improved STANDARD compression features on DATA!!!

The final touch came from a long-standing (and sometimes under-used) feature: Index Key Compression. If you want more detail, see my posts from 2008 about this feature: Index Key Compression - part I, Index Key Compression - part II and Index Key Compression - part III. Thankfully, the customer provided an environment dedicated to this project, so I was able to run some very expensive analysis on index compression, with this strategy in mind: choose the biggest indexes to compress, then run ANALYZE INDEX ... VALIDATE STRUCTURE OFFLINE and SELECT * FROM INDEX_STATS for each candidate. (Please note you can consider not only massive indexes, but heavily used ones too... for that you need to get usage info from V$SEGMENT_STATISTICS.) Some indexes compressed very well, with 40%-50% space savings, while others didn't get such good rates, only 4% or 6%; it just depends on the nature and distribution of the information.
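If you want to run that same analysis, here is a minimal sketch (OWNER.INDEX_NAME is just a placeholder; also note that INDEX_STATS only holds one row, for the last index analyzed in your session):

ANALYZE INDEX owner.index_name VALIDATE STRUCTURE OFFLINE;

-- OPT_CMPR_COUNT is Oracle's suggested prefix length, OPT_CMPR_PCTSAVE the estimated saving
SELECT name, height, lf_rows, opt_cmpr_count, opt_cmpr_pctsave FROM index_stats;

-- and to spot the heavily used candidates mentioned above:
SELECT object_name, value FROM v$segment_statistics WHERE statistic_name = 'logical reads' ORDER BY value DESC;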

First, I got all my tables with their current allocated space from DBA_SEGMENTS, filled a spreadsheet with this info, and added extra columns for the PARALLEL degree for the move, a compression flag, and so on. Using a formula, I was able to generate 3000+ statements like this one:

ALTER TABLE ... MOVE PCTFREE 2 TABLESPACE DAT_ASM_HP NOLOGGING COMPRESS PARALLEL 8;

or like this one for indexes:

ALTER INDEX ... REBUILD TABLESPACE IDX_AHP COMPRESS 1 PARALLEL 8 NOLOGGING PCTFREE 2;
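In case you want to build the same spreadsheet, a seed query along these lines will do (the rounding to GB and the segment type filter are just my choices, adjust to taste):

SELECT owner, segment_name, segment_type, ROUND(bytes/1024/1024/1024, 2) AS gb
FROM dba_segments
WHERE segment_type IN ('TABLE', 'INDEX')
ORDER BY bytes DESC;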


Well, the final figures were:

Tables: starting 1.9TB, ending 1.0TB
Indexes: starting 1.0TB, ending 0.5TB

I have to remark that these space reductions:
1) Depend on this particular data distribution and data patterns
2) After an index rebuild, as time passes and new info gets inserted, indexes will tend to "stabilize": block splits will reintroduce free space (see this post on Index Dynamics)
3) Inserts and updates on tables will tend to "expand" some data, therefore the next step will be to implement Partitioning to isolate some regions of data blocks AND end-of-period maintenance tasks in order to keep data compressed (see the sketch below)
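As a hypothetical illustration of such an end-of-period task (the table and partition names here are made up), once a period is closed you can re-compress just that partition and rebuild its local indexes:

-- re-pack the closed period with the same settings used in the migration
ALTER TABLE sales MOVE PARTITION p_2010_10 PCTFREE 2 NOLOGGING COMPRESS PARALLEL 8;

-- the move marks local index partitions UNUSABLE, so rebuild them
ALTER TABLE sales MODIFY PARTITION p_2010_10 REBUILD UNUSABLE LOCAL INDEXES;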

Well, if you want to follow this route, please share in the comments section how it went for your database or what your findings were; we will appreciate hearing (or better said, reading) from you!

Thanks for your time, and have a good one...
