Tuesday, November 30, 2010

My Steps to Implement Exadata EHCC


The last time I was engaged in an Exadata migration, the customer asked me about EHCC (Exadata Hybrid Columnar Compression) and how to implement it for their datamarts.

My approach (at that time) consisted of the following:
1) Get a sample of very big objects to play with
2) Use DBMS_COMPRESSION to get estimates of the achievable compression ratios (see the sketch right after this list)
3) Try every compression type (Basic, Query Low, Query High, Archive Low and Archive High) and record the size reduction, the time to import and the time for a full table scan (FTS); the CTAS sketch below shows one way to build the test copies
4) Record and compare timings for the customer's batch processes and important queries
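
For step 2, here is a minimal sketch of the kind of call I mean, using the 11.2 DBMS_COMPRESSION API; the schema SALES_OWNER, table SALES_FACT and scratch tablespace SCRATCH_TS are placeholders you would replace with your own objects:

SET SERVEROUTPUT ON
DECLARE
  l_blkcnt_cmp    PLS_INTEGER;
  l_blkcnt_uncmp  PLS_INTEGER;
  l_row_cmp       PLS_INTEGER;
  l_row_uncmp     PLS_INTEGER;
  l_cmp_ratio     NUMBER;
  l_comptype_str  VARCHAR2(100);
BEGIN
  DBMS_COMPRESSION.GET_COMPRESSION_RATIO(
    scratchtbsname => 'SCRATCH_TS',   -- scratch tablespace for the sample copy
    ownname        => 'SALES_OWNER',  -- placeholder schema
    tabname        => 'SALES_FACT',   -- placeholder table
    partname       => NULL,           -- or a specific partition name
    comptype       => DBMS_COMPRESSION.COMP_FOR_QUERY_HIGH,
    blkcnt_cmp     => l_blkcnt_cmp,
    blkcnt_uncmp   => l_blkcnt_uncmp,
    row_cmp        => l_row_cmp,
    row_uncmp      => l_row_uncmp,
    cmp_ratio      => l_cmp_ratio,
    comptype_str   => l_comptype_str);
  DBMS_OUTPUT.PUT_LINE(l_comptype_str || ' estimated ratio: ' || l_cmp_ratio);
END;
/

Repeat the call with the other constants (COMP_FOR_QUERY_LOW, COMP_FOR_ARCHIVE_LOW, COMP_FOR_ARCHIVE_HIGH) to get the full picture before loading anything.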
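
For step 3, one quick way to produce the test copies is CTAS, which is itself a direct-path operation; sales_qh and sales_fact are again placeholder names:

CREATE TABLE sales_qh
  COMPRESS FOR QUERY HIGH  -- swap in BASIC, FOR QUERY LOW, FOR ARCHIVE LOW/HIGH
  PARALLEL
AS SELECT * FROM sales_owner.sales_fact;

-- then time a full scan against each copy
SELECT /*+ FULL(t) */ COUNT(*) FROM sales_qh t;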

With all those statistics, I went back to the customer, who was then able to make a decision based on the nature of the data, the most frequently queried timeframe within the data window, the parallel degree, the partitioning scheme (which, by the way, the customer already had in place: best practice!), and so on.

I used Data Pump for the imports because it does direct-path loads, and Basic compression needs direct path in order to kick in.
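
As a sketch (the directory object, dumpfile and table names are hypothetical), pre-create the table with the desired COMPRESS clause and then load only the data:

impdp sales_owner DIRECTORY=dp_dir DUMPFILE=sales_fact.dmp \
  TABLES=SALES_FACT CONTENT=DATA_ONLY TABLE_EXISTS_ACTION=APPEND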

Further thinking on this matter resulted in proposals to also include table usage (V$SEGMENT_STATISTICS) and table data dynamics (monitoring plus DBA_TAB_MODIFICATIONS) in order to fine-tune the compression mode selected; something along the lines of the queries sketched below. The next time I have the opportunity to include this in the process, I'll share the results with you...
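
The object names here are placeholders once more:

-- how the segment is actually used
SELECT statistic_name, value
  FROM v$segment_statistics
 WHERE owner = 'SALES_OWNER'
   AND object_name = 'SALES_FACT'
   AND statistic_name IN ('physical reads', 'logical reads', 'db block changes');

-- how much the data still moves (monitoring is on by default since 10g)
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO
SELECT inserts, updates, deletes, truncated
  FROM dba_tab_modifications
 WHERE table_owner = 'SALES_OWNER'
   AND table_name  = 'SALES_FACT';

The idea: a heavily updated table is a poor EHCC candidate (updated rows get migrated out to OLTP-compressed blocks), so high 'db block changes' or UPDATES figures would push me toward the Query levels or no EHCC at all, while a read-mostly, rarely touched partition is a natural Archive candidate.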

Thank you very much for your time, and if you liked this post, please share it with others freely...
