Sunday, August 19, 2007

The squeezed dump

See this article in Spanish

Sometimes you'll need to move data from one database to another, or between platforms. If you use the old export/import duo, there are workarounds to split big dump files into smaller pieces... but what if, even split into smaller pieces, my file is still unmanageable?

There is a workaround on Unix and Linux platforms: pipes and I/O redirection.
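The idea in a nutshell: a named pipe looks like a file, but anything written to it streams directly into whatever process is reading it, so the data never lands on disk uncompressed. A minimal demo of the pattern (any Unix shell; assumes gzip is installed — file names are just for illustration):

```shell
# Create a named pipe (FIFO); it takes no disk space
mkfifo demo.pipe

# Reader: compress whatever arrives through the pipe, in the background
gzip < demo.pipe > demo.txt.gz &

# Writer: the data streams through the pipe straight into gzip
echo "hello through the pipe" > demo.pipe
wait

# Read the compressed result back; prints: hello through the pipe
gunzip -c demo.txt.gz
rm demo.pipe demo.txt.gz
```

The exp/imp scripts below use exactly this pattern, with the Oracle utilities taking the place of echo.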

These simple scripts let you compress and decompress dump files on the fly:


# mknod exp.pipe p
# gzip < ./exp.pipe > /backups/export.dmp.gz &
# exp user/password full=y file=exp.pipe \
    log=export.lis statistics=none direct=y consistent=y


# mknod imp.pipe p
# gunzip < /backups/export.dmp.gz > imp.pipe &
# imp file=imp.pipe fromuser=dbuser touser=dbuser log=import.lis commit=y

Important: every program used here must be reachable through your PATH environment variable; otherwise, find where mknod, gzip/gunzip and exp/imp are located and rewrite these scripts with absolute paths.
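A quick way to check where those binaries live (the printed paths will vary by system; the $ORACLE_HOME hint is just a common convention):

```shell
# Print the full path of each tool, if it is in PATH
command -v mknod gzip gunzip

# exp/imp normally live under the Oracle home; hint at it if they are missing
command -v exp imp || echo "exp/imp not found: check \$ORACLE_HOME/bin"
```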

I've measured the resulting file sizes: compressed dumps come out between 10% and 20% of the original size.
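If you happen to have both an uncompressed dump and its gzipped copy, you can check the ratio yourself (file names follow the scripts above; wc -c is used instead of stat for portability across Unix flavors):

```shell
# Compare an uncompressed dump with its gzipped twin
orig=$(wc -c < export.dmp)
comp=$(wc -c < /backups/export.dmp.gz)
echo "compressed file is $((100 * comp / orig))% of the original"
```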





My Thoughts said...

Good one!

Ignacio Ruiz said...

Thank you! hope it is useful... I use it daily for test/dev refresh.
