
Sometimes you need to move data from one database to another, or simply between platforms. If you use the old export/import duo, there are workarounds to split big dump files into smaller pieces... but what if, even in smaller pieces, the files are still unmanageable?
There is a workaround when working on Unix and Linux platforms: named pipes and I/O redirection.
These simple scripts let you compress and decompress dump files on the fly.
Export
# mknod exp.pipe p
# gzip < ./exp.pipe > /backups/export.dmp.gz &
# exp user/password full=y file=exp.pipe log=export.lis statistics=none direct=y consistent=y
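Once exp finishes, the background gzip reads end-of-file from the pipe and exits on its own. As an extra, optional check that isn't part of the original listing, you can wait for it and verify the compressed dump before moving it anywhere:
# wait
# gzip -t /backups/export.dmp.gz && echo "compressed dump OK"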
Import
# mknod imp.pipe p
# gunzip < /backups/export.dmp.gz > imp.pipe &
# imp file=imp.pipe fromuser=dbuser touser=dbuser log=import.lis commit=y
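One detail worth mentioning: the named pipe stays on disk as an ordinary filesystem entry after the import finishes, so you may want to remove it when you're done (the same applies to exp.pipe on the export side):
# rm imp.pipe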
Important: the directories containing these programs must be in your PATH environment variable; otherwise, find where mknod, gzip/gunzip and exp/imp are located and change these scripts to use absolute paths.
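A quick way to check this, assuming a bash or ksh shell and a standard Oracle installation where exp and imp live under $ORACLE_HOME/bin (an assumption, adjust to your environment), is something along these lines:
# which mknod gzip gunzip
# export PATH=$ORACLE_HOME/bin:$PATH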
I've collected statistics on the resulting file sizes, and the compressed dumps come out at 10% to 20% of the original size; in other words, a 10 GB dump typically shrinks to roughly 1 to 2 GB.
