
May 14, 2010


Joshua Morast

For backups + dedupe, take a look at Avamar. Amazing bit of technology from EMC.

Dedupe takes place at the host, so backups don't take all that long if you've got a bit of spare CPU and memory.


Thank you so much for your valuable information.

Terry Lewis


Wouldn't it be even faster if you used block change tracking? I've done this in almost all my RMAN databases, as described on this blog: http://www.beyondoracle.com/2008/11/24/fast_incremental_backups/

The problem for me is that I need even more speed for my terabyte database backups...
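For reference, block change tracking is enabled with a single SQL statement; with it in place, level 1 incrementals read only the changed blocks instead of scanning every datafile. The tracking-file path below is a placeholder:

```
-- Enable block change tracking (path is a placeholder)
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/oradata/bct.f';

-- Verify it is active
SELECT status, filename FROM v$block_change_tracking;
```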



This is an interesting idea, and I've been testing it on dumb storage while our new Data Domain comes online. One issue I've been running into is that RMAN recovers the fastcopy (a physical copy, in my setup) rather than the original copy. In other words, I'm seeing the following:

1. I run the initial backup incremental level 1 for recover of copy... and RMAN creates the first set of datafile copies in destination X with tag Y.
2. I copy that set of datafile copies from X to destination Z.
3. I catalog the files in destination Z.
4. I run backup incremental level 1 for recover of copy, as in step 1.
5. I run recover copy of database with tag Y and notice that RMAN recovers the set of files in destination Z, rather than those in destination X. In effect, it recovers the set of datafiles I wanted to preserve.

Not sure how to get around this; I suspect the problem is that the files cataloged in step 3 also carry tag Y (RMAN's catalog start with doesn't seem to allow one to change the tags). Any thoughts (other than falling back to user-managed backups)?
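A minimal RMAN sketch of the workflow Rob describes, to make the tag collision concrete — the tag 'Y' and the /X and /Z paths are placeholders, not values from the original post:

```
RUN {
  # Step 1: create/refresh image copies in destination X under tag Y
  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'Y'
    DATABASE FORMAT '/X/%U';
}

# Steps 2-3: the files are copied from X to Z outside RMAN, then cataloged.
# The cataloged copies in Z keep tag Y -- there is no way to retag them here.
CATALOG START WITH '/Z/' NOPROMPT;

# Step 5: RMAN picks the most recently cataloged image copies carrying tag Y,
# which are now the ones in Z, and rolls the incremental into those.
RECOVER COPY OF DATABASE WITH TAG 'Y';
```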



Thanks for the useful information. I have a question. We have set up two NFS mount points off what is basically the same DD storage, to have more visibility. One, /proj/fra, is for the Oracle FRA and is the fastcopy source; the other, /proj/fradumps/, is the fastcopy destination. Oracle sends backups to FRA subdirectories such as /proj/fra/databasename/backupset/DATE, datafile, etc. Basically, I intend to dump daily backups from there to /proj/fradumps/databasename/backupset/DATE. By the way, I figured out that the fastcopy command requires a DD-relative path, rather than an OS path, for the source and destination. What would be the source and destination paths for the fastcopy command in my case?

Ubee Kwon

I encountered the exact same issue as Rob above: RMAN recovers the wrong set of backup files. Any ideas?


I had the same issue as Rob, in that RMAN merged into the backups in destination Z instead of destination X. My workaround was NOT to catalog the Z backups, since they are needed only in case of recovery, not during backup. My other issue was retention policy. My company's policy is to keep backups for 14 days, but with any RMAN retention policy other than 'redundancy 1', RMAN keeps all the backups since database creation along with the merged incremental backup. To resolve this I had to add "until time 'sysdate - x'" to the 'recover copy' command so that RMAN marks backups older than 14 days as obsolete. This makes the whole fastcopy story unnecessary, in my opinion.
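A sketch of the sliding-window variant described above, using a 14-day window to match the stated policy — the tag name is a placeholder:

```
RUN {
  # Roll the image copy forward only to a point 14 days in the past, so the
  # copy stays 14 days behind the newest incrementals and RMAN can mark
  # anything older than the window obsolete
  RECOVER COPY OF DATABASE WITH TAG 'ORA_DF_COPY'
    UNTIL TIME 'SYSDATE - 14';

  BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'ORA_DF_COPY'
    DATABASE;
}
```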


Interesting idea and thanks for taking the time to share the information.

I've done a lot of testing on this and have a few comments.

1. The script given above has some flaws in it. For example, the code below:

if [ ${NEWBACKUP} = Y ]; then
    rm -f ${BACKUP_DIR}/most_recent_full
    echo ${FULL_BACKUP_DIR} > ${BACKUP_DIR}/most_recent_full
fi

This code implies that your most recent full is your last FULL backup, not the last full that you have produced by recovering with the incremental merge. We should always update most_recent_full with the merged full we have just recovered, so that the next incremental is applied to it and NOT to the first FULL you took.
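A minimal sketch of that fix in shell. The paths and variable names here are placeholders standing in for the original script's layout; the point is that most_recent_full is rewritten after every merge, not only when a new full is taken:

```shell
#!/bin/sh
# Placeholder paths standing in for the original script's variables
BACKUP_DIR=/tmp/demo_backup
MERGED_COPY_DIR=${BACKUP_DIR}/merged_copy_20100514

mkdir -p "${MERGED_COPY_DIR}"

# After EVERY "recover copy of database" run -- not just after a new full --
# point most_recent_full at the copy that was just rolled forward, so the
# next incremental merges into it rather than into the first full.
rm -f "${BACKUP_DIR}/most_recent_full"
echo "${MERGED_COPY_DIR}" > "${BACKUP_DIR}/most_recent_full"
```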

2. The other thing concerns the discussion above about cataloging the backups and the way the incremental is applied to merge into a full. Not cataloging your RMAN backups is, I believe, an ugly solution. To be able to catalog your backups, you can do the following.

Make two fastcopies of your recent full: one with a DBNAME-retain.date extension and the other with a DBNAME-nextdaycopy.date extension. Catalog DBNAME-retain.date first and then DBNAME-nextdaycopy.date. The order here is very important: catalog the -retain copy first, then the -nextdaycopy. Now, update your most_recent_full file with the location of DBNAME-nextdaycopy.date. Note that the tag for both of these copies is still the same. Now that you have two fulls cataloged in your RMAN catalog, the next time you apply the incremental it will be applied to the last cataloged full backup, which is DBNAME-nextdaycopy. This way, you still have DBNAME-retain.date in your catalog.

This should resolve the issue with cataloging your backups.
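A sketch of those steps with illustrative names — the DBNAME, dates, NFS mount paths, and the Data Domain fastcopy syntax below are all assumptions (check your DD OS release for the exact fastcopy command and path form):

```
# On the Data Domain (syntax illustrative -- paths are DD-internal):
filesys fastcopy source /backup/PROD-full destination /backup/PROD-retain.20100514
filesys fastcopy source /backup/PROD-full destination /backup/PROD-nextdaycopy.20100514

# In RMAN: catalog the -retain copy FIRST, then the -nextdaycopy, so the
# next incremental merges into the nextdaycopy set while -retain is preserved
CATALOG START WITH '/nfs/PROD-retain.20100514/' NOPROMPT;
CATALOG START WITH '/nfs/PROD-nextdaycopy.20100514/' NOPROMPT;
```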


Also, using dNFS (if your Oracle version is 11.2) will improve the performance of these RMAN backups by at least 10-20%.
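For what it's worth, on 11g R2 Direct NFS is enabled by relinking the Oracle binary — a sketch, assuming a standard $ORACLE_HOME layout (run as the Oracle software owner with the instances down):

```
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on
```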



disclaimer: The opinions expressed here are my personal opinions. I am a blogger who works at EMC, not an EMC blogger. This is my blog, and not EMC's. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.