
JCL & VSAM :: RE: Using DFDSS for incremental backups

Author: Pete Wilson
Posted: Tue Jun 07, 2016 11:57 am (GMT 5.5)

What you're being asked to do is much more complicated than you or management realise. You also don't say whether you're planning to move to DFHSM instead of FDR/ABR, which does make a difference.

FDR/ABR is a properly developed product that tracks and maintains its backup copies, as well as Archive copies if that feature is enabled. Developing an equivalent with DFDSS will be very involved. FDR/ABR is also not just a backup tool: it does archiving with auto-recall, it has the incredibly powerful FDREPORT built in, and it includes the Compaktor functions. It is more flexible than DFDSS and DFHSM in function and scheduling ability, and it is A LOT less CPU-hungry than DFHSM. I'd suggest making a very detailed comparison of FDR/ABR vs DFDSS and DFHSM (if applicable), otherwise the move might end up being more costly.
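
To give a feel for what rolling your own involves, here is a minimal sketch of a DFDSS incremental dump step; the dataset mask, backup dataset name and output unit are assumptions, and a real implementation would still need all the version management, indexing and error handling discussed below. The BY((DSCHA,EQ,YES)) filter selects only datasets with the change flag on, and RESET turns the flag off after a successful dump:

//INCRBKUP EXEC PGM=ADRDSSU,REGION=0M
//SYSPRINT DD SYSOUT=*
//BACKUP   DD DSN=BKUP.PROD.INCR.D160607,
//            DISP=(NEW,CATLG,DELETE),UNIT=TAPE
//SYSIN    DD *
  DUMP DATASET(INCLUDE(PROD.**) -
       BY((DSCHA,EQ,YES))) -
       OUTDDNAME(BACKUP) -
       RESET -
       TOLERATE(ENQFAILURE)
/*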

Some considerations:
1. What if some datasets are migrated/archived? DFDSS will not back those up and in some cases won't even warn you that's the case unless you use a SET PATCH command.
2. You will need to have periodic FULL backups with incrementals in between to cater for loss of backups. These will need to be managed to ensure obsolete versions are released.
3. Will the backups be to GDGs, or will you have to generate your own naming scheme and some sort of version management? (A GDG sketch follows this list.)
4. How will the backups be retrieved if needed? You will need some sort of indexing record and an automated means to recover the correct version from the correct backup. Who will do the restores?
5. How will you manage the size and number of backups that can run concurrently? You don't want any one backup to be too large or they become unwieldy. You possibly don't want to be running too many concurrently. The scheduling could be quite complex to fit in with Application schedules.
6. Whatever design you make has to cater for massive future growth.
7. How do you decide where the backups get created? Will it always be tape, or could some go to DASD if they're small? Each option requires different management techniques.
8. If required, how will your backups be replicated across to the Disaster Recovery sites and properly managed there?
9. How do you prove you're backing up everything that is required? Sometimes you get datasets that are never opened (so the change flag is not on) but still need a backup in case they're expired, because they may be referenced in JCL somewhere.
10. What is the scope of the backup? Does it need to include DB2 for example when there are Image Copies being taken?
11. Will the backup be driven mainly by dataset name, by volume or storage group, or a combination?
12. Some datasets, such as zFS's, may need special treatment, for example using Concurrent Copy to minimise the enqueue time against them.
13. There are some benefits in using DFDSS COPY with Fast Replication to create backup copies on DASD, but this would require another layer of management for the naming standards. The benefit, though, is that the backup version is accessible as a normal dataset and doesn't have to be restored, although it could be migrated. The copying can be extremely quick with FlashCopy. (FlashCopy only works intra-controller though.) A sketch of this follows the list.
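
On point 3, if you go the GDG route, the setup might look something like the sketch below; the base name and LIMIT are just assumptions. Each backup run then writes to the (+1) generation and the LIMIT value rolls obsolete generations off automatically:

//DEFGDG   EXEC PGM=IDCAMS
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  DEFINE GDG(NAME(BKUP.PROD.INCR) -
             LIMIT(30) -
             SCRATCH)
/*

The BACKUP DD in the dump job would then point at BKUP.PROD.INCR(+1) with DISP=(NEW,CATLG,DELETE).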
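
And on point 13, a sketch of the DFDSS COPY approach, assuming your SMS ACS routines route the renamed copies somewhere sensible; FASTREPLICATION(PREFERRED) uses FlashCopy when source and target sit behind the same controller and falls back to a normal copy otherwise:

//FRCOPY   EXEC PGM=ADRDSSU
//SYSPRINT DD SYSOUT=*
//SYSIN    DD *
  COPY DATASET(INCLUDE(PROD.APPL.**)) -
       RENAMEU((PROD.APPL.**,BKUP.APPL.**)) -
       FASTREPLICATION(PREFERRED) -
       TOLERATE(ENQFAILURE)
/*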

Another option you might consider is using ABARS for your backups. This is initiated through DFHSM and has some advantages, such as the ability to include migrated datasets and tape datasets in the backups.
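
If you look at ABARS, the backup is driven by an aggregate group defined through ISMF that points to one or more selection data sets; the group name and masks below are only placeholders. The selection data set lists what goes into the aggregate, for example:

  INCLUDE(PROD.PAYROLL.** -
          PROD.HR.MASTER.DATA)
  ACCOMPANY(PROD.PAYROLL.TAPE.ARCHIVE)

and the backup itself is then kicked off with the DFHSM ABACKUP command, e.g. HSENDCMD ABACKUP PAYAGG EXECUTE from TSO.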

Or you could just use the DFHSM incremental backup facility if you have DFHSM.
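
For completeness, the DFHSM route is mostly parmlib setup rather than job design. A minimal ARCCMDxx sketch might look like the following; the cycle, start times, versions and volume are assumptions, and SMS-managed data would be governed by the management class BACKUP attributes rather than ADDVOL:

  SETSYS BACKUP(TAPE)
  SETSYS VERSIONS(5) FREQUENCY(1)
  DEFINE BACKUP(Y CYCLESTARTDATE(2016/06/07))
  SETSYS AUTOBACKUPSTART(0100 0130 0400)
  ADDVOL PRD001 UNIT(3390) PRIMARY(AUTOBACKUP)
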
_________________
DinoZos

