Hmmm ... I've not heard of it, but of course that doesn't mean anything, I only know what I know! If it is backing up to disk then my gut feeling is that I would much prefer to be using native SQL Server backups for that part. The replication off-site is a different issue. We use RoboCopy (that's a Microsoft utility) for that, but ours is a string-and-gum batch file that we wrote ourselves, so it doesn't really qualify as "bullet proof" without decent testing and some knowledge. RoboCopy has a MIRROR option (/MIR) - it will maintain, on the Destination, an identical file structure to the Source, so as new backup files are created they are copied, and as old ones are purged they are deleted from the Destination. It can also throttle the copying, so as not to hog bandwidth.
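For what it's worth, the guts of such a batch file can boil down to a single command along these lines (the paths here are invented, and you'd want to test the switches against your own needs - this is a sketch, not our production script):

```
rem Mirror the local backup folder to the off-site share.
rem /MIR      = mirror the tree (copies new files, deletes purged ones)
rem /IPG:50   = wait 50ms between packets, to throttle bandwidth usage
rem /R:5 /W:30 = retry 5 times, 30 seconds apart, on locked files
rem /LOG+:... = append to a log file so you can audit what was copied
robocopy D:\SQLBackups \\OffsiteServer\SQLBackups /MIR /IPG:50 /R:5 /W:30 /LOG+:D:\Logs\robocopy.log
```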
A normal Log Backup achieves that. The log file itself is written to / appended sequentially. Once the log is backed up, all the blocks included in the backup are marked as "free" and available for reuse. (Note that there may be some additional blocks, written at the very end of the log file as the log backup finishes, or which were not yet committed, which are not released - but this would only be an issue if you needed to Shrink the log file, because they are at the very end, and generally it's a bad idea to do that anyway, as repeated Shrink / Grow of the log fragments the file.) Those extra blocks [at the end of the file] will be in the next log backup, and will then be marked for reuse.
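For reference, the native version of that is a one-liner (database name and path are placeholders, and I'm assuming the database is in FULL recovery model):

```sql
-- Back up the transaction log; on completion the backed-up portion
-- of the log is marked reusable (the file itself is NOT shrunk).
BACKUP LOG [YourDatabase]
    TO DISK = N'D:\SQLBackups\YourDatabase_Log.trn'
    WITH CHECKSUM, STATS = 10;
```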
Note that the Log Space will not be reused if something else needs that log data - I'm thinking that some types of Replication may also be using the Log File pages, so until "everyone" has finished with a log page it won't get reused. I only mention that for completeness. Log Shipping (for example) should be able to use the same Log File Backups for both the Shipping and for your physical secure backups (not sure how Commvault would fit into that picture). You can build your own log shipping system, e.g. using Commvault's Log backups (I've done it using SQL's Log backups, it's not very hard), but obviously it is easier to use the built-in Wizards in SQL Server to set up and maintain Log Shipping.
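You can see what, if anything, is currently preventing log space from being reused with a standard DMV query (nothing Commvault-specific here):

```sql
-- NOTHING / LOG_BACKUP are the usual values; REPLICATION,
-- ACTIVE_TRANSACTION etc. mean something is still holding the log.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'YourDatabase';
```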
Indeed! The key to that will be increasing the frequency of the Log Backups. If you have 1TB a day of log file usage, and your backups are once an hour, that's an average individual log backup file of around 42GB (1,000GB / 24). Change the backup to every minute and that falls to around 700MB [on average] (1,000GB / 1,440 minutes).
If you backup every minute you will have lots of log backup files [in a 24 hour period], but the same overall total disk space usage as a once-a-day backup (although there is a little overhead on each file). The other difference is the reduction of lost data when recovering after a "total loss" disaster - assuming all the files are off-site.
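If you do go down the every-minute route you will need unique file names per backup; here's a sketch of the sort of thing a SQL Agent job step could run (the naming convention is just my habit, adjust to taste):

```sql
-- Build a timestamped file name, e.g. YourDatabase_20240101_1203.trn,
-- so each minute's log backup lands in its own file.
DECLARE @stamp nvarchar(20) =
      CONVERT(nvarchar(8), GETDATE(), 112)                        -- yyyymmdd
    + N'_'
    + REPLACE(CONVERT(nvarchar(5), GETDATE(), 108), N':', N'');   -- hhmm

DECLARE @file nvarchar(260) =
    N'D:\SQLBackups\YourDatabase_' + @stamp + N'.trn';

BACKUP LOG [YourDatabase] TO DISK = @file WITH CHECKSUM;
```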
Possible point of interest: after a disaster, if you can still access your SQL Server, you may be able to take a "Tail Log Backup" (i.e. even if the Data MDF file is damaged beyond use / repair). In that situation you would prevent any further access / connection / etc. to the DB, take the Tail-Log Backup, then restore the last known good FULL backup and all the LOG backups since, including the Tail-Log Backup, and have zero data loss. The chances are good (unless the server / building has been destroyed) that, in the event of severe damage to the Data File, the Log File will still be in good shape for a tail log backup in this way (worth making sure that your storage guys are indeed segregating all your Data and Log files onto different spindles / controllers / etc. - I'm not a hardware guy, so my terminology may be way-off, but I'm sure you get the idea ...)
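If it helps, the shape of that in T-SQL is roughly this (database and file names are placeholders; NO_TRUNCATE is the important bit, it lets the log backup run even though the data file is damaged):

```sql
-- 1. Tail-log backup: works even if the MDF is damaged or missing.
BACKUP LOG [YourDatabase]
    TO DISK = N'D:\SQLBackups\YourDatabase_Tail.trn'
    WITH NO_TRUNCATE;

-- 2. Restore the last known good FULL backup, leaving the DB restoring.
RESTORE DATABASE [YourDatabase]
    FROM DISK = N'D:\SQLBackups\YourDatabase_Full.bak'
    WITH NORECOVERY;

-- 3. Restore every log backup since, in sequence (repeat as needed) ...
RESTORE LOG [YourDatabase]
    FROM DISK = N'D:\SQLBackups\YourDatabase_Log.trn'
    WITH NORECOVERY;

-- 4. ... finishing with the tail-log backup, then bring the DB online.
RESTORE LOG [YourDatabase]
    FROM DISK = N'D:\SQLBackups\YourDatabase_Tail.trn'
    WITH RECOVERY;
```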
If you use Log Shipping direct to the Report Database (rather than to a TEMP that you rename, once a day, to the Report Database) you may find that there is an outage when the Log Restores happen. As I understand it, a Log Restore needs exclusive access to the database, so any report users connected to the (read-only) secondary either block the restore or have to be disconnected while each log is applied. I read something recently along those lines and was surprised to hear it, as it was news to me. Hopefully I have the wrong end of the stick on that point!
Hopefully I've clarified that ... a Transaction Log Backup will mark the pages for reuse, for any transactions that are committed, and assuming nothing else (e.g. replication) is wanting to also use that page. (There is also a COPY_ONLY option to both Database and Log backups, in case you need to make a backup of the Log and NOT purge it. That might be an option if both Commvault and Log Shipping need to, both, independently, make backups of the Log File. I suggest you try to avoid that - it's a fairly large job to back up a log file as big as yours, and having one backup serve both Secure Backup and Log Shipping would be better than two separate ones IMHO !!)
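For reference, COPY_ONLY is just another WITH option; something like this takes a log backup without marking anything for reuse and without disturbing the regular log backup chain:

```sql
-- Copy-only log backup: the log is backed up but NOT marked for
-- reuse, and the regular log backup sequence is left undisturbed.
BACKUP LOG [YourDatabase]
    TO DISK = N'D:\SQLBackups\YourDatabase_CopyOnly.trn'
    WITH COPY_ONLY, CHECKSUM;
```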
If you aren't already doing so, I suggest you stage a Trial Restore periodically (no less than quarterly; monthly would be better; every night, by automated scripts, would be ideal) to prove that you can - and then run DBCC CHECKDB on it to ensure that it is not corrupted. (You probably do not want to run CHECKDB on your production DB, it will take forever and slow the server down, but running CHECKDB on a restored copy checks the same data as running it on the original, so there is no actual need / benefit to run it on the production DB itself.)
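For what it's worth, here's a minimal sketch of such a trial restore (restoring under a different name, with made-up file locations; the logical file names for the MOVE clauses come from running RESTORE FILELISTONLY against your backup):

```sql
-- Restore the latest FULL backup to a throwaway copy ...
RESTORE DATABASE [YourDatabase_Verify]
    FROM DISK = N'D:\SQLBackups\YourDatabase_Full.bak'
    WITH MOVE N'YourDatabase'     TO N'E:\TestRestore\YourDatabase_Verify.mdf',
         MOVE N'YourDatabase_log' TO N'F:\TestRestore\YourDatabase_Verify.ldf',
         REPLACE, RECOVERY;

-- ... then check it for corruption; any errors mean the backup
-- (or the source database) has a problem that needs investigating.
DBCC CHECKDB (N'YourDatabase_Verify') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```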