SQLTeam.com | Weblogs | Forums

What are some options for sending SQL backups to the cloud?


#1

Do any of you leverage the cloud for storage, and if so, what is your approach to backups?

We are reviewing our backup strategy, in particular how to handle our native SQL backups. Currently we perform SQL backups (full and t-log) to disk. At night we back up those SQL backup files to tape using a product called ArcServe, which has to back up over 7TB of backups nightly. We want to leverage the cloud and find an approach where the SQL backups can be sent to the cloud, either using an appliance or third-party software. We would also like to avoid placing agents on our SQL Servers, which is a requirement of DPM.
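For reference, the disk backups described above are just plain native backups - something along these lines (a sketch only; the database name and paths are placeholders, not our real ones):

```sql
-- Nightly full backup to local disk (illustrative; DB name and path are placeholders)
BACKUP DATABASE [SalesDB]
TO DISK = N'D:\SQLBackups\SalesDB_FULL.bak'
WITH COMPRESSION, CHECKSUM, INIT;

-- Transaction log backup, run on a schedule between fulls
BACKUP LOG [SalesDB]
TO DISK = N'D:\SQLBackups\SalesDB_LOG.trn'
WITH COMPRESSION, CHECKSUM, INIT;
```

It is the resulting .bak/.trn files on disk that ArcServe then sweeps to tape each night.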

Thanks


#2

We've replaced tape backup (of all our servers) with a cloud backup. I think we pay about GBP 75 / server / month. There is a saving, for us, of GBP 20 per week for a (permanent) tape, plus quite a lot of "hassle saving" when a tape backup fails and IT have to provide support to diagnose the cause etc. - although perhaps we will find that we have just as many similar issues, and IT support time, with cloud backups.

We back up SQL databases to disk files, as we did in the past, and those files get backed up to the cloud overnight - similar to how they were previously backed up to tape. Cloud backup is relatively new for us, and Stage 2 of that project will be to implement incremental backups during the day (which will include the log backups that we take every 10 minutes). We are not using [the Cloud] agent for SQL backup as (presumably) it would interfere with our differential backups.

We also have a [near real-time] mirror backup (using RoboCopy) which copies any file created in the SQL backup folder(s) - and also mirrors any deletes - onto another server located in a different part of the organisation. The intention is to provide protection against a localised fire, or catastrophic hard disk failure, in the primary server room.
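The 10-minute log backups mentioned above amount to something like this, run from a SQL Agent job (a sketch only; the database name and path are placeholders):

```sql
-- Log backup taken every 10 minutes by a scheduled SQL Agent job
-- (sketch; [OurDB] and the path are placeholders)
DECLARE @file nvarchar(260) =
    N'D:\SQLBackups\OurDB_LOG_'
    + CONVERT(nvarchar(8), GETDATE(), 112)                      -- yyyymmdd
    + N'_'
    + REPLACE(CONVERT(nvarchar(8), GETDATE(), 108), N':', N'')  -- hhmmss
    + N'.trn';

BACKUP LOG [OurDB] TO DISK = @file WITH COMPRESSION, CHECKSUM;
```

Because each log backup lands as a new timestamped file in the backup folder, the RoboCopy mirror and the overnight cloud sweep pick them up without any extra scripting.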

The main reason for Cloud Backup for us is the speed and agility of recovery in the event of a total disaster.


#3

I am currently reviewing our backup options as well. If you are thinking about MS Azure, you might have another option besides going direct from your server to the cloud: an MS appliance called StorSimple. It has local storage which you can mount to your physical server (or VM) through 10Gb iSCSI connections, so it can replace your backup drive, and you can run your standard backup script (native backup) against that drive as if it were your SAN/local disks. The beauty of it is that it can schedule backups within itself and back them up to the cloud (Azure); you can keep a number of copies local on the appliance and set the retention period on cloud storage. It is all handled within the appliance itself, so you don't need to do any of the RoboCopy work or change any of your scripts.

I'm running tests on it and it seems positive so far, so if you are interested, I can share more info with you.


#4

Hi Dennis,

StorSimple is a product that a couple of people on our project have heard of, so I would definitely be interested to hear more about what you have learned regarding StorSimple, and the entire process you are testing. I appreciate whatever input you are able to offer. Leveraging Azure is a big initiative at my company, so we’re very curious to hear what other companies are doing in this area.

Thanks again,

Dave


#5

Hi Kristen,

Is Cloud Backup an actual product you are using to send the SQL backups to the cloud, or are you just stating you perform cloud backups? If it’s a product how do you like it so far, and have you encountered many challenges with your implementation? How large are the backups that you are sending to the cloud? I would love to hear more about your experiences with this approach.

Thanks so much,

Dave


#6

Yes, it is a product but I have no idea what is being used. The guy is on vacation at the moment, I'll check with him when he is back. We are backing up 4 servers, I would guess somewhere between 100GB and 1TB each.


#7

Hi Dave,
Not a problem - I don't mind sharing any info that we have.

Basically, we are testing the StorSimple use case: using it for backup and measuring its throughput compared to our current infrastructure. So far the results are not bad in terms of throughput, but since it's a POC we don't have many servers connected to it, so I cannot say what will happen under full load. My guess is that if we shift the backup windows a bit, it should work well.

Let me know your scope and how you plan to use it, and I can give you more info on what to look out for and what I have tested.

One thing that comes to mind: if you are using VMs, their support for Hyper-V is better (as you might have guessed). By better, I mean that if you want to restore on Azure, you can create a VM, mount the drive, and restore only the files you need. If you are using VMware, you most likely need to pull the whole VMDK back (although it is deduplicated).

Let me know what you want to know and I'll see if I can provide more info.


#8

Our cloud backup provider allows us some sort of snapshots - historically we have kept a weekly backup tape "indefinitely", and we have something similar with the cloud backup. Goodness knows where THEY put all that rubbish! Since we moved to a paperless office we retain everything, forever, as it is too difficult to have people wade through all the emails etc. that they stuffed into the DMS. No idea how to solve that going forwards, but clearly mankind can't, surely, keep everything forever?


#9

StorSimple, as a product, isn't ready for production in my experience. Ours has caused several production outages, not to mention exceptionally slow speeds once adequately utilized.

It's important to be aware that in the '40TB' 8600 model StorSimple, you only get ~22TB of actual local storage. After you exceed that threshold, it begins shifting data to Azure, and the throughput limit, according to Microsoft's own documentation, is 11MB/sec.
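As a back-of-envelope illustration of what that throughput means against the ~7TB nightly volume mentioned in the original post (rough numbers only):

```sql
-- Rough time to move 7 TB through an 11 MB/sec pipe (back-of-envelope)
SELECT (7.0 * 1024 * 1024)   -- 7 TB expressed in MB
       / 11                  -- divided by MB/sec throughput = seconds
       / 3600 / 24           -- seconds -> days
       AS days_to_upload;    -- roughly 7.7 days
```

Nearly eight days to drain one night's worth of backups to the cloud tier - which is why exceeding the local capacity is so painful.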

If you can guarantee yourself access to the entire appliance, it might work for backups (totaling less than 22TB), but seeing as that is unlikely, I highly recommend against it.

And don't get me started on the 20 IOPS virtual StorSimple.

Yes, 20.


#10

Interesting to hear tripodal's experience. We haven't put any production workload on it yet and are still doing a POC on StorSimple. Our usage will be a bit different, though, as I use it purely for backup; normal data/log storage is provided by another storage device, so only the backups will be on it. Thus it will not have that performance issue for DB workloads.

I have heard of the 22TB actual-usage figure, and that seems to be the limit MS gives out. If you are using it for DB workloads as well, maybe it's best to pin the LUN locally, so that it never goes to the Azure tier, which would be super slow. Even the local tier can't be all SSD. When I tested throughput using diskspd it seemed fine, but once I had more VMs running diskspd at the same time, performance dropped a lot as more and more VMs were added.

I would say that if you are using it purely for backup, it should be a good option (for us anyway), as long as you balance cost against performance on the backup tier.