We have several SQL Server 2014 instances with databases ranging from 3 to 8 terabytes. We are upgrading our datacenter and we will be moving these databases to new servers with SQL Server 2022. What would you recommend as the best transfer method to minimize or even eliminate downtime? Should we set up a mirror or replication? Or just bear the few days it takes to copy the database files to the new location?
Once the initial full database backups are restored on the SQL 2022 instances, you can continue log shipping as long as you like, until you're ready to finally switch over. At that time, you'd stop all connections to the SQL 2014 instance, take final transaction log backups there, and restore them on the SQL 2022 instance using the WITH RECOVERY option.
Depending on your transaction log backup frequency and the size of the backup files, this last restore and switchover can be done in under a minute. I've used log shipping for version upgrades many times, and all of them had less than 5 minutes of unavailability.
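To make that concrete, here is a minimal sketch of the final cutover, assuming a hypothetical database named BigDB and a placeholder backup share (adjust names and paths to your environment):

-- On the SQL 2014 primary, after all connections have been stopped: take the final log backup
BACKUP LOG [BigDB] TO DISK = N'\\backupshare\BigDB_final.trn' WITH INIT;

-- On the SQL 2022 instance: apply the final log backup and bring the database online
RESTORE LOG [BigDB] FROM DISK = N'\\backupshare\BigDB_final.trn' WITH RECOVERY;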
I prefer not to go through the backup and restore procedure. Each backup takes approximately 4 to 5 days. Is there any way to do this without backup/restore? Like mirroring the DB and switching over?
Your backups should not be taking days - I back up over 10 TB in about 1.5 hours, and that is just one database on one server.
You cannot mirror or set up an AG across versions - so those are out.
What I would do is (a rough T-SQL sketch of these steps follows below):
Restore the latest backup available - with norecovery
Take a differential 1 to 2 hours prior to cutover - restore that with norecovery
Restore transaction log backups from the differential forward - with norecovery.
At cutover - perform a tail-log backup (this will take the database offline once completed) - restore this log backup with recovery.
Test and validate
Total downtime will just be the amount of time between the tail-log backup and the restore with recovery.
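A rough sketch of those steps, assuming a hypothetical database named BigDB and placeholder backup file paths already copied to the new server:

-- 1. Restore the latest full backup, leaving the database in a restoring state
RESTORE DATABASE [BigDB] FROM DISK = N'D:\Staging\BigDB_full.bak' WITH NORECOVERY, STATS = 5;

-- 2. Restore the differential taken shortly before cutover
RESTORE DATABASE [BigDB] FROM DISK = N'D:\Staging\BigDB_diff.bak' WITH NORECOVERY;

-- 3. Restore each transaction log backup taken after that differential, in sequence
RESTORE LOG [BigDB] FROM DISK = N'D:\Staging\BigDB_log_01.trn' WITH NORECOVERY;
RESTORE LOG [BigDB] FROM DISK = N'D:\Staging\BigDB_log_02.trn' WITH NORECOVERY;

-- 4. At cutover, on the SQL 2014 instance: tail-log backup (leaves the source database in a restoring state)
BACKUP LOG [BigDB] TO DISK = N'D:\Staging\BigDB_tail.trn' WITH NORECOVERY, INIT;

-- 5. On the SQL 2022 instance: apply the tail-log backup and bring the database online
RESTORE LOG [BigDB] FROM DISK = N'D:\Staging\BigDB_tail.trn' WITH RECOVERY;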
Now - if it is taking days to copy the backup files to the new data center - that is a different issue. But the same process can be used - you just need to switch over from daily full backups to weekly full backups so you have time to copy the backup file and perform the restores.
If that timing doesn't work out - then you need to figure out how to get the backup files over to the new data center faster. I have even seen processes where the backups are copied to 'tape' - overnighted to the new data center - and restored from 'tape'.
I agree with Jeff's general approach. That's roughly how we do our migrations. (Although, since we are allowed some down time, we stop all apps/activity to the dbs, take a final diff backup, then shut down the original SQL instance. This allows us to avoid having to take and apply log backups.)
Note, too, that you can apply multiple diff backups with norecovery (as long as you are always going forward, of course).
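For example, with hypothetical file names, a later differential can be applied right over the previous one as long as the database is still in NORECOVERY and both differentials are based on the same full backup:

RESTORE DATABASE [BigDB] FROM DISK = N'D:\Staging\BigDB_diff_tue.bak' WITH NORECOVERY;
RESTORE DATABASE [BigDB] FROM DISK = N'D:\Staging\BigDB_diff_wed.bak' WITH NORECOVERY;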
If you are not already taking advantage of compression for your backups, particularly if the original table data is not page compressed, be sure to look into that as well.
Just to be clear - unless you already have an AG created, you cannot set up an AG with different versions. Your approach would be to create an AG and configure it across data centers - which is not a trivial exercise.
Then - perform an in-place or rolling upgrade from 2014 to 2022.
Since the OP is actually moving to a new location - it is unlikely that time and effort will be spent on setting up the network to allow cross-site communication as well as replicating AD across sites, setting up the firewall rules, routing - etc.
So yeah - an AG or mirroring are not actually possible here without doing a lot of pre-work to build out an environment where that can be done.
Personally - that is a lot of extra work that will all be torn down after the migration.
It's funny how many people say that but never offer some of the "tricks of the trade" to make such things happen. Perhaps you could point someone to a link for an article somewhere?
Question on the side: Do you have any scheduled downtime?
How do you patch the servers (both DB and OS)? If the databases cannot be stopped, do you use any High Availability solution? It is not possible to go without patching servers for 10 years.
We usually create a new separate server; migrate test backups; check everything, get the users' approvals, etc.
Then we work with the business to find a suitable downtime window. We do not migrate all the databases at once; they are migrated one by one.
Some databases can take longer than others, but eventually all will be migrated.
You can start with Dev first and work out the whole procedure step by step.
Last but not least: the decision to migrate the databases to the new release is not because the DBA is bored with the old one and wants something new. Any upgrade is business driven. You provide your requirements and recommendations to management, and they must decide. I can't picture an argument between a DBA and a CFO who needs his applications available every minute.
If you are a manager - then collect the vendor's recommendations, business requirements, potential risks of doing the upgrade vs. not doing the upgrade, etc.
IMHO
Additional way:
Install a SQL Server 2022 instance on the same machine (or copy it almost instantly to a different server using SAN/VM tools). Detach the databases from 2014 and attach them right away to 2022.
The downtime will be pretty short.
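A minimal sketch of that detach/attach, assuming a hypothetical database named BigDB whose files are visible to (or have been copied to) the new server; note that once attached to 2022 the database is upgraded and cannot be reattached to 2014:

-- On the SQL 2014 instance, after dropping all connections
EXEC sp_detach_db @dbname = N'BigDB';

-- On the SQL 2022 instance, pointing at the same (or copied) data and log files
CREATE DATABASE [BigDB]
    ON (FILENAME = N'E:\Data\BigDB.mdf'),
       (FILENAME = N'F:\Logs\BigDB_log.ldf')
    FOR ATTACH;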
You should check each step before the prod implementation to identify potential issues and establish the timing.
No tricks of the trade - the OP stated they have 3 to 8 TB and that it takes 'days' to back up. Sorry - but if it takes days to back up then you have storage and network issues. Nothing you can do in SQL Server will fix that.
Even making the recommended changes to BUFFERCOUNT and MAXTRANSFERSIZE isn't going to be enough here - although those settings can have a significant impact, in this case there is no way to reduce the backup times from 'days' to something acceptable if that is how long they are currently taking.
Sorry but, like everything else, "IT DEPENDS"... It would appear that you have a different opinion as to what the "Tricks-of-the-Trade" consist of, including hardware.
The first "Trick of the Trade" is to ensure that you're using backup compression.
The second "Trick of the Trade" is to ensure that you're using settings in the actual BACKUP command such as BUFFERCOUNT (I use a setting of 17 here) and MAXTRANSFERSIZE (I use 1048576 here). Contrary to what was stated, it made a huge difference for us, although if you set these incorrectly, it can actually slow things down a bit. Start with the settings I used and then carefully "play" with them, if you want. For me, it made such a drastic improvement that I've not changed those two settings in over 10 years and across 3 new servers.
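As a rough illustration (the database name and path are placeholders, and the settings should be tested on your own hardware), a backup using those options might look like this:

BACKUP DATABASE [BigDB]
    TO DISK = N'\\backupnas\sql\BigDB_full.bak'
    WITH COMPRESSION,
         BUFFERCOUNT = 17,
         MAXTRANSFERSIZE = 1048576,
         INIT, STATS = 5;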
The third "Trick of the Trade" is to ensure that the backup system is in the same domain.
The 4th "Trick of the Trade" is to ensure that there are as few extra switches and routers as possible between the system being backed up and the NAS (Network Attached Storage on a different machine and at least a different rack... preferably in a different building). Of course, this doesn't include the copy to a DR site, but you also need to do that. The switches should NOT be set up to "auto-negotiate" speed. Set them up for the fastest speed and hard-set them to that. This should also include having multiple correctly configured NICs on the server itself.
The 5th "Trick of the Trade" is to put your largest tables each into their own file group and file. That'll also help with a "Swap'n'Drop" if you ever need to rebuild their large clustered (and other) indexes.
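As a sketch of that trick, moving a hypothetical large table onto its own filegroup and file (names and sizes are made up):

-- Add a dedicated filegroup and file for the large table
ALTER DATABASE [BigDB] ADD FILEGROUP [FG_BigTable];
ALTER DATABASE [BigDB]
    ADD FILE (NAME = N'BigDB_BigTable', FILENAME = N'E:\Data\BigDB_BigTable.ndf', SIZE = 500GB)
    TO FILEGROUP [FG_BigTable];

-- Rebuild the clustered index onto the new filegroup to move the table's data
CREATE UNIQUE CLUSTERED INDEX [PK_BigTable]
    ON dbo.BigTable (BigTableID)
    WITH (DROP_EXISTING = ON)
    ON [FG_BigTable];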
Last but certainly not least, you should be using NVMe SSD drives and not spinning rust. What you think is a savings by using spinning rust is actually putting you at severe risk for terabyte-size databases.
With the things I've outlined, and as just one example, I back up a 4.7 terabyte database every night in about 2 hours and 6 minutes, and some consider that to be "a bit slow" - but that's because I'm saving to a single backup file instead of many, which would be yet another "trick of the trade".
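That last trick - striping the backup across multiple files - might look something like the following (paths are placeholders; ideally each file goes to a separate path or set of spindles):

BACKUP DATABASE [BigDB]
    TO DISK = N'\\backupnas\sql\BigDB_1of4.bak',
       DISK = N'\\backupnas\sql\BigDB_2of4.bak',
       DISK = N'\\backupnas\sql\BigDB_3of4.bak',
       DISK = N'\\backupnas\sql\BigDB_4of4.bak'
    WITH COMPRESSION, BUFFERCOUNT = 17, MAXTRANSFERSIZE = 1048576, INIT;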