Optimizing SQL Server Backups on Large Databases

Hello

I am managing a SQL Server database around 1TB in size and am facing challenges with long backup times and high resource usage during full backups.

I am considering differential / incremental backups and enabling compression.

Looking for advice on the best strategies to reduce backup time and resource impact while maintaining data integrity. I have checked the SQL Server Administration forum on SQLTeam.com and a Splunk guide, but still need help.

Any tips or recommendations on managing large database backups efficiently would be appreciated.

Thank you! :slightly_smiling_face:

Backup compression is a must.
Differential backups are also very useful.
SQL Server does not offer true incremental backups; differential backups are the closest built-in equivalent.
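
For example, a compressed full backup plus a compressed differential (database name and paths are placeholders, adjust to your environment):

```sql
-- Compressed full backup (placeholder name/path)
BACKUP DATABASE MyBigDb
TO DISK = N'D:\Backups\MyBigDb_FULL.bak'
WITH COMPRESSION, CHECKSUM, STATS = 5;

-- Compressed differential: only the extents changed since the last full backup
BACKUP DATABASE MyBigDb
TO DISK = N'D:\Backups\MyBigDb_DIFF.bak'
WITH DIFFERENTIAL, COMPRESSION, CHECKSUM, STATS = 5;
```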

Other things to consider:
(1) use (page) data compression in the tables and indexes themselves, to reduce the 1TB size (see the sketch after this list)
(2) don't use nvarchar (rather than varchar) unless you genuinely need to (standard advice, but I've seen people use nvarchar(2) for U.S. state codes, which is just a waste of space)
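
A rough sketch of enabling page compression on an existing index (table/index names are hypothetical; ONLINE rebuild needs Enterprise edition):

```sql
-- Estimate the space savings first (hypothetical table name)
EXEC sp_estimate_data_compression_savings
     @schema_name = N'dbo',
     @object_name = N'BigTable',
     @index_id = NULL,
     @partition_number = NULL,
     @data_compression = N'PAGE';

-- Rebuild an index with page compression
ALTER INDEX IX_BigTable_SomeColumn ON dbo.BigTable
REBUILD WITH (DATA_COMPRESSION = PAGE, ONLINE = ON);
```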

In addition, using more than one file can speed up backups. When testing I would start with one file per core.

Backup to multiple files for faster and smaller SQL Server files (mssqltips.com)
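
Something like this, striping across four files (paths are placeholders; a restore will need all of the files):

```sql
-- Stripe the backup across multiple files/drives (placeholder paths)
BACKUP DATABASE MyBigDb
TO  DISK = N'D:\Backups\MyBigDb_1.bak',
    DISK = N'E:\Backups\MyBigDb_2.bak',
    DISK = N'F:\Backups\MyBigDb_3.bak',
    DISK = N'G:\Backups\MyBigDb_4.bak'
WITH COMPRESSION, CHECKSUM, STATS = 5;
```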

If you have a lot of historic data, which does not change, you can also look at keeping the old data in read-only filegroups and doing filegroup backups. This makes restores a lot more complicated.
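
A rough outline, assuming the historic tables already live on a separate filegroup (ARCHIVE_FG is a placeholder name; marking a filegroup READ_ONLY needs exclusive access to the database):

```sql
-- Mark the archive filegroup read-only (placeholder name)
ALTER DATABASE MyBigDb MODIFY FILEGROUP ARCHIVE_FG READ_ONLY;

-- Back up the read-only filegroup once...
BACKUP DATABASE MyBigDb
FILEGROUP = N'ARCHIVE_FG'
TO DISK = N'D:\Backups\MyBigDb_ARCHIVE_FG.bak'
WITH COMPRESSION, CHECKSUM;

-- ...then routinely back up only the read/write filegroups (partial backup)
BACKUP DATABASE MyBigDb
READ_WRITE_FILEGROUPS
TO DISK = N'D:\Backups\MyBigDb_RW.bak'
WITH COMPRESSION, CHECKSUM;
```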

You could throttle the CPU using Resource Governor. It won't speed up the backups, but it will prevent them from using too much CPU.
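
A minimal Resource Governor sketch (Enterprise-only; pool/group names and the APP_NAME check are assumptions for illustration, and the classifier function has to live in master):

```sql
USE master;
GO

-- Placeholder pool/group names. MAX_CPU_PERCENT only throttles under contention;
-- CAP_CPU_PERCENT is a hard cap.
CREATE RESOURCE POOL BackupPool
WITH (MAX_CPU_PERCENT = 30, CAP_CPU_PERCENT = 30);

CREATE WORKLOAD GROUP BackupGroup
USING BackupPool;
GO

-- Classifier: route sessions from the backup job into the backup group.
-- The APP_NAME value is an assumption; match whatever your backup job connects as.
CREATE FUNCTION dbo.fn_backup_classifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF APP_NAME() = N'MyBackupJob'
        RETURN N'BackupGroup';
    RETURN N'default';
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_backup_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```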


Oops, forgot the most important:

Be sure to increase the BUFFERCOUNT and MAXTRANSFERSIZE. Typically around 15-21 buffers is best. Set MAXTRANSFERSIZE to its maximum (4 MB). (Note: SQL Server will allocate RAM equal to BUFFERCOUNT * MAXTRANSFERSIZE to support these settings.)
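
For example (values are a starting point to test against your own hardware, not a universal recommendation):

```sql
-- 20 buffers x 4 MB transfers = ~80 MB of backup buffer memory
BACKUP DATABASE MyBigDb
TO DISK = N'D:\Backups\MyBigDb_FULL.bak'
WITH COMPRESSION,
     BUFFERCOUNT = 20,
     MAXTRANSFERSIZE = 4194304,  -- 4 MB, the maximum allowed
     STATS = 5;
```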
