SQL 2005: backup transaction log fails after rebuild index task

Hi Kristen

Thank you for the advice.
The disk is only used for the log files and, temporarily, for the transaction log backups (they are copied to another server afterwards).

I prefer not to change the size of the LDF file, because - as you mention - you can get physical external fragmentation but also logical internal (VLF) fragmentation problems.
I think it can be a best practice to make the LDF as big as the MDF, and to over-allocate the size a bit when you create the database.
That way you can avoid performance problems due to fragmentation.
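
For what it's worth, this is roughly what that pre-allocation looks like in T-SQL - just a sketch, with a made-up database name, paths and sizes:

```sql
-- Pre-allocate the LDF up front so it should not need to autogrow
-- (database name, paths and sizes are illustrative only).
CREATE DATABASE SalesDB
ON PRIMARY
    ( NAME = SalesDB_Data, FILENAME = N'D:\DB_DATA\SalesDB.mdf',
      SIZE = 20GB, FILEGROWTH = 1GB )
LOG ON
    ( NAME = SalesDB_Log, FILENAME = N'E:\DB_LOG\SalesDB.ldf',
      SIZE = 20GB, FILEGROWTH = 1GB );  -- log as big as the data file, growth only as a safety net

-- Or, for an existing database, grow the log to its target size in one go:
ALTER DATABASE SalesDB
    MODIFY FILE ( NAME = SalesDB_Log, SIZE = 20GB );
```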

Best regards

Tom

I agree with all that. If disk space is not a (current) issue then provide enough elbow room for the LOG file such that it shouldn't even need to grow. We hand-optimise the VLFs on databases that have "huge" LOG files, and we use a tool to physically defragment the files on the disk - even without Shrink/Grow cycles the files get extended every few months ... (we use a tool from what used to be called SysInternals; MS bought them and I have no idea what it is now called ... but I can dig out the details if anyone wants them). We do that whenever we have scheduled downtime to optimise the files.
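
In case it's useful, this is roughly what I mean by hand-optimising the VLFs - a sketch only, with made-up names and sizes, done during scheduled downtime:

```sql
-- Each row returned by DBCC LOGINFO is one VLF, so the row count is the VLF count
-- (undocumented command, but available on SQL 2005).
DBCC LOGINFO ('SalesDB');

-- If the count is in the thousands: shrink the log right down, then grow it back
-- in a few large steps so it is rebuilt from a small number of big VLFs.
USE SalesDB;
DBCC SHRINKFILE (SalesDB_Log, 1024);   -- target size in MB

ALTER DATABASE SalesDB MODIFY FILE ( NAME = SalesDB_Log, SIZE = 8GB );
ALTER DATABASE SalesDB MODIFY FILE ( NAME = SalesDB_Log, SIZE = 16GB );
```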

That said, I like the files to be at the minimum sensible size; that way, when the disk is full, I can easily justify my "more disk" request rather than being told to "Shrink whatever I can".

Personally I would never want to write LOG backups to the drive that the logs are on - both the LOG file itself (in normal operation and during a backup) and the backup file being made are essentially sequential operations, and writing to the drive being read is going to give the disk heads a lot of work to do, and will most likely delay some writes (i.e. TLog writes) which will manifest as a delay to users. It might be that modern caching controllers (or solid state drives, natch!) hide that entirely, making it a non-issue, but I'm old-school.

Our backup files are on two media (local, copied to A.N.Other remote server, and then also, eventually (within 24 hours), to tape). I want the files "local" for speed of recovery, and remote in case "local" is now up in flames and turning into toast! So if you are copying to remote and then deleting, you might want to consider how long the "copy back" takes if you have to restore. Apologies if you have already done all that Disaster Recovery analysis.

When we do our DR planning we assume that our only good Full Backup is a week old (I'll shoot someone if we have to go back further than that, as it means we somehow got past a DBCC CHECKDB step without realising that the DB was already corrupted), so that leaves us with the unenviable task of also having to restore a full week's worth of LOG backups. Getting them restored is a big job, and having to also copy them back from a remote location can prolong that ... if I ever had to (additionally) get them back from tape then everyone can go home for a couple of days, and my next job will be to turn out the lights as I follow the rest of the team out the door for the last time!
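
For anyone following along, the restore chain I'm describing is basically this (a sketch only - database, file names and paths are invented):

```sql
-- Last known-good FULL backup first, left non-operational ...
RESTORE DATABASE SalesDB
    FROM DISK = N'D:\BACKUPS\SalesDB_Full.bak'
    WITH NORECOVERY, REPLACE;

-- ... then every LOG backup taken since, in order ...
RESTORE LOG SalesDB
    FROM DISK = N'E:\BACKUPS\SalesDB_Log_001.trn'
    WITH NORECOVERY;
-- (repeat for each of the week's worth of .trn files)

-- ... and only the final one brings the database back online.
RESTORE LOG SalesDB
    FROM DISK = N'E:\BACKUPS\SalesDB_Log_Last.trn'
    WITH RECOVERY;
```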

We are only going to restore using Tape if a Jumbo Jet mistook our office for a runway, and our remote location has been evacuated for a gas leak ...

Thank you for your advice, Kristen.

LDF and transaction log backups on same disk:
You've got a point. But I've got only 3 disks:

  • C: system disk with windows etc.
  • D: data disk with MDF and backup of MDF
  • E: data disk with LDF and backup of LDF

A solution could be to put the transaction log backup files on D: with the MDF.
But won't that also cause performance issues?

DR:
The transaction log backup files are copied to a disk on another server in the same room,
just to prevent everything being lost in case of a complete server failure.
They are backed up to tape during the night and the tapes are stored off site.
So that's not bad, is it? It follows the 3-2-1 rule:
3 x data: LDF + TRN on another disk + TRN on tape.
2 technologies: disk + tape
1 off site
Restoring from another server in the same room should also be quick, no?

The BAK file of the MDF is also put on tape every night.

Tom

Been there, done that (yeah, got some T-shirts!) ... but ... I don't do that any more :smile:

It depends on budget, but I get the budget by asking what the criticality of the data is and what the cost of downtime is. 10 executives and 20 worker-bees cost a lot when not working; we even do a Cost-Benefit Analysis (grand name for "budget justification"!) on a [frequent] operation taking 10 seconds instead of 1 second.

My minimum is:

RAID (something suitable for Random Access) for MDF
RAID (just mirroring) for LDF
RAID (just mirroring) for backups

If budget stretches then add more mirrored RAID for TEMPDB ... then separate drives for TEMPDB's MDF and LDF. If your TEMPDB is tiny that's probably overkill - but it IS worth sticking it on the same disks as the MDF / LDF for data. The SQL Server install wants to put the MASTER and MSDB MDFs and LDFs on a single drive (I expect it's possible to move them, but it isn't in the install [that I have seen] and I can't be Harris'd to manually move them).

That would be my choice. If only 2 drives then I would do:

D: MDF and Log Backups
E: LDF and Data backups

That way the backups are not on the same physical media as the files they back up, so a media failure should still mean you can get the backups back.

Worth considering how you would be able to take a TAIL LOG BACKUP if the machine is still alive but the MDF is toast. You could probably argue that Log backups NOT on the MDF drive is safer (if they were on the MDF drive and that drive is bust, all you have got is the Full Backups [on the other drive]). If the Log Backups are on the LOG drive and the MDF drive is bust then you still have access to them (and can still make a Tail Log Backup, so you can restore your database with 0% data loss).
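
By "Tail Log Backup" I mean something along these lines - a sketch, assuming the LDF is still readable, with invented names and paths:

```sql
-- Capture the tail of the log even though the data file is damaged;
-- NO_TRUNCATE lets the backup run against a damaged database.
BACKUP LOG SalesDB
    TO DISK = N'E:\BACKUPS\SalesDB_TailLog.trn'
    WITH NO_TRUNCATE, INIT;
```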

My preference would be to have the Log Backups on a separate drive to the MDFs, so that if the MDF drive is bust I can still get to all my LOG backups, but accept the risk that I cannot make a Tail Log Backup ... but ... if all my backups are duplicated on Machine B the Tail Log Backup question is different - I can make that backup either way, but I can only restore it if I have all the earlier LOG backups. For you they are also on Machine B, so no real issue.

Thus, for you, I would choose based on speed with one eye on resilience, and my guess is that, for performance, backing up to a different physical drive [compared to the source being backed up] is probably faster.

It's open to debate though!

No, it's very good - way WAY above average. Machine B being somewhere remote, even a different building on site, would be preferable.

I doubt it, but it's worth testing.

Test 1 - restore a FULL backup directly from the local machine (to a NEW TEMPORARY database name, natch!!). That's your baseline.

Test 2 - restore directly from Machine B

Test 3 - time how long it takes to copy the backup from Machine B to Machine A and add the local restore time. (This can win out if you are restoring a lot of data, as you can be copying from B-to-A at the same time as you are restoring some (already copied) local backup files.) Moot point if all your stuff is small and can be copied / restored in a minute or two. Our stuff is bigger / slower than that ...
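
A sketch of what Test 1 might look like (the temporary database name, logical file names and paths are all invented - check yours with RESTORE FILELISTONLY first):

```sql
-- Baseline: restore the latest FULL backup from the local drive under a temporary name,
-- moving the files so the live database is untouched.
RESTORE DATABASE SalesDB_RestoreTest
    FROM DISK = N'D:\BACKUPS\SalesDB_Full.bak'
    WITH MOVE 'SalesDB_Data' TO N'D:\DB_DATA\SalesDB_RestoreTest.mdf',
         MOVE 'SalesDB_Log'  TO N'E:\DB_LOG\SalesDB_RestoreTest.ldf',
         STATS = 10;

-- Test 2: same statement, but with FROM DISK pointing at the UNC path on Machine B, e.g.
--   FROM DISK = N'\\MachineB\Backups\SalesDB_Full.bak'
-- and compare the elapsed times.
```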

OK, let's say I've talked about 3 (logical) disks to keep it easy.
Of course I'm using a RAID-config.

For the other idea I would indeed open the debate before I change our config.
So, you other guys (and girls :smile: ), is it a good idea:
disk 1: MDF + TRN
disk 2: LDF + BAK
instead of the current config
disk 1: MDF + BAK
disk 2: LDF + TRN
of course knowing that a backup is also made to tape.

This is my choice, I'd be interested to know what others would do.

Depending on version (e.g. SQL2008 and earlier) I think there's a high likelihood that Master / MSDB and TEMPDB are in *C:\Program Files* :frowning:

The SQL2012 install seemed to encourage TEMPDB to be somewhere sensible, but then chucks the MASTER MDFs and LDFs into the same folder (somewhere NOT the same as the default path for MDFs and LDFs, but instead a folder based on the Instance Name). Not quite what I would prefer, but better than the *C:\Program Files* of yesteryear.
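
If TEMPDB has ended up somewhere unsuitable it can at least be relocated fairly painlessly - a sketch with invented target paths (the change only takes effect after the SQL Server service is restarted; MASTER is more fiddly as it needs the service startup parameters changing):

```sql
-- Point tempdb's files at a better location; the files are created there on the next service restart.
ALTER DATABASE tempdb
    MODIFY FILE ( NAME = tempdev, FILENAME = N'D:\TEMPDB\tempdb.mdf' );
ALTER DATABASE tempdb
    MODIFY FILE ( NAME = templog, FILENAME = N'E:\TEMPDB\templog.ldf' );
```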

Anyone else for the disk config proposed by Kristen?

We use mount points:

S:
S:\MSSQL - system databases, except for tempdb, plus the typical stuff found off MSSQL such as LOG, repldata
S:\BACKUPS - all backups
S:\DB_DATA - data files for user databases
S:\DB_LOG - log files for user databases
S:\TEMPDB_DATA - data files for tempdb
S:\TEMPDB_LOG - log file for tempdb

How these are split up on the SAN backend, I have no idea.
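
If it helps, a quick way to see where every file actually ends up under those mount points (this should work on SQL 2005 and later):

```sql
-- One row per database file, with its logical name, physical path and size.
SELECT DB_NAME(database_id)  AS database_name,
       type_desc,                -- ROWS (data) or LOG
       name                  AS logical_name,
       physical_name,
       size * 8 / 1024       AS size_mb      -- size is stored in 8 KB pages
FROM sys.master_files
ORDER BY database_name, type_desc;
```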

Mount points are new to me (I'm not a sysadmin).
So basically that is logically one big disk?

Do you have any advice about our disk config:
keep the current one, or change to what Kristen advised?

I'm a bit sceptical.
When I monitor the resources, I get the impression that the Read B/min of the MDF is at least double the Write B/min of the LDF.
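
If it helps to put numbers on that impression, something like this ought to show the read/write volumes per file from SQL Server's side (a sketch - the DMV gives cumulative figures since the last service restart, not per minute):

```sql
-- Cumulative bytes read/written per database file since the last SQL Server restart.
SELECT DB_NAME(vfs.database_id)              AS database_name,
       mf.physical_name,
       mf.type_desc,                             -- ROWS (MDF/NDF) or LOG (LDF)
       vfs.num_of_bytes_read    / 1048576.0      AS mb_read,
       vfs.num_of_bytes_written / 1048576.0      AS mb_written,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY mb_written DESC;
```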