SQLTeam.com | Weblogs | Forums

SQL 2005: backup transactionlog fails after rebuild index task



  • backup db: Monday => Friday 21h00
  • backup transaction log: every 30 min. every day
  • rebuild index: Monday 2h15

This worked fine until 2 weeks ago.

The backup of the transaction log no longer works after/during the rebuild index task.
Back Up Database Task (SQL-server\navision)
Backup Database on Local server connection
Databases that have a compatibility level of 70 (SQL Server version 7.0) will be skipped.
Databases: All user databases
Type: Transaction Log
Append existing
Task start: 2015-07-06T03:00:02.
Task end: 2015-07-06T03:08:41.
Failed:(-1073548784) Executing the query "BACKUP LOG [db_name] TO DISK = N'E:\path\backup_transactionlog_name.trn' WITH NOFORMAT, NOINIT, NAME = N'backup_transactionlog_name', SKIP, REWIND, NOUNLOAD, STATS = 10
" failed with the following error: "Write on 'E:\path\backup_transactionlog_name.trn' failed: 112(error not found)
BACKUP LOG is terminating abnormally.
10 percent processed.
20 percent processed.
30 percent processed.
40 percent processed.
50 percent processed.
60 percent processed.
70 percent processed.
80 percent processed.
90 percent processed.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

Apparently the backup chain is broken.
When I get this error, I change the recovery model from full to simple and back to full.
Afterwards I run a full backup and the situation works again until the next rebuild index task.

I don't think this has anything to do with disk space.

There is 15 GB of free space on the disk where the transaction log backup is created.
The transaction log file is 27.5 GB (database: 32 GB).
But of course I'm not sure.

Does anyone have an idea what the problem could be?


Even though you said this is not due to disk space, my suspicion is that it does have to do with disk space. When you do an index rebuild, it writes a lot of log records, and the log file will contain a lot of data. When you do the log backup after the rebuild, all that data will be copied to the log backup.

Before the next index rebuild, see how much space in the log file is in use. You can do that using the following command:

DBCC SQLPERF(LOGSPACE);

It will show you the log file size and how much of it is in use. The important number here is the Log Space Used. Assume that for each database for which you are doing the log backup, it will require the Log Space Used. Is there enough space on your backup disk to accommodate all of that?


Thank you JamesK for your response.

The actual usage is only about 3.5% (of the 27.5 GB).

I don't really understand "Assume that for each database for which you are doing the log backup, it will require the Log Space Used. Is there enough space on your backup disk to accommodate all of that?"
For this db, 27.5 GB is reserved for the log file. So it can hold more, can't it? Or can it not become larger than the db size (32 GB)?
Shouldn't I see an entry in the Windows Event Viewer if there is a disk problem?

The rebuild task takes a little under an hour.
The transaction log backups are made every half hour.
So at least one backup is made during the rebuild task.

Best regards



In the normal course of its operation, SQL Server writes to the log file. If there is not enough room to write, and if auto grow is allowed (which is the default setting), SQL Server will grab more space resulting in the log file growing.

When you do a log backup, the data in the log file will be copied to the log backup file and the log file will be "cleared". It won't release the space in the log file back to the operating system. SQL Server will hold on to it and will reuse this cleared space to write new log records.

Rebuilding index is one of those operations which can take up a lot of log space. So after the rebuild, it is possible that a lot of that 27.5 GB that you have available in the log file will be used up and the usage percentage might be much higher.

When you do the log backup after the index rebuild, SQL Server will try to write all those log records to the log backup file and clear the log file. So it is sort of like a holding tank - SQL Server dumps log records into the log file, and the backup process takes it from there and dumps it into the backup files. An index rebuild is an operation that can fill up the tank almost to the brim.

Given that scenario, the question that I was asking was whether the backup disk has sufficient space to backup the log file when it is full (or near full) which may be the case immediately after the index rebuild.
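As a quick way to see this in action, you could also check whether SQL Server is waiting on a log backup to clear the log. A sketch (assuming your database is named db_name, as in the error message; log_reuse_wait_desc is available on SQL Server 2005 and later):

```sql
-- Sketch: check the recovery model and why the log cannot be cleared yet.
-- 'db_name' is a placeholder taken from the error message above.
SELECT name,
       recovery_model_desc,     -- should be FULL here, per the OP
       log_reuse_wait_desc      -- LOG_BACKUP means a log backup is required
FROM sys.databases
WHERE name = N'db_name';
```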


Hi JamesK

If I understand correctly,
you're saying that possibly all 27.5 GB of the log file will be full after the rebuild index task,
and that a transaction log backup file of the same size will then be made.
In that case another 27.5 GB of free space is needed,
but there is only 15 GB of free space.
Is that it?

My question remains, if that scenario is true:
shouldn't this lack of space be logged in the Windows Event Viewer?

Best regards



Yes, that is what I was thinking. I don't know whether or not that condition gets logged to the Windows Event Viewer. It is easy enough to check whether that is really the cause by running the DBCC command that I posted earlier immediately after the index rebuild, but before the log backup.


If doing those steps resolves the issue, then it's because the reindex is breaking the transaction log chain. How is the reindex being done? Is it through a maintenance plan or custom script? I suspect it's truncating the log or switching the recovery model to break the chain. Whichever it is, stop doing that and the log backups will stop erroring.
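One way to see whether the chain is actually being broken is to look at the backup history recorded in msdb. A sketch (db_name is a placeholder for your database name):

```sql
-- Sketch: list recent backups; a gap where last_lsn of one log backup
-- does not match first_lsn of the next indicates a broken log chain.
SELECT database_name,
       type,                    -- D = full, L = log, I = differential
       backup_start_date,
       first_lsn,
       last_lsn
FROM msdb.dbo.backupset
WHERE database_name = N'db_name'
ORDER BY backup_start_date DESC;
```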


I was suspecting disk space rather than anything wrong in the script, because the OP said that it was working correctly up until two weeks ago. I couldn't imagine someone switching the recovery model to simple and then trying a log backup :dizzy_face:


Ok JamesK

Now I understand it.
Since the index rebuild task runs during the night,
the DBCC command also has to be run during the night,
so it has to be done automatically instead of manually.

I'm not really a DB admin.
How can this be achieved? Another maintenance plan?
Will the result be saved to a file?



The index rebuild is done by a default maintenance plan.
I switch the recovery model only after the chain has been broken.

The maintenance plans were running fine until a few weeks ago.
So nothing wrong with that, I think.


You can set up an agent job to do this. Create a table that matches the output of DBCC SQLPERF(LOGSPACE), like this:

	CREATE TABLE dbo.Diagnostics
	(DBName VARCHAR(256), LogSize FLOAT, LogSpacePercent FLOAT, STATUS INT);

Then, set up the agent job (right-click on the SQL Server Agent node in SSMS Object Explorer and select New Job). The wizard will walk you through it.

The SQL command that you run via the agent job would be this:

	INSERT INTO dbo.Diagnostics
	EXEC('DBCC SQLPERF(LOGSPACE)');

Set the schedule to run before the backup job starts.


How often are you backing up the transaction log? Does the SQL Server Error Log show that the transaction log was full?
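For reference, the SQL Server Error Log can also be searched from T-SQL. A sketch (sp_readerrorlog is undocumented; the parameters below - log number, log type, search string - are the commonly used form):

```sql
-- Sketch: search the current SQL Server error log for backup messages.
EXEC sp_readerrorlog 0, 1, N'BACKUP';
-- A full transaction log would typically surface as error 9002.
```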


I think we have gotten a bit off-track...

The OP has stated the following:

  1. The Transaction Log file (ldf) is currently sized at 27.5GB. That means, at some point - the system required all 27.5GB of space. Which means the transaction log backups would require that much space available on disk to backup the log.

  2. The available space for the log backups is 15GB. This means there is not enough space available to hold all of the log backups.

  3. Error 112 is the Windows "there is not enough space on the disk" error (ERROR_DISK_FULL) - which tells us the error he is getting is because there is not enough space available on the log backup drive.

In order to support log backups - you have to have at least 1.5x the size of the transaction log. In many cases that won't be enough space to hold all the log backups you need. Essentially, you need the size of the transaction log - plus enough space to hold a full days worth of log backups, plus enough space to keep the number of days backups you want to keep online and available.

Before deleting your log backups - make sure you have them copied to offline storage, at least somewhere off this server and with a copy of the full backup. This will ensure you have the necessary files available to perform a restore from that full backup - and then apply the log backups up to a point in time.


I'm on the same page now. I had skimmed the thread.

The reason why switching the recovery model fixes this issue is that it clears out the transaction log, so that huge amount of log records no longer needs to be backed up.

And likely the reason why this is occurring all of a sudden is because a large index is being rebuilt that either didn't need to be rebuilt before 2 weeks ago or is bigger than before.

I would recommend that you increase your log backup frequency though. 30 minutes is a lot of data to lose in the world of OLTP. It's most common to backup the log every 5-15 minutes. We do log backups every 5 minutes on most systems, but especially the critical ones. There are some places that do them every 1 minute.


thank you JamesK!


Thank you Jeffw8713

  1. Actually, initially I created a log file of this size to prevent internal fragmentation, so the size was not enlarged by autogrow. So I'm not sure it's completely full. I have to check it with the script JamesK posted.

  2. If, as JamesK stated, the transaction log backup can become as big as the log file itself, then it's true. (I didn't mention that each transaction log backup file is copied to another server, so there is only one log backup file on the disk at a time.)

  3. Error 112 means not enough space? Then it's clear. It's a pity that the error itself in the log is so unclear: "failed: 112(error not found)"


Thank you TaraKizer

I'll start by increasing the frequency of the transaction log backup. This will create smaller trn files during the rebuild index task.
Maybe this will be the solution. I will see next Monday.


Backing up the transaction log every 10 minutes solved the problem.
Now 5 trn files of almost 28 GB in total are made (and copied to another server) during the rebuild task.
Before the change, half of that had to be saved on the local disk with 15 GB of free disk space.
So it was just too close ...

Thank you all for help.

Some research also taught me that backing up every 10 minutes has a lower performance impact than backing up every half hour. Like many others, I thought the opposite.

Best regards



:thumbsup: :thumbsup: :thumbsup: Glad to know that you got it resolved.

When I started at my new job, I reviewed the log backup plans and increased the frequency of log backups substantially, to something like every ten minutes or so. I had to spend a lot of time with the infrastructure guys defending why I was doing that. They were afraid that all those backups were going to fill up all the disks and take up all the resources.


This may also mean that the size of the Log File on the database doesn't need to be so big - if it once needed 27.5GB for 30 minutes log activity it might only need, say, 10GB for 10 minutes log activity now.

Repeated shrink/grow fragments the physical structure of the file and adversely affects performance, but if the space on the log drive would be useful to you (in particular if the log and the backups share the same physical media) it might be worth cutting the size just this once. Make sure it is set to AUTOGROW so that it can grow back if it needs to, and then check the size in a day or two, and a week or two, and see how big it has become.
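A sketch of that one-off shrink (the logical log file name 'db_name_log' is hypothetical - look yours up first - and the 10240 MB target is just an example):

```sql
USE [db_name];
-- Find the logical name of the log file first:
SELECT name, type_desc, size FROM sys.database_files;
-- Then shrink the log once, e.g. to 10 GB (target size is in MB):
DBCC SHRINKFILE (N'db_name_log', 10240);
```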

We run Log Backups every 2 minutes during index rebuilds - they, alone, use far more log space than anything else that my users manage to achieve during the day!!