Transaction Log Size Confusion

I am trying to delete 500MB of data from SQL Server (2000). The tables use triggers to copy the deleted data into history tables, and I am automating this from an Access database using a SQL pass-through query.

First I exported the data into separate Access databases (one for each table). The total disk size used by the standard tables, doubled to allow for the history copies, is 500MB.

My question is this:

Why, when attempting to delete records from the tables (via a transaction in an Access database using a SQL pass-through query), does SQL Server create a transaction log in excess of 11GB(!!!), which eventually fills the drive?

It is only 500MB of data in total for all the tables, so what on earth is it doing? Do I have to resort to purging the transaction log between processing each table??

The amazing thing is that it runs out of disk space while processing only 10 of the 70 tables!!

If Access is deleting using RBAR - row-by-agonising-row - then the deletion of each individual row will be logged, together with each individual history-table insert. The whole thing is almost certainly treated as a single transaction, so the log has to hold enough to undo the lot: if, for example, you got a power cut just before the last row was deleted, the whole transaction would be rolled back (when the server restarted) to prevent anything being left in an inconsistent half-and-half state. That probably takes more log space than if each delete were a transaction in its own right.

Deleting a lot of data in SQL is usually a log-intensive task. Where it is not necessary to treat the operation as a single atomic transaction (i.e. because, if it fails, it can just be restarted to delete whatever is left, and there are no referential-integrity problems for other people using the database), it is usually more efficient to delete the records in batches. "How big" depends on how quickly the delete runs (long-running batches will block other users of the database) and how big the log entries it creates are. On a heavily used database we usually reckon on about 10,000 rows per iteration.
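A minimal sketch of that idea, untested, and assuming the table has a single-column key called PLT_ID (substitute the real key):

DECLARE	@dtCutoff datetime,
	@intRowCount int
SET	@dtCutoff = DATEADD(Year, DATEDIFF(Year, 0, GetDate()) - 3, 0)
SET	@intRowCount = 1	-- Force first iteration
--
WHILE @intRowCount > 0
BEGIN
	-- Delete at most 10,000 rows per statement, so SQL only has to be able to roll back
	-- one batch (plus its trigger's history inserts) at a time. In FULL Recovery Model
	-- the log still needs regular backups while this runs - see below.
	DELETE	FROM PLT
	WHERE	PLT_ID IN
		(SELECT TOP 10000 PLT_ID
		 FROM	PLT
		 WHERE	PLT_Date < @dtCutoff)
	SELECT	@intRowCount = @@ROWCOUNT
END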

If the database is in FULL Recovery Model then you also need to back up the log frequently (otherwise, even with multiple batch-sized transactions, it will still grow exceptionally large, requiring the file to be extended and then housekeeping to shrink it back to a "normal" size). During scheduled bulk deletes we schedule a log backup every minute.
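The backup itself is just the ordinary log backup command, scheduled e.g. via a SQL Agent job; for example (database name and path here are placeholders):

-- Run every minute while the purge is in progress (FULL Recovery Model only)
BACKUP LOG MyDatabase
TO DISK = 'D:\Backups\MyDatabase_log.bak'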

The data is being deleted based on the year of a given datetime value being, say, more than 3 years before the current year (2015), e.g.

DELETE FROM PLT WHERE Year(PLT_Date) < Year(GetDate())-3

So this doesn't explain the massive log file; even if it were writing 4 transactions for each deleted record it would not exceed 11GB, as it is doing when run via Access....

I have transferred the entire code into SQL Server as a stored procedure and left it running; at the moment the log file is 4GB and rising, hence the question: what the heck is it doing??

Pass ... but IME it works better doing it in a loop in reasonably sized batches.

We get all the PKeys into a #temporary table, which also has an IDENTITY column, ordered by the clustered index on the underlying table (so we delete sympathetically with that index), and then delete in batches based on the IDENTITY by joining the #tempTable (on PKey values). We delete about 10M rows each day (i.e. that's the table with the largest daily stale-data purge) using this method.

Perhaps something else is being logged - lots of indexes on the underlying table being purged? Whatever it is, it is being logged to ensure that SQL can roll the transaction back if necessary; giving SQL smaller transactions to work with makes that job a lot smaller / easier. If you are running in SIMPLE Recovery Model then each batch will checkpoint the DB and the log space will be reused for each successive batch (more or less).

Using

WHERE	Year(PLT_Date) < Year(GetDate())-3

is likely to slow the operation (although it is not responsible for the size of the log file!) and cause SQL not to use any index on the PLT_Date column.

I would use something like this (untested) :

DECLARE	@dtCutoff datetime = DATEADD(Year, DATEDIFF(Year, 0, GetDate())-3, 0)	-- Midnight, 1st January, three years ago (same cutoff as Year(PLT_Date) < Year(GetDate())-3)
--
SELECT [T_ID]=IDENTITY(int, 1, 1), PKey1, PKey2, ...
INTO	#TEMP
FROM	PLT
WHERE	PLT_Date < @dtCutoff
ORDER BY ClusteredIndexKey1, ClusteredIndexKey2, ...
--
DECLARE	@intBatchSize int = 10000,
	@intOffset int = 1,
	@dtStart datetime,
	@intRowsToDelete int,	-- Rows still to delete (used in the progress message)
	@intRowCount int = 1	-- Force first iteration
--
SELECT	@intRowsToDelete = COUNT(*) FROM #TEMP
--
WHILE @intRowCount > 0
BEGIN
	SELECT	@dtStart = GetDate()	-- Start time, used to calculate duration
--
	DELETE	P
	FROM	#TEMP AS T
		JOIN PLT AS P
			 ON P.PKey1 = T.PKey1
			AND P.PKey2 = T.PKey2
			...
	WHERE	T.T_ID BETWEEN @intOffset AND @intOffset + @intBatchSize - 1
	SELECT	@intRowCount = @@ROWCOUNT
	SELECT	@intRowsToDelete = @intRowsToDelete - @intRowCount
--
	SELECT	@intOffset = @intOffset + @intBatchSize
--
	PRINT 'Deleted: ' + CONVERT(varchar(20), @intRowCount) + ', Duration: ' + CONVERT(varchar(20), DATEDIFF(Second, @dtStart, GetDate())) + ' seconds, remaining to delete=' + CONVERT(varchar(20), @intRowsToDelete)
	RAISERROR (N'Waitfor', 10, 1) WITH NOWAIT	-- Messages display may be delayed without this
	WAITFOR DELAY '00:00:03'	-- Optional pause to release BLOCK on any other processes
END

If the database is in FULL Recovery Model, make sure log backups are running every minute until the delete operation is completed.

Nice code! Thanks for the advice :smiley:

I think you've highlighted rather well how bad Microsoft's transaction system actually is. That is, as usual, they purposely make something important very badly designed and never fix it, but you don't notice it straight away because 90% of it works okay, until you decide to do something different...

Found an interesting bit of data: the one table (PLT) has 529,996 records that are to be deleted. The exported size of this data is 96MB, yet SQL Server's "brilliantly written" transaction system creates a transaction log of 14GB for those 529,996 records @ 96MB stored size - roughly 27KB of log per deleted row against about 190 bytes of actual row data. (??)

What rubbish! If they had bothered to design it properly there would be either a built-in stored-procedure version of DELETE or an optional parameter on the DELETE command, e.g. DELOPT_NOTRANS, which would allow the user to fully delete without traceability. Or they could just stop letting the transaction log turn from something beneficial into an advert for their Bloatware division just because you decided to delete something.

As it is, I am reduced to doing something similar to what you suggested and breaking the delete process into single years, which is farcical, or as Harry Potter would say, "ridiculartous!"

I have split the stored procedure into 4, as each handles a different part of the system, so I can see the transaction log file growth per procedure, and inserted this at the start and end of each one:

--now delete excess freespace
dbcc shrinkfile ('mrptblslog',1000)
--now clear the transaction log completely and resize
backup log mrptbls with truncate_only
dbcc shrinkfile ('mrptblslog',200)

Cheeky, but it seems to work. I don't need the ability to "undo" the delete (the data is already copied elsewhere), but obviously I don't want to "switch off" the transaction log system.

Now the transaction log doesn't go beyond 1GB - thanks Microsoft for continuing your 20+ year edict of making the simple a long drawn out convoluted process :slight_smile:
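For reference, rather than just watching the file size, DBCC SQLPERF(LOGSPACE) reports the current log size and the percentage actually in use for every database, which makes it easy to compare the growth per procedure:

-- Reports Database Name, Log Size (MB) and Log Space Used (%) for each database
DBCC SQLPERF(LOGSPACE)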

Thanks for your help :smiley:

You have absolutely no idea whether that is the case or not. You are just assuming that, because in your case you are getting a massive transaction log, the code must be written badly, while ignoring the huge systems that MS SQL supports day-in, day-out. On the day you get a power cut in the middle of a critical operation you'll be grateful for it, compared to Access.

Lack of familiarity is certainly an issue, as with many things. My instinct, based on experience, is to delete large amounts of data in batches, but I can't imagine anyone coming new to it would decide to do that ... and who knows? Maybe that wouldn't make any difference in your case. Perhaps you have a cascading-delete FKey on the table, or some other code running that you are not aware of, and that is processing and logging shed loads of data. Your database is probably badly tuned, may well have auto-create of statistics and so on turned on, and as the massive deletes are occurring the stats are being updated, for all I know.
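If you want to rule some of that out, a couple of quick checks along these lines will show what else is attached to the table (sketch only; these use the SQL 2000 system tables and procedures):

-- Any triggers on the table besides the history trigger?
SELECT	name
FROM	sysobjects
WHERE	type = 'TR'
	AND parent_obj = OBJECT_ID('PLT')
--
-- Foreign keys referencing the table (look for ON DELETE CASCADE)
EXEC sp_fkeys @pktable_name = 'PLT'
--
-- All constraints on the table, including any cascade settings
EXEC sp_helpconstraint 'PLT'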

Under no circumstances should you truncate your log. If you don't need point-in-time log backups / recovery then change the recovery model to SIMPLE. If you do, then truncating just breaks the backup chain, which means you have lost the ability to recover to a point in time.
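If you really don't need point-in-time recovery on this database, something along these lines is the cleaner version of what you are doing (untested; database and logical log file names are taken from your snippet, the backup path is a placeholder):

-- SIMPLE recovery: the log is truncated automatically at each checkpoint,
-- so it stops accumulating between batches and no log backups are needed
ALTER DATABASE mrptbls SET RECOVERY SIMPLE
--
-- ... run the batched deletes here ...
--
-- Shrink the log ONCE at the end, then leave it alone
DBCC SHRINKFILE ('mrptblslog', 200)
--
-- If you do want FULL recovery afterwards, switch back and restart the backup chain
ALTER DATABASE mrptbls SET RECOVERY FULL
BACKUP DATABASE mrptbls TO DISK = 'D:\Backups\mrptbls_full.bak'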

You seem to be so hell-bent on slagging off Microsoft without having done any diagnosis of why your logs are huge that I'm sorry, but I can't be of any further help to you.


I have the same feelings as Kristen regarding SQL Server. It is a beautifully designed product with awesome capabilities and features. I think of it as a Stradivarius, which can delight and enrich you(r work life) if tuned well and played expertly. Even for the rest of us who are not gifted virtuosos, it can produce reasonable results as long as we treat it with a bit of care and consideration.


The trigger code could be the big problem. How is it written? In particular, does it use set-based processing or, yikes, a cursor?!
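For comparison, a set-based delete trigger is normally just one insert from the deleted pseudo-table, roughly like this (sketch only; the history table and column names here are made up):

CREATE TRIGGER trg_PLT_Delete ON PLT
FOR DELETE
AS
	-- One set-based insert covers every row removed by the DELETE statement,
	-- however many that is; a cursor here would process (and log) row by row
	INSERT INTO PLT_History (PKey1, PKey2, PLT_Date, DeletedOn)
	SELECT	PKey1, PKey2, PLT_Date, GetDate()
	FROM	deleted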

I agree with Scott on this. It's a cakewalk for SQL Server to delete 500MB. Let's see the code for your history trigger.

Also, run the following and let us know what it returns on the original table...

sp_spaceused PLT