SQLTeam.com | Weblogs | Forums

SQL Logs - Running out of Space


#1

My boss wants us to look into the SQL logs. Seems like we are running out of space. The person who usually handles this is no longer with the company, so I am not even sure where to look.

What do we need to do to make more space? Last time, someone deleted a few log files and then our server stopped working.

One message I got while running an SSIS package was this:

The transaction log for database 'tempdb' is full due to 'ACTIVE_TRANSACTION'


#2

You are probably running out of space because your databases are in the Full or Bulk-Logged recovery model and you are not doing transaction log backups.

First, see what your recovery model is. You can look it up in the database's Properties dialog in Object Explorer, or run this query:

SELECT  name,
        recovery_model_desc
FROM    sys.databases

If it is SIMPLE, you are running out of disk space because some long-running transactions are active, and the space the log is using is actually required. If it is FULL or BULK_LOGGED, check whether you are doing transaction log backups. Use this query:

SELECT  database_name ,
        backup_finish_date
FROM    msdb.dbo.backupset s
        INNER JOIN msdb.dbo.backupmediafamily m ON s.media_set_id = m.media_set_id
WHERE   database_name = 'YourDatabaseName'
        AND s.[type] = 'L'
ORDER BY backup_finish_date DESC 

If you are not doing log backups (no records returned), or have not been doing them regularly, do a log backup now. That should clear space inside the log file and stop the growth. Then schedule regular log backups.

Look up some articles on transaction log management for more background.


#3

Thank you JamesK!

I used your queries. Looks like we only have the FULL and SIMPLE models. I also ran your last query; looks like we have not been backing up our logs.

So my next question is: how do we go about doing a log backup? And when it is finished, we don't need to delete anything, correct? The log will just clear itself out?

Thanks again for your help


#4

To back up the log, right-click the database name in the SSMS Object Explorer and select Tasks -> Back Up. Change the backup type to Transaction Log and make any other appropriate changes (file location, etc.). You will need enough disk space for the backup; a rule of thumb is roughly the same size as the used portion of the log file if you are not using backup compression.
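If you prefer T-SQL over the GUI, the same thing can be done with a BACKUP LOG statement. The database name and file path below are placeholders, and WITH COMPRESSION requires an edition that supports backup compression; leave it out otherwise:

-- Placeholders: substitute your own database name and backup path
BACKUP LOG YourDatabaseName
TO DISK = 'X:\Backups\YourDatabaseName_log.trn'
WITH COMPRESSION;

You can put this statement in a SQL Server Agent job to get the regular log backups mentioned earlier.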

Since you are not doing log backups at all, an easier alternative is to simply change the recovery model to SIMPLE. That will allow the transaction log to be truncated, and you can then shrink the log file to a smaller size. I usually don't do this because it breaks the log backup chain, but if you are never doing log backups, that is a moot point.
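If you do go the SIMPLE route, the T-SQL sketch looks like this. The database name and the logical log file name are placeholders (the logical name is the name column in sys.master_files), and the 256 MB target size is only an example:

ALTER DATABASE YourDatabaseName SET RECOVERY SIMPLE;

-- DBCC SHRINKFILE runs in the context of the database being shrunk
USE YourDatabaseName;
DBCC SHRINKFILE (YourDatabaseName_log, 256);  -- target size in MB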

BTW, when you ran the second query, you did change the database name to the one you are concerned about, didn't you? Or remove the "database_name = 'YourDatabaseName' AND" line altogether to see log backups for all the databases on your server.


#5

I just removed the database name. Looks like we are backing up 3 databases. The last ones were backed up on May 25, 2015.


#6

You have to find which database's log file is causing the problem. Look at the file sizes using the query below, then check whether the database with the large log file is in the SIMPLE or FULL recovery model. If it is in FULL recovery and its log is being backed up, do a manual log backup now and do not change the recovery model. If it is in FULL recovery and the log is not being backed up, you can change the recovery model to SIMPLE and shrink its log file.

If the last backup was on the 25th, and given that today is the 28th, that is 3 days. That is far too infrequent for log backups; I back up the logs on my databases every 15 minutes. More likely than not, the backups are being done by a SQL Server Maintenance Plan. Open the Management node in SSMS and look under Maintenance Plans to see what is being done, and how frequently.

SELECT  DB_NAME(database_id) AS DatabaseName ,
        name AS Logical_Name ,
        physical_name ,
        ( size * 8 ) / 1024 AS SizeMB
FROM    sys.master_files
ORDER BY SizeMB DESC
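To see why a particular log cannot be truncated, sys.databases also exposes a log_reuse_wait_desc column (values such as NOTHING, LOG_BACKUP, or ACTIVE_TRANSACTION tell you what the log is waiting on):

SELECT  name ,
        recovery_model_desc ,
        log_reuse_wait_desc
FROM    sys.databases
ORDER BY name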

#7

So it looks like the 2 databases that are large are in the SIMPLE recovery model.

Now, what do Full recovery and Simple recovery mean? What is the difference?

Thanks again, you have been a HUGE help!


#8

Regardless of the recovery model, the transaction log will grow to the size needed for the largest single transaction. You stated earlier that you have seen errors when running SSIS packages where tempdb filled up due to an active transaction.

What this means is that the package is trying to load a large amount of data in a single transaction. That will cause not only tempdb to fill up, but also your transaction logs to grow and fill the drive.
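To confirm which log is actually filling up while the package runs, DBCC SQLPERF reports the size and percent used of every database's transaction log:

DBCC SQLPERF (LOGSPACE);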

First, you need to modify the OLE DB Destination in your data flows and set a reasonable "Rows per batch" and "Maximum insert commit size". This tells SSIS to batch the inserts and commit when that number of rows is reached, which keeps the transaction log and tempdb from filling up the drives.

Next, you need to review every database's transaction log size and growth settings. If these are too small, or the growth is set to a percentage, modify them to improve performance.
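A fixed-size log with a fixed growth increment can be set with ALTER DATABASE ... MODIFY FILE. The database name, logical file name, and sizes below are only placeholders to illustrate the syntax:

ALTER DATABASE YourDatabaseName
MODIFY FILE ( NAME = YourDatabaseName_log ,
              SIZE = 4096MB ,
              FILEGROWTH = 512MB );

A fixed FILEGROWTH in MB avoids the ever-larger growth events you get with percentage-based growth.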


Good luck...


#9

It is also worth learning how to check for active transactions in SQL Server, since an open transaction is what is pinning the tempdb log here.
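For example, the oldest active transaction in tempdb can be found with DBCC OPENTRAN, or by querying the sys.dm_tran_active_transactions DMV:

DBCC OPENTRAN ('tempdb');

SELECT  transaction_id ,
        name ,
        transaction_begin_time
FROM    sys.dm_tran_active_transactions
ORDER BY transaction_begin_time;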


#10

@wsilage,

To be honest, you good folks don't have the knowledge to do any of this easily or quickly, and this sounds like a persistent and nasty problem. You need to hire someone who can isolate the items that are using so much of tempdb. It sounds like an accidental CROSS JOIN in the form of a many-to-many join, caused by someone either not understanding the data or by bad design. Queries that cause this type of problem are usually fairly large and have the DISTINCT keyword in them; sometimes they have a GROUP BY instead. And, no... you don't need to make more space... you just need to use less of it.


#11

@JamesK,

All good suggestions, to be sure. Unfortunately, no amount of backups will help this problem. This problem is all about the log file for TempDB running out of space.