Log files running out of space

Hello, we have a production database of 800 GB+. The log file is 240 GB+ and is taking up nearly all of the 250 GB drive.
As this is Prod, I cannot shrink the DB. What do I need to do to shrink this log file and get the drive space back?

Run this to see the actual log space used %.

DBCC SQLPERF(LOGSPACE);

If it is low, your database will not become read-only right away.

Is it in the FULL recovery model? When did you last have a successful transaction log backup? You may need to run a t-log backup.
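If it does turn out to be in FULL recovery, a log backup would look something along these lines (a minimal sketch; YourDatabase and the path are placeholders, not names from this thread):

-- Hypothetical database name and backup path; adjust to your environment
BACKUP LOG YourDatabase
TO DISK = N'D:\Backups\YourDatabase_log.trn';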

Log space used is 0.641 %.

The DB is in SIMPLE recovery, and we don't have any log backups currently.

In the SIMPLE recovery model, you don't (and can't) do log backups. Since less than one percent of the log space is used, you can shrink the log file, and you can do that in a production environment as well. The recommendation about not shrinking applies to the database data file (usually with the extension .mdf). You don't want to shrink .mdf files, but you can shrink the .ldf file.
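A minimal sketch of the shrink, assuming placeholder database and logical file names (check sys.database_files for the real logical name first):

USE YourDatabase;
-- Find the logical name of the log file (type_desc = 'LOG')
SELECT name, type_desc, size FROM sys.database_files;
-- Shrink the log to a target size in MB (10240 MB = 10 GB, an example value only)
DBCC SHRINKFILE (N'YourDatabase_log', 10240);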

Thanks! I appreciate your help.

Sorry, don't agree with that ... shrinking the log will fragment the file (on disk) and require "time & effort" to regrow it.

No problem with shrinking a log file (or a data file, for that matter) if some wayward process has dramatically extended it, but other than that, neither should be shrunk below the "working space" it needs.

I would want to find out what is causing the log file to grow to 240 GB. Personally I'm not surprised that the log size is 25% of the data size for normal activity (such as rebuilding an index), but it may be that a smaller log would be sufficient.

An option is to shrink it to a smaller size and then see what size it grows back to - but by then you have fragmented it :smile:

We manually grow the Log File, when we create the database, to get the right number of VLFs etc. (so I'd be annoyed if someone else chopped it to something smaller :frowning: ). If you do that, make sure the Growth is set to a reasonable size in MB (NOT in PERCENT!!).

I would also create a scheduled logging task (running every 10 minutes, say) which records the size of the Log File, so that, retrospectively, you can report on exactly when it increased in size and then, hopefully!, attribute that to the process that was running at the time.
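On the Growth setting, a quick sketch (the logical file name and the 512 MB increment are example values, not from this thread):

-- Fixed-size growth in MB rather than a percentage
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = N'YourDatabase_log', FILEGROWTH = 512MB);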

Actually, thinking about it, rather than shrinking it you could log the actual log space used % figure, but in the SIMPLE recovery model you will probably have to log that more often, in order to catch it before the transaction finishes and the used portion of the log is released.
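A minimal sketch of that logging step, assuming a hypothetical dbo.LogSpaceHistory table; DBCC SQLPERF(LOGSPACE) conveniently returns both the size and the used % for every database, one row each, in exactly the four columns inserted below:

-- One-time setup: a table to hold the samples
CREATE TABLE dbo.LogSpaceHistory
(
    SampledAt       datetime      NOT NULL DEFAULT GETDATE(),
    DatabaseName    sysname       NOT NULL,
    LogSizeMB       decimal(18,2) NOT NULL,
    LogSpaceUsedPct decimal(5,2)  NOT NULL,
    Status          int           NOT NULL
);

-- The scheduled step (e.g. an Agent job running every 10 minutes)
INSERT INTO dbo.LogSpaceHistory (DatabaseName, LogSizeMB, LogSpaceUsedPct, Status)
EXEC ('DBCC SQLPERF(LOGSPACE)');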

I would also suggest you check the largest table in the DB. I agree with Kristen that shrinking might not be a good idea, even on the log file; if the largest table is around 200 GB (for example), then re-indexing might grow your log file back to 240 GB.
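To find the largest tables, a rough query along these lines should do (a sketch only; sizes are reserved pages converted at 8 KB per page):

SELECT TOP (10)
    OBJECT_SCHEMA_NAME(ps.object_id) AS SchemaName,
    OBJECT_NAME(ps.object_id)        AS TableName,
    SUM(ps.reserved_page_count) * 8 / 1024 AS ReservedMB
FROM sys.dm_db_partition_stats AS ps
GROUP BY ps.object_id
ORDER BY ReservedMB DESC;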

You might also start to think about the next step: if that is the space the DB genuinely requires, think about adding more space to the drive.

Are backups also on that drive? (hopefully not, but ... :slight_smile: ) If so, and you are on SQL 2008 R2, turn on Compressed Backups to save some disk space.
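If compression is available on your edition, it is just a WITH option on the backup (database name and path here are placeholders):

BACKUP DATABASE YourDatabase
TO DISK = N'E:\Backups\YourDatabase.bak'
WITH COMPRESSION;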

Sure, if the regular workload requires that much space then there is no point in shrinking it. It will simply regrow and cause "time & effort" as you rightly pointed out, exacerbated by the fact that log files cannot use instant file initialization (SQL Server has to zero out all the bytes that are added). If autogrowth is set to a percentage, the log will end up with a few very small VLFs and a few very large VLFs, which is not ideal either.
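You can check how many VLFs the log currently has; DBCC LOGINFO returns one row per VLF in the current database, so a very high row count suggests the log grew in many small increments:

USE YourDatabase;  -- placeholder name
DBCC LOGINFO;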

However, if the log file growth was the result of a one-time event, then there is nothing wrong with shrinking it back to the right size. What I have tried to do is leave the autogrowth setting on, but make sure the log file is sized sufficiently large that SQL Server never actually has to use the autogrowth feature.
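Pre-sizing the log so autogrowth stays a safety net is just a MODIFY FILE (a sketch with placeholder names; the 40960 MB target is an example value, not a recommendation for this particular database):

ALTER DATABASE YourDatabase
MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 40960MB);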