SQLTeam.com | Weblogs | Forums

Shrinking log files on every database



We have four stage/test SQL Server 2008 R2 servers where developers have the option to restore databases taken from our prod SQL Server.
Since they're only test/stage servers, I have a scheduled script switching all databases from the Full to the Simple recovery model.
What I want to do now is shrink all the log files (not data files) on these databases. Is it possible to script and schedule this? Does anyone have a script for this?




Never thought I would ever recommend this!! But I wonder if this is a candidate for turning on AutoShrink? Possibly on the DB too?

Shrinking the DB is a huge hit, and it will fragment the indexes, but if you were to REORGANIZE (rather than REBUILD) the indexes that should be OK.

I presume in DEV there is the possibility that someone does a massive transaction / import to straighten out data etc. so AutoShrink would (some time thereafter) take care of the space used by that.

No idea whether AutoShrink shrinks to the defined initial size of the LOG file (which might be huge to start with) or as small as it will go. I think it also only kicks in once a file has more than 25% free space, which may never happen on a DEV DB.

If you have lots of "just in case" production databases restored on your DEV server maybe there is a case for setting AutoClose too ...


No need for autoshrink (NEVER turn on autoshrink):

EXEC sp_msforeachdb 'if DB_ID(''?'')<=4 return; use [?]; dbcc shrinkfile(2);'
-- DB_ID <= 4 skips the system databases (master, tempdb, model, msdb);
-- file_id 2 is the log file in a default single-log-file database.

Not recommended for normal production use, but for a staging/test/dev environment that's set to simple recovery it should do fine. If you want to do the whole thing in one shot:

EXEC sp_msforeachdb 'if DB_ID(''?'')<=4 return; alter database [?] set recovery simple; use [?]; checkpoint; dbcc shrinkfile(2);'

Run that after you restore every database. You might get an error on tempdb since its recovery model can't be changed, but I think the script will still run.
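One caveat: both scripts assume the log file is file_id 2, which holds for a default single-log-file database but not necessarily after files have been added. A sketch that shrinks by logical name instead (run per database; the target size of 1 MB is an arbitrary choice here):

```sql
-- Shrink the current database's log file by logical name rather than
-- assuming file_id 2. Handles the common single-log-file case.
DECLARE @log sysname;
SELECT @log = name FROM sys.database_files WHERE type_desc = 'LOG';
DBCC SHRINKFILE (@log, 1);  -- 1 = target size in MB
```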


I would have said that too, until this thread. I'm assuming some massive data manipulations happen on DEV during the day, from time to time ... AutoShrink would fix those (i.e. in near-real time)? Might not be necessary though ... perhaps there are reasons why it is not desirable either?

Maybe it will play merry hell with performance when AutoShrink runs? So perhaps better left to a scheduled task in the middle of the night? (If I had that scheduled on the DEV server I, personally, wouldn't bother to do anything after a RESTORE; I'd just let the overnight job sort it out.) I know some places that restore all PROD DBs to DEV overnight, so they have "fresh" data for any work the next day. The shrink job probably needs to run after each DB is restored (I'm assuming there isn't enough disk space to restore all the DBs in full).

TEMPDB excluded by "if DB_ID(''?'')<=4 return;" I think?


It won't process tempdb because of the IF, but sp_msforeachdb parses the statements for each DB, and if a statement isn't valid for a given database it will generate an error.

Autoshrink always affects the whole database, not just specific files. I'll take the advice of the person WHO WROTE AUTOSHRINK:

Shrinking log files is relatively painless, and in champ_'s case especially so. I have a (nearly) identical setup at my current job. (we don't shrink the files because we control them in prod and have enough space in dev)

I really shouldn't be recommending sp_msforeachdb either, as it's undocumented, but it's soooo soooo handy.
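For anyone who wants to avoid the undocumented procedure entirely, the same loop can be written with a plain cursor over sys.databases, which is fully documented. A sketch, again assuming the log is file_id 2 and skipping system and offline databases:

```sql
-- Documented alternative to sp_msforeachdb: iterate databases with a cursor.
DECLARE @db sysname, @sql nvarchar(max);
DECLARE dbs CURSOR FOR
    SELECT name FROM sys.databases
    WHERE database_id > 4           -- skip master, tempdb, model, msdb
      AND state_desc = 'ONLINE';
OPEN dbs;
FETCH NEXT FROM dbs INTO @db;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'USE ' + QUOTENAME(@db) + N'; DBCC SHRINKFILE(2);';
    EXEC (@sql);
    FETCH NEXT FROM dbs INTO @db;
END
CLOSE dbs;
DEALLOCATE dbs;
```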


I remember reading in Paul's blog that when he was in charge of that code he wanted to deprecate the feature, but it was deemed necessary for backward compatibility. Not sure I would say "never" though ... surely if we can say "never" then it could be deprecated?

I think all the same issues arise (with the exception of a sudden CPU hit) if the shrink is scheduled to run overnight. If there is enough disk space to get through the day (i.e. during huge data-manipulation processes on DEV) then overnight must be better. If not, then I think there might be an excuse to use AutoShrink (or some hourly scheduled task that detects a huge log on a DB and then shrinks it ... which I suppose is the same as the overnight task, but with a higher threshold on log size).
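That "scheduled task with a threshold" idea could be sketched like this: scan sys.master_files for oversized logs and generate the shrink commands (the 1 GB threshold and 100 MB target are arbitrary illustrative numbers; review the output before running it):

```sql
-- Find log files over ~1 GB and emit a DBCC SHRINKFILE command for each.
-- sys.master_files.size is in 8 KB pages, so size * 8 / 1024 gives MB.
SELECT 'USE ' + QUOTENAME(DB_NAME(database_id))
     + '; DBCC SHRINKFILE(' + QUOTENAME(name) + ', 100);' AS shrink_cmd
FROM sys.master_files
WHERE type_desc = 'LOG'
  AND database_id > 4               -- skip system databases
  AND size * 8 / 1024 > 1024;      -- logs larger than 1 GB
```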


I think it's a bit crazy and insecure for developers to have access to production backups. I also treat dev/staging databases as if they were production for point-in-time backups, just in case a developer wipes something out that would take hours to rebuild or recover. This would be especially true if you're allowing developers to restore production over dev/staging databases.


Good point Jeff ... we run our DEV databases in FULL recovery model, and we restore to point-in-time FAR more often than a production database ever has done!!

But we also grow the TLog like crazy trying-this and trying-that, and periodically have a who-did-that blame session! and downtime running manual scripts trying to get things back to normal. I think it would be better that I invested the time to have all the DEV databases shrink any unruly TLog files back to "normal" (e.g. overnight). "Normal" for us would be somewhat tight ... we have 20 databases in DEV, and I suspect that I haven't touched 18 of them in 6 months ... tomorrow I might have an urgent need though. I can live with fragmentation and somewhat impaired performance on DEV, but I don't have huge amounts of spare disk space. What I do have I would prefer to give me more backup history [without having to wait for a restore from tape] than more log space :slight_smile:

In our case we don't restore PROD over DEV, we back-merge data from client PROD data tables into DEV. Normally our program metadata, and the table structures, in DEV are ahead of the PROD version (except just after a new version rollout, natch!), so overwriting the lot would not be helpful. But if I needed a TEST copy of a database so that I could run some diagnostics, then I would favour a restore of PROD to TEST, and shrinking it to waste less disk space seems reasonable. I'd never get sanction for tens of GB for log files that never needed it on TEST ... but I can imagine in much bigger shops it would be critical that the hardware in TEST was identical to PROD and everything would be like-for-like. (In reality most of our TEST DBs are on the PROD server; it's the only one we've got on which we can experience real-world loads and performance. If the code gets from DEV to TEST it is deemed ready for release, so TEST is only to prove that all is well and check performance before the code gets deployed onto PROD, or to make some diagnosis of a problem on PROD.)

Sorry, my apologies, I've wandered off into a general discussion ...


Oh, you understand my pain very well. :slight_smile: We don't even have particular sensitive info in there, mostly just email addresses, but I've had to add data cleansing due to some unfortunate accidents in DEV. (Before I started where I am now). Fortunately they seem happy to let me do all the restores whenever they need a refresh of PROD.



Sorry for the delay. I read that sp_msforeachdb isn't supported by Microsoft.
I also saw some comments saying that sp_msforeachdb doesn't always work 100%.
But if that's the only solution that can solve my issue, maybe I'll have to give it a try...
Thanks for the answers!