Autogrowth Question

Hello -

I've accidentally become the DBA for our team, and looking at some of the databases on the server (SQL Server 2012), I see that all of them were created with the default settings of 1MB autogrowth for data files and 10% autogrowth for log files.

I am in the process of changing those settings to something more appropriate per database. I have a question on the strategy.

For example, one of the databases has two large ETL activities per day and, looking at the history, it is growing at approx 360MB per day. The current 1MB autogrowth setting is causing approx 350 autogrowth events per day. The database is currently about 26GB.

I am thinking that to reduce the constant autogrowth events, I will need to do two things: (1) increase the current size of the database to 35-40GB (about a month's worth of growth) and (2) increase the data file autogrowth increment to 500MB to 1GB.

After that, I am planning on monitoring the growth and then manually adjusting the size as necessary. So, in one month, I would need to increase the data file size again.
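
For the monitoring piece, something simple like this is what I have in mind (just a sketch; I'd capture the output per database and compare it over time):

/* Sketch: current size and growth setting for each file in the current database.
   size and growth are stored in 8KB pages. */
SELECT
    name AS logical_file_name,
    type_desc,
    size * 8 / 1024 AS size_mb,
    CASE WHEN is_percent_growth = 1
         THEN CAST(growth AS varchar(10)) + ' %'
         ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
    END AS growth_setting
FROM sys.database_files;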

I am wondering about the impact on fragmentation too. Would it be best to just increase the size of the data file to maybe 4 or 5 months' worth?

Thanks for any recommendations and guidance.

...

What are you doing in terms of backups, truncation of logs, etc.?
As a DBA, one of the questions you always want to ask yourself is: "Whatever I am planning on doing on this one server as far as fiddling with settings, could I do the same process with 100 servers, 1,000 servers, 10,000 servers?" The answer becomes clear very fast, and it is "That would be totally unmanageable." If it is a one-time thing, that is OK, but if it is something that would require you to keep tinkering with your databases, I would reconsider the approach.

Hi and thanks for the reply.

So, there are just two database servers...but I get your point about thinking how it would scale out.

Right now, all the databases are set up with the simple recovery model, and there is a full backup done once a week and differential backups done daily. This is NOT a transactional system...more of a reporting and staging database that is populated via ETL (SSIS).

I am also trying to reduce the number of constant autogrowth events, which I suspect might lead to some performance issues and/or fragmentation as well?

thanks!

- will

OK, I see your situation. So in what form are these event notifications coming your way?

Some people have mentioned that at times the ETL loads are slower than normal, and when I look at the autogrowth history (via a query), I can see that many of the times match up. For example, I will see 300 to 400 autogrowth events (because the current setting is 1MB) during the course of an ETL load.
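
For reference, the kind of query I mean looks roughly like this (a sketch that reads the auto-grow events out of the default trace; EventClass 92 = data file grow, 93 = log file grow):

DECLARE @trace_path nvarchar(260);
SELECT @trace_path = path FROM sys.traces WHERE is_default = 1;

SELECT DatabaseName,
       [FileName],
       StartTime,
       Duration / 1000 AS duration_ms,   /* Duration is reported in microseconds */
       CASE EventClass WHEN 92 THEN 'Data file grow' ELSE 'Log file grow' END AS event_type
FROM sys.fn_trace_gettable(@trace_path, DEFAULT)
WHERE EventClass IN (92, 93)
ORDER BY StartTime;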

I am hoping/thinking that by increasing the current file size (maybe by 10GB) and then setting autogrowth to 1GB or so, I would prevent the constant autogrowth events. I could then monitor over the next month and see if I need to modify those settings.
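
Concretely, the change I have in mind would look something like this (database and logical file names here are just placeholders; I'd pull the real names from sp_helpfile first):

/* Pre-grow the data file roughly a month ahead and switch to a larger fixed increment. */
ALTER DATABASE StagingDB MODIFY FILE ( NAME = StagingDB_data, SIZE = 36GB );
ALTER DATABASE StagingDB MODIFY FILE ( NAME = StagingDB_data, FILEGROWTH = 1GB );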

I would vet the ETL loads that are putting a strain on the DB first. No amount of tweaking will satisfy ETL developers (I am both DBA and dev); devs might think it is server configuration. When that happens, I ask the devs to let me audit their SSIS packages (collaborative, not accusative), and sure enough, in many cases the issue has been the way things are designed. Devs need to work with the DBA collaboratively so there are no such surprises. Throwing more disk, RAM, or autogrowth at the problem will come back to bite you.

Yes, as you noted, you definitely need to increase the data file growth increment. You also need to use a fixed amount for all files, including logs, never a percentage. I've found percentage growth to be much slower, and it can cause long waits as log files get larger.
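
For example, switching a log file off percentage growth to a fixed increment is a one-liner (the logical file name is a placeholder):

ALTER DATABASE db_name MODIFY FILE ( NAME = db_name_log, FILEGROWTH = 512MB );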

First, make sure you have IFI (instant file initialization) enabled for SQL Server. That will drastically reduce the time needed for additional file growth. Once that's in place, it's up to your company whether to preallocate months' worth of data space or to allocate (relatively big) chunks as more space is needed. With IFI in place, a data file growth is a very fast event.
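
If you're not sure whether IFI is on, one rough way to check on 2012 is something like this (a sketch; it uses trace flags 3004/3605 to write zero-initialization messages to the error log while a throwaway database is created):

DBCC TRACEON(3004, 3605, -1);
CREATE DATABASE IFI_Check;   /* throwaway database just to trigger data/log file creation */
EXEC sp_readerrorlog 0, 1, N'Zeroing';
/* If "Zeroing" messages show up for the .mdf, IFI is NOT enabled;
   the .ldf is always zero-initialized regardless. */
DROP DATABASE IFI_Check;
DBCC TRACEOFF(3004, 3605, -1);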

For log space, you probably have way too many VLFs (virtual log files) right now, and that can hurt performance big-time. You need to do a one-time shrink and reallocation of the log files. You want to grow the log in relatively big chunks, but not in insanely big chunks.

So if a log is currently 8GB in size, you might do this:
/* Get the logical file name for the log file to use in the ALTER DATABASE commands. */
EXEC sp_helpfile;
/* Then run this: */
DBCC SHRINKFILE(2);
ALTER DATABASE db_name MODIFY FILE ( NAME = db_name_log, SIZE = 4GB );
ALTER DATABASE db_name MODIFY FILE ( NAME = db_name_log, SIZE = 8GB );
DBCC LOGINFO; /* verify that you don't have too many VLFs; the actual data displayed
doesn't matter, just the total number of rows of output */

If a log is currently 40GB in size, you might do this:
DBCC SHRINKFILE(2);
/* If 10GB increments are too much for your disk subsystem to do quickly enough,
   reduce the allocation as required to match your disk speed. */
ALTER DATABASE db_name MODIFY FILE ( NAME = db_name_log, SIZE = 10GB );
ALTER DATABASE db_name MODIFY FILE ( NAME = db_name_log, SIZE = 20GB );
ALTER DATABASE db_name MODIFY FILE ( NAME = db_name_log, SIZE = 30GB );
ALTER DATABASE db_name MODIFY FILE ( NAME = db_name_log, SIZE = 40GB );
DBCC LOGINFO; /* verify that you don't have too many VLFs; the actual data displayed
doesn't matter, just the total number of rows of output */