SQLTeam.com | Weblogs | Forums

Database File Size Increased

Hi there,
I have a few databases in SQL Server 2008. On a daily basis there are procedures which update data in a few tables, at intervals ranging from a couple of minutes to a few hours.

Due to size limitations on the server hard disks, and for performance reasons, I delete old data from the same tables to free up space. But even though data is deleted regularly, the database file size keeps increasing.

What is the right way to deal with this problem?

Thanks in advance.

I also truncate the log files regularly, but it is the data file size that is creating issues. Please help.

By chance, is this a heap table? Heaps don't play well with deletes.

Yes ... these tables do not have clustered indexes, but they do have non-clustered indexes.

Then I suggest that you add a clustered index to your table. There are very few reasons for not having a clustered index.
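For example, something like this (table and column names are hypothetical; adjust to your schema):

```sql
-- Hypothetical call-record table. Clustering on an ever-increasing key
-- (an event timestamp plus a tie-breaker, or an IDENTITY column) keeps
-- new rows appended at the end of the index, minimizing page splits.
CREATE CLUSTERED INDEX CIX_CallRecords_EventTime
    ON dbo.CallRecords (EventTime, CallRecordID);
```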

Clustered indexes usually take more disk space and resources. This is the only thing which prevents me from having one.

OK ... if I have a clustered index, how will it help address the current issue?

With heaps, space is not deallocated when DELETEs happen, unless you hold a table lock. With clustered indexes, this issue is not present. Read the article I linked in my first reply.
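If adding a clustered index isn't an option right away, a TABLOCK hint on the DELETE lets SQL Server deallocate the emptied heap pages. A minimal sketch (table and column names hypothetical):

```sql
-- Taking a table lock allows page deallocation when deleting from a heap.
-- Note: this blocks concurrent access for the duration of the statement.
DELETE FROM dbo.CallRecords WITH (TABLOCK)
WHERE EventTime < DATEADD(HOUR, -1, GETDATE());
```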

Moreover, clustered indexes typically don't use more resources. Why do you believe it is so?

Ordinarily that would worry me ... but I suspect your ONLY??!! reason for doing that is as a band-aid, rather than as a long term solution?

Free Space
BACKUP DATABASE (full backup, differential backup, file/filegroup backup, copy only) does not free up space in any files. If anything, it uses up a bit of space in the log file to record its actions.

BACKUP LOG truncates the log, which generally does free up space.

A database in the SIMPLE recovery model truncates its log when a checkpoint happens.

File Size
None of the backup operations affect the physical size of the files.

The only commands which do that are DBCC SHRINKFILE and DBCC SHRINKDATABASE (in SSMS, right-click a database, choose Tasks, Shrink).
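A minimal sketch of a targeted shrink (the logical file name is hypothetical; look yours up in sys.database_files first):

```sql
-- List the logical file names and current sizes for this database.
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- Shrink one file to a 1024 MB target (logical name is hypothetical).
DBCC SHRINKFILE (N'MyDatabase_Log', 1024);
```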
Please see this post, where I have explained simply how to reduce the log file size: http://sqltechtips.blogspot.in/2016/01/truncate-and-shrink-log.html

It sounds like you are trying to scoop water out of a sinking life raft with a Dixie cup.

It's hard to come up with a short term solution while ignoring the long term solution here. However, I'm wondering what the requirement is with the daily updates/loading of data into your growing database. Since you say that you delete old data after loading the new data, perhaps there is a more efficient process that can be put in place. How much of the data in these tables is updated daily? Are destructive loads an option, where you simply truncate the old data and load the new data each day?

Well, these are call records in a cellular company. Millions of events are generated in an hour: voice calls, SMS and data. There is a DWH set up to retain all the data, but for quick daily analysis I need to load one day of CDRs. That's why I have to remove one hour of data and load new data into the DB.

So what is the best way to perform such operations?

Sliding window partitioning?
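A sketch of the idea, assuming the table is already partitioned on an hourly boundary (all object names are hypothetical):

```sql
-- Switch the oldest partition out to an empty staging table with the
-- same structure and filegroup. This is a metadata-only operation,
-- so it is nearly instant regardless of row count.
ALTER TABLE dbo.CallRecords
    SWITCH PARTITION 1 TO dbo.CallRecords_Staging;

-- Discard the old hour without logging each row individually.
TRUNCATE TABLE dbo.CallRecords_Staging;
```

After the switch you would typically MERGE the now-empty boundary out of the partition function and SPLIT a new boundary at the leading edge, keeping the window "sliding".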

If you are using the FULL recovery model, then change to SIMPLE (if you can recreate all the data from your warehouse database and thus don't need FULL). Then you won't need to make frequent LOG backups.

If your database is already in SIMPLE then don't shrink the logs. They will just grow again. If you need them smaller then decrease the size of your batches - but, for me, I definitely wouldn't go down that route to save log space, I'd buy bigger disks!

If you need FULL Recovery Model then increase the frequency of your Log Backups. Every minute, if you have to. I'd worry if they were more than, say, 10 minutes apart.
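For example (the database name and backup path are hypothetical):

```sql
-- Switch to SIMPLE if point-in-time recovery is not needed for this DB.
ALTER DATABASE MyAnalysisDB SET RECOVERY SIMPLE;

-- Or, if staying in FULL, back up the log frequently to keep it small
-- (schedule this via SQL Server Agent, e.g. every few minutes).
BACKUP LOG MyAnalysisDB TO DISK = N'D:\Backups\MyAnalysisDB.trn';
```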

But if you can TRUNCATE a table, rather than DELETE rows, that will save disk space. One trick, in case it applies, is for when you have to delete most of the data in a table ("most" is probably more than 60%; if it is more than 80%, it is probably going to be a good solution):

Create new, temporary table
(Don't allow any new inserts/updates)
INSERT into temp table the "rows to keep"
DROP the original table (including dropping any FKeys etc.)
RENAME the Temp Table to Original Name
Re-create any additional indexes / constraints / FKeys etc.
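The steps above can be sketched like this (all names are hypothetical):

```sql
-- 1. Copy only the rows to keep into a new table.
SELECT *
INTO dbo.CallRecords_New
FROM dbo.CallRecords
WHERE EventTime >= DATEADD(HOUR, -1, GETDATE());

-- 2. Drop the original table (after dropping any FKeys referencing it).
DROP TABLE dbo.CallRecords;

-- 3. Rename the new table to the original name.
EXEC sp_rename 'dbo.CallRecords_New', 'CallRecords';

-- 4. Re-create indexes, constraints, FKeys etc. on the renamed table.
```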

But sliding window partitioning is probably better still, if it applies to your use case.