SQLTeam.com | Weblogs | Forums

The size of mdf file is much bigger than its gdb file



The size of the MDF file is much bigger than its GDB file.

When all the content of the MDF file is exported (or imported) to a GDB, the size drops dramatically: the MDF is 8 GB while its corresponding GDB is only 200 MB!

So how can I “clean” the MDF file so that it gets as small as the GDB?

And what is the source of this huge size?



No idea. Indexes? Perhaps the GDB is compressed? Perhaps the MDF is full of heaps, and poorly reorganised as a result. Maybe the GDB format is just better at storing data efficiently - but maybe, as a consequence, it is slower at retrieving data. Maybe you've rebuilt the indexes and they have been moved to fresh, contiguous space at the end of the DB file (thereby extending it). Maybe lots of data has been deleted. All sorts of possible reasons ...

I would check the slack space on all the tables.
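A sketch of how that check might look in T-SQL - run it inside the database in question; the column maths assumes the standard 8 KB page size:

```sql
-- Reserved vs. actually-used space per table, from the allocation DMVs.
SELECT  t.name                            AS table_name,
        SUM(ps.reserved_page_count) * 8   AS reserved_kb,
        SUM(ps.used_page_count)     * 8   AS used_kb
FROM    sys.dm_db_partition_stats AS ps
JOIN    sys.tables                AS t
        ON t.object_id = ps.object_id
GROUP BY t.name
ORDER BY reserved_kb DESC;
```

A big gap between reserved and used is the "slack" being talked about here.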


One thing I would worry about more, besides the data file (MDF) size, is the log file (LDF) size. If the log is very large, that could indicate your backup isn't running/finishing, and without a backup you could eventually be in a whole other kind of "trouble".
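One rough way to check whether log backups are actually happening - the database name and backup path below are just placeholders:

```sql
-- Recent backup history from msdb (type 'D' = full, 'L' = log backup).
SELECT TOP (10) database_name, type, backup_finish_date
FROM   msdb.dbo.backupset
WHERE  database_name = N'parcel'          -- placeholder name
ORDER BY backup_finish_date DESC;

-- A one-off log backup (placeholder path).
BACKUP LOG parcel TO DISK = N'D:\Backups\parcel_log.trn';
```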


We pre-create ours (large ...) optimised for a minimal number of VLFs - so that might be a reason - i.e. the file is large, but not through actual transaction volume.


Dear Kristen,
thanks for your advice.

I want to ask you: can shrinking help me avoid this problem?
If I try to shrink my MDFs and LDFs, will this solve the issue?


Shrinking is usually to be avoided UNLESS the file reached that size through some unexpected/unusual activity. Otherwise the file will just grow back to the same size, and repeated grow/shrink cycles will fragment the file and hurt performance.

Also, if the reason the MDF file is large is an index rebuild (done to optimise the indexes), the act of SHRINK will fragment the indexes and destroy the optimisation.

If the file is big because of a one-off activity then it is OK to shrink, but only shrink to the original intended size, not to the minimum possible size. Alternatively, shrink to the minimum size, check what the file grows to after, say, one week, note that growth as the "normal requirement", and do not shrink smaller than that in future, to avoid fragmentation issues.


Dear Kristen

I applied shrink to two databases: the first one (parcel.mdf, parcel.ldf) and the second one (u.mdf, u.ldf), and I got the attached results.

As you see in the table, shrink made a significant change to the parcel database, but on the u database there was no effect on u.mdf and only a little change to u.ldf. Do you have any idea why this happened?

And for the parcel database, do you think this huge change in size after the shrink will affect the performance of the database?


There are transactions in the LDF file. If it is using the FULL Recovery Model then they are not being backed up - put a regular log backup in place.
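To see which Recovery Model each database is using - and, ONLY if point-in-time restore is not required, to switch to SIMPLE so the log self-truncates - something like this, with the thread's database names as examples:

```sql
-- Current recovery model per database.
SELECT name, recovery_model_desc
FROM   sys.databases
WHERE  name IN (N'parcel', N'u');

-- Only do this if point-in-time restore is NOT required;
-- under SIMPLE the log truncates at checkpoint instead of growing.
ALTER DATABASE u SET RECOVERY SIMPLE;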

For the MDF file, it is probably full. Try a shrink on the FILE rather than the DATABASE. It might be that the actual SIZE setting for the DB is the size it is currently - i.e. it was created to be that size initially (which doesn't mean that was the right size!! - but if the person who created it knew what they were doing, it might be an indication that it should stay at that size).
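A file-level shrink might look like this - the database name, logical file name, and the target size (in MB) are all placeholders; `sys.database_files` also shows the current size of each file:

```sql
USE parcel;   -- placeholder database name

-- Logical file names and current sizes (size is counted in 8 KB pages).
SELECT name, size * 8 / 1024 AS size_mb, type_desc
FROM   sys.database_files;

-- Shrink the data FILE to a chosen target, e.g. 2048 MB,
-- rather than shrinking the whole database to its minimum.
DBCC SHRINKFILE (N'parcel_data', 2048);   -- placeholder logical name
```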

No idea - maybe. If you have fragmented an otherwise neatly organised index then "yes". If you have shrunk the file such that it will now have to grow again then "yes".

It's generally a bad idea to shrink databases, particularly if they are shrunk without consideration of what the correct working size should be, as I said.

But if you have only a few people using the DB, and they are making minor queries then I doubt you will notice.

I presume you do actually need that disk space for something else?


Actually, the size of the DB is not acceptable and I want to return it to its original size. On the other hand, the only way I have found to reduce the size is shrink, and you said that shrink may damage my DB - so how can I resolve this contradiction?

Is this increase in MDF file size normal in SQL Server? Is it always due to indexes?


It won't damage your DB, but it may "damage" the performance.

You could SHRINK and then REINDEX everything. If you get back to the original MDF size then that's the "working size" that you need (and you will have fragmented the file a bit in the process).
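The reindex-after-shrink step for one table might be sketched like this - the table name is a made-up example:

```sql
-- Rebuild every index on the table after the shrink.
ALTER INDEX ALL ON dbo.Parcels REBUILD;   -- placeholder table name

-- Check how fragmented the indexes are afterwards.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       ips.index_id,
       ips.avg_fragmentation_in_percent
FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
WHERE  ips.avg_fragmentation_in_percent > 30;
```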

As I said earlier ("I would check the slack space on all the tables"), it would be worth seeing how much free space there is within the tables (as compared to "space at the end of the file to allow for future expansion").
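The in-page slack - how full each data page actually is - can be seen with `sys.dm_db_index_physical_stats` in SAMPLED mode; a sketch:

```sql
-- index_id 0 = heap, 1 = clustered index: i.e. the table data itself.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       ips.page_count,
       ips.avg_page_space_used_in_percent
FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
WHERE  ips.index_id IN (0, 1)
ORDER BY ips.page_count DESC;
```

Pages that are, say, only half full mean the table occupies roughly twice the space the data actually needs.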

As you add more content the database will grow. If you rebuild an index, SQL may use fresh space at the end of the file (extending the file if necessary) to make a nice, contiguous index. But the space where the old index was will then be reused, so over time the file will reach an equilibrium (just growing in proportion to the new data that you add).

If your file is 8GB and only contains a few MB of "content" then clearly the DB could be shrunk a lot smaller.

If your file is 8GB, contains 7GB of data and of that only a few MB is "slack space" within the tables, then it is pointless to try to shrink it.

An 8GB file, with 7GB of data but the tables containing, say, 6GB (i.e. "a lot") of slack space would indicate that something needs optimising.

If you don't have Clustered Indexes on ALL tables then SQL may struggle to keep a lid on the space usage, because "rebuild" may not do a thorough job on a heap. The answer is to create a Clustered Index if it makes sense to do so - there are some occasions where it is preferable to just have a heap table. The only alternative I know of is to export all the data and re-import it "clean" (though I expect there is a better way; we don't have any heap tables in our systems), so it is better to create Clustered Indexes if possible.
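To find the heaps and give one of them a clustered index - the table and column names below are made-up examples, and the `ALTER TABLE ... REBUILD` option is only available on SQL Server 2008 and later:

```sql
-- Tables with no clustered index (index_id 0 = heap).
SELECT t.name
FROM   sys.tables  AS t
JOIN   sys.indexes AS i
       ON i.object_id = t.object_id AND i.index_id = 0;

-- Give one of them a clustered index (placeholder names).
CREATE CLUSTERED INDEX CIX_Parcels_ParcelID
    ON dbo.Parcels (ParcelID);

-- On SQL Server 2008+, a heap can also be compacted in place:
ALTER TABLE dbo.Parcels REBUILD;
```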