Is it OK to Defrag the Data Drive?

Hi experts,

We have a software vendor who says we should defragment the drive where the database file resides. This is SQL Server 2012 on the Windows platform.

They believe the data drive is heavily fragmented.

As this is not typically done for SQL Server files, I'm looking for some advice.

Thanks

From the work I've done doing that, and from the test cases I've seen from others, it can help overall performance.

But definitely also make sure you review the clustered indexes too, as they are the single most critical factor in performance, with an orders-of-magnitude bigger impact than disk fragmentation. In particular, note that an identity column is most often not the best clustering key.


Thank you, Scott. I value your opinions.

Staying with the data file fragmentation issue:

Would the "Reorganize Pages" option in the Shrink File wizard accomplish the same thing, i.e., would it reduce the fragmentation even if we don't shrink the file and only reorganize the pages?

The "Reorganize Pages" option does something else. It logically reorganizes the pages within the file, but it does not adjust the size, type or fragmentation of disk extents as allocated by the drive subsystem.

For example, say I have a file that has 10,000 separate allocations/"chunks" of 2MB each (a total file size of 20GB) on a drive. Say also that the data is out of logical order, such that key 1000 is in allocation 1, key 400 is in allocation 2, key 20 is in allocation 3, etc.

Physical defragging would reduce the number of allocations from 10,000 to a much smaller number, possibly as few as one. That reduces the overhead on the drive because fewer extent lookups and head seeks are needed to read all the data.

The logical reorganization would put the keys in logical order -- key 20 in allocation 1, key 400 in allocation 2, key 1000 in allocation 3, etc. -- but there would still be 10,000 separate physical file allocations.
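The distinction above can be sketched with a small toy model. Everything here is illustrative (the allocation counts and key layout are made up to mirror the 10,000-chunk example), not SQL Server internals:

```python
# Toy model contrasting physical defragmentation (fewer on-disk
# allocations) with logical reorganization (keys put in order).
import random

random.seed(42)
keys = list(range(10_000))
random.shuffle(keys)

# The file as it sits on disk: 10,000 separate allocations,
# keys stored out of logical order.
file_allocations = list(enumerate(keys))

# Logical reorganization (like "Reorganize Pages"): the keys end up
# in order, but the number of separate physical allocations is unchanged.
logically_reorganized = list(enumerate(sorted(keys)))

# Physical defragmentation (like contig.exe): the same data moves into
# one contiguous allocation; the logical key order is untouched.
physically_defragged = [(0, keys)]

print(len(file_allocations), len(logically_reorganized), len(physically_defragged))
```

The point of the sketch: each fix addresses a different axis of "fragmentation," which is why the Shrink File wizard's option cannot substitute for a disk-level defragmenter.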


Btw, I use the (free) contig.exe utility to physically defragment, and I have never had an issue with it, even on active dbs. Naturally, I wouldn't recommend using it on extremely active dbs, but in theory even that should be OK.


I really appreciate your information, Scott.

The data file fragmentation occurred (I believe) because this database was set to the default autogrowth increment of 1 MB. I have increased that, of course, but we still have the thousands of segments, or allocations as you call them.
There are only 1 to 4 active connections to this database. I will give the Contig.EXE utility a try.
Thanks. John

Yep, I had exactly the same situation on many dbs I inherited. I ran I/O tests to verify, and contig.exe did give me a decent gain in overall performance, probably because there were literally thousands of separate file extents.
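Some quick arithmetic shows how a 1 MB autogrow increment produces thousands of fragments. The 1 MB default and the 20 GB final size come from this thread; the 100 MB starting size and the 256 MB alternative increment are hypothetical:

```python
# How many growth events does a small autogrow increment cause?
initial_size_mb = 100          # hypothetical starting size
final_size_mb = 20 * 1024      # 20 GB, matching the earlier example
small_growth_mb = 1            # the default increment this thread hit
larger_growth_mb = 256         # a hypothetical, more sensible increment

# One growth event per increment; each event can land as a new
# on-disk fragment if the drive can't extend the file contiguously.
events_small = (final_size_mb - initial_size_mb) // small_growth_mb
events_larger = -(-(final_size_mb - initial_size_mb) // larger_growth_mb)  # ceiling division

print(events_small, events_larger)
```

Under these assumptions, the 1 MB increment triggers over twenty thousand growth events versus a few dozen for the larger one, which is consistent with "thousands of separate file extents" even after defragmenting once.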

"Have you tried using defragmenter tools? Stop the services and disconnect the active connections.

The tools below can be used to defragment the SQL Server database files:

  1.   Contig.exe
    
  2.   Diskeeper (Windows, $29.99)
    
  3.   MyDefrag (Formerly JKDefrag) (Windows, Free)
    
  4.   Defraggler (Windows, Free)
    
  5.   Auslogics Disk Defrag (Windows, Free)
    
  6.   PerfectDisk (Windows, $29.99)
    

"


It is possible and safe from a technical standpoint.
But it causes a heavy I/O load, and it can dramatically slow down the response times of your databases while it runs.