SQL Server 2012 - DB Compression and Performance

Hi All,
I'm currently working on a project where the core system is still in development. The system processes high volumes of data: certain processes generate over a million transactions in under 20 minutes, translating to more than a million DB writes in the same period.

During our testing, which includes simulating several weeks' worth of processing, we're seeing the DB size increase quite significantly as transaction history grows.

In an attempt to curb the space consumed by the DB (as Transaction History increases towards anticipated Go Live volumes) it's been suggested that applying DB Compression across the entire DB (All tables, Page Compression) will save space.

There's a split in opinion amongst the team:

  1. Some want full compression now, saying it will save disk space and improve performance.
  2. Others believe that would impact performance, especially when processing results in a lot of DB writes, and that DB Compression isn't an option that should be taken until all functionality of the system is complete. Furthermore, they state thorough analysis of the behaviour of each table would be needed before it could be considered for compression.

As you can see it’s quite split in opinion – all comments/advice would be very welcome.

Any questions or comments please let me know.


I would want to test Compression from Day One. So might as well implement it from Day One too.

But I'll eat my hat (wouldn't be the first time!!) if performance turns out to be OK with compression.

Compressing backups would be fine though: a significant reduction in I/O, both for backup AND for restore (handy during disaster recovery when time is of the essence), at very little / marginal increase in CPU. (There's a single server-level flag you can set which changes all backups to default to compressed.)
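To illustrate that flag and the per-backup alternative (database name and path here are just placeholders):

```sql
-- Server-level default: all backups compressed unless a backup overrides it
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;

-- Or compress a single backup explicitly
BACKUP DATABASE MyDb
TO DISK = N'D:\Backups\MyDb.bak'
WITH COMPRESSION;
```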

There are other ways to tackle this, for example partitioning the data. That won't help overall size, but it might provide a management process which facilitates improvements in one way or another. Maybe you can move older, less-likely-to-be-needed partitions to slower / cheaper storage, or (outside my knowledge) maybe you can compress JUST those older partitions.
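For what it's worth, SQL Server does let you compress individual partitions of a partitioned table. A minimal sketch, with a hypothetical table and partition number:

```sql
-- Rebuild just one (older) partition with page compression,
-- leaving the current/hot partitions uncompressed
ALTER TABLE dbo.TransactionHistory
REBUILD PARTITION = 1
WITH (DATA_COMPRESSION = PAGE);
```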

I'll be interested to hear what others think, and what you eventually decide to do.

As long as you have sufficient CPU available, page compression normally helps performance greatly overall.

There might be a few cases where only row compression should be used, but those cases come up less than 50% of the time.
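You can estimate the savings for each option per table before committing to either, using the built-in estimator (the table name below is a placeholder):

```sql
-- Estimate space savings under PAGE compression;
-- run again with @data_compression = 'ROW' to compare
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'TransactionHistory',
     @index_id         = NULL,   -- all indexes
     @partition_number = NULL,   -- all partitions
     @data_compression = 'PAGE';
```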

That includes clustered and nonclustered indexes.
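Applying it to both is straightforward; compressing the table itself compresses its clustered index, and each nonclustered index is rebuilt separately (object names here are hypothetical):

```sql
-- Page-compress the table (i.e. its clustered index / heap)
ALTER TABLE dbo.TransactionHistory
REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Nonclustered indexes are compressed independently
ALTER INDEX IX_TransactionHistory_AccountID
ON dbo.TransactionHistory
REBUILD WITH (DATA_COMPRESSION = PAGE);
```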

So I'm clearly in agreement with the "1)" faction on your team.

Interesting, thanks Scott. So do you think this should be a starting point? i.e. Use Page Compression by default - unless [insert some specific, rare, circumstances where it is not appropriate]

(I've got my hat at the ready!)

Yes. In this case, I agree with MS's recommendation to mostly use compression across the board. I will note, though, that in the past there was often vast leftover CPU time anyway. That could be less true now if you specifically buy fewer cores to cut down on licensing costs. Generally, though, I/O is the real performance killer, and the compression reduces I/O so much it's still a huge net gain. And I suspect the compression algorithms and performance will get better and better over time.

Finally, I do need to acknowledge that rebuilding a compressed table is a much longer process than rebuilding a non-compressed one. Therefore, it's even more critical than it already was to properly key the clustered index in particular (the table itself) rather than just "defaulting" to clustering by identity. There should never be a "default" clustering key; it should always be carefully chosen to match that table's specific usage and requirements. That way, the clustered index won't have to be rebuilt just because it was changed for tuning, after you realize that identity was a very poor choice for clustering on that specific table.
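To make that concrete, here's a sketch of choosing the clustering key deliberately instead of letting the identity column cluster by default (table and column names are made up for illustration):

```sql
-- Keep the identity as the primary key, but NONCLUSTERED,
-- so the clustering key can reflect actual access patterns
CREATE TABLE dbo.TransactionHistory
(
    TransactionID   bigint IDENTITY NOT NULL,
    AccountID       int            NOT NULL,
    TransactionDate datetime2      NOT NULL,
    Amount          decimal(19,4)  NOT NULL,
    CONSTRAINT PK_TransactionHistory
        PRIMARY KEY NONCLUSTERED (TransactionID)
);

-- Cluster on how the data is actually written and queried,
-- and compress it from the start
CREATE CLUSTERED INDEX CIX_TransactionHistory
ON dbo.TransactionHistory (TransactionDate, TransactionID)
WITH (DATA_COMPRESSION = PAGE);
```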

Very helpful thanks.

(Hat not very tasty ... I need to avoid having to eat any more!)