I'm currently working on a project where the core system is still in development. The system processes high volumes of data: certain processes generate over a million transactions in under 20 minutes, which translates to >1M DB writes in the same period.
During our testing, which includes simulating several weeks' worth of processing, we're seeing the DB size grow significantly as transaction history accumulates.
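For context, this is roughly how we're checking where that growth is concentrated. A minimal sketch, assuming SQL Server (which the page-compression suggestion below implies); it uses only standard DMVs and assumes no particular table names:

```sql
-- Top 20 tables by reserved space, to see where the growth is going.
-- row_count is summed only over the heap/clustered index (index_id 0 or 1)
-- so nonclustered indexes don't inflate the count.
SELECT TOP (20)
    s.name AS schema_name,
    t.name AS table_name,
    SUM(ps.reserved_page_count) * 8 / 1024.0 AS reserved_mb,  -- pages are 8 KB
    SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END) AS row_count
FROM sys.dm_db_partition_stats AS ps
JOIN sys.tables  AS t ON t.object_id = ps.object_id
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
GROUP BY s.name, t.name
ORDER BY reserved_mb DESC;
```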
To curb the space consumed by the DB as transaction history grows towards anticipated go-live volumes, it's been suggested that applying page compression across the entire DB (all tables) will save space.
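For what it's worth, SQL Server can estimate the savings per table without actually rebuilding anything, so that claim is testable up front. A minimal sketch; `dbo.TransactionHistory` is a placeholder name, not one of our actual tables:

```sql
-- Estimate how much page compression would save for one table,
-- without actually compressing it. Substitute your own large tables.
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'TransactionHistory',  -- placeholder table name
    @index_id         = NULL,   -- all indexes on the table
    @partition_number = NULL,   -- all partitions
    @data_compression = 'PAGE';
```

The procedure samples the table into tempdb and compresses the sample, so it's cheap enough to run against a populated test copy before committing to anything.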
There's a split in opinion amongst the team:
- Some want full compression applied now, saying it will save disk space and may even improve performance by reducing I/O.
- Others believe compression would hurt performance, especially during write-heavy processing, and argue that it shouldn't be considered until all functionality of the system is complete. They also maintain that the read/write behaviour of each table would need thorough analysis before that table could be a candidate for compression (a sketch of that kind of analysis follows this list).
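For the per-table analysis the second camp is describing, the usual starting point is the read/write profile of each index: indexes that are mostly scanned tend to benefit from page compression, while heavily updated ones pay the CPU cost on every write. A hedged sketch using sys.dm_db_index_operational_stats; the interpretation thresholds are judgment calls, not hard rules:

```sql
-- Rough read/write profile per index since the last service restart.
-- High scan_pct with few writes suggests a good page-compression
-- candidate; write-heavy indexes may suffer under compression.
SELECT
    OBJECT_NAME(os.object_id) AS table_name,
    i.name AS index_name,
    os.range_scan_count,
    os.leaf_insert_count + os.leaf_update_count + os.leaf_delete_count AS write_ops,
    CAST(100.0 * os.range_scan_count
         / NULLIF(os.range_scan_count + os.leaf_insert_count
                  + os.leaf_update_count + os.leaf_delete_count, 0)
         AS DECIMAL(5, 2)) AS scan_pct
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS os
JOIN sys.indexes AS i
    ON i.object_id = os.object_id AND i.index_id = os.index_id
WHERE OBJECTPROPERTY(os.object_id, 'IsUserTable') = 1
ORDER BY write_ops DESC;
```

Note these counters reset on service restart, so they're only meaningful after a representative simulation run.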
As you can see, opinion is quite divided, so all comments and advice would be very welcome.
If you have any questions or comments, please let me know.