I have a table with twenty columns that contains about 4 million records (40 GB). Each record lives in the table for only about 10 days before it is discarded, and each record has a datetime field that identifies when it was inserted. The table has 1 clustered and 4 non-clustered indexes (1 GB).
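For reference, the table is shaped roughly like this; the object and column names (dbo.EventLog, InsertedAt, and so on) and the index definitions are placeholders I'm using for this post, not the real schema:

    -- Simplified stand-in for the real table; names are placeholders.
    CREATE TABLE dbo.EventLog (
        EventId     BIGINT IDENTITY(1,1) NOT NULL,
        InsertedAt  DATETIME2 NOT NULL,   -- when the row was added; drives the purge
        -- ... roughly 18 more payload columns ...
        CONSTRAINT PK_EventLog PRIMARY KEY CLUSTERED (EventId)
    );

    -- One of the four non-clustered indexes (the other three are omitted here).
    CREATE INDEX IX_EventLog_InsertedAt ON dbo.EventLog (InsertedAt);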
Currently we have a task that runs every minute and prunes the old data based on that datetime column. Each run deletes approximately a thousand records.
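The job is essentially just a single delete keyed on the datetime column, something like this (using the placeholder names from above):

    -- Runs every minute; removes everything older than the 10-day retention window.
    DELETE FROM dbo.EventLog
    WHERE InsertedAt < DATEADD(DAY, -10, SYSUTCDATETIME());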
At the same time, a different process is inserting new data into this table at about the same rate as the deletions, roughly 1,000 records per minute.
This has worked well to keep the table at a fairly stable size, but we are starting to see problems: occasionally the CPU climbs to 80-90%, or we hit locking issues that momentarily block updates.
The issue has started to worsen, so we are looking for better ways to manage the data churn in this table. I don't know for sure, but I suspect all of the inserts and deletes are generating enough index maintenance to push our server to its limits.
I can't control how many records flow into the table, but I can control when the data gets deleted. The inserts are fairly constant throughout the day, maybe slowing slightly in the middle of the night. Would batching up the deletes to run every 6 or 12 hours be more efficient than what I'm doing now? The table would certainly grow larger between runs, but I'm thinking that with fewer index updates I might be able to avoid the other performance issues.
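To make that concrete, I'm picturing something along these lines for the 6-hourly job. Deleting in small chunks rather than one big statement is my assumption about how it would have to work (to keep each transaction short and avoid escalating to a table lock); I haven't tried it yet:

    -- Hypothetical 6-hourly cleanup: delete in small batches so locks are
    -- taken and released quickly instead of held for one huge transaction.
    DECLARE @rows INT = 1;
    WHILE @rows > 0
    BEGIN
        DELETE TOP (5000) FROM dbo.EventLog
        WHERE InsertedAt < DATEADD(DAY, -10, SYSUTCDATETIME());

        SET @rows = @@ROWCOUNT;
    END;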
Thoughts?
Thanks,
Jeff
SQL Server 2014 Enterprise