Rebuilding a heap - need advice

Hi, guys!

For the last 2 weeks I've been studying options for rebuilding a heap. Our situation is:

  • ETL job running every day
  • number of rows inserted into the heap table: more than 20 million
  • on this table only INSERT and DELETE actions are taken
  • current status of table:
    a) almost 4 billion rows
    b) data size = 160 GB
    c) index size > 500 GB (4 indexes)

I want to implement Ola Hallengren's Index and Statistics Maintenance solution. The indexes are more than 99% fragmented, and I don't think anyone has ever implemented a maintenance plan on this database.
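
For reference, a minimal sketch of the kind of query that produces those fragmentation numbers (dbo.BigHeap is a placeholder table name):

    -- Fragmentation and page counts for every index on one table
    SELECT i.name AS index_name,
           ps.index_type_desc,
           ps.avg_fragmentation_in_percent,
           ps.page_count
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID(N'dbo.BigHeap'), NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i
      ON i.object_id = ps.object_id
     AND i.index_id = ps.index_id;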

Studying and reading all kinds of articles on this, I really don't know how to proceed (rough T-SQL for each option is sketched below):
a) create a clustered index on the table and then drop it; after that, run Ola's maintenance job on the heap every 2-3 days
b) drop all non-clustered indexes and create them from scratch
c) rebuild the indexes on the heap
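
In T-SQL, the three options would look roughly like this (all object names are placeholders; a real clustered index needs a carefully chosen key):

    -- a) create a clustered index, then drop it (rewrites the heap once,
    --    but dropping it turns the table back into a heap)
    CREATE CLUSTERED INDEX CIX_BigHeap_Temp ON dbo.BigHeap (LoadDate);
    DROP INDEX CIX_BigHeap_Temp ON dbo.BigHeap;

    -- b) drop each non-clustered index and recreate it from scratch
    DROP INDEX IX_BigHeap_Col1 ON dbo.BigHeap;
    CREATE NONCLUSTERED INDEX IX_BigHeap_Col1 ON dbo.BigHeap (Col1);

    -- c) rebuild the heap in place (SQL Server 2008+); this also rebuilds
    --    its non-clustered indexes, because the row locators change
    ALTER TABLE dbo.BigHeap REBUILD;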

What do you advise me to do? I know this subject has been discussed many times, here and elsewhere, but I can't decide which option is best.

Thank you!

Why is it a heap?

Also, consider Minion Reindex

@gbritton, the DB was already structured like this when I got here.
Perhaps they're using a heap because of the performance impact, since more than 20 million rows are inserted into this table every day.

Well, if the rows are not inserted in clustered index (CI) order, then yes, that could be an issue. (Even if they are inserted in CI order, you could get a hotspot.) Usually I wouldn't recommend a heap except for staging purposes, either into the DB or out of it (e.g. perhaps for reporting -- it depends).

Note that when you read the data in index order (whatever indexes you have), you run the risk of having SQL jump all over the disk for it, since there is no ordering imposed -- even between two logically adjacent rows.

Given the size of your table, I'd be looking hard at partitioning by date or some other reasonable business column. I'd make that a priority and then address rebuilding your indexes.
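
To illustrate (purely hypothetical names and boundary dates), monthly date partitioning would look something like:

    -- Hypothetical monthly partition function and scheme
    CREATE PARTITION FUNCTION pf_LoadDate (date)
        AS RANGE RIGHT FOR VALUES ('2016-01-01', '2016-02-01', '2016-03-01');

    CREATE PARTITION SCHEME ps_LoadDate
        AS PARTITION pf_LoadDate ALL TO ([PRIMARY]);

    -- New rows land in the partition matching their LoadDate; old
    -- partitions can later be switched out or purged cheaply.
    CREATE CLUSTERED INDEX CIX_BigHeap_LoadDate
        ON dbo.BigHeap (LoadDate)
        ON ps_LoadDate (LoadDate);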


I would suggest adding a clustered index. Heaps are bad except for perhaps staging tables.

Here are some notes about heaps:
Deletes leave the empty space behind unless a table lock is taken during the delete, either via a table hint or through lock escalation. Empty space takes up room in backups, restores, and memory, and if you scan the heap, you must also scan the empty pages even when they contain no rows.

Updates can cause forwarded records if the updated data no longer fits on the page. A forwarding pointer is left behind to keep track of where the data now lives (a different page). This means extra, random IO.

Heaps cause fragmentation, extra reads and sometimes a huge waste of space.
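
Hedged examples of what that looks like in practice (dbo.BigHeap and the WHERE clause are placeholders):

    -- Forwarded records and empty space show up in the physical stats
    -- ('DETAILED' scans the whole heap, so it is expensive on a big table)
    SELECT forwarded_record_count, avg_page_space_used_in_percent
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID(N'dbo.BigHeap'), 0, NULL, 'DETAILED');

    -- A table lock lets the delete deallocate the emptied pages
    DELETE FROM dbo.BigHeap WITH (TABLOCK)
    WHERE LoadDate < '2015-01-01';

    -- Rebuilding the heap reclaims space and removes forwarding pointers
    ALTER TABLE dbo.BigHeap REBUILD;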


20M rows a day isn't all that much for SQL Server, particularly if they're spread out across all 24 hours of the day.

You need to review the index usage stats to see how often, and in what ways, the existing indexes are being used. You also need to look at the missing index stats to see whether an index that is really needed is missing.
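
A sketch of the DMV queries that expose this (these stats reset on restart, so make sure the server has been up long enough to be representative; dbo.BigHeap is a placeholder):

    -- How the existing indexes are actually being used
    SELECT i.name, s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
    FROM sys.indexes AS i
    LEFT JOIN sys.dm_db_index_usage_stats AS s
           ON s.object_id = i.object_id
          AND s.index_id = i.index_id
          AND s.database_id = DB_ID()
    WHERE i.object_id = OBJECT_ID(N'dbo.BigHeap');

    -- Indexes the optimizer wished it had
    SELECT d.equality_columns, d.inequality_columns, d.included_columns,
           gs.user_seeks, gs.avg_user_impact
    FROM sys.dm_db_missing_index_details AS d
    JOIN sys.dm_db_missing_index_groups AS g
      ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS gs
      ON gs.group_handle = g.index_group_handle
    WHERE d.object_id = OBJECT_ID(N'dbo.BigHeap');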

From that info, determine the best clustered index for that table, including consideration of the WHERE conditions used to DELETE rows.

Finally, you will need to drop all the nonclus indexes, create the best clus index, then add any additional nonclus indexes that are actually needed and will be used. If you have Enterprise edition, you can rebuild online and keep the table available for use. If not, you have a much bigger challenge on how best to put the new indexes into place.
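
On Enterprise, that sequence could be sketched like this (placeholder names; ONLINE = ON requires Enterprise edition):

    -- Drop the non-clustered indexes first so they aren't rebuilt twice
    DROP INDEX IX_BigHeap_Col1 ON dbo.BigHeap;

    -- Build the clustered index while the table stays available
    CREATE CLUSTERED INDEX CIX_BigHeap
        ON dbo.BigHeap (LoadDate, Id)
        WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);

    -- Re-add only the non-clustered indexes that will actually be used
    CREATE NONCLUSTERED INDEX IX_BigHeap_Col1
        ON dbo.BigHeap (Col1)
        WITH (ONLINE = ON);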


In a normal ETL-based system, the process of loading the table(s) would be:

  1. Disable all NC indexes
  2. Load the data
  3. Rebuild all indexes (including clustered index)

The rebuild step will enable the NC indexes as well as rebuild them. There is no reason to worry about using a process to identify which indexes should be rebuilt, since all indexes need to be rebuilt after loading the data -- especially when you are loading 20 million rows.
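
In T-SQL terms (placeholder names), that pattern is something like:

    -- 1. Disable each non-clustered index before the load
    --    (do not disable the clustered index -- that makes the table unreadable)
    ALTER INDEX IX_BigTable_Col1 ON dbo.BigTable DISABLE;
    ALTER INDEX IX_BigTable_Col2 ON dbo.BigTable DISABLE;

    -- 2. Load the data (bulk insert, ETL tool, etc.)

    -- 3. Rebuild everything; this also re-enables the disabled indexes
    ALTER INDEX ALL ON dbo.BigTable REBUILD;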

Thank you for replying, guys!

First of all, I don't have space on the staging server at the moment to restore a production backup for testing. My thought was to implement a clustered index, monitor how the ETL job reacts, and then make a realistic plan for my manager. In a couple of months I should have adequate storage just for testing. Till then... I have to implement a temporary solution.

Second, I can postpone the ETL job for 24 hours at most. I really don't know how long it will take to create a clustered index and, after that, one or more non-clustered indexes. At the moment the whole ETL process takes around 9 hours, and after that we have to run various reports on the DB and more. From what I've read, adding a clustered index is not ideal when you're dealing with daily inserts... and, from time to time, with deleting data.

In this case, as a temporary solution, are you saying it would be a bad idea to rebuild the heap as it is? I've never encountered heaps and index sizes like these before, and my maintenance window is limited.

What problem are you trying to solve with your heap?

TaraKizer, I hope to improve the performance of the ETL job and bring some order to that table.

As I said, I don't think anyone has implemented a maintenance plan so far, and it's time to do it. Restoring a backup for testing isn't possible at the moment for lack of free space... and I don't know what impact a clustered index would have on that table.

I can't reclaim free space without creating a clustered index; I'm aware of that.

If you are worried about performance, I would definitely start with a clustered index. Picking the right one could help tremendously.

Not if they aren't being used! That is the point of first checking the existing index usage and missing index stats: they reflect a period of actual activity against the table, and they will disappear the second the indexes are dropped.

Hi, guys!

Last week I managed to rebuild/reorganize the indexes as I said I would.
Around 177 GB of free space became available after the whole operation, and after 6 days of ETL the fragmentation rate is at about 3%.

The improvement to the ETL job was significant: the next day its duration decreased by one hour. After that it started to rise again, but not drastically.

I decided to schedule the index optimization job once a week, based on that 3% fragmentation.
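
For anyone curious, a weekly call to Ola's IndexOptimize can be parameterized along these lines (parameter names are from his documentation; the values shown are his defaults, not necessarily what this job uses):

    EXECUTE dbo.IndexOptimize
        @Databases = 'USER_DATABASES',
        @FragmentationLow = NULL,
        @FragmentationMedium = 'INDEX_REORGANIZE,INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
        @FragmentationLevel1 = 5,
        @FragmentationLevel2 = 30;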

The next thing I want to test is adding a clustered index, but only once the new servers are available.

Thanks, guys!

I missed the original discussion (some really good points raised there :slightly_smiling: )

So a bit late now, but as a "temporary fix", would there have been any mileage in BCPing the table out, then TRUNCATEing it and BCPing the data back in again, to clean up the table and perhaps release enough disk space that there would then be room on the server to restore a TEMP copy of the DB (or at least this one table) for further testing?

(The BCP action would need scheduled downtime when other folk, and scheduled jobs, were not accessing the DB)
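
A rough sketch of that round trip (server name, database, and file paths are placeholders; native format (-n) keeps the data exact):

    rem Export the table in native format
    bcp MyDb.dbo.BigHeap out D:\dump\BigHeap.dat -n -S MyServer -T

    rem ... TRUNCATE TABLE dbo.BigHeap (and ideally drop/disable indexes) here ...

    rem Reload in batches, with a table lock for minimal logging
    bcp MyDb.dbo.BigHeap in D:\dump\BigHeap.dat -n -S MyServer -T -b 1000000 -h "TABLOCK"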

Performance problems are rarely due to fragmentation; they're more likely to be a statistics issue. When you rebuilt the indexes, the statistics were updated. Instead of rebuilding indexes, just update the stats and see if that solves your pain point.
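
For instance (placeholder table name), rather than a full rebuild:

    -- Update all statistics on the table with a full scan
    UPDATE STATISTICS dbo.BigHeap WITH FULLSCAN;

    -- Or, more cheaply, sample-update stale stats across the database
    EXEC sp_updatestats;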

A 3% threshold is also extremely low.