How to know if rarely used indexes are important?

I currently use the following "rarely used indexes" script to find possible candidates for deletion...

```sql
SELECT
      o.name
    , indexname = i.name
    , i.index_id
    , reads  = s.user_seeks + s.user_scans + s.user_lookups
    , writes = s.user_updates
    , rows   = (SELECT SUM(p.rows) FROM sys.partitions p
                WHERE p.index_id = s.index_id AND p.object_id = s.object_id)
    , CASE
          WHEN s.user_updates < 1 THEN 100
          ELSE 1.00 * (s.user_seeks + s.user_scans + s.user_lookups) / s.user_updates
      END AS reads_per_write
    , 'DROP INDEX ' + QUOTENAME(i.name)
        + ' ON ' + QUOTENAME(c.name) + '.' + QUOTENAME(OBJECT_NAME(s.object_id)) AS [drop statement]
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i ON i.index_id = s.index_id AND s.object_id = i.object_id
INNER JOIN sys.objects o ON s.object_id = o.object_id
INNER JOIN sys.schemas c ON o.schema_id = c.schema_id
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  AND s.database_id = DB_ID()
  AND i.type_desc = 'NONCLUSTERED'
  AND i.is_primary_key = 0
  AND i.is_unique_constraint = 0
  AND i.name LIKE '%idx_%'
  AND (SELECT SUM(p.rows) FROM sys.partitions p
       WHERE p.index_id = s.index_id AND p.object_id = s.object_id) > 10000
ORDER BY reads_per_write, writes DESC
```

The problem is that some of the indexes that have only been used 10 times in two weeks could still be very important. I just deleted one last week that was rarely used, but doing so resulted in considerable locking in our database until I recreated it.

Is there a modification to the above script that would show the impact, not just the usage count? And, if not, is there a better way to approach this problem?

Thanks.

No easy way to find an index that is only used by a once-a-month report, but which makes that report run in seconds instead of hours ...

So your method, which I refer to as the "Scream Test" :slight_smile: is one way.

You could DISABLE an index when you think it is not used, rather than dropping it. Then you can reactivate it using REBUILD. That saves you having to find a suitable script to re-create the index from scratch (which you might not have, or which might not be the correct/current version), and it will probably bring the index back online more quickly than if you had dropped it.
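As a sketch of that approach (the index and table names here are placeholders, not from your schema):

```sql
-- Disable the index: its definition stays in the metadata, but it is no
-- longer maintained on writes and the optimizer will not use it.
ALTER INDEX idx_Example ON dbo.ActivityEnrollments DISABLE;

-- If something "screams", re-enable it with a REBUILD -- no need to dig
-- up the original CREATE INDEX script.
ALTER INDEX idx_Example ON dbo.ActivityEnrollments REBUILD;
```

Note that disabling a clustered index makes the whole table inaccessible, so only do this for nonclustered indexes.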

I'll be interested to hear if other folk have better suggestions :slight_smile:

Have you got some problem that makes you want to get rid of some indexes?

Hi Kristen,

Because we don't currently have a separate OLAP database, our OLTP database serves as both the transactional and reporting database. The table in question is called "ActivityEnrollments" and is massive. It is heavily inserted into and updated. It also has 20 indexes on it, all of which are heavily used... except one that only had 10 reads. We do seem to have general issues with insert/update performance, especially for mass inserts and updates, so I always strive to eliminate unneeded indexes. Usually I don't have a problem, but the one I deleted ended up causing a fair amount of trouble: the SELECT statement that had used it was now doing an "INDEX SPOOL" in the query plan which, from what I understand, means it was creating an index on the fly, and that caused huge blocking issues on a critical table.

When I run my "Missing Indexes" script, I get an estimated "performance impact". So, I guess what I'm asking is if there is some way of tapping into the fields that go into calculating that "performance impact" in order to include that in the rarely used index script.
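For what it's worth, the "performance impact" figure in the usual missing-index scripts is derived from the `sys.dm_db_missing_index_*` DMVs, typically with the formula `avg_total_user_cost * (avg_user_impact / 100) * (user_seeks + user_scans)`. A rough sketch of a query exposing those raw ingredients (this may differ from whatever script you are running, and note these DMVs only describe indexes the optimizer *wished* it had, so they can't directly score your existing rarely used indexes):

```sql
SELECT
      OBJECT_NAME(d.object_id, d.database_id) AS table_name
    , d.equality_columns
    , d.inequality_columns
    , d.included_columns
    , gs.user_seeks
    , gs.avg_total_user_cost
    , gs.avg_user_impact
    -- the commonly used "estimated impact" calculation:
    , gs.avg_total_user_cost * (gs.avg_user_impact / 100.0)
        * (gs.user_seeks + gs.user_scans)      AS estimated_impact
FROM sys.dm_db_missing_index_details d
INNER JOIN sys.dm_db_missing_index_groups g
        ON g.index_handle = d.index_handle
INNER JOIN sys.dm_db_missing_index_group_stats gs
        ON gs.group_handle = g.index_group_handle
WHERE d.database_id = DB_ID()
ORDER BY estimated_impact DESC;
```

One indirect use: after disabling a candidate index, if it mattered, it may soon show up in this missing-index output.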

The performance problems may not be solely related to excess indexes ... of course the conventional wisdom is that more indexes = slower Inserts and Updates.

You could try disabling indexes (but not the clustered index, nor any index that is required for Referential Integrity checking) during Bulk Import, and then Rebuild them afterwards. You'll then find out how long that takes; the rebuild might be unacceptably long (assuming you don't gain much from disabling them), but you might wind up a net winner, particularly if the bulk import finishes much more quickly and the system is therefore available sooner, even before the indexes are rebuilt.

If disabling the indexes makes no/little difference to the Bulk Import then the problem is elsewhere, so in a sense that would be useful info to know. For example, it might be that there are really inefficient triggers on the table, or something like that.

You might also find that there are some infrequently used indexes which could be rebuilt immediately before the task that needs it, and then disabled again afterwards.

The problem is that the indexes I'm most concerned about are on massive tables and, since we aren't using Enterprise Edition, a rebuild locks the table and in the case of "ActivityEnrollment" takes 5 minutes to rebuild.

We've been talking about going to Enterprise Edition for a long time but given the number of cores in use (64 I believe now) it is a huge financial hit. It might be time regardless....
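For reference, the Enterprise Edition feature in question is the online rebuild option, something like the sketch below (table name is from this thread; in Standard Edition this option simply raises an error):

```sql
-- Enterprise Edition only: concurrent reads and writes are allowed while
-- the index is rebuilt (short-lived locks are still taken at the start
-- and end of the operation).
ALTER INDEX ALL ON dbo.ActivityEnrollments REBUILD WITH (ONLINE = ON);
```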

I hadn't thought of that.

Would a drop and (scripted) re-create of the index suffice and not lock the table?

Sadly recreates lock the table as well. It would seem that your "Scream Test" is the best solution here :wink:

1 Like