I currently use the following "rarely used indexes" script to find possible candidates for deletion...
SELECT o.name AS table_name
, i.name AS index_name
, reads = s.user_seeks + s.user_scans + s.user_lookups
, writes = s.user_updates
, rows = (SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.object_id = p.object_id)
, CASE
      WHEN s.user_updates < 1 THEN 100
      ELSE 1.00 * (s.user_seeks + s.user_scans + s.user_lookups) / s.user_updates
  END AS reads_per_write
, 'DROP INDEX ' + QUOTENAME(i.name)
  + ' ON ' + QUOTENAME(c.name) + '.' + QUOTENAME(OBJECT_NAME(s.object_id)) AS [drop statement]
FROM sys.dm_db_index_usage_stats s
INNER JOIN sys.indexes i ON i.index_id = s.index_id AND s.object_id = i.object_id
INNER JOIN sys.objects o on s.object_id = o.object_id
INNER JOIN sys.schemas c on o.schema_id = c.schema_id
WHERE OBJECTPROPERTY(s.object_id,'IsUserTable') = 1
AND s.database_id = DB_ID()
AND i.type_desc = 'nonclustered'
AND i.is_primary_key = 0
AND i.is_unique_constraint = 0
AND i.name LIKE '%idx[_]%' -- [_] escapes the underscore, which is otherwise a single-character wildcard
AND (SELECT SUM(p.rows) FROM sys.partitions p WHERE p.index_id = s.index_id AND s.object_id = p.object_id) > 10000
ORDER BY reads_per_write, writes DESC
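One caveat I already account for: these counters reset whenever the instance restarts, so before trusting the numbers I check how long the stats have actually been accumulating. A quick check, assuming SQL Server 2008 or later where sqlserver_start_time is exposed:

-- How many days of usage stats do we actually have?
SELECT sqlserver_start_time
, DATEDIFF(DAY, sqlserver_start_time, GETDATE()) AS days_of_stats
FROM sys.dm_os_sys_info;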
The problem is that some of the indexes that have been used only a handful of times in two weeks could still be very important. Just last week I deleted one that was rarely used but whose absence caused considerable locking in our database until I recreated it.
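To limit the blast radius in the meantime, I've considered generating ALTER INDEX ... DISABLE statements instead of DROP, since a disabled index keeps its metadata and can be brought back with a rebuild. A sketch of the idea (idx_example and dbo.SomeTable are hypothetical names, not from my schema):

-- Disable instead of drop; the index definition is kept, so recovery is just a rebuild.
ALTER INDEX idx_example ON dbo.SomeTable DISABLE;

-- If blocking reappears, re-enable it without having to rediscover the definition:
ALTER INDEX idx_example ON dbo.SomeTable REBUILD;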
Is there a modification to the above script that would show the impact, not just the usage count? And, if not, is there a better way to approach this problem?
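For what it's worth, I've started poking at sys.dm_db_index_operational_stats, which tracks lock waits per index, but I haven't worked out how to fold it into the script above. Roughly what I've been running (a sketch; I'm not sure these are the right columns to weigh):

-- Per-index lock waits for the current database, worst offenders first.
SELECT OBJECT_NAME(os.object_id) AS table_name
, i.name AS index_name
, os.row_lock_wait_count
, os.row_lock_wait_in_ms
, os.page_lock_wait_in_ms
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) os
JOIN sys.indexes i ON i.object_id = os.object_id AND i.index_id = os.index_id
ORDER BY os.row_lock_wait_in_ms DESC;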