Same indexes across all databases?

We used to house all of our clients in a single shared database. Recently, we've started putting larger clients in their own databases. Our clients use the application differently and have different data sets. For example, one client might have 100,000 users and only 10 courses while another client may have 1,000 users and 20,000 courses. We are currently up to 40 databases. What is the best indexing strategy? If an index is considered "missing" as per my "missing indexes" script, should I apply it on the recommended database or on all of the databases, i.e. should I keep the indexing for all 40 databases in sync? If the answer is yes, what is the best way to do this, since it could take 4 hours to run against all databases (Standard Edition)?

Thanks.

I would keep all the indexes the same on all the databases. If currently some databases have only some of the indexes, I would create a script of all the indexes which conditionally creates each index if it does not already exist. If you think there is a possibility that an existing index might be incorrectly created (for example, it's an old version and the newer version has extra columns included), then I would drop all the indexes and recreate them to ensure that they are identical.
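
For example, a minimal sketch of the conditional create (table, column and index names here are hypothetical):

IF NOT EXISTS (
    SELECT 1
    FROM sys.indexes
    WHERE object_id = OBJECT_ID('dbo.Course')      -- hypothetical table
      AND name = 'idx_Course_ClientId'             -- hypothetical index
)
    CREATE NONCLUSTERED INDEX idx_Course_ClientId
        ON dbo.Course ( client_id )
        INCLUDE ( course_name );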

If recreating all the indexes would take too long, then you may need a combined strategy that only drops and recreates indexes that are not currently identical to the latest version. That would require checking which columns each index uses, etc., which is probably quite tedious for a one-off operation.
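
If you did want to attempt that comparison, something along these lines (hypothetical names again) would list the key and included columns of an existing index so they can be checked against the expected definition:

SELECT i.name AS index_name,
       ic.key_ordinal,
       ic.is_included_column,
       COL_NAME(ic.object_id, ic.column_id) AS column_name
FROM sys.indexes i
INNER JOIN sys.index_columns ic
    ON ic.object_id = i.object_id AND ic.index_id = i.index_id
WHERE i.object_id = OBJECT_ID('dbo.Course')        -- hypothetical table
  AND i.name = 'idx_Course_ClientId'               -- hypothetical index
ORDER BY ic.is_included_column, ic.key_ordinal;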

If some indexes are not required on some databases, then you might consider using filtered (conditional) indexes. For example, if you have a user-defined column which some clients use and some clients don't, then you could filter the index on that user-defined column not being NULL, so any client not using that column would have an empty index that does not waste any disc space.
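
As a sketch, assuming a hypothetical custom_field1 column that only some clients populate:

CREATE NONCLUSTERED INDEX idx_Course_CustomField1
    ON dbo.Course ( custom_field1 )
    WHERE custom_field1 IS NOT NULL;   -- filtered index: stays essentially empty for clients that never use the column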

If you think that each client needs a different indexing strategy, because of the nature and scope of their data, then you could entertain the idea of having different indexes on each database, but realistically I think you would need to be providing consultancy to individual clients in order to accommodate that sort of variation.

You should customize the indexes, as otherwise performance could suffer greatly. Index tuning should be based on actual usage, which will naturally differ between different clients.

But, you should not automatically add indexes based solely on a "missing indexes" script. A knowledgeable person needs to review and verify any index adds / changes / deletes before they are done.

Couple of thoughts:

Some indexes are (probably!) required on all systems - to support JOINs and common searches on non primary key / clustered index columns (a sketch of what I mean is below).

But beyond that I think that "index tuning for specific clients" is all well and good, but only if the cost is recoverable. Maybe it is :slight_smile: but if this is just a package for a fee, with no promise of performance-optimisation frills, then maybe there is nothing in the budget to accommodate hand-tuning of each individual client's database. And IF that IS the case then I think "all databases on the same indexes" would be my starting point.
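
For example (hypothetical table and column names), a plain nonclustered index on a foreign-key column so joins back to the parent table don't have to scan - a sketch, not a prescription:

CREATE NONCLUSTERED INDEX idx_Enrollment_CourseId
    ON dbo.Enrollment ( course_id );   -- hypothetical FK column joined back to dbo.Course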

You'd hope so, but unfortunately it's not that simple. Just because you create an index that matches a WHERE clause, or even more so a JOIN, does not mean that SQL will actually use the index for that WHERE/JOIN. Thus, you still need to review db by db. There is just no way around it.

Fortunately, though, it is probable that most dbs will benefit from the same clustered indexes, which are the most critical for performance anyway. Still, you should review them just to be sure.

We have hundreds of clients, each in their own database. About 85-90% have the same clustered index on the really big tables, but there are a few outliers that access the data differently.

How often does your codebase get updated, Scott? Our application is subject to patches/releases every two weeks, and I often see the need to modify at least a couple of indexes after each of these. Given the dynamic nature of the application and a rapidly growing client base, the load on our "DBA" resources is quite high. If we had Enterprise Edition we could update the indexes anytime, but since we have Standard Edition they have to do it late at night. I realize that is standard for these positions, but I hate to impose so frequently. How do you handle that?

Yeah, I do have the advantage of having Enterprise Edition, and thus I am able to, and do, make most of my changes during the day. It will take much longer having to stretch these changes out. I've found, though, that we have certain clients that have dead or near-dead days where they have almost no activity. Of course, the business you support may not have those types of days.
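
For context, the Enterprise-only advantage here is the ONLINE option, so a change like this sketch (hypothetical index) can run during the day; on Standard Edition the same statement has to run offline, hence the late-night windows:

CREATE NONCLUSTERED INDEX idx_Course_ClientId
    ON dbo.Course ( client_id )
    WITH ( ONLINE = ON, DROP_EXISTING = ON );   -- ONLINE = ON requires Enterprise Edition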

The most critical thing is to get the best clustered index on each table. After that, there will of course be an initial round of nonclustered index tuning. Assuming good DBAs, it should be relatively minor adjustments after that, which normally need to be done only infrequently and won't have a serious impact on overall performance.

I do fear that our clustered indexes are not ideal. When we rewrote our application 4 years ago we decided to use GUIDs as unique identifiers / primary keys (a decision I didn't support). We therefore created a "rowid" autonumber column that serves as the clustered index on every table. I don't know this for certain, but it seems that when your primary key is also your clustered index, SQL essentially stores all of the other columns with the primary key as "include" columns behind the scenes. But because our clustered index isn't our primary key, it doesn't have those other columns stored with it. I'm therefore seeing tons of "key lookups" in the plans.

You're right. While an identity / autonumber column is very commonly used by default as the clustered index on a table, it's a terrible idea. As a PK it's fine, as long as you don't make it clustered. Even a GUID PK is OK, although it's not ideal.
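
A rough sketch of what that looks like (hypothetical table): keep the identity as a nonclustered PK and put the clustered index on the columns most queries actually seek or range over:

CREATE TABLE dbo.CourseEnrollment (
    rowid       int IDENTITY(1, 1) NOT NULL,
    client_id   int NOT NULL,
    course_id   int NOT NULL,
    enrolled_on datetime NOT NULL,
    CONSTRAINT PK_CourseEnrollment PRIMARY KEY NONCLUSTERED ( rowid )
);

CREATE UNIQUE CLUSTERED INDEX CL_CourseEnrollment
    ON dbo.CourseEnrollment ( client_id, course_id, rowid );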

If you run the script below and post both results, I can give you some pointers on determining the best clustered indexes. Post it as an actual file, such as a spreadsheet, and don't use comma-delimited, as the data itself has commas in it. Tab or pipe delimited should be fine.

SET NOCOUNT ON;
SET DEADLOCK_PRIORITY -8;

IF OBJECT_ID('tempdb.dbo.#index_specs') IS NOT NULL
    DROP TABLE dbo.#index_specs
IF OBJECT_ID('tempdb.dbo.#index_missing') IS NOT NULL
    DROP TABLE dbo.#index_missing
IF OBJECT_ID('tempdb.dbo.#index_usage')  IS NOT NULL
    DROP TABLE dbo.#index_usage

CREATE TABLE dbo.#index_specs (
    object_id int NOT NULL,
    index_id int NOT NULL,
    min_compression int NULL,
    max_compression int NULL,
    drive char(1) NULL,
    alloc_mb decimal(9, 1) NOT NULL,
    alloc_gb AS CAST(alloc_mb / 1024.0 AS decimal(9, 2)),
    used_mb decimal(9, 1) NOT NULL,
    used_gb AS CAST(used_mb / 1024.0 AS decimal(9, 2)),
    rows bigint NULL,
    table_mb decimal(9, 1) NULL,
    table_gb AS CAST(table_mb / 1024.0 AS decimal(9, 2)),
    size_rank int NULL,
    approx_max_data_width bigint NULL,
    UNIQUE CLUSTERED ( object_id, index_id )
    )

DECLARE @list_missing_indexes bit
DECLARE @list_missing_indexes_summary bit
DECLARE @include_schema_in_table_names bit
DECLARE @table_name_pattern sysname
DECLARE @order_by smallint --1=table_name; 2=size; -2=size DESC;.
DECLARE @format_counts smallint --1=with commas, no decimals; 2/3=with K=1000s,M=1000000s, with 2=0 dec. or 3=1 dec. places;.
DECLARE @debug smallint

--NOTE: showing missing indexes can take some time; set to 0 if you don't want to wait.
SET @list_missing_indexes = 1
SET @list_missing_indexes_summary = 0
SET @include_schema_in_table_names = 0
SET @table_name_pattern = '%'
SET @order_by = -2
SET @format_counts = 3
SET @debug = 0

PRINT 'Started @ ' + CONVERT(varchar(30), GETDATE(), 120)

DECLARE @is_compression_available bit
DECLARE @sql varchar(max)

IF CAST(SERVERPROPERTY('ProductVersion') AS varchar(30)) LIKE '9%'
OR (CAST(SERVERPROPERTY('Edition') AS varchar(40)) NOT LIKE '%Developer%' AND 
    CAST(SERVERPROPERTY('Edition') AS varchar(40)) NOT LIKE '%Enterprise%')
    SET @is_compression_available = 0
ELSE
    SET @is_compression_available = 1

SET @sql = '
INSERT INTO #index_specs ( object_id, index_id,' +
    CASE WHEN @is_compression_available = 0 THEN '' ELSE '
    min_compression, max_compression,' END + '
    alloc_mb, used_mb, rows )
SELECT 
    base_size.object_id, 
    base_size.index_id, ' +
    CASE WHEN @is_compression_available = 0 THEN '' ELSE '
    base_size.min_compression,
    base_size.max_compression,' END + '
    (base_size.total_pages + ISNULL(internal_size.total_pages, 0)) / 128.0 AS alloc_mb,
    (base_size.used_pages + ISNULL(internal_size.used_pages, 0)) / 128.0 AS used_mb,
    base_size.row_count AS rows
FROM (
    SELECT
        dps.object_id,
        dps.index_id, ' +   
        CASE WHEN @is_compression_available = 0 THEN '' ELSE '
        ISNULL(MIN(p.data_compression), 0) AS min_compression,
        ISNULL(MAX(p.data_compression), 0) AS max_compression,' END + '    
        SUM(dps.reserved_page_count) AS total_pages,
	    SUM(dps.used_page_count) AS used_pages,
	    SUM(CASE WHEN dps.index_id IN (0, 1) THEN dps.row_count ELSE 0 END) AS row_count
    FROM sys.dm_db_partition_stats dps ' +
    CASE WHEN @is_compression_available = 0 THEN '' ELSE '
    INNER JOIN sys.partitions p /* WITH (NOLOCK)*/ ON
        p.partition_id = dps.partition_id ' END + '
    WHERE dps.object_id > 100
    GROUP BY 
        dps.object_id,
        dps.index_id
) AS base_size
LEFT OUTER JOIN (
    SELECT 
        it.parent_id,
        SUM(dps.reserved_page_count) AS total_pages,
	    SUM(dps.used_page_count) AS used_pages
    FROM sys.internal_tables it /* WITH (NOLOCK)*/
    INNER JOIN sys.dm_db_partition_stats dps /* WITH (NOLOCK)*/ ON 
        dps.object_id = it.parent_id
    WHERE it.internal_type IN ( ''202'', ''204'', ''211'', ''212'', ''213'', ''214'', ''215'', ''216'' )
    GROUP BY
        it.parent_id
) AS internal_size ON base_size.index_id IN (0, 1) AND internal_size.parent_id = base_size.object_id
'
IF @debug >= 1
    PRINT @sql
EXEC(@sql)

UPDATE [is]
SET approx_max_data_width = index_cols.approx_max_data_width
FROM #index_specs [is]
INNER JOIN (
    SELECT index_col_ids.object_id, index_col_ids.index_id, 
        SUM(CASE WHEN c.max_length = -1 THEN 16 ELSE c.max_length END) AS approx_max_data_width
    FROM (
        SELECT ic.object_id, ic.index_id, ic.column_id
            --,object_name(ic.object_id)
        FROM sys.index_columns ic
        WHERE
            ic.object_id > 100
        UNION
        SELECT i_nonclus.object_id, i_nonclus.index_id, ic_clus.column_id
            --,object_name(i_nonclus.object_id)
        FROM sys.indexes i_nonclus
        CROSS APPLY (
            SELECT ic_clus2.column_id
                --,object_name(ic_clus2.object_id),ic_clus2.key_ordinal
            FROM sys.index_columns ic_clus2
            WHERE
                ic_clus2.object_id = i_nonclus.object_id AND
                ic_clus2.index_id = 1 AND
                ic_clus2.key_ordinal > 0 --technically superfluous, since clus index can't have include'd cols anyway
        ) AS ic_clus
        WHERE
            i_nonclus.object_id > 100 AND
            i_nonclus.index_id > 1
    ) AS index_col_ids
    INNER JOIN sys.columns c ON c.object_id = index_col_ids.object_id AND c.column_id = index_col_ids.column_id
    GROUP BY index_col_ids.object_id, index_col_ids.index_id    
) AS index_cols ON index_cols.object_id = [is].object_id AND index_cols.index_id = [is].index_id

UPDATE ispec
SET table_mb = ispec_ranking.table_mb,
    size_rank = ispec_ranking.size_rank
FROM #index_specs ispec
INNER JOIN (
    SELECT *, ROW_NUMBER() OVER(ORDER BY table_mb DESC, rows DESC, OBJECT_NAME(object_id)) AS size_rank
    FROM (
        SELECT object_id, SUM(alloc_mb) AS table_mb, MAX(rows) AS rows
        FROM #index_specs
        GROUP BY object_id
    ) AS ispec_allocs        
) AS ispec_ranking ON
    ispec_ranking.object_id = ispec.object_id

IF @list_missing_indexes = 1
BEGIN
    SELECT
        IDENTITY(int, 1, 1) AS ident,
        DB_NAME(mid.database_id) AS Db_Name,
        CONVERT(varchar(10), GETDATE(), 120) AS capture_date,
        ispec.size_rank, ispec.table_mb,
        CASE WHEN @format_counts = 1 THEN REPLACE(CONVERT(varchar(20), CAST(dps.row_count AS money), 1), '.00', '')
             WHEN @format_counts = 2 THEN CAST(CAST(dps.row_count * 1.0 / CASE ca1.row_count_suffix 
                 WHEN 'M' THEN 1000000 WHEN 'K' THEN 1000 ELSE 1 END AS int) AS varchar(20)) + ca1.row_count_suffix
             WHEN @format_counts = 3 THEN CAST(CAST(dps.row_count * 1.0 / CASE ca1.row_count_suffix 
                 WHEN 'M' THEN 1000000 WHEN 'K' THEN 1000 ELSE 1 END AS decimal(14, 1)) AS varchar(20)) + ca1.row_count_suffix
             ELSE CAST(dps.row_count AS varchar(20)) END AS row_count,
        CASE WHEN @include_schema_in_table_names = 1 THEN OBJECT_SCHEMA_NAME(mid.object_id) + '.' 
             ELSE '' END + OBJECT_NAME(mid.object_id /*, mid.database_id*/) AS Table_Name,
        mid.equality_columns, mid.inequality_columns, mid.included_columns,        
        user_seeks, user_scans, cj1.max_days_active, unique_compiles, 
        last_user_seek, last_user_scan, 
        CAST(avg_total_user_cost AS decimal(9, 2)) AS avg_total_user_cost,
        CAST(avg_user_impact AS decimal(9, 2)) AS [avg_user_impact%],
        system_seeks, system_scans, last_system_seek, last_system_scan,
        CAST(avg_total_system_cost AS decimal(9, 2)) AS avg_total_system_cost,
        CAST(avg_system_impact AS decimal(9, 2)) AS [avg_system_impact%],
        mid.statement, mid.object_id, mid.index_handle
    INTO #index_missing
    FROM sys.dm_db_missing_index_details mid /*WITH (NOLOCK)*/
    CROSS JOIN (
        SELECT DATEDIFF(DAY, create_date, GETDATE()) AS max_days_active FROM sys.databases /*WITH (NOLOCK)*/ WHERE name = 'tempdb'
    ) AS cj1
    LEFT OUTER JOIN sys.dm_db_missing_index_groups mig /*WITH (NOLOCK)*/ ON
        mig.index_handle = mid.index_handle
    LEFT OUTER JOIN sys.dm_db_missing_index_group_stats migs /*WITH (NOLOCK)*/ ON
        migs.group_handle = mig.index_group_handle
    LEFT OUTER JOIN sys.dm_db_partition_stats dps /*WITH (NOLOCK)*/ ON
        dps.object_id = mid.object_id AND
        dps.index_id IN (0, 1)
    CROSS APPLY (
        SELECT CASE WHEN dps.row_count >= 1000000 THEN 'M' WHEN dps.row_count >= 1000 THEN 'K' ELSE '' END AS row_count_suffix
    ) AS ca1
    OUTER APPLY (
        SELECT ispec.table_mb, ispec.size_rank
        FROM dbo.#index_specs ispec
        WHERE
            ispec.object_id = mid.object_id AND
            ispec.index_id IN (0, 1)
    ) AS ispec
    WHERE
        1 = 1 
        AND mid.database_id = DB_ID()
        AND OBJECT_NAME(mid.object_id) LIKE @table_name_pattern
        AND OBJECT_NAME(mid.object_id) NOT LIKE 'tmp%'
    ORDER BY
        --avg_total_user_cost * (user_seeks + user_scans) DESC,
        Db_Name,
        CASE WHEN @order_by IN (-2, 2) THEN ispec.size_rank * -SIGN(@order_by) ELSE 0 END,
        Table_Name,
        equality_columns, inequality_columns,
        user_seeks DESC
    SELECT *
    FROM #index_missing
    ORDER BY ident
    IF @list_missing_indexes_summary = 1
    BEGIN
        SELECT 
            derived.Size_Rank, derived.table_mb,
            derived.Table_Name, derived.Equality_Column, derived.Equality#, derived.User_Seeks, 
            ISNULL((SELECT SUM(user_seeks)
             FROM #index_missing im2
             CROSS APPLY DBA.dbo.DelimitedSplit8K (inequality_columns, ',') ds
             WHERE im2.Size_Rank = derived.Size_Rank AND
                 LTRIM(RTRIM(ds.Item)) = derived.Equality_Column
            ), 0) AS Inequality_Seeks,
            derived.User_Scans, derived.Last_User_Seek, derived.Last_User_Scan,
            derived.Max_Days_Active, derived.Avg_Total_User_Cost, derived.Approx_Total_Cost
        FROM (
            SELECT 
                Size_Rank, MAX(table_mb) AS table_mb, Table_Name, LTRIM(RTRIM(ds.Item)) AS Equality_Column, 
                SUM(user_seeks) AS User_Seeks, SUM(user_scans) AS User_Scans,
                MAX(last_user_seek) AS Last_User_Seek, MAX(last_user_scan) AS Last_User_Scan,
                MIN(max_days_active) AS Max_Days_Active,
                MAX(avg_total_user_cost) AS Avg_Total_User_Cost,
                (SUM(user_seeks) + SUM(user_scans)) * MAX(avg_total_user_cost) AS Approx_Total_Cost,
                MAX(ds.ItemNumber) AS Equality#
            FROM #index_missing
            CROSS APPLY DBA.dbo.DelimitedSplit8K (equality_columns, ',') ds
            WHERE equality_columns IS NOT NULL
            GROUP BY size_rank, Table_Name, LTRIM(RTRIM(ds.Item))
        ) AS derived
        ORDER BY Size_Rank, Table_Name, Approx_Total_Cost DESC        
    END --IF
END --IF



PRINT 'Index Usage Stats @ ' + CONVERT(varchar(30), GETDATE(), 120)

-- list index usage stats (seeks, scans, etc.)
SELECT 
    IDENTITY(int, 1, 1) AS ident,
    DB_NAME() AS db_name,
    --ispec.drive AS drv,
    ispec.size_rank, ispec.alloc_mb - ispec.used_mb AS unused_mb, 
    CASE WHEN @include_schema_in_table_names = 1 THEN OBJECT_SCHEMA_NAME(i.object_id /*, DB_ID()*/) + '.' 
         ELSE '' END + OBJECT_NAME(i.object_id /*, i.database_id*/) AS Table_Name,    
    CASE WHEN @format_counts = 1 THEN REPLACE(CONVERT(varchar(20), CAST(dps.row_count AS money), 1), '.00', '')
         WHEN @format_counts = 2 THEN CAST(CAST(dps.row_count * 1.0 / CASE ca1.row_count_suffix 
             WHEN 'M' THEN 1000000 WHEN 'K' THEN 1000 ELSE 1 END AS int) AS varchar(20)) + ca1.row_count_suffix
         WHEN @format_counts = 3 THEN CAST(CAST(dps.row_count * 1.0 / CASE ca1.row_count_suffix 
             WHEN 'M' THEN 1000000 WHEN 'K' THEN 1000 ELSE 1 END AS decimal(14, 1)) AS varchar(20)) + ca1.row_count_suffix
         ELSE CAST(dps.row_count AS varchar(20)) END AS row_count,
    ispec.table_gb, ispec.alloc_gb AS index_gb,
    SUBSTRING('NY', CAST(i.is_primary_key AS int) + CAST(i.is_unique_constraint AS int) + 1, 1) +
    CASE WHEN i.is_unique = CAST(i.is_primary_key AS int) + CAST(i.is_unique_constraint AS int) THEN '' 
         ELSE '.' + SUBSTRING('NY', CAST(i.is_unique AS int) + 1, 1) END AS [Uniq?],
    REPLACE(i.name, oa1.table_name, '~') AS index_name,
    --fc_row_count.formatted_value AS row_count,
    i.index_id,
    ispec.approx_max_data_width AS [data_width], 
    CAST(CAST(ispec.used_mb AS float) * 1024.0 * 1024.0 / NULLIF(dps.row_count, 0) AS int) AS cmptd_row_size,
    key_cols AS key_cols,
    LEN(nonkey_cols) - LEN(REPLACE(nonkey_cols, ',', '')) + 1 AS nonkey_count,
    nonkey_cols AS nonkey_cols,
    ius.user_seeks, ius.user_scans, --ius.user_seeks + ius.user_scans AS total_reads,
    ius.user_lookups, ius.user_updates,
    dios.leaf_delete_count + dios.leaf_insert_count + dios.leaf_update_count as leaf_mod_count,
    dios.range_scan_count, dios.singleton_lookup_count,
    DATEDIFF(DAY, STATS_DATE ( i.object_id , i.index_id ), GETDATE()) AS stats_days_old,
    DATEDIFF(DAY, CASE 
        WHEN o.create_date > cj1.sql_startup_date AND o.create_date > o.modify_date THEN o.create_date 
        WHEN o.modify_date > cj1.sql_startup_date AND o.modify_date > o.create_date THEN o.modify_date 
        ELSE cj1.sql_startup_date END, GETDATE()) AS max_days_active,
    dios.row_lock_count, dios.row_lock_wait_in_ms,
    dios.page_lock_count, dios.page_lock_wait_in_ms,    
    ius.last_user_seek, ius.last_user_scan,
    ius.last_user_lookup, ius.last_user_update,
    fk.Reference_Count AS fk_ref_count,
    i.fill_factor,
    ius2.row_num,
    CASE 
        WHEN ispec.max_compression IS NULL THEN '(Not applicable)'
        WHEN ispec.max_compression = 2 THEN 'Page'
        WHEN ispec.max_compression = 1 THEN 'Row'
        WHEN ispec.max_compression = 0 THEN ''
        ELSE '(Unknown)' END AS max_compression,
    ius.system_seeks, ius.system_scans, ius.system_lookups, ius.system_updates,
    ius.last_system_seek, ius.last_system_scan, ius.last_system_lookup, ius.last_system_update,
    GETDATE() AS capture_date
INTO #index_usage
FROM sys.indexes i /*WITH (NOLOCK)*/
INNER JOIN sys.objects o /*WITH (NOLOCK)*/ ON
    o.object_id = i.object_id
CROSS JOIN (
    SELECT create_date AS sql_startup_date FROM sys.databases /*WITH (NOLOCK)*/ WHERE name = 'tempdb'
) AS cj1
OUTER APPLY (
    SELECT CASE WHEN EXISTS(SELECT 1 FROM #index_specs [is] WHERE [is].object_id = i.object_id AND [is].index_id = 1)
        THEN 1 ELSE 0 END AS has_clustered_index
) AS cj2
LEFT OUTER JOIN dbo.#index_specs ispec ON
    ispec.object_id = i.object_id AND
    ispec.index_id = i.index_id
OUTER APPLY (
    SELECT STUFF((
    SELECT
        ', ' + COL_NAME(ic.object_id, ic.column_id)
    FROM sys.index_columns ic /*WITH (NOLOCK)*/
    WHERE
        ic.key_ordinal > 0 AND
        ic.object_id = i.object_id AND
        ic.index_id = i.index_id
    ORDER BY
        ic.key_ordinal
    FOR XML PATH('')
    ), 1, 2, '')
) AS key_cols (key_cols)
OUTER APPLY (
    SELECT STUFF((
    SELECT
        ', ' + COL_NAME(ic.object_id, ic.column_id)
    FROM sys.index_columns ic /*WITH (NOLOCK)*/
    WHERE
        ic.key_ordinal = 0 AND
        ic.object_id = i.object_id AND
        ic.index_id = i.index_id
    ORDER BY
        COL_NAME(ic.object_id, ic.column_id)
    FOR XML PATH('') 
    ), 1, 2, '')
) AS nonkey_cols (nonkey_cols)
LEFT OUTER JOIN sys.dm_db_partition_stats dps /*WITH (NOLOCK)*/ ON
    dps.object_id = i.object_id AND
    dps.index_id = i.index_id
LEFT OUTER JOIN sys.dm_db_index_usage_stats ius /*WITH (NOLOCK)*/ ON
    ius.database_id = DB_ID() AND
    ius.object_id = i.object_id AND
    ius.index_id = i.index_id
LEFT OUTER JOIN (
    SELECT
        database_id, object_id, MAX(user_scans) AS user_scans,
        ROW_NUMBER() OVER (ORDER BY MAX(user_scans) DESC) AS row_num --user_scans|user_seeks+user_scans
    FROM sys.dm_db_index_usage_stats /*WITH (NOLOCK)*/
    WHERE
        database_id = DB_ID()
        --AND index_id > 0
    GROUP BY
        database_id, object_id
) AS ius2 ON
    ius2.database_id = DB_ID() AND
    ius2.object_id = i.object_id
LEFT OUTER JOIN (
    SELECT
        referenced_object_id, COUNT(*) AS Reference_Count
    FROM sys.foreign_keys /*WITH (NOLOCK)*/
    WHERE
        is_disabled = 0
    GROUP BY
        referenced_object_id
) AS fk ON
    fk.referenced_object_id = i.object_id
LEFT OUTER JOIN (
    SELECT *
    FROM sys.dm_db_index_operational_stats ( DB_ID(), NULL, NULL, NULL )
) AS dios ON
    dios.object_id = i.object_id AND
    dios.index_id = i.index_id
OUTER APPLY (
    SELECT OBJECT_NAME(i.object_id/*, DB_ID()*/) AS table_name
        --, CASE WHEN dps.row_count >= 1000000 THEN 'M' WHEN dps.row_count >= 1000 THEN 'K' ELSE '' END AS row_count_suffix
) AS oa1
CROSS APPLY (
    SELECT CASE WHEN dps.row_count >= 1000000 THEN 'M' WHEN dps.row_count >= 1000 THEN 'K' ELSE '' END AS row_count_suffix
) AS ca1

WHERE
    i.object_id > 100 AND
    i.is_hypothetical = 0 AND
    i.type IN (0, 1, 2) AND
    o.type NOT IN ( 'IF', 'IT', 'TF', 'TT' ) AND
    (
     o.name LIKE @table_name_pattern AND
     o.name NOT LIKE 'dtprop%' AND
     o.name NOT LIKE 'filestream[_]' AND
     o.name NOT LIKE 'MSpeer%' AND
     o.name NOT LIKE 'MSpub%' AND
     --o.name NOT LIKE 'queue[_]%' AND 
     o.name NOT LIKE 'sys%' 
    )
ORDER BY
    --cj2.has_clustered_index, ispec.size_rank, --heaps first, by size
    db_name,
    --i.index_id,
    --ius.user_seeks - ius.user_scans,
    CASE WHEN @order_by IN (-2, 2) THEN ispec.size_rank * -SIGN(@order_by) ELSE 0 END,
    --ius.user_scans DESC,
    --ius2.row_num, --user_scans&|user_seeks
    table_name, 
    -- list clustered index first, if any, then other index(es)
    CASE WHEN i.index_id IN (0, 1) THEN 1 ELSE 2 END, 
    key_cols


SELECT *
FROM #index_usage
ORDER BY ident

PRINT 'Ended @ ' + CONVERT(varchar(30), GETDATE(), 120)

SET DEADLOCK_PRIORITY NORMAL

Yes, good point. I wonder if I could have a hint in the code (e.g. a COMMENT) as to which index should be used, such that I could mechanically generate a Check Report confirming that THAT index is used for THAT query ... it seems a bit far removed from the code to get to the DMV later on, but I hate having entirely separate reports that then get out of sync with the code.
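
The check itself could be as simple as something like this sketch (table/index names are placeholders taken from the hypothetical comment hint), although the DMV only says the index has been used since the last restart, not which query used it:

SELECT i.name AS index_name, ius.user_seeks, ius.user_scans, ius.last_user_seek
FROM sys.indexes i
LEFT OUTER JOIN sys.dm_db_index_usage_stats ius
    ON ius.database_id = DB_ID()
   AND ius.object_id = i.object_id
   AND ius.index_id = i.index_id
WHERE i.object_id = OBJECT_ID('dbo.SomeTable')     -- table named in the hint comment
  AND i.name = 'idx_SomeTable_SomeColumn';         -- index named in the hint comment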

We do something sort of similar. We have an IDENTITY column on every table which is the primary key, but that is unrelated to the clustered index. We use the IDENTITY column in order to have a unique, single-column reference to a row, which is used in all mechanically generated code to avoid having to handle complex multi-part keys. We have a report of all recently created tables that have the IDENTITY column as the clustered index, to make sure each one is reviewed and checked that it is the best choice. It sometimes is :slight_smile:

The main reason for doing this was laziness: it is easy for the mechanically generated code to deduce the IDENTITY / single-part unique column by checking which column is defined as the primary key. Perhaps if I were doing it again I would use an extended property on the column instead.
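
For anyone wanting a similar review report, a sketch along these lines would find recently created tables whose clustered index leads on an IDENTITY column (the one-month cutoff is arbitrary):

SELECT t.name AS table_name, i.name AS index_name, c.name AS leading_column, t.create_date
FROM sys.tables t
INNER JOIN sys.indexes i
    ON i.object_id = t.object_id AND i.index_id = 1
INNER JOIN sys.index_columns ic
    ON ic.object_id = i.object_id AND ic.index_id = i.index_id AND ic.key_ordinal = 1
INNER JOIN sys.columns c
    ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE c.is_identity = 1
  AND t.create_date >= DATEADD(MONTH, -1, GETDATE())
ORDER BY t.create_date DESC;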

Nothing wrong with an identity on rows for convenience, as long as you don't (lazily!) default to using that for the clustering key.

For tables shared across sites, rather than a guid you can assign a site code, then use the combination of site code and identity to be unique, even when values are combined. Typically a smallint is plenty for the site code value.
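
As a sketch (hypothetical table), that combination might look like:

CREATE TABLE dbo.UserAccount (
    site_code smallint NOT NULL,            -- assigned per site/client
    user_id   int IDENTITY(1, 1) NOT NULL,  -- local identity within the site
    user_name nvarchar(100) NOT NULL,
    CONSTRAINT PK_UserAccount PRIMARY KEY ( site_code, user_id )
);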

Hi Scott,

That's a very generous offer. Are you able to access the file here?

Removed

Note: Any index prefixed with "IX" is a foreign key index and anything prefixed with "idx" is a non-clustered index I added at some point in time.

Thanks!
Craig

I can see it, but I can't download it, which makes it really hard to analyze.

Can you just post the spreadsheet here? You can obfuscate any names you need to, of course, I don't care what the actual object, etc. names are.

It won't let me post Excel documents here. Only images. Unless I'm looking in the wrong place.

Maybe this one...

Removed

If not, can you copy and paste into a local Excel spreadsheet?

Cheers,
Craig

Hmm, you're right, this site seems to allow only image files to be uploaded. I didn't know that before.

I'll try sending you a message. If that's private, I can send you an email address and you could send the files that way.