I was going to mention that I wrote a custom procedure to do this, which can get quite complex depending on the rules you want to implement. Here is a query to start with:
-- @lowerLimit is the row-count cutoff used in the Order By below; declare it (or pass it in as a parameter) with whatever value fits your rules - 500000 is only a placeholder
Declare @lowerLimit bigint = 500000;

Select ss.[object_id]
, ss.stats_id
, object_schema_name(ss.[object_id]) As SchemaName
, object_name(ss.[object_id]) As ObjectName
, ss.name As StatsName
-- Index-backed statistics get a full scan; the branches below are kept separate so you can assign a different rate per statistic type
, Case When ss.auto_created = 0 And ss.user_created = 0 Then 'Fullscan'
When ss.user_created = 1 Then 'Default'
When ss.auto_created = 1 Then 'Default'
Else 'Default'
End As SamplingRate
, sp.last_updated
From sys.stats ss
Inner Join sys.objects so On so.[object_id] = ss.[object_id]
Left Join sys.indexes i On i.[object_id] = ss.[object_id]
And i.index_id = ss.stats_id
Cross Apply sys.dm_db_stats_properties(ss.[object_id], ss.stats_id) sp
Where sp.[rows] > 0
And so.[type] In ('U', 'IT')
And sp.modification_counter > 0
And (i.is_hypothetical = 0 Or i.is_hypothetical Is Null)
Order By
-- Priority tiers: the statistics most in need of an update sort to the top
-- 1: index statistics that were last updated with a sampled scan
Case When ss.auto_created = 0 And ss.user_created = 0 And sp.[rows] <> sp.rows_sampled Then 1
-- 2: user-created statistics on smaller tables (at or below @lowerLimit rows) that were sampled
When ss.user_created = 1 And sp.[rows] <= @lowerLimit And sp.[rows] <> sp.rows_sampled Then 2
-- 3: index or user-created statistics that have passed the modification threshold
When (ss.auto_created = 0 Or ss.user_created = 1)
And sp.modification_counter > sqrt(1000 * sp.[rows]) Then 3
-- 4: projected to pass the threshold within the next 24 hours (nullif guards against divide-by-zero when the last update was under an hour ago)
When (ss.auto_created = 0 Or ss.user_created = 1)
And sp.modification_counter + (sp.modification_counter / nullif(datediff(hour, sp.last_updated, getdate()), 0) * 24) > sqrt(1000 * sp.[rows]) Then 4
-- 5, 6, 7: everything else - index statistics, then user-created, then auto-created
When ss.auto_created = 0 And ss.user_created = 0 Then 5
When ss.user_created = 1 Then 6
Else 7
End
, cast(sp.last_updated As date) -- oldest last update first within a tier
, sp.modification_counter desc -- then the most-modified first
, ss.stats_id;
You can add logic to set norecompute, the sampling rate, or any other options based on your specific requirements.
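For example, here is a minimal sketch of turning the SamplingRate logic into an Update Statistics command - the 25 percent sample and the suggestion to append Norecompute are placeholders for illustration, not part of the query above:

Select 'Update Statistics '
+ quotename(object_schema_name(ss.[object_id])) + '.' + quotename(object_name(ss.[object_id]))
+ ' ' + quotename(ss.name)
+ Case When ss.auto_created = 0 And ss.user_created = 0 Then ' With Fullscan'
Else ' With Sample 25 Percent' -- placeholder rate; append ', Norecompute' here if that is part of your rules
End As UpdateCommand
From sys.stats ss
Inner Join sys.objects so On so.[object_id] = ss.[object_id]
Where so.[type] In ('U', 'IT');

Each UpdateCommand string can then be executed with sp_executesql.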
The Order By clause is where the real work is done: it sorts the results so the statistics that most need to be updated are at the top, which lets you put a limit on how long, or at what times, the process runs.
The idea is that statistics supporting an index that were previously updated with a sampled scan are selected first, then user-created statistics under the row limit that were sampled, then user-created statistics that qualify based on the modification count, and so on.
Of course, that can all be adjusted to whatever works for your systems. This is used on my larger systems (3 TB and 8 TB), and we never get through the whole list in 8 to 10 hours of processing.
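For what it's worth, the time box looks roughly like this - the #StatsToUpdate table, its UpdateCommand and Priority columns, and the two-hour window are assumptions for illustration only:

Declare @stopTime datetime = dateadd(hour, 2, getdate()); -- illustrative time budget
Declare @cmd nvarchar(max);

Declare statsCursor Cursor Local Fast_Forward For
Select UpdateCommand From #StatsToUpdate Order By Priority; -- the prioritized output of the query above

Open statsCursor;
Fetch Next From statsCursor Into @cmd;

While @@fetch_status = 0 And getdate() < @stopTime -- stop once the time budget is spent
Begin
Exec sp_executesql @cmd;
Fetch Next From statsCursor Into @cmd;
End;

Close statsCursor;
Deallocate statsCursor;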