Execution Count

Hello, I know how to get the execution count (since server start or DBCC FREEPROCCACHE), but I would like to know whether there is an article that describes how to archive this information so I can track it over time? We do not restart the server often and hardly ever run FREEPROCCACHE (I think I last ran it a few years ago), but I would still like to keep track when we do.

SELECT DB_NAME(st.dbid)                 AS DBName
        ,OBJECT_NAME(st.objectid, st.dbid) AS StoredProcedure
        ,cp.usecounts                   AS Execution_count
    FROM sys.dm_exec_cached_plans cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
    WHERE cp.objtype = 'Proc';

I have thought about saving the data with a date but do not have an idea of how to show usage over time.

Thank you,

To archive the data, build an Agent job to run your query and append the results (with a run datetime) to an archive table. FREEPROCCACHE is a problem, of course. You'll need to restrict its use, probably audit it, and remember to reset the archive records (from the job) for that day.
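A minimal sketch of what that could look like. The table and column names here are invented for illustration (nothing in the thread prescribes them), and the job step is just the original query with a capture timestamp:

```sql
-- Hypothetical archive table; all names are illustrative.
CREATE TABLE dbo.ProcUsageArchive
(
    CaptureTime     datetime2(0) NOT NULL,
    DBName          sysname      NULL,
    StoredProcedure sysname      NULL,
    Execution_count bigint       NOT NULL
);

-- Agent job step: append one snapshot per run.
INSERT INTO dbo.ProcUsageArchive (CaptureTime, DBName, StoredProcedure, Execution_count)
SELECT SYSDATETIME()
      ,DB_NAME(st.dbid)
      ,OBJECT_NAME(st.objectid, st.dbid)
      ,cp.usecounts
  FROM sys.dm_exec_cached_plans cp
 CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
 WHERE cp.objtype = 'Proc';
```

Schedule the job at whatever granularity you want your history to have (daily is a common starting point).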

Thank you for the reply.

I understand the table, however I was hoping for help with the mathematics (statistics?). :smiley:
I did not want to go into detail of how I see the problem as I thought an article might have the answers.

What are the statistics that you are looking for? One thing you might find useful is the standard reports. There are a variety of them that you can access by right-clicking the server name in SSMS and selecting Reports -> Standard Reports.

Thank you for the reply. I was thinking it would be nice to have a historical usage table.
I found an article on baselining, which gave me a way to do what I want: after each snapshot I can compute the delta from the previous one and keep track of that.
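One way to compute that delta, sketched here against a hypothetical archive table (`dbo.ProcUsageArchive` with `CaptureTime`, `DBName`, `StoredProcedure`, `Execution_count` columns is my own naming). `LAG` gets the previous snapshot's counter, and a negative difference signals that the counter was reset by a restart or FREEPROCCACHE:

```sql
-- Per-interval execution counts from cumulative snapshots (sketch).
WITH diffs AS
(
    SELECT DBName
          ,StoredProcedure
          ,CaptureTime
          ,Execution_count
          ,Execution_count
             - LAG(Execution_count) OVER
                 (PARTITION BY DBName, StoredProcedure
                      ORDER BY CaptureTime) AS RawDelta
      FROM dbo.ProcUsageArchive
)
SELECT DBName
      ,StoredProcedure
      ,CaptureTime
      ,CASE WHEN RawDelta < 0       -- counter reset (restart or FREEPROCCACHE)
            THEN Execution_count    -- best available estimate: count since the reset
            ELSE RawDelta
       END AS ExecutionsThisInterval
  FROM diffs
 WHERE RawDelta IS NOT NULL;        -- first snapshot has no previous row
```

Note the reset case is only an estimate, since any executions between the reset and the next snapshot of the old counter are lost.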

I doubt it is any help to you but ALL our SProcs "Log themselves". On entry they create a record in a log table, and on exit they update it with Error Number (0=Success) and GetDate(). That gives us execution count, elapsed time, and number of error exits. On very busy systems we automatically recompile SProcs when the elapsed time "increases" (by an amount we consider statistically significant ...)

It is surprising how little overhead this adds. Our log table is WAY bigger than anything else in the database (and the daily delete of stale data is a significant task), but it is essentially a write-only table: we are careful that the UPDATE does not change the size of the row, except when there is an error and it has to log a message too (but hey, we write bug-free code, so not a problem, eh? :smiley: Certainly not a "frequent occurrence"). We also have a Log Error routine which adds a row to the log table for an "unexpected situation"; that (compared to adding a variable-length message to the SProc's exit-log UPDATE) is definitely write-only, and of course it allows an SProc to log multiple such messages.

We also log the calling parameters - so can use the Log for debugging when necessary, or even analysis of parameters to a specific sproc.
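A skeleton of what such a self-logging SProc might look like. All object and column names here are invented for illustration; the poster did not share their actual code:

```sql
-- Hypothetical self-logging pattern; every name is illustrative.
CREATE TABLE dbo.SProcLog
(
    LogID     bigint IDENTITY PRIMARY KEY,
    SProcName sysname        NOT NULL,
    Params    nvarchar(4000) NULL,      -- calling parameters, for later debugging
    StartTime datetime2(3)   NOT NULL,
    EndTime   datetime2(3)   NULL,
    ErrorNo   int            NULL       -- 0 = success
);
GO
CREATE PROCEDURE dbo.MyProc @SomeParam int
AS
BEGIN
    DECLARE @LogID bigint;

    -- On entry: create the log row
    INSERT INTO dbo.SProcLog (SProcName, Params, StartTime)
    VALUES (OBJECT_NAME(@@PROCID), CONCAT('@SomeParam=', @SomeParam), SYSDATETIME());
    SET @LogID = SCOPE_IDENTITY();

    -- ... the procedure's real work goes here ...

    -- On exit: record completion time and status
    UPDATE dbo.SProcLog
       SET EndTime = SYSDATETIME(), ErrorNo = 0
     WHERE LogID = @LogID;
END;
```

Counting rows per `SProcName` per day then gives execution counts that survive restarts and FREEPROCCACHE, at the cost of two extra writes per call.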

P.S. SProcs which are called VERY frequently have an additional optional parameter to the logging SProc indicating "Don't log me if the system is busy". We then re-create the logging SProc with code to exclude those (this is NOT a config parameter; we don't want the logging SProc slowed down by having to look such a thing up on each execution, though I believe there is an in-memory variable we could have used, and there are certainly in-memory tables available in newer versions of SQL these days). In practice we have almost never used this option. The only time we did was when the performance of the logging was poor, which, it turned out, was due to something being recompiled unnecessarily, with the recompile being SYNC rather than ASYNC; so we used it to fix a performance problem with the logging itself rather than an overall performance problem. Pleased that we had built in that opportunity when the ship hit that particular rock, though!