Logging Stored Procedure Calls

We want to log all calls to stored procedures with a certain prefix, mainly to see which reports the customer is running, although performance data would be nice too.

Anyone with advice on how to log Stored Procedure calls?

I'm surprised there isn't a database-level trigger that fires when a stored procedure executes.

We have too many SPs for it to be practical to add a line to each one calling a logging routine.

I thought I could have something running that continually polls sys.dm_exec_requests and saves the SP names to a table. But if I don't put a delay in the loop, it may add too much overhead; and if I do put a delay in, we'll miss quick calls.
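
For what it's worth, a minimal sketch of that polling idea would look something like this (the table dbo.SprocCallLog and the 'rpt%' prefix are made up, and long-running calls would be captured once per poll, so duplicates need weeding out):

```sql
-- Rough sketch of the polling approach; dbo.SprocCallLog and the 'rpt%'
-- prefix are invented. Each pass captures whatever is running right now,
-- so calls that start and finish between polls are missed.
WHILE 1 = 1
BEGIN
    INSERT INTO dbo.SprocCallLog (ProcName, SessionId, StartTime, CapturedAt)
    SELECT OBJECT_NAME(st.objectid, st.dbid),
           r.session_id,
           r.start_time,
           SYSDATETIME()
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
    WHERE st.objectid IS NOT NULL
      AND OBJECT_NAME(st.objectid, st.dbid) LIKE N'rpt%';

    WAITFOR DELAY '00:00:01';   -- the delay vs. overhead trade-off above
END;
```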

I'm thinking of setting up a Server-Side trace.

You can use sys.dm_exec_procedure_stats to get general info about stored procedure executions. I would not recommend a server-side trace; perhaps an extended event, but I still think that's overkill.
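
For example, something along these lines (the 'rpt%' filter is just a placeholder for your prefix) gives call counts and average timings for whatever is still in the plan cache:

```sql
-- Execution stats for cached procedures in the current database.
-- Note: figures only cover the time the plan has been in cache; stats
-- are lost when the plan is evicted or the server restarts.
SELECT OBJECT_NAME(ps.object_id, ps.database_id)            AS proc_name,
       ps.cached_time,
       ps.last_execution_time,
       ps.execution_count,
       ps.total_elapsed_time * 1.0 / ps.execution_count / 1000 AS avg_elapsed_ms,
       ps.total_worker_time  * 1.0 / ps.execution_count / 1000 AS avg_cpu_ms
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id = DB_ID()
  AND OBJECT_NAME(ps.object_id, ps.database_id) LIKE N'rpt%'
ORDER BY ps.execution_count DESC;
```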

For logging all calls, I'd recommend adding that at the application layer; it won't overburden the database server that way. You can then mine the data with Splunk.

Thank you. You're not recommending a server-side trace - why? Will it have a performance overhead?

A few places said the overhead is negligible:
http://www.sqlservercentral.com/articles/SQL+Trace/71841/

Yes, a server-side trace will add performance overhead. It can be low if you configure the trace minimally, but it can still be too high. Plus there are other ways to get the data, and you might not even need the detail that a trace provides.
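
If you did want per-call detail, an extended events session is the lighter-weight option. A minimal sketch, assuming your report procedures share a 'rpt' prefix (the session name and file name are placeholders too):

```sql
-- Minimal Extended Events sketch: capture completion of modules whose
-- names start with an assumed 'rpt' prefix, writing to a file target.
CREATE EVENT SESSION [LogReportProcs] ON SERVER
ADD EVENT sqlserver.module_end (
    WHERE ([sqlserver].[like_i_sql_unicode_string]([object_name], N'rpt%'))
)
ADD TARGET package0.event_file (SET filename = N'LogReportProcs.xel')
WITH (MAX_DISPATCH_LATENCY = 30 SECONDS);

ALTER EVENT SESSION [LogReportProcs] ON SERVER STATE = START;
```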

We have that in every one of our SProcs, regardless. I appreciate that it would be a lot of work to retrofit, and there are no doubt other ways to get stats on how-often-do-they-run, but we find logging from within each SProc incredibly useful.

We had an expectation that the additional load would be a burden on the server - our logging table is very large, and requires millions of deletes a day to purge the stale data (we hold it for 7 days). But on the one occasion when user load brought a server to its knees (an eCommerce site in the run-up to Christmas, so we had the excess load every night for a couple of weeks and were able to experiment !!!) the first thing we did was to disable the logging, and it made no appreciable difference - which was pleasing. (That said, we have a switch in the logging procedure that turns it off, so we still get the "call" to log, but a small modification of the log SProc causes it to just do a return unless the caller provides a "Must log me" flag.)

We pass a parameter which is a concatenation of all the caller's parameters; the log SProc returns the log record's ID, and on completion the calling SProc calls a "Finished" SProc (with the ID and any error return value / message) so that the log record can be updated.
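
To make the shape of that concrete, here is a hypothetical minimal version of the pair (all names are invented, and our real code is richer than this):

```sql
-- Hypothetical sketch of the start/finish logging pattern described above.
CREATE TABLE dbo.LogConfig (LoggingEnabled bit NOT NULL);
INSERT INTO dbo.LogConfig (LoggingEnabled) VALUES (1);

CREATE TABLE dbo.SprocLog (
    LogId      int IDENTITY(1,1) PRIMARY KEY,
    ProcName   sysname       NOT NULL,
    Params     nvarchar(max) NULL,   -- concatenation of the caller's parameters
    StartedAt  datetime2     NOT NULL DEFAULT SYSDATETIME(),
    FinishedAt datetime2     NULL,
    ErrorValue int           NULL
);
GO
CREATE PROCEDURE dbo.usp_LogStart
    @ProcName sysname,
    @Params   nvarchar(max),
    @MustLog  bit = 0,          -- the "Must log me" flag
    @LogId    int OUTPUT
AS
BEGIN
    -- The global switch: a cheap early return when logging is disabled.
    IF @MustLog = 0
       AND NOT EXISTS (SELECT 1 FROM dbo.LogConfig WHERE LoggingEnabled = 1)
    BEGIN
        SET @LogId = NULL;
        RETURN;
    END;
    INSERT INTO dbo.SprocLog (ProcName, Params) VALUES (@ProcName, @Params);
    SET @LogId = SCOPE_IDENTITY();
END;
GO
CREATE PROCEDURE dbo.usp_LogFinish
    @LogId      int,
    @ErrorValue int = 0
AS
    UPDATE dbo.SprocLog
    SET FinishedAt = SYSDATETIME(), ErrorValue = @ErrorValue
    WHERE LogId = @LogId;
GO
```

Each business SProc calls the "start" procedure on entry and the "finish" procedure on exit; rows with a NULL FinishedAt feed the "logged the start but not the finish" report mentioned below.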

All our APPs are public-facing websites, so there's no chance to "trot round and look over the shoulder of a user", unlike the old desktop-app days ... so any failure in the field requires us to do a post-mortem to see exactly what the user did. Together with logging which pages they viewed in their session, being able to see every SProc called, and its parameters, allows us to reverse-engineer exactly what they did.

A report of "logged the start but not the finish", and also of "logged an error return value", is viewed in real time during the day so that we see if anything is (i.e. "has become") broken. It may be that TRY/CATCH and THROW have improved (our code methodology pre-dates any of that) such that things are now better [although it is hard to properly test that such code, in every SProc, is watertight], but we don't have to be that vigilant within each SProc: if it fails to finish we can report on that, along with it finishing with an error returned to the APP/caller.

During DEV our APPs display the logging data in the page (as a diagnostic popup) so we can see what SQL calls were made, and so on. (That includes calls by SProcs to other SProcs, not just the initial call.)

Stats reports show number of calls / average run time, but also how average run time changes over time. We have used that to recompile SProcs automatically during busy periods, as soon as they trip over a threshold, as a band-aid whilst sorting out the underlying problem, and so on.
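
As a rough illustration of that band-aid (the 500 ms threshold is arbitrary, and using sys.dm_exec_procedure_stats for the averages is just one possible source - our real version works off our own logging table):

```sql
-- Sketch: recompile any cached procedure whose average elapsed time has
-- tripped an illustrative threshold; the plan is rebuilt on its next call.
DECLARE @proc sysname;
DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT QUOTENAME(OBJECT_SCHEMA_NAME(object_id)) + N'.'
         + QUOTENAME(OBJECT_NAME(object_id))
    FROM sys.dm_exec_procedure_stats
    WHERE database_id = DB_ID()
      AND total_elapsed_time / execution_count > 500000;  -- microseconds
OPEN c;
FETCH NEXT FROM c INTO @proc;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sys.sp_recompile @objname = @proc;
    FETCH NEXT FROM c INTO @proc;
END;
CLOSE c;
DEALLOCATE c;
```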

Why not use partitioning for the logging table? Then you'd just get rid of a partition rather than doing all those deletes.

I don't like logging on the SQL Server side even if it's not adding much of a load. It's still adding load.

Code written a long time ago, and not much appetite to change it :slightly_smiling: but I agree the DELETE is annoying (and uses a lot of LOG space).

If you have any pointers to how to set up the partitioning that would be appreciated. I presume we need a new partition each day, or 7 partitions (on a round-robin basis)?

I never could figure out how to move this outside of SQL. The APP knows what parameters it passes to SQL and could log them [outside of SQL] - though I suppose they could be mangled between APP and SQL itself ... but we like that we log the parameters to any sub-SProc that is called, and to any SProc called from Jobs, 3rd-party apps, Excel spreadsheets, or whatever else connects to the DB. Perhaps there is a smarter way to do that now? All our code that handles this dates back to 1999 ... and perhaps I should put some effort into refactoring it, or using something different / more appropriate.

You'd use sliding-window partitioning to achieve the deletes. You'd have a partition per day; the number of partitions would depend on your retention, plus some future windows to stay ahead of things. Deleting a day would then be a metadata operation: drop that partition, or switch it to another table for archiving.
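
Very roughly, the daily purge then looks like this (names, boundary dates, and the single-filegroup layout are simplified; real sliding-window code needs care around alignment and indexes):

```sql
-- Simplified sliding-window sketch: one partition per day on LogDate.
-- dbo.SprocLog would be created ON psSprocLog(LogDate); dbo.SprocLogArchive
-- must already exist, empty, with an identical structure.
CREATE PARTITION FUNCTION pfSprocLog (date)
    AS RANGE RIGHT FOR VALUES ('2016-01-01', '2016-01-02', '2016-01-03' /* ... */);
CREATE PARTITION SCHEME psSprocLog
    AS PARTITION pfSprocLog ALL TO ([PRIMARY]);

-- Daily purge: switch the oldest day out (a metadata-only operation),
-- retire its now-empty boundary, then add a new boundary at the front.
ALTER TABLE dbo.SprocLog SWITCH PARTITION 2 TO dbo.SprocLogArchive;
ALTER PARTITION FUNCTION pfSprocLog() MERGE RANGE ('2016-01-01');
ALTER PARTITION SCHEME psSprocLog NEXT USED [PRIMARY];
ALTER PARTITION FUNCTION pfSprocLog() SPLIT RANGE ('2016-01-08');
```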

Yeah, I would be doing this in the application. While it might be nice to have all of this data for troubleshooting etc., logging every single call is probably not necessary. Logging sp_WhoIsActive output, or that of other tools, is generally enough to troubleshoot.
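
Something like this, assuming sp_WhoIsActive is installed (the table name is just an example):

```sql
-- One-time setup: have sp_WhoIsActive generate the matching table schema.
DECLARE @schema nvarchar(max);
EXEC dbo.sp_WhoIsActive @return_schema = 1, @schema = @schema OUTPUT;
SET @schema = REPLACE(@schema, '<table_name>', 'dbo.WhoIsActiveLog');
EXEC (@schema);

-- Then run this on a schedule (e.g. an Agent job every 30-60 seconds):
EXEC dbo.sp_WhoIsActive @destination_table = 'dbo.WhoIsActiveLog';
```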

Thanks Tara. I'll have a look in a few days' time when I'm out from under the current rush.

If you tell me which SProcs I'll never need the logging data from, I'll leave them out of the log :smiley: At least in marketing they say that only 50% of advertising money is wasted ... 99.999% of my logging is useless ...

That would be "after the fact" though? i.e. you'd need someone to "do it again" to then log it? In our case we are specifically looking for problems that users had, so we can fix them - in particular problems which appear to have made them click off the site. We clearly need to fix anything that causes that sort of user experience, and we need to do it as soon as we can, before more people follow the first person out of the door! And, sadly, we have no idea who the person was - let alone the ability to ask them to describe the circumstances.

Ah ... the good old days of Desktop APPs where we could wander round and look over the user's shoulder to see how they reproduced the problem for us :slightly_smiling: