Hi experts,
We have a few queries that get suspended from time to time.
I run this to see them:
SELECT req.session_id
    ,req.blocking_session_id
    ,ses.host_name
    ,DB_NAME(req.database_id) AS database_name
    ,ses.login_name
    ,req.status
    ,req.command
    ,req.start_time
    ,req.cpu_time
    ,req.total_elapsed_time / 1000.0 AS total_elapsed_seconds
    ,req.wait_type
    ,sqltext.text
FROM sys.dm_exec_requests req
JOIN sys.dm_exec_sessions ses
    ON ses.session_id = req.session_id
CROSS APPLY sys.dm_exec_sql_text(req.sql_handle) AS sqltext
WHERE req.wait_type IS NOT NULL;
This SELECT query has been hung for almost 24 hours.
This is a client-side issue: SQL Server is waiting on the client to consume the data it has already sent. It is not a problem in SQL Server itself, and if the query has been hung for that long, it is probably because the client is taking a long time to process the results.
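One way to confirm this diagnosis: ASYNC_NETWORK_IO is the wait type SQL Server reports while it waits for the client to fetch result rows. A sketch of narrowing the DMV query above to just those waits (same DMVs; `program_name` often identifies an Access client):

```sql
-- Show only requests stuck waiting on the client to consume results.
-- ASYNC_NETWORK_IO is the wait type SQL Server reports in that state.
SELECT req.session_id
    ,ses.host_name
    ,ses.program_name                    -- often names the client app (e.g. Access)
    ,req.wait_type
    ,req.wait_time / 1000.0 AS wait_seconds
    ,sqltext.text
FROM sys.dm_exec_requests req
JOIN sys.dm_exec_sessions ses
    ON ses.session_id = req.session_id
CROSS APPLY sys.dm_exec_sql_text(req.sql_handle) AS sqltext
WHERE req.wait_type = N'ASYNC_NETWORK_IO';
```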
I have seen long-running queries like this when the client is Access. It happens because Access processes the results through a cursor, one row at a time.
Jeff, you're exactly right. The front-end application is Access. We always end up killing the SPID, as it's never going to complete after running for 24 hours. Thanks, it's good to know there's nothing to do on the SQL Server side.
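For anyone following along, the kill itself is just the KILL command with the session_id from the DMV query; the 53 below is a placeholder SPID, not a value from this thread:

```sql
-- Terminate the hung session. 53 is a placeholder -
-- substitute the session_id returned by the DMV query.
KILL 53;

-- If a long rollback follows, check its progress with:
KILL 53 WITH STATUSONLY;
```

Keep in mind that killing a session rolls back any open transaction, and a large rollback can itself take a long time.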
Killing the SPID, even as a short-term resolution, is suspect. You need to have a good conversation with the Access developer(s) (unless that is you) to sort out where the issue might be, especially if they use inline queries and local data manipulation. You can't just chalk it up to "there is nothing to do on the SQL Server side" - you will be forced to deal with it later on down the line.
But you do manage the SQL Server side, and queries that run for 24 hours or more are issues that need to be addressed. That means, once you've identified one, you need to work with the developers to optimize the process.
What that optimization looks like will depend on what they are trying to accomplish. The solution could be as simple as having them convert their query to a pass-through query, which sends the SQL straight to the server instead of fetching rows through a local cursor. It could be you setting up a view that provides a subset of the data, or it could be a modification on their side to download the data so they can join to it locally.
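As an illustration of the view approach: a narrow, filtered view keeps Access from dragging a whole table across the wire. All object and column names below are hypothetical, just to show the shape of it:

```sql
-- Hypothetical example: expose only the columns and rows the Access
-- front end needs, so the client pulls a small result set instead of
-- the full table. Names (vw_OpenOrdersForAccess, dbo.Orders, Status)
-- are placeholders, not from this thread.
CREATE VIEW dbo.vw_OpenOrdersForAccess
AS
SELECT o.OrderID
    ,o.CustomerID
    ,o.OrderDate
    ,o.TotalDue
FROM dbo.Orders AS o
WHERE o.Status = N'Open';
```

Access can then link to the view like a table, and the server does the filtering before any rows travel to the client.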