SQLTeam.com | Weblogs | Forums

Application Taking More Time(Hang)


We are facing an issue with our coding application and database. The application has an ASP.NET 3.5 front end and a SQL Server 2008 R2 back end. Based on project requirements we configure fields for coding. During the coding process the application and database become slower and slower and eventually hang. Our testing observations are below:

Test | Batch   | Records | Coding Completed | QC Completed | Users | Fields | Record Saving Time                         | Remarks
-----|---------|---------|------------------|--------------|-------|--------|--------------------------------------------|---------------------------------
1    | Batch1  | 437     | 437              | 437          | 15-20 | 4      | 2-5 sec                                    | No issue
2    | Batch2  | 882     | 882              | 882          | 15-20 | 4      | 2-5 sec                                    | No issue
3    | Batch3  | 1426    | 1426             | 1426         | 15-20 | 5      | 2-5 sec                                    | No issue
4    | Batch4  | 5761    | 1252             | 531          | 15-20 | 1      | 2-5 sec                                    | No issue
5    | Batch5  | 1675    | 448              | 357          | 15-20 | 5      | 2-5 sec                                    | No issue
6    | Batch6  | 247     | 247              | 247          | 15-20 | 4      | 2-5 sec                                    | No issue
7    | Batch7  | 665     | 665              | 665          | 15-20 | 4      | 2-5 sec                                    | No issue
8    | Batch8  | 224     | 224              | 224          | 15-20 | 2      | First 2-5 sec; after 200 records, 5-10 sec | No issue
9    | Batch9  | 790     | 250              | 250          | 15-20 | 2      | First 2-5 sec; after 200 records, 5-10 sec | Hanging starts
10   | Batch10 | 573     | 282              | 273          | 15-20 | 1      | First 2-5 sec; after 200 records, 5-10 sec | Hanging starts after 250 records

We ran a trace and found that the stored procedure FetchDocumentID was taking the most time. We executed the SP in SQL Server with the execution plan enabled and did not find any missing indexes.

Below is the structure of the SP:

CREATE PROCEDURE [dbo].[FetchDocumentID]
( @i_UserId int
, @i_ProjectId int )
AS
BEGIN
BEGIN TRY
    DECLARE @Error INT = 0
        , @ErrorLine INT
        , @ErrorDesc NVARCHAR(1000)
        , @Message NVARCHAR(1000)
        , @ProcName NVARCHAR(128) = N'Project.dbo.FetchDocumentID.procedure.sql'
        , @UserDetailId BIGINT
        , @RoleName VARCHAR(150)

    CREATE TABLE #Document
        ( FieldId INT
        , DocumentID INT
        , Rejected INT )

    CREATE TABLE #temp1 ( DocID INT )

    -- The SELECT list of this statement was lost when the code was pasted;
    -- it reads from dbo.UserAccess, presumably to fetch the user's access row:
    -- SELECT <lost>
    -- FROM dbo.UserAccess UA
    -- WHERE UA.ProjectMasterId = @i_ProjectId
    --   AND UA.AccessId = 4

    INSERT #Document (DocumentID, Rejected)
    SELECT TOP 1 [DM].[DocumentId], 1
    FROM [dbo].[DataMaster] DM WITH (NOLOCK)
    LEFT JOIN [dbo].[DocumentMaster] DC WITH (NOLOCK)
           ON [DM].[DocumentId] = [DC].[DocumentMasterId]
    WHERE [DM].[IsCoded] = 1
      AND [DM].[IsRejected] = 1
      AND [DM].[CoderId] = @i_UserId
      AND [DC].[ProjectId] = @i_ProjectId   -- this filter turns the LEFT JOIN into an INNER JOIN
      AND ( ([DC].[IsLocked] = 1 AND [DC].[Status] = @i_UserId)
         OR ([DC].[IsLocked] = 0 AND [DC].[IsChecked] = 1) )
    GROUP BY [DM].[DocumentId]
    ORDER BY [DM].[DocumentId]

    INSERT #Document (FieldId, DocumentID)
    SELECT A.FieldId                        -- first SELECT column was lost in the paste; A.FieldId assumed
         , B.DocumentMasterId
    FROM dbo.UserAccess A WITH (NOLOCK)
    INNER JOIN dbo.DocumentMaster B WITH (NOLOCK)
            ON A.ProjectMasterId = B.ProjectId
    WHERE A.[UserMasterId] = @i_UserId
      AND A.ProjectMasterId = @i_ProjectId
      AND (B.IsLocked = 0 OR (B.IsLocked = 1 AND B.[Status] = @i_UserId))

    INSERT #temp1 (DocID)
    SELECT TOP 1 a.DocumentID               -- TOP 1 assumed: #temp1 is compared with "=" below
    FROM #Document a
    LEFT JOIN DataMaster b
           ON a.FieldId = b.FieldId
          AND a.DocumentID = b.DocumentId
    WHERE b.DocumentId IS NULL
    ORDER BY a.DocumentID

    IF EXISTS (SELECT * FROM #temp1)
        UPDATE [dbo].[DocumentMaster]
        SET [IsLocked] = 1
          , [LockedTime] = SYSDATETIME()
          , [Status] = @i_UserId
        WHERE [DocumentMasterId] = (SELECT DocID FROM #temp1)

    -- The head of this statement was also lost in the paste; it appears to be
    -- another INSERT into #Document with Rejected = 0:
    INSERT #Document (DocumentID, Rejected)
    SELECT a.DocumentID
         , 0 AS Rejected
    FROM #Document a
    LEFT JOIN DataMaster b
           ON a.FieldId = b.FieldId
          AND a.DocumentID = b.DocumentId
    WHERE b.DocumentId IS NULL
    ORDER BY a.DocumentID

    UPDATE [dbo].[DocumentMaster]
    SET [IsLocked] = 1
      , [LockedTime] = SYSDATETIME()
      , [Status] = @i_UserId
    WHERE [DocumentMasterId] = (SELECT DocumentID FROM #Document)  -- errors if #Document holds more than one row

    SELECT A.DocumentID, Rejected
    FROM #Document A
END TRY
BEGIN CATCH
    SELECT @Error     = ERROR_NUMBER()      -- first assignment was lost in the paste; ERROR_NUMBER() assumed
         , @ErrorLine = ERROR_LINE()
         , @ErrorDesc = ERROR_MESSAGE()

    SET @Message = N'Application of the procedure, %s, failed at line, %d, due to error no. %d.: %s'
    RAISERROR(@Message, 11, 1, @ProcName, @ErrorLine, @Error, @ErrorDesc) WITH NOWAIT
END CATCH

RETURN (@Error)
END

Can anyone help me resolve this hang issue?

Thank you in advance.


The execution plan may not be reporting missing indexes, but that does not mean you have all the indexes you need.

Do you have indexes on:

  1. all the JOIN predicates
  2. all the columns in the WHERE clauses
  3. the FieldId column of #Document
  4. the DocID column of #temp1
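
The kind of indexes I mean would look something like this. This is only a sketch: the index names and key-column choices are guesses based on the predicates visible in the procedure, not on your actual schema or data distribution, so test them against your workload.

```sql
-- Supports the DataMaster predicates (CoderId / IsCoded / IsRejected)
CREATE NONCLUSTERED INDEX IX_DataMaster_Coder
    ON dbo.DataMaster (CoderId, IsCoded, IsRejected)
    INCLUDE (DocumentId, FieldId);

-- Supports the DocumentMaster predicates (ProjectId / IsLocked / Status)
CREATE NONCLUSTERED INDEX IX_DocumentMaster_Project
    ON dbo.DocumentMaster (ProjectId, IsLocked, [Status]);

-- Temp tables can be indexed too, right after CREATE TABLE:
CREATE CLUSTERED INDEX IX_Document_FieldDoc
    ON #Document (FieldId, DocumentID);
```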

This WHERE clause is a problem:

 WHERE [DocumentMasterId] = (
                              SELECT DocumentID
                              FROM #Document
                            )

That subquery can return more than one value, so one day you will get this error:

Msg 512, Level 16, State 1, Line 1
Subquery returned more than 1 value. This is not permitted when the subquery follows =, !=, <, <= , >, >= or when the subquery is used as an expression.
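
One way to make that predicate safe is sketched below. Which variant is correct depends on the intent, which only the OP knows: either pick a single deterministic row, or lock every document in #Document.

```sql
-- Variant 1: lock exactly one document, chosen deterministically
UPDATE [dbo].[DocumentMaster]
SET [IsLocked]   = 1
  , [LockedTime] = SYSDATETIME()
  , [Status]     = @i_UserId
WHERE [DocumentMasterId] = (SELECT TOP (1) DocumentID
                            FROM #Document
                            ORDER BY DocumentID);

-- Variant 2: if every row in #Document really should be locked, use IN
-- WHERE [DocumentMasterId] IN (SELECT DocumentID FROM #Document);
```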


I'm not sure you need #TEMP tables at all. It looks like you are populating them with

INSERT #TempTable

so you could just as easily get the PKey IDs into @Variables instead, which would be faster.
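
A hypothetical sketch of that approach, using the names from the OP's procedure (the #Document temp table is kept here only to show the shape of the change; the #temp1 table disappears entirely):

```sql
DECLARE @DocID INT;

-- Capture the single candidate document ID directly into a variable
SELECT TOP (1) @DocID = a.DocumentID
FROM #Document a
LEFT JOIN dbo.DataMaster b
       ON a.FieldId = b.FieldId
      AND a.DocumentID = b.DocumentId
WHERE b.DocumentId IS NULL
ORDER BY a.DocumentID;

-- @DocID stays NULL when no row qualifies, which replaces the IF EXISTS test
IF @DocID IS NOT NULL
    UPDATE dbo.DocumentMaster
    SET IsLocked   = 1
      , LockedTime = SYSDATETIME()
      , [Status]   = @i_UserId
    WHERE DocumentMasterId = @DocID;
```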

Some observations:

It always worries me when I see

WITH (NOLOCK)

in code. That is the most widely abused, and most critically dangerous, SQL programming practice that I see. It WILL cause damage to your data, and to the critical business decisions that your users make. It will not do so often, but one day, sooner or later, it will.


Yes, that code needs to be completely rewritten; you don't need any temp tables at all.

I don't see how WITH (NOLOCK) would cause 'damage to data' if used properly. It should be used only for reporting, and only when it will not materially affect the results in any bad way.


But on an OLTP system that is never guaranteed, surely? And on a reporting system, i.e. one that is effectively read-only, it's of no use anyway.

For example:

Someone else inserts a row and causes an Index Page Split. Your process reads that index page after it is split, but fails to read the continuation-page (i.e. the other process writes the new split-page after you have moved down the index) so you only get half the rows from that index page, so you have some rows missing in the query results.

Or the converse happens: you read the index page, THEN the other process splits the page, and you also read the continuation-page; now you read those rows a second time, so you have duplicates in the query results.

Not to mention including a row that is subsequently rolled back (although in practice I personally think that is an almost non-existent issue).


How is it of "no use" on a reporting system?? That's the entire point, to be able to report without locking, and thus with no chance of causing deadlocking on transactions that modify data. Yes, there are some potential issues, but those are vastly overblown when reading historical data, and relatively rare even when reading more recent data. I'm not saying NOLOCK should always be used or anything close to it. I'm saying that it definitely does have its place, and, as a DBA, I like that the option is available. Just saying "never use NOLOCK" does have the advantage of being a very simple rule, but it's not necessarily best for overall performance.


I would regard it as unnecessary on a database that was essentially (or totally) read only. And on a database with lots of read/writes e.g. OLTP I would regard it as unsafe (except for the caveat of DBA reports as you rightly suggest)

We'll have to agree to disagree then. I regard records missing from a report, or included twice in totals, as totally unacceptable - regardless of whether the chance of it occurring is "almost never" (IME it is more frequent than that, even on systems with modest traffic).

As a DBA I too have a number of reports which use NOLOCK so that I know I can run them without any interference to the users ... but I know what the pitfalls are and never trust the data as being 100% accurate. I think that is very different to including it in a report that is for use by end users, and the vast majority of questions here whose sample code includes NOLOCK are not that sort of DBA-knowledgeable report. Hence I like to point out the pitfalls to the O/P so they can be aware of the risk.

You are right, as I do use it, so what I really mean is "Never use for an end user report, restrict to reports run by DBAs who know the pitfalls"

That said, IME in offices where NOLOCK is used it is used extensively, without knowledge of the consequences, and typically new hires are told to use it in all instances. This is a behaviour that harkens back to SQL 6.5, when there was no other means of avoiding deadlocks and/or improving performance. Now NOLOCK should only be used with extreme caution, with other solutions such as READ_COMMITTED_SNAPSHOT used instead.
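
For anyone following along, switching a database to read-committed snapshot is a one-off setting (the database name below is a placeholder). The option needs exclusive access to flip, hence the ROLLBACK IMMEDIATE clause:

```sql
-- Kicks out open transactions so the option can be applied immediately
ALTER DATABASE YourDatabase
    SET READ_COMMITTED_SNAPSHOT ON
    WITH ROLLBACK IMMEDIATE;
```

After this, plain READ COMMITTED reads use row versions instead of shared locks, so readers no longer block writers (at the cost of version-store activity in tempdb).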


READ_COMMITTED_SNAPSHOT is an extremely valuable option. But it has real overhead as well (row versioning in tempdb).

I would regard it as unnecessary on a database that was essentially (or totally) read only.

I would regard it as required to use on a db that was effectively read only (but was not specified as READ_ONLY to SQL itself). Why endure the overhead of locks when you don't need them at all?
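
If the database really is read-only in practice, telling SQL Server so is a one-liner (placeholder name again), and then the locking question goes away entirely:

```sql
-- SQL Server takes no locks on a database marked READ_ONLY
ALTER DATABASE YourReportingDb
    SET READ_ONLY
    WITH ROLLBACK IMMEDIATE;
```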


Hey guys. Interesting discussion of NOLOCK. FWIW, I tell people who won't get rid of it and can't/won't use RCS to use READPAST instead. You may miss some rows (those being updated), but you won't get dirty reads.
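
The READPAST approach can be sketched like this (table and column names are borrowed from the OP's procedure purely for illustration):

```sql
-- READPAST skips rows that are currently locked by other transactions,
-- instead of reading their uncommitted versions the way NOLOCK does.
SELECT DM.DocumentId
FROM dbo.DataMaster DM WITH (READPAST)
WHERE DM.CoderId  = @i_UserId
  AND DM.IsCoded  = 1;
```

The trade-off is undercounting rather than dirty data: locked rows are simply absent from the result, which is often acceptable for queue-style "give me the next available document" logic like the OP's.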

Have you noticed that the OP has yet to respond to my initial suggestions and questions?


We are fortunate to only have modest sized databases, and for us we didn't notice any significant difference when we introduced it. At the time we did have some eCommerce databases that were being hammered (still small DBs with only 5-10GB or so, but lots of users & user activity), but now we've ditched eCommerce and our APPs are bigger DBs but very few users and much lighter loads (well ... load from the users! some of the rubbish code written by the Vendors for Batch Processing is monstrous)

One of my clients is having a major problem with an Autonomy ADD-on which uses NOLOCK extensively and the users report peculiar behaviour - things that should be included are missing, appear on the next batch update or have to be generated manually as a work-around, and so on. I have no way, currently, of detecting if that is the use of NOLOCK or not, but we are in the process of rewriting their code to improve it and I will then be able to run a proper side-by-side comparison of the NOLOCK / Non-NOLOCK versions. The overnight batch process that updates the main Autonomy DB runs for 10 hours or so, with nothing else running on the server, thus plenty of opportunity to hit page splits etc.