Is using WITH (NOLOCK) in SQL Server a bad habit?

Can someone explain why using WITH (NOLOCK) is bad and how it is dangerous?

Using NOLOCK can cause dirty reads: because the query takes no shared locks, it can read data from transactions that have not yet committed. For example, when someone issues a SELECT with NOLOCK while another transaction is still open (and can therefore still be rolled back), the SELECT will see that transaction's uncommitted data, making it a dirty read. NOLOCK can reduce blocking and some lock-related deadlocks in your queries, which can be a benefit, but because no locks are taken it also exposes you to those dirty reads. Be careful where you use it in your applications; real-time reporting is a good example of where it matters.
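
For reference, the hint is just the per-table form of the READ UNCOMMITTED isolation level; a minimal sketch (dbo.Orders is a made-up table):

-- Per-table hint: this read takes no shared locks and can see uncommitted rows
SELECT OrderID, Amount
FROM dbo.Orders WITH (NOLOCK);

-- Session-level equivalent: every subsequent read behaves the same way
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT OrderID, Amount
FROM dbo.Orders;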

Not a direct answer, but: NOLOCK reads whatever is sitting on the data pages at that instant, committed or not, and the data can keep changing while the read runs, which is what opens the door to 'dirty reads'.

NOLOCK can cause your query to see:

  1. data that never existed in the db (you could see the results of an active transaction that is later rolled back, and thus never really existed in the db)
    and/or
  2. data that no longer exists in the db (you could see rows that are being / have been deleted by another transaction)
    and/or
  3. the same data twice (if data moves after you read it, you could read it again. SQL's normal locking procedures prevent that from happening).

In addition to what the others have stated: using NOLOCK can cause the query to fail outright due to data movement (error 601). This occurs when the query attempts to read data on a page that is in the process of moving (a page split) due to an insert or update.
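
If a NOLOCK query has to stay, the 601 failure can at least be caught and retried; a rough sketch (table name and retry count are made up):

DECLARE @Tries int = 0;
WHILE @Tries < 3
BEGIN
    BEGIN TRY
        SELECT col1, col2 FROM dbo.SomeBigTable WITH (NOLOCK);
        BREAK;  -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF ERROR_NUMBER() <> 601
            THROW;  -- anything other than data movement: re-raise
        SET @Tries += 1;
    END CATCH
END;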

If possible, you want to enable snapshot isolation instead, which avoids the issue by giving your query a consistent snapshot of the data as of the time it starts. It can be turned on for the whole database, or merely enabled so that individual sessions can opt in per query.
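
For reference, the two flavors (database and table names are placeholders):

-- Whole database: plain READ COMMITTED statements read row versions instead
-- of blocking (switching it on needs the database briefly to itself)
ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON;

-- Or merely allow it, so individual sessions can opt in...
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- ...like this:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT col1, col2 FROM dbo.SomeTable;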

Note: if you turn on snapshot isolation, it can have an impact on your system. The row versions are kept in tempdb, so you need enough space there, and fast enough drives, for it to work well.
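
If you want to keep an eye on that, the version store's footprint in tempdb is visible through a DMV; a quick check (8 KB pages converted to MB):

SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb
FROM tempdb.sys.dm_db_file_space_usage;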

Snapshot isolation also adds a 14-byte version pointer to every affected row in every affected table. That can cause a huge performance hit from page splits unless you have sufficient free space in each page for the added pointers.
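
If you enable it on an existing system, rebuilding the busiest indexes with some free space per page is the usual mitigation; a sketch with a made-up table and a 90% fill:

-- Leave ~10% free per leaf page to absorb the 14-byte version pointers
ALTER INDEX ALL ON dbo.SomeTable REBUILD WITH (FILLFACTOR = 90);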

... and contrary to popular belief, WITH (NOLOCK) doesn't actually prevent deadlocks. I worked for a company that had a strictly enforced policy of using WITH (NOLOCK) everywhere, and they still averaged 640 deadlocks per day, with spikes to 4,000 in a day. It merely helps reads get around blocking which, as others have stated, often isn't a good thing.

Only nasty fast, properly written code will avoid deadlocks.

It prevents deadlocks for the specific table on which it's coded, assuming that table is not also being modified in the query.

NOLOCK certainly has its concerns, but it does reduce overhead and is thus still a valuable tool when used properly. Yes, it's 100% a myth that it should be used on "every" read. But I've yet to see the grand destruction -- "it destroyed a company" -- that everyone claims is so common. Act reasonably: if you're doing something that could destroy the company, then forget even READ COMMITTED, use SERIALIZABLE instead!

I still maintain that, between the myths of "always use NOLOCK" and "every table should be clustered on an identity", the latter is far more destructive, both to table designs and to overall performance.

What do you tell your users when a standard financial report contains phantom data and is not reproducible?
What do you tell your users when the database generates an alarm based on phantom data which then cannot be verified or reproduced?
What do you tell your users when consecutive runs of a query produce different results with different ordering?

I submit that when a manager receives a report entitled "Daily Cash Transactions" that manager probably expects it to contain ALL of the cash transactions for the specified day and only transactions for that day.

Who ever said something as critical as "Daily Cash Transactions" should use NOLOCK? Again, act reasonably. Only use NOLOCK if it's reasonable for the query in question. And, again, yes, all of that is theoretically possible, but that doesn't at all mean it happens often. NOLOCK is much more useful against historical parts of the data, at least in the sense that those rows are only very rarely, if ever, updated.

I could be wrong but that's what I'm getting at. If you do an update followed by a select in a transaction, the use of WITH(NOLOCK) won't prevent a deadlock. We used to get an average of 640 deadlocks a day with that in place (same proc being fired). If you're just doing a normal select against the same table as what such a proc is using, the only need for a WITH(NOLOCK) is to prevent waiting on an update. It won't necessarily protect you from a deadlock because it's not likely that a deadlock would occur in such a simple situation to begin with. It'll simply wait for the blocking to end.

I used to think that too, because of the supposed reduction in locks and lock escalation, but I've never seen anyone prove it, nor have I personally witnessed it. If you have some proof of that in the form of demonstrable code, Scott, I'd love to see it, please. If not, then I guess it's time for me to do an experiment or two.

The more simultaneous users there are, the more noticeable the effects of NOLOCK become. It must reduce overhead, since not taking locks saves work versus taking them. But is the saving truly significant? That's much more difficult to determine.
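
One way to at least put numbers on it is to snapshot the lock-wait statistics before and after a test load and compare; a rough sketch (this measures time spent waiting on locks, not the CPU cost of taking them):

-- Run once before and once after the test load; the difference between the
-- two runs is the lock waiting the workload actually incurred.
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'LCK_M_%'
  AND waiting_tasks_count > 0
ORDER BY wait_time_ms DESC;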

Just picking up on this as the thread was referenced elsewhere just now.

Indeed, I think that would be a very rare event. Hypothetically possible though ...

All our stuff is OLTP and we don't allow NOLOCK in any code, with the exception of select-only reports run by DBAs on live data (so as to be sure not to interfere with any front-end activity). We have used Read Committed Snapshot on all our DBs for years now - i.e. APPs are built, and tested, against that setup and, for us, that seems to be the solution.

But we also have 3rd party code that has NOLOCKs applied with a pepper-shaker! They are on every statement (including #TEMP tables etc.). I get half a dozen alert emails a day for "Error 601 - Data movement during NOLOCK", so I know that an operation has been aborted because of that issue (and, given the low quality of the code, it has probably not been retried, so now we probably have a data anomaly). Given that, and because the 601 is only triggered when things are bad enough to warrant it, I expect we also get plenty (more than ONCE is too many in my book ...) of "data included twice" or "some data missed altogether" cases, e.g. successful dirty reads during an index page split.

Here's another thing, which happened to me this week and had not occurred to me before as an issue:

We were making some emergency critical data changes to a 3rd party Database on Production - I know, bad situation, should never happen, etc.

My normal approach (on our systems) would be:

BEGIN TRANSACTION
UPDATE MyTable SET MyColumn = 'NewValue' WHERE OtherColumn = 'SomeValue'
SELECT [Rowcount] = @@ROWCOUNT -- sanity-check the number of rows affected
-- COMMIT    -- highlight and run by hand once the checks pass
-- ROLLBACK  -- ... or back the whole thing out

I would normally run this, leaving the transaction hanging open, whilst I check the RowCount was reasonable and also perform a few (pre-written) SELECT statements to check everything is as-I-expect.

Of course, on a NOLOCK system I can't do this, because all the app code will be picking up all my modified rows, whether they are right or not. It's going to take me 10 seconds, or maybe more, to satisfy myself that the UPDATE was all-fine-and-hunky-dory, or not, before I then issue the COMMIT or ROLLBACK ... a long time for users to be including duff data in all the work they are doing. If I decided to ROLLBACK it could even, I suppose, be a Company Destroyer.

[quote="JeffModen, post:11, topic:4558"]
If you do an update followed by a select in a transaction, the use of WITH(NOLOCK) won't prevent a deadlock. We used to get an average of 640 deadlocks a day with that in place (same proc being fired).[/quote]

Interesting. I thought NOLOCK did avoid the deadlock there by simply reading the updated row anyway, ignoring the update lock. The effect of which was that data could be read that "didn't exist" (since the UPDATE could later be rolled back rather than committed).

For example, run these two scripts in two query windows. First script:
IF OBJECT_ID('dbo.test_locking_1') IS NOT NULL
    DROP TABLE dbo.test_locking_1;
CREATE TABLE dbo.test_locking_1 ( col1 int NOT NULL, col2 int NULL );
CREATE UNIQUE CLUSTERED INDEX test_locking_1__CL ON dbo.test_locking_1 ( col1 ) WITH ( FILLFACTOR = 99 );
INSERT INTO dbo.test_locking_1 VALUES (1, 10);

GO

-- Update the row, hold the transaction open for 20 seconds, then undo it.
BEGIN TRANSACTION;
UPDATE dbo.test_locking_1
SET col2 = 11
WHERE col1 = 1;
WAITFOR DELAY '00:00:20';
ROLLBACK TRANSACTION;

Then immediately in a second window run:

SELECT * FROM dbo.test_locking_1 WITH (NOLOCK); -- returns col2 = 11: a dirty read of the uncommitted UPDATE
GO
--wait for it, wait for it...
SELECT * FROM dbo.test_locking_1; -- blocks until the rollback, then returns col2 = 10

Not quite the same problem that I'm speaking of, Scott. I'm talking about something like the following pseudo-code, which was used in a stored procedure for a "NextID" table (instead of an IDENTITY column)...

DECLARE @NextID INT;

BEGIN TRANSACTION
  -- Read the current value, then write it back as two separate statements;
  -- under concurrency this is both a race (two sessions can read the same
  -- value) and a classic source of deadlocks.
  SELECT @NextID = Value + 1 FROM dbo.NextID WHERE TableName = 'sometablename';
  UPDATE dbo.NextID SET Value = @NextID WHERE TableName = 'sometablename';
COMMIT;
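
For what it's worth, the usual fix for that pattern is to collapse the read and the write into one atomic UPDATE using T-SQL's compound assignment, so there is no separate read for two sessions to collide over (a sketch against the same hypothetical NextID table; on SQL Server 2012+ a SEQUENCE object is the cleaner option):

DECLARE @NextID INT;

-- Increment and read back in a single statement: one exclusive lock, no
-- read-then-write window, so concurrent sessions queue up instead of deadlocking.
UPDATE dbo.NextID
SET @NextID = Value = Value + 1
WHERE TableName = 'sometablename';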