Can I fix database locks for a long-running update query using the READ UNCOMMITTED isolation level?

We have a table of values that has about 5 million entries in it. That table is pretty static; it's only updated every few weeks, though new records are inserted and old ones are modified during business hours.

We have a few different stored procedures that run all day, updating records in other tables using the values in the large table. The queries usually run pretty quickly, but occasionally one ends up running long; then the database locks start and everything gets hung up until the first query finishes, or it deadlocks and throws an error and everything starts running again.

Could I add this around the query so that the update wouldn't lock the table?
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
-- Update SQL here
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

(or am I totally missing an isolation level understanding?)

The large table is only updated every few weeks, the potential for dirty reads is limited, we already have an overnight data integrity process in place that specifically looks for any records that match against the table (due to late start/end date entries, etc.), and it's not time-critical to be 100% right on the first pass (anything missed can get picked up in the overnight scan without business implications).

Is that something that could work?

Use " WITH (NOLOCK)" on the SELECTs again the large (lookup) table in the stored procs:

UPDATE dbo.other_table
SET ... = ...
FROM dbo.other_table
INNER JOIN large_lookup_table WITH (NOLOCK)
    ON ... = ...

An UPDATE will always take locks; there's no way around that. SQL Server must do that to preserve data integrity, for example so that one UPDATE doesn't overwrite another.

However, you can still read from a table when it is being UPDATEd if you use NOLOCK.
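For example, here's a minimal sketch of a read against the lookup table with the hint (the column names and the @upc parameter are made up for illustration):

-- Hypothetical names; @upc would be a parameter of the calling proc.
-- With NOLOCK this read takes no shared locks on the lookup table, so it
-- won't block (or be blocked by) an in-flight UPDATE, at the cost of
-- possibly seeing uncommitted ("dirty") rows.
SELECT l.vendor_id, l.upc, l.start_date, l.end_date
FROM dbo.large_lookup_table AS l WITH (NOLOCK)
WHERE l.upc = @upc;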

You could also look at one of the SNAPSHOT isolation levels. RCSI (READ COMMITTED SNAPSHOT isolation) is sort of like Oracle's MVCC.
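As a rough sketch, both are database-level switches, plus a per-session opt-in for full SNAPSHOT; YourDb is a placeholder name:

-- RCSI: the default READ COMMITTED level starts reading row versions
-- (flipping it on typically needs the database quiesced, or WITH ROLLBACK IMMEDIATE)
ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON;

-- Or allow explicit SNAPSHOT transactions and opt in per session:
ALTER DATABASE YourDb SET ALLOW_SNAPSHOT_ISOLATION ON;
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;  -- inside the proc that reads the lookup table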

Using one of the snapshot isolation levels puts more stress on tempdb to store the row versions, unless you are using SQL 2019 with ADR (Accelerated Database Recovery), where the versions will already be stored in the PVS (Persistent Version Store). I.e., stress test any snapshot isolation very carefully before putting it into production.
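For reference, ADR is also just a database-level setting on SQL 2019+ (again, YourDb is a placeholder):

ALTER DATABASE YourDb SET ACCELERATED_DATABASE_RECOVERY = ON;  -- row versions then live in the database's PVS rather than tempdb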

Do you really need 5 million rows for day-to-day business activity?
If so, what percentage of the 5 million gets updated?

Sadly yes, we need it for historic reasons, and it's a large Cartesian table full of vendors and UPCs and dates :frowning:

I thought that WITH (NOLOCK) was being deprecated for some reason?

For the table being updated, I'm fine with the locks on those records; it's the read locks on the big table that are the issue. I'm not concerned about dirty reads there, since updates to that table are infrequent, and yet every time there's a slowdown in the process, it's always multiple queries updating things based on that table.

"Historic reasons" smells like reporting to me.

No, (NOLOCK) is not deprecated. It has a very useful purpose when used appropriately. Don't believe the people that tell you "Never use (NOLOCK)". That's just plain bad advice.

Can you post the UPDATE code so we can make sure that you're not doing what I simply refer to as an "Illegal Update"?