SQLTeam.com | Weblogs | Forums

Avoid locks

Hello guys,
I've a database SQL 2008R2 std with many applications that run DDL and DML transactions and often they cause locks on tables. Supposing that transactions are all already optimized , which are the possible solutions for avoid locks ? For example design replication database so I can redirect all query transactions to replicated? partitioning?

thanks for any tips

To increase concurrency (the real goal), try RCSI (read committed snapshot isolation). Locks are normal and are the way SQL Server guarantees ACID.
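Enabling RCSI is a one-time database setting. A minimal sketch (the database name is a placeholder, and `WITH ROLLBACK IMMEDIATE` will kill any open transactions, so run it in a maintenance window):

```sql
-- Hypothetical database name. With RCSI on, readers see the last committed
-- version of each row (kept in tempdb's version store) instead of blocking
-- on writers' locks.
ALTER DATABASE MyDb
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;  -- rolls back open transactions so the change can take effect

-- Verify the setting took:
SELECT name, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = N'MyDb';
```

Note that RCSI adds row-versioning overhead in tempdb, so make sure tempdb is sized and placed accordingly.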


We offload read queries to a secondary server. For 2012 and up, we use readable secondaries using the Availability Groups feature. For earlier versions, we used Transactional Replication.

+1 to the RCSI comment that gbritton made. We used it at my last job in addition to the AG/replication setup. At my current job, they have not implemented it yet.


Thanks very much. But is the Availability Groups feature available in SQL 2012 Standard Edition?

Not for 2012, no. But you could use log shipping to any number of readable secondaries.

The problem with log shipping is that users have to be disconnected every time a log needs to be restored (most have the restore job set at 15 minutes). For this reason, I don't think it's a viable solution for many. I recommend Transactional Replication instead.

Well, it depends on the usage patterns. For "live" access, you're right. But for, say, nightly reporting runs, log shipping is fine and has less overhead.
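For context, a log shipping secondary is only readable between restores when each log is restored `WITH STANDBY`; the restore job must drop user connections first, which is the disconnect mentioned above. A sketch of the restore step (file paths are hypothetical):

```sql
-- Hypothetical backup and undo file paths. STANDBY leaves the database
-- read-only between restores; the undo file holds uncommitted transactions
-- that were rolled back to make the data consistent for readers.
RESTORE LOG MyDb
FROM DISK = N'\\backupshare\MyDb_log.trn'
WITH STANDBY = N'D:\LogShip\MyDb_undo.dat';
```

In practice the log shipping restore job wraps this, killing active sessions on the secondary before each restore cycle.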

I'm thinking of buying the Enterprise edition so I can use AlwaysOn Availability Groups. I need an always-on database for ERP applications that have to do DML transactions.

Can I use Active Secondary Replicas for my aim?

The secondary replicas would be readable but not writeable.

Sorry, yes, I meant only for select statements. Is every row inserted or updated on a table replicated immediately?

But my big doubt is the behavior when database A writes the replica to database B: at that moment, B.table is locked, right? So what happens if an application starts a select statement on the replicated B.table while an insert or update transaction is active for the replica? Do I have to put NOLOCK on all select statements running on database B?

thanks again

With replication, there will be some latency. The amount of latency depends on your setup and how big the transactions are (such as index rebuilds). On a normal day with short transactions, latency should be 1-2 seconds. But when a large transaction runs, such as a nightly index rebuild, latency will increase and can be excessive. Make sure you design the system to handle your load.
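One way to measure transactional replication latency is with tracer tokens, which time a marker's trip from publisher to distributor to subscriber. A sketch, assuming a publication name (`MyPublication` is a placeholder):

```sql
-- Run at the publisher, in the published database.
EXEC sys.sp_posttracertoken @publication = N'MyPublication';

-- List posted tokens to get a tracer_id:
EXEC sys.sp_helptracertokens @publication = N'MyPublication';

-- Then check the latency recorded for a given token
-- (use a tracer_id returned by the previous call; 1 here is a placeholder):
EXEC sys.sp_helptracertokenhistory
     @publication = N'MyPublication',
     @tracer_id   = 1;
```

Posting a token during a quiet period and again during the index rebuild will show how much latency balloons under large transactions.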

For a synchronous replica in an Availability Group, the latency is zero (unless there's a problem).

Do NOT use NOLOCK. Replicas will use snapshot isolation. The AGs are designed for high concurrency.
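To actually land those reads on the secondary without touching the queries, you can configure read-only routing on the AG and have clients add `ApplicationIntent=ReadOnly` to their connection strings. A sketch with hypothetical AG and server names:

```sql
-- Hypothetical names: MyAG, SQLNODE1 (primary), SQLNODE2 (secondary).
-- Allow read-intent connections on the secondary:
ALTER AVAILABILITY GROUP MyAG
MODIFY REPLICA ON N'SQLNODE2'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

-- Tell the AG where to route read-intent sessions for that replica:
ALTER AVAILABILITY GROUP MyAG
MODIFY REPLICA ON N'SQLNODE2'
WITH (SECONDARY_ROLE (READ_ONLY_ROUTING_URL = N'TCP://sqlnode2.domain.local:1433'));

-- Define the routing list used while SQLNODE1 is primary:
ALTER AVAILABILITY GROUP MyAG
MODIFY REPLICA ON N'SQLNODE1'
WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (N'SQLNODE2')));
```

With this in place, connections through the AG listener that specify `ApplicationIntent=ReadOnly` are sent to the readable secondary automatically.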


Very good. A few seconds of latency isn't a problem for my purposes, and the index rebuild job is scheduled monthly at night.

But if the replica uses snapshot isolation, does that mean changes are needed in the application, since it has to use a new isolation level? For example, a commit on every select statement, as I've read in this post:

Difference Between Read Committed Snapshot and Snapshot Isolation Level

Whether or not you need to make changes in your application for the isolation level depends on your application requirements. I've had many systems on AGs over the last 2 years, as well as several applications using read committed snapshot over the last 8 years, and none of them required any changes.

thank you very much.

Sorry Tara, last question: to implement an AG, are two SQL Server Enterprise licenses mandatory, right?

If you use the secondary replica for reads, or really anything besides being a passive standby replica, then you must license it. Yes, Enterprise edition.

In your case, for synchronizing the replica from the primary database to the secondary read-only database, what kind of connection do you use? A 10 Gbit switch?
Because I suppose the speed of communication between them is very important.

Yes, 10 Gb, but I think the IO subsystem is much more important. On that system, we used local SSDs (Fusion-io) for most indexes and then the SAN (both SSD and non-SSD) for anything that couldn't fit on the local SSD and wasn't accessed as much.

Stop supposing that. I can almost guarantee the transactions are not optimized, because they are causing the long-term locks that are bugging you. You can't actually avoid locks. You just need to write good, fast code. If you have long-term locks, you don't have good, fast code.

I was hit with such a problem just this last week. Everyone said that a certain front-end related stored procedure was "already optimized" and could not be made faster because of the weird requirements. It took 1 CPU second to execute, used 227,000 reads per run, and ran 26,500 times in an 8-hour period. I'll do the math for you... that's 7.36 hours of CPU time and more than 49 terabytes of memory IO.

When I got done with it, the total CPU time used in 8 hours had been reduced to 13.25 CPU seconds as a total for all 26,500 runs, and the memory IO had been reduced to only 13 billion bytes per day (3 orders of magnitude improvement on CPU and reads).

If you're having problems with locks, look to your code, because that's where the performance is. You can't get that kind of improvement even with SSDs or snapshot isolation.


Hi Jeff, I agree. 90% of lock waits are due to bad SQL code from developers, and my aim is to reduce locks as much as possible, so I need to help my colleagues rewrite better code. But when there are many applications causing high concurrency, I also have to introduce other features that can benefit the database.