Updating 2 tables in one transaction: will the first update affect the data the second update sees?

Hi.
I will update 2 tables in one transaction.
I plan to use something like this:

begin tran

update Table_F set x = 1
from Table_F
inner join Table_G on ...
where F is null
and X is null

update Table_G set y = 1
from Table_G
inner join Table_F on ...
where F is not null
and X is null

commit

So my question is: after the first update runs, will the second update see the changed data, or will it still see the data as it was before?
If the change is not visible then I must check for NULL on both tables, but if it is visible then I only need to check for NULL on the second table.

Which way is it?
Thanks.

Hi.
I have done some tests, and from what I can see the rows in the first table are updated and the second UPDATE does take the first UPDATE's changes into account.

So if I am not mistaken, the changed data is visible to the second UPDATE.
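To illustrate the kind of test I mean, simplified down to made-up temp tables rather than my real schema:

create table #F (id int, f int null, x int null);
create table #G (id int, y int null);

insert into #F (id, f, x) values (1, NULL, NULL);
insert into #G (id, y) values (1, NULL);

begin tran;

-- first statement: sets f on the matching row
update F set f = 1
from #F F
inner join #G G on G.id = F.id
where F.f is null and F.x is null;

-- second statement: only matches rows where #F.f is now NOT NULL,
-- i.e. it depends on seeing the change made by the first statement
update G set y = 1
from #G G
inner join #F F on F.id = G.id
where F.f is not null;

commit;

select * from #G;  -- y = 1, so the second UPDATE did see the first one's change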

Does that seem right?
Thanks.

Yes, you are correct.

Setting aside all the details about atomicity and locking and so on, if you remove the begin tran and commit statements the behavior would logically be the same.
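A quick way to see it with a throwaway table (nothing to do with your real schema), and no transaction at all:

create table #T (id int, a int null, b int null);
insert into #T (id, a, b) values (1, NULL, NULL);

update #T set a = 1 where a is null;
update #T set b = 1 where a is not null;  -- matches, because a is already 1 by now

select * from #T;  -- a = 1, b = 1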

I think that will NOT be the case IF you use Read Committed Snapshot ... but I'm not sure!!

Hi.
I'm using the default SQL Server behavior, and the rows that the first UPDATE changes to not null are then picked up by the second query's join (which requires the first table's rows to be not null), so they get updated as well.

So...

Is there something I should be aware of?

Thanks.

I don't think the behavior would be any different regardless of RCSI. If you execute the first statement, leave the transaction open, and then query from another SPID, you would see the data as though the first statement had not executed, because the read comes from the snapshot.
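Roughly the scenario I mean, assuming the database has READ_COMMITTED_SNAPSHOT ON (the column name here is just a placeholder):

-- session 1:
begin tran;
update Table_F set x = 1 where x is null;
-- leave the transaction open for now

-- session 2 (another SPID), while session 1 is still open:
select x from Table_F;  -- under RCSI this returns the last committed values
                        -- (as if the update had not happened) instead of blocking

-- back in session 1: its own later statements still see x = 1 as usual
commit;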

So basically I'm good, and the second statement will always see the data as changed by the first update, correct?
Thanks.

Outside of some type of RCSI, wouldn't an attempt to query data that was just updated within a transaction simply wait for that transaction to complete (commit or roll back)?

I think it would wait for commit/rollback (unless you added a NOLOCK hint or something unspeakable like that).
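That is, something along these lines (placeholder table name, just to illustrate the default READ COMMITTED behaviour):

-- session 1:
begin tran;
update Table_F set x = 1 where x is null;   -- holds exclusive locks on the updated rows

-- session 2, while session 1 is still open:
select x from Table_F;                -- blocks until session 1 commits or rolls back
select x from Table_F with (nolock);  -- returns straight away, but can read the
                                      -- uncommitted (dirty) x = 1 values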

I was referring to the case that Kristen mentioned where RCSI is enabled.

Yes, that's true, but my point was different and relates to WITHIN the current batch. I'll try to clarify - bear with me!!

I agree, but only because the two queries appear to be mutually exclusive (but a TRIGGER, for example, could fox that :frowning: )

update Table_F set x = 1
from Table_F
inner join Table_G on ...
where F is null          --<< Condition (a)
and X is null

update Table_G set y = 1
from Table_G
inner join Table_F on ...
where F is not null      --<< Condition (b)
and X is null

Also, the UPDATEs are to Table_F and Table_G, so presumably the second update will be OK, 'coz the tables are different (unless TRIGGERS strike again ...). So the only risk is that the second UPDATE fails to match the intended rows: some rows were true for F IS NOT NULL at the start of the batch, but the first update changed [F] to be NULL, and that updated data should then NOT be visible to the second UPDATE but BECAUSE of RCSI it [wrongly] is visible.

Using SERIALIZABLE should fix that, or maybe it also needs UPDLOCK

Bit of an edge case ... but hard to find in DEV, IME!!

Technically you prefaced the statement I quoted with:

I don't think the behavior would be any different regardless of RCSI.

So from what I understand, I should be OK as long as I don't have something (e.g. a trigger) interfering with the updates, and I'm not using snapshots?

Yeah, I think we all think that!

Maybe you could / should? use

WITH (UPDLOCK, SERIALIZABLE)

to make sure of it? And wrap the two statements in a transaction so they have to both succeed (or fail).
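Something along these lines, perhaps (placeholder column names and join conditions, and SET XACT_ABORT so that an error rolls the whole thing back):

set xact_abort on;  -- any run-time error rolls back the whole transaction

begin tran;

update F set x = 1
from Table_F F with (updlock, serializable)
inner join Table_G G on G.id = F.id   -- placeholder join condition
where F.f is null and F.x is null;

update G set y = 1
from Table_G G with (updlock, serializable)
inner join Table_F F with (updlock, serializable) on F.id = G.id
where F.f is not null and G.x is null;   -- mirroring the original conditions

commit;

Which tables actually need which hints depends on the real joins (and any triggers), so treat this as a starting point rather than a definitive answer.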

Hi.
I've read some articles stating that UPDLOCK must be used together with an index so that it does not block other updates.
Used correctly it can help avoid deadlocks, but used badly it can hang the execution.
As I haven't used UPDLOCK before, I'm not sure whether I would make matters better or worse.
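What I took from those articles is roughly the following, though I may well have it wrong (the index and column names are just made up):

-- an index covering the columns the UPDATE uses to find its rows, so the UPDLOCK
-- only locks the matching keys instead of scanning (and locking) the whole table
create index IX_Table_F_f_x on Table_F (f, x);

update F set x = 1
from Table_F F with (updlock)
inner join Table_G G on G.id = F.id
where F.f is null and F.x is null;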

Thanks.