A trigger to update a column, cannot make it work

I know triggers are ofttimes looked down on, but this is such a simple task, I thought surely it must be easy..
All I'm trying to do is update a column on a table after performing an UPDATE or INSERT on same table.

I stripped out my logic where I'm doing some checking before the update, so I could just see if I could perform a simple update.. but I'm failing miserably..
What it does is: it just sits there (hangs), doesn't update the table, and I end up killing my SQL process.
I seem to be creating a loop or...something
Am I trying to update a table that's ...busy from the previous UPDATE/INSERT?

Here's what I'm trying to do:

AFTER UPDATE, INSERT
AS
BEGIN
	-- SET NOCOUNT ON added to prevent extra result sets from
	-- interfering with SELECT statements.
	SET NOCOUNT ON;

	UPDATE my_table SET myColumnName =
		( SELECT myPreviouslyUpdatedColumn
		  FROM inserted )
END

The columns are the same datatype.

The way you have written the update statement, it will update all the rows in my_table. If the table has a large number of rows, that can cause the query to appear non-responsive. If you are trying to update only the rows that were inserted/updated, then your query should be something like this:

UPDATE m SET 
	myColumnName = i.myPreviouslyUpdatedColumn
FROM
	my_table m
	INNER JOIN INSERTED i ON
		i.YourtablesPrimaryKeyColumn = m.YourtablesPrimaryKeyColumn;

Another possibility for why your query might appear to be non-responsive is if you have recursive and/or nested triggers turned on. But recursive triggers are turned off by default, so unless you turned them on, that is unlikely to be the cause.
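If you want to rule that out quickly, you can check the relevant settings (just a quick check, run in the database that owns the trigger):

-- recursive triggers: a per-database setting, OFF (0) by default
SELECT name, is_recursive_triggers_on
FROM sys.databases
WHERE name = DB_NAME();

-- nested triggers: a server-wide setting, ON (1) by default
EXEC sp_configure 'nested triggers';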


thank you kind sir: I shall study it & learn...
R

So, since we're dealing with just 1 table, would something like this work?

UPDATE m SET 
myTargetColumn = m.myInsertedColumn

FROM
myTableName m

That would update all the rows in the table.

oh, I see that now...
there's no constraint..

I actually, do have a particular value I'm looking for within a column in that table...
I'll try & work that out..
(it's a specific text string in a column)
:open_mouth:

oh wait, I swear someone told me that when you're referencing a column after an update/insert, that the operations
you specify are acting on THAT update or insert..

meaning, in this case , if I say:

myTargetColumn = m.myInsertedColumn

then "myInsertedColumn" would be referring to the column that was JUST UPDATED or INSERTED

I guess not maybe?
(I also understand that there could be multiple row updates to contend with as well)

Nope, you can do whatever you like in a trigger; it doesn't have to have any bearing on the data that was changed.

A common mistake is that folk think that a trigger is applied to each row that was changed - i.e. the code they write only has to operate on a single row, and in a multi-row operation the trigger will be applied to each row in turn. That is NOT the case either ... a trigger has to be written to operate on all the rows that were changed, as a set. (You can of course write a loop that will walk round each row in turn, but that would be very inefficient.)
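If you want to see that for yourself, a throwaway sketch like this (table and trigger names made up) will show the trigger firing once per statement, with every affected row in [inserted]:

CREATE TRIGGER trg_MyTable_Demo ON MyTable
AFTER INSERT, UPDATE
AS
BEGIN
	SET NOCOUNT ON;
	-- [inserted] holds EVERY row touched by the triggering statement,
	-- so a 3-row UPDATE fires this once, with 3 rows in [inserted]
	DECLARE @RowsTouched int
	SELECT @RowsTouched = COUNT(*) FROM inserted
	PRINT 'Trigger fired once, for ' + CAST(@RowsTouched AS varchar(10)) + ' row(s)'
END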

It might be that you are thinking of the UPDATE() function? That tests whether a specific column was included in the INSERT / UPDATE statement. This, too, is often wrongly used and assumed to test whether the value of the column CHANGED following an UPDATE statement. That is not the case: UPDATE(MyColumn) returns TRUE merely if the column was included in the UPDATE statement.
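For example (names made up again), this prints even if the statement was UPDATE MyTable SET MyColumn = MyColumn, where nothing actually changed:

CREATE TRIGGER trg_MyTable_UpdateTest ON MyTable
AFTER UPDATE
AS
BEGIN
	SET NOCOUNT ON;
	-- TRUE whenever MyColumn appears in the SET list,
	-- regardless of whether its value is different from before
	IF UPDATE(MyColumn)
		PRINT 'MyColumn was assigned by the UPDATE statement'
END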

If you want your trigger to "act" on the rows that were Inserted / Updated then you need to use the pseudo table "inserted" which will contain the "after" data of those rows. You can join [INSERTED] to the actual table, on any Unique Index - most commonly the PKey - to then update the row(s) further as per the example that @JamesK gave you.

If you just do

UPDATE M
SET ...

without such a JOIN to [inserted] then you will update all the rows in the table.


Ok., thanks for that Kristen.
So far, I'm getting a hung app, which makes me think something is updating...too much.
Any thoughts about the following?
(I'm prepared for "dude, that's hideous")

UPDATE oe_line
SET generic_custom_description = j.extended_desc
FROM oe_line j
INNER JOIN INSERTED i
ON j.line_no = i.line_no
WHERE i.extended_desc like '%some text:%'

UPDATE j
SET generic_custom_description = i.extended_desc
FROM oe_line j
INNER JOIN INSERTED i
ON j.line_no = i.line_no
-- AND another join condition?
WHERE i.extended_desc like '%some text:%'

I am guessing, but I think that will JOIN every row in [oe_line] (the table being updated) with every row in [j] (the alias for another instance of the [oe_line] table).

@Ifor has shown the correct syntax. We always use that format:

UPDATE U
SET ...
FROM MyTable AS U
WHERE ...

even if I am only updating a single table, to avoid ever using the syntax

UPDATE MyTable
SET ...
WHERE ...

so that when / if it becomes necessary to add JOINs there is no "change of coding style" with the potential to introduce a bug as a consequence.
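That way, when the JOIN does become necessary, the statement just grows; nothing already written has to be restructured (names hypothetical):

UPDATE U
SET SomeColumn = i.SomeColumn
FROM MyTable AS U
	INNER JOIN inserted AS i
		ON i.MyKeyColumn = U.MyKeyColumn
WHERE i.SomeColumn LIKE '%some text%'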

I think I may have finally got it!

Hello again, I wanted to show you folks what I have working now.
It's in my TEST system, and so far, is doing what I want.

I'm not sure if this is asking too much, but I was wondering if anyone would care to
take a look and see if there are potential pitfalls here that I can't see, that might
trip up the process.
Many thanks for all your help with this.
Rich


BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;

DECLARE @extendedDescValue as NVARCHAR(255)
DECLARE @OrderNoValue as VARCHAR(8)
DECLARE @BusinessCursor as CURSOR
DECLARE @pos1 as int
SET @BusinessCursor = CURSOR FOR
SELECT extended_desc
FROM INSERTED i
OPEN @BusinessCursor
FETCH NEXT FROM @BusinessCursor INTO @extendedDescValue
WHILE @@FETCH_STATUS = 0
BEGIN
	If @extendedDescValue like '%Customer Bin:%'
		BEGIN
			SET  @pos1 = CharIndex('Customer Bin:',@extendedDescValue)+12
			--UPDATE oe_line
			UPDATE o 
			SET generic_custom_description = ( RTrim(LTrim(SUBSTRING(@extendedDescValue,@pos1,99))))
			FROM oe_line o
			INNER JOIN INSERTED i
			ON o.line_no = i.line_no
			WHERE o.extended_desc like '%Customer Bin:%'
		END
		SET  @pos1 = 0
	FETCH NEXT FROM @BusinessCursor INTO  @extendedDescValue
	CLOSE @BusinessCursor
	DEALLOCATE @BusinessCursor
END

END

You have an indentation error on "SET @pos1 = 0" - it may seem trivial, but indentation is there, for me at least, to indicate the logic flow, and so I am questioning it:

BEGIN
	If @extendedDescValue like '%Customer Bin:%'
		BEGIN
			SET  ...
			...
		END
		SET  @pos1 = 0
	FETCH ...
	...
END

so when I see that I immediately think "Is that 'SET @pos1 = 0' indented correctly, in which case it should be inside the BEGIN / END (or, perhaps, inside an ELSE?), or should it be at the same indentation as the rest of the code outside the END?"

I'm perhaps made slightly more unsure by the style of indentation, as the BEGIN / END themselves are indented. For me this would be a more conventional style:

BEGIN
	If @extendedDescValue like '%Customer Bin:%'
	BEGIN
		SET  ...
		...
	END
	SET  @pos1 = 0
	FETCH ...
	...
END

either way, whatever coding style you decide is fine. Just make sure you are very consistent in applying it.

As a separate point there is no need to "reset" SET @pos1 = 0 - it is not used after that point, so that is redundant, as is the declaration of @OrderNoValue which is not used (in Peer Review that would cause me to ask "Was this MEANT to be used? or is it redundant?")

But that coding-style issue is incidental to "Why are you using a CURSOR"? Loops are usually the work of the devil in relational databases and I can't see what purpose a LOOP serves here.

You are stepping through all the rows (in INSERTED) that are available to the Trigger and for each one that has

extendedDescValue like '%Customer Bin:%' 

you are then updating ALL the rows that match '%Customer Bin:%' BUT you are using the CharIndex of the specific row that the cursor found.

This may well be working BECAUSE the trigger is ONLY being called / tested with single-row-updates

Scrap the cursor
Change the UPDATE to use the CharIndex in-line
Process all the rows using JOIN INSERTED as a single set - with SQL that is highly likely to be 100x faster than looping round them (whether Cursor or some other Loop method)
Test the Trigger with both one-row and multiple-rows [that match the LIKE '%Customer Bin:%' criteria]

I'm not convinced that your "+12" is correct. That will INCLUDE the ":" in the subsequent SUBSTRING ... which you then LTrim() - clearly the LTrim() is not going to remove anything, and given that you included that function I presume your intention was to remove the whole of 'Customer Bin:' AND any spaces after that?

Here's a test framework:

DECLARE @extendedDescValue as NVARCHAR(255)
DECLARE @pos1 as int

SELECT	@extendedDescValue = 'xxx Customer Bin: yyy '

SELECT	@pos1 = CharIndex('Customer Bin:',@extendedDescValue)+12
SELECT	[@pos1]=@pos1,
	[Substring] = ']' + SUBSTRING(@extendedDescValue,@pos1,99) + '[',
	[generic_custom_description] = ']' + ( RTrim(LTrim(SUBSTRING(@extendedDescValue,@pos1,99)))) + '['

try changing "+12" to "+13" and seeing if THAT is what you intended.

Why are you limiting SUBSTRING() to 99 characters? The definition of @extendedDescValue is 255 characters (and it is important that that is at least as big as the column [extended_desc], INCLUDING remembering to change that @extendedDescValue definition when [extended_desc] changes in future ... that is a significant potential future maintenance bug).

Personally I would use STUFF instead of SUBSTRING, because it avoids the issue of SUBSTRING() having to figure out what the final "99" parameter should be, although STUFF() is not well known so you may well never have heard of it ...

Add this to the test framework above:

SELECT	[STUFF()] = ']' + RTrim(STUFF(@extendedDescValue,1, @pos1,'')) + '['

and experiment with leading / trailing spaces and any other "test data" that you need to.
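As one more variation on the same framework (just a sketch): you can derive the offset from the marker text itself, so the 12-vs-13 arithmetic disappears and a future change to 'Customer Bin:' cannot silently break it:

DECLARE @marker as NVARCHAR(20)
SELECT	@marker = 'Customer Bin:'

SELECT	[LEN() based] = ']'
	+ LTrim(RTrim(STUFF(@extendedDescValue, 1,
		CharIndex(@marker, @extendedDescValue) + LEN(@marker) - 1, ''))) + '['

Note that this version also LTrim()s, i.e. it assumes you do want any spaces after the colon removed.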

What about if the test data was this? What would be the expected outcome?

SELECT	@extendedDescValue = 'xxx Customer Bin: yyy Customer Bin: zzz '

? Maybe that is "ridiculous" in the context of your APP though :slight_smile:

Once you have the 12 / 13 offset finalised, and any decisions about TRIMming and so on, then I would suggest that the [whole trigger] code should be something like this:

BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;

	-- UPDATE oe_line
	UPDATE o 
	SET generic_custom_description
		= RTrim(STUFF(o.extended_desc, 1,
				CharIndex('Customer Bin:', o.extended_desc)+12,
				''))
	FROM oe_line AS o
	INNER JOIN INSERTED i
		ON o.line_no = i.line_no
	WHERE o.extended_desc like '%Customer Bin:%'
END
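
And for the one-row / multiple-rows test, something along these lines exercises the multi-row case in a single statement (the line_no values are placeholders - pick two that exist in your TEST data):

-- both rows are changed by ONE statement, so the trigger fires once
-- with two rows in [inserted]
UPDATE oe_line
SET extended_desc = 'Customer Bin: TEST-' + CAST(line_no AS varchar(10))
WHERE line_no IN (1001, 1002);

SELECT line_no, extended_desc, generic_custom_description
FROM oe_line
WHERE line_no IN (1001, 1002);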

Does [generic_custom_description] ever get updated by any other means? What if this row was updated (e.g. by a User) again and the "Customer Bin" stuff was removed, and the whole thing replaced with "1234" - would it matter that [generic_custom_description] still contained the parsed data from the "Customer Bin" at the previous update? (or maybe the user is able to, also, change the [generic_custom_description] value)

Well, some things for me to consider here for sure.
thank you SO much for taking the time here Kristen.

-The indentation is just habit, from the scripting work I do..
I'm happy to change that up to appear more.. SQL friendly. It does look
out of place, I agree.
-The reset Pos bit, I did because I think I was getting returns earlier on that
made me think the thing needed to be zeroed out: I've removed that.
-Regarding the cursor, that is simply for lack of knowing a better/different way of
looping through the resulting INSERT. I'll definitely try your method.
BTW, regarding testing with single row updates, I am testing by adding single AND multiple
records(lines) to the app, so I'm able to see multiple rows get processed by the 'loop'.

-The goal was indeed to look for 'Customer Bin:' and extract everything after that: though
I grant you, the question of "Customer Bin: yyy Customer Bin:" is an interesting one..
-The 99 characters was quite simply short-sighted, and the result of inexperience. I made
one of those novice assumptions about the potential length of the data: shouldn't do that.. I
really should know better: that will change to the MAX allowed value for the field.

I shall get to work on this now.
Again, thank you SOOO much for your wisdom & insights.

A geek in Maine appreciates it tremendously.
Rich

The style doesn't matter (folk here will argue backwards-and-forwards on Style all day long!), I would stick with what you are familiar with. Just be consistent.

Only thing I would say is that if you read / learn of a style change that you think would enhance your code (either presentation or, for me more importantly, "safer coding") then you can consider adopting it. All your old code won't conform of course, so it needs to pass the "definitely worthwhile" test !!

Good use of the Forums then :slight_smile:

The take-away from this is to change your Loop / Cursor for a set-based solution. When I came to SQL from sequential-type languages it took me some time to get my head around that.

You can benchmark the difference if you like, but set-based is likely to be of the order of 100x faster than row-by-agonising-row (RBAR) loops; it's definitely not some trivial Geeky save-a-CPU-cycle-here difference ...
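If you want to put a number on it, a crude sketch like this is enough - run each version against the same test data and compare:

DECLARE @t0 as datetime2
SELECT	@t0 = SYSDATETIME()

-- ... run the cursor version (or the set-based version) here ...

SELECT	[elapsed ms] = DATEDIFF(millisecond, @t0, SYSDATETIME())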

good words.
Struggling a bit with the new implementation, but I'm going to stick with it until I get it.
R