Improve Performance of Trigger

I don't know (I need to test it) whether performance suffers from referencing columns from both INSERTED and [TargetTable] - both contain the same values at the point the [After] trigger fires - and I've been a bit slack about sometimes referencing columns from INSERTED and sometimes from [TargetTable]. Perhaps I should join to INSERTED ONLY in order to locate the [TargetTable] rows that need processing in the trigger - that should just be a Clustered Index Seek - and then reference columns exclusively from [TargetTable].
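Something like this is what I have in mind - a sketch only, with invented names (TargetTable, ID, Code, SomeCol, LookupTable) standing in for the real ones:

```sql
-- Hypothetical sketch: join to INSERTED purely to locate the affected rows,
-- then take all data-column values from the base table.
CREATE TRIGGER trg_TargetTable_AU
ON dbo.TargetTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE T
    SET    SomeCol = L.LookupValue
    FROM   dbo.TargetTable AS T
           JOIN inserted AS I           -- used only to find the rows to process
             ON I.ID = T.ID             -- clustered PK: should be an index seek
           JOIN dbo.LookupTable AS L
             ON L.Code = T.Code         -- data columns referenced from T, not I
    WHERE  T.SomeCol <> L.LookupValue;  -- only touch rows that actually change
END
```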

... but I have had a go at using a CTE to "assemble" the data, in the hope of then being able to determine easily which rows have changed and need updating.
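The shape of it is roughly this (a sketch of the statement inside the AFTER trigger, where the inserted pseudo-table is in scope; all object and column names are invented for illustration):

```sql
-- Hypothetical sketch of the CTE approach: assemble the re-calculated
-- values first, then update only the rows whose values differ.
WITH Recalc AS
(
    SELECT  T.ID,
            L.LookupValue AS NewValue
    FROM    dbo.TargetTable AS T
            JOIN inserted AS I ON I.ID = T.ID
            JOIN dbo.LookupTable AS L ON L.Code = T.Code
)
UPDATE T
SET    SomeCol = R.NewValue
FROM   dbo.TargetTable AS T
       JOIN Recalc AS R ON R.ID = T.ID
WHERE  T.SomeCol <> R.NewValue          -- skip rows that are unchanged
   OR (T.SomeCol IS NULL AND R.NewValue IS NOT NULL)
   OR (T.SomeCol IS NOT NULL AND R.NewValue IS NULL);
```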

I don't know how to get a Query Plan (from a Trigger) in plain text, but I can get the I/O Stats.
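This is how I captured the stats below (the UPDATE shown is a made-up benign test statement; I believe SET STATISTICS PROFILE ON will also emit a plain-text plan for the statements inside a trigger, but I haven't verified that myself):

```sql
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

UPDATE dbo.TargetTable        -- benign update: set a column to itself,
SET    SomeCol = SomeCol      -- which fires the AFTER UPDATE trigger
WHERE  ID BETWEEN 1 AND 1000; -- hypothetical test range

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```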

The original was:

Table 'TargetTable'. Scan count 1, logical reads 14
   CPU time = 0 ms, elapsed time = 15 ms.

Table 'TargetTable'. Scan count 1, logical reads 797
Table 'Worktable'. Scan count 0, logical reads 0
Table 'LookupTable'. Scan count 2, logical reads 12
   CPU time = 16 ms,  elapsed time = 9 ms.
   CPU time = 16 ms,  elapsed time = 24 ms.

The first result is from the simple UPDATE statement (I just set a column to itself); the second is from the Trigger.

After I changed it to use a CTE I got:

Table 'TargetTable'. Scan count 1, logical reads 14
   CPU time = 0 ms, elapsed time = 3 ms.

Table 'TargetTable'. Scan count 2, logical reads 28
Table 'Worktable'. Scan count 1, logical reads 0
Table 'LookupTable'. Scan count 2, logical reads 12
   CPU time = 16 ms,  elapsed time = 5 ms.
   CPU time = 63 ms,  elapsed time = 55 ms.

The reduction in logical reads on TargetTable looks good, but the second CPU time has gone up a lot - I'm not sure whether that figure is an absolute or a "logical" value though.

Adding tests (for differences) between the re-calculated CTE column values and the TargetTable columns, and then doing a benign update (no changes to the actual column values, and the trigger's JOIN to the Lookup table does not find any different values), I get this:

Table 'TargetTable'. Scan count 1, logical reads 14
   CPU time = 0 ms, elapsed time = 3 ms.

Table 'TargetTable'. Scan count 2, logical reads 28
Table 'Worktable'. Scan count 1, logical reads 578
Table 'LookupTable'. Scan count 2, logical reads 12
   CPU time = 0 ms,  elapsed time = 6 ms.
   CPU time = 47 ms,  elapsed time = 56 ms.

There will have been no physical UPDATEs to TargetTable (i.e. by the trigger), but there is significant activity in the Worktable to compare columns to see if anything has changed. It is not really apparent from this Stats logging whether there is an improvement in time; I expect I need to run it on a much bigger dataset and time how long it takes. My earlier process, which had been running for 4 hours before I aborted it, is still rolling back ... I might be gone some time!!
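For reference, the difference test itself can be written more compactly with EXCEPT, which treats NULLs as equal and so avoids the per-column NULL handling; it extends naturally to comparing several columns at once. A sketch, again with invented names:

```sql
-- Hypothetical sketch: update only rows where the re-calculated value
-- differs, using EXCEPT for NULL-safe comparison.
WITH Recalc AS
(
    SELECT  T.ID,
            L.LookupValue AS NewValue
    FROM    dbo.TargetTable AS T
            JOIN inserted AS I ON I.ID = T.ID
            JOIN dbo.LookupTable AS L ON L.Code = T.Code
)
UPDATE T
SET    SomeCol = R.NewValue
FROM   dbo.TargetTable AS T
       JOIN Recalc AS R ON R.ID = T.ID
WHERE  EXISTS (SELECT R.NewValue EXCEPT SELECT T.SomeCol);
```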