I found that the query itself (the SELECT), when instrumented, takes 1+ second.
But inserting it into the table variable triples the time. (My test case only returns 20 records.)
I am new to reading execution plans - normally I haven't looked at them. In this case I did, since the results seemed so baffling (unexpected). It says 95% of the time is in a table scan of one table that has an inner join with another table. It doesn't make sense to me that sticking the results into a table variable (especially so few rows) would be a time hit. I could see what the difference would be with temp tables.
Here's my printout ... this is from a script; in a stored proc it's faster but still poor:
(14 row(s) affected)
Query for Table B data - milsecs 1260
(14 row(s) affected)
Now we have table b filled of data - milsecs 5203
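For reference, a minimal sketch of how timings like the above can be captured with DATEDIFF and PRINT - the table and column names here are hypothetical stand-ins, not the actual script:

```sql
DECLARE @start DATETIME2 = SYSDATETIME();

-- 1) the plain SELECT on its own (hypothetical tables/columns)
SELECT a.Id, a.Name
FROM   dbo.TableA a
       INNER JOIN dbo.TableB b ON b.AId = a.Id
WHERE  b.Status = 1;

PRINT 'Query for Table B data - milsecs '
      + CAST(DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS VARCHAR(20));

SET @start = SYSDATETIME();

-- 2) the same SELECT, but inserted into a table variable
DECLARE @TempTable TABLE (Id INT, Name NVARCHAR(50));

INSERT INTO @TempTable (Id, Name)
SELECT a.Id, a.Name
FROM   dbo.TableA a
       INNER JOIN dbo.TableB b ON b.AId = a.Id
WHERE  b.Status = 1;

PRINT 'Now we have table b filled of data - milsecs '
      + CAST(DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS VARCHAR(20));
```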
Longshot: the @TempTable has different datatypes for its columns and an implicit CONVERT is involved. That's not going to take seconds on 20 rows, though ...
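To rule that longshot out, declare the table variable's columns with exactly the source columns' types (names here are hypothetical):

```sql
-- If TableA.Name is NVARCHAR(50), declare the same type here;
-- a VARCHAR column would force an implicit CONVERT on every inserted row.
DECLARE @TempTable TABLE
(
    Id   INT           NOT NULL,
    Name NVARCHAR(50)  NULL
);
```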
I would look at the Query Plan (in text form!) for the Query on its own, and then for the INSERT version. Perhaps SQL is, for some reason, choosing different indexes? You don't have a PKey on the @TempTable, but if you did I can imagine that SQL might use a different strategy to get the data "in PKey order" so-to-speak, so maybe something like that is going on.
SET SHOWPLAN_TEXT ON
GO
-- ... put query here ...
SET SHOWPLAN_TEXT OFF
GO
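If you want to test the PKey theory, a table variable can be declared with an inline primary key (columns hypothetical), and then the two SHOWPLAN outputs compared with and without it:

```sql
-- PRIMARY KEY on a table variable is clustered by default,
-- which can change the strategy SQL uses for the INSERT ... SELECT
DECLARE @TempTable TABLE
(
    Id   INT NOT NULL PRIMARY KEY,
    Name NVARCHAR(50) NULL
);
```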
Thanks - when I examined the execution plan, it suggested adding a nonclustered index on the two conditional fields (used in the WHERE clause and on the inner join). I had not done this in a while and had to read up to refresh my memory (https://msdn.microsoft.com/en-us/library/ms186342.aspx - I used the SQL Server Management Studio option, so interactive with the mouse).
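For anyone following along, the suggested index can also be created in plain T-SQL instead of through the Management Studio dialog; the table and column names below are placeholders for the two WHERE/join fields:

```sql
-- Nonclustered index covering the two conditional fields
-- used in the WHERE clause and the inner join
CREATE NONCLUSTERED INDEX IX_TableB_Status_AId
    ON dbo.TableB (Status, AId);
```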
Now my query times for the same run are:
(14 row(s) affected)
Query for Table B data - milsecs 303
(14 row(s) affected)
Now we have table b filled of data - milsecs 596