EXEC method with a variable return

I am using the EXEC method and need to know whether there is a way to assign the COUNT to the variable @JAN_COUNT. I am using this method because @SQLCRITERIA contains a multi-value IN clause, e.g. "[CVA_CVA_Code] IN (561,562,563)", which is passed to the stored procedure that this statement resides in.

    DECLARE @SQLCOMMAND AS NVARCHAR(MAX) = NULL;
    DECLARE @SQLCRITERIA AS NVARCHAR(MAX) = NULL;
    DECLARE @JAN_COUNT AS SMALLINT = 0;

    SET @SQLCOMMAND = N'SELECT 
                            @JAN_COUNT = COUNT(DISTINCT [CVA_VIN_17])
                        FROM 
                            [dbo].[TBL_CVA_AUDITS] 
                        WHERE 
                            MONTH([CVA_Audit_Date]) = 1 AND ' + @SQLCRITERIA+ '
                        GROUP BY 
                            MONTH([CVA_Audit_Date]);' 
    EXEC(@SQLCOMMAND)

replacing that with the following should work:

EXEC sp_ExecuteSQL @SQLCOMMAND, N'@JAN_COUNT smallint OUTPUT', @JAN_COUNT = @JAN_COUNT OUTPUT

An alternative would be to pass the delimited list as a @Parameter, and then use a Splitter Function to "split" that list, on the delimiter, into a #TempTable, and then to JOIN that #TempTable to the rest of your query. That will not require any dynamic SQL.
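For illustration, a minimal sketch of that splitter approach, assuming SQL Server 2016+ where STRING_SPLIT is available (on earlier versions substitute your own splitter function); the table and column names follow the original post:

```sql
-- Split the delimited list into a #TempTable once, then JOIN to it.
DECLARE @CodeList varchar(1000) = '561,562,563';

SELECT CAST(value AS int) AS CVA_Code
INTO   #Codes
FROM   STRING_SPLIT(@CodeList, ',');

-- No dynamic SQL needed: the list is now ordinary relational data.
SELECT COUNT(DISTINCT a.[CVA_VIN_17]) AS [Count_VIN]
FROM   [dbo].[TBL_CVA_AUDITS] a
JOIN   #Codes c ON c.CVA_Code = a.[CVA_CVA_Code]
WHERE  MONTH(a.[CVA_Audit_Date]) = 1;
```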

Since you are using dynamic SQL, make sure to take all necessary precautions to prevent SQL Injection. In particular, at a minimum you should replace

+ @SQLCRITERIA+

with

+ REPLACE(@SQLCRITERIA, '''', '''''') +

Thank you for your reply, Kristen. I understand the injection problem, which will be addressed. The question still remains: how do I assign the COUNT value to the variable?

@Kristen is addressing that question in the query he posted.

EXEC sp_ExecuteSQL @SQLCOMMAND, N'@JAN_COUNT smallint OUTPUT', @JAN_COUNT = @JAN_COUNT OUTPUT

The two instances of the OUTPUT keyword are for that purpose: specifying OUTPUT in the parameter definition and specifying it on the supplied parameter are both required.
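Putting the pieces together, a minimal end-to-end sketch (table and column names as in the original post; @SQLCRITERIA is shown hard-coded here purely for illustration):

```sql
DECLARE @SQLCOMMAND  nvarchar(MAX),
        @SQLCRITERIA nvarchar(MAX) = N'[CVA_CVA_Code] IN (561,562,563)',
        @JAN_COUNT   smallint = 0;

SET @SQLCOMMAND = N'SELECT @JAN_COUNT = COUNT(DISTINCT [CVA_VIN_17])
                    FROM [dbo].[TBL_CVA_AUDITS]
                    WHERE MONTH([CVA_Audit_Date]) = 1 AND ' + @SQLCRITERIA + N';';

-- OUTPUT appears twice: once in the parameter-definition string,
-- once on the parameter being passed. Both are required.
EXEC sp_executesql @SQLCOMMAND,
                   N'@JAN_COUNT smallint OUTPUT',
                   @JAN_COUNT = @JAN_COUNT OUTPUT;

SELECT @JAN_COUNT AS [Count_VIN];
```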

If I may, I would suggest that you need a "policy" which prevents SQL Injection from ever happening. Never using dynamic SQL is a good place to start; if that is not possible, then there need to be strict requirements whenever dynamic SQL is used - e.g. peer review by someone senior, a "group code review", or some such.

If dynamic SQL is allowed in your code, then in the absence of very strict safeguards you are immediately at risk; the negative PR if users / clients find out that you have been hacked is best avoided, of course!

Thank you for the advice, Kristen. I am already looking at using User-Defined Table Types instead of the methods posted above. The problem is that sometimes I need a simple working solution quickly; then I will go back and use a better solution as I learn more.
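For reference, a user-defined table type lets you pass the list as a real parameter, with no dynamic SQL at all. A minimal sketch - the type and procedure names here are illustrative, the table and columns follow the original post:

```sql
-- One-time setup: a table type to carry the list of codes.
CREATE TYPE dbo.IntList AS TABLE (Id int NOT NULL PRIMARY KEY);
GO
CREATE PROCEDURE dbo.GetJanCount
    @Codes dbo.IntList READONLY   -- table-valued parameters must be READONLY
AS
SELECT COUNT(DISTINCT a.[CVA_VIN_17]) AS [Count_VIN]
FROM   [dbo].[TBL_CVA_AUDITS] a
JOIN   @Codes c ON c.Id = a.[CVA_CVA_Code]
WHERE  MONTH(a.[CVA_Audit_Date]) = 1;
GO
-- Caller:
DECLARE @Codes dbo.IntList;
INSERT INTO @Codes (Id) VALUES (561), (562), (563);
EXEC dbo.GetJanCount @Codes = @Codes;
```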

Watch those quick solutions - they have a tendency to become permanent. I once made a table for one person to use for reports; it is now used for production, and it was never designed for that!

None of the rest of us have ever done that. Nope. Not ever !!!!

Another question. I now have it working using a user-defined table type and joining on the IDs. How many JOINs can you safely have in one query? The way I see this working, there will be one table type per search criterion, and sometimes the search criteria will contain 25 columns. That means, besides the other joins you may have, there will be an additional 25 joins from just the table types in your query. Is there another way?

I doubt it will matter (if you have lots).

If most of them will be empty (i.e. the user has only chosen One Column to filter on) then it might be worth providing a hint to the optimiser, and a RECOMPILE option so it doesn't get stuck with a single query plan which is unsuitable for future usage.

Off the top of my head:

-- Split out any delimited values for the various "filter criteria columns"
INSERT INTO @Col1Data SELECT SplitData FROM dbo.MySplitFunction(@Col1List)
SELECT @RowCountCol1 = @@ROWCOUNT
INSERT INTO @Col2Data SELECT SplitData FROM dbo.MySplitFunction(@Col2List)
SELECT @RowCountCol2 = @@ROWCOUNT
-- ... etc ...
-- Prepare results using the Filter Column List criteria
SELECT Col1, Col2, ...
FROM MyTable
WHERE     (@RowCountCol1 = 0 OR Col1 IN (SELECT SplitData FROM @Col1Data))
      AND (@RowCountCol2 = 0 OR Col2 IN (SELECT SplitData FROM @Col2Data))
-- ... etc ...
OPTION (RECOMPILE)  -- avoid getting stuck with one plan for all filter combinations

I think if you use (OUTER) JOINs rather than IN, it may be more difficult to get the query planner to treat them as "conditional".


This is exactly what was needed. Everything so far seems to be working perfectly. Thank you Kristen :).

Gosh. I have to seriously disagree with that, Kristen. Dynamic SQL is a powerful and very appropriate tool when used correctly. Telling people to never use it based on a fear that can easily be prevented just doesn't seem right. It would be far better to teach folks how to use such a great tool correctly.

Yes, you are quite right. But IMO dynamic SQL is hugely overused as the first solution of choice, and IMHO avoiding the security problems ("now we have dynamic SQL we just need to GRANT SELECT, UPDATE, DELETE on ALL tables to ALL users" :( ), the SQL Injection risks, etc. is best achieved by finding ways to avoid using dynamic SQL in the first place.

I think that use of EXECUTE AS and having Certificates for proxy execution are pretty advanced approaches and probably not realistic for the skill-set for the average developer who is using Dynamic SQL to work around a "snag". Hence "Don't use dynamic SQL" is my first solution advice.

For anyone who has the skills to use the safety nets, Dynamic SQL presents no threat. But making sure that non-SQL developers write safe SQL? I doubt that training them up to include all that stuff is realistic. Look at the proliferation of NOLOCK :( - most 3rd-party code I encounter has been written using the "get it done" method :(

Using sp_ExecuteSQL and parameterising all dynamic SQL is a good place to start (and has the advantage of cached query plans / reuse), but sooner or later a @TableName, @ColumnName or an @IN_List creeps in and bypasses the SQL Injection safety.

Surely Dynamic SQL is most commonly used for a table or column substitution in the SQL, or an IN list? When those questions come up here, there are usually alternative programming solutions that avoid dynamic SQL ... hence my view is to recommend "avoid dynamic SQL" as a first principle.
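As one illustration of such an alternative: the common "dynamic sort/column" request can often be handled statically with CASE expressions. A sketch - table, column, and parameter names here are entirely hypothetical:

```sql
-- Instead of building 'ORDER BY ' + @SortColumn dynamically,
-- static CASE expressions choose the sort key at run time.
DECLARE @SortColumn sysname = N'LastName';

SELECT FirstName, LastName, HireDate
FROM   dbo.Employees
ORDER BY CASE @SortColumn              -- string columns grouped together
             WHEN N'FirstName' THEN FirstName
             WHEN N'LastName'  THEN LastName
         END,
         CASE WHEN @SortColumn = N'HireDate' THEN HireDate END;
         -- date column kept in its own CASE to avoid mixing data types
```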

I've probably overlooked all sorts of real-world examples of where Dynamic SQL IS the solution? :)

Understood, but it doesn't need to be that complicated. It's like the passing of a Table Name... just make sure that it's actually a table name using a non-dynamic OBJECT_ID() check, and Bob's your uncle. For other things, use QUOTENAME() before usage. If either fails, you can certainly log the attempt, but never return anything back to the user on such failures.

I do agree that, like Cursors, While loops, rCTEs, and a host of other RBAR sins, Dynamic SQL is frequently misused and, like you, I will offer alternatives if an alternative can be had. Of course, right or wrong, they'll use it either way (and maybe I'm a masochist), so I'd rather tell folks how to use such a powerful tool correctly than tell them to avoid it (which they won't). I will admonish them that SQL Injection is still the leading form of successful attack, and then tell them that if they aren't going to take the time to learn how to prevent such things and implement it correctly, THEN they shouldn't use it.

Indeed ... BUT !!

CREATE PROCEDURE MyProc
    @TableName sysname
AS
DECLARE @SQL nvarchar(4000)

IF OBJECT_ID(@TableName) IS NULL
BEGIN
    RETURN -1 -- Invalid table name
END

SELECT @SQL = 'SELECT Col1, Col2 FROM ' + QuoteName(@TableName)

EXEC @SQL

Now, surely, I need to do something about the Permissions for that EXEC? My understanding is that

GRANT EXECUTE ON MyProc TO MyUserGroupOrRole

is not enough, and the User will have to have SELECT on the underlying table?

So now I need EXECUTE AS or Certificate-stuff in order to avoid having to provide permissions on the underlying objects - all of which increases the "complexity bar" for programmers for whom SQL is not their first language :D

You sure the error isn't coming from the malformed EXEC command in that proc? (EXEC @SQL needs parentheses: EXEC (@SQL).)

Shifting gears, there are necessary privs to be assigned to the proc even without Dynamic SQL or the check on OBJECT_ID(). If the user has privs to execute the proc but no privs to read the table, the proc you posted will return nothing when executed. Of course, granting privs to the table is the beginning of a security nightmare in its own right because we don't necessarily want the user to have privs to the table... just the results of running the stored procedure.

With that in mind, and whether or not Dynamic SQL is being used, the stored procedure should contain an EXECUTE AS clause to allow a user with only PUBLIC privs plus the privs to execute the proc to run it successfully. They can't see the code in the proc. They can't use the table directly. They can't do anything other than execute the proc. The following link has much more information and "scenarios" as to which EXECUTE AS option you might want to use.

Also, you can create database roles: multiple different "execute" roles to assign AD groups or users to (I try to keep it to AD groups for production), granting execution of different procs to each role. As of SQL Server 2005 you can then add the WITH EXECUTE AS OWNER clause to your stored procedures and have exquisite control of nearly all access security through stored procedures only. You don't need to grant any privs on tables or other objects - just on the procedures. It also works out well if a proc has to do something like a TRUNCATE, which would otherwise require elevated privs (e.g. db_ddladmin) that you never want to give to users or application logins.
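A minimal sketch of that setup - the role, procedure, table, and AD group names are illustrative (ALTER ROLE ... ADD MEMBER is SQL Server 2012+; use sp_addrolemember on older versions):

```sql
-- One "execute" role per functional area; members get nothing but EXECUTE.
CREATE ROLE SalesReportExecutors;
ALTER ROLE SalesReportExecutors ADD MEMBER [DOMAIN\SalesReportingGroup];
GO
-- EXECUTE AS OWNER: the proc runs with the owner's permissions,
-- so callers need no direct privs on the underlying table.
CREATE PROCEDURE dbo.ResetSalesStaging
WITH EXECUTE AS OWNER
AS
TRUNCATE TABLE dbo.SalesStaging;  -- would otherwise need ALTER on the table
GO
GRANT EXECUTE ON dbo.ResetSalesStaging TO SalesReportExecutors;
```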

And none of it actually has anything to do with Dynamic SQL. ;)

Now, if we could just get rid of ORMs, we wouldn't need to grant privs to tables or other things. The front end login would have/need only PUBLIC privs and privs to execute certain stored procedures.