Can I pass a tablename as a parameter?

Can I do this in my html?

SelectCommand="Select * from @tablename"

Looks like the bottom got cut off. I'm trying to pass the table name in as a parameter.

You'll need Dynamic SQL to do that.

If you go down that route, beware of SQL Injection.

Something like this:

DECLARE @tablename sysname
SELECT @tablename = 'MyTableName' -- This will be a parameter, just set to a value for testing purposes only
DECLARE @strSQL nvarchar(4000)
SELECT @strSQL = 'SELECT * FROM ' + QuoteName(@tablename)
EXEC (@strSQL)

If you have parameters which are NOT the table name - e.g. you want a WHERE clause that compares YourColumn to @SomeParameter - then use

sp_executesql

rather than just EXEC. MUCH better: query plans are cached and reused for better performance, and in most cases you don't have to worry about SQL Injection.
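A minimal sketch of that approach: the table name still has to be concatenated in (safely, via QUOTENAME), but the comparison value travels as a genuine parameter. The table and column names here are placeholders, not anything from the original poster's schema:

```sql
DECLARE @tablename sysname = N'MyTableName';  -- placeholder; would arrive as a parameter
DECLARE @SomeParameter int = 42;              -- placeholder value

DECLARE @strSQL nvarchar(4000);
SELECT @strSQL = N'SELECT * FROM ' + QUOTENAME(@tablename)
               + N' WHERE YourColumn = @SomeParameter';

-- The value is bound as a real parameter: the plan is cached against the
-- statement text, and @SomeParameter can never be interpreted as SQL.
EXEC sp_executesql @strSQL,
                   N'@SomeParameter int',
                   @SomeParameter = @SomeParameter;
```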


Thank you. I have it working now.

If you used the code as written, you've opened yourself up to SQL Injection. You need to check sys.objects to ensure it's a table name or someone could embed attack code in the parameter being passed (SQL Injection).
DECLARE @tablename sysname;
SELECT @tablename = 'MyTableName'; -- This will be a parameter, just set to a value for testing purposes only

--===== Prevent SQL Injection without giving a potential attacker any clues.
     IF OBJECT_ID(@tablename) IS NULL
        RETURN; -- Not a real object; bail out without telling the caller why

DECLARE @strSQL nvarchar(4000);
SELECT @strSQL = N'SELECT * FROM ' + QUOTENAME(@tablename);
EXEC (@strSQL);

Will SQL Injection attacks survive QuoteName(@tablename)?

Ah... please forgive me. I'm not sure why my mind didn't see the QUOTENAME in your good code even though it's obvious that I copied it, @Kristen. To be sure, though and to explain my comment even in the presence of QUOTENAME...

As you know (because you used it most appropriately), QUOTENAME() is generally accepted as one of the supposedly "airtight" methods for preventing SQL Injection. The problem I have with it is that even a failed attack can give the attacker a clue that it's in use: the response confirms that they at least "got in". It's also "replacement code" behind the scenes. I don't know how it could be done, but just imagine a really clever attacker someday coming up with some magic that works its way around what it does.

Being possibly ridiculously over-cautious in the opinion of many, I'll add the OBJECT_ID() test to the code, and do similar for column names that are provided as parameters. I do understand that OBJECT_ID() is also code, but it's not replacement or concatenation code, and having it check for the existence of the object with no concatenation seems much safer to me; not to mention that it gives potential hackers no information as to whether they succeeded or not.
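A sketch of that belt-and-braces approach, with a matching sys.columns check for a column-name parameter (all object names here are illustrative only):

```sql
DECLARE @tablename  sysname = N'MyTableName';   -- illustrative values only
DECLARE @columnname sysname = N'MyColumnName';

--===== Reject anything that is not a real user table, with no clue as to why
IF OBJECT_ID(@tablename, 'U') IS NULL
    RETURN;

--===== Same idea for the column: it must actually exist on that table
IF NOT EXISTS (SELECT 1
                 FROM sys.columns
                WHERE object_id = OBJECT_ID(@tablename, 'U')
                  AND name      = @columnname)
    RETURN;

DECLARE @strSQL nvarchar(4000);
SELECT @strSQL = N'SELECT ' + QUOTENAME(@columnname)
               + N' FROM '  + QUOTENAME(@tablename);
EXEC (@strSQL);
```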


begs the big question "why are you doing it this way, passing table name as parm?"



You are 100% right that the whole SQL Injection thing needs tackling Head-On, rather than just (actually "in addition to") coding-safe or similar.

I'm sure my earlier "beware of SQL Injection" warning would be fine for a seasoned pro, who, let's assume, would be aware of and avoiding the issue in the first place. For someone who has perhaps never heard of the issue, it's nothing like a big enough warning of the consequences, so it's a good job you have given it more air-time.

An important (to me at least) side effect of that is being able to report (or log to a DBA log file/table) a human-readable error message saying which @TableName was tried but did not exist, and have the sproc fail gracefully. NOT doing that means the dynamic SQL in the sproc will just fail with a runtime error - possibly leaving half-updated rows (i.e. no Transaction <ShockHorror!>), or causing a cascade side-effect if the dynamic SQL did not abort the sproc. (We always use SET XACT_ABORT ON, but in the absence of that maybe a syntax error in some dynamic SQL would be "silent"??)
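A sketch of that graceful-failure idea; the DBA log table (dbo.DBA_ErrorLog) and its columns are invented purely for illustration:

```sql
SET XACT_ABORT ON;  -- any runtime error rolls back the whole transaction

DECLARE @tablename sysname = N'MyTableName';  -- would arrive as a parameter

IF OBJECT_ID(@tablename, 'U') IS NULL
BEGIN
    -- Record the failed attempt somewhere only the DBA can see
    INSERT INTO dbo.DBA_ErrorLog (LoggedAt, Message)
    VALUES (GETDATE(), N'Dynamic SQL: table not found: ' + @tablename);

    -- Give the caller a deliberately uninformative error, then stop
    RAISERROR (N'Invalid request', 16, 1);
    RETURN;
END;
```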

begs the big question "why are you doing it this way, passing table name as parm?"

Because the table that I want to perform the sql query on depends on something going on back in my VB code.

Why? Is the idea of passing a table name via a parameter bad? BTW, I did get this working.

It tends to suggest, to me, that the design of the DB could maybe be improved / changed such that table-name-as-parameter could then be avoided.

But as with most such things "it depends".

For example, if you have one table of Widgets PER Customer (e.g. a table name like [WIDGETS_Customer1]), then that would definitely be bad :-) - you've got to create a new table every time you add a customer, etc. etc.

If there is a choice of Customer / Supplier tables then maybe that would be OK, but personally I would UNION ALL the two tables together in a VIEW (probably with an additional [Source] column indicating Customer / Supplier), and perform the query on the VIEW. Or I might have a single table for "Entities", regardless of whether they are Customers or Suppliers, and then have some sort of Attribute, or sub-table, that indicates whether an Entity is a Supplier or a Customer (or both ...)
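A sketch of such a view, with made-up table and column names:

```sql
CREATE VIEW dbo.AllEntities
AS
SELECT 'Customer' AS [Source],
       CustomerID   AS EntityID,
       CustomerName AS EntityName
  FROM dbo.Customers
UNION ALL
SELECT 'Supplier',
       SupplierID,
       SupplierName
  FROM dbo.Suppliers;
```

Queries then filter on [Source] = 'Customer' or 'Supplier' rather than choosing a table name at runtime.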

If you want to explain a bit about what your circumstances are, then folk here could make suggestions.

Yes, I see what you mean. This could have been avoided.

I am given a complete table of all current employees every month. I call the table CurrentEmployees. My code uses that table. When we get the December table, all I have to do is rename the November table to EmployeesNov16 and bring in the new table as CurrentEmployees.

The old tables never get used again, but I keep them around anyway.

Now, 5 years later, I was asked to write an app that makes some comparisons between any 2 of the old tables. (Gains/Losses). So, I have 2 dropdown lists of the old tables so the user can choose, for example, May2016 and June2016. So I need to do queries based on those table names.

That's why I wanted to pass table names as parameters.

Thanks for the explanation.

My inclination would have been to have a persistent table of Employees (past and present).

Each month I would import the list of current employees into a temporary table and "process" that against the Master Employee table.

Anyone still employed I might update a "LastKnownGood" date (or I might do nothing).

Anyone in the master table marked as "Current" who is not in the current employees list I would Update to change Status to "Departed" and set a "DepartedOn" date (which would be NULL for any current employee).

Anyone not in the master table (or in the master table, but only with a "DepartedOn" date [i.e. a returning employee]) I would Insert as a new record, with status "Current" and an "ArrivedOn" date.

You might do something more fancy for left-but-returned employees, in particular if you want a single Employee Record / ID for such people (rather than a new employee record each time someone returned). In that case you would need transactions for each time they were employed - with an Arrived Date and Departed Date (null whilst they are still currently employed)
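The monthly processing described above might be sketched roughly like this; the staging table and every column name are assumptions, not the poster's actual schema:

```sql
-- dbo.CurrentEmployees_Staging holds this month's imported list

-- 1) Still employed: refresh the LastKnownGood date
UPDATE e
   SET e.LastKnownGood = GETDATE()
  FROM dbo.Employees e
  JOIN dbo.CurrentEmployees_Staging s ON s.EmployeeNo = e.EmployeeNo
 WHERE e.Status = 'Current';

-- 2) In the master as Current but missing from this month's list: departed
UPDATE e
   SET e.Status = 'Departed', e.DepartedOn = GETDATE()
  FROM dbo.Employees e
 WHERE e.Status = 'Current'
   AND NOT EXISTS (SELECT 1 FROM dbo.CurrentEmployees_Staging s
                    WHERE s.EmployeeNo = e.EmployeeNo);

-- 3) Anyone with no Current record (new, or returning): insert a fresh row
INSERT INTO dbo.Employees (EmployeeNo, Status, ArrivedOn)
SELECT s.EmployeeNo, 'Current', GETDATE()
  FROM dbo.CurrentEmployees_Staging s
 WHERE NOT EXISTS (SELECT 1 FROM dbo.Employees e
                    WHERE e.EmployeeNo = s.EmployeeNo
                      AND e.Status = 'Current');
```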

You can then easily report on who was employed 5 years ago, but is no longer; or how many people were employed then / now and how many in continuous employment during that period.
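For example, against the same assumed schema:

```sql
DECLARE @then date = DATEADD(year, -5, GETDATE());

-- Employed five years ago, but since departed
SELECT EmployeeNo
  FROM dbo.Employees
 WHERE ArrivedOn  <= @then
   AND DepartedOn >= @then;  -- non-NULL, so they have in fact left
```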

I don't suppose it matters on the volume you are processing, but over time your system grows at 1x the employee numbers per month, whereas a master employee table only changes / grows by the number of departures and new arrivals, so is more "frugal" in that regard.


I always think in terms of: what if I get hit by a bus (this might actually happen, the way people in Seattle drive these days, and with weed being legal and all), or what if you are on vacation in sunny Cancun (do you really want to be bothered by an email asking you to add a table while you are sipping a nice pina colada by the pool)? You want a process that is sustainable, scalable, and can thrive without too much human interference.