Send Column Name as a Parameter

Hello,

I'm getting super frustrated because I've looked all over and can't find anything that definitively says whether this is possible.

Background: I am uploading a table that has 6500 rows & 153 columns. To do this, I save it as a CSV file, and in the process the leading 0s get stripped.

Out of these 153 columns, 15 or so have leading 0s of varying length.

What I would like to do: write a stored procedure that checks the length of the string and prepends the appropriate number of 0s. (I have written this.)

The issue is that I would like to pass the column name in as a variable, so that I can just run the SP 15 times instead of writing out the code for all 15 columns individually. Is there a way to do this?

When I name the column specifically, it updates roughly 3500 rows. When I change it to a variable and pass the column name in, it updates 0 rows.

This works:

BEGIN

update MyImport
set specificcolname = '0' + specificcolname
where LEN(specificcolname) = 5

update MyImport
set specificcolname = '00' + specificcolname
where LEN(specificcolname) = 4

update MyImport
set specificcolname = '000' + specificcolname
where LEN(specificcolname) = 3

update MyImport
set specificcolname = '0000' + specificcolname
where LEN(specificcolname) = 2

END


This does not work... :frowning:

BEGIN

update MyImport
set @columnname = '0' + @columnname
where LEN(@columnname) = 5

update MyImport
set @columnname = '00' + @columnname
where LEN(@columnname) = 4

update MyImport
set @columnname = '000' + @columnname
where LEN(@columnname) = 3

update MyImport
set @columnname = '0000' + @columnname
where LEN(@columnname) = 2

END
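(To spell out why the variable version changes 0 rows: T-SQL treats @columnname as an ordinary string variable, not as an identifier, so `SET @columnname = '0' + @columnname` just reassigns the variable and never touches a column. The usual workaround is dynamic SQL. Here is a sketch, assuming SQL Server; the procedure name is hypothetical, and QUOTENAME guards the identifier against injection:)

```sql
-- Hypothetical wrapper: pads whichever column is named at run time
-- out to 6 characters. QUOTENAME validates/escapes the identifier.
CREATE PROCEDURE dbo.PadLeadingZeros
    @columnname sysname
AS
BEGIN
    DECLARE @sql nvarchar(max);

    -- Build the UPDATE with the column name spliced in as an identifier,
    -- then execute it. LEN ignores trailing spaces, so the WHERE works
    -- for both nchar and nvarchar columns.
    SET @sql = N'UPDATE MyImport'
             + N' SET ' + QUOTENAME(@columnname)
             + N' = RIGHT(CONCAT(''00000'', RTRIM(' + QUOTENAME(@columnname) + N')), 6)'
             + N' WHERE LEN(' + QUOTENAME(@columnname) + N') < 6;';

    EXEC sys.sp_executesql @sql;
END
```

Then each of the 15 columns is one call, e.g. `EXEC dbo.PadLeadingZeros @columnname = N'specificcolname';`.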

How are you creating the CSV file in the first place? That should not strip out the leading zeroes if the columns are defined correctly when output. If you open the file in Excel, Excel 'evaluates' the column, determines it is a number, and displays it without the leading zeroes.

The source CSV file still has the leading zeroes until you save it from Excel.

My next question: how are you uploading this file?

And finally - if you believe this must be done with a post-update script, then you should perform the update once across all columns instead of an individual update for each column. Something like this:

 Update MyImport
    Set Column1 = right(concat('00000', Column1), 5)
      , Column2 = right(concat('00000', Column2), 5)
      , Column3 = right(concat('00000', Column3), 5);

If you want to limit the update statement (which isn't really necessary):

 Update MyImport
    Set Column1 = right(concat('00000', Column1), 5)
      , Column2 = right(concat('00000', Column2), 5)
      , Column3 = right(concat('00000', Column3), 5)
 Where len(Column1) < 5
    Or len(Column2) < 5
    Or len(Column3) < 5;

Now - if you are importing the data to a staging table, then you just need to add the above code to the code that processes the data into the final table.


Thanks Jeff!

So, actually, all I'm doing is saving it as a .csv file, comma delimited, and I have my regional settings set to use a | (pipe) instead of a comma, so everything is separated by pipes. I tried a few things, like adding a column to define the data type (all letters), but that didn't change anything, and when comparing the 2 files I can still see the 0s were stripped. The source file is an Excel file.

Yes, it is for a staging table, because I have to compare data before bringing it in.

Right now, I'm just doing a bulk insert so there's nothing special, and no working with the data as it's imported.

I'm not set that it has to be done with a post update script, just thought it would be easiest - however, it's seeming to be quite a pain...

Now, I was super excited about your solutions. However, maybe due to the number of rows or something? I'm not sure, but I can't get that to work either.

I can do a test table with 5 rows and that works, but I can't get it to update the 6500 rows. It says it completed on 6500 rows, but then there are still no leading 0s. The column type is nvarchar(6). Even if I just SELECT TOP 5 and CONCAT, it won't add the 0s.

Also, I don't have to do it to all the columns, because there is a lot of different text data in the other columns.

Please post the target table schema definition.

Well, by target, do you mean the one that the staging data is going to be going in to?

I'm not really sure that matters much... I'm just working on the staging table to make sure it is correct? I mean, correct me if I'm wrong, but I don't see the usefulness in having the target...

I did a bulk insert, and then wanted to call a sproc to clearly format my staging table, and then from there, let it sit awhile until people needed to do the comparison.

I meant the staging table.

use Interim
go

IF OBJECT_ID('dbo.CSP_Table', 'U') IS NOT NULL
DROP TABLE dbo.CSP_Table;

go

CREATE TABLE CSP_Table (
col1 int
,col2 int
,col3 nchar(11)
,col4 nchar(5)
,col5 nchar(6)
,col6 nvarchar(50)
,col7 date
,col8 nchar(1)
,col9 nchar(8)
,col10 nvarchar(40)
,col11 nvarchar(30)
,col12 nvarchar(75)
,col13 nvarchar(10)
,col14 nchar(3)
,col15 nchar(2)
,col16 nchar(8)
,col17 nvarchar(40)
,col18 nvarchar(6)
,col19 nchar(2)
,col20 nchar(2)
,col21 nchar(5)
,col22 nvarchar(6)
,col23 nvarchar(2)
,col24 nvarchar(5)
,col25 nvarchar(20)
,col26 nvarchar(40)
,col27 nvarchar(5)
,col28 nchar(1)
,col29 nchar(30)
,col30 nchar(2)
,col31 nvarchar(30)
,col32 nvarchar(6)
,col33 nchar(12)
,col34 nchar(12)
,col35 nchar(12)
,col36 nvarchar(30)
,col37 nvarchar(20)
,col38 nvarchar(6)
,col39 nvarchar(6)
,col40 nvarchar(6)
,col41 nvarchar(6)
,col42 nvarchar(6)
,col43 nvarchar(6)
,col44 nvarchar(6)
,col45 nvarchar(6)
,col46 nvarchar(6)
,col47 nvarchar(6)
,col48 nvarchar(16)
,col49 nchar(2)
,col50 nvarchar(13)
,col51 nvarchar(60)
,col52 nvarchar(60)
,col53 nvarchar(40)
,col54 nvarchar(40)
,col55 nvarchar(30)
,col56 nchar(3)
,col57 nvarchar(30)
,col58 nvarchar(10)
,col59 nchar(3)
,col60 nchar(9)
,col61 nchar(1)
,col62 nchar(1)
,col63 nchar(1)
,col64 nchar(1)
,col65 nchar(1)
,col66 nvarchar(22)
,col67 nchar(6)
,col68 nvarchar(30)
,col69 nvarchar(35)
,col70 nvarchar(30)
,col71 nchar(6)
,col72 nchar(10)
,col73 nchar(6)
,col74 nvarchar(30)
,col75 nchar(6)
,col76 nvarchar(40)
,col77 nchar(6)
,col78 nvarchar(40)
,col79 nchar(6)
,col80 nvarchar(40)
,col81 nchar(6)
,col82 nvarchar(40)
,col83 nchar(6)
,col84 nvarchar(40)
,col85 nchar(8)
,col86 nvarchar(40)
,col87 nchar(4)
,col88 nvarchar(60)
,col89 nvarchar(40)
,col90 nvarchar(40)
,col91 nvarchar(13)
,col92 nvarchar(40)
,col93 nvarchar(3)
,col94 nchar(6)
,col95 nvarchar(6)
,col96 nvarchar(40)
,col97 nvarchar(50)
,col98 nvarchar(30)
,col99 nvarchar(35)
,col100 nvarchar(30)
,col101 nchar(5)
,col102 nchar(2)
,col103 nchar(6)
,col104 nchar(25)
,col105 nchar(6)
,col106 nchar(15)
,col107 nchar(6)
,col108 nvarchar(45)
,col109 nchar(6)
,col110 nvarchar(40)
,col111 nchar(6)
,col112 nvarchar(40)
,col113 nchar(6)
,col114 nvarchar(40)
,col115 nvarchar(12)
,col116 nvarchar(40)
,col117 nvarchar(6)
,col118 nvarchar(40)
,col119 nvarchar(6)
,col120 nvarchar(40)
,col121 nvarchar(6)
,col122 nvarchar(40)
,col123 nchar(14)
,col124 nvarchar(40)
,col125 nvarchar(75)
,col126 nchar(6)
,col127 nvarchar(40)
,col128 nchar(6)
,col129 nvarchar(40)
,col130 nvarchar(6)
,col131 nvarchar(40)
,col132 nvarchar(6)
,col133 nvarchar(40)
,col134 nvarchar(6)
,col135 nvarchar(40)
,col136 nvarchar(6)
,col137 nvarchar(40)
,col138 nvarchar(6)
,col139 nvarchar(40)
,col140 nvarchar(6)
,col141 nvarchar(40)
,col142 nvarchar(6)
,col143 nvarchar(40)
,col144 nvarchar(6)
,col145 nvarchar(40)
,col146 nvarchar(40)
,col147 nchar(6)
,col148 nvarchar(40)
,col149 nvarchar(6)
,col150 nvarchar(25)
,col151 nchar(12)
,col152 nchar(6)
,col153 nchar(2)
)

With this, you can generally assume the columns with 6 are the ones that need the leading 0s

Columns with the 6 in the name, or columns with a data type length of six? Also, please provide some sample CSV data - 2 rows should suffice.

Well, what it seems to have come down to is that I was using nchar instead of nvarchar.

Now, by all accounts, I was pretty sure they were supposed to have the same capabilities apart from the variable length; however, it would seem that this approach does not work with nchar.

@yosiasz I'm being ambiguous on purpose... I can't share the column names or column data, but I think I've got it now... :slight_smile:
just thought i'd post the resolution for the next person that reads... :slight_smile:

Sometimes talking things out and sharing schema etc helps in finding the issue.
I understand your need to be ambiguous, no problem - we will just give you ambiguous answers :rofl:. Glad it all worked out!

As soon as I saw the data type, I was going to mention that this was the problem. Using CHAR/NCHAR pads the data to fill the full size. So, given a number like 123, what is stored in an nchar(6) column is '123   ' (with trailing spaces). Prepending the zeroes gives you '00000123   ', and taking the right 6 characters returns '123   ' - the same data you started with.
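The fix follows directly: strip the nchar padding with RTRIM before prepending. A sketch (column name is a placeholder for one of the nchar(6) columns above):

```sql
-- With nchar(6), '123' is stored as '123   ' (three trailing spaces), so:
--   CONCAT('00000', Column1)                 -> '00000123   '
--   RIGHT('00000123   ', 6)                  -> '123   '   (no change!)
-- RTRIM first, and the padding lands where intended:
--   RIGHT(CONCAT('00000', RTRIM(Column1)), 6) -> '000123'
-- LEN ignores trailing spaces, so the WHERE still filters correctly.
UPDATE MyImport
   SET Column1 = RIGHT(CONCAT('00000', RTRIM(Column1)), 6)
 WHERE LEN(Column1) < 6;
```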

Going back to the original problem... it isn't clear whether the source is Excel, or whether you are using Excel in the middle of the process - which will truncate leading zeroes, because that is what Excel does...

Now - as far as moving from a staging table to the final table: loading the data into the destination table would normally involve an insert/update (merge) process. It is in that process that you can add leading characters, trim excess spaces, and convert to date/int/numeric/etc...
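A minimal sketch of that staging-to-final step, assuming SQL Server's MERGE; the final table name, the key column, and the choice of col38 as a padded column are placeholders, since the real names weren't shared:

```sql
-- Clean the data in the SELECT that feeds the merge, so the final
-- table only ever receives corrected values.
MERGE dbo.FinalTable AS tgt
USING (
    SELECT col1,
           -- pad to 6 digits during the transfer (RTRIM handles nchar padding)
           RIGHT(CONCAT('00000', RTRIM(col38)), 6) AS col38
    FROM dbo.CSP_Table
) AS src
    ON tgt.col1 = src.col1
WHEN MATCHED THEN
    UPDATE SET tgt.col38 = src.col38
WHEN NOT MATCHED THEN
    INSERT (col1, col38)
    VALUES (src.col1, src.col38);
```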


Ha! Now that you say that, it makes perfect sense!

With the merge process I am able to do exactly what I need to do. I just prefer not to do the table updates within the merge, so that if anything funky ever does happen, I can narrow down where it's happening... :slight_smile:

Thank you so much!