As per the above script, the data is stored in a single column as a delimited string (comma-separated, semicolon-separated, or some other delimiter). So first I need to find out which row contains the header, and based on that header (ID, name, sal) I need to create the table schema dynamically. Another source may have more columns, so the table has to be created automatically from whatever header that row contains.
Yes boss, we need to find that row, and that row needs to be turned into a new table. Please help me with a way to achieve this; I have tried but haven't reached the target yet.
Yes, but the columns might change. Either way, based on that row's data, how do I create the table dynamically? Please help with that; at minimum I need to create a table based on that row.
I need to build on my existing code. The task is to find the header row and create a table schema from that header. Just assume, as Harish suggested, that we find one fixed header, e.g. the row where the column is like '%id%'. From that row, how do I build the dynamic CREATE TABLE schema? Please help.
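To make the setup concrete, here is a rough sketch of the kind of staging data I mean (the table name [NewTable1] and the column [18/10/2020 date] are the same ones used in the code further down; the sample rows are only illustrative):

-- Illustrative only: every line of the incoming file, header or data,
-- ends up in one delimited text column.
CREATE TABLE NewTable1 ([18/10/2020 date] nvarchar(max));

INSERT INTO NewTable1 ([18/10/2020 date])
VALUES (N'some banner text from the file'),
       (N'ID,name,sal'),        -- the header row that should drive the schema
       (N'1,Ravi,1000'),
       (N'2,Anil,2000');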
That will fail when it finds data that happens to contain "id" but is not really a table schema definition. For example:
Idol, identification, bid, rid, did, android, grid, etc.
If you can, change the way your table is populated to include a second column named type, with three possible values: ddl, dml, other.
As it is, your table is not well designed.
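A rough sketch of what that could look like (the column name and the tagging are only placeholders; ideally whatever loads the table would set the flag itself):

-- Hypothetical redesign: a discriminator column so the header row can be
-- found without guessing from its content.
ALTER TABLE NewTable1 ADD [type] varchar(10) NULL;   -- 'ddl', 'dml' or 'other'

-- Whatever process populates NewTable1 would tag each row as it loads it.
-- Finding the header (ddl) row then no longer depends on any column name:
SELECT [18/10/2020 date]
FROM NewTable1
WHERE [type] = 'ddl';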
The other issue is that there is no way of finding out the table columns' data types, unless this is just a staging table, in which case you could use nvarchar(max) for all of them. You really need to reconsider your design.
declare @query nvarchar(max);   -- will hold the generated CREATE TABLE statement

;with src
as
(
-- split the header row into its column names and give each one nvarchar(max)
SELECT x.Value + ' nvarchar(max)' as jumbo
FROM [NewTable1]
cross apply STRING_SPLIT([18/10/2020 date],',') x
where [18/10/2020 date] like 'ID%,%'
)
select distinct @query = 'create table zimza(' +
STUFF(( SELECT ', ' + jumbo
FROM src
FOR XML PATH(''),TYPE)
.value('.','NVARCHAR(MAX)'),1,2,'')
+ ')'
from src;
select @query = 'if OBJECT_ID(''zimza'') is null ' + @query;
exec(@query)
STRING_SPLIT() isn't the right thing to use here because the order of the values it returns is NOT guaranteed. You'll probably need to use something like DelimitedSplitN4K instead, which does return the left-to-right ordinal of the elements that have been split out of the string.
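If you don't have DelimitedSplitN4K installed, here's a quick stand-in I'd sketch out (my own naming, not the real article code): it returns the same ItemNumber and Item columns, so it could be swapped into the query below. Being a recursive CTE it's slow and limited to about 100 elements by the default MAXRECURSION setting, so treat it as illustration only.

-- Sketch of an ordinal-preserving splitter with the same output columns
-- as DelimitedSplitN4K (ItemNumber, Item). Illustration only.
CREATE FUNCTION dbo.DelimitedSplit_Sketch
(
    @pString    nvarchar(4000),
    @pDelimiter nchar(1)
)
RETURNS TABLE
AS
RETURN
    WITH split AS
    (
        -- anchor: position of the first delimiter (the string is padded with
        -- one trailing delimiter so the last element is handled the same way)
        SELECT ItemNumber = 1,
               StartPos   = 1,
               EndPos     = CHARINDEX(@pDelimiter, @pString + @pDelimiter)
        UNION ALL
        -- recurse: walk to the next delimiter
        SELECT ItemNumber + 1,
               EndPos + 1,
               CHARINDEX(@pDelimiter, @pString + @pDelimiter, EndPos + 1)
        FROM split
        WHERE EndPos < LEN(@pString) + 1
    )
    SELECT ItemNumber,
           Item = SUBSTRING(@pString, StartPos, EndPos - StartPos)
    FROM split;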
I'm actually working on this problem over on SSC where the OP also posted it.
This really looks like an exercise in futility to me. Very bad design.
declare @query nvarchar(max);
;with src
as
(
-- DelimitedSplitN4K also returns ItemNumber, so the original column order can be kept
SELECT x.Item + ' nvarchar(max)' as jumbo, x.ItemNumber
FROM [NewTable1]
--cross apply STRING_SPLIT([18/10/2020 date],',') x
cross apply DelimitedSplitN4K([18/10/2020 date],',') x
where [18/10/2020 date] like 'ID%,%'
)
select distinct @query = 'create table zimza(' +
STUFF(( SELECT ', ' + jumbo
FROM src
order by ItemNumber   -- keep the columns in their original left-to-right order
FOR XML PATH(''),TYPE)
.value('.','NVARCHAR(MAX)'),1,2,'')
+ ')'
from src;
select @query = 'if OBJECT_ID(''zimza'') is null ' + @query;
exec(@query)
select * from zimza
Heh... you didn't even try running your own code. Here's the code you posted at the start of this thread...
Here are the errors that are generated when I simply copy'n'paste it into SSMS to run it...
Msg 8152, Level 16, State 13, Line 10
String or binary data would be truncated.
The statement has been terminated.
(1 row affected)
Msg 8152, Level 16, State 13, Line 14
String or binary data would be truncated.
The statement has been terminated.
(1 row affected)
Msg 8152, Level 16, State 13, Line 18
String or binary data would be truncated.
The statement has been terminated.
Msg 8152, Level 16, State 13, Line 20
String or binary data would be truncated.
The statement has been terminated.
Msg 8152, Level 16, State 13, Line 22
String or binary data would be truncated.
The statement has been terminated.
Msg 8152, Level 16, State 13, Line 24
String or binary data would be truncated.
The statement has been terminated.
Msg 8152, Level 16, State 13, Line 26
String or binary data would be truncated.
The statement has been terminated.
Heh... Ok... according to the OP, the column names can change. I wonder what would happen to your code if they changed the "ID" column name to "PK", just as an example. BOOM!
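Just to illustrate the point: if you can't rely on "ID" always being the first column name, the filter has to key on something else. A quick and admittedly imperfect heuristic (my sketch, not a fix) would be to treat the only delimited row with no digits in it as the header:

-- Hypothetical heuristic: the header line is delimited but contains no digits,
-- while real data rows almost certainly do.
SELECT TOP (1) [18/10/2020 date]
FROM NewTable1
WHERE [18/10/2020 date] LIKE '%,%'            -- looks like a delimited line
  AND [18/10/2020 date] NOT LIKE '%[0-9]%';   -- and contains no digits
-- This still breaks if a column name contains a digit (e.g. 'addr1') or if
-- more than one row qualifies, which is why a proper row-type flag in the
-- feed is the better answer.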
Totally agreed, but it's very likely the OP has no control over the data he receives. I'm working on this very same problem over on SSC and I'm also doing the imports from multiple files of ~32K rows because I also wanted to make sure that it would perform well. I'm going to take it one step further and see if I can determine the actual datatypes for the columns instead of doing what the import wizard does (which is terrible).
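The rough idea I'm toying with for the type detection (nothing final, and the staging table/column names below are made up) is to TRY_CONVERT every staged value and take the narrowest type that all of them survive:

-- Sketch only: for one staged column, suggest the most specific data type
-- that every non-null value converts to. Repeat per column.
SELECT suggested_type =
       CASE
           WHEN COUNT(*) = COUNT(TRY_CONVERT(int, col1))           THEN 'int'
           WHEN COUNT(*) = COUNT(TRY_CONVERT(decimal(18,4), col1)) THEN 'decimal(18,4)'
           WHEN COUNT(*) = COUNT(TRY_CONVERT(date, col1))          THEN 'date'   -- sensitive to DATEFORMAT/language
           ELSE 'nvarchar(max)'                                                  -- fall back
       END
FROM StagedData              -- hypothetical staging table
WHERE col1 IS NOT NULL;      -- COUNT(expr) ignores NULLs, so compare like with like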
This isn't the first time either of us has had to deal with wonky stuff other people put in files. It won't be the last time the OP has to deal with it, either.
The data types in the ETL tables or in the final destination tables?
Either way, may the force be with you. I couldn't see myself doing what you are attempting to do.
Thank you so much for your help. That helps me a lot.
Also, please help me pass the table name and column name as variables instead of hard-coding them, because if I have two or more source tables, then each table's header row should create its own new table schema.
I also need to handle the case where the header column name changes, i.e. some column other than ID comes first.
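Roughly, I imagine something like this (just a sketch to show what I mean; the table, column and pattern values are placeholders):

-- Hypothetical wrapper (sketch only): the source table, the delimited column,
-- the pattern that identifies the header row and the table to create are all
-- parameters instead of hard-coded names.
DECLARE @SourceTable   sysname       = N'NewTable1',
        @SourceColumn  sysname       = N'18/10/2020 date',
        @HeaderPattern nvarchar(200) = N'ID%,%',
        @TargetTable   sysname       = N'zimza',
        @sql           nvarchar(max);

SELECT @sql = N'
declare @query nvarchar(max);
with src as
(
    SELECT x.Item + '' nvarchar(max)'' as jumbo, x.ItemNumber
    FROM ' + QUOTENAME(@SourceTable) + N'
    cross apply dbo.DelimitedSplitN4K(' + QUOTENAME(@SourceColumn) + N', '','') x
    where ' + QUOTENAME(@SourceColumn) + N' like @HeaderPattern
)
select distinct @query = ''create table ' + QUOTENAME(@TargetTable) + N'('' +
    STUFF((SELECT '', '' + jumbo FROM src ORDER BY ItemNumber
           FOR XML PATH(''''), TYPE).value(''.'', ''NVARCHAR(MAX)''), 1, 2, '''') + '')''
from src;
if OBJECT_ID(' + QUOTENAME(@TargetTable, N'''') + N') is null exec(@query);';

-- identifiers are spliced in with QUOTENAME; the header pattern travels as a real parameter
EXEC sp_executesql @sql, N'@HeaderPattern nvarchar(200)', @HeaderPattern = @HeaderPattern;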