Merge multiple .sql files into one single file

Hi,

Does anyone have a program or utility that can merge multiple .sql files in a Windows directory into one single executable .sql file for fast deployment?

Scenario....

I have multiple .sql files: e.g.

file1.sql
file2.sql
file3.sql
fileN.sql

which I would like to merge into one single file, e.g.
Files.sql

Thanks,
Alex

We use string and gum!!

I make a directory listing (it's all in a batch file, so more-or-less completely automated) using:

DIR ..\*.* /S /od /b >MyFileList.TXT

I then manually edit that to remove all the unwanted files dated before a given cutoff date/time (i.e. the start date for the new release).

I then pass that file as a parameter to a BATCH file that concatenates all the files using

FOR /F %%z IN (MyFileList.TXT) DO echo %%z & type %%z >>MyReleaseFile.SQL

My string and gum creates a "Marker File" in the folder with today's time and date. The filename is

__20170209__.___

so they stick out like a sore thumb. They are included in the DIR list (which is sorted by date), so it is easy to find a previous release's "marker file" and delete all the older filenames from MyFileList.TXT.
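(Stamping the marker could be automated too - a one-liner sketch, assuming PowerShell is to hand; the filename pattern is just the one above:)

# Create an empty, date-stamped marker file, e.g. __20170209__.___
New-Item -ItemType File -Name ("__{0:yyyyMMdd}__.___" -f (Get-Date)) | Out-Null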

We then run the MyReleaseFile.SQL against a QA database. We find that we get some "Cannot create XXX because it wants YYY" errors, and we manually rearrange the sequence of code to take care of that (whilst we might have written them in the right order, any subsequent editing will have changed the file date).

All our SProcs are in this style:

IF OBJECT_ID('dbo.MySproc', 'P') IS NULL
    EXEC ('CREATE PROCEDURE dbo.MySproc AS PRINT ''Dummy for MySproc''')
GO

ALTER PROCEDURE dbo.MySproc
AS
-- the full-fat code
...

so all we have to do is copy the Create Dummy higher up in the code. (I suppose automatically outputting a complete set of Create Dummy statements at the top of the file would solve the problem completely!)

We post-process the MyReleaseFile.SQL in a number of other ways.

Any

/* WITH ENCRYPTION */

is "un-commented"

We minify the file - remove all extraneous space, comments, etc. I'm not sure if it improves performance, but it certainly improves upload time to the remote server and run time, which can reduce rollout time.
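For what it's worth, a minimal PowerShell sketch of that kind of post-processing - the file names and rules here are placeholders, and a real minifier would need to avoid touching string literals:

# Un-comment WITH ENCRYPTION, then crudely minify: drop line comments
# and blank lines. NB: this will mangle string literals containing "--",
# so a real version needs a proper tokenizer.
$sql = Get-Content 'MyReleaseFile.SQL' -Raw
$sql = $sql -replace '/\*\s*WITH ENCRYPTION\s*\*/', 'WITH ENCRYPTION'
$lines = $sql -split "`r?`n" |
    ForEach-Object { ($_ -replace '--.*$', '').TrimEnd() } |
    Where-Object { $_ -ne '' }
Set-Content 'MyReleaseFile.min.SQL' -Value $lines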

I wouldn't bother - since you already have the order defined, you just need a method for looping over the files and executing them.

ScriptRunner works well - but for a home-grown solution, PowerShell would be very simple to use to loop over the files in a specific order. You then use Invoke-SqlCmd in PowerShell to connect to and execute the scripts. If you need more control, it is easy enough to check the return status and stop the process...
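Something along these lines would do it - a rough sketch, where the server, database, and folder names are placeholders:

# Run every .sql file in the folder in name order, stopping at the first failure.
$files = Get-ChildItem -Path 'C:\Deploy\Release' -Filter *.sql | Sort-Object Name
foreach ($file in $files) {
    Write-Host "Running $($file.Name)"
    try {
        Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'MyDb' `
                      -InputFile $file.FullName -ErrorAction Stop
    }
    catch {
        Write-Error "Failed in $($file.Name): $_"
        break   # bail out so later scripts don't run against a broken state
    }
}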

You could even set up scripts to back up all of the objects being affected prior to running your scripts - and in the worst case, roll back to the backup you took just prior to starting the process.

We combine the individual files into a single "rollout file" so that the original, underlying files can continue to change as necessary, but we have a "static snapshot" by virtue of the single, composite file - so we can apply that to TEST and, if the QA goes OK, we can then apply it to PRODUCTION or to all Clients' databases.

So, for me, I would prefer not to execute the actual files, individually, but rather to concatenate them into a single file - and execute that.

If that works for you, fine - but I find it interesting that a folder with individual files is somehow harder to work with than a single file concatenated from those individual files. A new deployment folder date-stamped with the deployment date, with the files added to that folder, works just as well as a single file and is easier to manage (in my opinion).

I think a single file can get too large to work with very well... and when you have issues with script 263, restarting from that point forward in a single file is much harder to manage.

Yes, good point. I can see how a folder full of individual files representing "Release 14-Feb-2017" would work the same as one big file, with pros and cons to each.

We post-process the SQL a fair bit:

We move "sections of code" to enforce execution sequence. For example we might move a "CREATE PROCEDURE" shell statement to the top to ensure that it existed when a subsequent, different, SProc made an EXEC to it. It would be possible (in pretty much all cases) to change the execution order of the individual files to achieve the same, but if I were to move the whole "file" then something else might fail, whereas just moving the SHELL Create solves that.

We also remove comments and spaces, and we have markers in the code that require changing on rollout - such as comments indicating where DEBUG code is to be removed for Production. I could do that on all the individual files too, but, being lazy, I've just written a find-and-replace macro for my editor rather than a function capable of acting on multiple files.

All that said : I have been toying with having a Multiple SQL File Executor program which could do something intelligent with any errors that are raised. Currently it can be a pain to sort out an error; we do have PRINT statements top & bottom of each file, so they tell me which SProc / Trigger / View etc. was executing at the time of the error, but by then the rest of the script has continued to run - sometimes causing havoc. My thinking was that if I have a list of individual files called USP_MySProc, UTR_MyTrigger and UV_MyView (or whatever naming convention is appropriate) my executor could make an initial pass of the file-list and create SHELL versions of any object that did not already exist, so that none of the real code found anything missing. I would then just need to make sure that any SHELL routine had been replaced by something real.
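A rough sketch of that first pass, assuming the one-object-per-file naming convention above and handling only the USP_ files for brevity (triggers and views would each need their own shell syntax; server, database, and file names are placeholders):

# Create a shell version of each listed SProc that doesn't exist yet,
# so that none of the real code finds anything missing.
$db = @{ ServerInstance = 'MyServer'; Database = 'MyDb' }
Get-Content 'MyFileList.TXT' |
    ForEach-Object { [IO.Path]::GetFileNameWithoutExtension($_) } |
    Where-Object { $_ -like 'USP_*' } |
    ForEach-Object {
        Invoke-Sqlcmd @db -Query @"
IF OBJECT_ID('dbo.$_', 'P') IS NULL
    EXEC ('CREATE PROCEDURE dbo.$_ AS PRINT ''Shell for $_''');
"@
    }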

Maybe I could generate the file-list file based on Dependencies in SQL?
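If the objects already exist in a reference database, something like this could at least report which object references which - though sys.sql_expression_dependencies only knows about objects that already exist, so it can't order brand-new code (server and database names are placeholders):

# List "X references Y" pairs; the file list could then be sorted so that
# referenced objects come before the objects that call them.
Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'MyDb' -Query @"
SELECT referencing = OBJECT_NAME(d.referencing_id),
       referenced  = d.referenced_entity_name
FROM   sys.sql_expression_dependencies AS d
WHERE  d.referenced_id IS NOT NULL
ORDER  BY referenced, referencing;
"@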

One of the problems with executing a DDL script generated by SSMS is that if something goes wrong, a hailstorm of side effects ensues, which most probably results in a restore from backup. Redgate's script generator gets around this by using (from memory - the details may differ, but the principle is the same) a #TEMP table to store whether there has been an error, preventing subsequent code from running. An executor could bail out at the first error more easily, as well as logging how many rows were updated at each stage (and comparing that against a known-good installation), and so on.

Type *.sql > OneFile.sql
It will merge your multiple .sql files into one.
For further information, see
https://blog.sqlauthority.com/2014/10/19/sql-server-combined-multiple-sql-files-into-one-single-file/

I use the following folder structure:
005.tables
010.views
015.sprocs

Within those folders I use files with a similar prefix:
005.customer
010.customeraddress

etc.

I use PowerShell to iterate and deploy.
My deployment folder is re-runnable; by the time I get through dev, QA, and stage, all the kinks have been worked out and it goes to prod smoothly.
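With numeric prefixes like those, run order comes free from a lexical sort, so the whole tree can be deployed with something like this (folder, server, and database names are placeholders):

# Deploy every .sql file under the release folder; the 005/010/015 prefixes
# on folders and files mean a plain sort on the full path gives run order.
Get-ChildItem -Path 'C:\Deploy\Release' -Filter *.sql -Recurse |
    Sort-Object FullName |
    ForEach-Object {
        Write-Host "Running $($_.FullName)"
        # -ErrorAction Stop makes the first failure halt the whole run
        Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'MyDb' `
                      -InputFile $_.FullName -ErrorAction Stop
    }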