Incremental bulk insert into a table with auto-generated "created date" and "modified date" columns

Right now I am doing a daily bulk insert of a CSV file into a SQL Server database using the following script:

Truncate table [dbo].[Table];
GO

BULK INSERT [dbo].[Table]
FROM 'C:\BulkImport\Table.csv'
WITH (FIRSTROW = 2,
    FIELDTERMINATOR = ';',
    ROWTERMINATOR='\n');    
GO

USE DB;  
GO  
ALTER DATABASE DB  
SET RECOVERY SIMPLE;  
GO  
DBCC SHRINKFILE (dbo_Log, 1);  
GO  
ALTER DATABASE DB  
SET RECOVERY FULL;  
GO

As you can see, I am first truncating the table, then inserting the data, and finally shrinking the log file for the database. The CSV file I am inserting is quite large (approximately 2 GB), and after each insert the database size kept increasing dramatically; shrinking the log file solves this issue, but I'm not sure whether this is good practice (I am fairly new to the SQL world).

Now I want to achieve 2 things:

  • I realize that truncating the whole table and re-inserting all of the data from the CSV file is not good practice, so now I want to bulk insert the file into the database incrementally, only for new or modified data (part of the data in the file changes retroactively).
  • I want created / modified date columns to be populated automatically when data is inserted or updated (a rough sketch of what I imagine follows below).
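
For the second point, this is roughly the kind of table layout I imagine (the key and value columns here are just placeholders for my real schema, and I have not tested this):

CREATE TABLE dbo.[Table]
(
    Id           int           NOT NULL PRIMARY KEY,
    SomeValue    nvarchar(100) NULL,
    CreatedDate  datetime2(0)  NOT NULL CONSTRAINT DF_Table_CreatedDate  DEFAULT SYSDATETIME(),
    ModifiedDate datetime2(0)  NOT NULL CONSTRAINT DF_Table_ModifiedDate DEFAULT SYSDATETIME()
);

As far as I understand, the DEFAULT constraints only cover inserts; keeping ModifiedDate current on updates would have to be done explicitly in the update statement (or with a trigger), which is part of what I am asking about.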

You definitely do not want to set recovery to SIMPLE, as you will lose point-in-time recovery.
I have a couple of systems where I have a separate database for staging tables which is always in SIMPLE recovery, i.e. the process is:
1. Load the data into the staging table in the staging database with something like BATCHSIZE = 50000 to keep the log file under control.
2. Update the main table, in the main database, from the staging table (see the sketch below). The main database is always in FULL recovery.
3. Truncate the staging table.

This also keeps the log backups of the main database at a reasonable size.
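
A rough sketch of the pattern, assuming a staging database called StagingDB, a main database called DB, and a single key column Id (all of these names are placeholders for the real schema):

-- 1. Load the file into the staging table (the staging database is in SIMPLE recovery)
BULK INSERT StagingDB.dbo.Table_Stage
FROM 'C:\BulkImport\Table.csv'
WITH (FIRSTROW = 2,
      FIELDTERMINATOR = ';',
      ROWTERMINATOR = '\n',
      BATCHSIZE = 50000);

-- 2. Update the main table from the staging table (the main database stays in FULL recovery)
MERGE DB.dbo.[Table] AS tgt
USING StagingDB.dbo.Table_Stage AS src
    ON tgt.Id = src.Id
WHEN MATCHED AND tgt.SomeValue <> src.SomeValue   -- add NULL handling here if the column is nullable
    THEN UPDATE SET tgt.SomeValue    = src.SomeValue,
                    tgt.ModifiedDate = SYSDATETIME()
WHEN NOT MATCHED BY TARGET
    THEN INSERT (Id, SomeValue)                    -- CreatedDate / ModifiedDate come from their DEFAULTs
         VALUES (src.Id, src.SomeValue);

-- 3. Truncate the staging table
TRUNCATE TABLE StagingDB.dbo.Table_Stage;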

PS: If you want to keep track of changes in the main table, I would be inclined to make it versioned.
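
For example, turning an existing table into a system-versioned (temporal) table looks roughly like this (SQL Server 2016 or later; the history table name here is assumed, and the table needs a primary key):

ALTER TABLE dbo.[Table] ADD
    ValidFrom datetime2 GENERATED ALWAYS AS ROW START HIDDEN
        CONSTRAINT DF_Table_ValidFrom DEFAULT SYSUTCDATETIME(),
    ValidTo   datetime2 GENERATED ALWAYS AS ROW END HIDDEN
        CONSTRAINT DF_Table_ValidTo   DEFAULT CONVERT(datetime2, '9999-12-31 23:59:59.9999999'),
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo);

ALTER TABLE dbo.[Table]
    SET (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.Table_History));

After that, every update or delete keeps the old row version in dbo.Table_History automatically.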


I think you will like the DBA Fundamentals series from Brent Ozar on YouTube.


You're breaking the log backup chain by going to the SIMPLE recovery model. Like @Ifor said, that breaks point-in-time recovery from that point on, and it will continue to be broken until you take either a full backup or a differential backup.

And, it's totally unnecessary.

I'll also say that having a 2-5 GB log file isn't a bad thing. What can hurt you is a badly chosen file growth setting, because growing the log in lots of tiny increments creates a lot of very small VLFs (Virtual Log Files) within the log file, which slows everything down, including backups and restores.
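
If you want to see how many VLFs your log file currently has, something like this should work on reasonably recent versions (roughly SQL Server 2016 SP2 / 2017 and later; on older versions, DBCC LOGINFO returns one row per VLF):

-- One row per VLF in the current database's log file; a very high count (thousands) is the warning sign
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID());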

Because you're truncating the table before loading it, you can do your bulk inserts as a minimally logged bulk load that won't cause your log file to explode. I also think that truncating the table is asking for trouble: what happens if your bulk insert fails for some reason? You're left with an empty table, and all processes that use it, even for older data, are now totally broken.
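
For the minimally logged load itself, the usual ingredients are an empty (or heap) target, the TABLOCK hint, and the BULK_LOGGED (rather than SIMPLE) recovery model, so the log backup chain stays intact. Something along these lines, reusing the file path from the original post:

BULK INSERT dbo.[Table]
FROM 'C:\BulkImport\Table.csv'
WITH (FIRSTROW = 2,
      FIELDTERMINATOR = ';',
      ROWTERMINATOR = '\n',
      TABLOCK);   -- the table lock is one of the prerequisites for minimal logging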

All that being said, if you want help on this, you need to post the complete CREATE TABLE statement, including ALL indexes and FK constraints, AND the command that you're using to do the bulk insert.

You can get around the truncate issue by including the TRUNCATE and the BULK INSERT in a transaction. If you need to, you can roll back the TRUNCATE.
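
A sketch of that, reusing the table and file from the original post:

BEGIN TRY
    BEGIN TRANSACTION;

    TRUNCATE TABLE dbo.[Table];          -- TRUNCATE is transactional and can be rolled back

    BULK INSERT dbo.[Table]
    FROM 'C:\BulkImport\Table.csv'
    WITH (FIRSTROW = 2,
          FIELDTERMINATOR = ';',
          ROWTERMINATOR = '\n');

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;            -- the truncate is undone and the old data is back
    THROW;
END CATCH;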

That doesn't, however, get around the issue that the table cannot be used until it's totally rebuilt. The "Swap'n'Drop" method would.
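
One way to do the swap, sketched with hypothetical table names: load the file into a brand-new, identically structured table, then swap the names inside a short transaction so readers only ever see a fully loaded table.

-- Load the full file into a freshly created copy of the table
BULK INSERT dbo.Table_New
FROM 'C:\BulkImport\Table.csv'
WITH (FIRSTROW = 2, FIELDTERMINATOR = ';', ROWTERMINATOR = '\n');

-- Swap the new table in and drop the old one
BEGIN TRANSACTION;
    EXEC sp_rename 'dbo.Table',     'Table_Old';
    EXEC sp_rename 'dbo.Table_New', 'Table';
COMMIT TRANSACTION;

DROP TABLE dbo.Table_Old;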