Script Formulation

Hi,

I have the following DELETE statements. I want to create a script in SQL Server that runs them all in one go, and then writes a log of the results (this can be a text file which can easily be exported for record-keeping).
The log file should contain:

TableName
Total Rows Deleted
Record Time of Deletion

E.g;

dbo.PatientHistory
25343 records deleted
12/08/2015 14:28

The statements are below; the log should contain one record like this for each of them.

Also, if the script fails for any reason and is stopped or terminated, it should write at the bottom of the output:

Script Successful OR Script Failed at 'datetime' at Delete Statement XYZ


DELETE dbo.PatientHistory
FROM dbo.PatientHistory INNER JOIN ActivePatients
ON dbo.PatientHistory.PatientGuidDigest = ActivePatients.PatientGuidDigest 


DELETE dbo.Event
FROM dbo.Event INNER JOIN ActivePatients
ON dbo.Event.PatientGuidDigest = ActivePatients.PatientGuidDigest


DELETE dbo.Problem
FROM dbo.Problem INNER JOIN ActivePatients
ON dbo.Problem.PatientGuidDigest = ActivePatients.PatientGuidDigest


DELETE dbo.Diary
FROM dbo.Diary INNER JOIN ActivePatients
ON dbo.Diary.PatientGuidDigest = ActivePatients.PatientGuidDigest


DELETE dbo.MedicationIssue
FROM dbo.MedicationIssue INNER JOIN ActivePatients
ON dbo.MedicationIssue.PatientGuidDigest = ActivePatients.PatientGuidDigest

Thank you for your time.

After each DELETE, the @@ROWCOUNT system function will contain the number of affected (deleted) rows. You can capture these as you go and write your audit record at the end.
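A minimal sketch of that idea for the first two tables; the `#DeleteLog` temp table and its column names are my own illustration, not part of your schema:

```sql
-- Illustrative temp table to collect the results (names are assumptions)
CREATE TABLE #DeleteLog
(
    TableName   sysname,
    RowsDeleted int,
    DeletedAt   datetime
);

DELETE dbo.PatientHistory
FROM dbo.PatientHistory
     INNER JOIN ActivePatients
         ON dbo.PatientHistory.PatientGuidDigest = ActivePatients.PatientGuidDigest;

-- @@ROWCOUNT must be read immediately after the DELETE,
-- before any other statement resets it
INSERT INTO #DeleteLog (TableName, RowsDeleted, DeletedAt)
VALUES ('dbo.PatientHistory', @@ROWCOUNT, GETDATE());

DELETE dbo.Event
FROM dbo.Event
     INNER JOIN ActivePatients
         ON dbo.Event.PatientGuidDigest = ActivePatients.PatientGuidDigest;

INSERT INTO #DeleteLog (TableName, RowsDeleted, DeletedAt)
VALUES ('dbo.Event', @@ROWCOUNT, GETDATE());

-- ... repeat the same pattern for the remaining tables, then:
SELECT * FROM #DeleteLog;
```

The key point is that @@ROWCOUNT is reset by almost every statement, so it has to be captured immediately after each DELETE.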

[quote="dr223, post:1, topic:2717"]
I want to create a script in SQL Server to run all these statements in one go[/quote]

Presumably on a schedule - once a day, or something like that?

You could put the script in a SQL Agent Job (and give it a schedule), but personally I think it is better to put all the deletes in a Stored Procedure and have the SQL Agent Job contain just a single line that executes that procedure:

EXECUTE PurgePatientRecords

Your procedure ("PurgePatientRecords" in my example) can then delete the records from each table in turn and, as gbritton says, use @@ROWCOUNT to record and log the outcome.
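A sketch of what such a procedure might look like, with TRY/CATCH so a failure is logged too. The procedure name, the `dbo.ProcessLog` table, and its columns are all assumptions for illustration:

```sql
CREATE PROCEDURE dbo.PurgePatientRecords
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @Rows int;

    BEGIN TRY
        DELETE dbo.PatientHistory
        FROM dbo.PatientHistory
             INNER JOIN ActivePatients
                 ON dbo.PatientHistory.PatientGuidDigest = ActivePatients.PatientGuidDigest;
        SET @Rows = @@ROWCOUNT;

        -- dbo.ProcessLog is an assumed log table, not part of your schema
        INSERT INTO dbo.ProcessLog (ProcessName, StepName, RowsDeleted, LoggedAt)
        VALUES ('PurgePatientRecords', 'dbo.PatientHistory', @Rows, GETDATE());

        -- ... same DELETE + log pattern for dbo.Event, dbo.Problem,
        --     dbo.Diary and dbo.MedicationIssue ...

        INSERT INTO dbo.ProcessLog (ProcessName, StepName, RowsDeleted, LoggedAt)
        VALUES ('PurgePatientRecords', 'Script Successful', 0, GETDATE());
    END TRY
    BEGIN CATCH
        -- Record which step failed and why, then stop
        INSERT INTO dbo.ProcessLog (ProcessName, StepName, RowsDeleted, LoggedAt)
        VALUES ('PurgePatientRecords',
                'Script Failed: ' + ERROR_MESSAGE(), 0, GETDATE());
    END CATCH
END
```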

Personally I much prefer to log that information to a database table - I can then see, day by day, what deletions have occurred, rather than having a text file that I have to read / parse etc. You can have SQL send you an email with "today's results" if you like (I would still record the information in the log table).

We have one log table for such things, used by all such processes; it has columns for:

Log Record ID (IDENTITY - i.e. 1, 2, 3...)
Date/Time
Number of rows processed
Number of Inserts
Number of Updates
Number of Deletes
Name of process
Name of step (i.e. if the Process has multiple steps / tables etc. that it processes)
Notes
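A table with those columns might be declared something like this; the names and types are my guesses, not an established schema:

```sql
CREATE TABLE dbo.ProcessLog
(
    LogRecordID   int IDENTITY(1,1) PRIMARY KEY,  -- 1, 2, 3 ...
    LoggedAt      datetime NOT NULL DEFAULT GETDATE(),
    RowsProcessed int NULL,
    RowsInserted  int NULL,
    RowsUpdated   int NULL,
    RowsDeleted   int NULL,
    ProcessName   varchar(100) NOT NULL,
    StepName      varchar(100) NULL,   -- e.g. the table being purged
    Notes         varchar(1000) NULL
);
```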

It is very hard to detect that such a process has failed. Simple errors can be trapped but what about, for example, a power cut, or a catastrophic error that aborts the process?

You can check @@ERROR after each delete, or use TRY/CATCH, but even then there are things that will not be caught. That said, as such things happen over time, you can program against each one.

What if the [PatientHistory] table was not found? For example, a software update is being applied when the job runs and, temporarily, the table has been renamed because it is being recreated with additional columns. Or there is a deadlock ... or ...

Each time something like that happens you can attempt to program against it.

You could have the script send you an Email on success ... but will you be alert to NOT receiving an email one day? I get hundreds of such emails every day, there is almost zero chance of me detecting that one of them doesn't arrive!

Critical processes like this write to a log table, with the columns:

Record ID
Date
Process name
Status
Error No
Error Message

When the process starts, it inserts a new log record with Status = 1 (started) and Error No = 999 ("Running").

When it finishes, it updates the row with Status = 2 (completed OK) or Status = 9 (completed with error), and sets Error No to 0 for "no error", otherwise to a number unique to the error that occurred (looking at the code, I can search for ErrorNo = 5, for example, to find the position in the code where that would be triggered).
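The start/finish pattern described above could look something like this; the `dbo.CriticalProcessLog` table and its columns are illustrative:

```sql
-- When the process starts: Status = 1 (started), ErrorNo = 999 ("Running")
DECLARE @LogID int;

INSERT INTO dbo.CriticalProcessLog
       (LoggedAt, ProcessName, Status, ErrorNo, ErrorMessage)
VALUES (GETDATE(), 'PurgePatientRecords', 1, 999, 'Running');

SET @LogID = SCOPE_IDENTITY();

-- ... do the work ...

-- On success: Status = 2 (completed OK), ErrorNo = 0
UPDATE dbo.CriticalProcessLog
SET    Status = 2, ErrorNo = 0, ErrorMessage = NULL
WHERE  LogRecordID = @LogID;

-- On failure: Status = 9 and an ErrorNo unique to that point in the code,
-- e.g. in a CATCH block:
-- UPDATE dbo.CriticalProcessLog
-- SET    Status = 9, ErrorNo = 5, ErrorMessage = ERROR_MESSAGE()
-- WHERE  LogRecordID = @LogID;
```

If the process dies before reaching the final UPDATE (power cut, kill, etc.), the row is left at Status = 1, which is exactly what the watchdog check looks for.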

We then have "something else" that checks the log table for anything that started and did not finish in a reasonable time, and THAT sends me an email (whether it finds anything or not). Now THAT email I SPECIFICALLY check for each day :smile: The only other one I specifically check for is an email that is generated if a SQL Agent Job fails. I have a job, which runs once a day, which deliberately generates an error and fails. If I don't get THAT email, then someone changed the email settings, or email is bust, or the server got reconfigured, or ... whatever ... I need to LOOK at that server to see what has changed :smile:
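That "something else" could be as simple as a scheduled query like this; the table name and the two-hour threshold are assumptions:

```sql
-- Anything still at Status = 1 (started) more than, say, 2 hours ago
-- probably died without ever reaching its "finished" UPDATE
SELECT LogRecordID, LoggedAt, ProcessName
FROM   dbo.CriticalProcessLog
WHERE  Status = 1
  AND  LoggedAt < DATEADD(hour, -2, GETDATE());
```

Run that from its own SQL Agent Job and have it email the result set either way, so the absence of the email is itself an alarm.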