[quote="dr223, post:1, topic:2717"]
I want to create a script in SQL Server to run all these statements in one go[/quote]
Presumably on a schedule - once a day, or something like that?
You could put the script in a SQL Agent Job (and give it a schedule), but personally I think it is better to put all the deletes in a Stored Procedure and just put a single line in the SQL Agent Job to execute that procedure.
Your procedure ("PurgePatientRecords" in my example) can then delete the records from each table in turn and, as gbritton says, use @@ROWCOUNT to record and log the outcome.
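Something along these lines (only [PatientHistory] comes from your post; the other names, and the WHERE clause, are made up for illustration):

```sql
CREATE PROCEDURE dbo.PurgePatientRecords
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @Rows int;

    DELETE FROM dbo.PatientHistory
    WHERE ArchiveDate < DATEADD(year, -7, GETDATE());  -- hypothetical retention rule

    SET @Rows = @@ROWCOUNT;  -- capture immediately: the next statement resets it
    -- ... log @Rows here ...

    -- repeat for each further table in turn
END
```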
Personally I much prefer to log that information to a database table - I can then see, day-by-day, what deletions have occurred, rather than having a text file that I have to read / parse etc. You can have SQL send you an email with "today's results" if you like (I would still record the information in the log table).
We have one log table for such things, used by all such processes, with columns for:
Log Record ID (IDENTITY - i.e. 1, 2, 3 ...)
Number of rows processed
Number of Inserts
Number of Updates
Number of Deletes
Name of process
Name of step (i.e. if the Process has multiple steps / tables etc. that it processes)
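As a sketch, that table might look like this (the table and column names here are illustrative, not a prescribed schema):

```sql
CREATE TABLE dbo.ProcessLog
(
    LogID         int IDENTITY(1,1) PRIMARY KEY,  -- 1, 2, 3 ...
    ProcessName   varchar(100) NOT NULL,
    StepName      varchar(100) NULL,  -- for processes with multiple steps / tables
    RowsProcessed int NULL,
    RowsInserted  int NULL,
    RowsUpdated   int NULL,
    RowsDeleted   int NULL
);
```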
It is very hard to detect that such a process failed. Simple errors can be trapped, but what about, for example, a power cut, or a catastrophic error that aborts the process?
You can check @@ERROR after each delete, or use TRY/CATCH, but even then there are things that it will not catch. That said, as such things happen over time, you can program against each one.
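The TRY/CATCH shape would be something like this - bearing in mind, as above, that a killed connection or a power cut never reaches the CATCH block at all (the WHERE clause is a placeholder):

```sql
BEGIN TRY
    DELETE FROM dbo.PatientHistory
    WHERE ArchiveDate < DATEADD(year, -7, GETDATE());  -- hypothetical criteria
END TRY
BEGIN CATCH
    -- Record what went wrong, then decide whether to continue or re-raise
    SELECT ERROR_NUMBER() AS ErrNo,
           ERROR_MESSAGE() AS ErrMsg;
    -- ... write those values to the log table ...
END CATCH
```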
What if the [PatientHistory] table was not found? For example, a software update is being applied when the job runs and, temporarily, the table has been renamed because it is being recreated with additional columns. Or there is a deadlock ... Or ...
Each time something like that happens you can attempt to program against it.
You could have the script send you an Email on success ... but will you be alert to NOT receiving an email one day? I get hundreds of such emails every day, there is almost zero chance of me detecting that one of them doesn't arrive!
Critical processes like this write to the log table, using Status and Error No columns as follows:
When the process starts it inserts a new log record - Status = 1 (started), Error No = 999 ("Running")
When it finishes it updates the row with Status = 2 (Completed OK) or Status = 9 (Completed with error), and sets Error No to 0 (no error) or otherwise to a number unique to the error that occurred (looking at the code I can then search for ErrorNo = 5, for example, to find the position in the code where that error would be raised).
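In code, the start/finish pattern is roughly this (assuming the log table also has Status, ErrorNo, StartedAt and FinishedAt columns - names illustrative):

```sql
-- Start of run: Status 1 = started, ErrorNo 999 = "running"
INSERT INTO dbo.ProcessLog (ProcessName, Status, ErrorNo, StartedAt)
VALUES ('PurgePatientRecords', 1, 999, GETDATE());

DECLARE @LogID int = SCOPE_IDENTITY();

-- ... the actual work ...

-- End of run: Status 2 = completed OK (or 9 = completed with error),
-- ErrorNo 0 = no error (or the unique error number)
UPDATE dbo.ProcessLog
SET Status = 2, ErrorNo = 0, FinishedAt = GETDATE()
WHERE LogID = @LogID;
```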
We then have "something else" that checks the log table for anything that started and did not finish in a reasonable time, and then THAT sends me an email (whether it finds anything or not). Now THAT email I SPECIFICALLY check for each day.
The only other one I specifically check for is an email that is generated if a SQL Agent Job fails. I have a job, which runs once a day, which deliberately generates an error and fails. If I don't get THAT email then someone changed the email settings, or email is broken, or the server got reconfigured, or ... whatever ... I need to LOOK at that server to see what has changed.
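That "something else" boils down to a query like this (the two-hour window and the column names are just examples; adjust to whatever "a reasonable time" means for your processes):

```sql
-- Runs that started but never finished within the allowed window
SELECT LogID, ProcessName, StartedAt
FROM dbo.ProcessLog
WHERE Status = 1                              -- still marked "started"
  AND StartedAt < DATEADD(hour, -2, GETDATE());
-- email the result (even if empty), e.g. via msdb.dbo.sp_send_dbmail
```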