SQLTeam.com | Weblogs | Forums

Suggestions required on third-party tools (both PAID & FREE)


#1

Hi All,

I have designed an SSIS package which archives historical data from LiveDB to ArchiveDB.

Archiving is done per department: each department lead chooses how many days of live data to keep. If a lead selects '100', only 100 days of data are retained in LiveDB and the rest is moved to ArchiveDB. These values are stored in a table which the department leads have access to UPDATE, and the SSIS package reads them while archiving.
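A minimal sketch of what such a retention setup could look like (table and column names here are my assumptions, not the poster's actual schema):

```sql
-- Hypothetical retention-config table; department leads UPDATE RetentionDays.
CREATE TABLE dbo.ArchiveRetention
(
    DepartmentID  int          NOT NULL PRIMARY KEY,
    RetentionDays int          NOT NULL,   -- e.g. 100 = keep 100 days in LiveDB
    UpdatedBy     sysname      NOT NULL DEFAULT SUSER_SNAME(),
    UpdatedAt     datetime2(0) NOT NULL DEFAULT SYSDATETIME()
);

-- Inside the package, rows older than the per-department cutoff
-- are the ones to move to ArchiveDB:
SELECT f.*
FROM   LiveDB.dbo.FactSales   AS f
JOIN   dbo.ArchiveRetention   AS r
       ON r.DepartmentID = f.DepartmentID
WHERE  f.LoadDate < DATEADD(DAY, -r.RetentionDays, CAST(SYSDATETIME() AS date));
```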

I also keep all the DIM tables in sync between LiveDB and ArchiveDB, which is a simple PROC.
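Such a sync proc could be little more than a MERGE per DIM table; a sketch with illustrative names (the real dimension columns will differ):

```sql
-- Hypothetical one-way sync of a DIM table from LiveDB to ArchiveDB.
MERGE ArchiveDB.dbo.DimDepartment AS tgt
USING LiveDB.dbo.DimDepartment    AS src
      ON tgt.DepartmentID = src.DepartmentID
WHEN MATCHED AND tgt.DepartmentName <> src.DepartmentName
    THEN UPDATE SET tgt.DepartmentName = src.DepartmentName
WHEN NOT MATCHED BY TARGET
    THEN INSERT (DepartmentID, DepartmentName)
         VALUES (src.DepartmentID, src.DepartmentName);
```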

This package only archives FACT tables; all the FACT tables are kept in a SEQUENCE Container such that they run in parallel.

My organization wants a third-party tool which validates the outcome of the package and sends a mail to the DBAs about the archival status.

Validations like:

It should trigger the package at a point in time between 1 AM and 5 AM, as per our schedule.

How many records were INSERTed from the STAGING table into the PROD table.

How much data was loaded into FLAT files using BCP QUERYOUT, and how much of it the BULK INSERT process loaded into the Archive DB.

It should keep a note of the TABLEs whose processing failed, and the mail should list which tables succeeded and which failed.

There should be an option to select and run only the failed tables manually.

We would also appreciate an option to auto-trigger the package to execute only the failed STEPS/TABLEs at the same time the next day.
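Most of the checks above could be driven off a run log that the package writes as it goes; a rough sketch, where every name (including the `@RunID` parameter) is an assumption for illustration:

```sql
CREATE TABLE dbo.ArchiveRunLog
(
    RunID        int          NOT NULL,
    TableName    sysname      NOT NULL,
    StartedAt    datetime2(0) NOT NULL,
    FinishedAt   datetime2(0) NULL,
    RowsExported bigint       NULL,   -- rows written to flat file by BCP QUERYOUT
    RowsLoaded   bigint       NULL,   -- rows loaded into ArchiveDB by BULK INSERT
    Status       varchar(10)  NOT NULL DEFAULT 'RUNNING'   -- RUNNING / OK / FAILED
);

-- Candidate rows for the status mail: outright failures, plus any
-- export/load row-count mismatch.
SELECT TableName, RowsExported, RowsLoaded, Status
FROM   dbo.ArchiveRunLog
WHERE  RunID = @RunID
  AND (Status = 'FAILED' OR RowsExported <> RowsLoaded);
```

A "re-run failed tables" job could then simply loop over the FAILED rows for the last RunID.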

Thank you, please let me know if you require any other info.


#2

Probably no help, but we do all that under program control (rather than using SSIS). We log which (table), when (started & finished), how many (rows) ... all that sort of stuff.

There is then another process that comes along and asks "Did the process start?" (i.e. it looks for the first log entry, and if there is none it logs that as an error). If it did start, it then asks "Here's a list of every table which should have been copied; is each one in the log, in chronological order?" Thus if anything broke (including a power cut or something else unforeseen) it can detect that.
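That "every table which should have copied" reconciliation can be expressed as an anti-join between an expected-tables list and the log; a sketch with assumed table names:

```sql
-- Expected tables with no log entry for this run were missed entirely
-- (e.g. the process died partway through, or never started).
SELECT e.TableName
FROM   dbo.ExpectedArchiveTables AS e
LEFT JOIN dbo.ArchiveRunLog      AS l
       ON l.TableName = e.TableName
      AND l.RunID     = @RunID
WHERE  l.TableName IS NULL;
```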

We don't do the email thing: too great a chance of the email not being delivered, and of the user not remembering that they should have received one (more likely if the user gets "hundreds" of such emails each day; if there is only one, that's not hard to remember). We use a report instead. We do have an email if the checking failed, but not if it succeeded, so we are not relying on loads of emails announcing things.

(We do have one email which checks that every single test of such processes was "clean"; it reports whether the run was totally clean or X processes failed, so we just have to remember to check for that one email. If we add a new process of some sort it gets added to the overall checks RATHER THAN generating yet-another-email. We also have a second email which is triggered by a SQL Job failing, i.e. we are testing that a failure does indeed generate an email, so that email NOT arriving would tell us there was something wrong with the email system. That has happened, when some well-meaning-soul <sigh!> "improved" something. But other than those two emails we have no "must remember to check for that one" emails.)