SQL Coding Standard - Vetting Vendor Databases

I think I wrote something about this here some time ago ... can't find it though; if anyone remembers it, a link/clue would be handy, thanks.

I've been having a look at some of the code in SProcs in APPs that 3rd party vendors have provided. Chock full of horrors ... my biggest pet hate is NOLOCK, as I'm 101% sure that the Vendor's DEVs have no idea of the consequences and just think it's a performance band-aid ...

I'd like to come up with a list of "banned" behaviours, and vendors have to certify that their next installed release is free from them - or justify why they need it.

Google has pointed me at a couple of good articles:

but I'd be keen to hear what anyone else uses / does / gets scared most by!!

I'll have a go at a list, and I'll try to maintain it with suggestions added by others.


No use of SA or sysadmin permissions.
What are the requirements for remote access (to support the APP)?


Name to only use A-Z and "_". Must indicate what the database is used for (e.g. includes the Vendor's name) and preferably whether the database is Production / Test, just for XFer of data, or similar.

One vendor has half a dozen databases, some of which are probably old and unused; they include names like "LIVE" - it's a shared server with stuff from a dozen vendors, and I have no idea which is which.

Physical files must be placed in authorised locations (if the Vendor has a BACKUP to restore, our standard Restore SProc will take care of the default locations).

Temporary databases (e.g. a RESTORE to retrieve some data, or to perform a test) MUST include the date of the restore, and carry the assumption that they will be dropped without warning after a reasonable period of time.

Preferably temporary restores should be named with the prefix "RESTORE_" (which automatically excludes them from our automated Backup system; otherwise we might run out of disk space for no good reason!)
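For illustration, the exclusion can be as simple as a name filter in whatever loop the backup job uses to enumerate databases. This is a hypothetical sketch, not our actual backup code:

```sql
-- Sketch: databases an automated backup job would pick up, skipping
-- anything named with the agreed "RESTORE_" prefix.
-- (The [_] brackets make the underscore a literal in LIKE.)
SELECT name
FROM sys.databases
WHERE state_desc = 'ONLINE'
  AND name NOT LIKE 'RESTORE[_]%';
```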

SIMPLE recovery model is not permitted except when the data can be reproduced, 100%, from any earlier restore - i.e. whatever scripts populate the database will "resume" from a "last run on" date stored in the database.

Not permitted: AUTO SHRINK, AUTO CLOSE etc.
Need approval if NOT using READ COMMITTED SNAPSHOT and async stats create
Full Text should only be enabled if it is actually used / needed
AutoGrow should be MB not PERCENT and should be sufficient for, say, 3 months at each growth
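The AutoGrow point is a one-liner to fix. Database and logical file names below are examples only; pick an increment that covers roughly three months of expected growth:

```sql
-- Set a fixed-MB autogrow increment instead of PERCENT.
-- "VendorAppDB" and the logical file names are illustrative.
ALTER DATABASE VendorAppDB
MODIFY FILE (NAME = VendorAppDB_Data, FILEGROWTH = 512MB);

ALTER DATABASE VendorAppDB
MODIFY FILE (NAME = VendorAppDB_Log, FILEGROWTH = 256MB);
```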

SQL Agent Jobs

All steps must call SProcs and have no (significant) SQL content (the SProc is backed up with the database and gets copied to the TEST server - whatever - long SQL scripts embedded in a Job are an invisible hazard!)

(There needs to be a step in the Acceptance test to review any SQL Agent Jobs)
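That review step can be partly automated. A rough sketch against msdb, with a deliberately crude heuristic for "not a bare SProc call":

```sql
-- Acceptance-test helper (sketch): list SQL Agent T-SQL job steps
-- whose command doesn't look like a plain stored procedure call.
SELECT j.name AS job_name,
       s.step_id,
       s.step_name,
       LEFT(s.command, 100) AS command_start
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.sysjobsteps AS s ON s.job_id = j.job_id
WHERE s.subsystem = 'TSQL'
  AND s.command NOT LIKE 'EXEC%'   -- crude heuristic; review hits manually
ORDER BY j.name, s.step_id;
```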

Any SSIS jobs require approval (who's going to deal with any breakdowns, and how good is the diagnostic information?)

Similarly for any other means of scheduled bulk data loading.


Database backup is our responsibility. Absolutely no backups may be taken by the Vendor (potentially breaking the backup chain). We provide an SProc, which Vendor can run, to generate a Log/Full (or even DIFF if they want) backup. It puts it in the right place, maintains the chain, etc. etc.
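The wrapper SProc idea might look something like the following. This is a hypothetical sketch only - the path, naming convention and procedure name are assumptions, and a real version would add error handling and permissions checks:

```sql
-- Hypothetical wrapper: Vendor calls this instead of BACKUP directly,
-- so the file lands in the authorised location and the chain is kept.
CREATE PROCEDURE dbo.usp_VendorBackup
    @DatabaseName sysname,
    @Type char(4) = 'LOG'      -- 'FULL', 'DIFF' or 'LOG'
AS
BEGIN
    DECLARE @File nvarchar(260) =
        N'\\BackupServer\SQLBackups\' + @DatabaseName + N'_' + @Type
        + N'_' + FORMAT(SYSDATETIME(), 'yyyyMMdd_HHmmss') + N'.bak';

    IF @Type = 'FULL'
        BACKUP DATABASE @DatabaseName TO DISK = @File;
    ELSE IF @Type = 'DIFF'
        BACKUP DATABASE @DatabaseName TO DISK = @File WITH DIFFERENTIAL;
    ELSE
        BACKUP LOG @DatabaseName TO DISK = @File;
END;
```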

Vendor may not change any database settings (for example: we've had them change FULL to SIMPLE because they ran the LOG out of disk space, and in the process broke the backup chain)
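Settings drift of that kind (FULL quietly flipped to SIMPLE, AUTO SHRINK turned on, etc.) is cheap to police with a daily check - a sketch:

```sql
-- Daily drift check (sketch): flag user databases whose settings have
-- wandered from the agreed standard (FULL recovery, no auto shrink/close).
SELECT name, recovery_model_desc, is_auto_shrink_on, is_auto_close_on
FROM sys.databases
WHERE database_id > 4            -- skip the system databases
  AND (recovery_model_desc <> 'FULL'
       OR is_auto_shrink_on = 1
       OR is_auto_close_on = 1);
```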


No unauthorised use of NOLOCK (valid usage would be a report run only by DBAs deliberately not interfering with Application Users)

Access to other 3rd Party Databases should assume that the Database Name etc. could/will change. Some sort of ALIAS should be used; for example, a VIEW would allow easy change and adaptation to accommodate schema changes etc. during an upgrade to the other database. Conversely, hard-coding database names etc. is not permitted.
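Both flavours of the ALIAS idea, with illustrative names (SQL Server also offers SYNONYMs as a lighter-weight alternative to a VIEW):

```sql
-- A synonym gives a single place to repoint if the other vendor's
-- database is renamed. Names below are illustrative assumptions.
CREATE SYNONYM dbo.OtherVendor_Customer
    FOR OtherVendorDB.dbo.Customer;
GO
-- A view additionally absorbs schema changes (renamed/added columns)
-- during an upgrade of the other database.
CREATE VIEW dbo.vOtherVendor_Customer
AS
SELECT CustomerID, CustomerName
FROM OtherVendorDB.dbo.Customer;
```

A synonym is a pure rename-indirection; the view costs a little more maintenance but shields callers from column-level changes too.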

Strong preferences:

All tables have a clustered index - no HEAPs
Each row has a single-column unique identifier (it definitely doesn't have to be the clustered index; it is perhaps preferable for it to be the Primary Key [makes mechanical detection of what it is somewhat easier]). This is useful for DBAs in comparing data - i.e. matching a specific record in one location / database revision with another.
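The "no HEAPs" preference is easy to audit. A standard catalog-view query (index_id 0 means the table has no clustered index):

```sql
-- DBA check (sketch): list HEAPs, i.e. tables without a clustered index.
SELECT SCHEMA_NAME(t.schema_id) AS schema_name,
       t.name AS table_name
FROM sys.tables AS t
JOIN sys.indexes AS i ON i.object_id = t.object_id
WHERE i.index_id = 0             -- index_id 0 = heap
ORDER BY schema_name, table_name;
```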


Things to make the Vendor aware of, and pick them up on later if it turns out they were not actually "aware"!

Database is appropriately normalised
Application is scalable - including having been reasonably benchmarked under load - which carries an expectation that there is a baseline benchmark we can compare "our problem" against. Database tables have appropriate/reasonable indexes. Regular queries have been tuned.
Foreign Keys exist for all appropriate relationships
Ideally all tables have audit / change-history recording - any table where it makes sense to have it.
All tables that store Queues, Audits and the like have housekeeping routines for cleardown - tables where our users add data can grow forever, but I don't expect tables storing transient data to do that.
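A typical cleardown routine for such transient tables, sketched with assumed names and an assumed 90-day retention - the batching keeps log growth and lock duration under control:

```sql
-- Housekeeping sketch (table name and retention are assumptions):
-- delete old transient rows in small batches rather than one big DELETE.
DECLARE @Rows int = 1;
WHILE @Rows > 0
BEGIN
    DELETE TOP (5000) FROM dbo.AuditLog
    WHERE CreatedAt < DATEADD(DAY, -90, SYSDATETIME());

    SET @Rows = @@ROWCOUNT;      -- loop until nothing left to delete
END;
```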



  • Double Abstraction required
  • All users in AD groups
  • Permissions only assigned to custom Database roles
  • Groups (not users) added to DB roles
  • Principle of least privilege required

SQL Agent jobs:

  • Why all proc calls? What about SSIS packages or SSAS jobs?
  • All job steps should be run with a proxy user

means the core code is in the DB, and in backups of that DB, and thus the SProc code moves around with that DB's backups (e.g. to TEST, or on rollout of a new version from DEV to TEST and TEST to PROD). It's also likely that the source code for the SProc is in some revision control somewhere - IME code embedded in the Job itself tends to be coded directly in the job, outside revision control, and it's hard to detect (well ... it's possible, but not on our "average radar") that it has changed. We have Exception Reports on SProcs (and tables etc.) by Modify Date which can alert us to something being changed without the Vendor having bothered to tell anyone; code within jobs is not really on that radar as it is not organised "by database".
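The Modify Date exception report mentioned above is essentially a query like this (a sketch - the real report presumably adds filtering and history):

```sql
-- Exception-report sketch: user objects changed in the last week, so an
-- unannounced vendor change shows up on the radar.
SELECT SCHEMA_NAME(schema_id) AS schema_name,
       name,
       type_desc,
       modify_date
FROM sys.objects
WHERE is_ms_shipped = 0
  AND modify_date > DATEADD(DAY, -7, SYSDATETIME())
ORDER BY modify_date DESC;
```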

I tend to dislike these, but that's just a personal thing. I've done consultancy for many clients whose Techies spend most of the morning sorting out all the SSIS stuff that broke overnight. All their SSIS jobs were written using point-and-click and have absolutely zero diagnostic logging - they just break, somewhere in the middle of transferring 1M rows, because there is a badly formatted date, or whatever. Those sorts of jobs we prefer to do in SQL with a temporary table, where we can check the quality of the data and then only UpSert the rows that are clean. We can also report any exceptions directly to the User - rather than Support having to find them manually and then write a "Yet again" email to the user ...

I may be being biased though!