SQLTeam.com | Weblogs | Forums

Adding fields to a table in a large organisation


In a large organisation with many tables, stored procedures (SPs), functions, etc., let's say you would like to add three fields to one of the tables.
What are the things you need to consider/do, from an architecture point of view, before adding these fields?
Thank you



Bother! ... forum won't let me post JUST that ... so you also get this wibble as a bonus ...


How about first finding out which tables, SPs, etc. are affected...?
Please elaborate


Existing code should not be affected by adding columns to a table if that code was written properly. Of course, you can't know that for sure, so you probably want to check.

I don't know of any way to do that easily. You'd basically have to look through the code for SELECT * or INSERT INTO table_name without a column list.

For non-dynamic SQL, the view sys.sql_expression_dependencies should at least help you find all code that references the table in question.
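As a starting point, the dependency view mentioned above can be queried like this - a minimal sketch, where dbo.YourTable is a placeholder for the table you plan to change:

```sql
-- List every module (SP, function, view, trigger) that references the table.
-- NOTE: dbo.YourTable is a placeholder; dynamic SQL will NOT show up here.
SELECT
    OBJECT_SCHEMA_NAME(d.referencing_id) AS referencing_schema,
    OBJECT_NAME(d.referencing_id)        AS referencing_object,
    o.type_desc                          AS object_type
FROM sys.sql_expression_dependencies AS d
JOIN sys.objects AS o
  ON o.[object_id] = d.referencing_id
WHERE d.referenced_id = OBJECT_ID(N'dbo.YourTable');
```

That gives you a checklist of modules to inspect for SELECT * or column-less INSERTs; anything built with dynamic SQL still has to be found by searching the source.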


Strikes me that there may be two different things here.

On a large table with many users I would aim to restrict the time that the change takes. That means adding columns at the end of the table and including any defaults etc. in the column definition. For example, a NOT NULL column added to a populated table is required to have a DEFAULT, even if one is not otherwise needed - so add the column with the DEFAULT, then "populate" the column with the correct data, then DROP the [temporary] default.

A bigger problem is where a change of mind means the database structure needs changing: an INT or NUMERIC needs to be made larger, or a clustered index needs to be changed (probably no excuse for that!!). On large tables in busy databases these become BIG changes to make, and perhaps you choose to work around them instead - e.g. instead of widening a column, add a new, wider column, migrate the data over, and change the APP to use the new column.

Adding a new column on the end of the table is very quick and has minimal impact, and some other changes (e.g. increasing the size of a varchar) can also be done quickly and easily, but for the rest the process is very time consuming:
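The add-with-temporary-DEFAULT approach looks something like this - a sketch only, with made-up table and column names:

```sql
-- 1. Add the NOT NULL column with a temporary DEFAULT so the ALTER
--    succeeds on a table that already contains rows.
ALTER TABLE dbo.Customer
    ADD Region varchar(20) NOT NULL
        CONSTRAINT DF_Customer_Region_Temp DEFAULT ('');

-- 2. "Populate" the column with the correct data
--    (real logic would derive the value per row).
UPDATE dbo.Customer
SET Region = 'Unknown'
WHERE Region = '';

-- 3. DROP the temporary default once the data is in place.
ALTER TABLE dbo.Customer
    DROP CONSTRAINT DF_Customer_Region_Temp;
```

Naming the default constraint yourself (rather than letting SQL Server generate a name) is what makes step 3 straightforward.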

Create temporary table in the new structure
Drop constraints and defaults on old table
Create defaults etc. on temp table
Copy all data from old table to temp table -- This can take a while, and a lot of disk space, obviously.
Drop old table
Rename temp table to original name
Re-create constraints, foreign keys, and indexes
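The steps above can be sketched as follows - placeholder names throughout, and in practice you would do this in a maintenance window with the table quiesced:

```sql
BEGIN TRANSACTION;

-- 1. Create a temporary table in the new structure
--    (here: a widened Reference column, with its default defined up front).
CREATE TABLE dbo.Order_New
(
    OrderID   int         NOT NULL,
    Reference varchar(50) NOT NULL
        CONSTRAINT DF_Order_New_Reference DEFAULT (''),
    CONSTRAINT PK_Order_New PRIMARY KEY CLUSTERED (OrderID)
);

-- 2. Copy all data from the old table to the temp table
--    -- this is the slow part, and needs matching disk/log space.
INSERT INTO dbo.Order_New (OrderID, Reference)
SELECT OrderID, Reference
FROM dbo.[Order];

-- 3. Drop the old table and rename the temp table to the original name.
DROP TABLE dbo.[Order];
EXEC sp_rename 'dbo.Order_New', 'Order';

-- 4. Re-create any remaining constraints, foreign keys, and indexes here.

COMMIT TRANSACTION;
```

Foreign keys in *other* tables that point at the old table also have to be dropped before the DROP TABLE and re-created afterwards, which is part of why the elapsed time adds up.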

The elapsed time for that lot can be significant, so I can well imagine that on a big database table, with lots of users, "workarounds" would be adopted to avoid that.

Whether that is compatible with "an architecture point of view" may be debatable!


I was called in by a client who suffered a fairly sudden, dreadful decline in performance. It turned out their DEVs used SELECT * everywhere, even if only one or two columns were actually used in the APP (maybe the programming language they used made that method more "convenient" for the DEVs).

Anyway, their Call Centre had asked for a "Notes" facility on various Customer, Product, etc. tables and once provided (as a varchar(MAX) I expect ...) they made heavy use of it and said it was wonderful ... at which point all the SELECT *'s were pulling huge amounts of data from the DB, across the network to the APP Server, which then threw most of the data away - especially all the NEW columns it had no interest in. It took them months to change all the SELECTs to have specific column lists ... big APP ... they had thousands of concurrent users on the WEB, performance was dreadful until it was fixed, and I expect they lost a lot of loyal customers during that time.
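The fix, repeated thousands of times across that APP, amounts to the difference below - a sketch with hypothetical table and column names:

```sql
-- Fragile: pulls every column, including any varchar(MAX) "Notes"
-- column added to the table later, whether the APP uses it or not.
SELECT *
FROM dbo.Customer
WHERE CustomerID = @CustomerID;

-- Robust: an explicit column list only fetches what the APP actually
-- uses, so new wide columns have no effect on this query's traffic.
SELECT CustomerID, CustomerName, Email
FROM dbo.Customer
WHERE CustomerID = @CustomerID;
```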