MS SQL Temporal Tables: Pros and Cons

As the title says, what are the pros and cons of using this new feature?

I can find information on how to create them and they seem like a good idea. But there has to be a downside.

One of the downsides that comes to mind is disk size: maintaining history means EXTRA storage, whether small or large.

True, but the "best practice" recommendation is for the temporal history table to use a columnstore index. This page also lists some of the limitations:

https://docs.microsoft.com/en-us/sql/relational-databases/tables/temporal-table-considerations-and-limitations?view=sql-server-ver15

(Right near the bottom)

While I agree that the creation of "history" and other forms of "audit" tables does, in fact, use extra disk space, if you need such a thing, that's a tough fact to avoid. Columnstore indexes can help there, but you do have to be careful with performance, and you have to remember that Temporal Table history is page-compressed by default. I've not looked it up, so I don't know whether you can actually have columnstore indexes in the presence of that compression; you may have to override the default.
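For what it's worth, you can pick the index type yourself by pre-creating the history table before turning system versioning on. A minimal sketch, with hypothetical table names, that puts a clustered columnstore index on the history table instead of the default:

-- History table created first, with a clustered columnstore index
-- instead of the default page-compressed rowstore.
CREATE TABLE dbo.ProductHistory
(
    ProductID  int           NOT NULL,
    Name       nvarchar(100) NOT NULL,
    Price      decimal(10,2) NOT NULL,
    ValidFrom  datetime2     NOT NULL,
    ValidTo    datetime2     NOT NULL,
    INDEX IX_ProductHistory CLUSTERED COLUMNSTORE
);
GO

-- Current table, pointed at the pre-created history table.
CREATE TABLE dbo.Product
(
    ProductID  int           NOT NULL PRIMARY KEY CLUSTERED,
    Name       nvarchar(100) NOT NULL,
    Price      decimal(10,2) NOT NULL,
    ValidFrom  datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo    datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductHistory));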

One of the things that does bug me is that you can't relegate the history table to a different database and have it operate by synonym. The reason you might want to do such a thing is to set the second database up with (possibly) a different recovery model than the original database, and also to move things like large "history/audit" tables out of the original database so that rapid DR restores are possible and you can "get back in business" more quickly.

I've also not yet tried to capture the name of the session user in the "history/audit" table as a default, so that you don't have to have code do it, possibly incorrectly, or have it missed altogether. That'll be an experiment I do in the very near future because I actually have a pressing need for it.

To Harish's point, there can be a huge amount of unwanted wasted space if the table has a huge number of columns and the normal column count of updates is very small (say, a 150-column-wide table where, after the first insert, only 4 columns are usually updated). The same holds true if the table contains LOBs (the article that uberbloke cites mentions that particular issue) or suffers from a lot of "out-of-row" activity from large numbers of overly large variable-width columns.

Other than those things, I see Temporal Tables as being a MUCH better and faster solution than home-grown solutions, especially when the end goal is the ability to determine a "point-in-time" state for any of the rows in a table. In other words, MS (IMHO) did a great job of instilling Type 6 SCDs (Slowly Changing Dimensions) in Temporal Tables, both for writing to the "history/audit" table and for reading from the primary table and the "history/audit" table by date/time.
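That "point-in-time" read really is a one-liner. Continuing with the hypothetical dbo.Product table from the earlier sketch:

-- Rows as they existed at a given instant; SQL Server pulls from the
-- current table and/or the history table as needed.
SELECT ProductID, Name, Price, ValidFrom, ValidTo
FROM dbo.Product
    FOR SYSTEM_TIME AS OF '2020-06-01T00:00:00'
WHERE Price > 10.00;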

To keep down the size of the audit table, I plan to run a stored procedure that will remove all records that are older than 6 months if there are more than 3 versions. We don't expect to have massive updates, so I expect the table won't grow too much over time. There will be varbinaries in the tables as well.
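A rough sketch of what I have in mind, borrowing the hypothetical dbo.Product/dbo.ProductHistory names from the earlier example. Note that SQL Server won't let you DELETE from a history table while SYSTEM_VERSIONING is ON, so it has to be toggled off around the delete:

CREATE OR ALTER PROCEDURE dbo.TrimProductHistory
AS
BEGIN
    SET NOCOUNT ON;

    -- System versioning must be off to modify the history table
    -- directly; this takes a schema lock, so run it in a quiet window.
    ALTER TABLE dbo.Product SET (SYSTEM_VERSIONING = OFF);

    -- Delete versions that are both older than 6 months AND beyond
    -- the 3 most recent versions for their key.
    WITH Ranked AS
    (
        SELECT ValidTo,
               ROW_NUMBER() OVER (PARTITION BY ProductID
                                  ORDER BY ValidTo DESC) AS rn
        FROM dbo.ProductHistory
    )
    DELETE FROM Ranked
    WHERE rn > 3
      AND ValidTo < DATEADD(MONTH, -6, SYSUTCDATETIME());

    -- Re-attach the trimmed history table.
    ALTER TABLE dbo.Product SET
        (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.ProductHistory));
END;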

You might want to look into partitioning then. Most will recommend the use of Partitioned TABLES for such a thing. I drank that MS Kool-Aid for a table that has 96 months of data that must always remain. For a six-month table, Partitioned TABLES would probably work but, if I have my way next time (or even with a conversion of my existing monster history table, which isn't a Temporal Table either), I'll use a PARTITIONED VIEW instead.
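In case partitioned views are new to you, it's just a set of identical monthly tables behind a UNION ALL view. A hypothetical sketch (the CHECK constraints are what let the optimizer skip whole months):

-- Two identical monthly tables; the CHECK constraints allow the
-- optimizer to eliminate whole tables from a query on ValidTo.
CREATE TABLE dbo.ProductHistory_202001
(
    ProductID int           NOT NULL,
    Price     decimal(10,2) NOT NULL,
    ValidTo   datetime2     NOT NULL
        CHECK (ValidTo >= '20200101' AND ValidTo < '20200201')
);
CREATE TABLE dbo.ProductHistory_202002
(
    ProductID int           NOT NULL,
    Price     decimal(10,2) NOT NULL,
    ValidTo   datetime2     NOT NULL
        CHECK (ValidTo >= '20200201' AND ValidTo < '20200301')
);
GO
-- Queries only ever touch the view; dropping a month is just
-- DROP TABLE plus re-creating the view.
CREATE VIEW dbo.ProductHistoryAll
AS
SELECT ProductID, Price, ValidTo FROM dbo.ProductHistory_202001
UNION ALL
SELECT ProductID, Price, ValidTo FROM dbo.ProductHistory_202002;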

Either way, though... consider partitioning so you're not loading up your log files and backups with DELETEs that don't matter.

Either way, when it comes to the highly active and usually huge tables, have a plan "B" for if/when plan "A" doesn't work so well.


What advantage would partitioned tables give me? Would it not still use the same amount of space?

There are several advantages to partitioning.

The first is that it will make life a whole lot easier and faster to SWITCH out an entire month's partition and then drop the working table that you switched the partition out to (there's a rough sketch of this below the third item).

The second advantage is that you can set the filegroups holding all previous partitions (just not the current or next month's partitions) to READ_ONLY so that you're not having to back up the older months that will never change. You DO have to back them up one final time after you've set them to READ_ONLY.

The third is that you can do a "piecemeal" restore if one of the partitions goes bad. This also allows you to put off loading such partitioned data until after you've restored the data necessary to "get back in business" if you run into a DR situation. You do have to restore the partitioned data eventually, but not until you've got everything else up and running to support business activities. For the partitioned data, you just need to restore the "current month" to "get back in business".
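To make the first two concrete, here's a rough sketch with hypothetical table, filegroup, and path names (if I recall the considerations article correctly, SWITCHing a partition OUT of the history table is allowed even with system versioning on):

-- 1) Age out the oldest month. SWITCH is a metadata-only operation,
--    so the log barely notices. The staging table must be empty,
--    identical in structure, and on the same filegroup as the
--    partition being switched out.
ALTER TABLE dbo.ProductHistory
    SWITCH PARTITION 1 TO dbo.ProductHistory_Staging;
DROP TABLE dbo.ProductHistory_Staging;

-- 2) Freeze a closed month: mark its filegroup READ_ONLY (this needs
--    exclusive access to the database), take one final filegroup
--    backup, and never back that month up again.
ALTER DATABASE MyAuditDb MODIFY FILEGROUP FG_202001 READ_ONLY;
BACKUP DATABASE MyAuditDb
    FILEGROUP = 'FG_202001'
    TO DISK = 'X:\Backups\MyAuditDb_FG_202001.bak';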

We're developing in the cloud, which is my first time working there, but I will let our data architect determine the partitioning since I don't know a lot about that yet.

But thank you for the advice.

BTW and just to be sure, there are three things that I consider to be "faults" with Temporal Tables...

The first thing is that you cannot purposefully ignore columns in the history table. It audits everything.

That brings us to the second "fault": there is no option to ignore LOBs, which can add a huge amount to the space required for the history table.

The third fault is that you cannot add extra columns to the history table to capture things like what ORIGINAL_LOGIN() was when the row entered it.

If you need to do any of those things, Temporal Tables are not the way to go. You might think that an INSTEAD OF trigger would help here, but those are terrible when it comes to updates.

Yes, I understand, but I will actually be auditing tables that have LOBs, as this is something that we do update/change, and we have run into problems in the past with people uploading a new, wrong file. The data doesn't change a huge amount, but there are changes at times. In 10 years we have only generated 300 gigs of data, so I think planning for a terabyte for now is reasonable. I plan to keep only the 3 most recent changes to any row once they're older than 6 months. We will also be speaking to the client to nail down how often data might change for a given row.

Keeping only the latest 3 changes after 6 months sounds like a bit of a never-ending chore of deletes, which will also keep you from setting the partitions of the legacy months to "read only".

Thank you everyone for your input. I have learned a lot about Temporal Tables. Unfortunately, the powers that be above have told me that it will increase our cost too much in the cloud. So I am stuck using triggers and regular tables. All this work wasted.

Thanks everyone.

To be clear (and I am NOT advocating cloud usage), Temporal Tables may still not be appropriate because of the LOB requirements you have, but Temporal Tables have absolutely no cloud requirements.

This is what I was told about our costs...

Temporal tables require the S3 pricing tier if using DTUs, which is about $225/month per database. Also, I don't think they work with SQL Express.

Currently in prod we're using dual vCores at a cost of ~$540/month. I believe that, given our workload, we can get away with S1 or 20 DTUs for about $45/month.

My long-winded point being that I think we need to fix the prod deployment and spend about $45 per environment right now on MSSQL, for a cost of about $180/month for 4 environments.

Using temporal tables will drive that cost to about $1,000/month. These are operational and ongoing costs. Think $12k/yr, or $60k over 5 years, for a feature that audits a few tables, which is a weak justification.

All this for a database on SQL Server EXPRESS???