Can anyone give me a couple of simple "bullet point" reasons to upgrade from SQL Server 2008 to SQL Server 2016? I want to present a business case to my boss to get him to pull the trigger on purchasing the upgrade. I'm just looking for maybe 3 or 4 key benefits, such as: it's faster, it's easier to maintain (why?), it has more features that make it easier to code (such as the IF EXISTS syntax and stuff like that). Thanks!
Take a look at this:
I saw that, but it didn't seem to have any reasons to upgrade
Mainstream support ended in 2014; Extended Support, if you have it, ends in 2019
Scroll down to "What’s New in SQL Server 2014 since SQL Server 2008"
- Query optimization enhancements - i.e. faster
- Backup Compression - a single config setting makes compressed backups the default, saving 80% of disk space and I/O time (and also on Restore). [Used to be an Enterprise-only feature]
- Enhancements to replication / High Availability / AlwaysOn etc.
- Spatial features, geocoding and so on - if you need them
- memory-optimized tables - if you need them
- "R" - machine learning algorithms
- Ad-hoc query paging (OFFSET / FETCH)
- Sequence Objects
- SELECT INTO allows parallel execution
- Updateable columnstore indexes
- Buffer pool can be extended to SSD (up to 32x size of memory)
- TRY_CONVERT, PARSE and TRY_PARSE
- New Date/Time functions EOMONTH, DATEFROMPARTS etc.
- Other new functions CHOOSE(), IIF(), CONCAT(), FORMAT(), STRING_SPLIT(), STRING_ESCAPE()
- The OVER clause supports window framing (ROWS / RANGE)
- FIRST_VALUE, LAST_VALUE, LAG and other analytic functions
- JSON Support
- DROP ... IF EXISTS syntax
- INDEXes can be declared inline in CREATE TABLE statement
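A few of those T-SQL additions in one quick sketch (the table and column names here are invented for illustration):

```sql
-- DROP ... IF EXISTS (2016) replaces the old IF OBJECT_ID(...) IS NOT NULL dance
DROP TABLE IF EXISTS dbo.WeeklyImport;

-- Inline index declaration in CREATE TABLE (2014+)
CREATE TABLE dbo.WeeklyImport
(
    ImportId int IDENTITY PRIMARY KEY,
    FileDate date NOT NULL,
    Amount   varchar(20) NULL,
    INDEX IX_WeeklyImport_FileDate (FileDate)   -- inline nonclustered index
);

-- TRY_CONVERT (2012+) returns NULL instead of raising an error on bad data
SELECT TRY_CONVERT(decimal(18,2), Amount)             AS CleanAmount,
       EOMONTH(FileDate)                              AS MonthEnd,
       IIF(Amount IS NULL, 'missing', 'present')      AS AmountStatus
FROM   dbo.WeeklyImport;

-- Ad-hoc paging with OFFSET / FETCH (2012+)
SELECT ImportId, FileDate
FROM   dbo.WeeklyImport
ORDER BY ImportId
OFFSET 0 ROWS FETCH NEXT 50 ROWS ONLY;
```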
As it turns out, SP1 for SQL 2016 was released today, and if you didn't think any of the other reasons were enough, check these:
I tend to try to frame my upgrade justifications a little differently. I usually focus on one specific thing and try to attach a "cost" savings to it. That can be in dollars, cores, memory, storage, time, etc.
For example: if you upgrade to SQL Server 2016, clustered columnstore indexes will reduce the storage of these very large tables (list the exact ones) by 90%, which means less of that really expensive storage. Oh, and your queries for transaction history will run in 10 seconds rather than 3-4 minutes. That means fewer of those expensive cores.
Give them a hook where they can see the benefits in their terms.
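As a sketch of what that looks like in practice - the table name below is hypothetical, and the compression you actually get depends on your data:

```sql
-- A clustered columnstore index (2014+, updateable) stores the whole table
-- column-wise with heavy compression, often several times smaller than rowstore.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_TransactionHistory
    ON dbo.TransactionHistory;

-- Rough before/after size check to put real numbers in the business case:
EXEC sp_spaceused 'dbo.TransactionHistory';
```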
I see your point. The main 3 things I believe they want here are faster imports, faster queries, and easier programming (so we can write and debug queries more quickly and efficiently).

Our main use of the database is not editing data at all: we import large text files we receive from other companies weekly, and we then run some big queries to extract data from them. And some of it is very slow. For example, one weekly process to import 12 text files into tables takes more than half a day; another process we run takes 2-3 days! (The current queries are terribly sloppy and inefficient; they were written by previous employees who transitioned on their own from Excel and Access to SQL Server and were self-taught, so it's a mess.)

I'm trying to unravel it and rebuild everything with halfway decent design and coding. We're running SQL Server 2008, and it has been configured all screwy ever since it was installed.
That's more or less a "given" with each new version. There is always something that makes life easier, or improves the debugging / optimisation process.
Training might be an alternative
Probably a given ... but:
in that case the effort spent on sorting them out will yield the most bang for the buck, in particular because well-written processes and queries scale well, whereas lousy ones seldom do. You can chuck as much horsepower as you like at a badly written routine and maybe solve it for today, but not once it has grown, whereas a well-optimised routine is much more likely to scale well on its current kit, and fly on more modern, faster kit.
The key thing I focus on with large imports is to only import changes. That does of course need a way of knowing what has changed, but with luck the remote system has a ChangeDate column on its tables. Failing that, it might be possible to achieve it in other ways - e.g. perhaps a file-comparison tool could extract the rows with differences by comparing last week's file with this week's.
Unfortunately, we get a pile of text files that our client extracts for us and other people to use for various purposes, and the records have no direct relationship from one set to the next, so we can't only import changed records; the whole lot have to be imported every week. And over about 10+ years various people have come and gone who tacked on various steps to modify and twiddle with the data, so it's a serious mass of spaghetti that needs to be untangled. But meanwhile, the older version of SQL Server is no longer supported, and Microsoft has made many improvements and enhancements over the past couple of versions, so I'm hoping we can get up to the current one.
We have a client that exports data from a SQL Database for import into ours (just for Read Only purposes). The files are huge, we can "see" the other SQL box, but this Export/Import avoids license fees on the other APP ... the import runs about 10 hours each night, there are no CHANGE DATE columns ...
We get the files as Native BCP files, so in theory they should import easily - no line-breaks or other goofiness to catch us out.
In practice the transfer of the massive files is prone to Disk Full and other outages (including "Tonight we'll reboot at 10PM without telling anyone, because we can ..."). So our String & Gum has to be robust enough to detect all those sorts of issues.
We pull all the files into STAGING TABLES (every column is a VARCHAR(8000), or VARCHAR(MAX) where needed), then establish which rows have changed since last time and pull just those rows into our own database. Getting the changes takes about 5 minutes each day; the rest of the 10 hours is spent just getting to that point ...
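One way to sketch that "just the changed rows" step in T-SQL - the staging table and column names below are invented for illustration:

```sql
-- Staging tables hold this week's and last week's raw loads as wide varchars.
-- EXCEPT returns rows present in the new file but not in the old one,
-- i.e. inserts plus changed rows (deletes need the reverse EXCEPT).
INSERT INTO dbo.ChangedRows (Col1, Col2, Col3)
SELECT Col1, Col2, Col3
FROM   dbo.Staging_ThisWeek
EXCEPT
SELECT Col1, Col2, Col3
FROM   dbo.Staging_LastWeek;
```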
I've known clients use SSIS to get stuff from Left, Right and Centre. Their IT people seemed to spend every day until lunchtime sorting out whatever had gone wrong the night before ...
... personally I'm happier with BCP. We save the errors output (-e), the output file (-o) and anything sent to STDOUT into (text) log files so we can examine them if we need to. Over the years we have refined that, so now we are able to (mechanically) detect any sort of error in those output files - including the 3rd party changing their table DDL and swearing blind that they hadn't!
It seems to run very reliably. We have a table of "Data Validation Errors" into which we insert Source (e.g. Source & Table ID), PKey, Error Number and Message. Our SQL import routine has loads of error traps (every single string column is checked to see if the import data is too wide, all dates are checked for validity, and so on - that code is all mechanically generated), so if some goofy data arrives the details are logged, and the outcome is either critical (that row is not imported) or "repaired" (e.g. the column value is truncated to the max length of the column). The operator can then go about cleaning up the source data, or whatever, to solve that ... or tell us that we need to increase a column width or similar.
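A minimal sketch of that kind of validation pass, assuming a hypothetical staging table and error-log table (all names invented):

```sql
-- Bad dates: TRY_CONVERT (2012+) gives NULL instead of failing the whole batch.
INSERT INTO dbo.DataValidationErrors (SourceId, PKeyValue, ErrorNumber, ErrorMessage)
SELECT 1, s.PKeyValue, 1001,
       'Invalid date: ' + s.OrderDate
FROM   dbo.Staging_Orders AS s
WHERE  s.OrderDate IS NOT NULL
  AND  TRY_CONVERT(date, s.OrderDate) IS NULL;

-- Too-wide strings: log them, then truncate ("repair") on the way in.
INSERT INTO dbo.DataValidationErrors (SourceId, PKeyValue, ErrorNumber, ErrorMessage)
SELECT 1, s.PKeyValue, 1002,
       'CustomerName truncated to 50 chars'
FROM   dbo.Staging_Orders AS s
WHERE  LEN(s.CustomerName) > 50;
```

Each check is cheap to generate mechanically from the target table's column definitions, which is presumably how the "all mechanically generated" part above works.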
Dunno if that would work for you, and maybe these days SSIS does an equal, or better, job?