SQLTeam.com | Weblogs | Forums

Slow SQL Server instance?

Hi there!

I have a set of databases that reside on the local SQL Server 2014 instances of my clients. I recently made some modifications to the databases which seem innocent in nature (added a few columns, changed some data, nothing out of the ordinary). My process is pretty simple; for each database, I do the following:

  • DBCC CheckDB
  • Add columns, change some data, whatever simple stuff
  • For each table in the database, ALTER INDEX ALL ON Table_Name REBUILD
  • sp_updatestats
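
For concreteness, the steps above might look something like this in T-SQL. DB1, dbo.Table1, and NewCol are placeholder names; the real databases and schema changes will differ:

```sql
USE DB1;

-- Step 1: verify physical and logical integrity
DBCC CHECKDB;

-- Step 2: placeholder for the simple schema/data changes
ALTER TABLE dbo.Table1 ADD NewCol INT NULL;

-- Step 3: rebuild every index on every user table
DECLARE @sql NVARCHAR(MAX) = N'';
SELECT @sql += N'ALTER INDEX ALL ON ' + QUOTENAME(s.name) + N'.'
            + QUOTENAME(t.name) + N' REBUILD;'
FROM sys.tables AS t
JOIN sys.schemas AS s ON t.schema_id = s.schema_id;
EXEC sys.sp_executesql @sql;

-- Step 4: update statistics instance-wide for this database
EXEC sys.sp_updatestats;
```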

But now, queries that used to take only milliseconds are taking seconds. That's a performance degradation by a factor of 1000! But here's where it gets weird: this radical performance degradation only happens at the clients' sites. If I detach the databases and bring them over to my workstation (which is an older system than my clients'), all queries run lightning fast!

What could be the issue? What should I investigate, and how? I tried restarting the clients' systems; no effect. They all seem to have plenty of available memory, and the SQL Server instance eats up its usual fair share.

I'm at my wits' end, so any help would be dearly appreciated!

Thanks for reading!

Do they have access to that server? Is it a physical server or virtual? Have there been any network changes on their side?
Are their databases being backed up? Look at their log files for the databases you made changes to.
It could be one of many things. But the fact that it is blazing fast when you attach it to yours seems to indicate an issue on their side.

All client servers are local physical servers. All databases are backed up twice per day. The drop in performance happens immediately after the series of procedures that I quoted earlier:

My process is pretty simple; for each database, I do the following:

  • DBCC CheckDB
  • Add columns, change some data, whatever simple stuff
  • For each table in the database, ALTER INDEX ALL ON Table_Name REBUILD
  • sp_updatestats

As you say, the fact that I can take the databases and put them on my older system with no loss in performance indicates that the issue lies with the engine or the system databases, not with the client databases. But what the heck could have affected their systems like that?

It almost certainly has to do with sp_updatestats...

By default, that procedure updates statistics using a sampling rate. The index rebuild updates the statistics with a full scan, and then running the procedure afterwards replaces those full-scan stats with sampled ones.

Now... the execution plans that are generated are based on the sampled statistics instead of the full scan, and you get a non-optimal plan.
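
One way to confirm this is to inspect the statistics metadata after each step; `rows_sampled` equal to `rows` indicates a full scan, while a smaller value means a sampled update has replaced it. A sketch, with dbo.Table1 as a placeholder table name:

```sql
-- Show when each statistic on the table was last updated and how
-- many rows were sampled versus the total row count.
SELECT s.name AS stat_name,
       sp.last_updated,
       sp.rows,
       sp.rows_sampled
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.Table1');
```

Run it once after the ALTER INDEX ... REBUILD and again after sp_updatestats to see the sampling rate drop.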

As to why it runs 'fast' on your system... there are way too many factors, but most likely the execution plan generated on your system happens to be more optimal, or the bad plan has less impact because of local storage.

An index rebuild updates (regenerates) statistics with the equivalent of a full scan. You really do not want to issue sp_updatestats after that.
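
In other words, the end of the maintenance pass would become something like this (dbo.Table1 is a placeholder):

```sql
-- The rebuild already refreshes the index statistics with a full scan.
ALTER INDEX ALL ON dbo.Table1 REBUILD;

-- If you ever need to refresh statistics WITHOUT rebuilding the index,
-- keep the full scan explicitly instead of letting it be sampled:
UPDATE STATISTICS dbo.Table1 WITH FULLSCAN;

-- Avoid this right after a rebuild: it downgrades the fresh
-- full-scan statistics to sampled ones.
-- EXEC sys.sp_updatestats;
```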

In that case, I will do an index rebuild ASAP. And this may sound like a stupid question, but, given the available time to do so, would there be any benefit to running sp_updatestats before doing an index rebuild?

I can't state clearly enough how much I appreciate your input. Thanks for your time!!!

Not really, because the index rebuild (not reorganize, just to be clear) will update the stats with a full scan.

Hi folks!

I ran an index rebuild on a few client instances and it had no effect. So the issue is not with the indexes either.

But I was reminded that I made another change to my query methodology: instead of connecting directly to a database and running "Select Col1 From Table1", I changed all my queries to "Select Col1 From DB1.dbo.Table1" with a connection to an arbitrary database. It seemed harmless and better structured. However, when I ran a set of those new queries yesterday directly from SSMS at a client site, it took over 8 seconds. So as a test, I reverted to the old methodology of connecting directly to each database concerned and running the same queries using the simplified "Select Col1 From Table1". This ran in milliseconds.

So obviously, the SQL Server instance is taking a lot of time finding DB1.dbo.Table1 when it is called from DB2. To me, it's analogous to Windows being slow to open a file via a UNC path with a hostname (e.g. \\Station1\MyFile.txt) but fast with an IP address (e.g. \\192.168.1.100\MyFile.txt).
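
A minimal repro of the two styles, assuming the DB1/DB2 databases and table from the posts above; SET STATISTICS TIME ON makes the difference visible in the SSMS Messages tab:

```sql
SET STATISTICS TIME ON;

-- Slow at the client sites: three-part name, connected to another database
USE DB2;
SELECT Col1 FROM DB1.dbo.Table1;

-- Fast everywhere: connect to the target database, one-part name
USE DB1;
SELECT Col1 FROM Table1;

SET STATISTICS TIME OFF;
```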

This behavior only seems to occur at my client sites, not on my workstation with the exact same databases. So, again, the issue is not with the databases. Therefore, my guess is that there is some setting on the SQL Server instance on my station which is not set at the client sites. Would anyone know what is at issue?
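
One way to hunt for such a difference is to run the same settings query on both instances and compare the output. Differences in these database-level options are a common cause of behavior that follows the instance rather than the database files; AUTO_CLOSE in particular forces the engine to reopen a closed database on each cross-database query. DB1/DB2 are placeholder names:

```sql
-- Compare database-level options between the workstation and a client instance.
SELECT name,
       compatibility_level,
       collation_name,
       is_auto_close_on,
       is_auto_create_stats_on,
       is_auto_update_stats_on
FROM sys.databases
WHERE name IN (N'DB1', N'DB2');
```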

Thanks!