SQLTeam.com | Weblogs | Forums

SQL Server random slow query

Hi to all,

In my company we have been dealing with a very strange problem on a SQL Server 2012 instance for the last couple of weeks, and maybe you could give us a hint.
We have a database that supports an ASP.NET application with about 500 users, running on IIS 7 with a Web Garden. Usually everything is fine, but 3-5 times a day there is one, and just one, execution of a query that takes several minutes, when it normally completes in about 500 ms.
This represents about 0.1% of the calls to that query.

Using Profiler, I see that the query usually takes 10K to 20K reads, but when the problem occurs we get 1.5 million reads.

The stored procedure is dynamic; we send the query as a string. If I copy the query and execute it in SSMS it is always fast.

We update indexes and statistics every week.

We have already tried:
• copying the parameters to local variables (because of parameter sniffing)
• freeing the cache
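For reference, the local-variable workaround for parameter sniffing usually looks something like the sketch below (procedure, table, and column names are invented for illustration):

```sql
CREATE PROCEDURE dbo.GetOrders
    @CustomerId int
AS
BEGIN
    -- Copy the parameter into a local variable so the optimizer
    -- compiles the plan for an "average" row estimate instead of
    -- sniffing the specific @CustomerId value passed on this call.
    DECLARE @LocalCustomerId int = @CustomerId;

    SELECT OrderId, OrderDate, Total
    FROM dbo.Orders
    WHERE CustomerId = @LocalCustomerId;
END
```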

Nothing seems to work and the problem appears to be random.

Have you ever come across something like this? We didn't detect any locks that would explain it, because the same query is called at almost the same time by other users with normal response times. CPU and memory also seem to be OK.

Any kind of help would be appreciated.

[quote="mdsantos, post:1, topic:1128"]
The stored procedure is dynamic, we send the query as a string
[/quote]That is not good practice for security reasons, and possibly for performance as well.

If the exact same query sometimes has 10K reads and sometimes 1.5M reads then the execution plan is different. Capture the plan if you can when it is performing poorly and compare it to a good run.

It is hard to provide specific help without seeing the statement(s), the DDL, and the execution plan. But the resolution will most likely be to simplify the code so that SQL Server has less chance of coming up with a bad plan. Maybe adding or modifying an index is the answer, and in some (rare) cases forcing things with a hint of some kind is called for.

SQL Server may not reuse a cached plan for a call from a query window and the same call from an application if any of the connection settings are different. For SQL Server to reuse a plan, not only the statement but also the connection properties need to match. Often an application connects with different SET options than a connection from a query window.
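One way to check for that (a sketch; the search text is a placeholder for part of your actual statement) is to compare the `set_options` attribute of the cached plans:

```sql
-- List cached plans for the statement, with the SET options each
-- plan was compiled under. Different set_options values mean the
-- application and SSMS cannot share a plan.
SELECT qs.plan_handle, st.text, pa.attribute, pa.value
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_plan_attributes(qs.plan_handle) AS pa
WHERE st.text LIKE '%YourQueryText%'
  AND pa.attribute = 'set_options';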

Hi, is the estimated number of rows different when the query is slower? What type of operation is occurring - for example, is it a Clustered Index Scan? Are you able to post the query?

Follow this guide to troubleshoot a slow running query - http://www.sqlserver-dba.com/2012/11/sql-server-how-to-troubleshoot-a-slow-running-query.html

I don't have a problem with the concept, provided it is done right. For a query where the end user has a large number of parameters to choose from, IME dynamic SQL can outperform static SProc code, quite possibly by orders of magnitude.

For me, dynamic SQL must be fully parametrised and use sp_ExecuteSQL (rather than "EXEC (@MySQL)"), and it must be constructed in such a way that the same SQL string is likely to be generated for the same query. Of course, that still means each variation of the SQL is a different query, but each will have its own cached query plan, so any "popular" formats will take advantage of cached plans.
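A minimal sketch of that pattern (table, column, and parameter names are invented for illustration):

```sql
DECLARE @sql nvarchar(max) =
    N'SELECT OrderId, OrderDate, Total
      FROM dbo.Orders
      WHERE CustomerId = @CustomerId';

-- Append optional predicates only when needed, so each "shape"
-- of the query produces a stable, reusable SQL string.
DECLARE @FromDate date = '2015-01-01';
IF @FromDate IS NOT NULL
    SET @sql += N' AND OrderDate >= @FromDate';

-- Fully parametrised call: one cached plan per SQL string, and
-- values passed as real parameters rather than spliced-in text.
EXEC sys.sp_executesql
    @sql,
    N'@CustomerId int, @FromDate date',
    @CustomerId = 42,
    @FromDate   = @FromDate;
```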

We log all SQL generated by dynamic SQL (including elapsed execution times - which are of course also affected by server load) in a table, so we can report on which actual SQL statements are "popular". We then review indexes etc. to make sure those statements run efficiently; if the users' usage patterns change over time, we can adjust the database etc. to match.

Might be that the query is cached by then?

I'm a programmer, not a sysadmin, so I don't know if this is relevant, but if SQL Server decides to rebuild stats, does the "cost" (or perhaps "side effect") of that show up in the READs for the query? I'm thinking about setting ASYNC stats rebuild (which I believe is NOT the default, but it seems to be the preferred option amongst folk here?)

Totally agree. I read the OP remark as they have a general SP that execs whatever query is passed in.

I didn't read it like that, but on re-reading it you could well be right!

We pass SQL strings to our SProc too, but only the list of columns that we want in the SELECT and the logic for the WHERE ... that then gets passed to sp_ExecuteSQL with a full set of parameters etc., so I just assumed that was what the O/P was doing too :sunglasses:

It's proving difficult to capture the bad execution plans. I'm using Profiler -> Performance events -> Showplan All and Showplan XML.

But after a few minutes I get "There is insufficient system memory in resource pool 'internal' to run this query." errors, and I have to close the trace.

Is there any other way to get the "bad" plan?

No, it's not. The same query run again in SSMS is fast, with the same parameters and the same output.

But possibly with data in cache, or statistics on table more recently updated etc.

We've had folk come on here before and say "same query" when actually they have substituted string constants for parameters and so on ... not saying you've done that!! but there are all sorts of reasons why SQL Server would use a different query plan for your ad-hoc query than for the user's. An extra space, or different capitalisation, etc. is enough.

If the original is in an SProc, then the query plan stored with the SProc could have been built with completely different parameters - and the index chosen originally may be unsuitable for the query the subsequent user is actually performing. Your new ad-hoc query would get a new query plan, not use the cached one, and thus have the "perfect" index for your parameters.

There are also other attributes, in addition to "identical SQL", which have to match for SQL Server to reuse the cached query plan - elements of your connection, which might be different between you and the user.

My money would be on the statistics being stale, with ASYNC statistics rebuild turned OFF (which is the default). So when SQL Server decides the stats are stale, it rebuilds them AND blocks the query until that is done. To my mind that's daft as a default, because if the query just runs with the old stats it's no worse off than the one that ran the same query 1 ms earlier!! Hence I would just turn ASYNC stats update on anyway ...

But I'm guessing ... it probably won't be that and I'll have to eat my hat ... again!

What I don't "get" is why the READS would be sky-high if all that is happening is that the query is blocking for Stats Rebuild.
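For reference, turning on the ASYNC statistics behaviour mentioned above is a one-line change (the database name is a placeholder):

```sql
-- Let queries run with the old statistics while SQL Server
-- rebuilds stale stats in the background, instead of blocking
-- the triggering query until the rebuild completes.
ALTER DATABASE [YourDatabase] SET AUTO_UPDATE_STATISTICS_ASYNC ON;
```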

@mdsantos - An alternative way of getting the execution plans is to pull them from the cache; it's not an ideal method. http://www.sqlserver-dba.com/2011/01/sql-server-query-plans-in-cache.html

You are better off running the query in SSMS and saving the execution plan.

Based on your earlier comment you are getting a 701 message (internal memory pressure) - you'll find details on how to troubleshoot this type of memory pressure here: http://www.sqlserver-dba.com/2013/07/troubleshoot-error-701-there-is-insufficient-system-memory-to-run-this-query.html
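The plan-cache approach boils down to something like the query below - a sketch only, since what is in the cache depends on your workload. A statement whose maximum reads dwarf its average reads is a candidate for the "bad" plan:

```sql
-- Pull cached plans with their read statistics; open query_plan
-- in SSMS to view the graphical plan.
SELECT TOP (20)
    st.text,
    qs.execution_count,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    qs.max_logical_reads,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.max_logical_reads DESC;
```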