We have a client-facing reports site that is required to serve real-time data.
Our issue is that each user has unique access rights, so results are specific to them: the data returned is aggregated differently for each user, in combination with the criteria selected for the report.
The time to return this data is over a minute when nothing is cached, but once the query has completed once, subsequent runs finish in about 4 seconds.
Since this is client-facing, users start refreshing before results are returned, stacking queries and causing other issues.
We tested with increased resources, and nothing resolved it. The queries are clean and optimized for the indexes, and it appears the aggregation functions (mostly COUNT and SUM) are adding most of the execution time.
CTEs are giving slightly better performance than #temp tables and table variables. We haven't tried an actual (permanent) table yet, but we are considering whether one might help, since the result sets are stored temporarily for the subsequent queries in the procedure.
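One pattern worth trying along those lines is to materialize the per-user detail rows once into an indexed #temp table, then run the heavy COUNT/SUM against that smaller set instead of re-scanning the base tables in each step of the procedure. A minimal sketch, with all table and column names purely illustrative (not from the original post):

```sql
-- Materialize the rows this user is allowed to see, once.
SELECT t.AccountId, t.TxnDate, t.Amount
INTO #UserDetail
FROM dbo.Transactions AS t
JOIN dbo.UserAccess AS ua
  ON ua.AccountId = t.AccountId
WHERE ua.UserId = @UserId
  AND t.TxnDate >= @FromDate;

-- Index the temp table so the subsequent aggregates can seek, not scan.
CREATE CLUSTERED INDEX IX_UserDetail ON #UserDetail (AccountId, TxnDate);

-- The expensive aggregation now runs against the pre-filtered set.
SELECT AccountId,
       COUNT(*)    AS TxnCount,
       SUM(Amount) AS TotalAmount
FROM #UserDetail
GROUP BY AccountId;
```

Unlike a CTE, which is re-expanded inline wherever it is referenced, a #temp table is materialized once and can carry its own index and statistics, which often helps when the same intermediate set feeds several later queries in the procedure.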
Any suggestions, given that a cached plan is currently our only scenario with an acceptable response time?
This is your problem. If you are aggregating large transaction tables, you may need to maintain summary tables (refreshed overnight, etc.) so that less aggregation is required for historical data, leaving only the detail rows for the current day. (Even aggregating per minute can significantly reduce the amount of reads.) You may also want to look at filtered indexes.
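The summary-table idea above can be sketched as follows: pre-aggregate closed days into a summary table, then have the report roll up the summary plus only today's detail. This is a hedged illustration; every object name (`dbo.TxnDailySummary`, `dbo.Transactions`, `@LastSummarizedDate`) is an assumption, not from the thread:

```sql
-- Overnight job: append daily totals for days not yet summarized.
INSERT INTO dbo.TxnDailySummary (AccountId, TxnDate, TxnCount, TotalAmount)
SELECT AccountId, CAST(TxnDate AS date), COUNT(*), SUM(Amount)
FROM dbo.Transactions
WHERE TxnDate >= @LastSummarizedDate
  AND TxnDate <  CAST(GETDATE() AS date)
GROUP BY AccountId, CAST(TxnDate AS date);

-- Report query: cheap rollup of the summary plus today's detail only.
SELECT AccountId,
       SUM(TxnCount)    AS TxnCount,
       SUM(TotalAmount) AS TotalAmount
FROM (
    SELECT AccountId, TxnCount, TotalAmount
    FROM dbo.TxnDailySummary
    UNION ALL
    SELECT AccountId, COUNT(*), SUM(Amount)
    FROM dbo.Transactions
    WHERE TxnDate >= CAST(GETDATE() AS date)
    GROUP BY AccountId
) AS x
GROUP BY AccountId;
```

The aggregation cost then grows with one day of detail rather than the full history, while the results remain current to the moment the report runs.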
I am pretty sure this can be fixed. Creating execution plans should not take an inordinate amount of time, though yours may be an exception. One thing that comes to mind that would cause the vast difference you are seeing is an unintentional cartesian product hidden by a GROUP BY. Your query may be doing millions of lookups for each row in a result set and then discarding them due to the grouping. All those physical reads get converted to memory lookups on the second pass. To see an example, ask Bard or Bing AI "Give me an example of a query with a GROUP BY that produces the same result when the query contains an unintentional cartesian product"
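To illustrate the pattern being described: an incomplete join predicate multiplies rows, and the GROUP BY (plus a DISTINCT aggregate) collapses them back, so the result "looks right" while the engine reads far more rows than necessary. This is a hypothetical example; the table names are invented for illustration:

```sql
-- Buggy: joining only on CustomerId pairs every order with every payment
-- for that customer; COUNT(DISTINCT ...) hides the row explosion.
SELECT o.CustomerId, COUNT(DISTINCT o.OrderId) AS Orders
FROM dbo.Orders   AS o
JOIN dbo.Payments AS p
  ON p.CustomerId = o.CustomerId   -- missing: AND p.OrderId = o.OrderId
GROUP BY o.CustomerId;

-- Fixed: use EXISTS so orders are never multiplied by matching payments.
SELECT o.CustomerId, COUNT(*) AS Orders
FROM dbo.Orders AS o
WHERE EXISTS (SELECT 1
              FROM dbo.Payments AS p
              WHERE p.OrderId = o.OrderId)
GROUP BY o.CustomerId;
```

On a cold cache the buggy version pays physical I/O for every spurious joined row before discarding it, which matches the cold-versus-warm gap described above.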
@ifor, I am not sure if you read where I explained that, due to access levels, summary aggregation is not a valid option: we would be pre-computing for potentially hundreds of thousands of access levels that need relatively real-time updates on their portals.
Second, your other comment is completely unqualified without understanding or seeing the query; it was a passive-aggressive statement that was unnecessary. I do not dismiss that queries can often be improved, but you dismiss that there are times in database work when no "clean" query can overcome the limitations of the technical environment and operations ;). Sometimes rethinking the approach is the solution…
Ultimately, that is what we did to reach a working solution.