Report generation speed

One of our company reports is averaging 30 seconds to generate during peak hours, processing about 2.3 million records. During non-peak hours it averages 7-10 seconds. How can I speed up processing during peak hours? Where should I start: code review, wait stats, tuning advisor, etc.? Thanks.

I would start with a code review and optimize the process. It may need a pre-defined table built on a daily/weekly/monthly basis or just improved code.

If that doesn't resolve the issue - look into pre-generated snapshots (assuming SSRS).

If those reports are being run against production, consider moving them to a read-only secondary.
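
If you do have an Always On Availability Group with a readable secondary and read-only routing configured (an assumption on my part), redirecting the report can be as simple as adding ApplicationIntent to the report data source's connection string (listener and database names here are made up):

```text
Server=tcp:MyAgListener,1433;Database=MyReportDB;ApplicationIntent=ReadOnly;
```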

Code review and index review (which will include reviewing the query plan).
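
Since you mentioned wait stats: the fact that the same report is 3-4x slower only during peak hours suggests contention rather than a bad plan, so while it is running during peak hours it is worth checking what the session is actually waiting on. A minimal check (the session_id is hypothetical):

```sql
-- What is the report's session waiting on right now?
SELECT session_id, status, command, wait_type, wait_time, blocking_session_id
FROM   sys.dm_exec_requests
WHERE  session_id = 53;  -- replace with the report's actual session_id
```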

Thanks for the reply. It got me closer to an answer.

If neither optimizing code nor optimizing indexes brings the desired speed, you have to talk to your key user(s) and find out what delay between
a) snapshot of data used for report and
b) data in db
they can live with.
IF they say "real time, we want to see the current truth"
THEN they will need to provide you with better hardware
🙂
ELSE IF they accept a sensible delay that is decisively longer than the report generation time (you could start by proposing one hour and let the haggling go down to a minimum of 15 minutes), you could simply schedule the SQL code and insert the results into a table with the absolute minimum of indexes, most likely truncating before each insert ... otherwise the table will grow and end up historizing the reporting data, which does not seem to be required here.
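
Something along these lines, scheduled as a SQL Agent job (table and column names below are made up, and it assumes the report boils down to a simple aggregation):

```sql
-- Hypothetical schema: dbo.ReportSnapshot holds the pre-computed results,
-- dbo.Sales is the big source table. Run on the agreed schedule
-- (e.g. every 15-60 minutes).
BEGIN TRANSACTION;

TRUNCATE TABLE dbo.ReportSnapshot;  -- no history kept, as discussed above

INSERT INTO dbo.ReportSnapshot (RegionID, SaleDay, TotalAmount, SaleCount)
SELECT RegionID,
       CAST(SaleDate AS date),
       SUM(Amount),
       COUNT(*)
FROM   dbo.Sales
GROUP BY RegionID, CAST(SaleDate AS date);

COMMIT;
```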

Other possible options to improve performance:
(1) page compress the data
(2) force LOB columns out of row (if one or more LOB columns exist in the table and they are not needed by most reports, especially the slow ones).
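
Both are standard SQL Server features; a quick sketch with a hypothetical table name:

```sql
-- (1) Page-compress the table (rebuilds the heap / clustered index):
ALTER TABLE dbo.BigReportTable REBUILD WITH (DATA_COMPRESSION = PAGE);

-- (2) Push LOB columns (varchar(max), nvarchar(max), xml, ...) off the
--     in-row pages so scans that don't need them read fewer pages.
--     Note: existing values only move when they are next updated.
EXEC sys.sp_tableoption 'dbo.BigReportTable',
                        'large value types out of row', 1;
```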

Post the code. Since it's a report, it probably has some aggregations in it. I recently presented on a report that had some nasty complicated aggregations by week and month and also included a grand total, of course.

I got it to run in 63 MILLI-seconds (not a misprint... MILLI-Seconds) on a table that contained {insert drum roll here} 100 MILLION rows (again, not a misprint... 100 MILLION rows).

I got it down to 1.2 seconds using a technique known as "Pre-Aggregation". I could have left it like that because it's well below the 5 seconds they were striving for (their code took almost a minute for "just" 10 million rows). But I also knew the data was growing and so went for the jugular with an Indexed View and that resulted in a run time of 63 MILLI-Seconds.
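
To give a flavor of what that looks like, here is a minimal sketch of the indexed-view approach (the schema is made up, not the one from my presentation):

```sql
-- SCHEMABINDING and COUNT_BIG(*) are required for an indexed view; the
-- engine then maintains the aggregates as the base table changes.
-- Amount is assumed NOT NULL (SUM over a nullable column isn't allowed here).
CREATE VIEW dbo.SalesByMonth
WITH SCHEMABINDING
AS
SELECT YEAR(SaleDate)  AS SaleYear,
       MONTH(SaleDate) AS SaleMonth,
       SUM(Amount)     AS TotalAmount,
       COUNT_BIG(*)    AS RowCnt
FROM   dbo.Sales
GROUP BY YEAR(SaleDate), MONTH(SaleDate);
GO
-- The unique clustered index is what actually materializes the view.
CREATE UNIQUE CLUSTERED INDEX IX_SalesByMonth
    ON dbo.SalesByMonth (SaleYear, SaleMonth);
```

On editions where the optimizer won't match indexed views automatically, reference the view directly WITH (NOEXPAND) in the report query.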

The use of Column Store may have helped here, as well, but I find Column Store to be a maintenance PITA and so wanted to avoid it... and did so quite successfully.

Can't help other than making those suggestions unless you post the code and, hopefully, an Actual Execution Plan.