As a DBA, what should we do when a request comes in about poor performance
of a query?
First, ask questions:
- What is the current runtime/io of the query? (get a baseline)
- Has the runtime/io changed recently?
- What is the desired (target) performance, in milliseconds? (Note: "as fast as possible" is not measurable)
- Does the query hit the target at some times (of the day/week/month/year) and perform poorly only at specific times?
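To get that baseline runtime/IO, you can wrap the query in STATISTICS IO/TIME. A minimal sketch (the table and predicate here are made up; substitute the actual query):

```sql
-- Capture a runtime/IO baseline for the problem query.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT OrderID, OrderDate
FROM dbo.Orders            -- hypothetical table
WHERE CustomerID = 42;     -- hypothetical predicate

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
```

The Messages tab will show logical reads per table and CPU/elapsed time; save those numbers so you can compare each tuning attempt against them.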
Some things to do:
- Look at the execution plan. Especially, look for table/index scans and poor row estimates
- Update statistics on the tables involved
- Consider whether a new index or two might change the scans to seeks
- Pick two or three options, try them all and compare to the current runtime/io
- Pick the best result
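As a sketch of the "update statistics" and "new index" steps (object and column names are illustrative, not from the original post):

```sql
-- Refresh statistics on the table involved; FULLSCAN gives the most
-- accurate estimates at the cost of a longer update.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- If the plan shows a scan when filtering on CustomerID, a covering
-- index may turn it into a seek.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderID, OrderDate);
```

Test each candidate index separately and compare its runtime/IO against the baseline before keeping it.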
Note that it may turn out that the query is well written, the indexing is appropriate, and the statistics are up to date. Then you may have to start thinking about things like:
- IO contention. What else is on the filegroups hosting the tables involved? Is something else hitting those disks particularly hard?
- Are tempdb or the log files on the same disks? If so, move them off
- Memory. Is the server memory starved? Perfmon can help you there
- Do you need faster disks? Additional NICs?
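Besides Perfmon, you can check memory pressure from inside SQL Server. One common signal is Page Life Expectancy; a value that trends low suggests the buffer pool is being churned:

```sql
-- Page life expectancy from the Buffer Manager counters.
-- Sustained low values suggest the server is memory starved.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy';
```

Interpret the number against your workload's history rather than a fixed threshold; a sudden drop matters more than any absolute value.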
In addition to what gbritton said, I would add wait stats. What is the query waiting on most of the time? What is the instance waiting on most of the time? I would also look at the STATISTICS IO and TIME output. Code refactoring can help too.
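The wait stats mentioned above can be pulled from the DMVs. A sketch (the excluded wait types are a partial list of common benign waits, not exhaustive):

```sql
-- Top instance-level waits accumulated since the last restart.
SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('LAZYWRITER_SLEEP', 'SLEEP_TASK',
                        'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
                        'BROKER_TO_FLUSH')
ORDER BY wait_time_ms DESC;

-- What currently running requests are waiting on right now.
SELECT session_id, wait_type, wait_time, last_wait_type
FROM sys.dm_exec_requests
WHERE session_id > 50;   -- skip most system sessions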