SQLTeam.com | Weblogs | Forums

Deciding between two SQL statements that give the same result

sql2012

#1

Hi,

I have two SQL statements which give the same results. How can I decide which one to use? I have almost no experience reading execution plans. Is there a simple way to decide by looking at such a plan?

SQL1:

select l.stkdepo,
       sum(l.miktar) as MevcutMiktar,
       ((sum(l.miktar)
         + coalesce((select sum(Erp2.dbo.stkhardet.miktar)
                       from Erp2.dbo.stkhardet
                      where Erp2.dbo.stkhardet.stkdepo = l.stkdepo
                        and Erp2.dbo.stkhardet.stkkod = stkhardet.stkkod
                        and Erp2.dbo.stkhardet.iptal = 0
                        and Erp2.dbo.stkhardet.islemturu = 'C'
                        and Erp2.dbo.stkhardet.belgeturu <> 'UR'), 0))
         - coalesce((select sum(Erp2.dbo.stkhardet.miktar)
                       from Erp2.dbo.stkhardet
                      where Erp2.dbo.stkhardet.stkdepo = l.stkdepo
                        and Erp2.dbo.stkhardet.stkkod = stkhardet.stkkod
                        and Erp2.dbo.stkhardet.iptal = 0
                        and Erp2.dbo.stkhardet.islemturu = 'T'), 0)) as SerbestMiktar
  from stkhardet l
 where iptal = 0
   and l.stkkod = 'STOK3'
 group by l.stkdepo

SQL2:

select a.stkdepo
      ,b.depoadi
      ,sum(a.MevcutMiktar) as MevcutMiktar
      ,sum(a.SerbestMiktar) as SerbestMiktar
  from (select stkdepo
              ,sum(miktar) as MevcutMiktar
              ,sum(miktar) as SerbestMiktar
          from erp1.dbo.stkhardet
         where iptal=0
           and stkkod='STOK3'
         group by stkdepo
        union all
        select stkdepo
              ,0 as MevcutMiktar
              ,sum(miktar
                  *case
                      when islemturu='C'
                      then  1
                      else -1
                   end
                  ) as SerbestMiktar
          from erp2.dbo.stkhardet
         where iptal=0
           and stkkod='STOK3'
           and ((islemturu='C'
           and   belgeturu<>'UR'
                )
            or  islemturu='T'
               )
         group by stkdepo
       ) as a
       left outer join erp1.dbo.stkdepo as b
                    on b.depokodu=a.stkdepo
 group by a.stkdepo
         ,b.depoadi
;

Their execution plans can be downloaded from here: http://pasted.co/3cd6a172
(It seems the forum is not happy with Mega links, so I had to put the link on a pastebin-like site.)

Thanks.


#2

Although execution plans are certainly helpful in tuning a query, they should NEVER (and I don't use that word often) be used to decide which query is better for performance, because things like "% of Batch" are still estimates even in the supposed Actual execution plan.

And, no... you'll find very few of us willing to jump to another site for your execution plans, either. It takes too much time and it's not safe. And if you don't upload them here, then when they eventually disappear from the remote site, the forum post may be left basically useless for others to learn from.

Getting back to your decision making... IF your code does NOT include Scalar or Multi-Statement Table Valued Functions (mTVFs) (Inline Table Valued Functions {iTVFs} are ok), then you can simply wrap your code in the following and compare the results.

SET STATISTICS TIME,IO ON;
...put your code here...
SET STATISTICS TIME,IO OFF;
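To make back-to-back runs comparable, it can also help to start each test from a cold cache. A minimal sketch (on a test server only, never in production, since clearing the caches affects every session on the instance):

```sql
-- Test server only: clearing caches on a shared server hurts everyone else.
CHECKPOINT;               -- flush dirty pages so DROPCLEANBUFFERS can evict them
DBCC DROPCLEANBUFFERS;    -- start from a cold data cache
DBCC FREEPROCCACHE;       -- force a fresh compile of each query

SET STATISTICS TIME,IO ON;
-- ...run SQL1 here, note the output, then repeat the block for SQL2...
SET STATISTICS TIME,IO OFF;
```

Run each query more than once, too: the first (cold-cache) run shows physical I/O cost, while subsequent runs show the warm-cache behavior you'll usually see in production.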

Of course, if that turns out to be almost a tie, then scalability, number of hits per day, maintainability, reusability, and readability will become larger factors in the decision as well.


#3

Hi JeffModen,

Thanks for the response.

As I am new to the site, I have some restrictions. Unfortunately, one of them stops me from adding attachments. Hopefully, I will overcome that problem in time.


#4

So did my suggestion for how to choose work for you?


#5

I do get some figures, but in a nearly empty database. I am still not sure what they are and how they help me decide.

SQL1 results:

(3 row(s) affected)
Table 'stkhardet'. Scan count 8, logical reads 16, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'stkhardet'. Scan count 1, logical reads 3, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 2 ms.

SQL2 results:

(3 row(s) affected)
Table 'stkdepo'. Scan count 0, logical reads 8, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'stkhardet'. Scan count 1, logical reads 2, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'stkhardet'. Scan count 1, logical reads 3, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
 SQL Server Execution Times:
   CPU time = 0 ms,  elapsed time = 0 ms.

#6

Apologies for the late reply.

Such figures mean nothing on a nearly empty database. You can write accidental Cross Joins on a nearly empty database and they can still look good. You need some meat in your test tables. My normal minimal test table size is 1 million rows.

On a new database, you obviously won't have that volume of rows yet. So, in order to test for performance, you need to make some test data. You can buy products that will generate oodles of test data for you. I personally don't bother with such products, though, because it's simple to make a whole lot of data with just a little forethought and understanding.

Please see the following two articles for how to rapidly build tons of random but constrained data in just minutes of programming on your part and just seconds of run time.
Generating Test Data: Part 1 - Generating Random Integers and Floats
Generating Test Data: Part 2 - Generating Sequential and Random Dates

Then, let your imagination run wild with that basic knowledge.
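As a rough illustration of the kind of thing those articles teach, here is a sketch for this thread's table. The column names come from the queries above, but every value range (10 depots, 100 stock codes, the 'FT' document type, the 5% cancellation rate) is an assumption for demonstration only:

```sql
-- Sketch: build ~1,000,000 rows of random but constrained test data.
-- The cross join of sys.all_columns with itself is just a cheap row source;
-- ABS(CHECKSUM(NEWID())) % n produces a pseudo-random integer in [0, n-1].
SELECT TOP (1000000)
       stkdepo   = ABS(CHECKSUM(NEWID())) % 10 + 1,                            -- 10 depots (assumed)
       stkkod    = 'STOK' + CAST(ABS(CHECKSUM(NEWID())) % 100 + 1 AS varchar(10)), -- 100 stock codes (assumed)
       miktar    = ABS(CHECKSUM(NEWID())) % 1000 + 1,                          -- quantity 1-1000 (assumed)
       iptal     = CASE WHEN ABS(CHECKSUM(NEWID())) % 20 = 0 THEN 1 ELSE 0 END, -- ~5% cancelled (assumed)
       islemturu = CASE ABS(CHECKSUM(NEWID())) % 2 WHEN 0 THEN 'C' ELSE 'T' END,
       belgeturu = CASE ABS(CHECKSUM(NEWID())) % 5 WHEN 0 THEN 'UR' ELSE 'FT' END -- 'FT' is a placeholder value
  INTO dbo.stkhardet_test
  FROM sys.all_columns ac1
 CROSS JOIN sys.all_columns ac2;
```

With a realistic row count in place (and the same indexes as the real table), the SET STATISTICS TIME,IO numbers actually mean something.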