In my SQL query, as part of the WHERE clause, I'm trying to say:
request BETWEEN DATEADD(day, -11, GETDATE() time 00:00 ) AND DATEADD(day, -11, GETDATE() TIME 23:59)
Obviously I know the above isn't going to work, but how can I write it so that both dates are 11 days in the past, with the first date having a time of 00:00 and the second date (the other end of the BETWEEN) having a time of 23:59?
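For reference, the pattern the replies below converge on is a closed/open range rather than BETWEEN. A minimal sketch, assuming request is a datetime column as in the question:

-- whole day 11 days ago: >= that midnight, < the following midnight
WHERE request >= CAST(DATEADD(day, -11, GETDATE()) AS date)
  AND request <  CAST(DATEADD(day, -10, GETDATE()) AS date)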
Well now, I would never have thought that would work, but yep, the optimizer is using the index when CASTing, and @harishgg1's original statement has a simpler plan:
-- repro: temp table with an indexed datetime column and one random row
drop table if exists #;
create table # (date_col datetime not null index date_col);
insert # select dateadd(day, checksum(newid()) % 50, getdate());
-- closed/open range on the raw column:
select * from # where date_col >= CAST(DATEADD(day,-13,GETDATE()) as date) AND date_col < CAST(DATEADD(day,-10,GETDATE()) as date)
-- CAST on the column itself (@harishgg1's pattern):
select * from # where cast(date_col as date) = cast(GETDATE()-13 as date)
True, but with this simple query the plans are effectively the same, as both put 100% of the cost on the index seek. The problem with the CAST on the datetime column shows up in more complicated queries, where it can produce worse cardinality estimates. Personally, I would rather developers followed your original pattern.
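If you want to see that for yourself, one hedged way to compare the two shapes is to run both with the actual execution plan captured and compare estimated vs. actual rows on the seek (dbo.Requests and the variable names here are just illustrative):

DECLARE @start date = DATEADD(day, -11, CAST(GETDATE() AS date));
DECLARE @end   date = DATEADD(day, -10, CAST(GETDATE() AS date));
-- open interval on the raw column: estimates come straight from the column statistics
SELECT COUNT(*) FROM dbo.Requests WHERE request >= @start AND request < @end;
-- CAST on the column: the estimate has to be derived through the conversion
SELECT COUNT(*) FROM dbo.Requests WHERE CAST(request AS date) = @start;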
The post following that very well-written response shows that, in later versions, the behavior appears to be the same as using an open interval range.
With that said, I personally will still recommend the open interval range method as opposed to CAST/CONVERT of a date/time column. My concern is that someone will see that it works for the date data type and just assume it also works for other data types.
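To illustrate that concern, here's a hedged example against the demo table above: a similar-looking predicate that converts the column to a string gets no such rewrite and forces a scan.

-- NOT SARGable: converting the column to varchar hides it from the index
select * from # where convert(varchar(10), date_col, 120) = convert(varchar(10), dateadd(day, -13, getdate()), 120);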
And, it is trivial to calculate or generate the start and end ranges, so that should not be a factor.
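For example, a minimal sketch of deriving the pair for "N days ago" (the variable names are just illustrative):

DECLARE @days int = 11;
DECLARE @start datetime = CAST(DATEADD(day, -@days, GETDATE()) AS date); -- midnight N days ago
DECLARE @end   datetime = DATEADD(day, 1, @start);                       -- midnight the next day
-- ... WHERE request >= @start AND request < @end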
I appreciate the link but (and jumping on the same bandwagon as many of the others have)... I tested that method when they first made it available and, although it's SARGable, it's still slower than doing it with the good ol', always reliable Closed/Open method that @robert_volk used.
Also, I strongly recommend against using any form of 23:59 end times. They're just not as bullet-proof as the Closed/Open method: anything in the last minute of the day gets silently excluded, and with the datetime data type a literal like 23:59:59.999 actually rounds up to midnight of the next day.
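A quick demonstration of that rounding behavior (datetime only has about 3.33 ms of precision, so .999 rounds up):

-- .999 rounds UP to midnight of the NEXT day with datetime:
SELECT CAST('2024-01-01T23:59:59.999' AS datetime); -- returns 2024-01-02 00:00:00.000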
Never rely on supposedly simple execution plans to tell you the truth about performance. They're full of estimates that aren't always even close to being accurate. Only the proverbial "Million row tests", with a measure of Reads, CPU, and Duration, will actually tell you the truth, provided that you have no Scalar or Multi-Statement table valued functions in the code. If you do, you'll either need to fire up SQL Profiler at the SPID level or do the same in Extended Events (which I don't care for at all, but that's a whole 'nuther story).
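For what it's worth, a hedged sketch of that kind of harness; the table name, index name, and row spread are just illustrative:

-- build a million rows of random datetimes spread over ~50 days
DROP TABLE IF EXISTS #t;
SELECT TOP (1000000)
       DATEADD(second, ABS(CHECKSUM(NEWID()) % (86400 * 50)), DATEADD(day, -50, GETDATE())) AS date_col
INTO #t
FROM sys.all_columns a CROSS JOIN sys.all_columns b;
CREATE INDEX ix_date ON #t (date_col);

SET STATISTICS IO, TIME ON;
-- closed/open range on the raw column
SELECT COUNT(*) FROM #t
WHERE date_col >= CAST(DATEADD(day, -13, GETDATE()) AS date)
  AND date_col <  CAST(DATEADD(day, -10, GETDATE()) AS date);
-- CAST on the column
SELECT COUNT(*) FROM #t
WHERE CAST(date_col AS date) = CAST(DATEADD(day, -13, GETDATE()) AS date);
SET STATISTICS IO, TIME OFF;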