Insert doesn't work only when done via STORED PROCEDURE

I have a stored procedure that ends with an insert into a table on a linked server. It has been working for a long time and I'm not sure what changed in our environment (doubtless something), but basically here is what happens:
When I troubleshoot it by executing the code NOT as a stored procedure, it works fine and the insert takes place.
When I execute the stored procedure, there are no errors returned, but the insert does not take place.

Does anyone have any ideas? PS - I can't figure out how to post "code" on this site; I'm not sure why only the last block shows up as code - sorry.

Code:

USE [THP_Reporting]
GO
/****** Object:  StoredProcedure [dbo].[CLAIMSAUDIT_ClaimCountsImport_EZCAP]    Script Date: 02/29/2016 19:58:17 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author:        <ISAAC PISORS>
-- Create date: <10/27/2015>
-- Description:    <TO BRING IN NEW PHP CLAIM COUNTS FOR THE CLAIMS AUDIT DATABASE>
-- =============================================
ALTER PROCEDURE [dbo].[CLAIMSAUDIT_ClaimCountsImport_EZCAP]
(
@LineofBusiness_EzcapTerm varchar(9), --examples: PHPCAID PHPCARE
@LineofBusiness_LegacyReference varchar(9)--examples: PHP AAHP
)

AS
BEGIN

declare @startdate date
declare @enddate date

--testing only /////////////////////////////
--declare @LineofBusiness_EzcapTerm varchar(9) --examples: PHPCAID PHPCARE
--declare @LineofBusiness_LegacyReference varchar(9)--examples: PHP AAHP
--set @LineofBusiness_EzcapTerm ='PHPCAID'
--set @LineofBusiness_LegacyReference = 'PHP'
--end testing only

set @startdate = CAST(getdate()-90 as DATE) 
set @enddate = CAST(getdate() as date)

--Build up our happy little conglomeration of audit records and claim header records, with 
--total claims worked:
IF OBJECT_ID('tempdb..#temp') IS NOT NULL BEGIN DROP TABLE #temp END
;
with cte as (
                select
                    COUNT(distinct c.claimno) as 'TotalClaimsWorked',
                    ltrim(rtrim(audit.OP_ID)) as 'OPID',
                    --(select USERs.fullname from [phaz-phpdw-04.phaz.vhs.net].[bianalytics].dbo.claimsaudit_users users where users.ezcapopid = Audit.op_id) as 'Operator',
                    CAST(audit.timestamp_changed as DATE) as 'ChangeDate'
                from
                    DatawarehouseCMS.dbo.AUDIT_DETAILS_V  audit
                    inner join DatawarehouseCMS.dbo.CLAIM_MASTERS_V c on audit.CHANGE_REC_KEY  = c.CLAIMNO 
                where
                    c.COMPANY_ID=@LineofBusiness_EzcapTerm
                    and cast(audit.TIMESTAMP_CHANGED as date) >= cast(GETDATE()-59 as date) --change back to 10 after remediation 2/8/2016
                    and cast(audit.TIMESTAMP_CHANGED as date) < cast(GETDATE() as date) --added 2/8/2016, don't pick up today's records. and put a false permanent ZERO.
                    and ltrim(rtrim(audit.OP_ID)) in (select users.EZCAPOPID from [phaz-phpdw-04.phaz.vhs.net].[BIAnalytics].dbo.CLAIMSAUDIT_USERS users)
                    and audit.FIELD_NAME='STATUS'
                    --and audit.NEW_VALUE in ('1','9')
                    and audit.NEW_VALUE in ('1') --changed 2/22/2016 per email conversation w/Kim Lindsey
                group by ltrim(rtrim(audit.OP_ID)), CAST(audit.timestamp_changed as date)
            )

select * into #temp from cte 
--testing:
--select * from #temp
insert [phaz-phpdw-04.phaz.vhs.net].[bianalytics].dbo.claimsaudit_claimcounts
    (MC400ChangeUserID, NumberClaimsProcessed, ProcessDate,
     DBUserName, DBusername_tenet,  DateImported, ChangeUserAndDate, LOB
    )

   select
        temptable.OPID as 'MC400ChangeUserID', --old nomenclature,but keeping it the same for right now
        temptable.TotalClaimsWorked as 'NumberClaimsProcessed',
        temptable.ChangeDate as 'ProcessDate',
        (select USERs.username from [phaz-phpdw-04.phaz.vhs.net].[bianalytics].dbo.claimsaudit_users users where users.ezcapopid = temptable.opid) as 'DBUserName',        
        (select top 1 isnull(USERs1.username_tenet,'Unknown') from [phaz-phpdw-04.phaz.vhs.net].[bianalytics].dbo.claimsaudit_users users1 where users1.ezcapopid = temptable.opid) as 'DBUserName_tenet',        
        getdate() as 'DateImported',
        --CAST(temptable.[TotalClaimsWorked] as varchar(5)) + ltrim(rtrim(cast(temptable.[opid] as varchar(10)))) + CONVERT(varchar(8),temptable.[ChangeDate],112) + @LineofBusiness_LegacyReference as 'ChangeUserAndDate',
        --as of 2/8/2016 gettign rid of number grouper - that was killing us as of 1/2016
        cast(ltrim(rtrim(temptable.[opid])) as varchar(10)) + CONVERT(varchar(8),temptable.[ChangeDate],112) + @LineofBusiness_LegacyReference as 'ChangeUserAndDate',
        @LineofBusiness_LegacyReference as 'LOB'
    from
        #temp temptable
        left join [phaz-phpdw-04.phaz.vhs.net].bianalytics.dbo.CLAIMSAUDIT_CLAIMCOUNTS mainDB
                on
                    cast(ltrim(rtrim(temptable.opid)) as varchar(10)) + CONVERT(varchar(8),temptable.ChangeDate,112) + @LineofBusiness_LegacyReference =
                    mainDB.ChangeUserAndDate
    where
        mainDB.ChangeUserAndDate is null --we haven't already put this record in for a given Date, Analyst, LOB,and Count
        and
         cast(temptable.changedate as date) > cast('2015-08-10' as date)
    
      

END

Press the [</>] button, or put

    ```sql
    Your Code Here
    ```

around your code. (You can use the pencil icon to re-edit your original post and "apply" that, which would probably be helpful to others :slightly_smiling: )

Can you "test" this outside the APP? If so I would comment out the INSERT and see if the, subsequent, SELECT displays anything. It might be that the JOIN and:

WHERE mainDB.ChangeUserAndDate is null

is finding a value.

I don't know how many rows this is intended to INSERT per batch, but the syntax, including a LEFT JOIN to the remote server/database with a non-SARGable condition, is likely to perform terribly, so I suppose it's just possible that it is timing out (although I'd expect you to see an error, rather than "nothing inserts", in that situation).

For "nothing inserted" I think more likely that the WHERE clause is excluding all rows from the INSERT.

P.S. But across servers I also think it is possible that something is failing and no useful error is being reported; such things do seem to be a bit fragile, at times, IME!!

Couple of other observations in case helpful

FROM DatawarehouseCMS.dbo.AUDIT_DETAILS_V

You might like to consider using a SYNONYM for ALL 3-part naming. If you created a SYNONYM for "DatawarehouseCMS.dbo.AUDIT_DETAILS_V" which was, say, "DatawarehouseCMS.AUDIT_DETAILS_V" then you would change your syntax to:

FROM DatawarehouseCMS.AUDIT_DETAILS_V

but if the name of the external Database ever needed to change, OR the table's name WITHIN that DB, then you would just need to change your Synonym, rather than all your code :frowning:
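A minimal sketch of that idea (the local schema used as a namespace is just one possible convention, and the names here follow the example above):

```sql
-- One-off setup: a local schema used purely as a namespace for the synonyms
CREATE SCHEMA DatawarehouseCMS;
GO
CREATE SYNONYM DatawarehouseCMS.AUDIT_DETAILS_V
    FOR DatawarehouseCMS.dbo.AUDIT_DETAILS_V;
GO
-- Queries then use the two-part name:
--   SELECT ... FROM DatawarehouseCMS.AUDIT_DETAILS_V
-- If the external database is ever renamed, only the synonym changes:
--   DROP SYNONYM DatawarehouseCMS.AUDIT_DETAILS_V;
--   CREATE SYNONYM DatawarehouseCMS.AUDIT_DETAILS_V
--       FOR NewDbName.dbo.AUDIT_DETAILS_V;
```

(A schema and a database can share a name without conflict, since they live in separate namespaces.)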

I would consider the test

IF OBJECT_ID('tempdb..#temp') IS NOT NULL BEGIN DROP TABLE #temp END
;

for #TEMP should Error rather than silently DROP the table (or just do nothing, so that an error is raised later if it exists). Silently dropping it will cause a problem for anything upstream that created #TEMP before this SProc drops it :slightly_smiling: It might be a big task to then work out why the caller (or the caller's ... caller) falls over, and which child process silently deleted its #TEMP table.

If you are dropping it because it can exist within your UNIT TEST code then I would conditionally drop it within the Test, rather than in the Sproc.

You may well have different reasons for that DROP though :slightly_smiling:

and cast(audit.TIMESTAMP_CHANGED as date) >= cast(GETDATE()-59 as date)
and cast(audit.TIMESTAMP_CHANGED as date) < cast(GETDATE() as date)

is not SARGable and is likely to perform badly. If you have an index on audit.TIMESTAMP_CHANGED the improvement could be dramatic. You are already using a DATE, rather than a DATETIME, as the range limits, so the CAST on audit.TIMESTAMP_CHANGED is probably unnecessary, and removing it is the only change needed. (Note that any row where audit.TIMESTAMP_CHANGED has today's date is NOT included; I presume that is the intention. If not, then you need "Less than Tomorrow" rather than "Less than Today", and definitely not "Less than or equal to Today", as that WILL require the current CAST and will not be SARGable.)
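A sketch of the SARGable form (SQL 2008+ DECLARE-with-default syntax; assuming TIMESTAMP_CHANGED is a DATETIME column):

```sql
-- Compute the boundaries once, then compare the raw column so an index
-- on TIMESTAMP_CHANGED can be used for a range seek
declare @from date = cast(GETDATE() - 59 as date)  -- inclusive lower bound
declare @to   date = cast(GETDATE() as date)       -- exclusive upper bound (excludes today)

select count(*)
from DatawarehouseCMS.dbo.AUDIT_DETAILS_V audit
where audit.TIMESTAMP_CHANGED >= @from  -- no CAST on the column: index-friendly
  and audit.TIMESTAMP_CHANGED <  @to
```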

The SELECTs within the SELECT for the INSERT could be moved to JOINs / OUTER JOINs, which should be more efficient. The first one:

(select USERs.username 
from [phaz-phpdw-04.phaz.vhs.net].[bianalytics].dbo.claimsaudit_users users 
where users.ezcapopid = temptable.opid) as 'DBUserName'

will cause an error if multiple rows match. Presumably this is a unique-ID join and that could never happen (in which case better an Error Raised than using TOP 1, which would just hide the issue if it ever arose), but I reckon that also makes it a good candidate for a JOIN.
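A sketch of the JOIN version, reusing the names from the post (note the trade-off: if ezcapopid were ever non-unique, a JOIN silently multiplies rows, where the correlated subquery would raise an error):

```sql
select
    temptable.OPID                          as 'MC400ChangeUserID',
    users.username                          as 'DBUserName',
    isnull(users.username_tenet, 'Unknown') as 'DBUserName_tenet'
from #temp temptable
left join [phaz-phpdw-04.phaz.vhs.net].[bianalytics].dbo.claimsaudit_users users
    on users.ezcapopid = temptable.opid
```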

cast('2015-08-10' as date)

might be ambiguous. I'm a bit iffy on this, as the parsing algorithm for DATE is different to DATETIME. For DATETIME the only safe, non-ambiguous string format is "yyyymmdd" (no punctuation); that changed when the DATE datatype was introduced, which also allows "yyyy-mm-dd". FWIW I only use "yyyymmdd", as I still have old-style DATETIME datatypes in my database/code and thus prefer to stick to one format that works for both types rather than risk using an ambiguous string in the wrong place!
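The difference is easy to demonstrate: 'yyyymmdd' parses the same way regardless of session settings, while 'yyyy-mm-dd' is only guaranteed safe for the DATE type.

```sql
set dateformat dmy  -- e.g. a British-language login

select cast('20150810'   as datetime)  -- always 10 Aug 2015
select cast('2015-08-10' as date)      -- DATE ignores DATEFORMAT: 10 Aug 2015
select cast('2015-08-10' as datetime)  -- DATETIME honours it: parsed year-DAY-month here
```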

Hopefully of some interest, if not chuck it in the bit-bin.

Thanks Kristen for looking and all your helpful input. For the moment I'm just going to focus on discussing/researching the items that seem to be potential reasons for the failure.

First: I mentioned in my original post that I did indeed test the code outside the "app"- that was the first thing I did. The usual method in production is to execute the stored proc from within an SSIS package. So first I executed the proc from SQL Server Mgmt Studio, passing the same params as the SSIS package would have. That resulted in the same 'problem' - query execution successful, no error raised, no records inserted.
After that, of course, I commented out all of the appropriate things to make it "just code", not a procedure: declared the 2 variables needed to mimic the parameters, set their values to the same parameters previously passed in, and ran it. It worked perfectly that way. Which seems REALLY ODD to me, and prompts me to believe I'm up against some subtle bit of knowledge I need to learn that is making this act differently.

By the way - If you notice this code around the middle:
select * into #temp from cte
--testing:
--select * from #temp
I did, in fact, test and yes the #temp table gets populated perfectly as I'm expecting.
Then after testing that, I would go down to the SELECT that is the basis for the INSERT, and I would do the following:

  • Declare the 2 variables again since they are then out of scope/gone

  • Set their values to the same thing as before

  • Run the SELECT statement - and Yep- it comes back with all of the records it should, exactly as it should- so I know that's not an issue.

  • Finally, run the final insert in its entirety. RESULT: Records are inserted, as I would expect.
    (But when I execute the entire procedure, no records end up getting inserted - and no error that I can see).

All of this is why I posted my question with an emphasis on the fact that I've troubleshot this the usual ways, as "not" a stored procedure, and all selects/results are coming out correctly.

I do realize that there are those who see pushing data into a linked server as odd ... But I figure situations differ - in the case of this code, 99.9% of the heavy lifting is on Server A, so that's where I run my code. The data that gets pushed onto Server B (in that final insert) is very small - 100 records or so at a time. And it was working perfectly for a long time - I can't tell what changed.

Issues:

  1. Non-SARGable "WHERE mainDB.ChangeUserAndDate is null". I believe you are right, but in the case of this data, I'm trying to bring in aggregated counts of something, meant to be unique by attributes like "user_changed", "change_date", and "lineofbusiness". I want to aggregate them safely without ever duplicating (I will not rely on date ranges in the code to do this, and then hope the process never fails to run, messing up the date-range assumptions), and this is how I achieve that. I may try to improve it some day, but right now I'm satisfied with the speed of this procedure and must focus on the immediate problem (of non-insert).
  2. You mention that it's possible that it's timing out (due to issue #1), yet, as I said, the code works perfectly when not executed as a stored procedure - so I need to hone in on the difference between them. The only way I can relate your timing-out comment to the question at hand is if you are suggesting that the code executed as a proc is timing out / taking longer, compared to the exact same code executed as just a block of code. Are you suggesting this?
  3. Your comment: "P.S. but across-servers I also think it is possible that something is failing and no useful error is being reported; such things do seem to be a bit fragile, at time, IME !!"
    ..... I have no argument with that at all; I agree that the linked-server aspect certainly doesn't make me feel great about the whole proc. And I may switch to data flow tasks in SSIS if I can't solve this soon. But again I would answer: this comment doesn't take into account that my specific situation is such that the code IS working - every single time - when executed as blocks of code in SSMS, while the same code structured as a stored proc isn't doing the final insert. Linked server is a controlled variable, so to speak, in those two actions - don't you agree?

Thanks for comment on synonyms. That's something I will remember - appreciate it.

I really need to get to the bottom of the problem at hand though -- why the exact same code works fine when run as just code, but when executed as a stored proc doesn't do the final insert, with no visible error - even when also run in mgmt studio. Oh ... I just had a possible epiphany. Does executing a stored proc operate under potentially different permissions compared to just executing code? Hmmmm..
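That hunch can be checked directly. A sketch, using real system functions and catalog views (the proc name is taken from the original post): compare the security context seen ad hoc versus inside the proc, and check whether the proc carries an EXECUTE AS clause.

```sql
-- Run ad hoc in SSMS, and also temporarily SELECT/PRINT these inside the proc
select SUSER_SNAME()    as login_name,
       USER_NAME()      as db_user,
       ORIGINAL_LOGIN() as original_login;

-- NULL here means the proc runs as CALLER (the default); a value means it
-- was created WITH EXECUTE AS some other principal
select execute_as_principal_id
from sys.sql_modules
where object_id = object_id('dbo.CLAIMSAUDIT_ClaimCountsImport_EZCAP');
```

The linked server's login mappings can also differ per local login (see `sp_helplinkedsrvlogin`), which would make the remote side behave differently for different security contexts.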

Sorry, by "outside the APP" I meant outside the Application Code that calls the Procedure and NOT some isolated SQL code moved (and tested) outside the Stored Procedure.

Personally I never bother (or very rarely!) to move snippets of code outside an SProc in order to test them. I put PRINT statements (or similar) inside the SProc to see exactly what is going on and then run the SProc in a Test Harness which is typically something like:

BEGIN TRANSACTION
EXEC MySProc
    @Param1 = 123,
    @Param2 = 'ABC'
ROLLBACK

Force of habit makes me include the Transaction, even if the SProc is only doing a SELECT. (In this case a TRANSACTION might be a problem if it causes a Distributed Transaction and the Linked Server does not permit that. If that is a problem just comment out the INSERT bit, leaving the SELECT so that it displays)

We want to see exactly what is going on inside the SProc, including perhaps using PRINT to check the value of @PARAMETERS at various points within it.

When an INSERT goes wrong the first thing I want to do is check what it is trying to insert, and I would (temporarily) change an insert in the style of:

INSERT INTO MyTable (Col1, Col2, ...)
SELECT Col1, Col2, ...
FROM OtherTable
WHERE ...

into

PRINT 'MySProcName DEBUG(1):Select rows ...'
-- ' (This line only because the formatter is screwing up the highlighting !!!)
SELECT Col1, Col2, ...
INTO #TEMP
FROM OtherTable
    LEFT JOIN [phaz-phpdw-04.phaz.vhs.net]...
        ON ....
WHERE mainDB.ChangeUserAndDate is null
      AND ...
PRINT 'MySProcName DEBUG(2):Rows selected = '
         + CONVERT(varchar(20), @@ROWCOUNT)

-- For debugging only
PRINT 'MySProcName DEBUG(3):Dump of selected rows:'
SELECT TOP 10 *
FROM #TEMP

-- Attempt the insert
PRINT 'MySProcName DEBUG(4):Insert rows ...'
INSERT INTO MyTable (Col1, Col2, ...)
SELECT *
FROM #TEMP
PRINT 'MySProcName DEBUG(5):Rows inserted = '
         + CONVERT(varchar(20), @@ROWCOUNT)

Some further thoughts:

It may result in a very small number of rows being inserted, which is fine of course, but I see the risk being in the test which checks that none of the rows already exist.

Your LEFT JOIN [phaz-phpdw-04.phaz.vhs.net].bianalytics.dbo.CLAIMSAUDIT_CLAIMCOUNTS has the ON clause:

cast(ltrim(rtrim(temptable.opid)) as varchar(10)) + CONVERT(varchar(8),temptable.ChangeDate,112) + @LineofBusiness_LegacyReference =
                    mainDB.ChangeUserAndDate

When 4-part naming is used SQL can be very dumb, IME. You might be lucky and SQL figures out a smart way to do this, but given that this is not SARGable you run the risk that SQL passes ALL rows from the remote server to the current server and THEN does the ON test locally (and then finds that NONE of the rows match - which means a) that you can insert all rows - good! - but b) that ALL rows were transferred from the remote server to the local server, which may well be a disaster for performance).

Coding around this, so SQL won't make a dumb choice, is a PITA but we do tend to go to the trouble of doing that.
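One such workaround, sketched with the names from the post: pull just the key column from the remote table into a local #temp first, so the non-SARGable concatenation test runs entirely on the local server (all the remote keys still travel, but only one narrow column of them):

```sql
-- Fetch only the existing keys from the remote table, once
select mainDB.ChangeUserAndDate
into #existing
from [phaz-phpdw-04.phaz.vhs.net].bianalytics.dbo.CLAIMSAUDIT_CLAIMCOUNTS mainDB;

-- The "already inserted?" test now runs locally
select t.*
from #temp t
left join #existing e
    on cast(ltrim(rtrim(t.opid)) as varchar(10))
       + CONVERT(varchar(8), t.ChangeDate, 112)
       + @LineofBusiness_LegacyReference = e.ChangeUserAndDate
where e.ChangeUserAndDate is null;
```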

Others may tell me that I am worrying about something that won't actually happen in practice ...

I think this is unlikely (but clearly SOMETHING is going wrong!!)

The reason I think it is unlikely is that presumably you are fiddling with the SProc, e.g. adding DEBUG code, and recreating it, which should throw away any old, inefficient query plan and make a new one.

(Well ... I assume it will make a new query plan, but if it doesn't that COULD be the problem - i.e. the SProc is using a lousy, stale, existing, out-of-date query plan)

I'm not sure it will tell you anything useful, but you could run SQL Profiler (against the remote server) to see what SQL statements, if any, it is actually receiving. SQL Profiler should give you an idea of how long they take to run, and how many rows are in the resultset.

"Sorry, by "outside the APP" I meant outside the Application Code that
calls the Procedure and NOT some isolated SQL code moved (and tested)
outside the Stored Procedure."

Hmm, that seems like it would be of even less value than the test of some isolated code outside the stored procedure, but no problem - I see what you mean.

In this case there really is no app - going back to my original post again, as mentioned, the problem manifests itself even when simply executed in sql server mgmt studio - there is no "calling app".

However, the problem does NOT manifest itself when the very same code that the procedure contains (with some obvious adjustments so that it is not actually a procedure) is executed from the very same connection, by the same person, still in mgmt studio.

I appreciate all of your comments as many of them I'm sure I can implement to improve.
It doesn't seem, though, like so far we've hit on anything that speaks to the specific problem I'm having, which is the insert failing when done through executing the SPROC, but not failing when the exact same code is executed (the "some isolated code" as you say :slightly_smiling: )

The issue, as I see it, is this:

Some Application sends an EXEC MySProc call to SQL, which executes that SProc.

I think you can simulate that, probably 100%, by doing

BEGIN TRANSACTION
EXEC MySProc
    @Param1 = 123,
    @Param2 = 'ABC'
ROLLBACK

in SSMS.

However, I think that running individual statements in SSMS will show you the results of that particular query, but not how the @Parameters evolve during execution of the SProc (e.g. if there is a bug in there somewhere), NOR how SQL will handle creation of the Query Plan for the whole SProc, as distinct from single statements executed in isolation.

It is also possible that the SProc was created with different SET options than are (currently) set in SSMS, and so on.
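Those recorded options are easy to compare (real catalog columns; the proc name is from the original post):

```sql
-- The SET options baked in when the proc was created/altered
select uses_ansi_nulls, uses_quoted_identifier
from sys.sql_modules
where object_id = object_id('dbo.CLAIMSAUDIT_ClaimCountsImport_EZCAP');

-- ...versus the options in effect for the current SSMS session
select SESSIONPROPERTY('ANSI_NULLS')        as session_ansi_nulls,
       SESSIONPROPERTY('QUOTED_IDENTIFIER') as session_quoted_identifier;
```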

Hence why I prefer to put DEBUG PRINT statements in the SProc, rather than running individual statements in isolation.

My mistake, I was assuming this Sproc was called from "something else" and you were needing to debug it. Quite often I come across developers who can't figure out how to "uncouple" the SQL from their APP, and thus can't really test it outside their APP.

You are in perfect shape. You can EXEC the SProc and it will exhibit the broken outcome ... and it will do it repeatably. All the more reason, IMHO, to put the debugging code inside the SProc to incrementally home in on whatever is causing the problem. It might be a silly typo or somesuch (I know, unlikely, and it's been working OK for months ... but something must have changed somewhere).

I'd stuff it full of

PRINT '@MyParameter should be 100, actual value = ' + @MyParameter

type mundane things and then laboriously check them all looking for that needle-in-haystack.

So stop testing it using isolated code, that obviously works :slightly_smiling: Test it using debug statements in the SProc until you find the one bit that is failing - and then you can make that work too :slightly_smiling:

I've checked your code, and I don't see anything untoward with it. So it's going to be something spooky, or something blindingly obvious that we have both overlooked.

Particularly spooky for me is that it appears to run OK, but just not to insert any rows. To my mind that suggests that the LEFT JOIN and mainDB.ChangeUserAndDate IS NULL test, to check that the rows do not already exist, is finding NO work to do. Your isolated tests suggest otherwise, and if that is also the case (when run inside the SProc) then it must be something else - e.g. something rolling back the transaction, or a TIMEOUT, but that is what you need to find.

Hence my suggestion to split your INSERT INTO ... SELECT ... FROM into two bits. First just do the SELECT and put the rows in a #TEMP table, so you can see how many rows, if any, were found.

Then try to insert from that #TEMP into the actual table and see how many rows that inserts - and if that is different to the number that were selected earlier.

If the first SELECT ... INTO #TEMP finds nothing to do, then remove the OUTER JOIN - i.e. so that the SELECT always finds any matching rows, regardless of whether they exist on the remote server. Then do the OUTER JOIN test to see how many exist in the remote table. Then try to do the INSERT on any that do not already exist.

"So stop testing it using isolated code, that obviously works Test it using debug statements in the SProc until you find the one bit that is failing - and then you can make that work too"

LOL ok, point taken. However, keep in mind that if the local server isn't returning whatever errors are apparently coming from the linked server, it's doubtful that they will somehow magically surface from Debug and Print statements.

Same thing with error handling. If the error isn't exposed, it isn't exposed.

But let me try some print stuff and see what is going on. In the stored proc as you suggested. :slightly_smiling:

OK. After this line of code in the original proc:
select * into #temp from cte
I added this code:
declare @cnt int
set @cnt = (select count(*) from #temp)
print 'count of records in #temp = ' + cast(@cnt as varchar(5))

The message in the mgmt studio window was "count of records in #temp = 0".
Assuming I didn't make any impactful mistake in the code that gets that information to the debug window in the first place, it seems that #temp is empty.

I can't for the life of me think of why it would "suddenly" behave this way after working all right for a year or so.

Regardless, I started wondering why I use the temp table at all, considering I already have a CTE and the current sequence of code lines is pretty much ready to use it too. I guess because when I developed it, the temp table was too handy for troubleshooting purposes to resist.

I changed the code to simply do that final INSERT with a SELECT that gets records from the CTE [as temptable] instead of using the real temp table at all. That didn't work either.

I started to wonder if the whole thing has to do with variable scope, with the left join on the linked-server table simply contradicting my assumption about the scope of my variable. Sad, since it seems intuitive that SQL should handle that - and why would that "suddenly" change either??

Then I remembered I already know the temp table failed to populate so I went back to that--quite easy to get distracted trying to solve problems that don't exist, as I'm sure you know. :slightly_smiling:
So I decided to print the count(*) from the CTE. That's not really possible, I don't think, because after a with cte as (query) block, there are only a few things that can come next (generally a SELECT from the cte, or else a comma and more CTEs) - not a "print" statement.

So I settled for printing the results of the query without structuring it as a CTE. I don't like this, because it doesn't necessarily guarantee that whatever I find was the problem - since now we're majorly restructuring the code merely to DO the troubleshooting.

Fortunately, my fears of a "false success" were allayed, since it still failed. Using this code, the printed result was 0:

declare @cnt int
set @cnt = (select count(*) from
                                (
                                select
                                COUNT(distinct c.claimno) as 'TotalClaimsWorked',
                                ltrim(rtrim(audit.OP_ID)) as 'OPID',
                                --(select USERs.fullname from [phaz-phpdw-04.phaz.vhs.net].[bianalytics].dbo.claimsaudit_users users where users.ezcapopid = Audit.op_id) as 'Operator',
                                CAST(audit.timestamp_changed as DATE) as 'ChangeDate'
                            from
                                DatawarehouseCMS.dbo.AUDIT_DETAILS_V  audit
                                inner join DatawarehouseCMS.dbo.CLAIM_MASTERS_V c on audit.CHANGE_REC_KEY  = c.CLAIMNO 
                            where
                                c.COMPANY_ID=@LineofBusiness_EzcapTerm
                                and cast(audit.TIMESTAMP_CHANGED as date) >= cast(GETDATE()-10 as date) --change back to 10 after remediation 2/8/2016
                                and cast(audit.TIMESTAMP_CHANGED as date) < cast(GETDATE() as date) --added 2/8/2016, don't pick up today's records. and put a false permanent ZERO.
                                and ltrim(rtrim(audit.OP_ID)) in (select users.EZCAPOPID from [phaz-phpdw-04.phaz.vhs.net].[BIAnalytics].dbo.CLAIMSAUDIT_USERS users)
                                and audit.FIELD_NAME='STATUS'
                                --and audit.NEW_VALUE in ('1','9')
                                and audit.NEW_VALUE in ('1') --changed 2/22/2016 per email conversation w/Kim Lindsey
                            group by ltrim(rtrim(audit.OP_ID)), CAST(audit.timestamp_changed as date)
                                ) qry1    
            ) 

            
print 'total output is ' + cast(@cnt as varchar(5))

I also verified that the @startdate and @enddate variables are still good at that point by Printing them too. At the end of the day the FIRST select statement is generating nothing.

At a bit of a loss at this point. All this printing is fun but I may abandon my investigative efforts and use an SSIS package with a data flow. Darn linked servers!!!!!!!!!!!!

I know how frustrating it is ... but , personally, I see that as progress. You have narrowed it down to a "bit that is not working"

I can see how you want to get the value of COUNT(*), but I must admit that I normally rely on the value of @@ROWCOUNT just after a statement, rather than hijacking the statement into a COUNT(*) ... but when I, too, get desperate, Anything Goes of course.

Did you put a PRINT of @LineofBusiness_EzcapTerm just-before, or just-after, the SELECT? (Obviously: watch out in case it is NULL and causes your PRINT to output nothing ...)

By the by, I do this debugging task slightly differently. All (and I mean 100%!!) of my SProcs have a final parameter of @Debug, and then I can use SELECT instead of PRINT for debugging

IF @Debug = 1 SELECT [MySprocName] = 'DEBUG(1)', 
    [RowCount]=@@ROWCOUNT, 
    [@LineofBusiness_EzcapTerm]=@LineofBusiness_EzcapTerm, 
    ...

which allows me to easily "list" as many parameters as I need to, and not have to worry about any of them being NULL, or having to convert them to Strings for PRINT. I start with the SProc name and a, hopefully, unique "DEBUG(1)" indicator so I know where in the SProc the statement was (I can just FIND "DEBUG(1)" in the code), and then if SProc-A calls SProc-B I can see debug statements from the child SProc. If an SProc calls another one then it passes on the @Debug parameter's value so that everything downstream outputs its debugging information.

I leave all the DEBUG statements in the production code so that, if it breaks!!, I can just re-run a test EXEC command with @Debug=1 and get whatever Info I originally built in ... if that doesn't work I add more DEBUG statements and they are there for next time ...

Seems inconceivable that @LineofBusiness_EzcapTerm has the wrong value. (Unless you have only posted part of the code for brevity / security here). Unless the varchar(9) definition is too small for the actual value.

The SELECT is on

DatawarehouseCMS.dbo.AUDIT_DETAILS_V
and
DatawarehouseCMS.dbo.CLAIM_MASTERS_V

which are presumably two VIEWS in a different (DatawarehouseCMS) database on the current server.

Are those VIEWs doing anything spooky, like connecting to a Remote Server, or using a FUNCTION to get some "range value" that is messing you up? (Can't see how it would behave differently from you just running that SELECT "manually", but maybe the VIEW has a test/"parameter" that can behave differently when called from an SProc versus a plain SELECT.)

I would move this

(select users.EZCAPOPID 
          from [phaz-phpdw-04.phaz.vhs.net].[BIAnalytics].dbo.CLAIMSAUDIT_USERS users)

out of the SELECT (e.g. into a #TEMP_USERS table) so that, again, you can check the @@ROWCOUNT on that too ... it's going to a Linked Server, so might be failing. Perhaps a different userID / permissions / something when coming from the SProc rather than SSMS direct. (I cannot think of ANYTHING that could be different though ... but clearly something is ...)

Has the [EZCAPOPID] column on the remote gained some leading space, such that the
ltrim(rtrim(audit.OP_ID))
no longer matches it? (Again, that should not work differently between SProc and SSMS.) But maybe those are not matching up for some reason - perhaps the data from the remote is getting "changed" in some way. You could put in a

IF @Debug=1 SELECT TOP 100 [MySproc]='DEBUG(2)', 
          [EZCAPOPID] = ']' + EZCAPOPID + '[' FROM #TEMP_USERS

to review that

Not everyone sets up SQL Server the same way. If I put in a semi-colon as a terminator, the code will not run, but I see code out there that uses it in 2008 R2 just like mine. When you test 'bits of code' successfully, there are two things different from running the stored procedure. One is the ALTER at the beginning, and the other is the GO and END at the tail. Try adding a "GO" and see if it alters anything.

Not everyone sets up SQL Server the same way. If I put in a semi-colon as a terminator, the code will not run, but I see code out there that uses it in 2008 R2 just like mine

Hmm. That isn't actually something affected by how you 'set up SQL Server'. Microsoft indicates, "When a CTE is used in a statement that is part of a batch, the statement before it must be followed by a semicolon". If you don't have to use one, then your CTE isn't used in a statement that is part of a batch with a statement before it - 2008 or later.

When you test 'bits of code' successfully, there are two things different from running the stored procedure. One is the ALTER at the beginning, and the other is the GO and END at the tail

Yes, of course in order to run the code not as a procedure, a number of things must be commented out - some being the things you mentioned, others being the BEGIN and END (unless you have some reason to want them around your non-procedural code in its entirety, which would be odd). Taking out the 'AS' ... declaring the variables properly, and setting them to any parameters you would have passed in to the proc - a number of things. I'm aware of those and have done them - as mentioned in the past several posts.

The biggest thing I do when testing bits of code is testing the selects and counts of various things along the way, to hone in on the problem.

In this case, as mentioned, I did all of that (and more), with no success. I believe there is something about the early select in my original post which the remote server does not like. Putting any more effort into identifying what that might be - and then hoping I've found the one and only problem - is not worth any more time given the pressure I'm under to make it work again.

I'm switching to a data flow task that lets the data be separate and reaches out to each server as necessary in an SSIS package. Thanks for looking.

I thought you might say that :slightly_smiling: and understand ... I've got plenty of T-shirts we could swap ...

... but ...

... if it were me I would be worried that whatever is causing this to break, given it has been working fine for ages, might well cause something else to break. For example, maybe the Permissions / UserID / password / something in the Linked Server CONFIG has changed / become broken. Maybe the Locale of the remote server has changed ... that's probably going to cause something else to break.

There are lots of ways of doing things. I like to think that having DEBUG statements in the SProc means that, if the need arises, I can debug a Production problem / SProc quickly ... but then that's how I do it, and I haven't tried doing it any other way!! I do think it can be important to avoid doing things outside the SProc so that I am not "testing a hybrid case" though ... but of course that doesn't always work - the permissions when SQL Agent executes an SProc (in a scheduled job) are different from when I am logged on to SSMS ... and so it goes on.

10% creativity, 90% fixing all the mistakes I made during the 10% ...

10% creativity, 90% fixing all the mistakes I made during the 10% ...

I prefer to think of that as 'polishing' and 'refining'. Make me sleep better at night that way!