SQL Server Linked Server to ODBC drops the connection every 4-5 minutes

Hi

I have created a linked server to an ODBC driver.

I am running a SQL stored procedure that inserts a lot of rows into the linked server.

The script uses a cursor to get the records from the SQL Server database and inserts them into the linked server one by one.

The script takes approximately 1-2 hours to complete.

I noticed the linked server sending a disconnect command to the ODBC connector every 4-5 minutes.

It reconnects automatically on the next query, but I would like the linked server not to send the disconnect command to the ODBC driver at all.

I am unable to find any Auto Close/Disconnect setting for a linked server.

I have already tried increasing the query and connection timeouts to 99999, but it made no difference.

Regards

Raj
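
(For reference, a minimal sketch of where the timeouts mentioned above are usually set, assuming the linked server is named QREMOTE as in the script later in this thread; the values are only placeholders. Note that neither option controls when the provider connection itself is closed, which is the behaviour being asked about.)

-- Server-wide default for remote queries (0 = no timeout)
EXEC sp_configure 'remote query timeout', 0;
RECONFIGURE;

-- Per-linked-server overrides (0 = fall back to the server-wide settings)
EXEC sp_serveroption @server = 'QREMOTE', @optname = 'query timeout', @optvalue = '0';
EXEC sp_serveroption @server = 'QREMOTE', @optname = 'connect timeout', @optvalue = '0';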

Why the cursor? Set-based logic is the go-to solution.

I have the source data available in SQL Server and I have to go invoice by invoice.
The destination does not support bulk insert because it has to generate a unique invoice number and make accounting entries for each invoice.
Thus, I have to pass the invoice lines and the invoice header, then go to the next invoice record and do the same.

-Regards
Raj

So...two inserts per loop? (It would help if you posted your DML.)

The problem is not the number of inserts.

The example below is the normal loop:
Start
insert line 1
insert line 2
insert line 3
insert line 4
and so on till X
Insert header of invoice
move to next record and go to Start.

The problem happens when the above script is running and it reaches the 5th minute:

Start
insert line 1
insert line 2 (SQLDisconnect gets fired automatically by SQL Server)
insert line 3 (SQLConnect gets fired and the query is executed)
insert line 4
and so on till X
Insert header of invoice
move to next record and go to Start.

It is not fixed on which line the SQLDisconnect will be fired, but it is normally around the 5th minute.
So at the destination, lines 1 and 2 get discarded because the invoice header is not inserted, and I am left with an invoice starting from line 3 onwards, which creates a mismatch.

-Regards
Raj

Please post your script (all of it).

Hi,

I do not suspect any issue with the script. I would like to know if the auto disconnect of the linked server is the default behaviour of SQL Server?

Here is the script.

SET NOCOUNT ON

DECLARE curOutput CURSOR FOR SELECT CustomerRefListID,TemplateRefListID,TxnDate,TermsRefListID,CustomerMsgRefListID,InvoiceLineItemRefListID,InvoiceLineQuantity,InvoiceLineRate,InvoiceLineAmount,FQSavetoCache
FROM dbo.QBQODBC_INVOICEPOSTINGDATA
WHERE CustomerRefListID IS NOT NULL
ORDER BY CustomerRefListID, FQSavetoCache DESC

DECLARE @CustomerID VARCHAR(100)
DECLARE @TemplateID VARCHAR(100)
DECLARE @TransactionDate DATETIME
DECLARE @CustomerTermsID VARCHAR(100)
DECLARE @CustomerMsgID VARCHAR(100)
DECLARE @ItemID VARCHAR(100)
DECLARE @Quantity DECIMAL(18,2)
DECLARE @Rate DECIMAL(18,2)
DECLARE @Amount DECIMAL(18,2)
DECLARE @QBFlag INT
DECLARE @i INT

SET @i = 0

OPEN curOutput

FETCH NEXT
FROM curOutput
INTO @CustomerID,@TemplateID,@TransactionDate,@CustomerTermsID,@CustomerMsgID,@ItemID,@Quantity,@Rate,@Amount,@QBFlag;
--Subroutine for QB insert
WHILE @@FETCH_STATUS = 0
BEGIN
	INSERT INTO QREMOTE...INVOICELINE (
		CustomerRefListID, TemplateRefListID, TxnDate, TermsRefListID, CustomerMsgRefListID,
		InvoiceLineItemRefListID, InvoiceLineQuantity, InvoiceLineRate, InvoiceLineAmount, FQSaveToCache
		)
	VALUES (
		@CustomerID,
		@TemplateID,
		@TransactionDate,
		@CustomerTermsID,
		@CustomerMsgID,
		@ItemID,
		@Quantity,
		@Rate,
		@Amount,
		@QBFlag
		)

	SET @i = @i + 1		

	PRINT	'Transaction quantity of ' + CAST(@Quantity AS VARCHAR(12)) +  ' for item ID ' + @ItemID + ' for customer ID ' + @CustomerID + ' is now posted.'

	FETCH	NEXT
	FROM	curOutput
	INTO		@CustomerID,
				@TemplateID,
				@TransactionDate,
				@CustomerTermsID,
				@CustomerMsgID,						
				@ItemID,
				@Quantity,
				@Rate,
				@Amount,
				@QBFlag ;
END

--End subroutine for QB insert
CLOSE curOutput
DEALLOCATE curOutput

Looked at the script. I could be wrong, but I cannot see any reason to use a cursor-based solution here. A simple

INSERT INTO QREMOTE...INVOICELINE (
	CustomerRefListID
	,TemplateRefListID
	,TxnDate
	,TermsRefListID
	,CustomerMsgRefListID
	,InvoiceLineItemRefListID
	,InvoiceLineQuantity
	,InvoiceLineRate
	,InvoiceLineAmount
	,FQSaveToCache
	)
SELECT CustomerRefListID
	,TemplateRefListID
	,TxnDate
	,TermsRefListID
	,CustomerMsgRefListID
	,InvoiceLineItemRefListID
	,InvoiceLineQuantity
	,InvoiceLineRate
	,InvoiceLineAmount
	,FQSavetoCache
FROM dbo.QBQODBC_INVOICEPOSTINGDATA
WHERE CustomerRefListID IS NOT NULL

should do the trick and eliminate your timeouts.

I would like to share that the query timeout is not the issue.

The script runs for hours, and it is perfectly fine that it takes hours. It does get completed, processing all the records.

My issue is with SQL Server's behaviour: WHY is it sending the SQLDisconnect command to the ODBC driver and reconnecting every 4-5 minutes?
Because it sends the disconnect command, a few of the records go missing at the destination.

I cannot use the script you shared, as the destination has to process business rules on each insert and cannot work in the above bulk insert scenario.

Where are the business rules? I cannot see them in your script.

They fire on each insert at the destination, i.e. the ODBC driver the linked server is connected to.

Via a trigger? Then it should make no difference whether you do it set-based or RBAR. OTOH if the trigger is written poorly (e.g. expecting only one row -- classic mistake!) then yeah, you're kinda stuck -- unless you can fix the trigger. Maybe that's where we should focus our efforts.
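
(For illustration only -- the destination here is an ODBC driver rather than a SQL Server table, but this is the classic single-row trigger mistake being referred to. The table, column, and procedure names below are made up.)

-- Poorly written: assumes the inserted pseudo-table contains exactly one row,
-- so a multi-row (set-based) INSERT silently processes only one of them.
CREATE TRIGGER dbo.trg_InvoiceLine_SingleRow ON dbo.InvoiceLine
AFTER INSERT
AS
BEGIN
	DECLARE @InvoiceID INT;
	SELECT @InvoiceID = InvoiceID FROM inserted;   -- grabs one arbitrary row only
	EXEC dbo.ApplyBusinessRules @InvoiceID;        -- hypothetical per-invoice procedure
END
GO

-- Set-based version: handles one row or thousands the same way.
CREATE TRIGGER dbo.trg_InvoiceLine_SetBased ON dbo.InvoiceLine
AFTER INSERT
AS
BEGIN
	INSERT INTO dbo.PendingRuleQueue (InvoiceID)   -- hypothetical work-queue table
	SELECT DISTINCT InvoiceID FROM inserted;       -- every inserted row is covered
END
GO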

No changes can be made to the destination.

Again, my point here is: why is SQL Server sending the SQLDisconnect command?

Is it the default behaviour of SQL Server that a linked server will send the SQLDisconnect command to ODBC every 4-5 minutes?

Could be a TCP setting. In SQL Server Configuration Manager, what is the TCP KeepAlive set to? (On my box it's 5 minutes -- the default, I think.)
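
(For anyone hunting for the setting mentioned above -- from memory, and depending on the SQL Server version, it lives here; the value is in milliseconds, and a restart of the SQL Server service is normally needed for a change to take effect.)

SQL Server Configuration Manager
  > SQL Server Network Configuration
    > Protocols for <your instance>
      > TCP/IP > Properties > Keep Alive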

I will try changing the values and see if that helps.
Thank you for helping and sharing the information.

I will let you know the outcome.

No that "SQL Configuration Manager, TCP keepalive " does not make any difference.

Any update or any other suggestion?

I appreciate that you have said there is a requirement to process this RBAR, but I can't help thinking that IF you did it set-based it would be all done within 5 minutes, so if that 5 minutes is a fixed time limit for something-or-other, the problem would go away if the operation could be performed set-based.

Another way to look at it would be to make the update set-based, and then have something at the far end process the rows RBAR for the benefit of the invoicing engine. A crude example would be to produce a CSV file of "everything", pass that to the remote end, and have it "import" it RBAR into the invoicing process -- see the sketch below.
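
(A minimal sketch of the export half of that idea, run from a command prompt on the SQL Server box, using the staging table from the script above. The server name, output path, and comma delimiter are only placeholders; the import side would be whatever the remote invoicing process can actually consume.)

REM Export everything pending to a comma-separated file, using a trusted connection
bcp "SELECT CustomerRefListID, TemplateRefListID, TxnDate, TermsRefListID, CustomerMsgRefListID, InvoiceLineItemRefListID, InvoiceLineQuantity, InvoiceLineRate, InvoiceLineAmount, FQSavetoCache FROM dbo.QBQODBC_INVOICEPOSTINGDATA WHERE CustomerRefListID IS NOT NULL ORDER BY CustomerRefListID, FQSavetoCache DESC" queryout "C:\temp\invoices.csv" -c -t, -S YOURSERVER -T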

Maybe it's not a real-world problem for you, but that type of cursor RBAR processing always scares me: something may come along wanting to update the table in the middle of the CURSOR retrieving the rows. I prefer a set-based, atomic solution in order to minimise how long things are locked and the chance that anything can interfere. Even atomic stuff becomes at risk once it is multi-statement, if there are idiots about who pepper NOLOCK throughout their code ...