Help needed to do the below in a single sequential execution

Hello,

I am using SQL Server 2012. I would like to move some master-detail records into their archive tables. The archive tables do not contain identical columns; they hold a reduced set of data.

Before the archive operation both tables are linked on the orderid column (it exists in both the master and detail tables with the same column name). After the archive operation there is no orderid column any more, but there is a uuid column, and the master-detail link should be on that uuid column (it exists on the master table but does not exist on the detail table).

The statements below work for moving the header records:

insert into einvoicearchive
select uuid, current_timestamp, year(current_timestamp), invoiceamount, invoicetype, buyeremail, fullname, :entegrator, xml
from orders
where cast(lastinvoicesendstatus as varchar(5)) = 'OK';

delete from orders
where cast(lastinvoicesendstatus as varchar(5)) = 'OK';

What I am after is to add two more statements below the ones above for moving the detail records. So far, I can select the detail records using the statement below:

select 'UUID_of_header_record_to_be_here', productid, productname, quantity, price, calculatedtaxamount, calculatedpricewithouttax
from ordersdetail
where orderid not in (select orderid from orders)

What is missing from the selected detail records is the linked uuid column.

I am not sure whether I can solve this without application coding, using SQL statements only.

I am also after reasonably fast execution and would appreciate suggestions on that. I only have an index on the orderid column on both the master and detail tables.

Any help is appreciated.

Thanks & Regards,
Ertan


Take a look at OUTPUT - https://docs.microsoft.com/en-us/sql/t-sql/queries/output-clause-transact-sql?view=sql-server-ver15

You could return a list of items from the header table to be archived, and then use that list to delete the detail data and the header data. The deletes can then use OUTPUT to both archive and delete the data.

select ...
into #itemstobearchived
from ...

delete od
output ... into archivetable
from ordersdetail od
join #itemstobearchived a on a.orderid = od.orderid;

delete o
output ... into archivetable
from orders o
join #itemstobearchived a on a.orderid = o.orderid;

I would put the above into a try/catch, with an explicit transaction to roll back if any errors occur.
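A minimal sketch of that shape (the select into #itemstobearchived and the two deletes from the sketch above would go where the comment is) could be:

begin try
	begin transaction;

	-- select ... into #itemstobearchived and the two delete ... output statements go here

	commit transaction;
end try
begin catch
	if @@TRANCOUNT > 0
		rollback transaction;
	throw;  -- rethrow so the calling application sees the original error
end catch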

You should also remove the cast in the where clause; it will prevent an index from being used and isn't needed.


Hi,

Thanks for pointing me in the right direction. My latest version is below, and it is initially working fine for me.

begin transaction;

begin try
	select orderid, uuid, current_timestamp "invoicedate", year(current_timestamp) "year", invoiceamount, invoicetype, buyeremail, fullname, xml
	into #itemstobearchived
	from orders
	where cast(lastinvoicesendstatus as varchar(2)) = 'OK';

	delete od
	output a.uuid, deleted.productid, deleted.productname, deleted.quantity, deleted.price, deleted.calculatedtaxamount, deleted.calculatedpricewithouttax into einvoicearchivedetail
	from ordersdetail od
	join #itemstobearchived a on a.orderid = od.orderid;

	delete o
	output a.uuid, a.invoicedate, a.year, deleted.invoiceamount, deleted.invoicetype, deleted.buyeremail, deleted.fullname, '', deleted.xml into einvoicearchive
	from orders o
	join #itemstobearchived a on a.orderid = o.orderid;
end try
begin catch
	rollback transaction;
end catch

if @@TRANCOUNT > 0
	commit transaction;

begin try
	drop table #itemstobearchived;
end try
begin catch
end catch

I am just not sure whether the deleted data is being filtered correctly by joining only to the temporary table. I am going to test in detail using a bigger data set.
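One rough way to sanity-check that, while #itemstobearchived still exists, is to compare row counts (a sketch only; it assumes the archive tables expose the header uuid in a column named uuid, which is not spelled out above):

select
	(select count(*) from #itemstobearchived) as staged_headers,
	-- headers that actually landed in the archive for the staged uuids
	(select count(*) from einvoicearchive ea
	 where ea.uuid in (select uuid from #itemstobearchived)) as archived_headers,
	-- detail rows that landed in the archive for the same uuids
	(select count(*) from einvoicearchivedetail ed
	 where ed.uuid in (select uuid from #itemstobearchived)) as archived_details;

staged_headers and archived_headers should be equal after the run.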

The lastinvoicesendstatus column is of datatype "text", and I get errors if I do not cast it to varchar.

I did test with a bigger data set, and the delete is indeed limited to the records in the temporary table.

Ahh - that really should be updated to use varchar(max) instead of the deprecated 'text' data type.
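If the schema can be changed, the one-off conversion could be as simple as the statement below (a sketch only; it assumes nothing else, such as views or application code, depends on the text data type):

-- converts the deprecated text column in place; existing data is preserved
alter table orders alter column lastinvoicesendstatus varchar(max);

After that, the cast in the where clause is no longer needed, since varchar(max) can be compared with = directly.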

You don't need the try/catch to drop the temp table - you can use this:

If object_id('tempdb..#itemstobearchived') Is Not Null
   Drop Table #itemstobearchived;

Hi,

I have been using the solution for a while, and recently I needed some small modifications to it (adding a column here and there), which led me to deadlock or transaction count mismatch errors. I simply cannot find where the problem is and am asking for help here.

My current SQL, which has the problem, is below:

begin transaction;

begin try
	select 1 as marketplace, orderid, uuid, current_timestamp "invoicedate", year(current_timestamp) "year", invoiceamount, buyeremail, fullname, xml
	into #itemstobearchived
	from n11orders
	where invoiced = '1';

	delete od
	output a.uuid, deleted.productid, deleted.productname, deleted.quantity, deleted.price, deleted.calculatedtaxamount, deleted.calculatedpricewithouttax into einvoicearchivedetail
	from n11ordersdetail od
	join #itemstobearchived a on a.orderid = od.orderid;

	delete o
	output a.marketplace, a.orderid, a.uuid, a.invoicedate, a.year, deleted.invoiceamount, deleted.buyeremail, deleted.fullname, 'Test', deleted.xml into einvoicearchive
	from n11orders o
	join #itemstobearchived a on a.orderid = o.orderid;
end try
begin catch
	rollback transaction;
end catch

if @@TRANCOUNT > 0
	commit transaction;

begin try
	drop table #itemstobearchived;
end try
begin catch
end catch

The fixed value 1 for the marketplace column in the first select changes between runs, as does 'Test' in the last delete statement. The complete SQL is executed from an application.

I do not know how to debug this line by line using SQL Server Management Studio (I don't know if that is even possible). To find where the problem is, I manually execute the statements by selecting them, without using transactions at all.

I get an execution that never completes (it keeps running until I stop it) when I try to run the first delete statement, the one starting with delete od.

I fail to see where the problem is. I would appreciate it if anybody could point me toward something to check.

Thanks & Regards,
Ertan