Running Total very slow

Thanks, guys, for your help. What Scott has given me has worked. For some reason I had thought that the Entry No_ column was an NVARCHAR data type, but I was mistaken; it is an INT, as Scott suspected from the beginning. I am now capturing everything out of the temp table correctly. We do not have the licensing or access rights to change our underlying accounting system, Navision. Ideally I would like to just add indexes to these tables and then test, but I might have to do it the temp table way. Let me say thanks to everyone for their help......Scott especially.

Oddly enough, that link dismisses the most powerful method as an unsupported anomaly. Too bad there is so little understanding there.

Again, just curious, how many rows are you talking about and how long is it now taking?

I didn't think the quirky update was needed here, particularly given its level of "black box"-ness. You just sort of have to trust that it works -- a lot of people, even some well-known ones, just aren't willing to do that.

Like I said... lots of misunderstanding and FUD. You don't have to trust that it works. There's an easy way to have it check for errors to let you know whether it worked or not.

Try the solution used in the post below to calculate a running total in SQL Server:
Running Total in SQL Server
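For anyone reading along without following the link: posts like that one typically compute the running total with a CTE plus a correlated subquery (a "triangular join"). A minimal sketch of that pattern, with assumed table and column names (dbo.CustLedger, CustomerNo, EntryNo, Amount, PostingDate) rather than the poster's actual Navision schema:

```sql
-- Hedged sketch of a CTE + correlated-subquery running total.
-- All object names here are assumptions for illustration.
;WITH Ledger AS
(
    SELECT CustomerNo, EntryNo, Amount
    FROM   dbo.CustLedger
    WHERE  PostingDate >= '20170101'      -- current-year filter (assumed)
)
SELECT  l.CustomerNo,
        l.EntryNo,
        l.Amount,
        RunningTotal = (SELECT SUM(l2.Amount)
                        FROM   Ledger l2
                        WHERE  l2.CustomerNo = l.CustomerNo
                          AND  l2.EntryNo   <= l.EntryNo)  -- sum of this row and all prior entries
FROM    Ledger l
ORDER BY l.CustomerNo, l.EntryNo;
```

Be aware that the correlated subquery re-sums all prior rows for every row, so the work grows roughly with the square of the rows per customer; at ~10k entries per customer, that can get slow.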

So can you index a CTE? And can this be used in a View, since temp tables can't be? It seems a bit slower, but it could solve my problem, as my DBA states that he needs a View to place in an SSIS package.

@osupratt1,

Please answer my previous question. How many total rows are we talking about?

We have 8 companies in my division. Each company has from 100 to 1,000 customers. The average quantity of records per customer is about 10k. So for company 1, with 50 customers at 10k records per customer, that would be 500k records grouped by customer number, where we would see 50 records showing the MAX running total (high balance) for the current year. Multiply that by 8.

The CTE solution does not calculate the correct running total. The verified totals do not match. I only want the entries for the current year. If I filter for one particular customer, it shows the same number of records and the 'amount' (the amount that is getting totaled) is the same. It is not calculating correctly, so I'm not sure whether there is a >= entry-number condition that needs to go somewhere.

I am coming to the party late, so I don't know the details of the CTE solution that you referred to in the above post, and I couldn't see one in the previous posts. If you filter for one customer and you are still getting the same results, that usually indicates something wrong with the joins. If you are able to post a test case (one that can be copied and run in an SSMS window), that would help.

If you are concerned about performance in running totals, there are 3 options that I can think of:

a) If you are on SQL 2012 or later, use windowing functions ( i.e., SUM() OVER()).

b) On an earlier version, use the quirky update. It is an undocumented feature, but as far as I know it has never been shown to give wrong results if you follow the rules. See @JeffModen's article here to learn what you need to be aware of (scroll down to the rules section).

c) If you are on SQL Server 2005 or later, use CLR aggregates. Here is an article that describes it.
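To make option (a) concrete, here is a minimal sketch of a windowed running total. The table and column names are assumptions for illustration; the explicit ROWS UNBOUNDED PRECEDING frame matters, because the default RANGE frame uses an on-disk spool and is considerably slower:

```sql
-- Hedged sketch of option (a): SUM() OVER() on SQL Server 2012+.
-- Object names (dbo.CustLedger, etc.) are placeholders, not the real schema.
SELECT  CustomerNo,
        EntryNo,
        Amount,
        RunningTotal = SUM(Amount) OVER (PARTITION BY CustomerNo   -- restart per customer
                                         ORDER BY EntryNo          -- running order
                                         ROWS UNBOUNDED PRECEDING) -- explicit, fast frame
FROM    dbo.CustLedger
ORDER BY CustomerNo, EntryNo;
```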

One other question @osupratt1. As I was reading back on previous posts, I saw that I had suggested that what you are trying to do is not "running total" in the conventional sense that people understand it.

Let us say you had data like shown below

Date    Amount
Jan 1   10
Jan 2   5
Jan 3  -2
Jan 4   7

Running total in the conventional sense is the following:

Date    Amount  Running Total
Jan 1   10       10
Jan 2   5        15
Jan 3  -2        13
Jan 4   7        20

Is that what you are trying to compute?

Although I have a grand appreciation (love it) for the "Quirky Update" method that you used, the way you did it is dangerous because it doesn't guarantee the run order in any way. There's nothing to check for errors; if parallelism occurs, you're screwed; and if someone adds a unique index that is not in the same order as your clustered index, you'll get wrong answers all over the place.

There are a fair number of rules that you have to follow to keep from shooting yourself or your project in the head when using the "Quirky Update" (the 3-part update based on the underlying "pseudo cursor"). You must use TABLOCK (TABLOCKX is better), you must use MAXDOP 1, and, doing it the way you did it, you must use an INDEX(1) hint (which also means you need the correct Clustered Index, which you don't have).

It would also be extremely smart to build in a "safety counter" to ensure that the rows are being processed in the correct order, or to cause a noticeable error if they are not.
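For illustration only, here is a hedged sketch of a rule-following Quirky Update with such a safety counter. All object names are assumptions; the clustered index on (CustomerNo, EntryNo) is what makes the INDEX(1) hint meaningful, and the deliberate divide-by-zero makes the whole statement fail loudly if rows ever arrive out of clustered-index order:

```sql
-- Hedged sketch: Quirky Update with TABLOCKX, MAXDOP 1, INDEX(1),
-- and a safety check. Schema and names are assumed for illustration.
CREATE TABLE #Ledger
(
    CustomerNo   INT    NOT NULL,
    EntryNo      INT    NOT NULL,
    Amount       MONEY  NOT NULL,
    RunningTotal MONEY  NULL,
    PRIMARY KEY CLUSTERED (CustomerNo, EntryNo)  -- the order the update must follow
);

DECLARE @RunningTotal MONEY = 0,
        @PrevCustomer INT   = -1,
        @PrevEntryNo  INT   = -1,
        @Safety       INT   = 0;

UPDATE tgt
   SET @RunningTotal = RunningTotal =
           CASE WHEN CustomerNo = @PrevCustomer
                THEN @RunningTotal + Amount      -- same customer: keep accumulating
                ELSE Amount                       -- new customer: restart the total
           END,
       -- Safety check: rows must arrive in clustered-index order,
       -- otherwise force a divide-by-zero to abort the entire update.
       @Safety = CASE WHEN (CustomerNo > @PrevCustomer)
                        OR (CustomerNo = @PrevCustomer AND EntryNo > @PrevEntryNo)
                      THEN @Safety + 1
                      ELSE 1/0
                 END,
       @PrevCustomer = CustomerNo,
       @PrevEntryNo  = EntryNo
  FROM #Ledger tgt WITH (TABLOCKX, INDEX(1))    -- lock the table, force clustered-index scan
OPTION (MAXDOP 1);                               -- no parallelism
```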

Ok.. with that thought in mind, how long is it taking now?

Scott has given me a script that works and cuts down the processing time dramatically. The problem I am having is that my original slow-running script was part of a View. Since I'm kind of new to this, I have only just found that you can't have temp tables within Views. I then tried Sandeepmittal11's suggestion, and it runs just as fast, but I can't get the running totals to match my original slow-running script, which calculated correctly but was just way too slow. I have tried numerous times, but in the last attempt the totals (the first 'running total' for the first record) won't calculate properly. I am not sure why, and if I knew how to fix it I would probably do the CTE within the View and be done with it.
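Since temp tables can't live inside a View, one way to give the DBA his View (on SQL 2012 or later) is to put a windowed running total in a derived table and take the MAX per customer for the high balance. A sketch under assumed names, not the actual Navision tables:

```sql
-- Hedged sketch: a View computing the per-customer high balance
-- (MAX of the running total) without any temp tables.
-- Requires SQL Server 2012+; all object names are placeholders.
CREATE VIEW dbo.vCustomerHighBalance
AS
SELECT  CustomerNo,
        HighBalance = MAX(RunningTotal)          -- peak balance for the year
FROM
(
    SELECT  CustomerNo,
            RunningTotal = SUM(Amount) OVER (PARTITION BY CustomerNo
                                             ORDER BY EntryNo
                                             ROWS UNBOUNDED PRECEDING)
    FROM    dbo.CustLedger
    WHERE   PostingDate >= '20170101'            -- current-year filter (assumed)
) rt
GROUP BY CustomerNo;
```

The SSIS package can then select straight from the View, which keeps the temp-table restriction from being a problem.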

What version and edition of SQL Server are you using?

I hope everything actually worked out for you. I didn't ask which version of SQL Server you were working on as a trivial question. If you're on a version earlier than 2012, I can do running totals on a million rows in only 2 or 3 seconds. You can use more supportable methods in 2012 and up if you don't mind a 13:1 performance hit, but 26 to 45 seconds isn't bad.