Hi, I'm running a simple ROLLUP/CUBE query and it looks like I've run into the limits of my server. The query looks like:
select colA, colB, colC, colD, sum(a), count(b), ... group by cube(colA, colB, colC, colD)
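To make that concrete, here is a simplified, self-contained version of the shape of the query (dbo.FactSales and the column/measure names are placeholders, not my real schema):

```sql
-- Simplified sketch of the real query; table and column names are invented.
SELECT
    [Year], [Quarter], [Month], Region, UserId,
    SUM(SalesAmount) AS TotalSales,
    COUNT(OrderId)   AS OrderCount
FROM dbo.FactSales
GROUP BY CUBE ([Year], [Quarter], [Month], Region, UserId);
```

The real query has 10 grouping columns and 14 aggregates, but this is the pattern.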
In the actual query, it lets me have 10 dimensions (colA, colB, etc.) with 14 aggregates, but as soon as I add an 11th dimension I get the error that the query processor has run out of internal resources. Our server is reasonably well specced: 8 cores with about 36 GB of RAM.
With 10 dimensions, the dataset returned is approx 350k rows, so not huge.
The first three dimensions are year, quarter, and month. The others are standard ones like region, userid, etc.
One thing I did which helped was to use a rollup with the cube as follows:
group by rollup(year, quarter, month) , cube (region, userid, ...)
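Spelled out on a cut-down version of the query (names are placeholders), that looks like:

```sql
-- ROLLUP over the date hierarchy, CUBE over the other dimensions.
-- ROLLUP(y, q, m) produces 4 grouping sets instead of CUBE's 8,
-- so the total drops from 2^5 = 32 sets to 4 * 2^2 = 16 in this sketch.
SELECT
    [Year], [Quarter], [Month], Region, UserId,
    SUM(SalesAmount) AS TotalSales
FROM dbo.FactSales
GROUP BY ROLLUP ([Year], [Quarter], [Month]), CUBE (Region, UserId);
```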
That helped, since year, quarter, and month are parent/child, so to speak. But I'm still at the limit of how many columns I can put in the cube() portion, and my clients are asking for more.
I understand how adding columns increases the calculations geometrically (each new cube column doubles the number of grouping sets). If there were some way to "nest" the cube portion within year/quarter/month, that would also help. What I mean is that I don't need result rows where region = x but year, quarter, and month are all NULL. That would never be a real-world scenario; our reports always include a date. But I don't know how to express that.
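To illustrate the rows I want to exclude: I think GROUPING() can at least identify them in a HAVING clause, though I assume the full cube still gets computed first, so this may not save anything (placeholder names again):

```sql
-- Idea only: keep a row unless a non-date dimension is present
-- while the year has been rolled away to NULL (a dateless subtotal).
SELECT [Year], [Quarter], [Month], Region,
       SUM(SalesAmount) AS TotalSales
FROM dbo.FactSales
GROUP BY ROLLUP ([Year], [Quarter], [Month]), CUBE (Region)
HAVING NOT (GROUPING([Year]) = 1 AND GROUPING(Region) = 0);
```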
What is the best practice when a cube grows too large? I'm hoping to avoid creating an SSAS project; I'd like to run it through plain SQL. Is there any way to break it into multiple steps, or perform it in such a way that it doesn't max out the server's resources? I hope this makes sense!
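To be clear about what I mean by "multiple steps": something like computing a handful of grouping sets per batch into a temp table, e.g. (placeholder names, and I'm not sure this is the right approach):

```sql
-- Batch 1: the most detailed grouping sets go into a temp table.
SELECT [Year], [Quarter], [Month], Region, UserId,
       SUM(SalesAmount) AS TotalSales
INTO #results
FROM dbo.FactSales
GROUP BY GROUPING SETS (
    ([Year], [Quarter], [Month], Region, UserId),
    ([Year], [Quarter], [Month], Region)
);

-- Batch 2 (and onward): append further sets in separate statements,
-- padding the missing dimension columns with NULL.
INSERT INTO #results
SELECT [Year], [Quarter], [Month], NULL, UserId,
       SUM(SalesAmount)
FROM dbo.FactSales
GROUP BY GROUPING SETS ( ([Year], [Quarter], [Month], UserId) );
```

Is that viable, or is there a better-known pattern?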
Thanks in advance,