SQLTeam.com | Weblogs | Forums

Lowering Max Memory caused SQL Server to abruptly stop

Server

  • SQL Server 2014 hosted on Azure
  • 32 Core
  • 256 GB RAM

We were thinking we have an abundance of RAM on the Server and were wondering how much we really need, i.e. how low could SQL Server's RAM usage go before an impact on performance became noticeable?

At the start, Max Memory was set to 230 GB and SQL Server had 229 GB committed.

We set Max Memory down to 38 GB. SQL Server released RAM slowly, at a rate of about 2 GB/minute, and got down to about 50 GB committed. There was no noticeable impact on performance, even though it was our busy time of the day. Even the buffer cache hit ratio stayed over 99%.
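For reference, this is the kind of change we were making (a sketch; the exact MB value is just 38 × 1024):

```sql
-- Lower 'max server memory' (value is in MB).
-- Takes effect immediately after RECONFIGURE; no restart required.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 38912;  -- 38 GB
RECONFIGURE;
```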

We set Max Memory down to 17.5 GB and the service abruptly shut down. We had to start it up again; it came back up, we increased the Max Memory, and we had no more issues.

The service shutdown was unexpected. As noted, it had been releasing RAM slowly prior to that, and 17.5 GB is still a significant amount of RAM. We expected that as it released RAM, performance would eventually be impacted, but the server should have stayed up; obviously the buffer cache hit ratio would have dropped.

The error message was: There is insufficient memory in resource pool 'internal' to run this query.
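(For anyone investigating something similar, the per-pool memory numbers, including the 'internal' pool from that error, can be seen in the Resource Governor DMVs. A sketch:)

```sql
-- Memory currently used vs. the cap for each resource pool,
-- including the 'internal' pool referenced in the error message.
SELECT name, used_memory_kb, max_memory_kb, target_memory_kb
FROM sys.dm_resource_governor_resource_pools;
```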

Does anyone know why the service abruptly shut down when Max Memory was set to 17.5 GB, rather than just continuing to release RAM?

My guess is that, based on either the number of resource pools or the number of cores, SQL Server had some limit and just shut down. Like I mentioned, at a Max Memory setting of 38 GB it stayed up fine.

I think you answered your own question. The error message seems pretty clear. I would suspect that the internal SQL processes that use the internal resource group crashed. If you want, you can try setting up an extended event to capture memory grants and see how much they're grabbing. Obviously this probably won't work at 17 GB max memory, and other than confirming suspicions, I don't see how valuable it would be.
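Something along these lines would capture the grants (a sketch; event and field names may vary slightly by version, so verify against your build):

```sql
-- Hypothetical extended events session to capture memory grant usage.
-- query_memory_grant_usage fires at the end of queries that took a grant.
CREATE EVENT SESSION [MemoryGrants] ON SERVER
ADD EVENT sqlserver.query_memory_grant_usage (
    ACTION (sqlserver.sql_text)
    WHERE granted_memory_kb > 1024)   -- only grants over 1 MB
ADD TARGET package0.event_file (SET filename = N'MemoryGrants.xel');

ALTER EVENT SESSION [MemoryGrants] ON SERVER STATE = START;
```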

The SQL error log might have more details, especially if there was a dump file generated.

Also, don't change the default or internal resource pools or workload group settings. You can end up with a SQL Server that won't start normally. (I'm not even sure if you can change those settings anyway)

Thank you Robert.

It's just that at the point it was lowered from 38 GB to 17.5 GB, it had 50 GB committed and was running fine (and gradually releasing RAM). It almost seemed like SQL Server had a lower limit. But if it did, how can we find or calculate that lower limit?

Yes, in the error log there was all kinds of memory dump info.

In my opinion, this is like the story, "Doctor, it hurts when I do this.", "Then don't do that." :slight_smile:

I think you found the lower limit, and generally you want to have some memory headroom with SQL Server. I'm assuming you're running Enterprise Edition; I would never run that with less than 64 GB of RAM and at least 48 GB max memory, for no other reason than cost: Standard Edition can use that much RAM for a lot less money. There's no point in paying for a Ferrari if you're only ever going to pick up groceries or drive to church with it.

That's the whole thing. We proved we don't need the Ferrari, so are we OK with a BMW or a Corolla? I certainly didn't agree with SQL Server shutting down like that. If there is a lower limit, I'd like to know exactly what it is. Maybe 0.5 × (number of cores) + 1 = 17 GB? We didn't find the lower limit, but it's somewhere between 17.5 and 38 GB.

I have a feeling it is related to the core count. Since you have 32 cores, you most likely have 2 NUMA nodes, and NUMA will partition the memory. So you may have ended up with only 8.5 GB per node. I can see that causing a lot more memory pressure.
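You can check how the memory is split across nodes yourself (a sketch, assuming the standard OS memory node DMV):

```sql
-- Committed memory per NUMA node. With Max Memory at 17.5 GB and
-- 2 nodes, each node would only get roughly half of that.
SELECT memory_node_id, virtual_address_space_committed_kb
FROM sys.dm_os_memory_nodes
WHERE memory_node_id <> 64;   -- node 64 is the DAC node, not a real NUMA node
```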

I think you can safely downgrade to an 8 core/32 GB config, and I think they offer 8 cores with 64 GB for the same price or only slightly more. You really don't want to limit the RAM available to SQL Server, if it's available to the VM.

Take a look at Jonathan's guide on setting MAX memory:

There's a few links inside that too, you should read them as well.
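As I recall, the rule of thumb in that guide is roughly: reserve 1 GB for the OS, 1 GB per 4 GB of RAM from 4–16 GB, and 1 GB per 8 GB of RAM above 16 GB (check the original article for the exact wording). Worked through for your server:

```sql
-- Rough starting point per that rule of thumb (an approximation, not gospel).
DECLARE @ram_gb int = 256;
SELECT @ram_gb - (1 + (16 - 4) / 4 + (@ram_gb - 16) / 8) AS suggested_max_memory_gb;
-- For 256 GB: 256 - (1 + 3 + 30) = 222 GB
```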


Thank you - I'll look at that.

I tried this calculator: https://sqlmax.chuvash.eu/ . But I think the Max Memory value it suggests is too low.

I am not sure you proved anything - other than that during business hours you can run with less memory. To really understand your memory requirements, you need to test with a full workload across time.

How much memory does SQL Server need when running index rebuilds?
How much memory does SQL Server need when running integrity checks?
How much memory does SQL Server need when running backups?
How much memory does SQL Server need when running statistics updates?

Do you have nightly batch jobs - other processing?

Just because the system appeared to be working well - doesn't mean it is going to work well through a full days/weeks/months workload.

I would not be so quick to reduce memory without evaluating as much as possible. I understand the requirement is probably related to the cost in Azure - but you also need to consider the cost to the business if you cannot perform normal maintenance tasks.

Good point about trying it under those other scenarios. But it's very unlikely those would be an issue - those jobs run one session per server at a time, whereas during business hours we have a lot of activity running in parallel. The number of sessions active at one time is highest during the period we tested.

When backups run, SQL Server has to load the data into memory and then write it out to disk. Integrity checks load the data into memory before they can perform the checks.

Basically - all of those processes will benefit from having more memory available.

If all of your databases are less than 32GB in size then you might be able to get away with lowering the memory with no impact to those processes. It depends on many factors...
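An easy way to check where you stand (a sketch; sizes come out of the system catalog in 8 KB pages):

```sql
-- Data file size per database, to compare against the buffer pool headroom.
SELECT d.name,
       SUM(mf.size) * 8 / 1024 AS data_size_mb   -- size is in 8 KB pages
FROM sys.master_files AS mf
JOIN sys.databases AS d
  ON d.database_id = mf.database_id
WHERE mf.type_desc = 'ROWS'
GROUP BY d.name
ORDER BY data_size_mb DESC;
```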

I would not reduce memory to anything less than 64 GB on a server with 32 cores, and probably would keep it no lower than 128 GB. SQL Server will always perform better with more memory than with less - and you are almost guaranteed to increase the IO load with lower memory, which might end up costing you just as much as having the additional memory.

Memory is one of the cheapest commodities there is nowadays. You have 32 cores, and that's the biggest cost, especially with licensing... why are you trying to swap the incredible tires it has for hard rubber training wheels?

Seriously, be really happy with what you have. If you need to save, cut down on cores to reduce licensing a bit, but keep the RAM.