Data file and log file size in SQL Server Always On

Hello,
I have set up a SQL Server Always On AG, but it looks like the log file keeps growing and we are facing a disk space issue.

My current data file size is 45 GB and the log file is almost 86 GB.
Is it right that the log file is almost double the data file size?

I tried to check for any open or long-running transactions, but nothing is showing up.
How can I control the log file size?
I am taking a log backup every 30 minutes.
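For reference, here is what I ran to check for open transactions (just a sketch; myDB stands in for my actual database name):

USE myDB;
GO
-- reports the oldest active transaction in the current database, if any
DBCC OPENTRAN;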

Thanks for your help and suggestions!

you can truncate the log

OR

set a threshold of 15 GB; once the log reaches it, back the log up to another location with plenty of disk space, which truncates the log
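something like this (just a sketch; the database name and backup path are placeholders):

-- a successful log backup marks the inactive part of the log as reusable
BACKUP LOG myDB
TO DISK = N'E:\LogBackups\myDB_log.trn';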

Thanks Harish.
My log backup every 30 minutes will already be truncating the log, right?
What other way are you suggesting to truncate the log?

For an Always On AG, can I change the log backup schedule from 30 minutes to 15 minutes at any time?
Is there any way I can find out why the log file is almost double the data file size?
I was checking the guidelines for an ideal log file size (the largest index size multiplied by 2, or 25% of the database size), and my log file is way bigger than that, since my database size is 135 GB.

I suspect log replication is lagging behind somewhere, but this has been going on since yesterday and I couldn't find the cause, even though I have added disk space.

Did you look at the Availability Group dashboard to check the status of your AG synchronization? If you have a replica that is not synchronizing, you will not be able to clear the log no matter how often you make a log backup.
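If you'd rather use a query than the dashboard, something like this shows the same information (a sketch; myDB is a placeholder for your database name):

-- per-replica synchronization state and how much log is still queued
SELECT ar.replica_server_name,
       drs.synchronization_state_desc,
       drs.synchronization_health_desc,
       drs.log_send_queue_size,  -- KB of log not yet sent to the secondary
       drs.redo_queue_size       -- KB of log not yet redone on the secondary
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar
  ON drs.replica_id = ar.replica_id;

-- and ask SQL Server directly why it can't reuse the log;
-- AVAILABILITY_REPLICA here means a replica is holding the log active
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'myDB';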

Also check when your log file grew. You can do this in SSMS, right-click on the database, choose "Reports", then choose "Disk Usage". There's a section in that report that will show file growth, but you need to expand it with the [+] button.
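If you prefer a query over the report, the same growth events can be pulled from the default trace (a sketch; this assumes the default trace is still enabled, which it is unless someone turned it off):

DECLARE @path nvarchar(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

-- 'Data File Auto Grow' / 'Log File Auto Grow' events; IntegerData is the growth in 8 KB pages
SELECT te.name AS event_name,
       t.DatabaseName,
       t.FileName,
       t.StartTime,
       t.IntegerData * 8 / 1024 AS growth_mb
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS te
  ON t.EventClass = te.trace_event_id
WHERE te.name IN (N'Data File Auto Grow', N'Log File Auto Grow')
ORDER BY t.StartTime DESC;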

If you were running some kind of index maintenance or other large transaction, that could cause the log to grow in a shorter time frame. If you expect to have to do this in the future, you will want to avoid shrinking the log file after you clear it, as it will just grow again.

Thanks Robert.
Both replicas are in a healthy state and synchronized.
The last commit and last harden times are also matching.
We normally do index maintenance on weekends.
Disk Usage shows all the auto-growth events for yesterday, nothing today (the timestamps are from before I added disk space; after adding disk space there are no auto-growth events showing).

So I am not sure what else I should look at; the log file size seems to be larger than it should be, right?

The log file size is probably fine at its current size, as long as you have space available for it to grow. Generally you want a log file large enough to handle whatever the largest transaction would be within your regular backup window. With availability groups, replication, or change data capture, you also need the log to be able to hold those transactions until they are synchronized with whatever replicas or processes read them.

If you have enough free disk space, you probably don't need to shrink the log file. You will probably want to review how many virtual log files (VLFs) you have in the log, and possibly shrink and resize the log to adjust how many there are.
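To get the count, something like this works (a sketch; sys.dm_db_log_info requires SQL Server 2016 SP2 or later, and on older builds you can count the rows from DBCC LOGINFO instead):

-- sys.dm_db_log_info returns one row per VLF, so the row count is the VLF count
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID(N'myDB'));  -- myDB is a placeholder for your database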

The SQLSkills blog describes some of the issues with VLF size and count, and how they impact performance. Dave Levy's blog has some code that can help you fix an excessive VLF count.

Generally speaking, you want a transaction log that doesn't have too many VLFs (1000 or more is too many IMHO), that has enough space to handle your largest regular transactions without needing to grow, but also small enough VLFs so that they can be backed up and cleared quickly.

Edit: I forgot to mention, you want to change your file growth increments to a fixed amount, never a percentage, for both data and log files. If your transaction log is 10 GB, let's say, you'd want to set the growth increment to 1 GB, or maybe 512 MB, but not a percentage.
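For example (a sketch; the logical file name myDB_log is a placeholder, check yours in sys.database_files):

-- fixed 512 MB growth increment for the log, never a percentage
ALTER DATABASE myDB
MODIFY FILE (NAME = myDB_log, FILEGROWTH = 512MB);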

Thanks Robert once again for the helpful resource details.

Yes, you are right. We always try to set up auto-growth in MB, not in %, so this one is also set up in MB.

I will check the VLFs and let you know.

OK, it shows a little high, about 1400, so a little more than 1000.

OK. For a 45 GB database, I'd say a VLF count of 100 or 128 is probably best (I like to do them as powers of 2 if I can), but you can go to 50 or 64. You'll probably have to shrink the log all the way down as much as you can, then change the file size in fixed increments (1 GB, then 2 GB, then 3 GB, etc., up to the size you need).

Dave Levy's script will resize it in 1 step to the final size, so you'll want to modify it, or do the resizing manually.

Thanks Robert.
Just to be clear for my understanding: when you say
"(OK. For a 45 GB database, I'd say a VLF count of 100 or 128 is probably best)"
are you pointing at the file size of 45 GB, or the database size?
Because my data file size is 45 GB and the log file is almost 86 GB, but the DATABASE SIZE is 135 GB.

Sorry, I didn't phrase that well. Generally for databases under 1 terabyte, you don't want to have hundreds of VLFs. There's no hard guideline for the exact number; I suggest reading the SQLSkills link and its associated links for better advice. For the data size you have (just data, not including the log), I'd say you want to resize your transaction log to something smaller, maybe 32, 48, or 64 GB. BUT, you want to shrink the log and then resize it so that you end up with about 100 or so VLFs.

Here's an example script:

USE myDB;
DBCC SHRINKFILE(2);  -- shrink the log file (file_id 2) to the absolute smallest size possible

ALTER DATABASE myDB MODIFY FILE (NAME = logname, SIZE = 2GB);   -- this will create 8 VLFs of equal size
ALTER DATABASE myDB MODIFY FILE (NAME = logname, SIZE = 4GB);   -- this will create 8 more VLFs of equal size, adding 2 GB
ALTER DATABASE myDB MODIFY FILE (NAME = logname, SIZE = 6GB);   -- this will create 8 more VLFs of equal size, adding 2 GB
...  -- additional file resizing in 2 GB increments
ALTER DATABASE myDB MODIFY FILE (NAME = logname, SIZE = 32GB);  -- this will create 8 more VLFs of equal size, adding 2 GB

In the "..." placeholder section, you'd add commands to increase the log file size by 2 GB at a time until you get to the final size, in this example 32 GB. Note that this is NOT the same thing as setting a file growth increment; it's increasing the size of the file in fixed steps in order to create the final number of VLFs that you want. You also want the VLFs to be as close to the same exact size as you can, so always increase the size by the same increment each time.

Thanks once again for the clarifying response and extra explanation.

For an Always On AG, will it be OK to do this kind of operation in production?

Yeah, growing a log file is low overhead; the only impact is that the new log segments need to be written with zeros. That's another reason why you don't want to grow the log to its final size in one operation: writing 1 or 2 GB of zeros each time should be fairly quick, vs. 16 GB or more in one operation.

Definitely monitor your AG sync status while you grow the file, and if you see impact then you want to do this during a low-activity window, and in multiple stages if you need to. I would do it manually though, don't schedule it via a job.

Thanks Robert.
Trying it off hours makes sense, to avoid any issues.

So this is the script I need to follow, right?

USE myDB;
DBCC SHRINKFILE(2);  -- shrink the log file (file_id 2) to the absolute smallest size possible

ALTER DATABASE myDB MODIFY FILE (NAME = logname, SIZE = 2GB);   -- this will create 8 VLFs of equal size
ALTER DATABASE myDB MODIFY FILE (NAME = logname, SIZE = 4GB);   -- this will create 8 more VLFs of equal size, adding 2 GB
ALTER DATABASE myDB MODIFY FILE (NAME = logname, SIZE = 6GB);   -- this will create 8 more VLFs of equal size, adding 2 GB
...  -- additional file resizing in 2 GB increments
ALTER DATABASE myDB MODIFY FILE (NAME = logname, SIZE = 32GB);  -- this will create 8 more VLFs of equal size, adding 2 GB

You need to fill in the "..." part with commands for file sizes of 8, 10, 12, 14, 16, etc., until you reach 32 GB. I left them out for space reasons.

Also, be careful with the keyword in MODIFY FILE: it's SIZE, not FILESIZE.

Also, in case this wasn't clear in the discussion or blog posts, you need to have your transaction log backed up, and run DBCC LOGINFO to check for active VLFs near the end of the log file. You won't be able to fully shrink the existing log until the active log cycles around to the beginning of the file.
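For example (Status = 2 in the output marks a VLF that is still active):

USE myDB;
GO
DBCC LOGINFO;  -- check the Status column on the last rows, near the end of the file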

If the log file won't shrink, you'll need to make another log backup. You want to have as few VLFs as possible before you start resizing it. If you can shrink it down to only 2 VLFs, that's ideal, but 8 or fewer is OK too.

When you use DBCC SHRINKFILE on the log, don't specify a size or any other parameter. This is the ONLY scenario where you'd do that.

Great, thanks Robert.

I checked the VLFs, and mostly their status is "0", except a few with "2", so it should let me shrink most of the file.

When you say "You need to fill in the '...' part with file sizes of 8, 10, 12, 14, 16, etc. until you reach 32 GB",

do you mean for the shrink command or the ALTER command?

ALTER DB ... MODIFY FILE. You should only have to do DBCC SHRINKFILE once or twice.

Got it!

Thanks Robert for the valuable steps and tips, greatly appreciated!