Last week I had the privilege of reviewing possibly the best SQL Server production environment I’ve seen in Canada. During the follow-up meeting, the senior DBA and I disagreed about the maximum number of Virtual Log Files (VLFs) a transaction log should have: I said 200, he said 1000.
Both numbers are arbitrary, so let’s explore why VLFs exist and why we might prefer one over the other.
To give you a succinct refresher, here’s what I wrote about VLFs in my book (Amazon affiliate link):
A transaction log file is split into logical segments, called virtual log files. These segments are dynamically allocated when the transaction log file is created, and whenever the file grows. The size of each VLF is not fixed and is based on an internal algorithm, which depends on the version of SQL Server, the current file size, and file growth settings.
In a transaction log with too many or too few VLFs we might experience performance issues under a normal workload, as well as during the backup and restore process.
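If you want to see the VLFs in a transaction log for yourself, SQL Server 2016 SP2 and later expose them through the sys.dm_db_log_info dynamic management function, which returns one row per VLF. Here’s a minimal sketch, assuming a hypothetical database named MyDatabase:

```sql
-- Count the VLFs in one database's transaction log.
-- Requires SQL Server 2016 SP2 or later; on older versions, the
-- undocumented DBCC LOGINFO returns one row per VLF instead.
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID(N'MyDatabase'));
```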
So what is the “right” number? In customer engagements, I follow a guideline proposed by Glenn Berry of SQLskills.com in his Diagnostic Information Queries: keep the number of VLFs at or below 200. In my opinion, any number higher than that is cause for concern.
On the other hand, Brent Ozar Unlimited has a popular script called sp_Blitz which proposes a maximum VLF count of 1000. To Brent, a number higher than that is cause for concern.
Who’s right?
It doesn’t matter, because when we consider the number of VLFs in a transaction log file, what we care about are excessive counts. I’ve seen databases with more than 15,000 VLFs, which is clearly too high. If, on the other hand, a transaction log file has 201 VLFs, I’m not going to insist that the DBA shrink and resize the log immediately. Instead, I’ll raise a flag and ask questions about the file growth settings.
As a baseline, 200 VLFs is a reasonable maximum. If a small transaction log file has a lot of VLFs, that points to a problem with the file growth settings, which is a relatively easy fix. If a transaction log file has more than 1,000 VLFs, that should set off alarm bells. Periodically keeping an eye on the VLF count, or using more in-depth health checks like dbSnitch, goes a long way toward proactively identifying growing problems instead of reactively fighting fires.
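For that kind of periodic check, here’s one way to pull the VLF count for every database on an instance, highest first (again a sketch, assuming SQL Server 2016 SP2 or later for sys.dm_db_log_info):

```sql
-- VLF count per database, highest first. Databases above your
-- chosen threshold (200 or 1,000) are the ones to investigate.
SELECT d.name AS database_name,
       COUNT(*) AS vlf_count
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.name
ORDER BY vlf_count DESC;
```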
In my customer’s case, with very fast storage, low CPU utilization, and mostly small databases (below 100 GB), 200 VLFs is a manageable target and doesn’t appear to be causing noticeable performance issues. With good file growth settings, the VLF count won’t even matter.
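For completeness, the usual remedy for an excessive VLF count is to shrink the log file and grow it back in one step, with a sensible fixed growth increment instead of a percentage. A hedged sketch, assuming a hypothetical database and logical log file name (check sys.database_files for the real name), and assuming the active portion of the log can clear (in the FULL recovery model, a log backup may be needed first):

```sql
USE MyDatabase;

-- Shrink the log file as small as possible.
DBCC SHRINKFILE (N'MyDatabase_log', 0);

-- Grow it back in one operation to the size the workload needs,
-- with a fixed growth increment instead of a percentage.
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 8GB, FILEGROWTH = 1GB);
```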
Share your VLF counts with me on Twitter at @bornsql. Next week we’re back to our ongoing series on date and time, continuing where we left off last week.
Photo by Bernard Hermant on Unsplash.
Nice post. I don’t flag it until 1k because I’ve never seen folks notice a measurable difference between, say, 200 and 400 VLFs. I’ve never had someone go, “Oh thank God we dropped VLFs from 400 to 200 – it’s amazing now how much performance is better, and that’s the only thing we had to change!”
But, I’d also say I’m known for much more lax standards about that kind of thing. I want people to focus on the #1 performance issue facing their box – and I just can’t imagine 400 VLFs being the #1 thing they have to worry about. It’d be awesome if that was their top concern, though!