Proposed SQL Server defaults: max server memory

(This post was updated on 1 August 2022.)

A few months ago I suggested that the following settings should be the default for most SQL Server instances:

  • Set cost threshold for parallelism to 50
  • Disable lightweight pooling if it is enabled
  • Disable priority boost if it is enabled
  • Set optimize for ad hoc workloads to enabled (read Don’t optimize for ad hoc workloads as a best practice)
  • Set max server memory (MB) to a custom value consistent with Jonathan Kehayias’s algorithm
  • Set backup compression default to enabled
  • Set the power saving settings on Windows to high performance if possible
  • Provide an option to flush the plan cache as needed

Over the next few posts I will dive into the why. This week we look at setting max server memory correctly.

Memory matters

As noted in the previous post in this series, memory in SQL Server is generally divided between query plans in the plan cache, and data in the buffer pool (other uses for memory in SQL Server are listed later in this post).

The official documentation tells us:

[T]he default setting for max server memory is 2,147,483,647 megabytes (MB).

Look carefully at that number. It's 2 billion megabytes. In other words, we might think of it as roughly 2 million gigabytes, 2,048 terabytes, or 2 petabytes.

This means that SQL Server will use all the physical memory (RAM) available on your computer by default. All? All.
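
If you want to confirm this on your own instance, a query along these lines against sys.configurations will show the current value (a result of 2,147,483,647 means the setting has never been lowered):

    -- Check the configured and running values for max server memory
    SELECT [name], [value], value_in_use
    FROM sys.configurations
    WHERE [name] = N'max server memory (MB)';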

Windows Server

One of the biggest risks with SQL Server using all the available memory is that other applications, services, and even hardware devices require their own memory. Windows provides a page file (also known as a swap file): an area of persistent storage, usually on the C: drive, to which unused portions of memory can be paged temporarily. SQL Server is also a good citizen and will try to reduce its footprint when asked by Windows. Even so, it is quite possible for the operating system to become overwhelmed, especially if several applications are vying for memory at the same time.

When Windows is not able to aggressively trim the working set of a particular application fast enough, either by paging to disk via the page file or by asking the application to reduce its memory footprint through system calls, we get the infamous Blue Screen of Death (BSOD) and must restart the server.

Linux

With SQL Server now supporting both Windows and Linux, this is a good opportunity to point out a key difference between how the two operating systems manage high resource usage.

Linux distributions, including Ubuntu, Red Hat, and SuSE, provide an area of persistent storage (a dedicated swap partition) that allows unused portions of memory to be paged to disk temporarily, just like Windows.

However, when Linux becomes overwhelmed, it will simply terminate the process that might be using all the memory on the server. If SQL Server happens to be that process, it has the same effect as suddenly removing the power supply. When SQL Server starts up again it has to go through crash recovery; all the data that was in memory is gone, and there is a potential for data loss and corruption.

For this reason, there is a physical memory limit setting specific to SQL Server on Linux, which can be configured using mssql-conf. It is set to 80% of the total physical RAM by default. I do not advise changing this value, but you may have to tweak your max server memory setting to take this into account.
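
If you want to see what SQL Server believes it has to work with, a quick look at sys.dm_os_sys_info helps. This is a minimal sketch; my assumption is that on Linux these figures already reflect the mssql-conf memory limit rather than the full physical RAM:

    -- Physical memory visible to SQL Server, and the memory manager's target
    -- (assumption: on Linux these figures reflect the mssql-conf memory limit)
    SELECT physical_memory_kb / 1024 AS physical_memory_mb,
           committed_target_kb / 1024 AS committed_target_mb
    FROM sys.dm_os_sys_info;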

Let’s talk about megs

We should lower the max server memory configuration setting to a value that is below the available physical memory on the server. This will limit the amount of memory used by SQL Server for the following components:

  • SQL Server memory allocation;
  • compile memory;
  • all caches (including the buffer pool);
  • query execution memory grants;
  • lock manager memory;
  • CLR memory (since SQL Server 2012).

Note: As a guideline, the documentation mentions that “any memory clerk” found in the sys.dm_os_memory_clerks DMV is controlled by the max server memory setting.
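
To see which of those consumers are using the most memory on your instance, a short query against that DMV can be illuminating (pages_kb is available from SQL Server 2012 onwards):

    -- Top memory clerks by size; all of these fall under the max server memory cap
    SELECT TOP (10)
           [type],
           SUM(pages_kb) / 1024 AS size_mb
    FROM sys.dm_os_memory_clerks
    GROUP BY [type]
    ORDER BY size_mb DESC;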

It is worth mentioning that this is a hard limit as of SQL Server 2012, whereas versions prior to that could use more than the max server memory setting. Then again, columnstore indexes and in-memory OLTP can take up a large share of that memory if you let them. So if you are planning to (or already do) make use of these features, always remember to leave enough memory for the operating system, other applications, and device drivers.
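
Changing the setting itself is straightforward with sp_configure. The 26,624 MB below is purely an example value; substitute a number suited to your own hardware. The setting is dynamic, so no restart is required.

    -- Example only: cap SQL Server at 26624 MB (substitute a value suited to your server)
    EXEC sys.sp_configure N'show advanced options', 1;
    RECONFIGURE;

    EXEC sys.sp_configure N'max server memory (MB)', 26624;
    RECONFIGURE;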

How much is enough?

I created a Max Server Memory Matrix on this site some time ago, which lists physical memory in 4 GB increments from 4 GB to 1,024 GB, along with the recommended max server memory setting. It assumes that SQL Server is the only service on the server, and that columnstore and in-memory OLTP features are not being used. This matters on Standard Edition, because you can increase the max server memory setting to account for these features beyond the 128 GB buffer pool limit imposed by that edition.

Note: If you use batch mode operations (which, as of SQL Server 2019, can come into play even if you aren't using columnstore indexes), you may be using additional memory without realising it.

The algorithm was written by Jonathan Kehayias, and to make things easier I have written a T-SQL script to automatically calculate the value based on physical RAM as well as SQL Server version and edition.
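
As a rough illustration of that algorithm (commonly summarised as: reserve 1 GB of RAM for the operating system, 1 GB for every 4 GB of RAM between 4 GB and 16 GB, and 1 GB for every 8 GB of RAM above 16 GB), here is a simplified sketch. It is not the full script, which also accounts for SQL Server version and edition limits:

    -- Simplified sketch of the guideline; the full script also considers version and edition
    DECLARE @physical_gb decimal(10, 2), @reserved_gb decimal(10, 2);

    SELECT @physical_gb = physical_memory_kb / 1048576.0
    FROM sys.dm_os_sys_info;

    SET @reserved_gb = 1.0 -- base reservation for the operating system
        + CASE WHEN @physical_gb >= 16 THEN 3.0
               WHEN @physical_gb > 4 THEN (@physical_gb - 4) / 4
               ELSE 0 END
        + CASE WHEN @physical_gb > 16 THEN (@physical_gb - 16) / 8 ELSE 0 END;

    SELECT CAST((@physical_gb - @reserved_gb) * 1024 AS int) AS suggested_max_server_memory_mb;

On a 32 GB server, for example, this reserves 6 GB and suggests a max server memory value of 26,624 MB.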

Summary

If you haven’t already, lower the max server memory setting on your SQL Server instance today to a reasonable value that gives breathing room to the operating system and other applications (especially on Windows Server).

Share your blue screen anecdotes in the comments below, if you’ve got enough memory to do so.

Photo by Jeremy Bishop on Unsplash.