(This post was last updated on 1 August 2022.)
On 16 November 2016, everything changed for SQL Server Standard Edition, including how its memory limits work.
On that day, a slew of Enterprise Edition features made their way into editions across the board, including Express Edition and LocalDB.
The memory limit of 128GB RAM applies only to the buffer pool (the 8KB data pages that are read from disk into memory — in other words, the database itself).
For servers containing more than 128GB of physical RAM, and running SQL Server 2016 with Service Pack 1 or higher, we now have options.
The max server memory setting has always referred only to the buffer pool, but many people misunderstood it to include other caches as well.
Because columnstore and In-Memory OLTP have their own cache limits over and above the 128GB buffer pool limit, the guidance around assigning max server memory is no longer simple.
Columnstore now gets an extra 32GB of RAM per instance, while In-Memory OLTP gets an extra 32GB of RAM per database. If you have Standard Edition, you can add these limits to the 128GB buffer pool. See Erik Darling’s post for more information.
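If you do decide to cap the buffer pool at the Standard Edition limit on a dedicated instance, the setting is changed through sp_configure. This is a sketch only, assuming the instance has the full 128GB available for the buffer pool; the right value for your environment depends on what else is running on the server:

```sql
-- Sketch: cap max server memory at 128GB (the value is in MB, so 131072).
-- 'max server memory (MB)' is an advanced option, so enable those first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 131072;
RECONFIGURE;
```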
With that in mind, you are still welcome to use the Max Server Memory Matrix and associated calculator script for older versions of SQL Server (up to and including SQL Server 2014), but I will not be maintaining it further unless someone finds a bug.
How much should I assign to max server memory? It depends.
It would be very easy to spec a server with 256GB RAM, install a single instance of SQL Server 2016 Standard Edition (with Service Pack 1, of course), have 128GB for the buffer pool, 32GB for the columnstore cache, three databases with 32GB of RAM each for In-Memory OLTP, and still run out of memory: those caches alone account for all 256GB, before the operating system or anything else on the server gets a byte.
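The budget for that hypothetical server can be sketched out explicitly (figures in GB, counting only the per-cache ceilings described above):

```sql
-- Rough memory budget for the hypothetical 256GB Standard Edition server.
DECLARE @buffer_pool_gb  int = 128, -- Standard Edition buffer pool cap
        @columnstore_gb  int = 32,  -- extra columnstore cache per instance
        @imoltp_db_gb    int = 32,  -- extra In-Memory OLTP cache per database
        @imoltp_db_count int = 3;   -- databases using In-Memory OLTP

SELECT @buffer_pool_gb
     + @columnstore_gb
     + (@imoltp_db_count * @imoltp_db_gb) AS potential_gb; -- 128 + 32 + 96 = 256
```

That total equals the server's entire physical RAM, which is exactly why the instance can exhaust memory even though every individual cache is within its limit.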
This is a brave new world of memory management. Tread carefully.
Share your thoughts in the comments below.