Last time we looked at the four major components of a computer system, and then looked at the SQL Server buffer pool as a way to leverage the best performance from computing hardware.
Before we dive deeper into the buffer pool, I wanted to briefly mention data access terminology. A common metaphor for data access is temperature: data that is already in cache (memory) is considered “hot,” data that can be fetched quickly from very fast storage is considered “warm,” and data that must be read from disk or tape (or over the network) is considered “cold.” This is also why Azure Cool Storage and Amazon Glacier are so named.
Buffer Pool Extension
A few versions ago, Microsoft provided a way to extend the size of the buffer pool at a small cost in disk space: the Buffer Pool Extension, introduced in SQL Server 2014.
By making use of locally-attached solid state storage, the buffer pool can extend past the physical limit of how much main memory is available to SQL Server, theoretically improving performance because that data is considered “warm.”
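Enabling the feature is a single configuration change. The file path and size below are illustrative placeholders, not recommendations; note that the documented minimum size is the value of max server memory:

```sql
-- Enable the Buffer Pool Extension on locally-attached solid state storage.
-- FILENAME and SIZE are placeholders; SIZE must be at least max server memory.
ALTER SERVER CONFIGURATION
SET BUFFER POOL EXTENSION ON
    (FILENAME = 'E:\SSDCACHE\ExampleBufferPool.BPE', SIZE = 64 GB);

-- Confirm the current state of the extension.
SELECT [path], state_description, current_size_in_kb
FROM sys.dm_os_buffer_pool_extension_configuration;
```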
On paper this is a great feature. Assuming a feature takes roughly two years to bring to market, its timing tracks with the state of storage five to seven years ago, when solid state drives were small and expensive, and much slower mechanical hard drives were still the mainstream storage technology.
Unfortunately there are some practical problems with the Buffer Pool Extension in 2019.
Firstly, there is a licensing contradiction with SQL Server Standard Edition. Since SQL Server 2016 Service Pack 1, Standard Edition can use 128 GB of memory for the buffer pool, plus additional RAM for in-memory OLTP and Columnstore indexes, so its practical limits have mostly been lifted. Conversely, if we can afford Enterprise Edition, we can afford more physical RAM. In other words, if we need more than 128 GB of RAM for the buffer pool, perhaps we should move away from Standard Edition.
Secondly, locally-attached solid state storage is becoming cheaper and more prevalent by the day. I saw a price of $125 USD for 1 TB of consumer-grade storage. If data is stored on a number of solid state drives in a flash array, why bother with a dedicated drive for the Buffer Pool Extension as well? It’s not going to make much of a difference to performance anyway.
Thirdly, at the risk of sounding redundant, the Buffer Pool Extension is just that: an extension of the buffer pool. As we saw previously, the buffer pool comprises as many 8 KB data pages as will fit in memory. That means the extension can’t help in-memory OLTP, which doesn’t use the 8 KB page structure at all.
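You can see this page-level structure directly: the sys.dm_os_buffer_descriptors DMV exposes one row per cached 8 KB page, and since SQL Server 2014 it includes a flag showing whether a page currently lives in the extension file:

```sql
-- One row per 8 KB page in the buffer pool, summarized by database.
-- is_in_bpool_extension flags pages that have spilled to the BPE file.
SELECT DB_NAME(database_id) AS database_name,
       is_in_bpool_extension,
       COUNT(*) AS cached_pages,
       COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id, is_in_bpool_extension
ORDER BY cached_pages DESC;
```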
Fourthly, the Buffer Pool Extension cannot be reduced in size without restarting SQL Server. While this is less of an issue than the others, it is worth considering when planning maintenance windows.
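To be precise, it is only shrinking the extension in place that forces a restart; growing it, or removing it entirely, does not:

```sql
-- Turn the Buffer Pool Extension off without a restart.
-- Pages cached in the extension are discarded and the file is removed.
ALTER SERVER CONFIGURATION SET BUFFER POOL EXTENSION OFF;
```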
The final nail: persistent memory
SQL Server 2016 introduced support for persistent memory (PMEM), which reduces the number of operations required to harden data to permanent storage. SQL Server 2019 will introduce support for this on Linux as well.
As we saw in a previous post, persistent memory blurs the lines between the hot, warm, and cold layers, meaning that SQL Server lets the hardware handle data persistence from memory to permanent storage. All SQL Server does is perform a copy in memory, and the hardware knows how to persist this to an underlying flash array.
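SQL Server 2019 surfaces this as the hybrid buffer pool, where clean data pages residing on a PMEM device are read in place rather than copied into DRAM first. Based on the documented SQL Server 2019 syntax (hedged here, since the release was still new at the time of writing), enabling it instance-wide looks like this:

```sql
-- Enable the hybrid buffer pool across the instance (SQL Server 2019).
ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED HYBRID_BUFFER_POOL = ON;
```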
The Buffer Pool Extension was a good idea that fell victim to the boom in solid state storage devices. I have not seen it implemented at any customer site I’ve worked on. With solid state costs continuing to drop as capacities increase, and with persistent memory technology abstracting away the storage layer entirely, it is all but a legacy feature now.
Share your favourite features that were rendered obsolete before their time in the comments below.