Max Server Memory and SQL Server 2016 Service Pack 1

On 16 November 2016, everything changed for SQL Server Standard Edition, including how its memory limits work.

On that day, a slew of Enterprise Edition features made their way into editions across the board, including Express Edition and LocalDB.

Standard Edition’s memory limit of 128GB RAM applies only to the buffer pool (the 8KB data pages that are read from disk into memory — in other words, the database itself).

For servers containing more than 128GB of physical RAM, and running SQL Server 2016 with Service Pack 1 or higher, we now have options.

The max server memory setting has always referred only to the buffer pool, but for many reasons a lot of people misunderstood it to include other caches as well.

Because ColumnStore and In-Memory OLTP have their own cache limits over and above the 128GB buffer pool limit, the guidance around assigning max server memory is no longer simple.

ColumnStore now gets an extra 32GB of RAM per instance, while In-Memory OLTP gets an extra 32GB of RAM per database.
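If you’re curious what these caches are consuming on a given instance, here is a rough diagnostic sketch (assuming SQL Server 2012 or higher, which added the pages_kb column) using sys.dm_os_memory_clerks:

-- Memory used by the columnstore object pool and the In-Memory OLTP
-- (XTP) engine, both of which live outside the buffer pool.
SELECT [type], SUM(pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
WHERE [type] IN (N'CACHESTORE_COLUMNSTOREOBJECTPOOL', N'MEMORYCLERK_XTP')
GROUP BY [type];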

With that in mind, you are still welcome to use the Max Server Memory Matrix and associated calculator script for older versions of SQL Server (up to and including 2014), but I will not be maintaining it further unless someone finds a bug.

How much should I assign to max server memory? It depends.
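Whatever value you settle on, it is assigned with sp_configure. A minimal sketch, where 98304MB (96GB) is purely an illustrative number and not a recommendation:

EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure N'max server memory (MB)', 98304; -- 96GB, illustrative only
RECONFIGURE;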

It would be very easy to spec a server with 256GB RAM, install a single instance of SQL Server 2016 Standard Edition (with Service Pack 1, of course), use 128GB for the buffer pool and 32GB for the ColumnStore cache, add three databases with 32GB each for In-Memory OLTP, and still run out of memory: 128 + 32 + (3 × 32) = 256GB, leaving nothing for the operating system or anything else on the server.

This is a brave new world of memory management. Tread carefully.

If you’d like to share your thoughts, find me on Twitter at @bornsql.

Testing for Object Existence: CREATE OR ALTER

For the longest time, T-SQL writers have had to wrestle with ways of testing for an object’s existence so that it can either be dropped and recreated, or modified as needed.

Last week we covered the new DROP ... IF EXISTS syntax. This week we look at how to handle changes to existing objects.

We’ve spent many hours of our lives fighting with an object existence check so that ALTER commands don’t fail.

The common implementation pattern now is to CREATE a dummy object with the appropriate name and then use ALTER to write the actual code. That way, future changes can just be done by using the ALTER keyword.

We can see this in a famous example: sp_WhoIsActive, an extremely popular stored procedure written by Adam Machanic, which I highly recommend installing when setting up a new SQL Server instance.

USE master;
GO

IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.ROUTINES
               WHERE ROUTINE_NAME = 'sp_WhoIsActive')
    EXEC ('CREATE PROC dbo.sp_WhoIsActive AS
           SELECT ''stub version, to be replaced''');
GO
...
ALTER PROC dbo.sp_WhoIsActive (...);

Look at all that unnecessary code. It’s messy; it’s prone to errors. There are so many ways to do it, which makes it inconsistent, meaning that automatically searching through a code base isn’t reliable. Additionally, if we miss a single quotation mark, the entire script fails.

In a word: Ugh!

What happens if we forget to run the stub first? What happens if we have an existing object and accidentally run the CREATE? Then we have to manually change it to an ALTER.

(Side note: I spent a good few minutes one day a few years ago not remembering the ALTER keyword.)

SQL Server 2016 Service Pack 1 has finally added in a feature that many DBAs and database developers have been clamouring for: CREATE OR ALTER.

In the above example, the entire IF NOT EXISTS section can be replaced with:

USE master;
GO

CREATE OR ALTER PROC dbo.sp_WhoIsActive (...);
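Note that CREATE OR ALTER applies to programmability objects (stored procedures, functions, views and triggers), not tables. As a self-contained sketch, using a hypothetical procedure name:

CREATE OR ALTER PROCEDURE dbo.GetObjectCount -- hypothetical name
AS
BEGIN
    SET NOCOUNT ON;
    -- sys.objects keeps the example self-contained
    SELECT COUNT(*) AS ObjectCount FROM sys.objects;
END;
GO

Run it twice: the first execution creates the procedure, the second alters it, and neither fails.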

This has been a long time coming, after many anguished cries from people writing T-SQL scripts over the years, and we are going to love using this new, small, yet significant piece of syntax.

Share your best ALTER story with me on Twitter, at @bornsql.

Testing for Object Existence: DROP … IF EXISTS

For the longest time, T-SQL writers have had to wrestle with ways of testing for an object’s existence so that it can either be dropped and recreated, or modified as needed.

This is especially common in the case of temp tables or table variables. If the object already exists when our script runs, the script will fail and leave our workflow in an inconsistent state.

Consider this script:

IF OBJECT_ID('tempdb..#doTheThing') IS NOT NULL
    DROP TABLE #doTheThing;

CREATE TABLE #doTheThing (
    ImportantColumn1 BIGINT,
    ImportantColumn2 TINYINT,
    ImportantColumn3 NVARCHAR(255)
);

With SQL Server 2016, we can now do this:

DROP TABLE IF EXISTS #doTheThing;

CREATE TABLE #doTheThing (
    ImportantColumn1 BIGINT,
    ImportantColumn2 TINYINT,
    ImportantColumn3 NVARCHAR(255)
);

DROP ... IF EXISTS can be used on many objects, including DATABASE, FUNCTION, INDEX, PROCEDURE, ROLE, SCHEMA, SEQUENCE, SYNONYM, TABLE, TRIGGER, TYPE, USER and VIEW.
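For illustration, here is the same pattern on a few other object types (hypothetical names):

DROP PROCEDURE IF EXISTS dbo.DoTheThing;
DROP VIEW IF EXISTS dbo.vwDoTheThing;
DROP INDEX IF EXISTS IX_DoTheThing ON dbo.TheThingTable;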

Share your object existence check nightmares with me on Twitter at @bornsql.

Configuration Manager Shortcut on Windows 10 and Server 2016

This is more for my own reference than anything.

On newer versions of Windows desktop and Windows Server, we may find that the shortcut to SQL Server Configuration Manager is missing.

According to this MSDN article, the reason for this change is that Configuration Manager is a Management Console snap-in:

Because SQL Server Configuration Manager is a snap-in for the Microsoft Management Console program and not a stand-alone program, SQL Server Configuration Manager does not appear as an application in newer versions of Windows.

I think this is ridiculous because it does not maintain backward compatibility.

This is especially frustrating because the same article reminds us that all changes to SQL Server services should be managed through the Configuration Manager.

The workaround is to create our own shortcut, pointing at the correct snap-in for our version:

SQL Server Version      Path for Shortcut
SQL Server 2008 / R2    C:\Windows\SysWOW64\SQLServerManager10.msc
SQL Server 2012         C:\Windows\SysWOW64\SQLServerManager11.msc
SQL Server 2014         C:\Windows\SysWOW64\SQLServerManager12.msc
SQL Server 2016         C:\Windows\SysWOW64\SQLServerManager13.msc

(Alternatively, click Start, type the file name for your version, for example SQLServerManager13.msc, and press Enter.)

Share your frustrations with the “modern” Windows UI with me on Twitter at @bornsql.

Temporal Tables and History Retention

I’m a huge fan of Temporal Tables in SQL Server 2016. I first wrote about them, in a four-part series in November 2015, before SQL Server 2016 was even released. I don’t always get this excited about new features.

However, the feature has some limitations. As part of this week’s T-SQL Tuesday, hosted by the attractive and humble Brent Ozar, I have discovered a Microsoft Connect item I feel very strongly about.

Adam Machanic, the creator of an indispensable tool, sp_WhoIsActive, has created a Connect item entitled Temporal Tables: Improve History Retention of Dropped Columns.

As my readers know, a temporal history table has to have the same schema as its base table (the number and order of columns, and their respective data types, have to match).

Where this breaks down is when the structure of the base table changes. The history table needs to take on those same changes, which could potentially result in data loss in the history table, or redundant columns in the base table.
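To make this concrete, here is a minimal sketch with hypothetical names (SQL Server 2016 propagates most ALTER TABLE changes to the history table automatically):

CREATE TABLE dbo.Account (
    AccountID INT NOT NULL PRIMARY KEY CLUSTERED,
    Balance DECIMAL(19, 4) NOT NULL,
    ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo DATETIME2 GENERATED ALWAYS AS ROW END NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.AccountHistory));
GO

-- Dropping the column from the base table drops it from
-- dbo.AccountHistory as well, and its history is lost.
ALTER TABLE dbo.Account DROP COLUMN Balance;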

Adam suggests allowing columns which no longer appear in the base table to be retained in the history table, marked as nullable (or hidden), and only surfaced when a point-in-time query refers to them explicitly.

I have voted for this suggestion, and at the time of writing, it has 16 upvotes. I encourage you to add your voice to this suggestion.

If you have any other suggestions, or wish to discuss temporal tables, please contact me on Twitter at @bornsql.

Wait For Service Pack 1

Conventional wisdom tells us that when Microsoft releases a new version of any server product, we should wait until Service Pack 1 before deploying it to production.

This hasn’t been true for a while now, since Microsoft recommended that Cumulative Updates for SQL Server carry the same confidence:

SQL Server CUs are certified to the same levels as Service Packs, and should be installed at the same level of confidence.

However, Service Pack 1 itself has been mired in some controversy. Microsoft didn’t make things any easier for us with SQL Server 2012 or 2014: both versions had issues with their respective Service Pack 1.

Fortunately, SQL Server 2016 has broken the cycle, and along with all of the fixes in Cumulative Updates 1–3, and a security fix, we get better feature parity between Enterprise Edition and lower editions, including Standard, Web, Express and LocalDB.

There are some restrictions, of course, but the idea is that developers and vendors can write T-SQL for features that now appear across the board.

SQL Server 2016 Service Pack 1 makes the following features, previously reserved for Enterprise Edition, available in Standard, Web, Express, and LocalDB:

  • In-Memory OLTP
  • In-Memory Columnstore
  • Always Encrypted
  • Partitioning
  • Data Compression
  • Change data capture
  • And more!
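A quick way to confirm that an instance qualifies before relying on these features is to check the edition and patch level (13.0.4001.0 is the SQL Server 2016 Service Pack 1 build):

SELECT SERVERPROPERTY('Edition') AS Edition,
       SERVERPROPERTY('ProductLevel') AS ProductLevel,     -- 'SP1'
       SERVERPROPERTY('ProductVersion') AS ProductVersion; -- 13.0.4001.0 or higher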

If you want to take advantage of these features but you are using an older version of SQL Server, you will need to upgrade to SQL Server 2016 with Service Pack 1. I think this is a no-brainer.

The good news is that licence prices have stayed the same since SQL Server 2012, and Standard Edition is almost a quarter of the price of Enterprise Edition.

I maintain that this is the most exciting release of SQL Server since 2005. If you want to upgrade, contact us and we’ll help. We will even upgrade you from SQL Server 6.5.

Should I upgrade SQL Server 2008 / R2, or migrate to Azure?

Brent Ozar Unlimited runs a website called SQL Server Updates, which comes in really handy for keeping your on-premises SQL Server up to date.

Last week I noticed something interesting: if you’re running SQL Server 2008 or 2008 R2, Microsoft’s extended support for it ends on 9 July 2019. That’s two versions of SQL Server, almost two years apart in release cycle, with different feature sets, expiring on the same day.

Granted, by that date SQL Server 2018 will probably have been released, which would make 2008 officially five versions old and 2008 R2 four versions old. But in corporations, time moves both slowly and quickly. By that I mean, planning to upgrade takes a long time, and before you know it, it’s 9 July 2019 and everyone is panicking.

It’s the right time, today, to start your upgrade plan. A lot of customers I speak to, running on 2008 or 2008 R2, are talking about upgrading to 2012 or 2014 right now.

On Twitter, I said that after the end of next year, I would not recommend upgrading any instances to SQL Server 2012, for this reason:

In 2019, moving to SQL Server 2016 buys you 7 more years of support. I wouldn’t consider moving to SQL Server 2012 after the end of 2017.

At the end of 2017, SQL Server 2012 will be more than five years old. Despite your best intentions, a major upgrade is never quick, and considering that SQL Server licensing costs the same regardless of the version you buy, there’s no practical reason not to go to at least SQL Server 2014, and preferably SQL Server 2016 (which I consider the biggest release since SQL Server 2005). You can always run your database in 2008 Compatibility Mode if you’re running SQL Server 2014, as shown below.
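A minimal sketch of that last option, with a placeholder database name:

-- 100 is the SQL Server 2008 / 2008 R2 compatibility level.
ALTER DATABASE YourDatabase SET COMPATIBILITY_LEVEL = 100;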

I wrote down some reasons to upgrade from SQL Server 2005 last year. These reasons still apply.

Whither Azure?

Which brings up the big question: what about Azure SQL Database? As you know, my first question when customers ask about moving to Azure SQL Database is “Are you sure?”, followed by “Are you very sure?”.

By the time July 2019 rolls around, Azure will be ten years old. That’s hard to fathom right now, but at their current rate of progress, Azure SQL Database will be a mature platform (it already is mature for appropriate workloads, when taking elastic pools into account).

So, if you meet all of the following conditions:

  • you use SQL Server 2008 or 2008 R2,
  • you plan to upgrade after 2017, and
  • you’re running a database smaller than 1TB,

I would recommend that you seriously consider migrating to Azure SQL Database, instead of upgrading to an on-premises database server running SQL Server. The time is right, at the end of 2016, to spend 18 months on an appropriate migration strategy to achieve this.

Yes, 18 months from now we’ll be deep into 2018. Scary, isn’t it?

Act now.

If you want to discuss this on Twitter, look me up at @bornsql.

The Data Migration Assistant

I’ve written quite a lot about Azure SQL Database recently, but that doesn’t mean I’ve forgotten about the on-premises version of SQL Server.

What could be better than Microsoft announcing a new tool for upgrading to SQL Server 2016? The Data Migration Assistant (DMA) for SQL Server was announced on 26 August 2016.

Data Migration Assistant (DMA) delivers scenarios that reduce the effort to upgrade to a modern data platform by detecting compatibility issues that can impact database functionality after an upgrade. It recommends performance and reliability improvements for your target environment.

Why is this a big deal? Well, as Microsoft themselves put it:

DMA replaces SQL Server Upgrade Advisor.

The Data Migration Assistant helps us make sure that not only is our database instance upgradeable to the latest version of SQL Server, but it also helps us make the best use of new features.

My friend Kenneth Fisher has a post titled The new Data Migration Assistant has been released!, which I encourage you to check out.

We don’t have to run DMA on the SQL Server machine directly. We can install and run it from a workstation and point it at our server. In fact, this is a recommended best practice.

Kenneth and I also strongly expect that later versions of DMA will support Azure SQL Database migrations, which is exciting (and though I don’t like to read too much into the names Microsoft gives its products, it’s pretty evident from the name that this is where we’re headed).

To share your horror stories about SQL Server Upgrade Advisor, find me on Twitter at @bornsql.

The Resilience of the Log Backup Chain

Much has been written about the log backup chain in SQL Server, where we are able to restore a database to a point in time using a combination of full, differential and transaction log backups.

This requires the database to be in the full recovery model, with transaction log backups taken at regular intervals.

If the database is switched to the simple recovery model and then back to full, the log backup chain is broken, and a full or differential backup must be taken to restart it.
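A minimal sketch of that failure mode, with placeholder names and paths, assuming a full backup existed before the switch:

ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;
ALTER DATABASE YourDatabase SET RECOVERY FULL;

-- A log backup at this point fails; a differential restarts the chain:
BACKUP DATABASE YourDatabase TO DISK = N'D:\Backup\YourDatabase_diff.bak'
    WITH DIFFERENTIAL;
BACKUP LOG YourDatabase TO DISK = N'D:\Backup\YourDatabase_log.trn';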

I’ve been working on a long-term project to migrate over fifty databases running on SQL Server 2008 R2 to Azure virtual machines running SQL Server 2014.

We use a custom form of Log Shipping to keep the Azure environment in sync with production. The log chain is stored on Azure Blob Storage, and each log file is restored every half hour to the new environment using a custom PowerShell script.

This allows us to have a very small maintenance window for the final cutover, requiring just the tail of the log backup to be restored.

Last week, I wondered aloud what would happen if we had to fail back to the original system after processing transactions in the new environment. Would a log backup taken in the Azure environment work on the older servers? Technically speaking, the log backup chain would not have been broken. Perhaps there would be a version conflict?

Let’s look at a test case.

  1. Create a database in SQL Server 2008 R2 with Full Recovery, and perform a full backup, plus several transaction log backups (see the sketch after this list).
  2. Restore the full backup and log chain to a new SQL Server 2016 instance, perform some actions here, and then perform another log backup from the new instance.
  3. Test if the newest log backup can be applied, along with the older ones, in a recovery on SQL Server 2016. I say “Yes”.
  4. Test if the newest log backup can be applied, along with the older ones, in a recovery on SQL Server 2008 R2. I say “No”.
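Here is a minimal sketch of the first step, with placeholder names and paths:

-- Step 1, on the SQL Server 2008 R2 instance:
CREATE DATABASE ChainTest;
ALTER DATABASE ChainTest SET RECOVERY FULL;

BACKUP DATABASE ChainTest TO DISK = N'D:\Backup\ChainTest_full.bak';
BACKUP LOG ChainTest TO DISK = N'D:\Backup\ChainTest_log1.trn';
BACKUP LOG ChainTest TO DISK = N'D:\Backup\ChainTest_log2.trn';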

Tune in next week for the results. You’re always welcome to comment here or discuss it on Twitter at @bornsql.

Cumulative Updates Are First Class

Since 27 January 2016, Microsoft has recommended that Cumulative Updates be installed with the same confidence as Service Packs, as soon as they become available.

The official announcement was posted on 24 March 2016 (their emphasis):

As of January CU releases, these caution messages have been updated, we now recommend ongoing, proactive installation of CUs as they become available. You should plan to install a CU with the same level of confidence you plan to install SPs (Service Packs) as they are released.

[Note: This guidance is also posted in the latest Cumulative Updates themselves, if you need additional proof to show your boss.]

This is new guidance and shows more rapid and proactive support for SQL Server from Microsoft.

With that in mind, I would strongly recommend that you make sure your production instances of SQL Server are up to date by installing the latest Service Pack and Cumulative Update for your instance.

If you would like to know what version you’re currently running, issue the command SELECT @@VERSION against your instance.
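SERVERPROPERTY returns the same information in discrete pieces, which is easier to compare against a version list:

SELECT @@VERSION AS VersionInfo;

SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion, -- e.g. 12.0.4436.0
       SERVERPROPERTY('ProductLevel') AS ProductLevel;     -- e.g. 'SP1'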

For example, an instance on one of my virtual machines is running build 12.00.4436.

Using the version matrix on the Microsoft SQL Server Version List, I know that this instance is running SQL Server 2014 Service Pack 1, with Cumulative Update 4.

It also shows that there is a newer Cumulative Update (CU 5) available, which will bring my instance up to the very latest version (as of this writing).

Brent Ozar Unlimited has a simpler site, SQL Server Updates, if all you want is the very latest Service Pack or Cumulative Update for supported versions of SQL Server.

If you have any comments about Cumulative Updates, Service Packs, or your favourite kind of chocolate, feel free to reach out to me on Twitter at @bornsql.