SQL Server Management Studio v17.0

Version numbers are confusing. SQL Server Management Studio (SSMS), the client user interface by which most DBAs access SQL Server, was decoupled from the server product for SQL Server 2016.

For the last 18 months or so, we have been receiving semi-regular updates to SSMS (which we can download from Microsoft for free), and that is amazing.

SQL Server 2017 was recently announced, as I mentioned last week, and the internal server version is going to be some variation of 14.0. The compatibility level is 140. This follows logically from previous versions.
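
As a quick sanity check, SERVERPROPERTY reports the internal version and servicing level of any instance:

SELECT SERVERPROPERTY('ProductVersion') AS [ProductVersion], -- e.g. 14.0.x for SQL Server 2017
       SERVERPROPERTY('ProductLevel') AS [ProductLevel],     -- RTM, a Service Pack, or a CTP
       SERVERPROPERTY('Edition') AS [Edition];
GO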

Management Studio, however, has taken a different tack with version numbers. The latest version, which was released at the end of April, is 17.0.

So if you’re confused, you’re not alone.

To make up for some of that confusion, here’s a neat trick in SSMS 17. Let’s say I want to present a session at a SQLSaturday. In the past, I had to modify the fonts and sizes for myriad settings in Management Studio. Paul Randal has a seminal post about this very thing.

With version 17, we can enable a new feature called Presenter Mode, which automatically sets the fonts and sizes of the SSMS interface to improve visibility when sharing your screen in a conference call or using a projector.

In the Quick Launch textbox on the top right of the screen (press Ctrl+Q), type the word PresentOn.

With Presenter Mode enabled, the standard SSMS interface switches from the default fonts to much larger, more readable ones.

To switch it back to the default view, it’s not PresentOff as we might expect. Instead, we must type RestoreDefaultFonts in the Quick Launch textbox.

Note: the results grid will not take on the new settings until SSMS is restarted. Keep this in mind when switching Presenter Mode on and off.

If you have any more SQL Server Management Studio 17 tips to share, find me on Twitter at @bornsql.

SQL Server 2017 Announced

By now you will have heard that the next version of SQL Server has been announced. There’s no release date yet, but Brent Ozar seems to think it’ll be before June.

There are many new features, but the biggest deal is that SQL Server 2017 runs on both Windows Server and Linux. Yup, SQL Server 2017 is certified to run on Windows Server, Red Hat Enterprise Linux, Ubuntu, and SUSE Linux Enterprise Server. (You can even run it on macOS in a Docker container.)

There are some big improvements to the Query Optimizer as well, starting with Adaptive Query Processing. It’s like a smart go-faster button, making incremental improvements to your queries the more often they run.
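
Based on the preview documentation, these adaptive features light up when a database runs under the new compatibility level. A minimal sketch, assuming a database named MyDatabase:

-- Adaptive Query Processing requires database compatibility level 140
ALTER DATABASE [MyDatabase] SET COMPATIBILITY_LEVEL = 140;
GO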

Although SQL Server 2017 is still in pre-release, Microsoft states that it is production-ready, which means that it should work as expected.

Watch the announcement, and download the latest technical preview to try it for yourself.

Max Server Memory and SQL Server 2016 Service Pack 1

Everything changed for SQL Server Standard Edition on 16 November 2016, including how memory limits work.

On that day, a slew of Enterprise Edition features made their way into editions across the board, including Express Edition and LocalDB.

The memory limit of 128GB RAM applies only to the buffer pool (the 8KB data pages that are read from disk into memory — in other words, the database itself).

For servers containing more than 128GB of physical RAM, and running SQL Server 2016 with Service Pack 1 or higher, we now have options.

The max server memory setting has only ever referred to the buffer pool, but for many reasons a lot of people misunderstood it to include the other caches as well.

Because ColumnStore and In-Memory OLTP have their own cache limits over and above the 128GB buffer pool limit, the guidance around assigning max server memory is no longer simple.

ColumnStore now gets an extra 32GB of RAM per instance, while In-Memory OLTP gets an extra 32GB of RAM per database.
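
To see how memory is split across these caches on your own instance, the memory clerks DMV gives a rough breakdown (clerk names vary by version and by which features are in use):

-- Top memory consumers by clerk type; MEMORYCLERK_SQLBUFFERPOOL is the buffer pool,
-- while columnstore and In-Memory OLTP allocations appear under their own clerks
SELECT TOP (5)
       [type],
       SUM(pages_kb) / 1024 AS [MemoryMB]
FROM sys.dm_os_memory_clerks
GROUP BY [type]
ORDER BY [MemoryMB] DESC;
GO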

With that in mind, you are still welcome to use the Max Server Memory Matrix and associated calculator script for lower versions of SQL Server (up to and including 2014), but I will not be maintaining it further, unless someone finds a bug.

How much should I assign to max server memory? It depends.

It would be very easy to spec a server with 256GB RAM, install a single instance of SQL Server 2016 Standard Edition (with Service Pack 1, of course), have 128GB for the buffer pool, 32GB for the ColumnStore cache, and three databases with 32GB of RAM each for In-Memory OLTP, and still run out of memory: 128GB + 32GB + (3 × 32GB) already accounts for all 256GB, leaving nothing for the operating system, the plan cache, or anything else.
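
Whichever number you settle on, the setting itself is easy to change. A minimal sketch, assuming you want the instance capped at 128GB:

-- max server memory is expressed in megabytes
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE;
GO
EXEC sys.sp_configure N'max server memory (MB)', 131072; -- 128GB
RECONFIGURE;
GO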

This is a brave new world of memory management. Tread carefully.

If you’d like to share your thoughts, find me on Twitter at @bornsql.

Configuration Manager Shortcut on Windows 10 and Server 2016

This is more for my own reference than anything.

On newer versions of Windows desktop and Windows Server, we may find that the shortcut to SQL Server Configuration Manager is missing.

According to this MSDN article, the reason for this change is that Configuration Manager is a Management Console snap-in:

Because SQL Server Configuration Manager is a snap-in for the Microsoft Management Console program and not a stand-alone program, SQL Server Configuration Manager does not appear as an application in newer versions of Windows.

I think this is ridiculous because it does not maintain backward compatibility.

This is especially frustrating because the same article reminds us that all changes to SQL Server services should be managed through the Configuration Manager.

The workaround is to create our own shortcut as follows:

  • SQL Server 2008 / R2: C:\Windows\SysWOW64\SQLServerManager10.msc
  • SQL Server 2012: C:\Windows\SysWOW64\SQLServerManager11.msc
  • SQL Server 2014: C:\Windows\SysWOW64\SQLServerManager12.msc
  • SQL Server 2016: C:\Windows\SysWOW64\SQLServerManager13.msc
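
As an aside, if you only need to see (not change) how the services are configured, there is a T-SQL alternative that avoids hunting for the snap-in. It requires the VIEW SERVER STATE permission and SQL Server 2008 R2 SP1 or later:

-- Service accounts, startup types, and current status, without Configuration Manager
SELECT servicename, service_account, startup_type_desc, status_desc
FROM sys.dm_server_services;
GO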

Share your frustrations with the “modern” Windows UI with me on Twitter at @bornsql.

Wait For Service Pack 1

Conventional wisdom tells us that when Microsoft releases a new version of any server product, we should wait until Service Pack 1 before deploying it to production.

This hasn’t been true for a while now, since Microsoft recommended that Cumulative Updates for SQL Server carry the same confidence:

SQL Server CUs are certified to the same levels as Service Packs, and should be installed at the same level of confidence.

However, Service Pack 1 itself has been mired in some controversy. Microsoft didn’t make things any easier for us with SQL Server 2012, or 2014. Both versions had issues with their respective Service Pack 1.

Fortunately, SQL Server 2016 has broken the cycle, and along with all of the fixes in Cumulative Updates 1–3, and a security fix, we get better feature parity between Enterprise Edition and lower editions, including Standard, Web, Express and LocalDB.

There are some restrictions, of course, but the idea is that developers and vendors can write T-SQL for features that now appear across the board.

SQL Server 2016 Service Pack 1 now includes the following features for Enterprise, Standard, and Web Edition:

  • In-Memory OLTP
  • In-Memory Columnstore
  • Always Encrypted
  • Partitioning
  • Data Compression
  • Change data capture
  • And more!
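
As a taste of what this means in practice, here is a minimal sketch of a feature that used to require Enterprise Edition and now works on Standard, Web, and Express (the table is hypothetical):

-- Page compression on Standard Edition, possible since SQL Server 2016 SP1
CREATE TABLE [dbo].[SalesArchive] (
    [SaleID] INT IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    [SaleDate] DATE NOT NULL,
    [Amount] DECIMAL(18, 2) NOT NULL
)
WITH (DATA_COMPRESSION = PAGE);
GO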

If you want to take advantage of these features but you use an older version of SQL Server, you will need to upgrade to SQL Server 2016 with Service Pack 1. I think this is a no-brainer.

The good news is that licence prices have stayed the same since SQL Server 2012, and Standard Edition costs almost a quarter of the price of Enterprise Edition.

I maintain that this is the most exciting release of SQL Server since 2005. If you want to upgrade, contact us and we’ll help. We will even upgrade you from SQL Server 6.5.

Should I upgrade SQL Server 2008 / R2, or migrate to Azure?

Brent Ozar Unlimited runs a website, called SQL Server Updates, which comes in really handy for keeping your on-premises SQL Server up to date.

Last week I noticed something interesting: if you’re running SQL Server 2008 or 2008 R2, Microsoft’s extended support for it ends on 9 July 2019. That’s two versions of SQL Server, almost two years apart in release cycle, with different feature sets, expiring on the same day.

Granted, by that date SQL Server 2018 will probably have been released, which would make 2008 officially five versions old and 2008 R2 four versions old. But in corporations, time moves both slowly and quickly. By that I mean, planning to upgrade takes a long time, and before you know it, it’s 9 July 2019 and everyone is panicking.

It’s the right time, today, to start your upgrade plan. A lot of customers I speak to, running on 2008 or 2008 R2, are talking about upgrading to 2012 or 2014 right now.

On Twitter, I said that after the end of next year, I would not recommend upgrading any instances to SQL Server 2012, for this reason:

In 2019, moving to SQL Server 2016 buys you 7 more years of support. I wouldn’t consider moving to SQL Server 2012 after the end of 2017.

At the end of 2017, SQL Server 2012 will be more than five years old. Despite your best intentions, a major upgrade is never as quick as you assume, and since SQL Server licensing costs the same regardless of the version you buy, there's no practical reason not to go to at least SQL Server 2014, and preferably SQL Server 2016 (which I consider the biggest release since SQL Server 2005). You can always run your database in 2008 compatibility mode on SQL Server 2014.
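
For instance, a database moved to SQL Server 2014 can keep its old behaviour with a single command (the database name is hypothetical):

-- SQL Server 2014 supports compatibility levels back to 100 (SQL Server 2008 / R2)
ALTER DATABASE [LegacyApp] SET COMPATIBILITY_LEVEL = 100;
GO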

I wrote down some reasons to upgrade from SQL Server 2005 last year. These reasons still apply.

Whither Azure?

Which brings up the big question: what about Azure SQL Database? As you know, my first question when customers ask about moving to Azure SQL Database is “Are you sure?”, followed by “Are you very sure?”.

By the time July 2019 rolls around, Azure will be ten years old. That’s hard to fathom right now, but at their current rate of progress, Azure SQL Database will be a mature platform (it already is mature for appropriate workloads, when taking elastic pools into account).

So, if you meet all of the following conditions:

  • you use SQL Server 2008 or 2008 R2,
  • you plan to upgrade after 2017, and
  • you’re running a database smaller than 1TB,

I would recommend that you seriously consider migrating to Azure SQL Database, instead of upgrading to an on-premises database server running SQL Server. The time is right, at the end of 2016, to spend 18 months on an appropriate migration strategy to achieve this.

Yes, 18 months from now we’ll be deep into 2018. Scary, isn’t it?

Act now.

If you want to discuss this on Twitter, look me up at @bornsql.

Database Throughput Units

Last time we briefly touched on the metric that Microsoft uses to keep your Azure SQL Database in check: the DTU, or database throughput unit.

The DTU combines CPU, I/O, and log flushes per second, using a special algorithm, into a single unit of measure.

The reason for this is simple: Our databases don’t use the same resources all the time. We will have busy periods and quiet periods. Some of us are fortunate and only need the database to be available during business hours.

This leaves a bit of mystery around what sort of resources our database may need, if we want to run it in Azure instead of on premises.

The DTU Calculator, a third-party service created by Justin Henriksen (a Microsoft employee), estimates the DTU requirements for an on-premises database that we want to migrate to Azure. It first captures a few performance monitor counters, then performs a calculation on those results to recommend a service tier for our database.

Justin provides a command-line application or PowerShell script to capture these performance counters:

  • Processor – % Processor Time
  • Logical Disk – Disk Reads/sec
  • Logical Disk – Disk Writes/sec
  • Database – Log Bytes Flushed/sec

For best results, we should run this session during the busiest time of the day for our database, so as to get a realistic idea of what the load is. Justin recommends running it for one hour.
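
If you just want a rough feel for the log-flush counter before committing to a full capture, you can sample it from inside SQL Server. This is a quick sketch, not a substitute for Justin's script:

-- Log Bytes Flushed/sec is exposed as a cumulative counter,
-- so sample it twice and divide the difference by the elapsed seconds
DECLARE @Start BIGINT, @End BIGINT;

SELECT @Start = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Log Bytes Flushed/sec'
AND instance_name = N'_Total';

WAITFOR DELAY '00:00:10';

SELECT @End = cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Log Bytes Flushed/sec'
AND instance_name = N'_Total';

SELECT (@End - @Start) / 10 AS [AvgLogBytesFlushedPerSec];
GO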

Once the results are captured, a CSV file containing the output is then uploaded to the DTU Calculator site. It will take a few seconds and then provide a recommended service tier for our database. We can then see how much it would cost to run our on-premises database in an Azure SQL Database environment (assuming of course that it is compatible).

In the sample I captured for an existing client for the purposes of this post, I was surprised to find that the Premium Tier was recommended, at level P2.

[DTU Calculator graph]

Based on your database utilization, we recommend you migrate your SQL Server workload to Premium – P2. This Service Tier/Performance Level should cover approximately 91.83 % of your utilization.

NOTE: There is approximately 8.17 % of your workload that falls into a higher Service Tier/Performance Level. After migrating your database to Azure, you should evaluate your database’s performance using the guidance mentioned in the information section above.

This means that it would cost more than $1,350 (CAD) per month (assuming a 24/7 uptime) to run this database in the cloud at the P2 level. A SQL Server Standard Edition licence for the existing hardware, when it is time to upgrade, will cost a shade under $10,000 (CAD) for four cores. In other words, the licence will pay for itself within eight months.

Perhaps this is worthwhile for the customer, if they do away with hardware and other on-site costs, but if their database load increases for any reason and resource limits are repeatedly hit, they will have to move up to the next level (P3), which is immediately double the monthly cost.

Please make use of the DTU Calculator if you intend to migrate an existing database to Azure SQL Database. It should form an important part of your migration plan. There is no reason not to, and it will save you the pain of making a costly and time-consuming mistake.

Let me hear your DTU results on Twitter at @bornsql.

The Data Migration Assistant

I’ve written quite a lot about Azure SQL Database recently, but that doesn’t mean I’ve forgotten about the on-premises version of SQL Server.

What could be better than Microsoft announcing a new tool for upgrading to SQL Server 2016? The Data Migration Assistant (DMA) for SQL Server was announced on 26 August 2016.

Data Migration Assistant (DMA) delivers scenarios that reduce the effort to upgrade to a modern data platform by detecting compatibility issues that can impact database functionality after an upgrade. It recommends performance and reliability improvements for your target environment.

Why is this a big deal? Well, as Microsoft themselves put it:

DMA replaces SQL Server Upgrade Advisor.

The Data Migration Assistant helps us make sure that not only is our database instance upgradeable to the latest version of SQL Server, but it also helps us make the best use of new features.

My friend Kenneth Fisher has a post titled The new Data Migration Assistant has been released!, which I encourage you to check out.

We don’t have to run DMA on the SQL Server machine directly. We can install and run it from a workstation and point it at our server. In fact, this is a recommended best practice.

Kenneth and I also strongly expect that later versions of DMA will support Azure SQL Database migrations, which is also exciting. (I don’t like to read too much into the names Microsoft gives its products, but it’s pretty evident from the name that this is where we’re headed.)

To share your horror stories about SQL Server Upgrade Advisor, find me on Twitter at @bornsql.

Should I move my on-premises database to Azure SQL Database?

This week is a shorter post than last week, but it answers a very important question:

Should I move my on-premises database to Azure SQL Database?

The short answer is:

It depends.

Firstly, why do you want to move your database to the cloud? Is it a business requirement? A better way to ask this is, does it help your business to move to Azure?

One of my clients runs a huge custom application, supported by SQL Server, running off a single 4GB database. They’ve been talking for years about moving to Azure or Amazon Web Services. Their current server is more than five years old, and the SQL Server version is 2008 Web Edition.

Moving to Azure SQL Database makes sense for them because they would benefit from new features and not have to worry about purchasing new hardware or new licences for Windows Server and SQL Server. Their developers can focus on improving the application and not worry about a database server going down.

This is probably the first main reason why people want to migrate.

Something to keep in mind, though, is that you migrate a single database at a time. If you have a whole lot of databases that support different parts of your business, and you don’t want to run on-premises anymore, it makes more sense to investigate moving to Azure (or even Amazon or Rackspace) Virtual Machines instead. There, the cost of upgrading hardware is removed, you can still use the latest version of SQL Server as required, and everything else stays the same.

Another main reason for moving to Azure SQL Database is when your database supports a website or web application.

Say you run a website with a very basic database structure, but you have a lot of users and need to store persistent data for a hugely popular mobile app.

It makes a lot of sense to move this kind of database to Azure SQL Database because it lets you forget about the administrative costs of maintaining a server, licences, operating system patches, SQL Server updates, backups, maintenance tasks, and all the rest that goes with it, and concentrate on your mobile app.

Another consideration is cost. You can save money in the short to medium term by moving to Azure SQL Database. Remember, though, that after a certain number of years you would be paying the same as (and more than) purchasing new hardware and licences and having a DBA on speed dial. So for startups, Azure SQL Database makes a lot of sense.

One final thing, that I cover in my new talk on Migrating to Azure SQL Database (insert plug here), is compatibility.

While Azure SQL Database supports most of the same features as SQL Server 2016, there are some things that could scuttle your plans from the get-go. It is very important to investigate those before jumping in and writing code.

For example, because we don’t have access to instance-level functionality, including the master and tempdb databases, database collation is critically important to get right. Azure SQL Database uses the SQL_Latin1_General_CP1_CI_AS collation by default. For starters, a different collation in your database will cause performance issues and bad query plans, because tempdb will be on a different collation from your database. It goes downhill from there. Some applications require a specific collation to work correctly.
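
Checking this before you migrate takes seconds (the database name is hypothetical):

-- Compare your database collation with the Azure SQL Database default,
-- which is SQL_Latin1_General_CP1_CI_AS
SELECT DATABASEPROPERTYEX(N'MyDatabase', 'Collation') AS [DatabaseCollation];
GO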

Another limitation is that you are required to use Azure Active Directory or SQL authentication. Certain usernames are forbidden, which might require changes in your applications.
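
For SQL authentication, contained database users are the most portable option in Azure SQL Database. A minimal sketch, with a hypothetical username and password:

-- A contained database user: no server-level login required
CREATE USER [AppUser] WITH PASSWORD = N'ChangeMe!Str0ngPassword';
ALTER ROLE [db_datareader] ADD MEMBER [AppUser];
GO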

A short list of features not supported by Azure SQL Database can be found by visiting this link.

I hope this has been an informative read. Now that I’ve scared you away, next week we will finally get into the “how” of “how to migrate to Azure SQL Database”. Stay tuned.

If you would like to share your horror stories about failed migrations, find me on Twitter at @bornsql.

The Transaction Log Is Very Resilient

Last week I wondered whether a log backup chain would survive a database migration. Unsurprisingly, the answer is “yes”.

I also asked if a log backup taken on the new instance would be usable on the old instance, if the need arose to fail back to the original environment. Unsurprisingly, the answer is “it depends”.

Setting up the Test

Using our test case from last week, we start with a database on a SQL Server 2008 R2 instance.

CREATE DATABASE [TransactionLogChain];
GO

Make sure the database is using the full recovery model, and then take a full backup; otherwise the database remains in pseudo-simple mode until it is backed up.

-- Full Recovery Mode, Right Now!
ALTER DATABASE [TransactionLogChain]
SET RECOVERY FULL WITH NO_WAIT;
GO

-- Full database backup
BACKUP DATABASE [TransactionLogChain]
TO DISK = N'D:\SQLBackup\TLC.bak';
GO

We now create a table to write to, taking a transaction log backup afterwards.

-- Identity column only
CREATE TABLE [dbo].[TransactionLogChain]
([ID] INT IDENTITY(1,1) NOT NULL);
GO

Insert some rows so that we have something worth backing up:

-- Use the GO <n> batch processor to run 50 times
INSERT INTO [dbo].[TransactionLogChain] DEFAULT VALUES;
GO 50

Take a transaction log backup of this SQL Server 2008 R2 database.

-- Transaction log backup containing table creation
-- and first 50 row inserts
BACKUP LOG [TransactionLogChain]
TO DISK = N'D:\SQLBackup\TLC_01.trn';
GO

That’s enough to begin testing our first scenario: restoring this database to our SQL Server 2016 instance. Remember to use NORECOVERY, because we have to apply the log afterwards.

-- Full backup is restored first
USE [master];
GO
RESTORE DATABASE [TransactionLogChain]
FROM DISK = N'D:\SQLBackup\TLC.bak'
WITH FILE = 1,
MOVE N'TransactionLogChain' TO N'D:\SQL2016\TLC.mdf',
MOVE N'TransactionLogChain_log' TO N'D:\SQL2016\TLC_log.ldf',
NORECOVERY;
GO

Apply the log backup. A good habit is to restore all log files with the NORECOVERY option as well, just in case there are more logs to apply. When we are sure we want to bring the database online, we can simply issue a RESTORE DATABASE with the RECOVERY option.

-- Log backup is restored next
USE [master];
GO
RESTORE LOG [TransactionLogChain]
FROM DISK = N'D:\SQLBackup\TLC_01.trn'
WITH NORECOVERY;
GO

Bring the database online:

RESTORE DATABASE [TransactionLogChain] WITH RECOVERY;
GO

We now have an online version of our database on the SQL Server 2016 instance.

Let’s run the insert script again to add another 50 rows to the table.

-- Use the GO <n> batch processor to add another 50 rows
INSERT INTO [dbo].[TransactionLogChain] DEFAULT VALUES;
GO 50

A quick SELECT confirms that the table now contains 100 rows.
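
For example:

SELECT COUNT(*) AS [NumberOfRows]
FROM [dbo].[TransactionLogChain];
-- Returns 100
GO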

Test 1: Using a log from a new instance

Theoretically, nothing is wrong with this process so far. SQL Server considers the log backup chain unbroken. We can take a log backup on the new instance and use that in connection with the existing log backup chain to restore our database to its current state on SQL Server 2016.

-- Transaction log backup containing extra 50 rows
BACKUP LOG [TransactionLogChain]
TO DISK = N'D:\SQLBackup\TLC_02.trn';
GO
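
If you want to verify the chain yourself before restoring, msdb records the LSN range of every backup (assuming backup history hasn't been purged):

-- Each log backup's first_lsn should match the previous backup's last_lsn
SELECT database_name, [type], first_lsn, last_lsn, backup_finish_date
FROM msdb.dbo.backupset
WHERE database_name = N'TransactionLogChain'
ORDER BY backup_finish_date;
GO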

Let’s test this. First we must drop the [TransactionLogChain] database from the 2016 instance.

USE [master];
GO
DROP DATABASE [TransactionLogChain];
GO

Restore all three files now: the full backup and transaction log backup from 2008 R2, and the 2016 transaction log. Finally, bring the database online.

USE [master];
GO
RESTORE DATABASE [TransactionLogChain]
FROM DISK = N'D:\SQLBackup\TLC.bak'
WITH FILE = 1,
MOVE N'TransactionLogChain' TO N'D:\SQL2016\TLC.mdf',
MOVE N'TransactionLogChain_log' TO N'D:\SQL2016\TLC_log.ldf',
NORECOVERY;
GO

RESTORE LOG [TransactionLogChain]
FROM DISK = N'D:\SQLBackup\TLC_01.trn'
WITH NORECOVERY;
GO

RESTORE LOG [TransactionLogChain]
FROM DISK = N'D:\SQLBackup\TLC_02.trn'
WITH NORECOVERY;
GO

RESTORE DATABASE [TransactionLogChain] WITH RECOVERY;
GO

USE [TransactionLogChain];
GO
SELECT * FROM [dbo].[TransactionLogChain];
GO

And our result is the same:

(100 row(s) affected).

Test 2: Applying a new log to an old instance

Now let’s take a look at applying a log backup chain to the original instance.

The full backup and first log file restore just fine, but on the log backup taken from 2016, we get the following error:

Msg 3241, Level 16, State 0, Line 17
The media family on device 'D:\SQLBackup\TLC_02.trn' is incorrectly formed. SQL Server cannot process this media family.
Msg 3013, Level 16, State 1, Line 17
RESTORE LOG is terminating abnormally.

As I originally suspected, the version of SQL Server on which the newer log backup is taken matters.

I was able to take a SQL Server 2008 R2 backup, restore it to SQL Server 2016, run a log backup, and restore the entire chain on SQL Server 2016.

I was not able to restore the 2016 log to a 2008 R2, 2012, or 2014 instance, however.

If for whatever reason we need to fail back to the original environment using logs taken in the new environment, the original environment must be running the same version of SQL Server.

The build does not seem to matter as much, but honestly, don’t take my word for it.
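
If you're ever unsure which version produced a given backup file, RESTORE HEADERONLY will tell you without attempting a restore:

-- SoftwareVersionMajor identifies the source: 10 = 2008/R2, 11 = 2012,
-- 12 = 2014, 13 = 2016
RESTORE HEADERONLY FROM DISK = N'D:\SQLBackup\TLC_02.trn';
GO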

How does this help me?

Let’s say we are migrating a massive database and have used log shipping to keep the old and new environments in sync. As soon as we cut over to the new environment, we can continue taking log backups and be confident that the existing log shipping files and original full backup can be retained, until our next full backup.

Please feel free to leave a comment here, or find me on Twitter at @bornsql. I would love to hear your thoughts.