Configuration Manager Shortcut on Windows 10 and Server 2016

This is more for my own reference than anything.

On newer versions of Windows desktop and Windows Server, we may find that the shortcut to SQL Server Configuration Manager is missing.

According to this MSDN article, the reason for this change is that Configuration Manager is a Management Console snap-in:

Because SQL Server Configuration Manager is a snap-in for the Microsoft Management Console program and not a stand-alone program, SQL Server Configuration Manager does not appear as an application in newer versions of Windows.

I think this is a ridiculous decision, because it breaks backward compatibility.

This is especially frustrating because the same article reminds us that all changes to SQL Server services should be managed through the Configuration Manager.

The workaround is to create our own shortcut as follows:

SQL Server Version        Path for Shortcut
SQL Server 2008 / R2      C:\Windows\SysWOW64\SQLServerManager10.msc
SQL Server 2012           C:\Windows\SysWOW64\SQLServerManager11.msc
SQL Server 2014           C:\Windows\SysWOW64\SQLServerManager12.msc
SQL Server 2016           C:\Windows\SysWOW64\SQLServerManager13.msc
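If you prefer to script this, here is a sketch in PowerShell that creates a desktop shortcut using the WScript.Shell COM object. It assumes SQL Server 2016 (adjust the .msc file name per the table above); the shortcut name and location are my own choices:

```powershell
# Create a desktop shortcut to SQL Server Configuration Manager (2016)
$shell = New-Object -ComObject WScript.Shell
$desktop = [Environment]::GetFolderPath('Desktop')
$shortcut = $shell.CreateShortcut("$desktop\SQL Server Configuration Manager.lnk")
$shortcut.TargetPath = 'C:\Windows\SysWOW64\SQLServerManager13.msc'
$shortcut.Save()
```

Double-clicking the new shortcut opens the snap-in in Microsoft Management Console as usual.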

Share your frustrations with the “modern” Windows UI with me on Twitter at @bornsql.

Wait For Service Pack 1

Conventional wisdom tells us that when Microsoft releases a new version of any server product, we should wait until Service Pack 1 before deploying it to production.

This hasn’t been true for a while now: Microsoft now recommends that Cumulative Updates for SQL Server be installed with the same confidence as Service Packs:

SQL Server CUs are certified to the same levels as Service Packs, and should be installed at the same level of confidence.

However, Service Pack 1 itself has been mired in some controversy. Microsoft didn’t make things any easier for us with SQL Server 2012 or 2014: both versions had issues with their respective Service Pack 1.

Fortunately, SQL Server 2016 has broken the cycle, and along with all of the fixes in Cumulative Updates 1–3, and a security fix, we get better feature parity between Enterprise Edition and lower editions, including Standard, Web, Express and LocalDB.

There are some restrictions, of course, but the idea is that developers and vendors can write T-SQL for features that now appear across the board.

SQL Server 2016 Service Pack 1 now includes the following features, previously exclusive to Enterprise Edition, in Standard and Web Edition as well:

  • In-Memory OLTP
  • In-Memory Columnstore
  • Always Encrypted
  • Partitioning
  • Data Compression
  • Change data capture
  • And more!
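As a taste of what this unlocks, here is a hedged T-SQL sketch of creating a memory-optimized table (In-Memory OLTP), which now works on Standard Edition with Service Pack 1. The database, filegroup, and path names are my own, and the feature still requires a memory-optimized filegroup:

```sql
-- Add a memory-optimized filegroup and container to an existing database
ALTER DATABASE [SampleDB]
ADD FILEGROUP [SampleDB_MemOpt] CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE [SampleDB]
ADD FILE (NAME = N'SampleDB_MemOpt', FILENAME = N'D:\SQLData\SampleDB_MemOpt')
TO FILEGROUP [SampleDB_MemOpt];
GO

-- A simple durable memory-optimized table
CREATE TABLE [dbo].[SessionState]
(
    [SessionID] INT NOT NULL PRIMARY KEY NONCLUSTERED,
    [Payload] NVARCHAR(4000) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO
```

Keep in mind the lower editions enforce per-feature memory caps, so this is for appropriate workloads rather than a free lunch.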

If you want to take advantage of these features, but you use an older version of SQL Server, you will need to upgrade to SQL Server 2016 with Service Pack 1, but I think this is a no-brainer.

The good news is that licence prices have stayed the same since SQL Server 2012, and Standard Edition costs almost a quarter of the price of Enterprise Edition.

I maintain that this is the most exciting release of SQL Server since 2005. If you want to upgrade, contact us and we’ll help. We will even upgrade you from SQL Server 6.5.

Should I upgrade SQL Server 2008 / R2, or migrate to Azure?

Brent Ozar Unlimited runs a website, called SQL Server Updates, which comes in really handy for keeping your on-premises SQL Server up to date.

Last week I noticed something interesting: if you’re running SQL Server 2008 or 2008 R2, Microsoft’s extended support for it ends on 9 July 2019. That’s two versions of SQL Server, almost two years apart in release cycle, with different feature sets, expiring on the same day.

Granted, by that date SQL Server 2018 will probably have been released, which would make 2008 officially five versions old and 2008 R2 four versions old. But in corporations, time moves both slowly and quickly. By that I mean, planning to upgrade takes a long time, and before you know it, it’s 9 July 2019 and everyone is panicking.

It’s the right time, today, to start your upgrade plan. A lot of customers I speak to, running on 2008 or 2008 R2, are talking about upgrading to 2012 or 2014 right now.

On Twitter, I said that after the end of next year, I would not recommend upgrading any instances to SQL Server 2012, for this reason:

In 2019, moving to SQL Server 2016 buys you 7 more years of support. I wouldn’t consider moving to SQL Server 2012 after the end of 2017.

At the end of 2017, SQL Server 2012 will be more than five years old. A major upgrade is never as quick as our best intentions assume, and since SQL Server licensing costs the same regardless of the version you buy, there’s no practical reason not to go to at least SQL Server 2014, and preferably SQL Server 2016 (which I consider the biggest release since SQL Server 2005). You can always run your database in 2008 Compatibility Mode if you’re running SQL Server 2014.

I wrote down some reasons to upgrade from SQL Server 2005 last year. These reasons still apply.

Whither Azure?

Which brings up the big question: what about Azure SQL Database? As you know, my first question when customers ask about moving to Azure SQL Database is “Are you sure?”, followed by “Are you very sure?”.

By the time July 2019 rolls around, Azure will be ten years old. That’s hard to fathom right now, but at their current rate of progress, Azure SQL Database will be a mature platform (it already is mature for appropriate workloads, when taking elastic pools into account).

So, if you meet all of the following conditions:

  • you use SQL Server 2008 or 2008 R2,
  • you plan to upgrade after 2017, and
  • you’re running a database smaller than 1TB,

I would recommend that you seriously consider migrating to Azure SQL Database, instead of upgrading to an on-premises database server running SQL Server. The time is right, at the end of 2016, to spend 18 months on an appropriate migration strategy to achieve this.

Yes, 18 months from now we’ll be deep into 2018. Scary, isn’t it?

Act now.

If you want to discuss this on Twitter, look me up at @bornsql.

Database Throughput Units

Last time we briefly touched on the metric that Microsoft uses to keep your Azure SQL Database in check: the DTU, or database throughput unit.

It combines CPU, I/O and log flushes per second, using a special algorithm, into a single unit.

The reason for this is simple: Our databases don’t use the same resources all the time. We will have busy periods and quiet periods. Some of us are fortunate and only need the database to be available during business hours.

This leaves a bit of mystery around what sort of resources our database may need, if we want to run it in Azure instead of on premises.

The DTU Calculator, a third-party service created by Justin Henriksen (a Microsoft employee), estimates the DTU requirements for an on-premises database that we want to migrate to Azure. It first captures a few performance monitor counters, then performs a calculation on those results to recommend a service tier for our database.

Justin provides a command-line application or PowerShell script to capture these performance counters:

  • Processor – % Processor Time
  • Logical Disk – Disk Reads/sec
  • Logical Disk – Disk Writes/sec
  • Database – Log Bytes Flushed/sec

For best results, we should run this session during the busiest time of the day for our database, so as to get a realistic idea of what the load is. Justin recommends running it for one hour.

Once the results are captured, a CSV file containing the output is then uploaded to the DTU Calculator site. It will take a few seconds and then provide a recommended service tier for our database. We can then see how much it would cost to run our on-premises database in an Azure SQL Database environment (assuming of course that it is compatible).

In the sample I captured for an existing client for the purposes of this post, I was surprised to find that the Premium Tier was recommended, at level P2.

DTU Calculator graph

Based on your database utilization, we recommend you migrate your SQL Server workload to Premium – P2. This Service Tier/Performance Level should cover approximately 91.83 % of your utilization.

NOTE: There is approximately 8.17 % of your workload that falls into a higher Service Tier/Performance Level. After migrating your database to Azure, you should evaluate your database’s performance using the guidance mentioned in the information section above.

This means that it would cost more than $1,350 (CAD) per month (assuming a 24/7 uptime) to run this database in the cloud at the P2 level. A SQL Server Standard Edition licence for the existing hardware, when it is time to upgrade, will cost a shade under $10,000 (CAD) for four cores. In other words, the licence will pay for itself within eight months.

Perhaps this is worthwhile for the customer, if they do away with hardware and other on-site costs, but if their database load increases for any reason and resource limits are repeatedly hit, they will have to move up to the next level (P3), which is immediately double the monthly cost.

Please make use of the DTU Calculator if you intend to migrate an existing database to Azure SQL Database. It should form an important part of your migration plan. There is no reason not to, and it will save you the pain of making a costly and time-consuming mistake.

Let me hear your DTU results on Twitter at @bornsql.

The Data Migration Assistant

I’ve written quite a lot about Azure SQL Database recently, but that doesn’t mean I’ve forgotten about the on-premises version of SQL Server.

What could be better than Microsoft announcing a new tool for upgrading to SQL Server 2016? The Data Migration Assistant (DMA) for SQL Server was announced on 26 August 2016.

Data Migration Assistant (DMA) delivers scenarios that reduce the effort to upgrade to a modern data platform by detecting compatibility issues that can impact database functionality after an upgrade. It recommends performance and reliability improvements for your target environment.

Why is this a big deal? Well, as Microsoft themselves put it:

DMA replaces SQL Server Upgrade Advisor.

The Data Migration Assistant helps us make sure that not only is our database instance upgradeable to the latest version of SQL Server, but it also helps us make the best use of new features.

My friend Kenneth Fisher has a post titled The new Data Migration Assistant has been released!, which I encourage you to check out.

We don’t have to run DMA on the SQL Server machine directly. We can install and run it from a workstation and point it at our server. In fact, this is a recommended best practice.

Kenneth and I also strongly expect that later versions of DMA will support Azure SQL Database migrations, which is also exciting. (I don’t like to read too much into the names Microsoft gives its products, but it’s pretty evident from the name that this is where we’re headed.)

To share your horror stories about SQL Server Upgrade Advisor, find me on Twitter at @bornsql.

Should I move my on-premises database to Azure SQL Database?

This week is a shorter post than last week, but it answers a very important question:

Should I move my on-premises database to Azure SQL Database?

The short answer is:

It depends.

Firstly, why do you want to move your database to the cloud? Is it a business requirement? A better way to ask this is, does it help your business to move to Azure?

One of my clients runs a huge custom application, supported by SQL Server, running off a single 4GB database. They’ve been talking for years about moving to Azure or Amazon Web Services. Their current server is more than five years old, and the SQL Server version is 2008 Web Edition.

Moving to Azure SQL Database makes sense for them because they would benefit from new features and not have to worry about purchasing new hardware or new licences for Windows Server and SQL Server. Their developers can focus on improving the application and not worry about a database server going down.

This is probably the first main reason why people want to migrate.

Something to keep in mind, though, is that you migrate a single database at a time. If you have a whole lot of databases that support different parts of your business, and you don’t want to run on-premises anymore, it makes more sense to investigate moving to Azure (or even Amazon or Rackspace) virtual machines instead. There, the cost of upgrading hardware is removed, you can run the latest version of SQL Server as required, and everything else stays the same.

Another main reason for moving to Azure SQL Database is when your database supports a website or web application.

Say you run a website with a very basic database structure, but you have a lot of users and need to store persistent data for a hugely popular mobile app.

It makes a lot of sense to move this kind of database to Azure SQL Database because it lets you forget about the administrative costs of maintaining a server, licences, operating system patches, SQL Server updates, backups, maintenance tasks, and all the rest that goes with it, and concentrate on your mobile app.

Another consideration is cost. You can save money in the short to medium term by moving to Azure SQL Database. Remember, though, that after a certain number of years you would be paying the same as (or more than) purchasing new hardware and licences and keeping a DBA on speed dial. For startups, then, Azure SQL Database makes a lot of sense.

One final thing, that I cover in my new talk on Migrating to Azure SQL Database (insert plug here), is compatibility.

While Azure SQL Database supports most of the same features as SQL Server 2016, there are some things that could scuttle your plans from the get-go. It is very important to investigate those before jumping in and writing code.

For example, because we don’t have access to instance-level functionality, including the master and tempdb databases, it is critically important to get the database collation right. Azure SQL Database uses the SQL_Latin1_General_CP1_CI_AS collation by default. If your database uses a different collation, expect performance issues and bad query plans, because tempdb will be on a different collation, and it goes downhill from there. Some applications also require a specific collation to work correctly.
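If you control database creation, you can set the matching collation up front. A minimal T-SQL sketch (the database name is hypothetical):

```sql
-- Create the database with the Azure SQL Database default collation,
-- so string comparisons behave consistently with tempdb
CREATE DATABASE [MyAppDB]
COLLATE SQL_Latin1_General_CP1_CI_AS;
GO

-- Verify the collation of an existing database
SELECT DATABASEPROPERTYEX(N'MyAppDB', 'Collation');
GO
```

Checking this before migrating is much cheaper than discovering a collation mismatch afterwards.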

Another limitation is that you are required to use Azure Active Directory or SQL authentication. Certain usernames are forbidden, which might require changes in your applications.

A short list of features not supported by Azure SQL Database can be found by visiting this link.

I hope this has been an informative read. Now that I’ve scared you away, next week we will finally get into the “how” of “how to migrate to Azure SQL Database”. Stay tuned.

If you would like to share your horror stories about failed migrations, find me on Twitter at @bornsql.

The Transaction Log Is Very Resilient

Last week I wondered whether a log backup chain would survive a database migration. Unsurprisingly, the answer is “yes”.

I also asked if a log backup taken on the new instance would be usable on the old instance, if the need arose to fail back to the original environment. Unsurprisingly, the answer is “it depends”.

Setting up the Test

Using our test case from last week, we start with a database on a SQL Server 2008 R2 instance.

CREATE DATABASE [TransactionLogChain];
GO

Make sure the database is in Full Recovery Mode, and then make sure we have a full backup, otherwise the database remains in pseudo-simple mode until it is backed up.

-- Full Recovery Mode, Right Now!
ALTER DATABASE [TransactionLogChain]
SET RECOVERY FULL WITH NO_WAIT;
GO

-- Full database backup
BACKUP DATABASE [TransactionLogChain]
TO DISK = N'D:\SQLBackup\TLC.bak';
GO

We now create a table to write to, taking a transaction log backup afterwards.

-- Identity column only
CREATE TABLE [dbo].[TransactionLogChain]
([ID] INT IDENTITY(1,1) NOT NULL);
GO

Insert some rows to have a database worth having:

-- Use the GO <n> batch processor to run 50 times
INSERT INTO [dbo].[TransactionLogChain] DEFAULT VALUES;
GO 50

Take a transaction log backup of this SQL Server 2008 R2 database.

-- Transaction log backup containing table creation
-- and first 50 row inserts
BACKUP LOG [TransactionLogChain]
TO DISK = N'D:\SQLBackup\TLC_01.trn';
GO

That’s enough to begin testing our first scenario. We now restore this database to our SQL Server 2016 instance, remembering to use NORECOVERY because we have to apply the log backup afterwards.

-- Full backup is restored first
USE [master];
GO
RESTORE DATABASE [TransactionLogChain]
FROM DISK = N'D:\SQLBackup\TLC.bak'
WITH FILE = 1,
MOVE N'TransactionLogChain' TO N'D:\SQL2016\TLC.mdf',
MOVE N'TransactionLogChain_log' TO N'D:\SQL2016\TLC_log.ldf',
NORECOVERY;
GO

Apply the log backup. A good habit is to restore all log files with the NORECOVERY option as well, just in case. When we are sure we want to bring the database online, then we can simply issue a WITH RECOVERY command.

-- Log backup is restored next
USE [master];
GO
RESTORE LOG [TransactionLogChain]
FROM DISK = N'D:\SQLBackup\TLC_01.trn'
WITH NORECOVERY;
GO

Bring the database online:

RESTORE DATABASE [TransactionLogChain] WITH RECOVERY;
GO

We now have an online version of our database on the SQL Server 2016 instance.

Let’s run the insert script again to add another 50 rows to the table.

-- Use the GO <n> batch processor to add another 50 rows
INSERT INTO [dbo].[TransactionLogChain] DEFAULT VALUES;
GO 50

A quick SELECT tells us that the table now contains 100 rows.
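The check itself is a simple count:

```sql
-- Confirm the row count after the second batch of inserts
SELECT COUNT(*) AS [TotalRows]
FROM [dbo].[TransactionLogChain];
-- Expect 100: 50 rows inserted on 2008 R2, plus 50 on 2016
```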

Test 1: Using a log from a new instance

Theoretically, nothing is wrong with this process so far. SQL Server considers the log backup chain unbroken. We can take a log backup on the new instance and use that in connection with the existing log backup chain to restore our database to its current state on SQL Server 2016.

-- Transaction log backup containing extra 50 rows
BACKUP LOG [TransactionLogChain]
TO DISK = N'D:\SQLBackup\TLC_02.trn';
GO

Let’s test this. First we must drop the [TransactionLogChain] database from the 2016 instance.

USE [master];
GO
DROP DATABASE [TransactionLogChain];
GO

Restore all three files now: the full backup and transaction log backup from 2008 R2, and the 2016 transaction log. Finally, bring the database online.

USE [master];
GO
RESTORE DATABASE [TransactionLogChain]
FROM DISK = N'D:\SQLBackup\TLC.bak'
WITH FILE = 1,
MOVE N'TransactionLogChain' TO N'D:\SQL2016\TLC.mdf',
MOVE N'TransactionLogChain_log' TO N'D:\SQL2016\TLC_log.ldf',
NORECOVERY;
GO

RESTORE LOG [TransactionLogChain]
FROM DISK = N'D:\SQLBackup\TLC_01.trn'
WITH NORECOVERY;
GO

RESTORE LOG [TransactionLogChain]
FROM DISK = N'D:\SQLBackup\TLC_02.trn'
WITH NORECOVERY;
GO

RESTORE DATABASE [TransactionLogChain] WITH RECOVERY;
GO

USE [TransactionLogChain];
GO
SELECT * FROM [dbo].[TransactionLogChain];
GO

And our result is the same:

(100 row(s) affected).

Test 2: Applying a new log to an old instance

Now let’s take a look at applying a log backup chain to the original instance.

The full backup and first log file restore just fine, but on the log backup taken from 2016, we get the following error:

Msg 3241, Level 16, State 0, Line 17
The media family on device 'D:\SQLBackup\TLC_02.trn' is incorrectly formed. SQL Server cannot process this media family.
Msg 3013, Level 16, State 1, Line 17
RESTORE LOG is terminating abnormally.

As I originally suspected, the version of SQL Server on which the newer log backup is taken matters.

I was able to take a SQL Server 2008 R2 backup, restore it to SQL Server 2016, run a log backup, and restore the entire chain on SQL Server 2016.

I was not able to restore the 2016 log backup to a 2008 R2, 2012, or 2014 instance, however.

If for whatever reason we need to fail back to the original environment, using logs taken in the new environment, we must be running the same version of SQL Server.

The build does not seem to matter as much, but honestly, don’t take my word for it.

How does this help me?

Let’s say we are migrating a massive database and have used log shipping to keep the old and new environments in sync. As soon as we cut over to the new environment, we can continue taking log backups and be confident that the existing log shipping files and original full backup can be retained, until our next full backup.

Please feel free to leave a comment here, or find me on Twitter at @bornsql. I would love to hear your thoughts.

The Resilience of the Log Backup Chain

Much has been written about the log backup chain in SQL Server, where we are able to restore a database to a point in time using a combination of full, differential and transaction log backups.

This requires a database being in Full Recovery Mode and transaction log backups being taken at regular intervals.

If the database is put into Simple Recovery Mode, and then switched back to Full Recovery Mode, the log backup chain is broken, and a full or differential backup must be taken to fix it.
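To illustrate, switching a hypothetical database to simple recovery and back breaks the chain, and a differential (or full) backup restarts it:

```sql
-- Switching to Simple breaks the log backup chain
ALTER DATABASE [SampleDB] SET RECOVERY SIMPLE WITH NO_WAIT;
GO
ALTER DATABASE [SampleDB] SET RECOVERY FULL WITH NO_WAIT;
GO

-- A differential backup re-establishes the chain
-- (a full backup would work too)
BACKUP DATABASE [SampleDB]
TO DISK = N'D:\SQLBackup\SampleDB_Diff.bak'
WITH DIFFERENTIAL;
GO
```

Until that differential or full backup is taken, any log backups are useless for point-in-time recovery.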

I’ve been working on a long-term project to migrate over fifty databases running on SQL Server 2008 R2 to Azure virtual machines running SQL Server 2014.

We use a custom form of Log Shipping to keep the Azure environment in sync with production. The log chain is stored on Azure Blob Storage, and each log file is restored every half hour to the new environment using a custom PowerShell script.

This allows us to have a very small maintenance window for the final cutover, requiring just the tail of the log backup to be restored.

Last week, I wondered aloud what would happen if we had to fail back to the original system, after processing transactions on the new environment. Would a log backup taken on the Azure environment work on the older servers? Technically speaking, the log backup chain would not have been broken. Perhaps a version conflict?

Let’s look at a test case.

  1. Create a database in SQL Server 2008 R2 with Full Recovery, and perform a full backup, plus several transaction log backups.
  2. Restore the full backup and log chain to a new SQL Server 2016 instance, perform some actions here, and then perform another log backup from the new instance.
  3. Test if the newest log backup can be applied, along with the older ones, in a recovery on SQL Server 2016. I say “Yes”.
  4. Test if the newest log backup can be applied, along with the older ones, in a recovery on SQL Server 2008 R2. I say “No”.

Tune in next week for the results. You’re always welcome to comment here or discuss it on Twitter at @bornsql.

Cumulative Updates Are First Class

Since 27 January 2016, Microsoft recommends that Cumulative Updates be installed with the same confidence as Service Packs, and installed as they become available.

The official announcement was posted on 24 March 2016 (their emphasis):

As of January CU releases, these caution messages have been updated, we now recommend ongoing, proactive installation of CUs as they become available. You should plan to install a CU with the same level of confidence you plan to install SPs (Service Packs) as they are released.

[Note: This guidance is also posted in the latest Cumulative Updates themselves, if you need additional proof to show your boss.]

This is new guidance and shows more rapid and proactive support for SQL Server from Microsoft.

With that in mind, I would strongly recommend that you make sure your production instances of SQL Server are up to date by installing the latest Service Pack and Cumulative Update for your instance.

If you would like to know what version you’re currently running, issue the command SELECT @@VERSION against your instance.

For example, the latest instance on a virtual machine I use is at 12.00.4436.
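Alongside SELECT @@VERSION, the SERVERPROPERTY function returns the same information in a more script-friendly form:

```sql
-- Check the running version and patch level
SELECT @@VERSION AS [VersionString];

SELECT SERVERPROPERTY('ProductVersion') AS [ProductVersion], -- e.g. 12.0.4436.0
       SERVERPROPERTY('ProductLevel')   AS [ProductLevel];   -- e.g. SP1
```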

Using the version matrix on the Microsoft SQL Server Version List, I know that this instance is running SQL Server 2014 Service Pack 1, with Cumulative Update 4.

It also shows that there is a newer Cumulative Update (CU 5) available, which will bring my instance up to the very latest version (as of this writing).

Brent Ozar Unlimited has a simpler site, SQL Server Updates, if all you want is the very latest Service Pack or Cumulative Update for supported versions of SQL Server.

If you have any comments about Cumulative Updates, Service Packs, or your favourite kind of chocolate, feel free to reach out to me on Twitter at @bornsql.

What is your SQL Server 2005 Upgrade Plan?

SQL Server 2005 will no longer be supported by Microsoft as of 12 April 2016.

One of the services I offer my customers is an upgrade from any* version of SQL Server to the latest and greatest version.

It is helpful to remember that version upgrades are quite tricky. The database engine does not support upgrades of more than two major version numbers at a time.

That means that, if you were to upgrade to SQL Server 2014 or 2016, you would need an intermediate step if you’re coming from anything before 2008.

Note: this rule applies to in-place upgrades only. If you are migrating to a new server and perform a backup/restore or detach/reattach, any database from 2005 and upwards does not require this intermediate step.

Even with upgrading to a higher version, you can still run your database in the same compatibility level as the version you’re upgrading from if it is in that two version window. This gives you compatibility with legacy code and applications that assume they are working with an older version.

For example, if you upgrade a SQL Server 2005 instance to SQL Server 2012, you can still run the database in compatibility level 90, which is equivalent to SQL Server 2005.

In fact, SQL Server 2014 will allow you to attach a SQL Server 2005 database, but the minimum compatibility level will be automatically updated to 100 (Source).
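Setting the compatibility level after an upgrade is a one-line change. A sketch, using a hypothetical database name:

```sql
-- After upgrading the instance to SQL Server 2012,
-- keep the database behaving like SQL Server 2005
ALTER DATABASE [LegacyDB]
SET COMPATIBILITY_LEVEL = 90;
GO

-- Check the current compatibility level
SELECT [name], [compatibility_level]
FROM sys.databases
WHERE [name] = N'LegacyDB';
```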

Installed Version           Can Upgrade To       Compatibility Level
SQL Server 6.5 (65)         SQL Server 2000      2000: 80, 70, 65
SQL Server 7.0 (70)         SQL Server 2005      2005: 90, 80, 70
SQL Server 2000 (80)        SQL Server 2008/R2   2008/R2: 100, 90, 80
SQL Server 2005 (90)        SQL Server 2012      2012: 110, 100, 90
SQL Server 2008/R2 (100)    SQL Server 2014      2014: 120, 110, 100
SQL Server 2012 (110)       SQL Server 2016      2016: 130, 120, 110
SQL Server 2014 (120)       SQL Server 2018**    2018**: 140, 130, 120

Thomas LaRock covers some of this in a good checklist to look through when you’re considering an upgrade to SQL Server 2014.

* Any version of Microsoft SQL Server from 6.5 and higher.

** Based on current trends. Microsoft has not announced any product that will be available after SQL Server 2016.