Max Server Memory Revisited

Some time ago, I added a free resource to this site to help DBAs know how much max server memory should be assigned to SQL Server, based on an algorithm by Jonathan Kehayias.

On 19 August 2016, I went one step further and wrote a free script to perform the calculation for your own instance, and even provided the T-SQL code to make the change. The only thing you have to do is run it.
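If you'd like to sanity-check the number the script produces, the rule of thumb behind it can be sketched in a few lines. This is an illustrative sketch (in Python rather than T-SQL) of the commonly cited form of the algorithm: reserve 1GB of RAM for the operating system, 1GB for every 4GB of RAM between 4GB and 16GB, and 1GB for every 8GB above 16GB. The script itself remains the authoritative version.

```python
def max_server_memory_mb(physical_ram_gb: int) -> int:
    """Suggest a 'max server memory (MB)' value for a dedicated SQL Server box."""
    # Reserve 1 GB of RAM for the operating system...
    reserved_gb = 1
    # ...plus 1 GB for every 4 GB of RAM between 4 GB and 16 GB...
    if physical_ram_gb > 4:
        reserved_gb += (min(physical_ram_gb, 16) - 4) // 4
    # ...plus 1 GB for every 8 GB of RAM above 16 GB.
    if physical_ram_gb > 16:
        reserved_gb += (physical_ram_gb - 16) // 8
    return (physical_ram_gb - reserved_gb) * 1024

# A 16 GB server reserves 4 GB, leaving 12288 MB for SQL Server.
print(max_server_memory_mb(16))   # 12288
# A 32 GB server reserves 6 GB, leaving 26624 MB.
print(max_server_memory_mb(32))   # 26624
```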

Download the script from GitHub

You can visit the download page right here.

Run the script on your SQL Server instance

It’s perfectly safe to do so, as all it does is run a series of SELECT statements. Because the script helps you modify a server-level setting, however, you must be a member of the sysadmin fixed server role to apply the change.

NOTE: You may have to enable advanced options first. This is a requirement before the Max Server Memory setting can be changed.

Here’s the script running on SQL Server 2008 R2:

And here’s the same script running on SQL Server 2016:

Copy the configuration change script from the [Script] column

-- Show Advanced Options might
-- need to be enabled first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
-- Set the value suggested by the script
EXEC sp_configure 'max server memory (MB)', 11264;
RECONFIGURE WITH OVERRIDE;
GO
-- Turn off Show Advanced as best practice
EXEC sp_configure 'show advanced options', 0;
RECONFIGURE;
GO

Make that change!

If you did it right, you’ll see something like this in the Messages pane.

Configuration option 'max server memory (MB)' changed from 2147483647 to 11264. Run the RECONFIGURE statement to install.

This should make your life easier when setting up a new server, or when reconfiguring the RAM allocation on a virtual machine.

If you have any questions or comments, please find me on Twitter, at @bornsql.

Importing a BACPAC into Azure SQL Database

In our final look at migrating a database to Azure SQL Database, we will import a BACPAC file.

From previous weeks we know that a BACPAC file contains the full schema, as well as the data (in BCP format), of the database we are migrating. It is based on the structure of a DACPAC file, which is created when we export a Data-tier Application.

This step is also surprisingly easy, because it uses knowledge we have gained in other blog posts.

To import the BACPAC, the file must be uploaded to Azure Blob Storage. We can either use AzCopy or the AzureBlobStorageSync tool to upload the file to a standard container.

Once the file is uploaded, we can use PowerShell or the Azure Portal. For demonstration purposes, I will use the Portal website.

Upload the BACPAC

Using AzCopy, copy the file to a container. Remember that we cannot use Premium Storage for this file, but that works out fine because it’s much cheaper to use Standard Blob Storage anyway.

For example:

AzCopy /Source:D:\Temp\bacpac /Dest:https://myaccount.blob.core.windows.net/mycontainer /DestKey:key /Z:D:\_Ops\AzCopy /Pattern:"sourcefile.bacpac"

(Refer to the previous blog post about AzCopy to understand what these switches mean.)

Importing the BACPAC

With the Azure Portal open, navigate to our SQL server. Notice the Import database option on the top menu.

As with the Management Studio wizard, we need to input some information to get the database created and populated.

In the Storage configuration, navigate to the container where we just uploaded that BACPAC file, and select it. The rest is simple. Once you click on “OK”, the database will be created.

Conclusion

And there you have it. Two ways to migrate a SQL Server database from on-premises to Azure SQL Database.

I presented a session to the Calgary PASS Chapter last week on this topic and discovered that network speed matters a great deal. I recommend using the two-part method, via a BACPAC file, so that you can control the process and not have to rely on slow network connections. AzCopy is at least predictable.

If you have any questions or comments, you can find me on Twitter as usual, at @bornsql.

Moving a database to Azure SQL Database

This week we will actually move a database into Azure SQL Database, using the first of Microsoft’s two recommended methods.

The main thing to keep in mind is that SQL Server (on-premises, or “earthed”) and Azure SQL Database (“cloud”) are not the same product. They support the same schemas and data, and allow the same scripts and applications to run, but they are not the same thing.

This is important, because when we migrate to Azure SQL Database from an on-premises SQL Server environment, we can’t simply do a backup and restore of our database.

The simple reason is that there is no file system access in Azure SQL Database, so there is nowhere to put a backup file, let alone restore one.

The more complicated reason is that we don’t have access to any instance-level features, including server configuration, tempdb access, and so on, and therefore our databases must be self-contained.

The recommended method to migrate a database to Azure SQL Database is the Migration Wizard in SQL Server Management Studio.

For larger databases, we can create an extract first, which can be imported later using the necessary tools.

Under the covers, the migration is using SQL Server Data Tools to extract a Data-tier Application (DAC). This acronym is not to be confused with Dedicated Administrator Connection.

According to MSDN (emphasis added):

A data-tier application (DAC) is a logical database management entity that defines all of the SQL Server objects – like tables, views, and instance objects including logins – associated with a user’s database. A DAC is a self-contained unit of SQL Server database deployment that enables data-tier developers and database administrators to package SQL Server objects into a portable artifact called a DAC package, also known as a DACPAC.

A BACPAC is a related artifact that encapsulates the database schema as well as the data stored in the database.

From this, we can establish that we are creating a BACPAC file, containing the schema in XML format, a few metadata files with version information, and of course the data itself, in BCP (Bulk Copy) format. This is then compressed (using standard PK-ZIP format) in order to save space.
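Because the package is a standard ZIP archive, you can peek inside a BACPAC with any ZIP tool. The sketch below (in Python) builds a toy archive containing the kinds of entries a BACPAC holds, then lists them back; the entry names are illustrative, as the exact layout depends on the DacFx version that produced the file.

```python
import io
import zipfile

# Build a toy in-memory archive with the kinds of entries a BACPAC contains.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, 'w', zipfile.ZIP_DEFLATED) as z:
    z.writestr('model.xml', '<DataSchemaModel />')            # schema, XML format
    z.writestr('Origin.xml', '<DacOrigin />')                 # version metadata
    z.writestr('Data/dbo.Person/TableData-000.bcp', b'...')   # data, BCP format

# Reading it back is how you would inspect a real BACPAC:
# replace `buffer` with the path to your .bacpac file.
with zipfile.ZipFile(buffer) as z:
    names = z.namelist()
    print(names)
```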

Anyone who has used other products that create database dump files will be familiar with this process.

The BCP file format is not human-readable, unlike, say, MySQL’s dump files, and if we look at the MSDN article on format files, it will make our heads spin.

That said, I was able to pick out string values from the BCP file in my test that corresponded with row values in the source database table.

To be on the safe side, I like to break up the process into two stages, so that if the second part fails, we can retry it at our convenience:

  • Create the BACPAC file locally and verify it;
  • Import the BACPAC file into Azure SQL Database.

If the database is small enough, we use the Migration Wizard inside the latest version of SQL Server Management Studio and do both steps in one go. Let’s do that now.

(The following images are copyright © 2016 Microsoft Corporation, retrieved from the official documentation pages.)

Inside SQL Server Management Studio, right-click on the database we want to migrate. In this case, it’s [AdventureWorks2012]. Click Tasks > Deploy Database to Microsoft Azure SQL Database….

Here we specify the Server connection of our Azure SQL account, by clicking on the Connect… button:

This is where we connect to the Azure SQL account. Since this migration wizard is doing all of the heavy lifting, we do not need to create a database beforehand.

Now that we have connected to our Azure account, we can create the database on this screen. Keep in mind the different editions and their associated costs: we won’t pick the $7,000 per month option unless we can afford it.

Once we start the migration, it can take a while, and it may even fail due to compatibility issues, which have to be resolved.

We briefly touched on these last week, and they must be resolved before migrating. We would use the latest SQL Server Data Tools to identify errors or warnings.

But because we’ve done our compatibility homework, everything works the first time.

It’s really that simple.

If the database is large, or the network connection is unstable or slow, we will have to do the import from our BACPAC file or using BCP directly. That will be covered in next week’s post.

If you have any questions or comments about this process, feel free to ask me on Twitter at @bornsql.

Should I move my on-premises database to Azure SQL Database?

This week’s post is shorter than last week’s, but it answers a very important question:

Should I move my on-premises database to Azure SQL Database?

The short answer is:

It depends.

Firstly, why do you want to move your database to the cloud? Is it a business requirement? A better way to ask this is, does it help your business to move to Azure?

One of my clients runs a huge custom application, supported by SQL Server, running off a single 4GB database. They’ve been talking for years about moving to Azure or Amazon Web Services. Their current server is more than five years old, and the SQL Server version is 2008 Web Edition.

Moving to Azure SQL Database makes sense for them because they would benefit from new features and not have to worry about purchasing new hardware or new licences for Windows Server and SQL Server. Their developers can focus on improving the application and not worry about a database server going down.

This is probably the first main reason why people want to migrate.

Something to keep in mind, though, is that you’re migrating a single database at a time. If you have a whole lot of databases that support different parts of your business, and you don’t want to run on-premises anymore, it makes more sense to investigate moving to Azure (or even Amazon or Rackspace) Virtual Machines instead. There, the cost of upgrading hardware is removed, you can use the latest version of SQL Server as required, and everything else stays the same.

Another main reason for moving to Azure SQL Database is when your database supports a website or web application.

Say you run a website with a very basic database structure, but you have a lot of users and need to store persistent data for a hugely popular mobile app.

It makes a lot of sense to move this kind of database to Azure SQL Database because it lets you forget about the administrative costs of maintaining a server, licences, operating system patches, SQL Server updates, backups, maintenance tasks, and all the rest that goes with it, and concentrate on your mobile app.

Another consideration is cost. You can save money in the short to medium term by moving to Azure SQL Database. Remember, though, that after a certain number of years you would be paying the same as (or more than) purchasing new hardware and licences and getting a DBA on speed dial. So for startups, Azure SQL Database makes a lot of sense.

One final thing, that I cover in my new talk on Migrating to Azure SQL Database (insert plug here), is compatibility.

While Azure SQL Database supports most of the same features as SQL Server 2016, there are some things that could scuttle your plans from the get-go. It is very important to investigate those before jumping in and writing code.

For example, because we don’t have access to instance-level functionality, including the master and tempdb databases, database collation is critically important to get right. Azure SQL Database uses the SQL_Latin1_General_CP1_CI_AS collation by default. For starters, if your database uses a different collation, you can expect performance issues and bad query plans, because tempdb will be on a different collation from yours. It goes downhill from there: some applications require a specific collation to work correctly.
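The difference between case-insensitive (CI) and case-sensitive (CS) comparison is easy to demonstrate outside of SQL Server. Here is a rough illustration in Python; SQL Server’s collation rules are more involved (covering accents, width, and sort order), but the principle is the same.

```python
a, b = 'Smith', 'SMITH'

# Under case-insensitive (CI) rules, these strings compare equal.
ci_equal = a.casefold() == b.casefold()

# Under case-sensitive (CS) rules, they do not.
cs_equal = a == b

print(ci_equal, cs_equal)   # True False
```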

Another limitation is that you are required to use Azure Active Directory or SQL authentication. Certain usernames are forbidden, which might require changes in your applications.

A short list of features not supported by Azure SQL Database is available in Microsoft’s documentation.

I hope this has been an informative read. Now that I’ve scared you away, next week we will finally get into the “how” of “how to migrate to Azure SQL Database”. Stay tuned.

If you would like to share your horror stories about failed migrations, find me on Twitter at @bornsql.

An Introduction to Azure SQL Database

It’s the second half of 2016, and Azure is the new hotness. I’ve raved about Blob Storage, virtual machines, AzCopy, and all those good things. When the dust settles, though, this blog is about managing data, and sometimes we don’t want to worry about the hassle of maintenance and administration.

Azure SQL Database takes the Administrator out of “DBA”. It’s a cute soundbite, but is it true? Let’s go through a short primer.

What’s with the unwieldy name?

Azure SQL Database is a bit of a mouthful, but be thankful that Satya Nadella took over from Steve Ballmer. Before that happened, it was Windows Azure SQL Database, or WASD. Can you imagine?

“ASD” is an entry from the Diagnostic and Statistical Manual of Mental Disorders, and we can’t call it “SQL Azure” either. “Azure SQL Database” is the official name, so let’s stick with it.

Who is Azure SQL Database for?

For a long time, I struggled to answer this question myself, mainly because I did not understand how anyone could get by with a tiny SQL Server database. The Azure SQL Database tiers seem, well, limited on the low end and expensive on the high end:

Basic – 2GB (up to $5 per month)
Standard – 250GB ($15 to $50 per month)
Premium – 500GB–1TB ($465 to $7,001 per month)

That last amount is not a typo. We can easily spend seven thousand US dollars, every month, running a 1TB Azure SQL Database.

Compare and contrast that with a fully-licenced Azure DS13 virtual machine, on Windows Server 2012 R2, with 56GB RAM, 8 CPU cores, and a 400GB system drive, running SQL Server 2016 Enterprise Edition, which (even with four 1TB disks running in a RAID 10 configuration) costs less than US$4,000 per month.

If it’s so expensive, who is it for?

This website, bornsql.ca, runs on a LAMP stack, which stands for Linux, Apache, MySQL and PHP. Linux is the operating system, Apache is the web server, MySQL is the database, and PHP is the programming language. I need all four of these things, with their interdependencies, to provide you with a website to read and comment on. That’s a lot of overhead, and it costs more than $5 a month, not even counting the administration costs.

Microsoft traditionally offers a similar stack, with Windows Server as the operating system, Internet Information Services (IIS) as the web server, SQL Server as the database, and ASP.NET (or another common language like C#, or even PHP if you want) as the programming language.

Vast swaths of the Internet are still hosted on Linux- or Windows-based web servers, with millions of hours dedicated to software updates, malware removal, replacing failing hard drives, taking regular backups, recovering from power failures, and so on. It adds up.

What Azure (the entire service offering) does is completely decouple these and other components from one another. For example, if you want to build a static website with no underlying database, you just need Azure App Service. Want to add a database? Throw in an Azure SQL Database. No operating system required. No server required. It’s all there, under the covers, and we don’t have to worry about it.

With Azure, we’re paying for ease of deployment of new stuff, with no administration and maintenance to worry about. Azure “just works”.

This sounds like virtualisation at a large scale

Sort of.

A funny thing happened in the last few years. Hardware kept getting faster and cheaper, and software stayed the same. This meant that newer hardware was being wasted because software just didn’t need it.

Some enterprising technologists realised that they could virtualise environments by migrating physical servers to virtual servers, enabling multiple VMs (virtual machines) to run on one physical server.

Over time, they figured out how to configure VMs to make the best use of shared resources, so that the VMs run as well as, or even better than, they did on their old physical servers. Fewer physical servers means lower administration costs, lower electricity bills, and so on.

Cloud computing providers, including Amazon Web Services, Google Cloud Platform, and Microsoft Azure, are basically heaps and heaps of servers that run custom software.

Is ease of administration worth the extra money?

I don’t know. That depends on you. This is why it was a difficult question to answer at first.

Imagine, instead of worrying about software and hardware costs, you can focus on growing your business and only pay for more resources when you need them. That’s the appeal of Azure.

How does Azure SQL Database fit in?

Azure SQL Database is a specific service offering that caters to people and companies who do not want to manage their own database servers and will be fine with “just” 50GB databases to run a particular web or mobile application. No need for a dedicated Database Administrator.

This is exciting! When can I migrate my database to Azure SQL Database?

Tune in next week to find out whether it’s a good idea to move your environment to Azure and how to get your on-premises SQL Server database there.

Comments

If you have any questions or comments, look me up on Twitter, at @bornsql.