Should I upgrade SQL Server 2008 / R2, or migrate to Azure?

Brent Ozar Unlimited runs a website called SQL Server Updates, which comes in really handy for keeping your on-premises SQL Server up to date.

Last week I noticed something interesting: if you’re running SQL Server 2008 or 2008 R2, Microsoft’s extended support for it ends on 9 July 2019. That’s two versions of SQL Server, almost two years apart in release cycle, with different feature sets, expiring on the same day.

Granted, by that date SQL Server 2018 will probably have been released, which would make 2008 officially five versions old and 2008 R2 four versions old. But in corporations, time moves both slowly and quickly. By that I mean, planning to upgrade takes a long time, and before you know it, it’s 9 July 2019 and everyone is panicking.

Today is the right time to start your upgrade plan. A lot of the customers I speak to who are running 2008 or 2008 R2 are talking about upgrading to 2012 or 2014 right now.

On Twitter, I said that after the end of next year, I would not recommend upgrading any instances to SQL Server 2012, for this reason:

In 2019, moving to SQL Server 2016 buys you 7 more years of support. I wouldn’t consider moving to SQL Server 2012 after the end of 2017.

At the end of 2017, SQL Server 2012 will be more than five years old. Whatever your best intentions, a major upgrade is never quick, and since SQL Server licensing costs the same regardless of the version you buy, there’s no practical reason not to go to at least SQL Server 2014, and preferably SQL Server 2016 (which I consider the biggest release since SQL Server 2005). You can always run your database in 2008 compatibility mode on SQL Server 2014.
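For example, keeping the old behaviour after a restore onto a newer instance is a one-line change. A minimal sketch, with a placeholder database name (compatibility level 100 corresponds to SQL Server 2008 and 2008 R2):

-- Keep 2008-era behaviour on a SQL Server 2014 instance (level 100 = 2008 / 2008 R2)
ALTER DATABASE [YourDatabase] SET COMPATIBILITY_LEVEL = 100;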

I wrote down some reasons to upgrade from SQL Server 2005 last year. These reasons still apply.

Whither Azure?

Which brings up the big question: what about Azure SQL Database? As you know, my first question when customers ask about moving to Azure SQL Database is “Are you sure?”, followed by “Are you very sure?”.

By the time July 2019 rolls around, Azure will be ten years old. That’s hard to fathom right now, but at its current rate of progress, Azure SQL Database will be a mature platform by then (it already is mature for appropriate workloads, when taking elastic pools into account).

So, if you meet all of the following conditions:

  • you use SQL Server 2008 or 2008 R2,
  • you plan to upgrade after 2017, and
  • you’re running a database smaller than 1TB,

I would recommend that you seriously consider migrating to Azure SQL Database, instead of upgrading to an on-premises database server running SQL Server. The time is right, at the end of 2016, to spend 18 months on an appropriate migration strategy to achieve this.

Yes, 18 months from now we’ll be deep into 2018. Scary, isn’t it?

Act now.

If you want to discuss this on Twitter, look me up at @bornsql.

Azure Storage Throughput

Hot on the heels of DTUs from earlier this week, today we’re going to switch focus a little, while staying with Azure, and have a quick look at storage speeds on SQL Server running on Azure Virtual Machines (Infrastructure-as-a-Service).

Microsoft recently lowered their prices for the more popular VMs, which makes this an attractive solution for a lot of folks who don’t want to run their servers on-site.

However, there’s a limit to how fast our storage can get, despite the attraction of solid-state drives on the Premium Storage offering.

We in the industry refer to “latency” when discussing storage. For this discussion, think of it as how long it takes to access 1 MB of data. Target latency for SQL Server is 5 ms to 10 ms. In other words, in the best-case scenario, SQL Server should be able to access 1 MB of data in between five- and ten-thousandths of a second.

Everything on Azure is virtualised, from the CPU and RAM, down to the storage layer. Behind the scenes, all of these components are connected by what Microsoft calls the “fabric layer”. In simpler terms, it’s the network, stupid.

Yes, it has a low latency. Yes, a customer of mine did save almost 50% of their budget by switching from a data centre to Azure, and yes, they did get a nice performance boost in the process.

However, there’s a limit, and that is the theoretical limit on network throughput. The absolute best case for any local network these days is 10 Gb/s, or in other words, 1.25 GB per second (there are 8 bits in a byte, so we have to divide the bits by eight to get the bytes per second).

In realistic terms, though, the connection between our virtual machine and the SSD Premium Storage is, depending on our region and storage tier, more likely to be 1 Gb/s, or 125 MB per second. It’s not really clear from the Azure documentation what the actual limits are, but I’ve personally seen it max out at 125 MB/s. The region we host in matters a great deal.

Inverting that 125 MB/s figure to get the latency, we’re seeing 8 ms at best. That’s still within the window of 5–10 ms, so it’s not terrible. Now add in the fact that we’re not the only person using that underlying hardware, and the latency starts creeping up. There will be competition on the VM host, the network card, the storage, and it all adds up. Pretty soon we’re looking at 20–30 ms of latency, which is now more than twice as long as the preferred amount.
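If you want to check that arithmetic yourself, here’s a quick PowerShell sketch; the 1 Gb/s figure is the assumption from above:

# Convert an assumed 1 Gb/s of effective throughput into time-per-MB
$gigabitsPerSecond = 1
$mbPerSecond = $gigabitsPerSecond * 1000 / 8      # 1 Gb/s = 125 MB/s
$msPerMB = 1000 / $mbPerSecond                    # 8 ms to move 1 MB
"{0} MB/s works out to {1} ms per MB" -f $mbPerSecond, $msPerMB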

No matter what the CPU speed is, or how much RAM we have, our bottleneck is always going to be the network. Those of us with experience of older SANs (storage area networks) will recognise this. Even the fastest spindles (hard drives) in the SAN are meaningless when we’re connected to the server with a 1 Gbps half-duplex network card (which cannot send traffic while it is receiving).

As I say on this blog and in person, Azure is not the answer for everyone (whether we’re looking at SQL Server on an Azure VM or an Azure SQL Database). We need to make sure that we understand the inherent limits of the infrastructure before committing to a long-term project to migrate everything.

I’m not saying Azure isn’t a good idea. I just pointed out above that a client I migrated recently was very happy with their outcome. Their data centre provider offering was terrible, and Azure was a great improvement. I personally run a production system on SQL Server, running on a small Azure VM.

20–30 ms might be twice as fast as your current solution. If your database is running on 7200 rpm SATA drives in a RAID 5 configuration, you’ll see a massive increase in performance by switching to an Azure VM with P30 Premium Storage.

Here are some fantastic resources to help you plan your migration better:

It all comes back to the question: Are you sure?

Let me know your Azure migration plans on Twitter, at @bornsql. I look forward to hearing from you.

Database Throughput Units

Last time we briefly touched on the metric that Microsoft uses to keep your Azure SQL Database in check: the DTU, or database throughput unit.

It combines CPU, I/O, and log flushes per second, using a special algorithm, into a single unit.

The reason for this is simple: Our databases don’t use the same resources all the time. We will have busy periods and quiet periods. Some of us are fortunate and only need the database to be available during business hours.

This leaves a bit of mystery around what sort of resources our database may need, if we want to run it in Azure instead of on premises.

The DTU Calculator, a third-party service created by Justin Henriksen (a Microsoft employee), estimates the DTU requirements for an on-premises database that we want to migrate to Azure. It works by capturing a few performance monitor counters and then running a calculation on those results to recommend a service tier for our database.

Justin provides a command-line application or PowerShell script to capture these performance counters:

  • Processor – % Processor Time
  • Logical Disk – Disk Reads/sec
  • Logical Disk – Disk Writes/sec
  • Database – Log Bytes Flushed/sec

For best results, we should run this session during the busiest time of the day for our database, so as to get a realistic idea of what the load is. Justin recommends running it for one hour.
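If you’d rather capture the counters with plain PowerShell, here is a rough sketch (this is not Justin’s script; the SQL Server counter path assumes a default instance, and you may need to reshape the CSV before uploading it):

# Capture one hour of one-second samples and write them to CSV
$counters = @(
    '\Processor(_Total)\% Processor Time',
    '\LogicalDisk(_Total)\Disk Reads/sec',
    '\LogicalDisk(_Total)\Disk Writes/sec',
    '\SQLServer:Databases(_Total)\Log Bytes Flushed/sec'  # named instances use MSSQL$InstanceName:Databases
)
Get-Counter -Counter $counters -SampleInterval 1 -MaxSamples 3600 |
    Export-Counter -Path 'C:\Temp\dtu-counters.csv' -FileFormat CSV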

Once captured, the CSV file containing the output is uploaded to the DTU Calculator site, which takes a few seconds to provide a recommended service tier for our database. We can then see how much it would cost to run our on-premises database in an Azure SQL Database environment (assuming, of course, that it is compatible).

In the sample I captured for an existing client for the purposes of this post, I was surprised to find that the Premium Tier was recommended, at level P2.

DTU Calculator graph

Based on your database utilization, we recommend you migrate your SQL Server workload to Premium – P2. This Service Tier/Performance Level should cover approximately 91.83% of your utilization.

NOTE: There is approximately 8.17% of your workload that falls into a higher Service Tier/Performance Level. After migrating your database to Azure, you should evaluate your database’s performance using the guidance mentioned in the information section above.

This means that it would cost more than $1,350 (CAD) per month (assuming a 24/7 uptime) to run this database in the cloud at the P2 level. A SQL Server Standard Edition licence for the existing hardware, when it is time to upgrade, will cost a shade under $10,000 (CAD) for four cores. In other words, the licence will pay for itself within eight months.

Perhaps this is worthwhile for the customer, if they do away with hardware and other on-site costs, but if their database load increases for any reason and resource limits are repeatedly hit, they will have to move up to the next level (P3), which is immediately double the monthly cost.

Please make use of the DTU Calculator if you intend to migrate an existing database to Azure SQL Database. It should form an important part of your migration plan. There is no reason not to, and it will save you the pain of making a costly and time-consuming mistake.

Let me hear your DTU results on Twitter at @bornsql.

Azure SQL Database Limits

Let’s talk briefly about resource limits with Azure SQL Database. Because we have to share resources with other users, and because Microsoft doesn’t want us affecting others dramatically, they have implemented some limits.

If for some reason our database does get overeager, the operations relating to CPU, RAM and I/O will be queued up by the underlying resource manager, which will allow those operations to continue once the resources are free. Yes, that will slow down our system.

Requests for resources other than CPU, memory, log I/O, and data I/O will, as noted in the documentation, be denied outright, and clients will see an error message.

Microsoft uses something called DTUs (Database Throughput Units, a combination of CPU, RAM and log writes) to figure out our Azure SQL Database’s resource usage.
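If you already have a database running in Azure and want to see how close it gets to those limits, the sys.dm_db_resource_stats view (which keeps roughly an hour of history in 15-second slices) is worth a look. A minimal sketch:

-- Recent resource usage, expressed as a percentage of the service tier's limits
SELECT TOP (20)
    end_time,
    avg_cpu_percent,
    avg_data_io_percent,
    avg_log_write_percent,
    avg_memory_usage_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;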

Check out the DTU Calculator to find out more, and see how your on-premises database compares. Next week, we will go into more detail about DTUs, and how to calculate the DTU usage of an on-premises database.

It gets really complicated, really quickly, but using the DTU Calculator before we decide to move to Azure SQL Database, and being conservative with our estimates, will make our lives easier.

The biggest an Azure SQL Database can be on the Standard Tier is 250 GB. The biggest it can be on the Premium Tier is 1,000 GB.

Coincidentally, the SQL Server Customer Advisory Team (SQLCAT) wrote a blog post titled “Real-World Azure SQL DB: Unexpected Database Maximum Size Limit” last month, about a gotcha when upgrading between service tiers. It is recommended reading.

It helps to draw parallels between Standard Tier and Standard Edition of SQL Server, and Premium Tier and Enterprise Edition of SQL Server.

(Edit: With SQL Server 2016 Service Pack 1, you can now create Memory-Optimized objects in all editions of SQL Server.)

There is a Basic Tier for Azure SQL Database, but the resource limits are so strict that I barely consider it. Perhaps if your workload is no bigger than the small MySQL database behind a WordPress website with a caching plugin, it may be appropriate.

Using elastic pools (which let several databases share a pool of DTUs that shrinks and grows as your requirements change) can help with balancing costs across several databases, but it is going to seem more expensive at the outset.
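To give you a flavour of what that looks like, here is a rough AzureRM PowerShell sketch; the resource group, server, pool name and DTU numbers are all placeholders, not recommendations:

# Create a 100 eDTU pool and move an existing database into it (all names are placeholders)
New-AzureRmSqlElasticPool `
    -ResourceGroupName 'MyResourceGroup' `
    -ServerName 'myazuresqlserver' `
    -ElasticPoolName 'MyPool' `
    -Edition 'Standard' `
    -Dtu 100 `
    -DatabaseDtuMin 0 `
    -DatabaseDtuMax 50

Set-AzureRmSqlDatabase `
    -ResourceGroupName 'MyResourceGroup' `
    -ServerName 'myazuresqlserver' `
    -DatabaseName 'MyDatabase' `
    -ElasticPoolName 'MyPool'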

The bottom line with Azure SQL Database is that the resource limits may take you by surprise if you don’t plan correctly. Spend the time to evaluate the requirements of your database, and don’t just use Azure SQL Database because it’s new and shiny. If you currently have an Enterprise licence for SQL Server, chances are you’ll spend the equivalent (or more) on a Premium Tier database.

Azure SQL Database is not a way to save money on your SQL Server licences. It is a different technology, with different resource requirements, that caters to specific workloads. Choose wisely.

If you have some thoughts about DTUs, come and talk to me on Twitter, at @bornsql.

Planning matters

It’s the 2016 PASS Summit at the end of this month, and I’ll be attending for my third year.

Because I’m a glutton for punishment, I’ll also be writing bonus posts for the month of October to celebrate Summit.

Fortunately, because I’m the luckiest person in the world, I am attending the Pet Shop Boys concert in Vancouver on the Monday evening beforehand, so that is going to be exciting, too.

Getting on a plane and flying from Calgary to Seattle is such a convenience these days. You book your flights, drive to the airport, breeze through airport security, and the plane jets you, undelayed, to Vancouver, where you hop on a train to your hotel, have dinner nearby, and then walk to the concert. The following morning after a good night’s uninterrupted sleep, you hop back on the train to the airport as scheduled, pass quickly through passport control (so as not to trouble you with that on the other side, one of the benefits of flying to the US out of Canada), once again breeze through airport security, and the plane jets you to Seattle-Tacoma airport, where you get off the plane quickly, fetch your luggage, breeze through customs, and there you are. A short train ride to the hotel later, you’re ready to face the networking track of the Summit for three days.

If only migrating a SQL Server database to Azure SQL Database were this simple.

Of course, I’m joking. There are many very important steps involved when thinking about flying to another country, even if it’s just for a few days. This is especially complex when you add in a side-trip to Vancouver for an evening. It requires careful planning, making sure enough time exists to get everything done. And you can’t forget your passport either.

So why do so many people in the Information Technology industry want to do things without careful planning?

Every time a new technology comes out and becomes trendy (and I’m guilty of this at times, too), we all jump on the bandwagon and start playing around with it. So far, so good. That’s the equivalent of kicking the car tyres.

And then management says (because management plays golf with vendor sales, or the developer lead likes that it uses tabs instead of spaces) that we have to move to this new technology to save the company lots of money. Less good, but doable.

The next logical step is to do a proof of concept. That’s where we research the requirements of the existing system, the limitations of the new technology, and build a working model to see if it is fit for purpose. If it is not fit for purpose, the amount of time we have expended is minimal, and we continue bug fixing on the older technology stack, letting management know that it is not possible, or practical, to move to this new technology.

If the proof of concept shows merit, then we go back to the project manager and business analysts, and begin planning for migration. We include all the stakeholders, including customers (internal and external), and slowly develop clear goals and timelines. There’s a project lead, and politics don’t enter into it. And unicorns are real.

Why is it that we want so badly to “play” with the new technology that we lose sight of this necessary step?

Microsoft announced Azure SQL Database in 2009. While it has been seven years since that time, and children born in that year are now at school, it’s a very short time in the technology world. SQL Server itself is only 27 years old. COBOL, which is still very much in use, is 57 years old. C is 44 years old. Large chunks of the Internet infrastructure are built on C and C++.

My point is, if we are going to move a company’s systems to newer technology, we have to consider longevity:

  1. How long will the new technology be around?
  2. How long will the system we’re migrating be around?
  3. (Bonus question) Will someone getting upset on the Internet destroy my system?

The bonus question relates to the crisis NPM faced earlier this year, when a single 11-line JavaScript package was unpublished, causing thousands of projects that depended on it to fail.

Moving to Azure SQL Database is a complicated process. While the underlying structure might look familiar, and while most of our code will probably work fine, there are major stumbling blocks that could scupper our plans entirely.

Take for example the fact that you are not allowed to address tempdb directly. You can create local temp tables and table variables just fine, but you cannot check for their existence by referring directly to tempdb.

This works:

CREATE TABLE #test (col1 INT);
DROP TABLE #test;

This does not:

IF EXISTS (SELECT * FROM tempdb..sysobjects
WHERE id = OBJECT_ID('tempdb..#test'))
DROP TABLE #test;
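If you can target current syntax, one workaround (supported on Azure SQL Database, and on SQL Server 2016 and later) is to skip the tempdb reference entirely:

-- No reference to tempdb required
DROP TABLE IF EXISTS #test;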

This “minor” issue occurs in many more places than you might think it does, because everyone reading this knows about that one legacy system we’re too scared to touch.

If management is pressuring you to migrate to the new stuff because it’s trendy, they’re doing it wrong.

Careful planning is what you need. This includes a proof of concept, a 100% pass rate using the Data Migration Assistant, and a project manager who understands the complexities involved, plus a proper testing plan that includes automated unit tests, edge cases, and humans bashing away at keyboards and mice.

And don’t forget to plan for a disaster, especially for a disaster that occurs during the migration.

Share your migration horror stories with me on Twitter, at @bornsql.

The Data Migration Assistant

I’ve written quite a lot about Azure SQL Database recently, but that doesn’t mean I’ve forgotten about the on-premises version of SQL Server.

What could be better than Microsoft announcing a new tool for upgrading to SQL Server 2016? The Data Migration Assistant (DMA) for SQL Server was announced on 26 August 2016.

Data Migration Assistant (DMA) delivers scenarios that reduce the effort to upgrade to a modern data platform by detecting compatibility issues that can impact database functionality after an upgrade. It recommends performance and reliability improvements for your target environment.

Why is this a big deal? Well, as Microsoft themselves put it:

DMA replaces SQL Server Upgrade Advisor.

The Data Migration Assistant helps us make sure that not only is our database instance upgradeable to the latest version of SQL Server, but it also helps us make the best use of new features.

My friend Kenneth Fisher has a post titled The new Data Migration Assistant has been released!, which I encourage you to check out.

We don’t have to run DMA on the SQL Server machine directly. We can install and run it from a workstation and point it at our server. In fact, this is a recommended best practice.

Kenneth and I also strongly expect that later versions of DMA will support Azure SQL Database migrations, which is also exciting (and though I don’t like to read too much into the names Microsoft gives its products, it’s pretty evident from the name that this is where we’re headed).

To share your horror stories about SQL Server Upgrade Advisor, find me on Twitter at @bornsql.

Importing a BACPAC into Azure SQL Database

In our final look at migrating a database to Azure SQL Database, we will import a BACPAC file.

From previous weeks we know that a BACPAC file contains the full schema, as well as the data (in BCP format), of the database we are migrating. It is based on the structure of a DACPAC file, which is created when we extract a Data-tier Application.

This step is also surprisingly easy, because it uses knowledge we have gained in other blog posts.

To import the BACPAC, the file must be uploaded to Azure Blob Storage. We can either use AzCopy or the AzureBlobStorageSync tool to upload the file to a standard container.

Once the file is uploaded, we can use PowerShell or the Azure Portal. For demonstration purposes, I will use the Portal website.

Upload the BACPAC

Using AzCopy, copy the file to a container. Remember that we cannot use Premium Storage for this file, but that works out fine because it’s much cheaper to use Standard Blob Storage anyway.

For example:

AzCopy /Source:D:\Temp\bacpac /Dest:https://myaccount.blob.core.windows.net/mycontainer /DestKey:key /Z:D:\_Ops\AzCopy /Pattern:"sourcefile.bacpac"

(Refer to the previous blog post about AzCopy to understand what these switches mean.)

Importing the BACPAC

With the Azure Portal open, navigate to our SQL server. Notice the Import database option on the top menu.

Screenshot 2016-08-21 22.51.13

Similarly to the Management Studio wizard, we need to input some information to get the database created and populated.

Screenshot 2016-08-21 23.01.22

In the Storage configuration, navigate to the container where we just uploaded that BACPAC file, and select it. The rest is simple. Once you click on “OK”, the database will be created.
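If you prefer scripting to clicking, the import can also be started from the AzureRM PowerShell module. This is a sketch only; every name, key and sizing value below is a placeholder:

# Start a BACPAC import into a new Standard S2 database (all values are placeholders)
$securePassword = ConvertTo-SecureString 'YourStrongPassword' -AsPlainText -Force

New-AzureRmSqlDatabaseImport `
    -ResourceGroupName 'MyResourceGroup' `
    -ServerName 'myazuresqlserver' `
    -DatabaseName 'MyImportedDatabase' `
    -StorageKeyType 'StorageAccessKey' `
    -StorageKey 'key' `
    -StorageUri 'https://myaccount.blob.core.windows.net/mycontainer/sourcefile.bacpac' `
    -AdministratorLogin 'serveradmin' `
    -AdministratorLoginPassword $securePassword `
    -Edition 'Standard' `
    -ServiceObjectiveName 'S2' `
    -DatabaseMaxSizeBytes 268435456000

The cmdlet returns an operation status link, which you should be able to poll with Get-AzureRmSqlDatabaseImportExportStatus to see when the import finishes.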

Conclusion

And there you have it. Two ways to migrate a SQL Server database from on-premises to Azure SQL Database.

I presented a session to the Calgary PASS Chapter last week on this topic and discovered that network speed matters a great deal. I recommend using the two-part method, via a BACPAC file, so that you can control the process and not have to rely on slow network connections. AzCopy is at least predictable.

If you have any questions or comments, you can find me on Twitter as usual, at @bornsql.

Moving a database to Azure SQL Database

This week we will actually move a database into Azure SQL Database, using the first of Microsoft’s two recommended methods.

The main thing to keep in mind is that SQL Server (on-premises, or “earthed”) and Azure SQL Database (“cloud”) are not the same product. They support the same schemas and data, and allow the same scripts and applications to run, but they are not the same thing.

This is important, because when we migrate to Azure SQL Database from an on-premises SQL Server environment, we can’t simply do a backup and restore of our database.

The simple reason is that there is no file system access in Azure SQL Database. We won’t be able to see a backup file to restore it.

The more complicated reason is that we don’t have access to any instance-level features, including server configuration, tempdb access, and so on, and therefore our databases must be self-contained.

The recommended method to migrate a database to Azure SQL Database is the Migration Wizard in SQL Server Management Studio.

For larger databases, we can create an extract first, which can be imported later using the necessary tools.

Under the covers, the migration uses SQL Server Data Tools to extract a Data-tier Application (DAC). This acronym is not to be confused with the Dedicated Administrator Connection.

According to MSDN:

A data-tier application (DAC) is a logical database management entity that defines all of the SQL Server objects – like tables, views, and instance objects including logins – associated with a user’s database. A DAC is a self-contained unit of SQL Server database deployment that enables data-tier developers and database administrators to package SQL Server objects into a portable artifact called a DAC package, also known as a DACPAC.

A BACPAC is a related artifact that encapsulates the database schema as well as the data stored in the database.

From this, we can establish that we are creating a BACPAC file, containing the schema in XML format, a few metadata files with version information, and of course the data itself, in BCP (Bulk Copy) format. This is then compressed (using standard PK-ZIP format) in order to save space.

Anyone who has used other products that create database dump files will be familiar with this process.

The BCP file format is not human-readable, unlike, say, MySQL’s dump files, and if we look at the MSDN article on format files, it will make our heads spin.

That said, I was able to pick out string values from the BCP file in my test that corresponded with row values in the source database table.
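If you’re curious, you can peek inside a BACPAC yourself. This PowerShell sketch (assuming PowerShell 5 or later, and a made-up file path) simply treats the file as the ZIP archive it is:

# A BACPAC is a PK-ZIP archive: copy it with a .zip extension and expand it
Copy-Item 'D:\Temp\bacpac\sourcefile.bacpac' 'D:\Temp\bacpac\sourcefile.zip'
Expand-Archive -Path 'D:\Temp\bacpac\sourcefile.zip' -DestinationPath 'D:\Temp\bacpac\expanded'

# model.xml holds the schema, Origin.xml the metadata, and the Data folder the BCP files
Get-ChildItem 'D:\Temp\bacpac\expanded' -Recurse | Select-Object FullName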

To be on the safe side, I like to break up the process into two stages, so that if the second part fails, we can retry it at our convenience:

  • Create the BACPAC file locally and verify it (see the SqlPackage.exe sketch after this list);
  • Import the BACPAC file into Azure SQL Database.
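For that first stage, SqlPackage.exe (installed with SQL Server Data Tools) can produce the BACPAC from the command line. A sketch, with placeholder server, database and file names:

SqlPackage.exe /Action:Export /SourceServerName:"MYSERVER" /SourceDatabaseName:"AdventureWorks2012" /TargetFile:"D:\Temp\bacpac\AdventureWorks2012.bacpac"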

If the database is small enough, we use the Migration Wizard inside the latest version of SQL Server Management Studio and do both steps in one go. Let’s do that now.

(The following images are copyright © 2016 Microsoft Corporation, retrieved from the official documentation pages.)

Inside SQL Server Management Studio, right-click on the database we want to migrate. In this case, it’s [AdventureWorks2012]. Click Tasks > Deploy Database to Microsoft Azure SQL Database….

migrateusingdeploymentwizard01

Here we specify the server connection for our Azure SQL account, by clicking on the Connect… button:

migrateusingdeploymentwizard02

This is where we connect to the Azure SQL account. Since this migration wizard is doing all of the heavy lifting, we do not need to create a database beforehand.

migrateusingdeploymentwizard03

Now that we have connected to our Azure account, we can create the database on this screen. Remember the different editions and their associated costs, so we won’t pick the $7000 per month option unless we can afford it.

migrateusingdeploymentwizard04

Once we start the migration, it can take a while, and it may even fail due to compatibility issues, which have to be resolved.

We briefly touched on these last week, and they must be resolved before migrating. We would use the latest SQL Server Data Tools to identify errors or warnings.

But because we’ve done our compatibility homework, everything works the first time.

It’s really that simple.

If the database is large, or the network connection is unstable or slow, we will have to do the import from our BACPAC file or using BCP directly. That will be covered in next week’s post.

If you have any questions or comments about this process, feel free to ask me on Twitter at @bornsql.

Should I move my on-premises database to Azure SQL Database?

This week is a shorter post than last week, but it answers a very important question:

Should I move my on-premises database to Azure SQL Database?

The short answer is:

It depends.

Firstly, why do you want to move your database to the cloud? Is it a business requirement? A better way to ask this is, does it help your business to move to Azure?

One of my clients runs a huge custom application, supported by SQL Server, running off a single 4GB database. They’ve been talking for years about moving to Azure or Amazon Web Services. Their current server is more than five years old, and the SQL Server version is 2008 Web Edition.

Moving to Azure SQL Database makes sense for them because they would benefit from new features and not have to worry about purchasing new hardware or new licences for Windows Server and SQL Server. Their developers can focus on improving the application and not worry about a database server going down.

This is probably the first main reason why people want to migrate.

Something to keep in mind, though, is that you migrate a single database at a time. If you have a whole lot of databases that support different parts of your business, and you don’t want to run on-premises anymore, it makes more sense to investigate moving to Azure (or even Amazon or Rackspace) Virtual Machines instead. There, the cost of upgrading hardware is removed, you can use the latest version of SQL Server as required, and everything else stays the same.

Another main reason for moving to Azure SQL Database is when your database supports a website or web application.

Say you run a website with a very basic database structure, but you have a lot of users and need to store persistent data for a hugely popular mobile app.

It makes a lot of sense to move this kind of database to Azure SQL Database because it lets you forget about the administrative costs of maintaining a server, licences, operating system patches, SQL Server updates, backups, maintenance tasks, and all the rest that goes with it, and concentrate on your mobile app.

Another consideration is cost. You can save money in the short to medium term by moving to Azure SQL Database. Remember, though, that after a certain number of years you would be paying the same as (or more than) purchasing new hardware and licences and having a DBA on speed dial. For startups, then, Azure SQL Database makes a lot of sense.

One final thing, which I cover in my new talk on Migrating to Azure SQL Database (insert plug here), is compatibility.

While Azure SQL Database supports most of the same features as SQL Server 2016, there are some things that could scuttle your plans from the get-go. It is very important to investigate those before jumping in and writing code.

For example, because we don’t have access to instance-level functionality, including master and tempdb databases, database collation is critically important to get right. Azure SQL Database, by default, uses SQL_Latin1_General_CP1_CI_AS collation. For starters, a difference in your database will cause performance issues and bad query plans because tempdb will be on a different collation. It goes downhill from there. Some applications require a specific collation to be able to work correctly.
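A quick pre-migration sanity check is to compare the collations on your current server. A minimal T-SQL sketch, run against the source instance:

-- Compare the instance and database collations before migrating
SELECT
    SERVERPROPERTY('Collation')                AS ServerCollation,
    DATABASEPROPERTYEX(DB_NAME(), 'Collation') AS DatabaseCollation;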

Another limitation is that you are required to use Azure Active Directory or SQL authentication. Certain usernames are forbidden, which might require changes in your applications.

A short list of features not supported by Azure SQL Database can be found by visiting this link.

I hope this has been an informative read. Now that I’ve scared you away, next week we will finally get into the “how” of “how to migrate to Azure SQL Database”. Stay tuned.

If you would like to share your horror stories about failed migrations, find me on Twitter at @bornsql.

An Introduction to Azure SQL Database

It’s the second half of 2016, and Azure is the new hotness. I’ve raved about Blob Storage, virtual machines, AzCopy, and all those good things. When the dust settles, though, this blog is about managing data, and sometimes we don’t want to worry about the hassle of maintenance and administration.

Azure SQL Database takes the Administrator out of “DBA”. It’s a cute soundbite, but is it true? Let’s go through a short primer.

What’s with the unwieldy name?

Azure SQL Database is a bit of a mouthful, but be thankful that Satya Nadella took over from Steve Ballmer. Before that happened, it was Windows Azure SQL Database, or WASD. Can you imagine?

“ASD” is an entry from the Diagnostic and Statistical Manual of Mental Disorders, and we can’t call it “SQL Azure” either. “Azure SQL Database” is the official name, so let’s stick with it.

Who is Azure SQL Database for?

For a long time, I struggled to answer this question myself, mainly because I did not understand how anyone could get by with a tiny SQL Server database. The Azure SQL Database tiers seem, well, limited on the low end and expensive on the high end:

Basic – 2GB (up to $5 per month)
Standard – 250GB ($15 to $50 per month)
Premium – 500GB–1TB ($465 to $7,001 per month)

That last amount is not a typo. We can easily spend seven thousand US dollars, every month, running a 1TB Azure SQL Database.

Compare and contrast that with a fully-licenced Azure DS13 virtual machine, on Windows Server 2012 R2, with 56GB RAM, 8 CPU cores, and a 400GB system drive, running SQL Server 2016 Enterprise Edition, which (even with four 1TB disks running in a RAID 10 configuration) costs less than US$4,000 per month.

If it’s so expensive, who is it for?

This website, bornsql.ca, runs on a LAMP stack, which stands for Linux, Apache, MySQL and PHP. Linux is the operating system, Apache is the web server, MySQL is the database, and PHP is the programming language. I need all four of these things, with their interdependencies, to provide you with a website to read and comment on. That’s a lot of overhead, and it costs more than $5 a month, not even counting the administration costs.

Microsoft traditionally offers a similar stack, with Windows Server as the operating system, Internet Information Services (IIS) as the web server, SQL Server as the database, and ASP.NET (or another common language like C#, or even PHP if you want) as the programming language.

Vast swathes of the Internet are still hosted on Linux- or Windows-based web servers, with millions of hours dedicated to software updates, malware removal, replacing failing hard drives, taking regular backups, recovering from power failures, and so on. It adds up.

What Azure (the entire service offering) does is completely decouple these and other components from one another. For example, if you want to build a static website with no underlying database, you just need Azure App Service. Want to add a database? Throw in an Azure SQL Database. No operating system required. No server required. It’s all there, under the covers, and we don’t have to worry about it.

With Azure, we’re paying for ease of deployment of new stuff, with no administration and maintenance to worry about. Azure “just works”.

This sounds like virtualisation at a large scale

Sort of.

A funny thing happened in the last few years. Hardware kept getting faster and cheaper, and software stayed the same. This meant that newer hardware was being wasted because software just didn’t need it.

Some enterprising technologists decided that they could virtualise environments, by migrating physical servers to virtual servers, enabling multiple VMs (virtual machines) to run on one physical server.

Over time, they figured out how to configure VMs to make the best use of shared resources, so that the VMs run as well as, or even better than, they did on their old physical servers. Fewer physical servers means lower administration costs, lower electricity bills, and so on.

Cloud computing providers, including Amazon Web Services, Google Cloud Platform, and Microsoft Azure, are basically heaps and heaps of servers that run custom software.

Is ease of administration worth the extra money?

I don’t know. That depends on you. This is why it was a difficult question to answer at first.

Imagine, instead of worrying about software and hardware costs, you can focus on growing your business and only pay for more resources when you need them. That’s the appeal of Azure.

How does Azure SQL Database fit in?

Azure SQL Database is a specific service offering that caters to people and companies who do not want to manage their own database servers and will be fine with “just” 50GB databases to run a particular web or mobile application. No need for a dedicated Database Administrator.

This is exciting! When can I migrate my database to Azure SQL Database?

Tune in next week to find out whether it’s a good idea to move your environment to Azure and how to get your on-premises SQL Server database there.

Comments

If you have any questions or comments, look me up on Twitter, at @bornsql.