
Azure Storage Throughput


Hot on the heels of DTUs from earlier this week, today we’re going to switch focus a little, while staying with Azure, and have a quick look at storage speeds on SQL Server running on Azure Virtual Machines (Infrastructure-as-a-Service).

Microsoft recently lowered the prices of its more popular VMs, which makes this an attractive option for a lot of folks who don’t want to run their servers on-site.

However, there’s a limit to how fast our storage can get, despite the attraction of solid-state drives on the Premium Storage offering.

We in the industry refer to “latency” when discussing storage: how long it takes for an I/O request to complete, measured in milliseconds. For the back-of-the-napkin maths in this post, think of it as the time it takes to access 1 MB of data. Target latency for SQL Server storage is 5 ms to 10 ms. In other words, the best-case scenario is that we should be able to access 1 MB of data in between five- and ten-thousandths of a second.
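
To put numbers on that target, here’s a minimal sketch in Python (my own, not part of any SQL Server or Azure tooling) that inverts the 5–10 ms-per-MB window into the throughput it implies:

```python
# A minimal sketch, assuming the per-MB framing above. The function name
# is mine; it is not part of any SQL Server or Azure tooling.

def implied_throughput_mb_per_s(latency_ms: float, chunk_mb: float = 1.0) -> float:
    """Throughput implied by accessing `chunk_mb` of data in `latency_ms`."""
    return chunk_mb / (latency_ms / 1000.0)

for target_ms in (5.0, 10.0):
    print(f"{target_ms:4.1f} ms per MB -> {implied_throughput_mb_per_s(target_ms):.0f} MB/s")

# Prints:
#  5.0 ms per MB -> 200 MB/s
# 10.0 ms per MB -> 100 MB/s
```

Hitting the 5 ms end of the window implies around 200 MB/s of sustained throughput, which is worth keeping in mind for what follows.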

Everything on Azure is virtualised, from the CPU and RAM, down to the storage layer. Behind the scenes, all of these components are connected by what Microsoft calls the “fabric layer”. In simpler terms, it’s the network, stupid.

Yes, it has low latency. Yes, a customer of mine did save almost 50% of their budget by switching from a data centre to Azure, and yes, they did get a nice performance boost in the process.

However, there’s a limit, and that is the theoretical limit on network throughput. The absolute best case for any local network these days is 10 Gb/s, or in other words, 1.25 GB per second (there are 8 bits in a byte, so we have to divide the bits by eight to get the bytes per second).

In realistic terms, though, the connection between our virtual machine and the SSD-backed Premium Storage is more likely to be 1 Gb/s, or 125 MB per second, depending on our region and storage tier. It’s not entirely clear from the Azure documentation what the actual limits are, but I’ve personally seen it max out at 125 MB/s. The region we host in matters a great deal.
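
As a sanity check on the bits-to-bytes arithmetic, here’s a quick sketch (function name mine) converting those two line rates, ignoring protocol overhead:

```python
# A quick check of the bits-to-bytes arithmetic: 8 bits per byte, so we
# divide the bits by eight. These are raw line rates; real-world protocol
# overhead will shave a little more off.

def gb_per_s_to_mb_per_s(gb_per_s: float) -> float:
    """Convert a network line rate in gigabits/s to megabytes/s."""
    return gb_per_s * 1000.0 / 8.0

print(gb_per_s_to_mb_per_s(10.0))  # 1250.0 -- the 10 Gb/s best case, i.e. 1.25 GB/s
print(gb_per_s_to_mb_per_s(1.0))   # 125.0  -- the more realistic 1 Gb/s
```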

Inverting that 125 MB/s figure to get the latency, we’re seeing 8 ms per MB at best. That’s still within the 5–10 ms window, so it’s not terrible. Now add in the fact that we’re not the only tenant on that underlying hardware, and the latency starts creeping up. There will be competition on the VM host, the network card and the storage, and it all adds up. Pretty soon we’re looking at 20–30 ms of latency, two to three times the upper end of the preferred window.
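
Here’s that maths as a sketch. The contention factors are illustrative values I’ve picked to reproduce the 20–30 ms figure; they are not measured numbers:

```python
# Invert throughput to get time-per-MB, then inflate it by a contention
# factor to model sharing the host, NIC and storage with other tenants.
# The factors below are illustrative, chosen to land on 20-30 ms.

def latency_ms_per_mb(throughput_mb_per_s: float, contention: float = 1.0) -> float:
    """Time to move 1 MB at the given throughput, scaled by contention."""
    return 1000.0 / throughput_mb_per_s * contention

print(latency_ms_per_mb(125.0))                   # 8.0 ms -- best case, no contention
print(latency_ms_per_mb(125.0, contention=2.5))   # 20.0 ms
print(latency_ms_per_mb(125.0, contention=3.75))  # 30.0 ms
```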

No matter what the CPU speed is, or how much RAM we have, our bottleneck is always going to be the network. Those of us with experience of older SANs (storage area networks) will recognise this: even the fastest spindles (hard drives) in the SAN are meaningless when we’re connected to the server by a 1 Gb/s half-duplex network card (traffic can flow in only one direction at a time).

As I say on this blog and in person, Azure is not the answer for everyone (whether we’re looking at SQL Server on an Azure VM or an Azure SQL Database). We need to make sure that we understand the inherent limits of the infrastructure before committing to a long-term project to migrate everything.

I’m not saying Azure isn’t a good idea. As I pointed out above, a client I migrated recently was very happy with their outcome: their data centre provider’s offering was terrible, and Azure was a great improvement. I personally run a production SQL Server system on a small Azure VM.

20–30 ms might be twice as fast as your current solution. If your database is running on 7200 rpm SATA drives in a RAID 5 configuration, you’ll see a massive increase in performance by switching to an Azure VM with P30 Premium Storage.

Here are some fantastic resources to help you plan your migration better:

It all comes back to the question: Are you sure?

Let me know your Azure migration plans on Twitter, at @bornsql. I look forward to hearing from you.