Flagrantly ignoring the 10% rule

My friend Michael J. Swart has a rule of thumb he calls Swart’s Ten Percent Rule.

If you’re using over 10% of what SQL Server restricts you to, you’re doing it wrong.

After a recent discussion on Twitter, I wondered what it would look like if I had 32,767 databases on one instance of SQL Server (that’s the hard limit according to the documentation).

This is a Very Bad Idea. Don’t do this in a production environment. I performed this experiment on a Docker container running SQL Server 2019 Preview CTP 2.0 so that you don’t have to.

Initial plan

The first problem was the initial size of each new database. Despite putting the model database in the simple recovery model and shrinking it as small as I could (2 MB for the data file and 512 KB for the transaction log), the default size of a new database on SQL Server these days is 8 MB, no matter how much we try to shrink it afterwards.
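For reference, shrinking model looks roughly like this (a sketch: the 2 MB / 512 KB targets are the ones mentioned above, and `modeldev` / `modellog` are the default logical file names for the model database):

```sql
-- Keep model's log from growing: use the simple recovery model
ALTER DATABASE model SET RECOVERY SIMPLE;
GO
USE model;
GO
-- Shrink the data file towards 2 MB (target size is in MB)
DBCC SHRINKFILE (modeldev, 2);
-- Shrink the log file as far as it will go (1 MB is the smallest
-- whole-MB target; it ended up at 512 KB on my instance)
DBCC SHRINKFILE (modellog, 1);
GO
```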

In the second attempt, I considered simply copying the model database files 32,767 times and attaching each copy as its own database. In the words of kids these days, “L-O-L.”

Initial result

In any event, none of this worked very well. I got about as far as I expected to (just over 3,200 databases created, which neatly proves out Swart’s Ten Percent Rule) before the instance ran out of memory. It sputtered along for a little while before eventually failing, while I jumped between SQL Server Management Studio and Azure Data Studio to see what was going on.

[Image: the instance refusing to create any more databases]

No more resources for you!

[Image: a list of databases created by a sequence, with some numbers missing due to resource constraints]

One, two, skip a few …

Next steps

After a quick sudo docker rm followed by sudo docker run to recreate the container, I thought about this “problem” again.

The minimum database size of 8 MB doesn’t help my home lab’s disk space situation either: I would need 256 GB for the data files and another 16 GB for the transaction logs, and who has that kind of space lying around, or the time to set up a large enough virtual machine or container?

Joey D’Antoni noted that it’s only really feasible to have so many databases if auto-close is enabled on all of them. I especially like this tweet because it is a subtweet. I can still feel Joey’s eyes judging me, even now.

While I believe it is cheating to have the databases simply registered in the master database, as opposed to having them open and usable, there’s a point where common sense is necessary.
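Since new databases inherit their options from model, auto-close can be switched on once, up front, so every database created afterwards picks it up (a sketch; this is the standard `ALTER DATABASE` option, not anything exotic):

```sql
-- New databases inherit database options from model, so enabling
-- AUTO_CLOSE here means every subsequently created database will be
-- closed, and its resources freed, when its last connection goes away.
ALTER DATABASE model SET AUTO_CLOSE ON;
```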

Who needs ~~roads~~ rules?

Not one to be defeated by sensible thinking and resource constraints, I decided to push ahead with a new plan.

[Image: Star Trek: The Next Generation’s Captain Picard doing a face-palm]

This won’t end well

In my third attempt, I figured that cheating was indeed allowed. I would create a new database, set it offline, and delete the underlying files. This would keep my storage requirements to around 10 MB at any one time. To make things interesting, I broke this process up into four separate scripts, with a range of 8,192 databases in each script, and let it run.

What’s the worst that could happen?

While I believe that a picture paints a thousand words, this blog post is already up around 1,000 words, so here’s the actual error message:

[Screenshot of the error message]

“But Randolph,” I hear you cry, “that doesn’t say 32,767.” You are correct, dear reader. In fact, this number is two databases short of the maximum number allowed on a SQL Server instance, according to the official documentation. What gives?

The first of the two missing databases is the resource database, a read-only database containing all of the system objects in SQL Server.

The second is reserved for SQL Server replication, whether it be the publisher, distributor or subscriber database.
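You can check the arithmetic on your own instance. The resource database never appears in `sys.databases`, which is part of why the count comes up short of the documented maximum:

```sql
-- Count the databases registered on this instance.
-- The hidden resource database is not included in sys.databases.
SELECT COUNT(*) AS database_count FROM sys.databases;

-- The resource database is still visible indirectly:
SELECT SERVERPROPERTY('ResourceVersion')            AS resource_version,
       SERVERPROPERTY('ResourceLastUpdateDateTime') AS resource_last_updated;
```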

If you want to play along at home, here’s the script I used.

The script, which you absolutely should not use in a production environment

Note that while I successfully performed the experiment on a Dockerized version of SQL Server 2019 Preview CTP 2.0, the script below is designed for a Windows platform because it incorporates xp_cmdshell. Extended stored procedures are not supported on SQL Server on Linux (which includes Docker containers), and my bash script with sqlcmd hackery should never see the light of day.
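I haven’t reproduced the original script here, but the core loop looks something like the sketch below. To be clear about my assumptions: the `DB` naming convention, the `C:\Data\` path, and the 8,192-database range are illustrative, `xp_cmdshell` must already be enabled on the instance, and this should never run anywhere near production.

```sql
-- DANGER: demonstration only. Creates a database, takes it offline,
-- then deletes its files from disk, leaving only the entry in master.
DECLARE @i INT = 1, @name SYSNAME, @sql NVARCHAR(MAX);

WHILE @i <= 8192  -- one of four ranges of 8,192 databases
BEGIN
    SET @name = N'DB' + CAST(@i AS NVARCHAR(10));

    SET @sql = N'CREATE DATABASE ' + QUOTENAME(@name) + N';';
    EXEC sys.sp_executesql @sql;

    SET @sql = N'ALTER DATABASE ' + QUOTENAME(@name) + N' SET OFFLINE;';
    EXEC sys.sp_executesql @sql;

    -- Delete the underlying files (Windows path is illustrative;
    -- xp_cmdshell is why this only works on the Windows platform)
    SET @sql = N'EXEC xp_cmdshell ''DEL /Q "C:\Data\' + @name
             + N'*.*"'', no_output;';
    EXEC sys.sp_executesql @sql;

    SET @i += 1;
END;
```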

On my home lab, this took just over an hour to complete.

Here’s a script to blow them all away if you don’t want to destroy your container, or are running this on Windows. It took approximately 5 minutes on my home lab.
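The clean-up is simpler: dropping an offline database whose files are already gone just removes its entry from master. A sketch, assuming databases named with the same `DB<number>` convention:

```sql
-- Build one batch of DROP DATABASE statements for every database
-- matching the (assumed) naming convention, then execute it.
-- DROP DATABASE on an offline database with missing files simply
-- removes the entry from master.
DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql += N'DROP DATABASE ' + QUOTENAME(name) + N';' + CHAR(13)
FROM sys.databases
WHERE name LIKE N'DB[0-9]%';

EXEC sys.sp_executesql @sql;
```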

Summary

If there are any limits of SQL Server you would like me to test next, feel free to leave a comment below and I’ll see about doing an experiment.

Photo by Trym Nilsen on Unsplash.
