Whenever I restore a database — especially one I obtained outside of my regular environment (for example a customer database, a development database, or even a sample database like WideWorldImporters) — there are a few things I like to check to make sure it’s configured for peak performance.
Note that some of this advice may apply only to non-production databases.
Database owner

Especially when restoring a database outside of the Active Directory environment in which it was created, you may find that the database owner is set to a non-existent user. I always set the owner to “sa” as that user exists on every SQL Server. Even if the sa account is disabled, the database at least has a valid owner for your instance.
You can set this option in SQL Server Management Studio by looking under the database Properties > Files > Owner.
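The same change can be made in T-SQL. I’m using the WideWorldImporters sample database mentioned above as a stand-in for your own database name:

```sql
-- Assign ownership of the restored database to sa, which exists on every
-- instance, so the database has a valid owner even if the original
-- Active Directory account doesn't exist here.
ALTER AUTHORIZATION ON DATABASE::WideWorldImporters TO sa;
```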
File growth settings
You may find that a database still has its original file growth settings (10%, or 1MB increments), and in the era of Beyoncé you should be using at least the new default settings: a fixed increment of 64MB, or if your storage subsystem can handle it, even more (for example 1GB increments). This applies to both your data files and transaction log files. If you happen to have instant file initialization (IFI) enabled for your SQL Server service account on Windows, remember that the transaction log file still needs to zero out every byte as it grows.
You can set this option in SQL Server Management Studio by looking under the database Properties > Files > Database files > Autogrowth.
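In T-SQL, look up the logical file names first, then set a fixed growth increment. The logical names below are the ones I’d expect from the WideWorldImporters sample; substitute whatever the first query returns for your database:

```sql
USE WideWorldImporters;

-- Find the logical names and current growth settings of each file.
SELECT name, type_desc, growth, is_percent_growth
FROM sys.database_files;

-- Set a fixed 64MB growth increment on a data file and the log file.
-- Replace the NAME values with the logical names returned above.
ALTER DATABASE WideWorldImporters
    MODIFY FILE (NAME = N'WWI_UserData', FILEGROWTH = 64MB);
ALTER DATABASE WideWorldImporters
    MODIFY FILE (NAME = N'WWI_Log', FILEGROWTH = 64MB);
```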
Recovery model

This may seem obvious, but if you’re in a non-production environment there is very little reason to keep a database in the full or bulk-logged recovery model. Switch to simple recovery today and save yourself the embarrassment of a transaction log that grows until it uses all your drive space.
You can set this option to Simple in SQL Server Management Studio by looking under the database Properties > Options > Recovery model.
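Or in T-SQL, with WideWorldImporters again standing in for your database name:

```sql
-- Switch to the simple recovery model so the transaction log
-- truncates on checkpoint instead of waiting for log backups.
ALTER DATABASE WideWorldImporters SET RECOVERY SIMPLE;
```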
Compatibility level

Depending on the source of the backup and the SQL Server instance you’re restoring it to, you could have a database backup that was taken 16 years ago. This is especially likely with sample databases. SQL Server 2016 and higher can restore a database from as long ago as SQL Server 2005, and will set the compatibility level to at least 100 (SQL Server 2008 and SQL Server 2008 R2 era). There’s a good chance you’re going to want to use the cardinality estimator that was released with SQL Server 2014, so check that compatibility level.
You can set this option in SQL Server Management Studio by looking under the database Properties > Options > Compatibility level.
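The equivalent T-SQL, again using WideWorldImporters as a stand-in. Level 130 corresponds to SQL Server 2016; pick the level that matches the instance you’re on (anything 120 or higher gets you the new cardinality estimator):

```sql
-- Check the current compatibility level first.
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'WideWorldImporters';

-- 130 = SQL Server 2016. Use the level matching your target instance.
ALTER DATABASE WideWorldImporters SET COMPATIBILITY_LEVEL = 130;
```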
Delayed durability

Released with SQL Server 2014, delayed durability reduces the number of transaction log writes by combining smaller writes and writing them together, and in a development environment this can handily speed up any work you’re doing. This suggestion comes with the warning that an error or shutdown of SQL Server can cause data loss in a database which is configured for delayed durability, so take care.
You can set this option to Forced in SQL Server Management Studio by looking under the database Properties > Options > Miscellaneous > Delayed Durability.
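In T-SQL (WideWorldImporters standing in for your database name):

```sql
-- Force delayed durability: fewer log flushes in exchange for the risk
-- of losing the most recent transactions if SQL Server crashes.
ALTER DATABASE WideWorldImporters SET DELAYED_DURABILITY = FORCED;
```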
Page verification

Many a time have I seen a database that originated before SQL Server 2005, when there was a mistaken belief that data page verification had a negative impact on performance. In SQL Server 2005, the page verification algorithm was rewritten to use a checksum as opposed to torn page detection and is extremely efficient. Turning off this option is a terrible idea. Enable it on all of your databases immediately, production and non-production alike.
You can set this option to CHECKSUM in SQL Server Management Studio by looking under the database Properties > Options > Recovery > Page Verify.
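Or in T-SQL (substitute your own database name for WideWorldImporters):

```sql
-- Use checksum page verification to detect storage-level corruption.
-- Note: existing pages only get a checksum the next time they are written.
ALTER DATABASE WideWorldImporters SET PAGE_VERIFY CHECKSUM;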
Query Store

Released with SQL Server 2016 and improved significantly since then, Query Store is the so-called flight data recorder for your database. If you are planning an upgrade and will be changing the compatibility level to test out query plans between different versions of the cardinality estimator, you should turn on Query Store and let it track those plans. As long as the old plan exists, you can even force that plan after the compatibility level is changed. It’s really powerful stuff. In a development environment it is especially useful for comparing query plans.
You can set this option to Read write in SQL Server Management Studio by looking under the database Properties > Query Store > General > Operation Mode (Requested).
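The T-SQL equivalent, once more with WideWorldImporters standing in for your database name:

```sql
-- Turn on Query Store in read-write mode so it captures query plans
-- and runtime statistics for this database.
ALTER DATABASE WideWorldImporters
    SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);
```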
Share your favourite database settings in the comments below.
Photo by Drahomír Posteby-Mach on Unsplash.
Along with the database compatibility level, there is the legacy CE setting if you are restoring to a SQL Server 2014 or later instance. I know you mentioned it, and this can be set at the database level or as a hint in a query.
Wouldn’t you want to look at the Auto settings too for Close and Shrink as well as the statistics settings?
All good suggestions, Chris. Thanks for commenting.
Can upgrading the compatibility level cause problems?
For example, queries that no longer run because they use legacy commands or legacy features?
That is a possibility, but less common with more recent versions of SQL Server.
There are features like Service Broker that have to be re-enabled when you restore a database too! 🙂
This is an excellent point, thank you! Other things include Change Data Capture (CDC), and if you’re using Temporal Tables, the retention policy.