Saving Time By Splitting Files for a Backup

During a recent engagement with a client transferring large files from on-premises storage to Azure Storage, we discovered that files over a certain size are automatically throttled, causing file uploads to take much longer than expected.

Through unscientific testing, we figured out that the throttling seems to kick in at around 30 GB.

I rustled up a T-SQL script, based on some online research, to split SQL Server backups so that no single file is larger than 30 GB. As this is a one-off migration, it’s a bit of a hack and does not take backup compression into account.

This script makes use of Ola Hallengren’s Maintenance Solution, because his DatabaseBackup stored procedure accepts the parameter @NumberOfFiles.

-- Run this in the context of the database you are backing up,
-- because sys.database_files and FILEPROPERTY are scoped to the
-- current database.

-- Database name
DECLARE @dbName NVARCHAR(255) = N'Temp';
-- Backup path
DECLARE @backupPath NVARCHAR(255) = N'C:\Temp';
-- Maximum backup file size (in GB)
DECLARE @maxFileSize NUMERIC(10, 5) = 30;
-- Number of files needed
DECLARE @filesNeeded INT;

-- Calculate the number of files needed at @maxFileSize,
-- based on the space used (in MB) across all database files
SELECT @filesNeeded = CEILING(SUM(CONVERT(NUMERIC(10, 2),
           ROUND(FILEPROPERTY([a].[name], 'SpaceUsed') / 128.0, 2))) /
           (@maxFileSize * 1024.0))
FROM [sys].[database_files] [a];

-- Run a full copy-only backup using the number of files calculated
-- (requires Ola Hallengren's Maintenance Solution)
EXEC [master].[dbo].[DatabaseBackup]
    @Databases = @dbName,
    @Directory = @backupPath,
    @BackupType = 'FULL',
    @Compress = 'Y',
    @CopyOnly = 'Y',
    @CheckSum = 'Y',
    @NumberOfFiles = @filesNeeded,
    @Verify = 'Y';
GO
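
To illustrate the arithmetic with hypothetical numbers (not from the original engagement): a database with 100 GB of space used gives CEILING(100 / 30) = 4, so the backup would be written to four files of roughly 25 GB each before compression, comfortably under the throttling threshold.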

You don’t need to worry about backup compression with this script: the file count is calculated from uncompressed space used, so the compressed files will simply come in under the 30 GB limit, and two 15 GB files will transfer just as quickly as a single 30 GB file. Provided you generate your restore script correctly, the number of files does not matter.
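
For reference, here is a minimal sketch of the matching restore, assuming the backup was split into two files. The file names below are placeholders; Ola Hallengren’s solution generates its own names from the server, database, backup type, and timestamp.

-- Minimal multi-file restore sketch; the file names are placeholders.
-- List every file the backup was split into in the FROM clause.
RESTORE DATABASE [Temp]
FROM DISK = N'C:\Temp\Temp_FULL_COPY_ONLY_1.bak',
     DISK = N'C:\Temp\Temp_FULL_COPY_ONLY_2.bak'
WITH CHECKSUM, RECOVERY;
GO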

Think you have a way to improve this script? Let me know on Twitter at @bornsql.