Backup Disk Space
Backups by their nature consume disk space every time they run. Most of
the backup routines we build involve full backups (once or maybe twice a week), differential
backups (daily on non-full backup days), and/or transaction log backups, which run frequently
and back up changes since the last log backup. Regardless of the specifics in your environment,
there are a few generalizations that we can make:
1. Data will get larger over time, and hence backups will increase in size.
2. Anything that causes significant data change will also cause transaction log backup sizes
to increase.
3. If a backup target is shared with other applications, then they could potentially interfere
or use up space.
4. The more time that has passed since the last differential/transaction log backup, the
larger they will be and the longer they will take.
5. If cleanup of the target backup drive does not occur regularly, it will eventually fill up,
causing backup failures.
Each of these situations lends itself to possible solutions, such as not sharing the backup
drive with other programs, or testing log growth on release scripts prior to the final production
deployment. While we can mitigate risk, the potential always exists for drives to fill up. If they do,
then all further backups will fail, leaving holes in the backup record that could prove detrimental
in the event of a disaster or backup data request.
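As a quick illustration of the log-growth testing mentioned above, here is a minimal sketch (my own addition, not from the listings in this chapter) that samples log usage via sys.dm_db_log_space_usage before and after a release script runs. The database and the release procedure are placeholders:

USE AdventureWorks2014; -- target database for the release script (placeholder)

-- Capture log usage before running the release script.
DECLARE @log_used_before_mb DECIMAL(18,2);
SELECT @log_used_before_mb = used_log_space_in_bytes / 1024.0 / 1024.0
FROM sys.dm_db_log_space_usage;

-- Run the release script here, e.g.:
-- EXEC dbo.release_script; -- hypothetical stand-in for your deployment script

-- Capture log usage afterward and report the growth.
DECLARE @log_used_after_mb DECIMAL(18,2);
SELECT @log_used_after_mb = used_log_space_in_bytes / 1024.0 / 1024.0
FROM sys.dm_db_log_space_usage;

SELECT @log_used_after_mb - @log_used_before_mb AS log_growth_mb;

Running this against a test restore of production data gives a reasonable estimate of how much transaction log a deployment will generate, and therefore how large the next log backup will be.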
As with log space, we can monitor backup size and disk usage in order to make intelligent decisions
about how a job should proceed. This can be managed from within a backup stored procedure
using xp_cmdshell, if usage of that system stored procedure is tolerated. Alternatively, PowerShell
can be used to monitor drive space. A solution that I am particularly fond of is
to create a tiny unused or minimally used database on the server you’re backing up and put its
data and log files on the backup drive. This allows you to use dm_os_volume_stats to monitor disk
usage directly within the backup process without any security compromises.
For an example of this solution, we will use my local C drive as the backup drive and the F drive
as the target for all other database data files. Since our data files are on the F drive, we can easily
view the space available like this:
SELECT
	volume_stats.available_bytes / 1024 / 1024 / 1024 AS free_space_GB
FROM sys.master_files AS f
CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.file_id) AS volume_stats
WHERE f.database_id = DB_ID();
This returns the free space on the drive corresponding to the database I am querying from, in this
case AdventureWorks2014. The result is exactly what I am looking for: with 14.5TB free, we’re in
good shape for quite a while. How about our backup drive? If we are willing to use xp_cmdshell,
we can gather that information fairly easily:
DECLARE @results TABLE
	(output_data NVARCHAR(MAX));

INSERT INTO @results
	(output_data)
EXEC xp_cmdshell 'DIR C:\SQLBackups'; -- the backup folder path is illustrative

SELECT
	*
FROM @results
WHERE output_data LIKE '%bytes free%';
The result of this query is a single row containing the directory count and the number of bytes
free on the drive.
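If we want that value as a number rather than a string, a bit of string manipulation is needed. The following is a minimal sketch (my own addition, not from the listing above) that trims the "Dir(s)" prefix and the "bytes free" suffix and strips the thousands separators; it assumes @results was populated in the same batch as the previous listing:

-- Extract the numeric bytes-free value from the last line of the DIR output.
DECLARE @bytes_free BIGINT;

SELECT @bytes_free = CAST(REPLACE(LTRIM(RTRIM(
		SUBSTRING(output_data,
			CHARINDEX(')', output_data) + 1,
			CHARINDEX('bytes free', output_data) - CHARINDEX(')', output_data) - 1)
	)), ',', '') AS BIGINT)
FROM @results
WHERE output_data LIKE '%bytes free%';

SELECT @bytes_free / 1024 / 1024 / 1024 AS free_space_GB;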
Unfortunately, xp_cmdshell is a security hole, allowing direct access to the OS from SQL Server.
While some environments can tolerate its use, many cannot. As a result, let’s present an
alternative that may feel a bit like cheating at first, but provides better insight into disk space
without the need to enable any additional features:
CREATE DATABASE DBTest
ON
( NAME = DBTest_Data,
  FILENAME = 'C:\SQLData\DBTest.mdf',
  SIZE = 10MB,
  MAXSIZE = 10MB,
  FILEGROWTH = 10MB)
LOG ON
( NAME = DBTest_Log,
  FILENAME = 'C:\SQLData\DBTest.ldf',
  SIZE = 5MB,
  MAXSIZE = 5MB,
  FILEGROWTH = 5MB);
This creates a database called DBTest on my C drive, with relatively small data and log file
sizes. If you plan on creating a more legitimate database to be used by actual processes, then
adjust the file sizes and autogrowth settings as needed. With a database on this drive, we can run
the DMV query from earlier and get its free space:
USE DBTest;

SELECT
	volume_stats.available_bytes / 1024 / 1024 / 1024 AS free_space_GB
FROM sys.master_files AS f
CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.file_id) AS volume_stats
WHERE f.database_id = DB_ID();
The result is exactly what we were looking for earlier, with no need for any OS-level commands
via xp_cmdshell or PowerShell: I currently have 154GB free, and the only cost of this information
was the creation of a tiny database on the backup drive. With this tool in hand, we can look at a
simple backup stored procedure and add logic to manage space while it is running:
USE AdventureWorks2014;
GO

IF EXISTS (SELECT * FROM sys.procedures WHERE procedures.name = 'full_backup_plan')
BEGIN
	DROP PROCEDURE dbo.full_backup_plan;
END
GO

-- Procedure, folder, and file names in this listing are representative.
CREATE PROCEDURE dbo.full_backup_plan
AS
BEGIN
	DECLARE @sql_command NVARCHAR(MAX) = '';
	-- Timestamp used to generate unique backup file names.
	DECLARE @datetime_string NVARCHAR(MAX) = FORMAT(GETDATE(), 'MMddyyyyHHmmss');

	DECLARE @database_list TABLE
		(database_name NVARCHAR(MAX), recovery_model_desc NVARCHAR(MAX));

	INSERT INTO @database_list
		(database_name, recovery_model_desc)
	SELECT
		name,
		recovery_model_desc
	FROM sys.databases
	WHERE name NOT IN ('msdb', 'model', 'tempdb', 'master');

	-- Build one BACKUP DATABASE statement per database.
	SELECT @sql_command = @sql_command + '
	BACKUP DATABASE [' + database_name + ']
	TO DISK = ''C:\SQLBackups\' + database_name + '_' + @datetime_string + '.bak''
	WITH COMPRESSION;'
	FROM @database_list;

	EXEC sp_executesql @sql_command;
END
This simple stored procedure will perform a full backup of every database on the server, with the
exception of msdb, tempdb, model, and master. What we want to do is verify free space before
running backups, just as we did earlier. If space is unacceptably low, then we end the job and notify
the correct people immediately. By maintaining enough space on the drive, we avoid running out
completely and causing regular transaction log backups to fail. The test for space on the backup
drive incorporates our dm_os_volume_stats query from earlier and assumes that we must
maintain 25GB free at all times:
IF EXISTS (SELECT * FROM sys.procedures WHERE procedures.name = 'full_backup_plan')
BEGIN
	DROP PROCEDURE dbo.full_backup_plan;
END
GO

CREATE PROCEDURE dbo.full_backup_plan
	@minimum_free_space_GB BIGINT = 25 -- Free space to maintain on the backup drive.
AS
BEGIN
	DECLARE @sql_command NVARCHAR(MAX);
	DECLARE @datetime_string NVARCHAR(MAX) = FORMAT(GETDATE(), 'MMddyyyyHHmmss');

	DECLARE @database_list TABLE
		(database_name NVARCHAR(MAX), recovery_model_desc NVARCHAR(MAX));

	INSERT INTO @database_list
		(database_name, recovery_model_desc)
	SELECT
		name,
		recovery_model_desc
	FROM sys.databases
	WHERE name NOT IN ('msdb', 'model', 'tempdb', 'master');

	-- Declare the working variables once, at the top of the dynamic batch.
	SET @sql_command = '
	DECLARE @free_space_GB BIGINT;
	DECLARE @database_size_GB BIGINT;
	DECLARE @error_message NVARCHAR(MAX);';

	SELECT @sql_command = @sql_command + '
	-- DBTest lives on the backup drive, so its volume stats report the
	-- free space available to backups.
	USE [DBTest];
	SELECT
		@free_space_GB = CAST((volume_stats.available_bytes / 1024 / 1024 /
			1024) AS BIGINT)
	FROM sys.master_files AS f
	CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.file_id) AS volume_stats
	WHERE f.database_id = DB_ID();

	-- Approximate the backup size using the database''s file sizes (8KB pages).
	USE [' + database_name + '];
	SELECT
		@database_size_GB = SUM(size) * 8 / 1024 / 1024
	FROM sysfiles;

	IF @free_space_GB - @database_size_GB < ' + CAST(@minimum_free_space_GB AS
		NVARCHAR(MAX)) + '
	BEGIN
		SET @error_message = ''Not enough space available to process backup on ' +
			database_name + ' while executing the full backup maintenance job. '' +
			CAST(@free_space_GB AS NVARCHAR(MAX)) + ''GB are currently free.'';
		RAISERROR(@error_message, 16, 1);
		RETURN;
	END

	BACKUP DATABASE [' + database_name + ']
	TO DISK = ''C:\SQLBackups\' + database_name + '_' + @datetime_string + '.bak''
	WITH COMPRESSION;'
	FROM @database_list;

	PRINT @sql_command;
	EXEC sp_executesql @sql_command;
END
Within the dynamic SQL, and prior to each backup, we check the current free space on the
backup drive and the size of the database we are about to back up, comparing those values (in GB)
to the allowable free space set in the stored procedure’s parameter. In the event that the backup
we are about to take is too large, an error is thrown. We can, in addition, take any number of
actions to alert the responsible parties, such as emails, pager services, and/or additional logging.
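For example, here is a minimal sketch of such an alert via Database Mail, assuming a mail profile is already configured; the profile name and recipient address are placeholders:

-- Send an alert when the space check fails; this could run in place of
-- (or alongside) the RAISERROR in the procedure above.
EXEC msdb.dbo.sp_send_dbmail
	@profile_name = 'DBA Mail Profile',
	@recipients = 'dba-team@yourcompany.com',
	@subject = 'Backup skipped: low disk space on backup drive',
	@body = 'The full backup maintenance job skipped a backup because free
space on the backup drive fell below the configured minimum.';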
In the event that I try to back up a particularly large database, the expected error will be thrown:
Msg 50000, Level 16, State 1, Line 656
Not enough space available to process backup on AdventureWorks2014 while
executing the full backup maintenance job. 141GB are currently free.
Since backup failures are far more serious than an index rebuild not running, we would want to
err on the side of caution and make sure the right people are notified as quickly as possible.
The parallel job solution from earlier could also be used to monitor backup jobs and, in the event
that free space is too low, send out alerts as needed and/or end the job.
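As a sketch of what that monitoring step might look like, the following standalone check could run on a schedule (for example, from a SQL Server Agent job). This is my own illustration rather than the original parallel job; the 25GB threshold mirrors the procedure parameter used above:

-- Standalone free-space check for the backup drive, suitable for a
-- scheduled monitoring job.
DECLARE @minimum_free_space_GB BIGINT = 25;
DECLARE @free_space_GB BIGINT;

SELECT
	@free_space_GB = CAST(volume_stats.available_bytes / 1024 / 1024 / 1024 AS BIGINT)
FROM sys.master_files AS f
CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.file_id) AS volume_stats
WHERE f.database_id = DB_ID('DBTest');

IF @free_space_GB < @minimum_free_space_GB
BEGIN
	-- An Agent alert or sp_send_dbmail call could be added here as well.
	RAISERROR('Free space on the backup drive is below the configured minimum.', 16, 1);
END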