SQLTeam.com | Weblogs | Forums

MSDB size


Is 300 MB big for msdb? What should the normal size be for msdb?


That does seem big compared to what I see on a couple of my servers. I work with small databases that don't have much activity on them, so I don't know whether 300 MB is unusually large for a busier/larger database server.

The following query lets you see what is taking up the space.

USE msdb;
GO

SELECT
    t.NAME AS TableName,
    i.name AS IndexName,
    SUM(p.rows) AS RowCounts,
    SUM(a.total_pages) AS TotalPages,
    SUM(a.used_pages) AS UsedPages,
    SUM(a.data_pages) AS DataPages,
    (SUM(a.total_pages) * 8) / 1024 AS TotalSpaceMB,
    (SUM(a.used_pages) * 8) / 1024 AS UsedSpaceMB,
    (SUM(a.data_pages) * 8) / 1024 AS DataSpaceMB
FROM
    sys.tables t
INNER JOIN
    sys.indexes i ON t.OBJECT_ID = i.object_id
INNER JOIN
    sys.partitions p ON i.object_id = p.OBJECT_ID AND i.index_id = p.index_id
INNER JOIN
    sys.allocation_units a ON p.partition_id = a.container_id
WHERE
    t.NAME NOT LIKE 'dt%' AND
    i.OBJECT_ID > 255 AND
    i.index_id <= 1
GROUP BY
    t.NAME, i.object_id, i.index_id, i.name

I didn't write that query; I copied it from somewhere a while ago and can't remember where, so I can't give proper credit to the author.
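If you just want headline numbers before drilling into individual tables, the built-in sp_spaceused procedure will report overall database usage, or a single table's usage when you pass it a name. The table names below are just examples of likely suspects in msdb, not a complete list:

```sql
USE msdb;
GO
EXEC sp_spaceused;                          -- whole-database size and unallocated space
EXEC sp_spaceused N'dbo.backupset';         -- backup history
EXEC sp_spaceused N'dbo.sysssispackages';   -- SSIS package store
```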


Yeah, it's a small database, about 21 GB.


msdb holds SSIS packages (though not SSISDB projects, on 2012 and later) and Agent jobs. Packages can be big; many of ours are 500 KB or more (XML is quite verbose). So depending on what you have in there, it could be quite big.

e.g. on one of our servers, the table msdb.dbo.sysssispackages has 10097 pages, or about 79 MB (if I got the math right!).

Also, sysjobhistory is quite large, as are other tables.
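A quick way to see how much Agent job history you are carrying (note that run_date is stored as an integer in yyyymmdd form, not as a datetime):

```sql
SELECT COUNT(*)      AS HistoryRows,
       MIN(run_date) AS OldestRunDate   -- int, yyyymmdd
FROM   msdb.dbo.sysjobhistory;
```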


I queried msdb.dbo.backupset and I have backup data going back to 2012. Will cleaning up msdb make my database faster?


Why do you think msdb is slow?


Yes, it will make it faster. It will also make SSMS faster. Run the command below to delete history older than the date specified:

EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = '20150701' --keep from 1-Jul-2015 forward, as an example


Have MS improved sp_delete_backuphistory?

It used to be dreadful (many folks wrote their own variation of it; I think TaraKizer has one on her blog).

The first time I ran it, it took DAYS to delete a few tens of thousands of rows.


OK, I took a look and it is definitely better (SQL 2008; I think that SQL 2005 was still "bad", but I no longer have access to one of those). At least it is not doing RBAR like it used to ... but, seriously, MS code contains this sort of stuff?

  WHERE bmf.media_set_id IN (SELECT media_set_id
                             FROM @media_set_id)
    AND ((SELECT COUNT(*)
          FROM msdb.dbo.backupset
          WHERE media_set_id = bmf.media_set_id) = 0)

Using COUNT(*) instead of EXISTS is a horrific performance hog, although I don't suppose the number of rows involved means that "it matters" ... but still, I expect better-quality code.
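For what it's worth, the same check written with NOT EXISTS lets the engine stop at the first matching row instead of counting them all. A sketch of the rewrite (not Microsoft's actual code):

```sql
  WHERE bmf.media_set_id IN (SELECT media_set_id
                             FROM @media_set_id)
    AND NOT EXISTS (SELECT *
                    FROM   msdb.dbo.backupset
                    WHERE  media_set_id = bmf.media_set_id)
```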


Given the number of DELETEs that sp_delete_backuphistory makes, I would recommend doing it in "modest" date increments, starting from the "beginning of time".

This will tell you the oldest backup record in MSDB

SELECT	MIN(backup_start_date)
FROM	msdb.dbo.backupset

Deleting in, say, 3-month chunks is risky: even if it is fast on the older date ranges, it may become unacceptably slow on more recent ones. If the number of databases increased, the frequency of Log backups went up, or more databases moved to the FULL recovery model, then the number of backup records per period will have grown, so fixed date-sized chunks alone are not a good indicator of how long each pass will take.

This will tell you what date to purge to in order to delete, in this example, 10,000 backup records

SELECT	'EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = '''
	+ CONVERT(varchar(8), MAX(backup_start_date), 112)
	+ ''''
FROM
(
	SELECT	TOP 10000 backup_start_date
	FROM	msdb.dbo.backupset
	ORDER BY backup_start_date ASC
) AS X

I suggest trying that with a modest batch size, increasing it on each iteration until each pass takes a reasonable elapsed time. Check that the log file is not growing excessively with the batch size that you use.
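Putting the pieces above together, the purge can be driven by a loop: find the date that covers the next batch of rows, purge up to it, and stop once you reach your retention cutoff. A sketch only; the batch size and retention date are illustrative, and you should watch the log file between passes as noted above:

```sql
-- Illustrative values: keep history from 1-Jul-2015 forward, ~10,000 rows per pass
DECLARE @KeepAfter datetime = '20150701';
DECLARE @CutOff    datetime;

WHILE 1 = 1
BEGIN
    -- Date covering the oldest ~10,000 remaining backupset rows
    SELECT @CutOff = MAX(backup_start_date)
    FROM  (SELECT TOP 10000 backup_start_date
           FROM   msdb.dbo.backupset
           ORDER BY backup_start_date ASC) AS X;

    -- Stop when no rows remain, or the next batch would pass the retention date
    IF @CutOff IS NULL OR @CutOff > @KeepAfter
        BREAK;

    EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @CutOff;
    CHECKPOINT;  -- msdb is normally SIMPLE recovery; lets the log truncate between passes
END
```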