Restoring a large database

I have received a backup of 100 GB for a large (160 GB) database from a customer.
The customer has the database set up as a single MDF file.
Is there any way I can restore this database locally while breaking the 160 GB MDF up into, say, a 100 GB MDF and a second 60 GB file?
The servers on which we restore customer backups are tight on space; no single disk has 160 GB free, although across all of the disks available to this server I can find that much space in total.

Thanks
Murray Sobol

No. The initial restore must recreate the same set of files that the database had when it was backed up.
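You can see exactly what that set of files is with RESTORE FILELISTONLY; it lists the logical names and sizes the restore is going to recreate (the backup path below is just a placeholder):

```sql
-- List the data/log files contained in the backup, with their logical names and sizes.
RESTORE FILELISTONLY
FROM DISK = N'E:\Backups\CustomerDb.bak';
```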

IMO that's a pretty severe restriction!!!
I have no control over how customers manage / mismanage MDF and LDF files.
Murray

Data from different tables can be interleaved throughout a data file, and a backup is essentially a page-level image of those files rather than a logical export. That alone would make it extraordinarily difficult, if not impossible, for SQL Server to rearrange data into a different file layout during a restore.

Consider too: when you back up a .pdf or .xlsx file, you can't partially restore it either, and those formats are vastly less complex than a SQL Server database.

Other database vendors, notably Oracle, handle this problem much more elegantly.
A tablespace consists of one or more datafiles, and these can be placed on any disks available on the server.
If there is not enough space in the first (and only) datafile when the backup is restored, you simply add another datafile (something like the sketch below) and re-run the import.
That way I don't need to know (or care) how many datafiles the customer has spread their database across.
I can also place "hot" tables in a secondary datafile, achieving higher performance, assuming the disks are configured correctly.
I don't buy SQL Server's "everything in one spot" mantra; there are many valid reasons for separating UNDO files, REDO files, TEMP files, etc. from each other instead of lumping them all into one tempdb.
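For reference, the Oracle-side step I'm describing is just this (tablespace name and path are made up for illustration):

```sql
-- Oracle: give the tablespace room by adding a datafile on another disk
ALTER TABLESPACE app_data
    ADD DATAFILE '/u02/oradata/ORCL/app_data02.dbf'
    SIZE 50G AUTOEXTEND ON NEXT 1G;
```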

Murray

I certainly wouldn't call Oracle's undo approach "elegant". It routinely caused "snapshot too old" errors when long-running read queries ran, and until recently you had no choice: every table had to be fully logged that way.

And that doesn't even count the extraordinarily high licensing costs of Oracle. I've helped more than one client convert from Oracle to SQL Server almost solely because of cost.

Sorry, but it doesn't matter how annoyed you are about the way SQL Server does it; you're stuck with it.

What you can do is restore on any machine that has SQL Server installed and enough disk space, rearrange the data as you wish (into multiple files, etc.), then take a backup and restore that backup to your target server.
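Once the staging copy is rearranged, the hand-off is just an ordinary backup (database and path names here are placeholders):

```sql
-- Take a fresh backup of the rearranged staging copy to carry to the target server.
-- COMPRESSION (where the edition supports it) also keeps the backup file itself small.
BACKUP DATABASE CustomerDb
TO DISK = N'E:\Backups\CustomerDb_split.bak'
WITH INIT, COMPRESSION, STATS = 10;
```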

In the process you might find that there is slack space in the files, which you can SHRINK (before taking your backup) to end up with the smallest possible DB at that point. You could also, possibly, purge old data that isn't relevant to whatever tasks you need to perform on the customer DB and, again, SHRINK and then BACKUP.
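A minimal shrink step, assuming a logical data file name of CustomerDb_Data (check yours in sys.database_files), might look like:

```sql
USE CustomerDb;
GO
-- Release unused space at the end of the data file without moving pages around.
DBCC SHRINKFILE (N'CustomerDb_Data', TRUNCATEONLY);
-- Or shrink toward a specific target size (in MB), which does move pages:
DBCC SHRINKFILE (N'CustomerDb_Data', 102400);
```

Shrinking fragments indexes, so if query performance on the staging copy matters, rebuild the important indexes afterwards.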

Or connect remotely to the client's machine and perform your task there?

I doubt this is the case, but just in case of misunderstanding: you can PLACE the files wherever you like (the restore drives/paths don't have to match the original locations), but you have to restore each file in full. That includes cases where the client has grown the MDF to xxx GB even though the DB only contains xxx MB of data, or where the log file has never been backed up and has grown huge, etc.
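For example, using the logical names reported by RESTORE FILELISTONLY (the names and paths below are placeholders), the files can be scattered across whichever drives have room:

```sql
-- Restore the same set of files, but place each one on a drive that has space.
RESTORE DATABASE CustomerDb
FROM DISK = N'E:\Backups\CustomerDb.bak'
WITH MOVE N'CustomerDb_Data' TO N'D:\Data\CustomerDb.mdf',
     MOVE N'CustomerDb_Log'  TO N'G:\Logs\CustomerDb_log.ldf',
     STATS = 10;
```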

To be perfectly honest, 160 GB is not just small by today's standards, it's tiny; you can't even buy a disk twice that size anymore. Spend $100 on some 1 TB+ drives and drop them into your server, or ask your SAN/storage administrator for some extra space.

Once you get the initial restore done, you can reconfigure filegroups and files to achieve the disk placement you want.
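A rough sketch of that reconfiguration, with made-up database, file, table, and index names:

```sql
-- Add a second filegroup with a file on another drive.
ALTER DATABASE CustomerDb ADD FILEGROUP SECONDARY_FG;
ALTER DATABASE CustomerDb
    ADD FILE ( NAME = N'CustomerDb_Data2',
               FILENAME = N'F:\Data\CustomerDb_Data2.ndf',
               SIZE = 60GB, FILEGROWTH = 1GB )
    TO FILEGROUP SECONDARY_FG;

-- Rebuild a large table's existing clustered index onto the new filegroup to move its data there.
CREATE UNIQUE CLUSTERED INDEX IX_BigTable_Id
    ON dbo.BigTable (Id)
    WITH (DROP_EXISTING = ON)
    ON SECONDARY_FG;

-- Then shrink the original MDF to release the space that moved out of it.
DBCC SHRINKFILE (N'CustomerDb_Data', 102400);  -- target size in MB
```

Afterwards, back up this copy and restore it on the space-constrained server, using WITH MOVE to put each of the now-smaller files on a drive that has room.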
