Is it mandatory to keep the database in single-user mode when we run DBCC CHECKDB? If yes, what would be the reason to take an internal snapshot in the DBCC CHECKDB process?
It's not mandatory to keep the DB in single-user mode when running DBCC checks. While DBCC CHECKDB is running, user connections can remain active, and I don't think there will be any issues with it.
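A minimal sketch of how this looks in practice (assuming a hypothetical database name `MyDB`). CHECKDB runs online precisely because it creates a hidden internal snapshot, which gives it a transactionally consistent view of the database while users keep working; the TABLOCK option is the one that skips the snapshot, at the cost of blocking:

```sql
-- Online check: SQL Server creates an internal database snapshot,
-- so user connections can keep reading and writing while it runs.
DBCC CHECKDB (N'MyDB') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Alternative that avoids the internal snapshot: TABLOCK takes
-- shared table locks instead, so it DOES block users - this
-- blocking is exactly what the snapshot exists to avoid.
DBCC CHECKDB (N'MyDB') WITH TABLOCK, NO_INFOMSGS;
```

So the internal snapshot is the answer to the question above: it is what lets CHECKDB get a consistent view without single-user mode.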
Per MS, it's recommended to run DBCC CHECKDB once a week to ensure you don't have any issues with your DB; normally running it during off-peak hours over the weekend should be fine.
We run DBCC CheckDB on a restored backup (on a different machine). We do that restore anyway, to prove that the backup file we have CAN be restored, and running CheckDB on that copy means there is no load on the live system.
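The restore-then-check cycle boils down to something like this (a sketch with hypothetical database names and file paths, not the poster's actual script):

```sql
-- On the test server: restore last night's full backup under a
-- different name, relocating the files with MOVE.
RESTORE DATABASE MyDB_Check
    FROM DISK = N'\\backupshare\MyDB_Full.bak'
    WITH MOVE N'MyDB'     TO N'D:\Data\MyDB_Check.mdf',
         MOVE N'MyDB_log' TO N'L:\Log\MyDB_Check.ldf',
         STATS = 10;

-- A successful restore proves the backup file is usable; the
-- check then proves the data in it was consistent.
DBCC CHECKDB (N'MyDB_Check') WITH NO_INFOMSGS;

DROP DATABASE MyDB_Check;  -- tidy up once the check passes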
Not 100% true. As mentioned by Paul Randal, running DBCC CHECKDB on a restored copy doesn't tell you anything about the state of the main DB, as the two run on completely different I/O subsystems. When your restored copy successfully passes the DBCC checks, that only tells you it was fine at the time of the backup. Check the link below for details:
Yes, all good points, thanks. Paul is a rockstar in this landscape, but his article doesn't answer the question that occurs to me: "Does it matter?"
So the I/O system on my Production server is bust, but the backups on my Test system pass CheckDB OK. Do I care? (I would if I knew, of course.) When my production system breaks, I restore from backup (or fail over to the secondary, whatever) and carry on as normal.
Clearly it would be better if I knew, i.e. had discovered, that my Production system's I/O was bust. But doing that (i.e. running CheckDB on Production, assuming that running CheckDB in that environment is not "trivial") has a significant cost: performance while CheckDB is running, storage for the snapshot, or DEV time in building a really complex CheckDB system that spreads the load over all the days in a week (and, for me, any added complexity brings a worry about whether we missed something critical in the process, or created a forward opportunity for a change to fall outside the then-current test, etc. ...)
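One common way to cut that cost without a fully custom system (a sketch of the general technique, not the poster's setup; table names are hypothetical) is to run the cheap physical-only check on Production regularly and spread the expensive logical checks over the week with per-table checks:

```sql
-- Cheaper nightly option: verifies page structure and checksums
-- only, skipping the expensive logical checks.
DBCC CHECKDB (N'MyDB') WITH PHYSICAL_ONLY, NO_INFOMSGS;

-- Allocation and catalog checks are cheap enough to run nightly.
DBCC CHECKALLOC (N'MyDB') WITH NO_INFOMSGS;
DBCC CHECKCATALOG (N'MyDB');

-- Rotate the heavy per-table logical checks across the week,
-- e.g. a different subset of large tables each night.
DBCC CHECKTABLE (N'dbo.BigTable1') WITH NO_INFOMSGS;  -- Monday's share
DBCC CHECKTABLE (N'dbo.BigTable2') WITH NO_INFOMSGS;
```

Together these cover roughly what a full CHECKDB does, just amortised over several nights instead of one big hit.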
But keep in mind that while it is running, your system is likely to slow down quite a bit; it is very heavy on I/O resources. Run it during your slowest load, which is often at night. You could see a significant decrease in performance while it is running - I/O speed is crucial here.
If you are using Availability Groups, you need to run it on all replicas.
In addition to the backup/restore to a test server, you could also use SAN snapshots, which let you complete the task faster since you skip the backup/restore. You can transfer a VLDB in just a few minutes, which is not possible with backup/restore.
Totally valid, and I agree that it's useful to know from a backup that the DB was in a healthy state at the time of the backup. I am only trying to say that you might still have issues even if the checks pass on the backup.
As for the many DBs out there, I don't think most shops will have an automated way to restore all their DBs on another server, perform the DBCC checks, and then drop them. This might mean they either don't do it at all, or they randomly pick their "critical" databases for that restore-and-check. So does it matter? It's up to them really (or: it depends!). For a VLDB, or if you already have a way to automate the DBCC checks, by all means verify that the backup is fine. However, if the DB is small, or you'd need to do the restore checks manually (every week, for a large number of databases), you might be better off just running them on the primary (given you've got the off-peak hours and I/O, etc.)
Not to start an argument, just trying to point out that not many companies out there have good DBAs like the pros here.
Yes, it's a very good point. It's easy to assume that checking the health of a restored backup copy, on another machine, is "all you need to do".
It's worrying, isn't it? There are probably still more sites where there are no adequate backups at all ... but for anyone who IS making backups, NOT checking that the backups CAN be restored is almost as worrying, to me, as having no backups at all.
On our servers we generally have enough disk space for any single database to be restored (under a TEMP name) onto the Production server itself. So, assuming no other server is available, restoring and testing on the Production server would be better than no testing at all, and it would check the I/O subsystem reasonably well as a bonus ... though you've still got the CPU & I/O "burden" on the Production server, of course.
I've got no argument with that
I think MS could (in fact SHOULD) do more to automate some of these things out-of-the-box for small shops. The Maintenance Plan Wizard is woefully inadequate. I'm thinking of something like a PROPERTY on a database that sets the housekeeping strategy for the DB:
1) "I'm on it." Don't do anything. (That's for the good DBAs like those pros here.)
2) Regular backups, index rebuilds and CheckDBs
2a) If Full Recovery Model then include TLog backups every 10 minutes
2b) If Simple then just Daily Fulls.
Could be a few further variations on the theme - e.g. Sunday Fulls and Daily Diffs, or TLog backups at a more/less frequent interval than 10 minutes (though I can't see the point of less often: 144 x 10-minute backups a day, where most of the files are "empty", would not be a big deal compared to losing a database with, say, 59 minutes of heavy data-entry activity since the last TLog backup). There might need to be some choices for how often the index rebuilds occur, but it ought to be possible for a sophisticated tool to figure that out, including things like "Wow! This DB is huge; the indexes most commonly growing out of shape are THESE, so I will rebuild those every night / on a rota / something else!" and so on. SOMETHING would be better than NOTHING! Or than assuming that a naive not-a-DBA Super User in the company is actually a safe pair of hands when let loose with the Maintenance Wizard.
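The 2a) schedule above boils down to commands like these (hypothetical database name and paths; in practice each would be an Agent job on a schedule rather than run by hand):

```sql
-- Nightly full backup; CHECKSUM gives page-level verification
-- as the backup is written.
BACKUP DATABASE MyDB
    TO DISK = N'D:\Backup\MyDB_Full.bak'
    WITH INIT, CHECKSUM;

-- Scheduled every 10 minutes under the Full Recovery Model.
-- Each file is tiny when the DB is quiet - the "mostly empty"
-- files mentioned above - but caps data loss at ~10 minutes.
BACKUP LOG MyDB
    TO DISK = N'D:\Backup\MyDB_Log_202401010010.trn'
    WITH CHECKSUM;
```

Under the Simple model (2b) the log backups are simply omitted, since the log is truncated automatically and point-in-time recovery isn't available anyway.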
If MS built this in, the only thing left to do would be to define, for the server, where the backup folders are, and a mirror location for the files to be copied to, so at least they are off-server, albeit perhaps still on-site and not yet on tape.
That's pretty much what my roll-your-own backup stuff does. There are freebie tools out there that do similar, but IME they require setup and choosing sensible parameters, which I think is generally onerous on the user (assuming a user who is NOT a rockstar DBA) - with the exception, IMHO, of Minion Backup, which pretty much just does a good job of running a backup strategy from the moment it has been installed. Not quite the same thing as what could be achieved by a database property in SSMS that you have to turn OFF if you don't want automated housekeeping.
Totally agree. Given that storage is pretty cheap nowadays, setting up a VM comes at very minimal cost, and you can have tons of space just to do the checks.
You've got a good point: if we only take backups and never try to restore them, it might be as worrying as not taking backups at all - couldn't agree more. I did visit some companies where they had never tried to restore any of their backups for the past 1 or 2 years!!! Yet they claimed they already had a very good backup strategy.
I expect things are better / more robust now, but from Back In The Day I have plenty of horror stories ...
Stepper motor on the tape head was broken. 8-track (I guess) tape. Recorded the whole backup on the one track ... verified it too (well ... verified the contents written on that final track). Come time to restore ... they have everything off that last track ...