SQL Server Maintenance Plan - Best Practices on Tasks and Scheduling
I am tasked with devising a maintenance plan for our SQL Server 2005 databases. I know that for backups I want a daily full database backup and transaction log backups every 15 minutes. My problem is figuring out which other tasks I should run and how often to run them.
So far I have the following in mind. Correct me if there are any flaws in my thinking or if there is a better way to do this.
- Backup - All Tables, Full Backup (daily)
- Backup - Selected Tables, Full Backup (hourly)
- Backup - Transaction Logs (every 15 minutes)
- Check database integrity (daily)
- Reorganize index (daily)
- Update statistics (daily)
- Shrink database (weekly)
- Rebuild index (weekly)
- Maintenance cleanup (daily)
I remember reading some time ago (when I set up a similar plan at another job) that some of these tasks don't need to be run daily, or should not be run daily. Which ones, though, escapes me. I could use a little guidance on creating a better maintenance plan that will reduce data loss in a disaster, but won't tax the system when running during peak hours (and will ideally also improve performance).
The current database is over 30 GB, so I thought a shrink would help. Thanks for your input, Endy.
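For concreteness, the backup portion of the plan I have in mind boils down to statements like the following (the database name and paths are placeholders, not our real ones):

```sql
-- Daily full backup. CHECKSUM verifies page checksums while
-- writing the backup; INIT overwrites the previous file.
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Backups\MyDatabase_Full.bak'
WITH INIT, CHECKSUM;

-- Transaction log backup, scheduled every 15 minutes.
-- Requires the FULL or BULK_LOGGED recovery model.
BACKUP LOG MyDatabase
TO DISK = N'D:\Backups\MyDatabase_Log.trn'
WITH CHECKSUM;
```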
Create a separate weekly job and update statistics once a week.
Yep, and also check whether you need a 100 percent scan on all tables. If there is a 50-million-row table that is queried once a month (for a report), you'd be better off updating its statistics only before that query :). I've added my answer too; hope it sheds some light.
This is a very common task for all DBAs, and the right answer is NOT the same for everyone or for every server. Like a lot of other things, it depends on what you need.
You most definitely don't want to run "Shrink Database", as already suggested. It's EVIL for performance, and the reference below shows you why: it causes disk as well as index fragmentation, which can lead to performance issues. You are better off pre-allocating a large size for the data and log files so that autogrowth does NOT kick in.
I didn't understand your #2, the selected-tables full backup. Can you elaborate on this?
Coming to index reorganizes, statistics updates, and index rebuilds: you need to be careful about how you do these, otherwise you will end up using more resources and still run into performance issues.
When you rebuild indexes, the statistics on those indexes are updated with a full scan; but if you run update statistics afterwards, they are updated again with a default sample (which depends on several factors, usually around 5% of the table when the table is larger than 8 MB), and that may lead to performance issues. Depending on the edition you have, you may be able to do online index rebuilds. The right way to do this activity is to check the amount of fragmentation and, depending on that, either rebuild the index or reorganize it and then update statistics. You may also want to identify which tables need their statistics updated more frequently and update those more often.
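A sketch of that fragmentation-driven approach, using `sys.dm_db_index_physical_stats`. The 5%/30% thresholds and the 100-page floor are common rules of thumb, not fixed limits, so adjust them to your environment:

```sql
-- List indexes in the current database with a suggested action
-- based on fragmentation. Thresholds are assumptions, not rules.
SELECT
    OBJECT_NAME(ips.object_id)        AS table_name,
    i.name                            AS index_name,
    ips.avg_fragmentation_in_percent,
    ips.page_count,
    CASE
        WHEN ips.page_count < 100 THEN 'ignore (too small)'
        WHEN ips.avg_fragmentation_in_percent >= 30
            THEN 'REBUILD'
        WHEN ips.avg_fragmentation_in_percent >= 5
            THEN 'REORGANIZE + UPDATE STATISTICS'
        ELSE 'ignore'
    END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
WHERE ips.index_id > 0   -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

You can feed the output into dynamic `ALTER INDEX ... REBUILD` / `... REORGANIZE` statements, which is essentially what Ola Hallengren's scripts do for you.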
Maintenance Plans are OK, but it's hard to get the best out of them with these customizations unless you go into SSIS and tweak the MPs. That's why I prefer NOT to use them, and instead use Ola Hallengren's free scripts, which are more robust than MPs. I would also recommend catching up on the referenced article by Paul Randal on this topic.
This is NOT a comprehensive answer to your question, but it's a good starting point. HTH, and let us know if you have any additional questions/comments.
Sankar, thanks for your input. I had thought that an hourly backup of certain tables (leaving out tables that don't change often) might be a better approach to save some time on the hourly backups. The big issue here is that I really want 15-minute transaction log backups, because data loss in our situation could have legal ramifications. As for the full backup frequency, hourly would be best, but I am afraid of taxing the system too much. I did look at that script before posting, but I haven't had a chance to try it.
I'll share my experience, even though you've already accepted an answer. Maybe it will be helpful :-).
- Full backup (daily) - great, but don't forget to check for space and remove old files after some predefined retention period.
- Backup selected tables (hourly) - I don't understand why you'd need this; you'd be better off with differential backups. How do you back up only certain tables anyway: SSIS, scripts, bcp? Regarding diff backups, don't schedule them too often, or you'll undermine the role of the log backups.
- Transaction log backups (every 15 minutes) - great, but are you sure you need this for all databases? Do they all use the full recovery model?
- Check db integrity - yes, but you need to make sure you don't kill your environment. The DBCC check statements are pretty greedy with resources and scan entire databases, so they need to be scheduled during off hours.
- Reorganize index (daily) - don't force it; do it only if needed. Check the index fragmentation DMVs and reorganize only based on need. I'd move all index and statistics operations into a single weekly task.
- Update statistics (daily) - please see my answer to a previous question. Instead of forcing an update of all statistics every day, you'd be better off checking when statistics were last updated and updating them only in some cases.
- Shrink database (weekly) - oh, no. Please read Paul Randal's article on file shrinking.
- Rebuild index (weekly) - see 5.
- Maintenance cleanup (daily) - OK with that.
You'd better also add a task to verify your backups - there's a version of the RESTORE command for that (VERIFYONLY, if I recall correctly). Let's say weekly, though I'd prefer daily.
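That command is indeed `RESTORE VERIFYONLY`. A minimal example (the path is a placeholder):

```sql
-- Verify that the backup file is readable and complete
-- without actually restoring it. If the backup was taken
-- WITH CHECKSUM, this also re-validates the page checksums.
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\MyDatabase_Full.bak'
WITH CHECKSUM;
```

Note that this checks the backup media, not the logical consistency of the data; only an actual test restore (plus DBCC CHECKDB) proves the backup is truly usable.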
You mention you want to be shielded against data loss, so I'd say you need to include the system databases in this maintenance procedure. Also take care to store the backup files on a different machine than the server itself, and keep a file somewhere describing what to do in case you need to rebuild master, msdb, etc. Good luck with your task!
Are fragmented indexes considered a "bad thing" on SQL Server? Where I come from, defragmenting indexes can kill performance and is usually pretty pointless anyway.
@Jack - of course fragmented indexes are a bad thing :-). See Brent Ozar's article on fragmented indexes, which includes examples. A quote from an MS white paper used in his article: "Index fragmentation slowed their systems down between 13% to 460%. Ouch." And keep in mind that Tom's article is from way back, when he was using the rule-based optimizer, not the later cost-based optimizer.
The CBO has nothing to do with it, and what he said back then still applies today on Oracle, I assure you. No idea about SQL Server, though - I read the article and was left pretty unconvinced: why no consideration of the fact that defragmenting can slow updates down horribly (think of all those leaf blocks splitting again because you keep gluing them back together)? I may start a new question on this...
@Jack - I wasn't trying to make a point about the optimizer, just about the timing (that was 10 years ago), and I was thinking that the internals of our servers change and evolve with every version. Anyway, regarding defragmentation slowing down updates: that's a tradeoff I will make any time, because my system's main weight is on reading data, not writing it. So everyone needs to make their own measurements.
A late answer, but it could be of use to other readers.
Please keep in mind that many of the maintenance or reporting tasks you can create carry unseen risks.
What happens when a drive fills up during the daily differential backups? What if an index rebuild job runs unusually long? What if a data load process causes extensive resource contention, bringing normal operations to their knees? All of these are planned events, yet they can cause considerable disruption to the very processes we are trying to safeguard.
Always consider how different jobs interact, and time them so that sensitive or resource-intensive tasks don't overlap.
I suggest reading this article first: http://www.sqlshack.com/removing-the-risk-from-important-maintenance-tasks-in-sql-server/
It describes how to perform important maintenance tasks in SQL Server risk-free. For example, you can build simple checks into your maintenance jobs that verify available resources, as well as what an operation will require, prior to execution. This lets you ensure that your environment can handle what you are about to do, and abort with a meaningful error if resources are inadequate.
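As a simple illustration of that idea, a job step can bail out before a backup if free space looks too low. This sketch uses the undocumented (but long-standing) `xp_fixeddrives` procedure; the drive letter and the 10 GB threshold are assumptions you would adapt to your server:

```sql
-- Abort with a meaningful error if the backup drive is low
-- on space, instead of letting the backup fail mid-write.
-- xp_fixeddrives is undocumented; drive and threshold are examples.
DECLARE @drives TABLE (drive CHAR(1), free_mb INT);
INSERT INTO @drives (drive, free_mb)
EXEC master..xp_fixeddrives;

IF (SELECT free_mb FROM @drives WHERE drive = 'D') < 10240
BEGIN
    RAISERROR('Less than 10 GB free on D: - aborting backup.', 16, 1);
END
ELSE
BEGIN
    BACKUP DATABASE MyDatabase
    TO DISK = N'D:\Backups\MyDatabase_Full.bak'
    WITH INIT, CHECKSUM;
END
```

Raised with severity 16, the error fails the Agent job step, so the failure shows up in job history rather than as a half-written backup file.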
You might benefit from doing differential backups here. Definitely look into them.
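A differential backup captures only the extents changed since the last full backup, so it is far cheaper than an hourly full backup while still shortening the restore chain (the database name and path below are placeholders):

```sql
-- Hourly differential backup: contains everything changed since
-- the last FULL backup, so a restore needs only the last full
-- backup plus the latest differential (plus any log backups).
BACKUP DATABASE MyDatabase
TO DISK = N'D:\Backups\MyDatabase_Diff.bak'
WITH DIFFERENTIAL, INIT, CHECKSUM;
```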
As stated previously, database shrinks are bad because they create physical fragmentation of your data and log files, thus causing more random IO reads.
Items 5, 6, and 8 (reorganize index, update statistics, rebuild index): see the following.
These really go hand in hand, as indexes rely on up-to-date statistics, and the order of these operations is fairly important. The baseline method I employ, which admittedly may not work for everyone, is to perform two rounds of index maintenance: first the clustered indexes, then the nonclustered indexes. The method I use for both is the following. If an index is large enough and fragmented enough (per sys.dm_db_index_physical_stats), I rebuild the index (which includes a statistics update with full scan). If an index is either too small for a rebuild (fewer than 100 pages) or not quite fragmented enough for one (between 5% and 15% fragmentation), I first perform an index reorganize and then a statistics update with full scan. If the index is not fragmented enough for either of these pieces of index maintenance, I still update the statistics with a full scan.
Now, this covers index statistics but does nothing for column statistics. Weekly, I update the column statistics.
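A sketch of that weekly column-statistics pass (the table name is a placeholder). The `COLUMNS` option of `UPDATE STATISTICS` touches only column statistics, leaving the index statistics already refreshed by the rebuild/reorganize pass untouched:

```sql
-- Weekly pass: refresh only the column statistics for a table,
-- leaving index statistics (already maintained by the index
-- rebuild/reorganize round) alone.
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN, COLUMNS;
```

To cover every table, wrap this in a loop over `sys.tables`; `sp_updatestats` is a coarser alternative, but it samples rather than full-scans and touches index statistics too.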
I hope this has helped in some way!
Your "data loss could have legal ramifications here" comment made me do a double-take.
In that case, you definitely want a powerful third-party tool (like DPM) that can handle backups (and recover from catastrophic events in a flash, with minimal fussing around) a lot faster and a lot better than any script you can pull off the Internet.
Having backups is one thing. Knowing how to use them in an emergency is another.
Don't forget that if you are at the point of restoring a backup after a major failure, you are probably already under a crapload of stress and pressure. You don't need the additional burden of digging up and flawlessly writing the RESTORE DATABASE statement with 12 transaction log files... and praying it doesn't fail you...
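To illustrate the point, even a short restore chain is easy to get wrong under pressure; every step but the last must be `WITH NORECOVERY`, in the right order. A minimal sketch with placeholder paths:

```sql
-- Restore the full backup but leave the database in the
-- RESTORING state so log backups can be applied on top.
RESTORE DATABASE MyDatabase
FROM DISK = N'D:\Backups\MyDatabase_Full.bak'
WITH NORECOVERY, REPLACE;

-- Apply each log backup in sequence, all WITH NORECOVERY...
RESTORE LOG MyDatabase
FROM DISK = N'D:\Backups\MyDatabase_Log_01.trn'
WITH NORECOVERY;

-- ...repeat for each file in order; only the final step
-- uses RECOVERY and brings the database back online.
RESTORE LOG MyDatabase
FROM DISK = N'D:\Backups\MyDatabase_Log_12.trn'
WITH RECOVERY;
```

Get one step out of order, or forget NORECOVERY once, and you start the whole chain over, which is exactly the kind of mistake a GUI-driven tool spares you.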
At my workplace, I can recover/restore a 350 GB database to any point within a 5-minute window over the last week using DPM, through a GUI. Worth it, in my book.
For the rest, definitely look into Ola Hallengren's index script and adjust the parameters to your needs. Personally, I coupled it with a scheduled task that makes it run for an hour each night with no rescan, so it handles the worst indexes every time, and forces a full rescan of the fragmentation every Saturday, or whenever all the indexes in the list have been defragmented.
Lastly, I'll add my voice to the "don't shrink your files automatically, ever" group. File shrink is a tool to use only sporadically, when some irregular event (an infinite loop or the like) has overgrown your log or DB files. It is not meant to be a maintenance tool. If you are under disk-space pressure, shrinking will only delay the inevitable anyway.