Author |
Topic |
jaydev
Starting Member
1 Post |
Posted - 2010-01-28 : 16:29:46
|
Hello friends, I am a web developer. I have been tasked with setting up a MS SQL database and maintaining it on a dedicated server. I created the database and everything is working fine. I have a few questions. I have already set up a job for a daily backup of the server. What else do I need to perform every day? Do I need to run a job to shrink the transaction log file every day, or will SQL Server do that? What other daily maintenance do I need to take care of to run the site without any issues? Thanks for your advice and help.

PS: I have SQL Server Workgroup Edition.

Jay |
|
Lumbago
Norsk Yak Master
3271 Posts |
Posted - 2010-01-29 : 03:14:27
|
The single most important task of a DBA is to make sure that backups are run according to your backup policy, that the backups are stored in a secure place (preferably off-server/off-site), and that you are actually able to restore the database in case of a failure of some sort. And in regards to your question: a full backup will not empty the transaction log, so you need transaction log backups as well. Have a look at this link to give you something to think about: http://vyaskn.tripod.com/sql_server_administration_best_practices.htm

When it comes to other DBA tasks there are about a million things you can do... some more important than others, of course: http://www.mssqltips.com/tip.asp?tip=1240

- Lumbago
If the facts don't fit the theory, change the facts. - Albert Einstein |
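To make the two backup types concrete, here is a minimal sketch (the database name and file paths are placeholders, and WITH CHECKSUM assumes SQL 2005 or later):

```sql
-- Full backup: this does NOT truncate the transaction log
BACKUP DATABASE MyDb
    TO DISK = N'D:\Backups\MyDb_full.bak'
    WITH INIT, CHECKSUM;

-- Transaction log backup: marks the backed-up log space as reusable,
-- which keeps the log file from growing without bound
BACKUP LOG MyDb
    TO DISK = N'D:\Backups\MyDb_log.trn'
    WITH INIT, CHECKSUM;
```

Without the second statement (or Simple Recovery Model), the log will keep growing no matter how often you take full backups.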
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-01-29 : 03:48:04
|
Paul Randal has enormous experience and skill, but I don't think he has a Database Maintenance 101 article, and all the detail on that site is probably better suited to more experienced DBAs, or Google hits for a specific problem. A quick Google didn't get me a Maintenance 101... someone will probably be along in a minute with a suitable link, but here is my two-pennyworth.

Using the Wizard is a good place to start (but a bad place to end up - when it fails the error messages are useless, and it's hard to know whether it's doing the right things for you, or not).

Full backup - daily if the files are not too big; weekly otherwise, and then do a Differential backup daily.

TLog backup every 15 minutes (NOT once a day). Not necessary if your database is in the Simple Recovery Model.

Copy the backup files somewhere else as soon as they are made (another machine will do). Copy them to tape promptly (for most folks this will be once a day). Have a proper tape rotation, including getting tapes offsite. Plan for disaster recovery.

Update Statistics - we do this daily, using WITH FULLSCAN, because the quicker sample method didn't get us reliable query plans.

Rebuild Indexes - if you have large tables, and the database is online 24/7, you will need to use defragmentation rather than re-index. We only rebuild indexes which are fragmented, to reduce the time that the rebuilds take.

DBCC CHECKDB - do this daily, and ensure that you get any alerts that it raises. This will give you early warning that there is damage in your files.

Monitor your file sizes - so you can predict when you will need more disk space.

"DO I need to run any job to shrink the transaction file everyday or sql server will do that?"

When you back up the TLog that happens automatically (the file doesn't shrink, but the space used by the transactions that have been backed up is marked for reuse). |
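The daily statistics/integrity steps above can be sketched roughly like this (database and table names are placeholders; note that sp_MSforeachtable is an undocumented but commonly used procedure):

```sql
-- Daily integrity check: report every error, suppress the noise
DBCC CHECKDB (MyDb) WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Update statistics WITH FULLSCAN on every table
-- (sp_MSforeachtable is undocumented; a cursor over sys.tables works too)
EXEC sp_MSforeachtable N'UPDATE STATISTICS ? WITH FULLSCAN';
```

Schedule these as SQL Agent job steps and make sure a failure actually raises an alert, otherwise the early warning is lost.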
 |
|
Lumbago
Norsk Yak Master
3271 Posts |
Posted - 2010-01-29 : 04:12:34
|
Do you seriously run CHECKDB every day, Kristen? I usually do CHECKDB once a week and run full/log backups with CHECKSUM...

- Lumbago
If the facts don't fit the theory, change the facts. - Albert Einstein |
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-01-29 : 04:19:18
|
"Do you seriously run CHECKDB every day Kristen?"Yup. The sooner I know that "there is trouble ahead" the less time it will take to fix the mess Old habit from SQL 5.6 days I expect ... SQL 2005 / SQL 2008 CHECKSUM probably much more reliable ... but ... |
 |
|
agiotti
Starting Member
37 Posts |
Posted - 2010-01-29 : 09:34:51
|
Here is a partial list of steps we perform at our company.

1. Index maintenance
   a. For both small and large databases, indexes are maintained based on thresholds:
      - Below 10% fragmented: perform an index statistics update. Note: this process checks for out-of-date indexes and updates statistics only as needed. It also determines the ratio of rows sampled to the number of rows in the table, which is used as the sample size; if the sample size is less than the previous sample size, abort. This helps prevent undoing a good sample from a previous run.
      - 10-30% fragmented: issue an ALTER INDEX REORG command, then perform an index statistics update.
      - Greater than 30% fragmented: issue an ALTER INDEX REBUILD command.
2. Perform DBCCs prior to any full backup
   a. We do not want to back up a corrupted database.
3. Perform full backup
   a. On success, cut a new TLOG backup.
   b. NOTE: Prior to SQL 2005, also perform a DBCC CHECKCATALOG before step 2 above.
4. Every 15 minutes, perform transaction log backups.
5. Have a job to check whether full backups were successful.
6. Have a job to check whether transaction log backups have run in the last x amount of time.
7. Have a job to scan the SQL Server error logs for any potential issues.
   a. Cycle the error log prior to each business day.
8. Have a job to capture database growth statistics on a daily basis.
   a. With this you can send alerts if the database grows over a certain percentage overnight. This has helped me catch rogue users backing up large tables.
   b. It also helps with database and disk forecasting.
9. Set up ALERTS:
   a. Failed login attempts, sending a notification with the IP address.
   b. Alerts on all errors with severity >= 19:
      i. In resource
      ii. In current process
      iii. In database processes
      iv. Table integrity
      v. Hardware errors
      vi. Alert on 825 (a read-retry error)
      vii. Alert on 823
      viii. Alert on 824
10. Monitor SQL Agent jobs for failure and duration of run.
    a. Duration of run: for example, if DBCCs are taking longer than your baseline. On SQL 2005 and newer, check the error log for msg 5268 (DBCC rescanning data).
11. Monitor for blocking conditions.
12. Monitor for long-running transactions.
    a. These could cause the LOG file to grow beyond an acceptable size.
13. Capture baseline data:
    a. Wait stats
    b. File stats
    c. Index usage stats
       i. They are not persistent, so you need to load a table for analysis.
    d. Memory
    e. Disk IO
    f. CPU
14. Have a process to monitor the OS event logs for issues and send notification. |
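The threshold-based index decision in step 1 can be driven by sys.dm_db_index_physical_stats (SQL 2005+). A minimal sketch, with the database name as a placeholder - this only reports the recommended action; a real job would build and execute the ALTER INDEX statements:

```sql
-- Classify each index by fragmentation, using the thresholds above
SELECT  OBJECT_NAME(s.object_id)          AS table_name,
        s.index_id,
        s.avg_fragmentation_in_percent,
        CASE
            WHEN s.avg_fragmentation_in_percent > 30 THEN 'ALTER INDEX ... REBUILD'
            WHEN s.avg_fragmentation_in_percent >= 10 THEN 'ALTER INDEX ... REORGANIZE + UPDATE STATISTICS'
            ELSE 'UPDATE STATISTICS (only if stale)'
        END                               AS recommended_action
FROM    sys.dm_db_index_physical_stats(DB_ID(N'MyDb'), NULL, NULL, NULL, 'LIMITED') AS s
WHERE   s.index_id > 0;   -- skip heaps
```

'LIMITED' mode keeps the scan cheap enough to run inside a nightly maintenance window.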
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-01-29 : 12:03:46
|
That's a very useful list, agiotti. |
 |
|
agiotti
Starting Member
37 Posts |
Posted - 2010-01-29 : 12:23:41
|
Hi Kristen,

Thank you very much. Take care. |
 |
|
agiotti
Starting Member
37 Posts |
Posted - 2010-01-29 : 12:27:15
|
If you need assistance setting these up, I can guide you through most of them. For the baseline and ongoing benchmarks for the KPIs (CPU, memory) and the SQL Agent jobs, we use Idera's Diagnostic Manager. For disk IO we utilize a CMS (central management server) and use System Monitor to load tables with this information. As for wait and file stats, a good friend, Andrew Kelly, has written a very nice process to capture these stats; I can send you that information as well. |
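The wait-stats part of the baseline can be done with nothing but the DMVs. A rough sketch of the idea (the table name is a placeholder, and Andrew Kelly's actual scripts are more sophisticated - they diff successive snapshots, for one thing):

```sql
-- First run: create the baseline table from a snapshot of the wait DMV
SELECT  GETDATE()            AS capture_time,
        wait_type,
        waiting_tasks_count,
        wait_time_ms,
        signal_wait_time_ms
INTO    dbo.WaitStatsBaseline
FROM    sys.dm_os_wait_stats
WHERE   wait_time_ms > 0;

-- Subsequent runs (e.g. an hourly SQL Agent job) would use
-- INSERT INTO dbo.WaitStatsBaseline SELECT ... instead of SELECT ... INTO
```

Since sys.dm_os_wait_stats is cumulative since the last restart, analysis means subtracting consecutive snapshots to get the waits for each interval.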
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-01-29 : 13:35:25
|
"2. Perform DBCC's prior to any full backup a. We do not want to backup a corrupted database"

SQL 2008 (in fact SQL 2005 too, I think) will terminate a backup if the database is using CHECKSUM and a bad checksum is found. (That doesn't guarantee other errors might not be there, of course, but it pretty much rules out a faulty I/O controller.) Also, that checksum is used on the restore too, so it safeguards against the backup file itself becoming damaged.

"10 - 30% fragmented: Issue an ALTER INDEX REORG command. Perform an index statistic update. Greater than 30% fragmented: Issue an ALTER INDEX REBUILD command."

We don't like to use any (offline) ALTER INDEX REBUILD commands on large tables (database in use 24/7), so we use ALTER INDEX REBUILD on very small tables, and ALTER INDEX REORGANIZE on larger tables. I would use ALTER INDEX REBUILD WITH ONLINE=ON for large tables, but that is an Enterprise-only option.

"3. Perform full backup a. On success cut a new TLOG backup"

Why (3a)? We take a TLog backup immediately before the full backup; that way, the first TLog file to restore does not contain transactions already committed to the DB (except those made after the start of the full backup). This, perhaps(!), saves a little time in a restore. |
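For reference, the two options being discussed look like this (names are placeholders; ONLINE = ON requires Enterprise Edition, as noted):

```sql
-- Backup with page-checksum verification: the backup fails (and alerts you)
-- if a page with a bad checksum is read
BACKUP DATABASE MyDb
    TO DISK = N'D:\Backups\MyDb_full.bak'
    WITH INIT, CHECKSUM;

-- Online rebuild for a large, always-in-use table (Enterprise only);
-- on other editions, REORGANIZE is the online-friendly alternative
ALTER INDEX ALL ON dbo.BigTable REBUILD WITH (ONLINE = ON);
ALTER INDEX ALL ON dbo.BigTable REORGANIZE;
```

REORGANIZE is always an online operation and can be stopped midway without losing the work already done, which is why it suits 24/7 databases.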
 |
|
agiotti
Starting Member
37 Posts |
Posted - 2010-01-29 : 16:50:42
|
"2. Perform DBCC's prior to any full backup a. We do not want to backup a corrupted database"

Are you implying not to perform DBCC, in the hope that your backup with checksum will catch your issues? You are correct about the backup stopping; however, I prefer to run DBCCs to verify all is well before the backup. Keep in mind that BACKUP WITH CHECKSUM will stop when it detects a bad checksum, so you will not know the whole story of the corruption, whereas DBCC will read every page and report on all the issues. Also keep in mind that BACKUP WITH CHECKSUM will only read pages that contain checksums, while DBCC will read every page whether it has a checksum or not. If you are time-constrained, you could run DBCC when you have a larger window and employ BACKUP WITH CHECKSUM to allow your maintenance to complete more quickly. Lastly, you need to make sure that the CHECKSUM option is enabled: if you migrated from SQL 2000 this option is not on; torn page detection is instead.

One more point regarding backups that we perform: once the backup is successfully created we also perform RESTORE VERIFYONLY immediately after, to check the validity of any backup, and send alerts if any issues are encountered. This way we are well aware before the next business day, and can act quickly to determine the cause and rerun backups. Of course this all happens through the wee hours of the morning, yuk.

"Why (3a)?"

The TLog backup prior to the full backup is useless for the backup being created. We create TLog backups directly after, just to begin the chain. This TLog creation is not strictly necessary, since we do have transaction log backups running every 15 minutes. |
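The verify step and the page-verify check mentioned above, sketched with placeholder names:

```sql
-- Verify the backup file without restoring it; WITH CHECKSUM re-validates
-- the page checksums stored in the backup
RESTORE VERIFYONLY
    FROM DISK = N'D:\Backups\MyDb_full.bak'
    WITH CHECKSUM;

-- Check which page-verify option each database is actually using
-- (databases upgraded from SQL 2000 may still be on TORN_PAGE_DETECTION)
SELECT name, page_verify_option_desc FROM sys.databases;

-- Switch to CHECKSUM; note it only applies to pages written AFTER the
-- change, which is why a subsequent index rebuild helps
ALTER DATABASE MyDb SET PAGE_VERIFY CHECKSUM;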
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-01-30 : 02:06:21
|
"Are you implying not to perform DBCC in hopes that your backup with checksum will catch your issues?"

No, we run DBCC CHECKDB as part of our daily routine, so I will know if there is corruption soon after it occurs, but we don't make it part of our full backup routine. I think I would still want the backup to happen (even if it then fails); I might need it in case something else goes wrong. Worth me reconsidering how we time the CHECKDB in our daily routine, though.

"Lastly, you need to make sure that the CHECKSUM option is enabled, and if you migrated from 2000 this option is not on rather torn page detection is."

Yup, that's in our migration notes; plus we do that first and then rebuild all indexes (and all tables have clustered indexes), so we should have checksums on all/most pages.

"Once the backup is successfully created we also perform RESTORE VERIFYONLY immediately after to check the validity of any backup and send alerts if any issues are encountered. This way, we are well aware, before the next business day, and can act quickly to determine cause and rerun backups."

I've never bothered with that, but it's a good point. We might be writing the backup to a "broken disk". We do a test restore once a week, but knowing "immediately" that a backup is broken would definitely be worthwhile.

"The TLog backup prior to the full backup is useless for the backup being created."

Indeed, but it "empties" the TLog, so the TLog after the backup is smaller - how to describe this - it only contains transactions needed from the point the full backup was started. Your TLog backup after the full backup contains, say, 14 minutes of transactions from before the full backup, so a restore will have to "ignore" those before starting the actual restore of transactions. It may be a very small point (on the other hand, the previous 14 minutes might have made 10 GB of transactions!), but it was the basis of our decision to back up the TLog just before the full backup. The next 15-minute TLog backup after the full will be our first TLog backup after the full backup. |
 |
|
agiotti
Starting Member
37 Posts |
Posted - 2010-01-30 : 08:34:29
|
"Indeed, but it 'empties' the TLog, so the TLog after the backup is smaller - how to describe this - it only contains transactions needed from the point the full backup was started."

Okay, now this makes sense. Good idea.

Great thread! |
 |
|
|