Please start any new threads on our new
site at https://forums.sqlteam.com. We've got lots of great SQL Server
experts to answer whatever question you can come up with.
FUNCTOR
Starting Member
5 Posts |
Posted - 2010-03-31 : 12:07:17
|
Hey! I've got a DB that stores a lot of information in a few tables. Each record has over a meg of binary data. This DB is connected to a production line that needs to be fast. As the DB grows, the time it takes to write to the big records grows, and eventually the production line becomes unstable. What we do right now is back up and then purge the big tables when the line becomes unstable, and all is well for the next few months. The problem with this is that when we want to access that old information, we have to restore a backup.

What I want is to have an online and a history version of the DB. The online version will only contain the last month of data. The history version will have all the data. The production line will only be connected to the online DB, and should never become unstable again. All of our reporting applications will point to the history DB, as they are not as critical as keeping the production line running.

I see there are tools like log shipping to move the data to the history DB, but how do I keep the regular purging of the online DB from being replicated to the history DB? Is there another solution? Any advice? Thanks!
|
harsh_athalye
Master Smack Fu Yak Hacker
5581 Posts |
Posted - 2010-03-31 : 12:11:50
|
If there are only a few big tables, I would suggest periodic archival of data from the main tables to history tables. Secondly, you can also partition the big tables based on certain criteria. Say, based on time, you may want to create a partition for each month, to further optimize fetching data from the big tables.

As for the history DB strategy, why should the purge process happen in the historical DB at all?

Harsh Athalye
http://www.letsgeek.net/
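For what it's worth, a monthly table-partitioning setup on SQL Server 2005+ (Enterprise Edition is required for table partitioning in those versions) might look roughly like the sketch below. All object and column names (pfMonthly, psMonthly, BigTable, RecordDate) are made up for illustration; adjust to the real schema.

```sql
-- Partition function: RANGE RIGHT puts each boundary date at the
-- start of its month's partition.
CREATE PARTITION FUNCTION pfMonthly (datetime)
AS RANGE RIGHT FOR VALUES
    ('2010-01-01', '2010-02-01', '2010-03-01', '2010-04-01');
GO

-- Partition scheme: ALL TO ([PRIMARY]) keeps the sketch simple;
-- in practice you might spread partitions across filegroups.
CREATE PARTITION SCHEME psMonthly
AS PARTITION pfMonthly ALL TO ([PRIMARY]);
GO

-- Create the big table on the partition scheme, partitioned by date.
CREATE TABLE dbo.BigTable
(
    RecordDate  datetime        NOT NULL,
    Payload     varbinary(max)  NULL
) ON psMonthly (RecordDate);
```

With this in place, queries that filter on RecordDate can touch only the relevant partitions, and old months can be switched out with ALTER TABLE ... SWITCH rather than row-by-row deletes.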
 |
|
FUNCTOR
Starting Member
5 Posts |
Posted - 2010-03-31 : 12:31:19
|
For archiving the big tables to tables in the historical DB, how would you set that up? Is there a tool built in, or would I have to write a stored procedure combined with some triggers?

That table partitioning sounds good. I'll have to look up how to set it up.

We wouldn't want to purge anything from the history DB. From what I understood about log shipping, if I used that strategy, whenever I purged the online DB, it would also purge the history DB (which I don't want to happen).

Thanks for the quick response!
 |
|
harsh_athalye
Master Smack Fu Yak Hacker
5581 Posts |
Posted - 2010-03-31 : 12:37:11
|
For archival purposes, you don't have to use a separate database. You can create history tables in the same database, with, say, a _history suffix added to them. On our side, we have nightly batches (SSIS packages) which do the archival job.

Harsh Athalye
http://www.letsgeek.net/
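Harsh mentions SSIS packages; the same archival step can also be written in plain T-SQL and scheduled as a SQL Agent job. A minimal copy-then-delete sketch (assuming SQL Server 2008+ for the inline DECLARE initializer; table and column names are hypothetical):

```sql
-- Hypothetical tables: dbo.BigTable (live) and dbo.BigTable_history.
-- Rows older than the cutoff are copied, then deleted, inside one
-- transaction so a mid-batch failure cannot lose rows between the steps.
DECLARE @Cutoff datetime = DATEADD(MONTH, -1, GETDATE());

BEGIN TRANSACTION;

INSERT INTO dbo.BigTable_history
SELECT *
FROM dbo.BigTable
WHERE RecordDate < @Cutoff;

DELETE FROM dbo.BigTable
WHERE RecordDate < @Cutoff;

COMMIT TRANSACTION;
```

With over a meg of binary data per row, a single large transaction like this will generate heavy transaction-log activity, so in practice you would likely loop in smaller batches (e.g. DELETE TOP (1000) per iteration) to keep log growth in check.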
 |
|
FUNCTOR
Starting Member
5 Posts |
Posted - 2010-03-31 : 13:39:13
|
So you think the slow performance would go away by moving those records to a history table within the same DB?
 |
|
tkizer
Almighty SQL Goddess
38200 Posts |
|
Kristen
Test
22859 Posts |
Posted - 2010-04-01 : 03:20:02
|
One point to consider IF you decide to use a separate database for history: how do you keep the Live and History databases in sync? If you have to restore the Live database, how will you restore History to the exact same point in time?

You can do this with recovery from TLogs, but it makes any recovery operation more complex. You cannot just restore the FULL backup, because the full backup for each database will not be taken at exactly the same time. If your copy to the History database happens at, say, 1 AM and you then take the full backup at 2 AM, you are probably OK, provided the History copy does not, one day, take more than an hour ... So it just increases the complexity.

Also (even if you do it all in one database),

INSERT INTO HistoryTable SELECT * FROM LiveTable WHERE SomeDate < @TwoMonthsAgo
DELETE FROM LiveTable WHERE SomeDate < @TwoMonthsAgo

is going to create massive logging records, which may put strain on your system (disk space, backup size, backup system ...).

As others have said: I would just rename the "live" table to "Table01" and create a new, empty, "Live" table once a month, and then have a horizontal-partition view across the various tables: "Live", "Table01", "Table02", ...
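Kristen's rename-and-view approach might look roughly like the sketch below (all names hypothetical). For the optimizer to prune unneeded tables from the view, each monthly table needs a CHECK constraint on the date column covering its date range.

```sql
-- Once a month: rename the current live table to the next archive name.
EXEC sp_rename 'dbo.LiveTable', 'Table02';

-- (Re)create an empty dbo.LiveTable with the identical schema here,
-- then rebuild the view over all the monthly tables.
CREATE VIEW dbo.AllData
AS
SELECT * FROM dbo.LiveTable
UNION ALL
SELECT * FROM dbo.Table01
UNION ALL
SELECT * FROM dbo.Table02;
```

The production line keeps writing to the small, freshly created dbo.LiveTable, while reporting queries go through dbo.AllData, and no bulk copy or delete of the binary rows ever happens.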
 |
|