
 Server I/O Design - Reality vs Ideal


RobertKaucher
Posting Yak Master

169 Posts

Posted - 2010-03-10 : 09:02:19
I am probably going to replace my company's SQL Server this year and I am looking at various vendors: IBM, HP, Dell, etc. I work as the IT support (i.e. Net Admin, Sys Admin, DBA, Help Desk) for a medium-sized manufacturing firm. We have about 80 client PCs, of which about 60 probably use our ERP system (which uses SQL Server as its back end) on a regular basis.

My question is this. I know the ideal configuration for I/O on a SQL Server. But we cannot afford to spend $20,000 on a system that will have enough drives for the physical RAID configuration that would be ideal. For an environment like ours, where there would never be more than 80 users accessing our server on a regular basis, would it be acceptable to have a system drive (mirrored) and a single RAID 5 volume for all the DBs and log files? Would it be acceptable to have a single RAID 5 for the system, logs, and DBs provided backup was being done properly and tested on a regular basis?

russell
Pyro-ma-ni-yak

5072 Posts

Posted - 2010-03-10 : 13:46:35
depends on the i/o throughput. fire up perfmon and see what's going on now.

i have had smaller clients with exactly that setup (os on mirrored set and everything else on raid5)
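A complementary check, from inside SQL Server rather than Perfmon, is to look at cumulative read/write volume and stall time per database file; this is a sketch assuming SQL Server 2005 or later (on SQL 2000 the older fn_virtualfilestats function gives similar numbers). In Perfmon itself, PhysicalDisk: Avg. Disk sec/Read and Avg. Disk sec/Write are the usual latency counters to watch.

-- Cumulative I/O and stall time per database file since the last restart
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads, vfs.num_of_writes,
       vfs.io_stall_read_ms, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;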

RobertKaucher
Posting Yak Master

169 Posts

Posted - 2010-03-10 : 20:50:28
Thank you. I will do that.

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2010-03-11 : 00:38:32
Consider splitting your database into multiple data files if the database is more than 64GB in size. Even if the data files are on the same drive, you will get a performance improvement since you are striping the data across the files. SQL Server uses a proportional fill algorithm on those data files.
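As a rough sketch of what that splitting can look like (the database name, file paths, and sizes below are hypothetical), extra files are added with ALTER DATABASE and proportional fill then spreads new allocations across them:

-- Hypothetical: grow from one data file to four equally sized files so
-- proportional fill stripes new allocations evenly across them.
ALTER DATABASE ERP ADD FILE
    (NAME = ERP_Data2, FILENAME = N'D:\SQLData\ERP_Data2.ndf', SIZE = 16GB),
    (NAME = ERP_Data3, FILENAME = N'D:\SQLData\ERP_Data3.ndf', SIZE = 16GB),
    (NAME = ERP_Data4, FILENAME = N'D:\SQLData\ERP_Data4.ndf', SIZE = 16GB);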

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

"Let's begin with the premise that everything you've done up until this point is wrong."

RobertKaucher
Posting Yak Master

169 Posts

Posted - 2010-03-11 : 22:33:34
Thanks for the tip, Tara. No need to worry about that. The primary DB is no more than 9 GB. Like I said, we are a relatively small shop.

My primary concern is that the new system can scale as the company grows. But I don't think it would really become a significant issue until we reach close to 200 users, and I just don't see that happening within 4 years.

Lumbago
Norsk Yak Master

3271 Posts

Posted - 2010-03-12 : 03:03:06
I've read somewhere that you should have one database file for each CPU core on your server "regardless" of the database size and the number of volumes as this will provide better parallelism. A separate RAID 10 for each of them would probably be a stretch for most companies but the more the merrier, right? But in the end it all depends on the application...if the application and the database are tuned correctly it's amazing how much juice you can get out of a piece of hardware.

- Lumbago
If the facts don't fit the theory, change the facts. Albert Einstein

Kristen
Test

22859 Posts

Posted - 2010-03-12 : 05:17:34
I'd be interested in other people's views on this. The conundrum of "what's the minimum hardware I can get away with" is a common issue, of course. For very big databases hardware can, and should, be specified without stinting, but most databases are in the small-to-medium size range, and budget constraints will usually apply. I think some prudent choices can reduce the chance of catastrophe.

1) Put BAK files on a different device from the MDF / LDF; optimise the BAK drive for sequential access.

My theory is:

If the LDF becomes corrupt, first attempt a Final/Tail TLog backup - though this will most likely fail, since the LDF is the file that is corrupted.

If TLog backups are on a separate drive they should be usable. The Full backup can be retrieved from disk, or a copy (possibly older / much older) can be retrieved from tape. Then the Full backup + all TLog backups can be restored.

Make TLog backups every 15 minutes - then the maximum exposure from loss of the "tail log" is 15 minutes. With longer intervals than this you might as well not have the Full Recovery model at all!!

If the BAK files are on either the MDF or LDF drive then there is a chance that system failure could remove all possibility of using them.

If there is no budget for BAK files on separate media then I would put effort into an immediate copy-to-another-machine as soon as the BAK files are created, so that a single point of failure is less devastating.

2) For performance reasons put MDF and LDF on different drives (optimised for Random access / Sequential access respectively)

If the MDF is corrupted AND the LDF is on a different drive then maybe we can still get a Tail TLog backup from the LDF (which is itself undamaged, being on a different drive). Then we can restore the Full backup plus all TLogs, including the Tail backup, and have zero data loss (see the sketch at the end of this post).

It may also be possible to rescue the MDF file if it is undamaged (but LDF file is damaged). This is not something I would plan for, but it may give me an option in extremis.

3) Move TEMPDB to a different drive (does this require separate drives optimised for TEMPDB.MDF and .LDF ??)

4) At what point do multiple files come into play? And at what point do they need separate drives?
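
For (1) and (2), a sketch of the recovery path described above, assuming a database called ERP, backups on a separate drive, and SQL 2005-style syntax (all names and paths are examples only):

-- The MDF drive has failed but the LDF and the backup drive are intact.
-- 1. Take the tail-log backup even though the data file is gone.
BACKUP LOG ERP TO DISK = N'E:\Backup\ERP_tail.trn' WITH NO_TRUNCATE;
-- 2. Restore the last Full backup, then every TLog backup in sequence.
RESTORE DATABASE ERP FROM DISK = N'E:\Backup\ERP_full.bak' WITH NORECOVERY;
RESTORE LOG ERP FROM DISK = N'E:\Backup\ERP_log_0900.trn' WITH NORECOVERY;
RESTORE LOG ERP FROM DISK = N'E:\Backup\ERP_log_0915.trn' WITH NORECOVERY;
-- 3. Finish with the tail backup for zero data loss.
RESTORE LOG ERP FROM DISK = N'E:\Backup\ERP_tail.trn' WITH RECOVERY;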

RobertKaucher
Posting Yak Master

169 Posts

Posted - 2010-03-12 : 08:40:29
@Kristen -
1). This is done and has always been the case.
2). This is my primary concern in starting this thread. I believe very strongly that in our current situation, meaning the current number of people accessing the existing server, we would not have an issue with the MDF/LDF files being on the same physical volume.

*BUT* At what point will that no longer be the case? I expect that by 2011 we will have a 5% to 10% (or greater) increase in the number of people accessing the server, which is not very much. But if we are back to 2007 volumes by 2012 we could have 150 people who need access to the system, as each work center will need multiple PCs accessing the DB every minute to refresh data, and there will be an increase in office staff accessing the system as well. So when am I hitting the ceiling for a single RAID 5 for MDF/LDF? If I am not with this company in a year (heck, even if I am) I want to ensure the system that I buy and design is scalable enough to meet their needs. I suppose that just means I have to suck it up and ensure I get a system with 3 separate physical volumes?

3 and 4 are not really issues for me. This company will not experience that kind of growth (requiring MDF partitioning, etc.) in the life of this server, and if it did, such growth would justify a new purchase anyway, so I cannot factor that into this design.

Kristen
Test

22859 Posts

Posted - 2010-03-12 : 10:22:18
For (2) I think you need to look at peak activity currently.

A new server is, presumably, going to be faster anyway ...

Are you planning to upgrade the SQL version at the same time? That will most probably be faster like-for-like, and may allow you to make any slow-running queries significantly faster [using SQL-2008-specific features].

But I too would like to know how much difference going from MDF+LDF on the same drive to separate drives makes. I look after both types of configuration, but we've never done side-by-side comparisons.

Perhaps I should shift the DB from a busy site, with separate MDF/LDF drives, onto a single-drive shared server for a week and see what happens!

Michael Valentine Jones
Yak DBA Kernel (pronounced Colonel)

7020 Posts

Posted - 2010-03-12 : 10:48:35
A lot depends on the IO patterns of your server. Most SQL Server applications are very read-intensive compared to write IO. If that is the case, you would be better off spending more money on RAM than on high-performance disk.

Since your DB is only 9 GB, putting 16 GB of RAM on the server would allow SQL Server to cache the whole database in memory and give you some room for future growth. This would eliminate almost all read IO, and the only thing your disk drives would have to do is keep up with the write workload.
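
One way to sanity-check that, assuming SQL Server 2005 or later, is to look at how much of each database is actually sitting in the buffer pool (each cached page is 8 KB):

-- How much of each database is currently cached in the buffer pool
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;

If the 9 GB database shows up nearly in full and read latencies stay low, the extra RAM is doing its job.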

If you find that you really need high-performance IO, you should look into one of the solid state IO options, like Fusion-io. They are much faster than mechanical disks, and for the small amount of data that you have, one would be cheaper than a $20,000 array and would outperform it. I just saw that Dell offers an 80 GB Fusion-io drive for $3,300 with a write bandwidth of 550 MB/s and a read bandwidth of 700 MB/s. For $6,600 in a mirrored configuration you would have IO bandwidth that you would find very hard to duplicate with a $20,000 disk array. They are especially good for random IO, because the average access time is around 50 microseconds, instead of the 5000 microseconds that you can expect with a mechanical disk.


See this link for a comparison of two solid state disk types and how they compare to mechanical disks:
Intel’s X25-E Versus The Fusion-io ioDrive
http://www.tomshardware.com/picturestory/493-x25-e-fusion-io-iodrive.html


CODO ERGO SUM

RobertKaucher
Posting Yak Master

169 Posts

Posted - 2010-03-12 : 11:12:28
There might be an insurrection among your users but I would be thankful for the information.

As for the upgrade: yes and no. The current system runs on SQL Server 2000, but the company has a license for 2005 so we will probably be going to 2005. However, I may be able to get buy-in from some of the VPs based on the BI capabilities of 2008.

Kristen
Test

22859 Posts

Posted - 2010-03-12 : 11:19:58
We went from SQL 2000 to SQL 2008. I would object strongly if I were asked (i.e. today) to go to SQL 2005 instead. You've got to do all that migration and regression testing anyway, so you might as well spend the DEV money once and go straight to SQL 2008, and then camp there for several years, rather than having to do the job twice in quick succession. And we can adopt SQL2008-only features now ...

We moved to new servers at the same time, and all our clients are very happy with the improvement in speed (we haven't rolled out a single SQL-2008-specific feature yet; all we did was move to SQL-2008 compatibility - so maybe it was all hardware!)
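
For reference, the compatibility switch is a one-line change per database once it is running on a 2008 instance (the database name here is just an example):

-- Run on the SQL 2008 instance after the database has been moved across
ALTER DATABASE ERP SET COMPATIBILITY_LEVEL = 100;  -- 100 = SQL Server 2008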

russell
Pyro-ma-ni-yak

5072 Posts

Posted - 2010-03-12 : 12:32:03
I think part of the answer to the "What's the minimum I can get away with?" question is another question: "How long do you want it to last, based on future growth?"

No one can answer that without taking measurements over a period of time, and making some educated guesses about future growth.

A company that suddenly experiences explosive growth is going to be scrambling to get its systems to keep up if they weren't designed to scale in the first place (and they so seldom are lol).

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2010-03-12 : 17:54:55
quote:
Originally posted by Lumbago

I've read somewhere that you should have one database file for each CPU core on your server "regardless" of the database size and the number of volumes as this will provide better parallelism.



I believe this is only for tempdb.
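
A sketch of what that looks like for tempdb, assuming a 4-core server and SQL Server 2005 or later (the drive letter and sizes are placeholders; tempdev is the default logical name of tempdb's first data file):

-- One equally sized tempdb data file per core so proportional fill stays balanced
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 2GB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = N'T:\tempdb2.ndf', SIZE = 2GB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = N'T:\tempdb3.ndf', SIZE = 2GB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = N'T:\tempdb4.ndf', SIZE = 2GB);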

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

"Let's begin with the premise that everything you've done up until this point is wrong."

jet1337
Starting Member

11 Posts

Posted - 2010-03-16 : 03:05:06
probably