Please start any new threads on our new site at https://forums.sqlteam.com. We've got lots of great SQL Server experts to answer whatever question you can come up with.

 Database Disk Space Rule of Thumb?


ostinoh
Yak Posting Veteran

66 Posts

Posted - 2010-03-17 : 16:23:05
Hello -

I'm in the process of creating a layout for new SQL Servers to replace our current ones, and I was hoping someone could give me a rule of thumb for the disk space needed for the databases, logs, backups, etc.

I currently have the old database sizes and need to plan for growth in the company.

Any help would be greatly appreciated.

Thank you,
Ostinoh

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2010-03-17 : 16:28:09
If you have the old sizes, then you know how much you need now. We can't help with the growth portion without more information. How much growth do you expect?

Native backups without compression will be about the same size as the used portion of your data file(s). We use compressed backups though. The LDF as a starting point might be around 25% of your data file(s). The size of your data file(s) depends on your data and indexes.
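If you want to sanity-check those ratios on an existing server, the standard catalog and backup-history views give the numbers directly. A sketch (run the first query in the database in question):

```sql
-- Data and log file sizes for the current database (size is in 8 KB pages)
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- Sizes of the most recent full backups, from msdb history
SELECT TOP (7) database_name, backup_finish_date,
       backup_size / 1048576.0 AS backup_size_mb
FROM msdb.dbo.backupset
WHERE type = 'D'            -- D = full database backup
ORDER BY backup_finish_date DESC;
```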

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

"Let's begin with the premise that everything you've done up until this point is wrong."

ostinoh
Yak Posting Veteran

66 Posts

Posted - 2010-03-17 : 19:54:42
True about the old sizes. These databases are our Time and Billing system, and everyone accesses them. We are now at 700 people, with a plan to grow to 2000 across 2010 and 2011.

My biggest DB is 10 GB, with 4 days' retention of transaction log backups taken every hour and one full backup of the databases taken at night.

My system admins are very sticky about disk space.

Can the info I supplied give an idea of what I should plan for?

Thanks

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2010-03-17 : 20:42:15
Are you saying 2000 people using the application or is 2000 representative of data? People accessing a database wouldn't change the growth projections.


ostinoh
Yak Posting Veteran

66 Posts

Posted - 2010-03-17 : 20:48:39
Good point. It would be 2000 people entering time and running reports. So is there a way I should approach the data side, for example finding out what tables I currently have? Is there a tool I can use to help with that? Sorry if I'm giving the wrong info; I was moved into the DBA position and am learning as I go.

Thank you

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2010-03-17 : 20:52:23
What matters for projecting disk space requirements are the row counts of your tables. Someone needs to tell the DBA what the projections are for the future. The DBA typically does not provide this. The DBA typically receives this information and then performs the calculations to determine disk space requirements.

Once you have the projected row counts, you then calculate max/avg row size by looking at the data types. Add those together to get table size. You then need to figure out your index sizes. Add all of those together to get your projected database size. Then ask for at least 5 times that space to account for tlog and 4 days of backups (check your current ratio).

Or maybe someone can tell you to expect double the size (or similar) in the next couple of years to make your life easy.
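As a sketch of that arithmetic (every number below is a made-up placeholder; substitute your own projections and check your current backup/tlog ratio before settling on the headroom factor):

```sql
DECLARE @projected_rows  bigint = 500000,  -- placeholder growth projection
        @avg_row_bytes   int    = 400,     -- from summing the data types
        @index_factor    float  = 1.5,     -- assume indexes add ~50%
        @headroom_factor float  = 5.0;     -- tlog + 4 days of backups

SELECT CAST(@projected_rows * @avg_row_bytes * @index_factor
            / 1048576.0 AS decimal(12,1))                    AS projected_db_mb,
       CAST(@projected_rows * @avg_row_bytes * @index_factor
            * @headroom_factor / 1048576.0 AS decimal(12,1)) AS disk_to_request_mb;
```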


ostinoh
Yak Posting Veteran

66 Posts

Posted - 2010-03-17 : 23:22:42
Thank you for the responses.

You stated "calculate max/avg row size by looking at the data types." Do I have to look at each table to get that information? Or is there a script or SP to run to gather all that info?

Thank you...

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2010-03-18 : 00:06:38
Each table needs to be looked at. I'm sure there are tools out there that can help you calculate the information, but I don't know of any.
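One built-in option (SQL 2005 and later) gets close: sys.dm_db_index_physical_stats reports the average record size when run in SAMPLED or DETAILED mode. A sketch; the table name is just an example:

```sql
-- Average row size per index for one table (SAMPLED or DETAILED mode required)
SELECT OBJECT_NAME(ps.object_id)  AS table_name,
       ps.index_id,
       ps.record_count,
       ps.avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Expense'),
                                    NULL, NULL, 'SAMPLED') AS ps;
```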


ostinoh
Yak Posting Veteran

66 Posts

Posted - 2010-03-18 : 00:46:02
I was able to query the following

Name     Rows   Reserved   Data      Index_Size  Unused
Expense  65875  158552 KB  53720 KB  64000 KB    40832 KB

Does this data help with the calculations needed? Of course, I do not have the projections yet.
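(Those columns match what sp_spaceused returns; to pull the same figures for one table, or for every user table at once — sp_msforeachtable is undocumented but long-standing:)

```sql
-- Space used by a single table
EXEC sp_spaceused 'Expense';

-- Same figures for all user tables (undocumented procedure)
EXEC sp_msforeachtable 'EXEC sp_spaceused ''?''';
```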

Thanks

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2010-03-18 : 00:48:51
That just provides the current information. It doesn't provide any projections.


Kristen
Test

22859 Posts

Posted - 2010-03-18 : 03:22:53
quote:
Originally posted by ostinoh
... trans log backups taken every hour


Taking TLog backups every hour may mean that any maintenance (index rebuild etc) is responsible for most of the size of the LDF file - i.e. normal daily use may only be a small fraction of that.

We escalate TLog backup to every 2 minutes during overnight maintenance to keep the LDF filesize down, and do backups every 15 minutes the rest of the time.

There's no reason not to take them very regularly: the total volume of transactions backed up is the same, plus a little "overhead" per file, and a bit more faff if you ever have to restore (4x as many TLog files to script for the restore).

But on the upside you reduce maximum data loss to 15 minutes of data ...

How big is your LDF relative to your MDF? If the LDF is more than 120% of the MDF, it's worth checking which transactions are bloating it - it may be that large because of a single, one-off, huge transaction (e.g. a mass deletion that won't be repeated) and could be reduced in size as a one-off job.
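A quick way to check the ratio and how much of the log is actually in use (the file name in the shrink example is hypothetical, and a shrink should stay a one-off job, never a schedule):

```sql
-- File sizes for the current database (size is in 8 KB pages)
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM sys.database_files;

-- Log size and percent-used for every database on the instance
DBCC SQLPERF(LOGSPACE);

-- One-off shrink after an unrepresentative huge transaction
-- DBCC SHRINKFILE (MyDb_log, 1024);  -- target size in MB
```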
quote:

and 1 full backup of the DB's taken at night.


If you get tight for disk space there are a couple of things you can do.

You could take FULL backup less frequently (once a week say) and differentials each other night. The differentials quite quickly become "large", but they are unlikely to get to 50% of the size of a full backup.
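Such a schedule would look something like this (database name and paths are placeholders; note that WITH INIT overwrites the target file, so in practice you would generate dated file names):

```sql
-- Sunday night: weekly FULL backup
BACKUP DATABASE MyDb TO DISK = N'X:\Backups\MyDb_full.bak' WITH INIT;

-- Other nights: DIFFERENTIAL since the last full
BACKUP DATABASE MyDb TO DISK = N'X:\Backups\MyDb_diff.bak' WITH DIFFERENTIAL, INIT;

-- Hourly transaction log backups continue as before
BACKUP LOG MyDb TO DISK = N'X:\Backups\MyDb_log.trn' WITH INIT;
```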

You could also store them in a compressed folder (not recommended by Microsoft, but I've done it for years and never had a problem). If creating the backup directly in a compressed folder bothers you, you can change the compression attribute of the backup file after it has been created.

SQL 2008 has compression built in, plus sparse column support - useful if you have columns that are rarely populated. It also has filtered indexes - if you have indexes on columns with, say, a lot of NULLs, you can leave the NULLs out, saving a lot of index space.
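A filtered index looks like this (table and column are hypothetical) - the index only stores rows matching the WHERE clause, so NULL rows take no index space at all:

```sql
-- SQL 2008+: index only the rows where the column has a value
CREATE NONCLUSTERED INDEX IX_Expense_ApprovedDate
    ON dbo.Expense (ApprovedDate)
    WHERE ApprovedDate IS NOT NULL;
```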

There is a table size script here:
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=61762

which will tell you numbers of rows and space used for table / indexes. You may find that helpful when extrapolating your projection.
quote:

My system admins are very sticky about disk space


They should be doing your bidding, not the other way around!

Nowadays we seem to have everything virtualised. I can get IT to provide me with twice the disk space and another couple of CPUs within a couple of hours if I needed it for some emergency, and I can give them back tomorrow when the emergency has passed ...
A slightly facetious example, but it would definitely be possible if I needed it.