Author |
Topic |
sqlmjkm
Starting Member
14 Posts |
Posted - 2011-09-27 : 08:07:10
|
Looking at one specific table in my database - Index Properties, Fragmentation, --- Where do I find some documentation on what are "good" values and what isn't. For example - If my Page Fullness is 99.67% is that bad? |
|
Kristen
Test
22859 Posts |
Posted - 2011-09-27 : 08:31:02
|
One thing: ignore very small tables. "Small" for us is less than 100 pages.

We then reindex up to some threshold (I've forgotten what that is, but it's what we found, empirically, will reindex OFFLINE without disrupting other users), and above that we REORGANISE to allow concurrent use.

With the Enterprise edition of SQL Server you can reindex ONLINE, but I think reorganise does as good a job and doesn't need as much log and data file space.

I think we trigger a reindex / reorganise if there is more than 40% fragmentation (but ignoring "small" tables - which often show 50% even after a reindex). |
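As a rough sketch of that decision for a single index (the table and index names here are placeholders, and the thresholds are the ones described above):

```sql
-- Illustrative only: dbo.YourTable / IX_YourIndex are placeholder names.
-- 40% is the fragmentation trigger mentioned above; 100 pages is the
-- "small table" cut-off, below which fragmentation is ignored.
DECLARE @frag  float,
        @pages bigint;

SELECT @frag  = s.avg_fragmentation_in_percent,
       @pages = s.page_count
FROM   sys.dm_db_index_physical_stats(
           DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL, 'LIMITED') AS s
WHERE  s.index_id = 1;  -- e.g. the clustered index

IF @pages >= 100 AND @frag > 40.0
    ALTER INDEX IX_YourIndex ON dbo.YourTable REORGANIZE;
```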
 |
|
sqlmjkm
Starting Member
14 Posts |
Posted - 2011-09-27 : 09:28:38
|
So I am taking it my table is considered huge? I have 3 indexes set up on my table.

(1) Clustered: Page Fullness 68%, Average Row Size 2787, Depth 4, Leaf-level rows 2,664,000, Max Row Size 2792, Min Row Size 2780, Pages 1,331,853

(2) Non-clustered: Page Fullness 99.67%, Average Row Size 28, Depth 3, Leaf-level rows 2,663,705, Max Row Size 28, Min Row Size 28, Pages 9,903

(3) Non-clustered: Page Fullness 99.93%, Average Row Size 27, Depth 3, Leaf-level rows 2,663,705, Max Row Size 27, Min Row Size 27, Pages 9,548 |
 |
|
Kristen
Test
22859 Posts |
Posted - 2011-09-27 : 10:45:04
|
" I am taking it my table is considered huge?"Yup - well ... its "big" in this context |
 |
|
sqlmjkm
Starting Member
14 Posts |
Posted - 2011-09-27 : 11:02:47
|
I'm having trouble with the search speed of this table. Are the stats I listed OK? |
 |
|
Kristen
Test
22859 Posts |
Posted - 2011-09-27 : 12:53:04
|
"I'm having trouble with the search speed of this table"Make sure the indexes are recently rebuilt, and the statistics updated. If that improves performance then make sure that "housekeeping" is done regularly.If not then you need to optimise your query. Maybe there is no suitable index for your query (in which case, with 2,000,000 rows it is likely to be slow). But there are other possible reasons for slow queries - maybe you have DISTINCT where it is not needed, or UNION where UNION ALL would do ... etc.If you have rebuilt indexes and updated statistics and its still slow post the SQL of your query (in a new thread) and folk here will help you optimise it.It would help if you post the DDL of the Table and Indexes too. |
 |
|
cmbergin
Starting Member
14 Posts |
Posted - 2011-09-27 : 16:11:10
|
Here's a variation on a procedure we use to automate index maintenance. The automated version reorganizes at 5% and reindexes at 30%. Reorganizing or reindexing can take a lot of log space, so be sure to take log backups before and after doing either on large indexes. As usual, "large" is relative.

This variant just lists the indexes that need a little work, along with a REORGANIZE command. It sticks with REORGANIZE since we don't have Enterprise edition, and I assume it's running during production and not during a maintenance window.

SELECT  x.name + '.' + t.name AS TableName,
        i.name AS IndexName,
        s.avg_fragmentation_in_percent,
        s.page_count,
        'ALTER INDEX [' + i.name + '] ON [' + x.name + '].[' + t.name
            + '] REORGANIZE; --' + CAST(s.page_count AS VARCHAR)
            + ' / ' + CAST(ROUND(s.avg_fragmentation_in_percent, 2) AS VARCHAR) AS [command]
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS s
        INNER JOIN sys.indexes i ON s.[object_id] = i.[object_id] AND s.index_id = i.index_id
        INNER JOIN sys.tables t ON i.[object_id] = t.[object_id]
        INNER JOIN sys.schemas x ON t.[schema_id] = x.[schema_id]
WHERE   s.avg_fragmentation_in_percent > 5.0
        AND s.index_depth > 0
        AND s.index_type_desc <> 'HEAP'
        AND s.alloc_unit_type_desc = 'IN_ROW_DATA'
        AND s.page_count > 1000 |
 |
|
cmbergin
Starting Member
14 Posts |
Posted - 2011-09-27 : 16:13:57
|
Oh, I forgot to mention: the first NULL parameter to dm_db_index_physical_stats can be used to limit the check to a single table - put OBJECT_ID('tablename') there to restrict the results to that table. |
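In other words, something like this ('dbo.YourTable' is a placeholder name):

```sql
-- Limit the fragmentation check to one table by passing its object id
-- as the second parameter instead of NULL:
SELECT  s.index_id,
        s.avg_fragmentation_in_percent,
        s.page_count
FROM    sys.dm_db_index_physical_stats(
            DB_ID(), OBJECT_ID('dbo.YourTable'), NULL, NULL, 'LIMITED') AS s;
```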
 |
|
sqlmjkm
Starting Member
14 Posts |
Posted - 2011-09-28 : 14:04:28
|
OK - Now I am really confused and need help understanding this. Uggg. The indexes were rebuilt and my query was going really quick. It was great. Then I had to delete / append a ton of records. Now the query is back to going slow again. I know I am missing something. Thank you again! |
 |
|
cmbergin
Starting Member
14 Posts |
Posted - 2011-09-28 : 14:38:51
|
No, that's pretty normal. Index values are stored in order; if you're inserting a bunch of stuff in the middle of the index, you'll get a bunch of page splits, especially with the default 100% fill factor on the pages. If you run the fragmentation query again, you'll probably find your index is fragmented once more.

Index defragmenting (or rebuilding) is just a normal part of maintaining a database. |
 |
|
Cindyaz
Yak Posting Veteran
73 Posts |
Posted - 2011-09-30 : 11:20:19
|
quote: Originally posted by sqlmjkm If my Page Fullness is 99.67% is that bad?
It depends. If this table is heavily used for reading only, then it's good: SQL Server will have to fetch fewer pages to process your request. However, if there are lots of inserts/deletes/updates happening, then this number is not good - there will be lots of page splits. You can choose the fill factor for a particular table depending on what type of operation is going to take place on it. |
 |
|
Kristen
Test
22859 Posts |
Posted - 2011-09-30 : 12:19:54
|
Very rare that a fill factor of less than 100% wins - unless it's all inserts and no reads ... But even then, the likelihood is that inserts are clustered, so after a few inserts a page split becomes necessary anyway.

There is plenty of considered opinion amongst the regulars here that reducing the total number of pages, by using 100% fill, results in a faster system (because of fewer reads) than allowing some slack in the indexes to accommodate inserts without page splits. |
 |
|
sqlmjkm
Starting Member
14 Posts |
Posted - 2011-09-30 : 12:26:23
|
quote: Originally posted by Cindyaz
quote: Originally posted by sqlmjkm If my Page Fullness is 99.67% is that bad?
It depends. If this table is heavily used for reading only, then it's good: SQL Server will have to fetch fewer pages to process your request. However, if there are lots of inserts/deletes/updates happening, then this number is not good - there will be lots of page splits. You can choose the fill factor for a particular table depending on what type of operation is going to take place on it.
------ Yes, right now there are a lot of inserts/deletes/updates going on. Eventually it will be read-only. How do you choose the fill factor? |
 |
|
Kristen
Test
22859 Posts |
Posted - 2011-09-30 : 12:31:44
|
I suggest you leave it at 100% to start with. You can experiment with reducing it, but it's rare that that improves performance (contrary to what may seem intuitive) |
 |
|
sqlmjkm
Starting Member
14 Posts |
Posted - 2011-09-30 : 12:37:59
|
quote: Originally posted by Kristen Very rare that a fill factor of less than 100% wins - unless it's all inserts and no reads ... But even then, the likelihood is that inserts are clustered, so after a few inserts a page split becomes necessary anyway. There is plenty of considered opinion amongst the regulars here that reducing the total number of pages, by using 100% fill, results in a faster system (because of fewer reads) than allowing some slack in the indexes to accommodate inserts without page splits.
-------------- OK, I think I figured out what you are talking about. Uggg. Here is what I am going to try:

1. Pull up the table in SQL Server Management Studio
2. Indexes
3. Index Properties
4. Options
5. Check "Set Fill Factor" and make it 100%

Is that correct? (thank you) |
 |
|
cmbergin
Starting Member
14 Posts |
Posted - 2011-09-30 : 15:42:03
|
100% is the default fill factor. You only need to specify it if you're reducing it to mitigate page splits, which you should only do after determining that page splits are a problem for you. |
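For what it's worth, a fill factor can be set per index when rebuilding. The names below are placeholders, and 90 is just an illustrative value - only do this if you've confirmed page splits are hurting you:

```sql
-- Rebuild one index leaving 10% free space on each leaf page.
-- IX_YourIndex / dbo.YourTable are placeholder names.
ALTER INDEX IX_YourIndex ON dbo.YourTable
REBUILD WITH (FILLFACTOR = 90);
```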
 |
|
Kristen
Test
22859 Posts |
Posted - 2011-09-30 : 17:52:50
|
Just watch out for the scheduled maintenance if you use the Maintenance Wizard. There is an option in there to set "index free space" on ALL indexes to, say, 10%. If that is turned on, it will change all your indexes - including the monotonically incrementing ones like IDENTITY!!

(Sorry, I don't use the Maintenance Wizard, so I don't know the exact terminology for the relevant settings) |
 |
|
sqlmjkm
Starting Member
14 Posts |
Posted - 2011-10-01 : 08:31:03
|
OK - I've gotten a lot of advice on this. Thank you so much. I think ... the problem is solved. What was done:

1) Indexes were rebuilt, and additional ones added with the optimizer (stats didn't seem to help)

2) And ... I think this was my HUGE problem. I started getting this error message in SQL when testing even the most basic query:

"There is insufficient system memory in resource pool 'internal' to run this query."

and

Error Source: .Net SqlClient Data Provider
Error Message: There is insufficient memory available in the buffer pool

Our SQL admin did something and even my largest query started running fast again. So I think I am back on track. Is anyone familiar with this error? It doesn't sound like anything to do with my query - it sounds like a problem with the server.

Thanks again!! |
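One common thing an admin might look at in that situation is the instance's memory configuration. This is a general sketch, not necessarily what your admin did, and 4096 is only an illustrative value:

```sql
-- Show the current memory cap (requires 'show advanced options'):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';

-- Example only: cap SQL Server at 4 GB so the OS and other processes
-- keep some memory for themselves (pick a value for your own hardware):
-- EXEC sp_configure 'max server memory (MB)', 4096;
-- RECONFIGURE;
```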
 |
|
Cindyaz
Yak Posting Veteran
73 Posts |
Posted - 2011-10-01 : 13:41:36
|
See this link: [url]http://support.microsoft.com/kb/982854[/url] |
 |
|