 SQL Server 08 is slow


GregDDDD
Posting Yak Master

120 Posts

Posted - 2010-06-07 : 15:24:51
We recently moved from SQL Server 2000 to 2008. This is for a 3rd party application that we purchased. With SS 2000 the front-end was a compiled VC++ application and was very fast and responsive as it pulled and displayed data from the SS 2000 server.

This new version that we just upgraded to has an SS 2008 back-end, and the front-end is a web-based .NET application run in Internet Explorer 7. To say it is noticeably slower is an understatement. What happened in the blink of an eye on the old version now takes 4 to 7 seconds on the new version. Many of our users use this software all day long doing a lot of data entry and processing. It is quite frustrating.

When we migrated to SS 08 we archived a lot of data on our older databases, so we are working with even less data than before. Even when we report directly off the back-end, linking to views via ODBC, the system is slower, so it is not just the front-end that has taken a hit.

The company that we purchased the software from said they have made attempts to improve the response time, but I'm not sure what they have done. I know we split up the log and data files to different drives, but that didn't help much.

Can anyone think of some settings I could look at on the SS 08 server to improve the response time?

Thanks,

Greg

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2010-06-07 : 15:27:33
How often are statistics updated? Is auto-stats enabled on the database? Is READ_COMMITTED_SNAPSHOT isolation level in use? Are there missing indexes? Is there a DBA that you can work with to help answer these questions about the SS08 system?
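As a rough sketch (substitute your own database name), these two queries show the relevant database settings and the indexes the optimizer thinks are missing:

-- Database-level settings: auto-stats and snapshot isolation
SELECT name,
       is_auto_create_stats_on,
       is_auto_update_stats_on,
       is_read_committed_snapshot_on,
       snapshot_isolation_state_desc
FROM sys.databases
WHERE name = N'YourDatabase'

-- Indexes the optimizer has flagged as missing (SQL 2005+ DMVs, server-wide, reset on restart)
SELECT TOP (20)
       d.[statement] AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC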

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

GregDDDD
Posting Yak Master

120 Posts

Posted - 2010-06-07 : 19:49:54
First, my apologies. We are actually running SQL Server 2005. I recall now that when we purchased the new servers we downgraded to 2005. Should this post be moved to the proper forum?

Here are the settings for one of the databases. I'm not sure how to tell if the READ_COMMITTED_SNAPSHOT isolation level is in use. Is there a T-SQL command I can use to check this?



Greg


Kristen
Test

22859 Posts

Posted - 2010-06-08 : 03:16:23
It is unusual for Page Verify to be CHECKSUM (which is A Good Thing) after a SQL 2000 database is converted to SQL 2005.

So ... either this was done deliberately (which shows people know what they are doing), or the database was created afresh and the data then migrated from the old database (which means that most/all other "sensible" defaults will be in place).

Although I suppose we shouldn't rely on that!

It is important that all tables are rebuilt after migrating to a new version of SQL, among a few other things. See http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=138230 (it's about SQL 2008, but pretty much everything applies to SQL 2005 too).
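A rough sketch of the rebuild, run in each migrated database (sp_MSforeachtable is undocumented, so treat this as illustrative only):

-- Rebuild every index on every table, then refresh statistics and usage counters
EXEC sp_MSforeachtable 'ALTER INDEX ALL ON ? REBUILD'
EXEC sp_updatestats
DBCC UPDATEUSAGE (0)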

I would check that the compatibility level is 90 (SQL 2005) and not 80 (SQL 2000).
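For example ('YourDatabase' is a placeholder):

-- Check the level for every database
SELECT name, compatibility_level FROM sys.databases

-- Change it (SQL 2005 syntax)
EXEC sp_dbcmptlevel 'YourDatabase', 90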

I would also look at the CPU percentage usage on the server. If it is low (as I am expecting), then either the system is waiting on the disks, or the problem is elsewhere.
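A rough way to see what the server is spending its time waiting on (figures are cumulative since the last restart):

-- Top waits, ignoring a few benign background wait types
SELECT TOP (10) wait_type, wait_time_ms, waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('LAZYWRITER_SLEEP', 'SQLTRACE_BUFFER_FLUSH', 'SLEEP_TASK', 'BROKER_TASK_STOP', 'WAITFOR')
ORDER BY wait_time_ms DESC

Lots of PAGEIOLATCH_* waits near the top would point at the disks.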

Presumably SQL and ASP.NET are running on the same server? If so, SQL may be allocated too much memory (if not restrained it will take all available memory); if it is a dedicated machine for SQL, then max memory should probably be set to 2GB less than the total amount. However, if the system is still slow immediately after a reboot of the server, this is not the (immediate) problem.

My gut feeling is that a Web App is never going to perform like a C++ Windows App, but it's also possible that the queries in the new app perform very badly compared to the previous app, just by the way they have been written. If they are the same as before, and the database structure is the same (same indexes etc.), then the problem is unlikely to be with SQL.

Is SQL2005 on a new box? Better spec than the old box?

Is SQL2005 sharing the box with ASP.NET, or is the box dedicated to SQL?

GregDDDD
Posting Yak Master

120 Posts

Posted - 2010-06-08 : 17:51:45
Thanks for the feedback.

There are both an app server and a SQL server, and both are new and better spec'd than before. The databases were built new and then the data was migrated in. I will look at the link you supplied, along with the other suggestions.

After reading what you had to say, and thinking back to some of the issues we encountered just after we migrated, I'm wondering if some of our problems have to do with the way the data is structured and the new queries for the views. One of the things they did was to make a Common Database for codes and lookup tables that were common with all of the databases.

With the old version, each of the 12 databases was its own "Company" even though there was a lot of common data among them. Now, we still have the 12 databases but we also have a common database so we only need to update one database when new codes are added or changes are made. This is the big change to the back-end.
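For illustration (table and column names made up), a typical lookup now joins across the two databases, something like:

SELECT o.OrderID, o.OrderDate, c.CodeDescription
FROM dbo.Orders AS o
JOIN CommonDB.dbo.Codes AS c ON c.CodeID = o.StatusCodeID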

Some of these tables would probably make a few of you cringe. It is not uncommon to see tables with more than 300 fields in them.

Kristen
Test

22859 Posts

Posted - 2010-06-09 : 02:53:14
Having a Code Lookup table in another database (on the same server) should not make any appreciable difference to performance.

300 columns in a table might make me cringe, but the only time that is likely to cause a performance problem is if queries do "SELECT *" rather than explicitly specifying the columns required - because all columns will be retrieved from the database, they will push other stuff out of the cache, use bandwidth getting to the APP server, and then be ignored! This is particularly bad news if there are "big text" columns involved, and when additional columns are added in the future.
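A made-up example of the difference:

-- Drags all 300+ columns back, whether the screen needs them or not
SELECT * FROM dbo.Orders WHERE OrderID = 12345

-- Asks only for what will actually be displayed
SELECT OrderID, CustomerCode, OrderDate, StatusCodeID
FROM dbo.Orders
WHERE OrderID = 12345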

I suppose it is also bad news if the columns should be rows (database not normalised). But if you have 300 different attributes on a given object then it's not unreasonable.

(If you have columns that are infrequently used, but indexed, then it is definitely worth looking at Filtered Indexes in SQL 2008 - you can have an index on a column ONLY for rows where the column is NOT NULL - which saves a lot of index space for sparsely-populated columns.)
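A hypothetical example (SQL 2008 and later only):

-- Index only the rows that actually have a value in the sparse column
CREATE NONCLUSTERED INDEX IX_Orders_CancelDate
ON dbo.Orders (CancelDate)
WHERE CancelDate IS NOT NULL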

Given you have a dedicated SQL box, better hardware than before, newer version of SQL Server, then I think the problem is elsewhere.

Because you have different hardware / SQL version there may be some tuning issues, which may be new to your colleagues, so some wiggle-room ... (such as restricting memory to less than total RAM and so on). BUT ... those aren't going to cause an order-of-magnitude drop in performance compared to what you had before. Verification of that will be that the CPU on the SQL box is not heavily utilised, and nor is the disk queue long.

GregDDDD
Posting Yak Master

120 Posts

Posted - 2010-06-09 : 10:28:51
Thank you very much for the feedback. I am a programmer and not an administrator, so a lot of this tuning and tweaking of the back-end is new to me. Not only that, but we are small and must rely on the vendor for these sorts of things.

I'll take a look at the things you mentioned and do a backup/restore of one of the larger databases so I have a copy I can play with. There is something not right here. Even linking to tables on the new server via ODBC for reporting takes longer than on the old server.
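For the copy I'm planning something along these lines (names and paths are placeholders; the logical names in the MOVE clauses need to match what sp_helpfile reports):

BACKUP DATABASE Company01 TO DISK = N'D:\Backup\Company01.bak' WITH INIT

RESTORE DATABASE Company01_Test
FROM DISK = N'D:\Backup\Company01.bak'
WITH MOVE 'Company01_Data' TO N'D:\Data\Company01_Test.mdf',
     MOVE 'Company01_Log' TO N'D:\Data\Company01_Test.ldf'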

SwePeso
Patron Saint of Lost Yaks

30421 Posts

Posted - 2010-06-09 : 10:53:39
Have you set "MAX MEMORY" for the SQL Server? Leave at least 2 GB (preferable 3GB) for OS.



N 56°04'39.26"
E 12°55'05.63"

tkizer
Almighty SQL Goddess

38200 Posts

Posted - 2010-06-09 : 11:30:12
The recommendation for memory is actually to leave 1GB for the OS and other processes per CPU. We've got an 8 CPU box (2 quad-core sockets) with 48GB of memory, so we cap SQL at 40GB. On this system we also set max degree of parallelism to 4, per another recommendation. These recommendations come from Microsoft.
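As a sketch of those settings (40GB = 40960 MB):

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 40960
EXEC sp_configure 'max degree of parallelism', 4
RECONFIGURE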

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/

Subscribe to my blog

Kristen
Test

22859 Posts

Posted - 2010-06-09 : 12:36:36
I'm with Tara on this, but immediately after a reboot (or even just restarting the SQL service) SQL won't have got around to consuming all the memory, so as a quick rule-of-thumb test: if it is still slow after a restart then, whilst getting the memory settings right is important, it won't be the answer.

OTOH if it runs like greased lightning after a restart, and then slows down, then this is the first thing to fiddle with.