Kristen
Test
22859 Posts |
Posted - 2010-11-17 : 15:24:30
|
I'd appreciate your thoughts on this upgrade plan please.

We have two servers, BOX-1 and BOX-2, which are running Citrix VM. Normally BOX-1 runs IIS and BOX-2 runs SQL. We can fail over so that a single box runs both VM processes.

We have been trying to get all 4 CPU sockets recognised by SQL, but it seems there is a limitation in Citrix VM that only 2 sockets can be seen (so we are seeing 8 CPU cores in the VM and SQL, instead of 16). We are also suffering bottlenecks on the disk controller in the SAN.

So we have decided to make an emergency upgrade (it would be nice if it wasn't an emergency, but the sites have become extremely busy, and we were seeing 30 second page rendering yesterday at peak - so we can't carry on like that).

BOX-01 and BOX-02 share a SAN and are configured (for the SQL VM) to have:

C: - Local drive for O/S
E: - Data
F: - Logs
G: - Backups

Plan is:

Fail over BOX-01 IIS web to BOX-02 - BOX-02 then running both SQL and IIS.
Rebuild BOX-01 to not have VM any more. Install Windows and SQL. (We will take out the original C: drives and put them to one side, so it is easy to back out if we need to.)

C: - Local drive for O/S
E: - Local drive for Data
F: - Logs (on SAN as before)
G: - Backups (on SAN as before)

(We would prefer to have Logs on a local drive too, but the server does not have enough drive slots, and we can't wait the 24 hours to get a new cabinet delivered.)

Full backup all databases on BOX-02.
Disable all jobs on BOX-02.
Disconnect all users.
Final differential backup all databases on BOX-02.
Stop SQL on BOX-02.
Rename all folders on the SAN so that they are not in their expected place / path (to prevent accidents).
Stop SQL on BOX-01.
Copy BOX-01 MASTER and MSDB physical files - so we can put them back if the next step fails.
Create the original folder names on E:, F: and G: (but they will be empty).
Copy the physical MASTER and MSDB files from BOX-02 (renamed folder) to the folders used at install for BOX-01.

(I'm a bit vague on this; it may not be possible to use the same path as the F: is shared, and that would cause a conflict between the SQL MASTER DB on BOX-02 and the new install on BOX-01 - if that is the case we will have to do RESTORE instead of COPY, using new locations - see the sketch after this post.)

I am hopeful that this copy / restore will work - i.e. that the environment will be seen to be "the same as before". If not I will have to reinstate MASTER / MSDB and set them up manually (all logins, jobs, linked servers, etc.).

If the COPY is successful all user databases will be SUSPECT at this point (files not found as the path was deliberately renamed).

DROP all user databases at this point (on BOX-01).
Rename all folders on the SAN back to their original names.
Restart SQL on BOX-02. (Users can now resume using the service, but it's still on the old box.)

BOX-01 now has MASTER and MSDB from BOX-02, so we have all logins, jobs, linked servers, etc.

Backup the Staging database from BOX-02 and restore to BOX-01 (data to the local E: drive, logs to a new, separate, path on the original SAN F: drive - and similar for setup of Backups on G:).

Perform some tests on the Staging database; if this is all OK then move the user databases from BOX-02 to BOX-01 and stop SQL on BOX-02. (We have a procedure for migrating a database between machines, so I'm comfortable with the steps for that.)

Thanks for your input and thoughts
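For the RESTORE-instead-of-COPY contingency, a rough sketch of what restoring the system databases onto the rebuilt BOX-01 might look like - the paths and logical file names here are assumptions (check the real ones with RESTORE FILELISTONLY). Restoring master requires the instance to be started in single-user mode, and the instance stops itself when that restore completes:

-- Start the instance in single-user mode first, e.g.:  net start MSSQLSERVER /m
RESTORE DATABASE master
    FROM DISK = 'G:\Backups\BOX-02_master.bak'        -- hypothetical backup path
    WITH REPLACE;                                      -- instance shuts down automatically afterwards

-- Restart the service normally, then restore msdb, relocating the files:
RESTORE DATABASE msdb
    FROM DISK = 'G:\Backups\BOX-02_msdb.bak'           -- hypothetical backup path
    WITH MOVE 'MSDBData' TO 'E:\Data\MSDBData.mdf',    -- default logical names - verify with FILELISTONLY
         MOVE 'MSDBLog'  TO 'F:\Logs\MSDBLog.ldf',
         REPLACE;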
|
tkizer
Almighty SQL Goddess
38200 Posts |
|
Kristen
Test
22859 Posts |
Posted - 2010-11-17 : 15:41:45
|
We don't want to go live immediately on the new BOX-01 SQL as we have had no opportunity to test this solution, so we want to trial the Staging database first.

There are 4 databases which are around 30GB MDF and 10GB LDF each, so a copy from A-to-B is probably going to take several tens of minutes, and we can't be offline for that length of time - even at 4am.

So my migration will be along the lines of (see the sketch after this post):

Full backup on Source machine
Restore to Target machine WITH NORECOVERY
Disconnect users (e.g. holding page on the web site)
Differential backup on Source machine
Restore to Target machine WITH RECOVERY
Change website to reference SQL on Target machine

This generally gives us only a minute or two of downtime for the differential backup (if the DIFF backup is still rather large we restore that WITH NORECOVERY and then use a TLog backup to actually make the switch over).

I did wonder whether we could go live with the new BOX-01 SQL using the files from the SAN, exactly as before, then add a new file (on the new, local, storage) and use EMPTYFILE on the file on the SAN to actually move the data across from SAN to local storage - but that's a bit outside my comfort zone!
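In case it helps anyone following along, a minimal T-SQL sketch of that sequence (database name, paths and logical file names are made up; the Target restore needs WITH MOVE to land the files on the new drives):

-- On the Source machine (BOX-02):
BACKUP DATABASE MyUserDB TO DISK = 'G:\Backups\MyUserDB_full.bak' WITH INIT;

-- On the Target machine (BOX-01), leave it restoring so the DIFF can follow:
RESTORE DATABASE MyUserDB FROM DISK = 'G:\Backups\MyUserDB_full.bak'
    WITH NORECOVERY,
         MOVE 'MyUserDB_Data' TO 'E:\Data\MyUserDB.mdf',   -- hypothetical logical names
         MOVE 'MyUserDB_Log'  TO 'F:\Logs\MyUserDB.ldf';

-- Disconnect users, then on the Source:
BACKUP DATABASE MyUserDB TO DISK = 'G:\Backups\MyUserDB_diff.bak' WITH DIFFERENTIAL, INIT;

-- On the Target, bring it online:
RESTORE DATABASE MyUserDB FROM DISK = 'G:\Backups\MyUserDB_diff.bak' WITH RECOVERY;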
 |
|
tkizer
Almighty SQL Goddess
38200 Posts |
Posted - 2010-11-17 : 16:01:16
|
EMPTYFILE doesn't always empty the entire file. I had to do something similar last year, and EMPTYFILE left several hundred megabytes in the file. After messing with it for a while, it finally moved the rest of the stuff so that I could delete the file. I'd say it took several hours to complete.

To move from SAN to local disks, and since you are proposing RESTORE anyway, then WITH MOVE can be used to achieve your desired results.

But why do you want the logs on local disk? That seems a bit too risky for me!

Tara Kizer
Microsoft MVP for Windows Server System - SQL Server
http://weblogs.sqlteam.com/tarad/
Subscribe to my blog
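For reference, the EMPTYFILE route Kristen floated would look roughly like this (file and path names are illustrative only, and it only works for a secondary data file - the primary file can't be emptied and removed this way):

-- add a data file on the new local drive
ALTER DATABASE MyUserDB
    ADD FILE (NAME = 'MyUserDB_Data2', FILENAME = 'E:\Data\MyUserDB_2.ndf', SIZE = 30GB);

-- drain the SAN-resident file into the remaining files (can be very slow, as noted above)
DBCC SHRINKFILE ('MyUserDB_Data_OnSAN', EMPTYFILE);

-- once it is empty, drop it
ALTER DATABASE MyUserDB REMOVE FILE MyUserDB_Data_OnSAN;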
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-11-17 : 16:35:31
|
Actually we will still have the LDF on the SAN. Dunno why, but we are maxing out the controller in the SAN and its disks are the bottleneck. We gotta fix it first, and then come up with a longer term plan for better performance and D.R.

For most failures we are likely to get we will be able to take a tail-log backup - and restore including that for zero data loss (fingers crossed!). We will log ship from this configuration to give us some D.R. within a reasonable timeframe (warm standby).

If we cannot get a tail backup we will lose 15 minutes of data - which would be bad news for sure. We would have to manually increase the IDENTITY on all tables that have one, to prevent any accidental reuse of Order Numbers and so on (a sketch of the reseed is below), and we will have no idea what orders we took in those 15 minutes ...

I'm not uncomfortable with the decision, given that we have to do something, and it has to be today!

The basic spec of the machine seems fine to me. Not sure why we are not getting better throughput. The original setup was SQL2000, 4GB RAM, 2xCPU, 32Bit ... this is 4 Socket x 4 Core 64Bit, 32GB RAM, posh SAN disks, separate spindles for O/S, MDF, LDF, BAKs ...

We are maxing out at twice the CPU, Trans/second and Users (all three are double the previous max threshold) ... and I had expected it to do WAY more than that. I have felt the machine was sluggish from day one, but have not been able to put my finger on why. Dunno if it is the VM, or if the VM is just disguising the real problem.

Tomorrow I'll tell you if it's actually a fast machine, or not!
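One way to do that identity bump, per table (the table name and the safety margin are made up for illustration):

-- see where the identity currently is
SELECT IDENT_CURRENT('dbo.Orders');

-- reseed well past anything that could have been issued in the lost 15 minutes,
-- e.g. if the value above was around 500000:
DBCC CHECKIDENT ('dbo.Orders', RESEED, 510000);   -- hypothetical value: old max + margin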
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-11-18 : 03:41:03
|
Well it all went fairly badly, as might be expected!

Failover revealed that two CPUs were faulty (dunno if that was coincidence, or if the fact that the VM had not been using them had disguised the problem).

The F: and G: drives in the SAN, used by SQL and shared by BOX-01 and BOX-02, could not be reassigned as dedicated to BOX-01. That required a reformat, which gave us considerable trouble over where to put the files whilst swapping things around and formatting.

I moved the LDF files on BOX-02 from F: to E:. They were 50GB, but as we had all users thrown off at the time I shrunk them before copying - so they were only a few MBs. Then once they were on E: I re-grew them (optimised for VLFs; a sketch of the shrink / re-grow is below). That freed up the F: drive.

The only way to get the backup files from G: across to the new "G:", once formatted, was via a share onto a local drive, and then back onto G: after formatting. We have 7 days of TLog and DIFF backups online, but 4 weeks of weekly full backups ... 500GB of data across the LAN to a local drive, and then back out to the SAN drive - far from ideal ... In theory they are all on tape, but I didn't want to put that to the test ...
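For anyone doing the same shuffle, the shrink-then-regrow was roughly along these lines (database, logical file name and sizes are illustrative; growing in fixed steps rather than one huge jump keeps the number of virtual log files sensible):

USE MyUserDB;
DBCC SHRINKFILE ('MyUserDB_Log', 1);    -- shrink the log right down (target in MB) before copying the file

-- once the file is back on the new drive, re-grow it in steps:
ALTER DATABASE MyUserDB MODIFY FILE (NAME = 'MyUserDB_Log', SIZE = 8000MB);
ALTER DATABASE MyUserDB MODIFY FILE (NAME = 'MyUserDB_Log', SIZE = 16000MB);
-- ... and so on up to the target size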
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-11-18 : 03:43:37
|
Copying MASTER seemed to work, but ADMINISTRATOR can't log in using Windows authentication. I can log in with a SQL login.

Can I rename something? The server used to be DB-01. Now it is showing as BOX-01-DB-01. In Security I can see a user

DB-01\Administrator

Can I just rename it to BOX-01-DB-01\Administrator?

There was also an error in the Event log, "Decryption error". I moved all the files out of the DATA folder, and only copied back the MDF / LDF for the system databases from the old machine. I will copy the Security Certificate over and restart SQL to test that.
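On the rename question - since the new box's Administrator is a different Windows principal (different SID) from DB-01\Administrator, simply renaming the login probably won't be enough; the usual fix is to drop the stale login and create one for the new machine name. A sketch, assuming the account names above:

-- the login carried over from the old machine no longer matches a local account here
DROP LOGIN [DB-01\Administrator];

-- create a login for the equivalent account on the renamed machine
CREATE LOGIN [BOX-01-DB-01\Administrator] FROM WINDOWS;

-- re-grant whatever server roles it had, e.g.:
EXEC sp_addsrvrolemember 'BOX-01-DB-01\Administrator', 'sysadmin';

-- (ALTER LOGIN ... WITH NAME only applies when the *same* Windows account has been
--  renamed, i.e. same SID, so it is unlikely to help in this case)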
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-11-18 : 03:54:53
|
Neither the Security Certificate from the original server, nor the one from the new install, made any difference. Which one should I use?

I still get "An error occurred during decryption." during startup.
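A guess, but "An error occurred during decryption." at startup after moving master between machines usually points at the service master key (it's protected by the machine / service account, so the copied master can no longer decrypt it). If that's what is happening here, one route is to restore a service master key backup taken on the old server; otherwise regenerating it with FORCE means anything it protected (linked server passwords, credentials) has to be re-entered. A rough sketch, with hypothetical path and password:

-- On the OLD server (while it is still available):
BACKUP SERVICE MASTER KEY TO FILE = 'G:\Backups\BOX-02_smk.key'
    ENCRYPTION BY PASSWORD = 'a-strong-password';

-- On the NEW server:
RESTORE SERVICE MASTER KEY FROM FILE = 'G:\Backups\BOX-02_smk.key'
    DECRYPTION BY PASSWORD = 'a-strong-password';
-- add FORCE if the restore complains, accepting that anything it cannot decrypt is lost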
 |
|
tkizer
Almighty SQL Goddess
38200 Posts |
|
Kristen
Test
22859 Posts |
Posted - 2010-11-18 : 13:09:57
|
quote: Originally posted by tkizer Has the "rename SQL Server" procedure been followed? sp_add/dropserver...
No. Have you got a pointer to it please? (I remember that it needs to be done, but not how!)

SELECT SERVERPROPERTY('MachineName') gives the machine's new name.
SELECT @@SERVERNAME gives the old name.
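For the record, the rename procedure tkizer is referring to is along these lines (followed by a restart of the SQL Server service):

EXEC sp_dropserver 'DB-01';                 -- the old @@SERVERNAME
EXEC sp_addserver  'BOX-01-DB-01', 'local';

-- restart the SQL Server service, then check the two now agree:
SELECT @@SERVERNAME, SERVERPROPERTY('MachineName');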
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-11-18 : 13:11:05
|
Still having trouble with "An error occurred during decryption."

I got that trying to run a process via a linked server. I assumed it was my password (stored in the "link"). I dropped the Linked Server, but trying to recreate it I get the same message.
 |
|
tkizer
Almighty SQL Goddess
38200 Posts |
|
X002548
Not Just a Number
15586 Posts |
|
Kristen
Test
22859 Posts |
Posted - 2010-11-25 : 14:13:05
|
We've ditched the SAN. I don't know why, but its performance is just way below what we need.

We had two virtualised servers connected to it. Either could run SQL, IIS or both. It turned out Xen Server could only make two of the four CPUs available to the virtual machine, so SQL could only see 4 cores (I have no idea whether they were 1 core from each CPU, or all 4 from one; either way it was not enough to get us through the day when the site got busy).

So the machine was rebuilt to remove the virtualisation; as noted above, 2 CPUs failed (or were found to be faulty) in the process. HP supposedly came within the 4 hour call-out window and just left the processors somewhere rather than fitting them, so they then had to come back (another 4 hour call-out window) later to fit them. Limping along on one machine during all this time ...

Then we decided to move files off the SAN to a hastily fitted local drive. Just moving the files took a significant amount of time.

Dunno what we should have expected from the SAN, but if the site was running at "modest" load (i.e. WAY below peak) and we did a Cut and Paste of a 1GB file in Windows Explorer from one drive to another, SQL ground to a halt. It was as though the bandwidth for the file copy had priority. Even using ROBOCOPY with severe throttling did not completely solve that issue. Each time a backup ran there was a severe slowdown on other processes using the SAN.

I guess we didn't spend enough money on the box (although it was GBP 4,000 without any drives, so I don't think that is exactly cheap! But maybe no one uses SANs without spending GBP 10Ks and upwards?)

So then someone decided we would move the LDFs to local storage too, and get rid of the SAN altogether. By this point it is not providing much in the way of redundancy. There is no virtualised machine to quickly switch over to, and I'm not convinced that the LDF drive could be switched to a new machine - we had to reformat the SAN drives that were repurposed from being shared by two machines to being dedicated to one.

So we needed more drives than would fit in the local server's box. 24 hours to get a case extension. Something went wrong when they fitted the drives at 1am ... they were a couple of hours fiddling about - I was asleep ... We had a software upgrade rollout scheduled for 5am, so it was convenient to move the LDFs across to the new drives at the same time. Rollout to STAGING had been done and tested the night before, all ready to rock & roll ...

5am: put up the website holding page, do all the rollback backups etc. Cushtie ...

Then I spotted that a backup file was dated Jul-2009. Then realised that the clock was showing that date / time. Aborted the rollout plans and set about sorting out the mess. The rollout window was 30 minutes; sorting out this mess was not going to happen in that time.

Put the computer clock forwards (should have stopped to think first, but the maintenance window was short, and the whole team was tired ...). That caused the cleanup job (which had not run overnight, as it was not due yet according to the system time!) to run. That deleted the roll-back backup files that had been made earlier (no other copy of those as yet, of course), which created a broken chain for the D.R. recovery that we urgently needed to plug. A full backup is about 40 minutes on a good day ...

I suppose we should have some sort of startup test that Server Time is no earlier than the most recent transaction, or somesuch. Do you have that in your system? (A sketch of one possible check is below.)

All orders placed overnight were dated Jul-2009. The Archive Table of what anyone had done during those 3 or 4 hours had been purged ... (and the TLog backups of that data had gone, too, in the cleanup job) so there was no means of telling what data had changed, when, and how.

So ... we manually had to change all the Order Dates on things that had the wrong date and had an Order ID in the right range for "overnight". We did have real orders from July-2009 in the time interval in question, sadly, so we had to knit spaghetti to sort it out. Then we had to sort out all the downstream effects. Lots of other tables (registrations ...) had to be sorted out, and data sent to third party systems too - order fulfilment had rejected the EDI files as being too old (that saved another job), but then we had to work out which ones HAD got through, and which had not and needed resending.

This server is (supposedly) set up to synchronise with a Time Server - but I notice it's a minute wrong at the moment, so something is not working right.

So last night we rescheduled the upgrade rollout, and the move of the Log files to local storage.

I DETACHed the database and moved the LDF from the SAN to the new local drive. During the ATTACH, SQL couldn't see the LDF file for some reason (probably permissions, I don't know; my colleague reset the permissions on the drive for me, so I don't know what they were, nor what they should have been). Anyway, SQL didn't see the LDF file so it created a new one - in the folder the MDF was in - and changed the database from DBO-only to MULTI_USER, so things started connecting to it. (More details here: http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=153409)

Thus we had to take the DB offline and restore it instead. That's not a quick task - particularly as we needed a FULL plus a DIFF which was almost as big as the FULL, and half a dozen multi-GB TLog backups ... which left us very tight in the maintenance window. Got everything installed in time though - at least the rollout of our software I have some control over! I took the little one to school and when I came back the sites were live, so I guess the holds-water testing went OK!

Then we hit a problem with PayPal payments. Part of the upgrade was to add features that allow the customer to "Reserve in Store" and "Deliver to Store for collection". These make significant changes to the checkout process, so we had tightened up all the status codes and disallowed anything "unexpected". Watching the site logs we saw that some routes through PayPal were causing problems - causing the order confirmation email to fail and giving the customer an error page.

Customers then tried a number of things! Looking at the site logs, they either just exited the site (very trusting!), went to My Account and saw that their order was there ... or pressed BACK far enough to try again - which failed if they visited PayPal again (same transaction number). But worse was if they just went to the home page: their Basket had not cleared, so they then paid using a credit card instead. One customer, still not convinced his order had worked, called the call centre and placed a new order over the phone! - and then sent them an email asking if any of his other orders might have worked - I expect he had three order confirmations in his Inbox by then!

So we removed PayPal from the permitted payment methods, and spent the morning working through the logs seeing who had used PayPal, and encountered an error, and what else they had gone on to do. That enabled Customer Services to, hopefully, give each customer some reassuring follow-up and prevent anyone getting charged twice (or being sent two identical packages). We had run with the error in place for less than an hour, but even so we had "more than a few" PayPal orders to sort out.

We fixed the PayPal issue, but prudence has prevailed and that won't be put live until tomorrow morning. I was thinking of having a lie-in ...

Actually not, as it turns out. Apparently someone has decided that some Personalised Products cannot be sold from tomorrow onwards. Too close to Christmas to actually make them, and get the couriers to deliver them, I expect. We have a simple business rule in the DB - "If it is personalised, it's in stock". We've long campaigned for the Order Fulfilment system to have a VIEW that we can query that tells us the stock level for ALL products. But no ... we get the stock level for most products that way, but some others we have to "put back in stock" during data transfer.

So now that will change to 100% of products having a stock level in the Stock View we interrogate. Hooray! Much better. Good job you don't want it until first thing tomorrow - "Miracles I can do by tomorrow, the impossible takes a bit longer". But I don't trust them and their casual, testing-less approach to data management, so I will run a complete comparison of their New View against the actual, current, stock level data to make sure the New View is watertight. Not going to be an early night tonight then ... but I guess the lie-in tomorrow will be OK.
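On the "startup test that Server Time is no earlier than the most recent transaction" idea - a minimal sketch, assuming msdb's backup history is a good enough proxy for "most recent activity" (it could equally be the max date in an Orders or Archive table):

IF GETDATE() < (SELECT MAX(backup_finish_date) FROM msdb.dbo.backupset)
    RAISERROR('System clock is earlier than the most recent backup - check the server time before any jobs run.', 16, 1);

-- this could be wrapped in a stored procedure in master and flagged to run at startup, e.g.:
-- EXEC sp_procoption 'dbo.usp_CheckServerClock', 'startup', 'on';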
 |
|
dataguru1971
Master Smack Fu Yak Hacker
1464 Posts |
Posted - 2010-11-25 : 14:29:40
|
Sounds like lots of ibuprofen and no sleep, but interesting to see how all this played out for you. I read a lot of that twice, and it just kept sounding like "Kristen is underpaid".

Poor planning on your part does not constitute an emergency on my part.
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-11-25 : 14:32:22
|
"Kristen is underpaid".Hehehehe ... well that's the case of course, regardless of how much I get paid!I like the 6 P's:"Proper Planning Prevents Piss Poor Performance"in this outfit we leave off the 7th P - "Probably" |
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-11-26 : 14:50:58
|
OK, so more sense was seen today. Fixing up the Stock Level was ditched in favour of - Finally!, two months after I said we needed to reserve time - starting to tune the database.

The Client's SEO Consultant said that we had a caching problem on the images on the web server. He had done something like "Save as" in IE and got a local copy of the HTML (static HTML at that, natch!) and the images, and it all seemed Jolly Fine to him; therefore it was a caching problem on our web server. I don't think he knows Jack Shit about SEO either, but that's another story ...

Some other consultant dragged in to help did at least send a note putting the SEO Expert right on all points, and did ask us some sensible questions about how the web server was set up etc., but nothing we didn't already know and hadn't already done. And then he said "So what are you doing about the database?" ... so that indicated we had reached the limit of his knowledge ... Notwithstanding the well meaning help, we do know how to fix this, we just need enough time to do it ... two months ago would have been good ... oh! I said that already ...

So now I want to build something that will bring the Server Busy page up. I figure I need something like the traffic flow cameras we have on the main roads in the UK. They photograph the number plate and then see how long it takes to get to the next camera. The fastest car tells you what the best possible speed is, at that moment in time. We don't care about the occasional slow car, but if the traffic gets snarled up the fastest car will be slower than normal (and in the UK that enables you to get a "Journey time is ... TEN ... minutes longer than normal" - think "pathetic synthesised voice" and you've got it!)

I want to avoid logging every page view, or incrementing a counter, because that's just going to add to the burden on the server.

So my thinking is:

Have a single row in a table with Date/Time and Fastest Page Render time. We will do this for a specific page only - we have about 5 pages that are very heavily used, and one of these has worse performance than the others, so that's the one. When that gets to, say, 10 seconds to render then we need to activate Server Busy.

The routine that detects Page Render Time will decide if this is the page being monitored and will update the Fastest Page table/row:

UPDATE PageSpeedControl
SET LogDate = GetDate(), ElapsedTime = @MyElapsedTime
WHERE ElapsedTime > @MyElapsedTime

Then we want to know when it slows down. Let's say we want to check that every minute.

So ... if the LogDate is more than a minute ago there has been no page render faster than that, AND if the ElapsedTime for that page render (i.e. the one a minute ago) was more than 10 seconds, then the site is slow. Activate the Server Busy page. The Busy page will stay up until a page render is faster than the ElapsedTime and also less than 10 seconds.

If the ElapsedTime for that page render is less than 10 seconds then it was fast enough at that time. So how quick was the page we just rendered? If that was SLOWER than 10 seconds then we update the PageSpeedControl table. That sets a new LogDate and an ElapsedTime for other page renders to beat.

If our page render was slower than the one in PageSpeedControl, but more than a minute has elapsed, then we update PageSpeedControl regardless - that creates a new target for others to beat, and if, after another minute, there is no render time better than 10 seconds, then the Server Busy page will become activated.

I'll tootle off and see if I can write an elegant SQL statement for that which does minimal updates. (But your answers are also welcome)
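In case it's useful, a first rough cut of the above (table and column names as per the post, the 10 second and 1 minute thresholds as described - very much a sketch rather than the finished article). The busy decision is read before the row is refreshed, otherwise the update would hide the expired window:

DECLARE @MyElapsedTime int = 12;   -- render time (seconds) of the page just served, supplied by the web tier

-- 1) Should the holding page be up? (no render in the last minute beat the recorded
--    time, and that recorded time was over 10 seconds)
SELECT CASE WHEN ElapsedTime > 10
             AND LogDate < DATEADD(minute, -1, GETDATE())
            THEN 1 ELSE 0 END AS ShowServerBusy
FROM   PageSpeedControl;

-- 2) Record this render as the new "fastest car" if it beats the current one,
--    or start a new one-minute window if the old one has expired:
UPDATE PageSpeedControl
SET    LogDate     = GETDATE(),
       ElapsedTime = @MyElapsedTime
WHERE  ElapsedTime > @MyElapsedTime
   OR  LogDate < DATEADD(minute, -1, GETDATE());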
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-11-29 : 19:44:49
|
Some analysis revealed that the "Fastest Car on the Road" method wouldn't do for establishing whether the Server Busy page should be deployed. Looking at the duration of the sproc for one of the main "barometer" pages, it was clear that every few seconds at least one sproc call managed to run in just a few milliseconds - when all other executions were 5-10 seconds. Probably an identical repeat request. Either way, it needs proper recording of the total number of executions and the median or mean (or some fancier statistical analysis) to determine whether the site is busy. That means lots more updates to a "counter" row, or some report against the logging tables (which IME is not fast to do).

I had an idea for the "cheapest" way to do this - based on the Card Counting that mathematics students at MIT used to cheat at Vingt-et-un in Vegas:

Define the duration threshold - let's say 5 seconds - so a page render longer than 5 seconds is too slow.
Define how often the holding page will be reviewed - let's say every 30 seconds.
If the page render is less than 5 seconds, decrement the counter; otherwise increment it.
And if the Last Review Time is more than 30 seconds ago, update the Server Busy Flag: if the count is negative set Flag = 0, otherwise Flag = 1; and reset the Counter = 0.

(A sketch of this is at the end of this post.) But it needs an additional database row update for every single page view, which [when we are busy] is an additional strain.

We seem to have managed OK on the site this evening with a more string-and-gum approach: I got the Managing Director to sit at his screen with a glass, toggling the Server Busy Flag when the TCP connections approached 500, or the number of locks started to sky-rocket. We got more orders-per-hour than ever before using this method, despite the fact that we showed Server Busy for more than 40 minutes in the hour.

Web log analysis shows that spiders tried to pull 50,000 pages from the site - I don't know why we bother to let any of them in - we get a pitiful amount of useful traffic from them - except for Google.

So tomorrow we will devote time to building a better Server Busy page - it will save the M.D. having to drink another half a bottle of port!

Although it pains me to be doing this ... I curse whoever wrote the silly little paper-throwing animation when you copied a file in Windows XP (or was it earlier? Windows ME perhaps ... I forget). Why couldn't s/he have spent the time improving the algorithm for calculating the estimate of completion time instead? And thus I feel a fraud spending time making the Best Server Busy Page on the Planet(tm), rather than actually fixing the system to allow absolutely anyone who wants to to place an order.

The number of spider requests is outrageous, although mostly our own fault. Like so many things, the Client demanded we release a feature out of DEV before it had been through QA. Furthermore, like most such scenarios, because the feature was tightly bound to a whole raft of other changes within DEV it could not be decoupled without a full, formal, upgrade (involving a QA cycle, building a release package, and finally the deployment cycle to the client - cold comfort that we do actually know best practice and how it should be done). So a "bodged release" instead, which involved pretty much rewriting the code so that it was no longer closely coupled to other features. The side effect was unforeseen.

The purpose of the "feature" was to replace the old fashioned "?FOO=1&BAR=2" type URLs with "/the-best-widgets-in-the-world", because the SEO people persuaded the Client that this just Had To Be. Tosh! Interestingly, our corporate marketing CMS system still uses "?FOO=1&BAR=2" type URLs. Our business is selling web systems, but we are also able to look after clients' hardware so they can one-stop-shop with us - but it's not "core" for us. Anyway, we get Google top-10 ranking on a number of "Office server" type search phrases - with our supposedly-crappy "?FOO=1&BAR=2" URLs. Ho! Hum!

Sorry if you haven't nodded off yet: we implemented the new SEO-friendly URLs for the client. They still have some existing CMS data which has relative URL links to, say, "?XXX=8&YYY=9". In the good old days this would have been appended to the current page - DEFAULT.ASP - and we would have been fine. Now the URL becomes "/the-best-widgets-in-the-world?XXX=8&YYY=9" - which is a problem because "/the-best-widgets-in-the-world" is the same as "?FOO=1&BAR=2" - but what the user actually wants is "?XXX=8&YYY=9" instead.

No problem: we bodged the system so that if there was both an SEO URL and an old-style set of parameters we'd just use the old-style one. Google was supplied with a SiteMap XML file, so it happily digested, and indexed, all the new-fangled SEO URLs.

Trouble is, Google is now looking for "/the-best-widgets-in-the-world?XXX=8&YYY=9" in addition to "default.asp?XXX=8&YYY=9", and "?XXX=8&YYY=9" is appearing on the end of thousands of such SEO URLs.

There's an issue within the system where we handle the old-style URL. We just display the content for "?XXX=8&YYY=9" (we don't do a redirect), so it doesn't matter that there is some random SEO URL on the front, but the trouble is that Google thinks that's a valid composite URL (Google is smart enough to decide on a canonical URL to show in searches, but it still has to spider ALL the variants before it can determine that). Google keeps coming back asking for the composite URLs it has remembered from previous visits. So for every real page we have Google asking for it in Perm N-from-M different ways. We've long since removed all the routes where these composite URLs could be found within the site, but Google [and all its cousins] keep asking for the composite URLs, and we keep giving them content for them. Sooner or later Google will find that there are no links within the site that match the URL, and will throw it away - but IME that's probably 3 months, and the spidering activity is killing us. I expect we'll deploy a URL Rewriter solution to provide a Permanently Moved response code that will cheer up the search engines ...

... but in the meantime we will devote time to making the Best Server Busy Page on the Planet!

So here's the thinking:

Currently if you have an active session you are allowed through the Server Busy page. Everyone else has to wait.

We don't really want spiders on the site when we are busy. We would like loyal customers to have priority, plus anyone who is likely to actually place an order.

If you don't process Javascript, or you won't store a Cookie, then you are banned! (But at 2am the Server Busy page won't be needed, so spiders will get their chance.)

If you happen to have a Persistent Cookie from a previous visit then you are a loyal customer and you have priority.

Otherwise you are in a queue. If you are patient enough to sit and wait you can get in; if you were just following a speculative link then you are less likely to place an order, and probably unlikely to wait.

That leaves the PPC traffic - 90% of which is junk ... but ... it might well be that 100% of it is paid for ... denying that is a more tricky conundrum. I expect I will have to bend over and do the client's bidding on that one.

So what's the plan? "You are held in a queue, current wait time is 30 seconds" is the new Server Busy page! Let's hope this doesn't get picked up by the Daily papers or everyone will rock up just to see it!
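And the card-counting version sketched out (a single-row control table with RenderCounter, ServerBusy and LastReviewTime columns is assumed - the table name is made up - with the 5 second and 30 second thresholds from above):

DECLARE @ElapsedSeconds int = 7;   -- render time (seconds) of the page just served

-- every page view: vote fast (-1) or slow (+1)
UPDATE ServerBusyControl
SET    RenderCounter = RenderCounter + CASE WHEN @ElapsedSeconds < 5 THEN -1 ELSE 1 END;

-- every page view (or on a schedule): if the 30 second review period is up,
-- set the flag from the vote and start a new count
UPDATE ServerBusyControl
SET    ServerBusy     = CASE WHEN RenderCounter < 0 THEN 0 ELSE 1 END,
       RenderCounter  = 0,
       LastReviewTime = GETDATE()
WHERE  LastReviewTime < DATEADD(second, -30, GETDATE());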
 |
|
dataguru1971
Master Smack Fu Yak Hacker
1464 Posts |
Posted - 2010-11-29 : 20:08:46
|
My previous analysis stands. Kristen is underpaid :)

Poor planning on your part does not constitute an emergency on my part.
 |
|
Kristen
Test
22859 Posts |
Posted - 2010-11-29 : 20:17:40
|
You are now promoted to be my boss. Let's see if the analysis changes?!
 |
|
dataguru1971
Master Smack Fu Yak Hacker
1464 Posts |
Posted - 2010-11-29 : 20:21:57
|
I am perfect for management of you. I don't understand the problem* and can't help solve it**

* Not true
** Probably true

Poor planning on your part does not constitute an emergency on my part.
 |
|