
Execute SQL tasks in parallel


vijayisonly
Master Smack Fu Yak Hacker

1836 Posts

Posted - 2009-02-19 : 09:48:02
I have 6 Execute SQL Tasks (6 SPs) that run and populate 6 different staging tables.
I have scheduled them sequentially right now. All of them operate against the same tables (no updates, only SELECTs with NOLOCK).
The first SP takes about 5 mins to execute and the rest about 2 mins each, so the total execution time is around 15-20 mins on average.

If I schedule them to run in parallel in SSIS, is there a chance of bringing the execution time down to maybe 8-10 mins? Or is this a bad idea since they operate against the same tables?

yosiasz
Master Smack Fu Yak Hacker

1635 Posts

Posted - 2009-02-19 : 09:51:32
Why are they taking that long in the first place? Maybe your problem is your sprocs; you might be trying to solve the wrong problem.

Have you checked your procs using the execution plan? Maybe you are missing indices, etc.
Recently I found out that the problem with my procs was the replicated data (which is corporate's data we do not touch): it had absolutely no indices on the key field I was joining to. They added those indices and turbo performance! Literally, sprocs that were taking 3-4 minutes now return in seconds.
2 cents
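A quick way to check for missing indices, for anyone following along: SQL Server 2005's missing-index DMVs record suggestions the optimizer has made since the last restart. This is a generic diagnostic sketch, not specific to any schema in this thread:

```sql
-- Generic sketch: list missing-index suggestions recorded by the optimizer
-- (available in SQL Server 2005 and later; cleared on each server restart).
SELECT  d.statement AS [table],
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        s.user_seeks,
        s.avg_user_impact   -- estimated % improvement if the index existed
FROM    sys.dm_db_missing_index_details AS d
JOIN    sys.dm_db_missing_index_groups AS g
        ON g.index_handle = d.index_handle
JOIN    sys.dm_db_missing_index_group_stats AS s
        ON s.group_handle = g.index_group_handle
ORDER BY s.avg_user_impact DESC
```

Treat the output as hints to investigate, not indexes to create blindly.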

vijayisonly
Master Smack Fu Yak Hacker

1836 Posts

Posted - 2009-02-19 : 09:53:06
I have checked the execution plans, and most of them show Index Seeks except a couple of Scans which I can't avoid.

The parent table has about 2 million records, which is why it takes about 2 mins I guess (15-20 mins overall for all SPs).

vijayisonly
Master Smack Fu Yak Hacker

1836 Posts

Posted - 2009-02-19 : 09:54:19
Also, I'm executing them against a Production Mirror DB, where performance is not as great as Production, so I expect this to run a little faster in Prod.

yosiasz
Master Smack Fu Yak Hacker

1635 Posts

Posted - 2009-02-19 : 09:59:52
Can you show us your sprocs? What is actually happening in your procs?
Do you need all of the 2 million records? Can you get a subset of that parent table? Maybe there are records in there you do not need, such as anything more than 6 months back; then create an indexed view. Also see if you are using table variables vs. temp tables; if you are using temp tables, could you add an index to them, etc.
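The temp-table idea could be sketched like this (every table and column name here is made up purely for illustration):

```sql
-- Illustrative sketch only: stage a recent subset of the parent table in a
-- temp table, index the join key, and run the heavy joins against that.
SELECT key_col, trans_amount, added_date
INTO   #daily_subset
FROM   dbo.ParentTable
WHERE  added_date >= DATEADD(month, -6, GETDATE())  -- drop rows you do not need

CREATE INDEX IX_daily_subset_key ON #daily_subset (key_col)

-- subsequent procs then join against #daily_subset instead of the full table
```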

vijayisonly
Master Smack Fu Yak Hacker

1836 Posts

Posted - 2009-02-19 : 10:06:57
The parent table has about 50 million records and growing. There is an initial package which creates a Daily table containing the transactions of that particular day. On some days, this can be about 1 or 2 million. All the SPs thereafter use this Daily table as the parent table. So what I'm saying is, all the records in the daily table are needed for processing.
Here's an example of an SP.

INSERT INTO CMCHKEXT(
TRAN_ID,
COMPANY_CODE,
POLICY_NUMBER,
TRANSACTION_DATE,
TRANSACTION_TIME,
CHECK_AMOUNT,
CHECK_NUMBER ,
CHECK_DATE,
CHECK_PRINT_DATE,
INDIVIDUAL_NAME ,
NAME_ID ,
CHECK_SEQ,
TRANSACTION_CODE,
TRANSACTION_AMOUNT_STAGE ,
DEBIT_TRANS_CODE ,
DEBIT_ACCOUNT_CODE,
CREDIT_ACCOUNT_CODE,
LAST_PROCESSING_DATE ,
LAST_PROCESSING_TIME ,
CODER_ID ,
STATUS_IND ,
NEW_CHECK_NUMBER,
CHECK_MESSAGE ,
LAST_CHAR ,
CREDIT_ACCOUNT,
DEBIT_ACCOUNT,
PAYEE_RELA_CODE,
PAYEE_SEQUENCE
)

SELECT
'P',
A.COMPANY_CODE,
A.POLICY_NUMBER,
A.DATE_ADDED,
A.TIME_ADDED,
A.TRANS_AMOUNT,
'00000000000',
'00000000',
'00000000',
CASE ISNULL(PO_NAME.NAME_FORMAT_CODE,' ')
WHEN 'I' THEN CAST(PO_NAME.INDIVIDUAL_LAST AS CHAR(20)) + CAST(PO_NAME.INDIVIDUAL_FIRST AS CHAR (20)) + CAST (PO_NAME.INDIVIDUAL_MIDDLE AS CHAR (10))
WHEN 'B' THEN SUBSTRING(PO_NAME.NAME_BUSINESS,1,50)
ELSE ' '
END,
ISNULL(PO_NAME.NAME_ID,' '),
'0',--NEEDS ANALYSIS
CASE WHEN A.CREDIT_CODE >=250
THEN A.CREDIT_ACCOUNT
ELSE A.DEBIT_ACCOUNT
END,
A.TRANS_AMOUNT,
A.DEBIT_ACCOUNT,
A.DEBIT_ACCOUNT,
A.CREDIT_ACCOUNT,
CAST(CONVERT(VARCHAR(8),GETDATE(),112) AS CHAR(8)),
CAST(RTRIM(LTRIM(REPLACE(CONVERT(VARCHAR(11), GETDATE(), 114),':',''))) AS CHAR(8)),
A.CODER_ADDED,
'P',
'00000000000',
'000000000000000000000000000000000000000000',
'0',
A.CREDIT_ACCOUNT,
A.DEBIT_ACCOUNT,
A.PAYEE_RELA_CODE,
A.PAYEE_SEQUENCE
FROM CACTGDAILY AS A WITH (NOLOCK)
LEFT JOIN PPOLC AS D WITH (NOLOCK) ON D.POLC_KEY0 = CAST(A.COMPANY_CODE + CAST(A.POLICY_NUMBER AS CHAR(12)) AS BINARY(14))
LEFT JOIN PRELA_KEY4 AS PO_KEY WITH (NOLOCK) ON ((PO_KEY.RELA_KEY4 >= D.POLC_KEY0 + CAST('00PO00000000' AS BINARY(12))) AND PO_KEY.RELA_KEY4 <= D.POLC_KEY0 + CAST('00PO99999999' AS BINARY(12)))
LEFT JOIN PNAME AS PO_NAME WITH (NOLOCK) ON (PO_NAME.NAME_KEY0 = SUBSTRING(PO_KEY.RELA_KEY4,19,8))
LEFT JOIN (SELECT NALK_RELA_KEY, NALK_ADDRESS_ID, RANK1 FROM
(SELECT SUBSTRING(NALK_KEY0,34,8) AS NALK_ADDRESS_ID,
SUBSTRING(NALK_KEY0,1,8) AS NALK_RELA_KEY,
RANK() OVER (PARTITION BY NAME_ID ORDER BY EFFECTIVE_DATE DESC, ADDRESS_ID DESC) AS RANK1
FROM PNALK WITH (NOLOCK) WHERE ADDRESS_ID <> 0 AND EFFECTIVE_DATE <= CONVERT(char(8), GETDATE(), 112)) A
WHERE RANK1 = 1) NA
ON NALK_RELA_KEY = PO_NAME.NAME_KEY0
LEFT JOIN PADDR AS PO_ADDR WITH (NOLOCK) ON PO_ADDR.ADDR_KEY0 = NALK_ADDRESS_ID
WHERE A.CREDIT_ACCOUNT > ''
AND ((A.CREDIT_CODE >= 250 AND A.CREDIT_CODE < 259) OR (A.DEBIT_CODE >=250 AND A.DEBIT_CODE < 259))

--UPDATE THE NAME AND ADDRESS IF THERE IS A 'PE' RELATION BASED ON SEQUENCE
UPDATE CMCHKEXT
SET INDIVIDUAL_NAME =
CASE ISNULL(PE_NAME.NAME_FORMAT_CODE,' ')
WHEN 'I' THEN CAST(PE_NAME.INDIVIDUAL_LAST AS CHAR(20)) + CAST(PE_NAME.INDIVIDUAL_FIRST AS CHAR (20)) + CAST (PE_NAME.INDIVIDUAL_MIDDLE AS CHAR (10))
WHEN 'B' THEN SUBSTRING(PE_NAME.NAME_BUSINESS,1,50)
ELSE ' '
END,
NAME_ID = ISNULL(PE_NAME.NAME_ID,' ')

FROM CMCHKEXT A WITH (NOLOCK)
LEFT JOIN PPOLC AS D WITH (NOLOCK) ON D.POLC_KEY0 = CAST(A.COMPANY_CODE + CAST(A.POLICY_NUMBER AS CHAR(12)) AS BINARY(14))
LEFT JOIN PRELA_KEY4 AS KEY1 WITH (NOLOCK) ON ((KEY1.RELA_KEY4 >= D.POLC_KEY0 + CAST(('0'+SUBSTRING(A.PAYEE_SEQUENCE,1,1)+'PE00000000') AS BINARY(12))) AND KEY1.RELA_KEY4 <= D.POLC_KEY0 + CAST(('0'+SUBSTRING(A.PAYEE_SEQUENCE,1,1)+'PE99999999') AS BINARY(12)))
LEFT JOIN PNAME AS PE_NAME WITH (NOLOCK) ON PE_NAME.NAME_KEY0 = SUBSTRING(KEY1.RELA_KEY4,19,8) AND PE_NAME.NAME_ID <> 0
LEFT JOIN (SELECT NALK_RELA_KEY, NALK_ADDRESS_ID, RANK1 FROM
(SELECT SUBSTRING(NALK_KEY0,34,8) AS NALK_ADDRESS_ID,
SUBSTRING(NALK_KEY0,1,8) AS NALK_RELA_KEY,
RANK() OVER (PARTITION BY NAME_ID ORDER BY EFFECTIVE_DATE DESC, ADDRESS_ID DESC) AS RANK1
FROM PNALK WITH (NOLOCK) WHERE ADDRESS_ID <> 0 AND EFFECTIVE_DATE <= CONVERT(char(8), GETDATE(), 112)) A
WHERE RANK1 = 1) NA
ON NALK_RELA_KEY = PE_NAME.NAME_KEY0
LEFT JOIN PADDR AS PE_ADDR WITH (NOLOCK) ON PE_ADDR.ADDR_KEY0 = NALK_ADDRESS_ID
WHERE A.PAYEE_RELA_CODE = 'PE'

--VALUE TRANSACTION_AMOUNT
UPDATE CMCHKEXT SET TRANSACTION_AMOUNT = REPLACE( RIGHT(REPLICATE('0', 15) + CONVERT(VARCHAR, TRANSACTION_AMOUNT_STAGE), 16), '.', '')

----CONVERT TRANSACTION_AMOUNT WITHOUT DECIMALS
--UPDATE CMCHKEXT SET TRANSACTION_AMOUNT = SUBSTRING(TRANSACTION_AMOUNT,1,LEN(TRANSACTION_AMOUNT)-3) + SUBSTRING(TRANSACTION_AMOUNT,LEN(TRANSACTION_AMOUNT)-1,2)
--REPLICATE FOR ZEROS
UPDATE CMCHKEXT
SET NAME_ID = REPLICATE('0',8-LEN(NAME_ID)) + SUBSTRING(NAME_ID,1,LEN(NAME_ID)),
TRANSACTION_TIME = REPLICATE('0',8-LEN(TRANSACTION_TIME)) + SUBSTRING(TRANSACTION_TIME,1,LEN(TRANSACTION_TIME))

--UPDATE CHECK AMOUNT WITH TRANSACTION AMOUNT
UPDATE CMCHKEXT
SET CHECK_AMOUNT = TRANSACTION_AMOUNT

INSERT INTO CMCHKEXT(
TRAN_ID,
COMPANY_CODE,
POLICY_NUMBER,
TRANSACTION_DATE,
TRANSACTION_TIME,
CHECK_AMOUNT,
CHECK_NUMBER ,
CHECK_DATE,
CHECK_PRINT_DATE,
INDIVIDUAL_NAME ,
NAME_ID ,
CHECK_SEQ,
TRANSACTION_CODE,
TRANSACTION_AMOUNT ,
TRANSACTION_AMOUNT_STAGE,
DEBIT_TRANS_CODE ,
DEBIT_ACCOUNT_CODE,
CREDIT_ACCOUNT_CODE,
LAST_PROCESSING_DATE ,
LAST_PROCESSING_TIME ,
CODER_ID ,
STATUS_IND ,
NEW_CHECK_NUMBER,
CHECK_MESSAGE ,
LAST_CHAR ,
CREDIT_ACCOUNT,
DEBIT_ACCOUNT,
PAYEE_RELA_CODE,
PAYEE_SEQUENCE
)

SELECT
TRAN_ID,
COMPANY_CODE,
POLICY_NUMBER,
TRANSACTION_DATE,
TRANSACTION_TIME,
CHECK_AMOUNT,
CHECK_NUMBER ,
CHECK_DATE,
CHECK_PRINT_DATE,
INDIVIDUAL_NAME ,
NAME_ID ,
CHECK_SEQ,
DEBIT_ACCOUNT_CODE,
TRANSACTION_AMOUNT ,
TRANSACTION_AMOUNT_STAGE ,
DEBIT_TRANS_CODE ,
DEBIT_ACCOUNT_CODE,
CREDIT_ACCOUNT_CODE,
LAST_PROCESSING_DATE ,
LAST_PROCESSING_TIME ,
CODER_ID ,
STATUS_IND ,
NEW_CHECK_NUMBER,
CHECK_MESSAGE ,
LAST_CHAR ,
CREDIT_ACCOUNT,
DEBIT_ACCOUNT,
PAYEE_RELA_CODE,
PAYEE_SEQUENCE

FROM CMCHKEXT WHERE (CREDIT_ACCOUNT >= 250 AND CREDIT_ACCOUNT < 259) AND (DEBIT_ACCOUNT >= 250 AND DEBIT_ACCOUNT < 259)

yosiasz
Master Smack Fu Yak Hacker

1635 Posts

Posted - 2009-02-19 : 10:57:41
Just as I suspected! You've got some crazy joins going there, buddy! :) You either need to add those formatted join keys as indexes in a view or in the tables themselves, or create a temp table on the fly in your sproc.
This will cause you a major bottleneck >>>>> ((KEY1.RELA_KEY4 >= D.POLC_KEY0 + CAST(('0'+SUBSTRING(A.PAYEE_SEQUENCE,1,1)+'PE00000000') AS BINARY(12)))

Take these out and do as mentioned above. It is worth a try!
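One possible way to make that formatted key indexable, sketched using the column names from the posted proc (the persisted computed column is only a suggestion, untested against this schema):

```sql
-- Sketch: materialize the CAST/concatenation used in the join as a persisted
-- computed column, then index it so the join can seek instead of scan.
ALTER TABLE dbo.CACTGDAILY
    ADD POLC_JOIN_KEY AS
        CAST(COMPANY_CODE + CAST(POLICY_NUMBER AS CHAR(12)) AS BINARY(14))
    PERSISTED

CREATE INDEX IX_CACTGDAILY_POLC_JOIN_KEY ON dbo.CACTGDAILY (POLC_JOIN_KEY)

-- the join then becomes a plain equality on indexed columns:
-- LEFT JOIN PPOLC AS D ON D.POLC_KEY0 = A.POLC_JOIN_KEY
```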

yosiasz
Master Smack Fu Yak Hacker

1635 Posts

Posted - 2009-02-19 : 11:07:01
You say there is an initial package which creates a Daily table. Which one is it in this proc? Does it have indexes? Hold on, I see it: it is called CACTGDAILY, which has no indexes that I can see. So that is why you are doing that crazy formatted join. Better yet, create extra fields, name them the same as POLC_KEY0 in the PPOLC table, populate them with the formatting you have in the JOIN, then index those. Then you can do a join like this:
FROM CACTGDAILY AS A WITH (NOLOCK)
LEFT JOIN PPOLC AS D ON A.POLC_KEY0 = D.POLC_KEY0

2 cents
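The suggestion above might look something like this (a sketch only; the new column's name and type are assumptions based on the posted join):

```sql
-- Sketch: add a real key column to the daily table, populate it once when the
-- table is (re)built each day, then index it for the later joins.
ALTER TABLE dbo.CACTGDAILY ADD POLC_KEY0 BINARY(14) NULL

UPDATE dbo.CACTGDAILY
SET    POLC_KEY0 = CAST(COMPANY_CODE + CAST(POLICY_NUMBER AS CHAR(12)) AS BINARY(14))

CREATE INDEX IX_CACTGDAILY_POLC_KEY0 ON dbo.CACTGDAILY (POLC_KEY0)
```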

SQLforGirls
Starting Member

48 Posts

Posted - 2009-02-19 : 11:19:31
yosiasz is right. You definitely need to optimize to the bone first; it is well worth it. However, if you find yourself in a situation where you absolutely need to execute 5 procedures concurrently, you can do it by using a stored proc to create and execute 5 different jobs.

As you can see, this is not a task worth embarking on unless you absolutely have to. In my case, I have 6 procedures that aggregate data from about 30 tables, each having 10-300 million rows, down to 6 tables of 35 million rows, and ultimately down to 1 K-wide table. The whole process needs to finish in under 3 hours, and each step has been painstakingly optimized. I too have some common tables being accessed by all 6 procedures, so I was wary of whether or how much performance gain I would see. I have been using this scenario in nightly processing for about 2 years now, and periodically I test the timing of the consecutive run vs. the concurrent run. Most recently the 6 procs took 2.5 hours consecutively and about 1.1 hours using this method. So, depending of course on all the particulars of your setup, you might hope for a roughly 50% reduction in run time.


/***********************************

The idea here is that you create a job to execute your procedure. This proc that creates and
runs the job only takes a moment to run, then your calling script is free to execute this calling
proc again to create and run the second job, and so on.

So the calling script needs to create the polling table and make sure it's empty.
The job has steps to insert rows into the polling table, and update those rows when the job has completed.
The calling script then runs a routine to check the "polling table" to see when an end time
has been entered for each of the 5 jobs. Then the calling script knows the 5 jobs have
finished, and it can continue with other tasks (like notifying you that the procs are done).


if not exists (select 1 from sysobjects where type = 'u' and name = 'polling_table')
BEGIN
create table dbo.polling_table
(proc_name varchar(255),
start_date datetime,
finish_date datetime)
END


-----------------------------------------------------------
-- Begin polling to find out when all 5 jobs have finished
-----------------------------------------------------------

-- set up a counter variable, to see how long the overall polling is taking,
-- so we can set a threshhold and kick out if necessary

declare @pollstart datetime,
@pollcurrent datetime
set @pollstart = getdate()

declare @finished int
set @finished = 0

while @finished = 0
BEGIN --------------polling loop----------------

WAITFOR DELAY '000:00:05'

-- if the polling process exceeds timeout period, send an email and kickout
set @pollcurrent = getdate()
if (datediff(mi,@pollstart,@pollcurrent) > 20) -- if 20 minutes have passed
BEGIN
set @log_description = 'Process hasn''t finished in 20 minutes. abort.'

**[EXEC YOUR LOGGING ROUTINE TO WRITE @log_description TO LOG TABLE]**

truncate table polling_table

GOTO CleanUp
END

---------------check for polling
select @finished = 1
where exists (select 1 from polling_table where proc_name = '[FULL NAME OF PROC1]' and start_date is not null and finish_date is not null)
and exists (select 1 from polling_table where proc_name = '[FULL NAME OF PROC2]' and start_date is not null and finish_date is not null)
and exists (select 1 from polling_table where proc_name = '[FULL NAME OF PROC3]' and start_date is not null and finish_date is not null)
and exists (select 1 from polling_table where proc_name = '[FULL NAME OF PROC4]' and start_date is not null and finish_date is not null)
and exists (select 1 from polling_table where proc_name = '[FULL NAME OF PROC5]' and start_date is not null and finish_date is not null)
END

if @finished <> 1
print 'invalid @finished value: '+cast(@finished as varchar(10))

-- continue with final steps

if @finished = 1
BEGIN
truncate table polling_table
-- LOGGING
set @log_description = 'Finished creating tables. Proceed with next step.'

**[EXEC YOUR LOGGING ROUTINE TO WRITE @log_description TO LOG TABLE]**

END




****************************************************/

create proc dbo.run[JOBNAME]job (@tablenum varchar(10))

as

/*******************************************
creates and executes 1 job to generate 1 table
accepts parameter for table num = 'table1'... 'table10'

exec run[NAME]job 'table1'


@tablenum is the identifying suffix for each of your 5 procedures and 5 resulting tables.
This approach of having a single proc that creates and runs all 5 jobs only works if your
5 jobs/procs/result tables can have the same basic name with a numbered suffix. If your
situation does not allow that, you can make 5 procedures that create and execute 5 different jobs.
But that (and indeed this) might be too much volume of code to throw at your problem.


[JOBNAME] prefix for the jobname you will create. One job will be created for each of the 5 procedures to be run.
[STEPNAME] name of the main step of your job, the one that executes the stored proc
[POLLINGTABLENAME] name of the polling table you create in the outer script
[PROCPREFIX] common part of the name of all 5 procs you wish to run (PROCPREFIX+@tablenum variable should equal your proc name exactly)
[YOURSERVERNAME]
[YOUR LOGGING ROUTINE] up to you, if you have a common logging table where you log information about running scripts....
********************************************/

declare @result int,
@rowcount int,
@log_description varchar(100),
@jobname varchar(100),
@stepname varchar(100),
@comm1 varchar(500),
@comm2 varchar(500),
@comm3 varchar(500)

set @jobname = '[JOBNAME]'+@tablenum
set @stepname = '[JOBNAME][STEPNAME]' + @tablenum
set @comm1 = 'INSERT into [POLLINGTABLENAME] (proc_name, start_date) values (''[PROCPREFIX]' + @tablenum + ''', getdate())'
set @comm2 = 'exec [PROCPREFIX]' + @tablenum
set @comm3 = 'UPDATE [POLLINGTABLENAME] set finish_date = getdate() where proc_name = ''[PROCPREFIX]' + @tablenum+''''

---------------------------------------
-- create job
---------------------------------------
EXEC @result = msdb..sp_add_job @job_name = @jobname,
@enabled = 1,
@description = 'Runs [PROCPREFIX] stored procs. Is deleted automatically when complete.',
@owner_login_name = 'sa',
@delete_level = 3 -- 0=keep job, 3=delete the job when done

-- check results. Typically error code 1 comes from creating a job that already
-- exists, but there are probably other reasons too. Even so, if we can't create
-- the job we don't really want to continue...
if @result = 1
begin
set @rowcount = @@rowcount
set @log_description = 'Failure to create job: [PROCPREFIX]'+@tablenum

**[EXEC YOUR LOGGING ROUTINE TO WRITE @log_description TO LOG TABLE]**

return(1)
end

-----------------------------------------------------
-- add a server for the job (important that this be done)
-----------------------------------------------------
EXEC @result = msdb..sp_add_jobserver
@job_name= @jobname,
@server_name='[YOURSERVERNAME]'

if @result != 0
begin
set @rowcount = @@rowcount
set @log_description = 'Failure to create jobserver: [JOBNAME]'+@tablenum

**[EXEC YOUR LOGGING ROUTINE TO WRITE @log_description TO LOG TABLE]**

return(1)
end

-------------------------------------------------
-- add a step for writing to the polling table.
-------------------------------------------------
EXEC @result = msdb..sp_add_jobstep
@job_name = @jobname,
@step_name = 'logging: start',
@subsystem = 'TSQL',
@database_name = '[YOURDBNAME]',
@command = @comm1,
@retry_attempts = 0,
@retry_interval = 0,
@on_success_action = 3, -- on success, go to next step
@on_fail_action = 2, -- on fail, quit reporting failure
@output_file_name = 'C:\YOURoutput.txt',
@flags = 2 -- 2 = append

if @result != 0
begin
set @rowcount = @@rowcount
set @log_description = 'Failure to create jobstep: logging: start'

**[EXEC YOUR LOGGING ROUTINE TO WRITE @log_description TO LOG TABLE]**

return(1)

end

-------------------------------------------------
-- add a step for running the table1 procedure.
-------------------------------------------------
EXEC @result = msdb..sp_add_jobstep
@job_name = @jobname,
@step_name = @stepname,
@subsystem = 'TSQL',
@database_name = '[YOURDBNAME]',
@command = @comm2,
@retry_attempts = 0,
@retry_interval = 0,
@on_success_action = 3, -- on success, go to next step
@on_fail_action = 2, -- on fail, quit reporting failure
@output_file_name = 'C:\YOURoutput.txt',
@flags = 2 -- 2 = append

if @result != 0
begin

set @rowcount = @@rowcount
set @log_description = 'Failure to create jobstep: [STEPNAME]' + @tablenum

**[EXEC YOUR LOGGING ROUTINE TO WRITE @log_description TO LOG TABLE]**

return(1)
end

----------------------------------------------------
-- add a step for writing to the polling table.
----------------------------------------------------
EXEC @result = msdb..sp_add_jobstep @job_name = @jobname,
@step_name = 'logging: finished',
@subsystem = 'TSQL',
@database_name = '[YOURDBNAME]',
@command = @comm3,
@retry_attempts = 0,
@retry_interval = 0,
@on_success_action = 1, -- on success, quit reporting success
@on_fail_action = 2, -- on fail, quit reporting failure
@output_file_name = 'C:\YOURoutput.txt',
@flags = 2 -- 2 = append
if @result != 0
begin

set @rowcount = @@rowcount
set @log_description = 'Failure to create jobstep: logging: finished'

**[EXEC YOUR LOGGING ROUTINE TO WRITE @log_description TO LOG TABLE]**

return(1)
end

------------------------------------
-- run job
------------------------------------
EXEC @result = msdb..sp_start_job @job_name = @jobname, @server_name = '[YOURSERVERNAME]'
if @result != 0
begin

set @rowcount = @@rowcount
set @log_description = 'Failure to start job: [JOBNAME]'+ @tablenum

**[EXEC YOUR LOGGING ROUTINE TO WRITE @log_description TO LOG TABLE]**

return(-1)
end



--------------------------------------------

OK, sorry, I don't know how to do better formatting within these forums. Also, this may be like using a shotgun to kill a mosquito. But this is the first I've heard of someone else even possibly needing this kind of solution, and I am eager to share.

I hope you don't spend any time on this before taking yosiasz's important advice above.



vijayisonly
Master Smack Fu Yak Hacker

1836 Posts

Posted - 2009-02-19 : 11:30:11
Sure, I will look at creating indices on that table.

The table structure itself is screwed up, to say the least. I will try my best to avoid those SUBSTRINGs in the WHERE clause.

Thanks for your time and help, yosiasz and SQLforGirls.

vijayisonly
Master Smack Fu Yak Hacker

1836 Posts

Posted - 2009-02-19 : 11:52:49
SQLforGirls, this is a nice idea, having a polling table to check whether each of the SPs has completed. This never occurred to me.

I will definitely try this out and let you know if I was able to save some time.

Thanks again.

yosiasz
Master Smack Fu Yak Hacker

1635 Posts

Posted - 2009-02-19 : 13:08:28
Until you add those indexes for the items you are joining with those crazy SUBSTRINGs and all (even those sub-sub selects), anything else you do is just trying to put chewing gum on leaking pipes in a sinking submarine. Yes, it will hold for now, but you have to think long term. Indexes are your friends: easy to create and worth the time!!!

SQLforGirls
Starting Member

48 Posts

Posted - 2009-02-19 : 13:10:58
yosiasz, I agree completely.

vijayisonly
Master Smack Fu Yak Hacker

1836 Posts

Posted - 2009-02-19 : 13:46:51
yosiasz, I'm in the process of creating indices for the daily table. I totally believe this will reduce the time taken by about 50%.

I will post again with the performance improvements the indices have caused.

vijayisonly
Master Smack Fu Yak Hacker

1836 Posts

Posted - 2009-02-20 : 14:52:49
yosiasz, adding indices turned out to be brilliant. The SP which was taking about 2.5 mins now runs in less than a minute.

Just wanted to let you know. Thanks for the suggestion.

yosiasz
Master Smack Fu Yak Hacker

1635 Posts

Posted - 2009-02-20 : 19:19:47
Awesome! Glad to hear of your success, sir!! And if you make those sub-sub queries into views, you will be flying home with a smile.

yosiasz
Master Smack Fu Yak Hacker

1635 Posts

Posted - 2009-02-20 : 19:27:15
Like this subquery: make all these kinds of subqueries into views or temp tables with indexes. If you don't like the performance, you can get rid of them. Indexed views are even better, depending on your environment, and it makes your own code more readable too. 2 cents
CREATE VIEW NALK_RELA_KEY AS
SELECT SUBSTRING(NALK_KEY0,34,8) AS NALK_ADDRESS_ID,
SUBSTRING(NALK_KEY0,1,8) AS NALK_RELA_KEY,
RANK() OVER (PARTITION BY NAME_ID ORDER BY EFFECTIVE_DATE DESC, ADDRESS_ID DESC) AS RANK1
FROM PNALK WITH (NOLOCK)
WHERE ADDRESS_ID <> 0 AND EFFECTIVE_DATE <= CONVERT(char(8), GETDATE(), 112)

X002548
Not Just a Number

15586 Posts

Posted - 2009-02-20 : 20:47:33
just so much wrong here

get rid of nolock

you insert, then do an update to the same table?

everything is logged?

Is this a repeatable batch process?



Brett

8-)

vijayisonly
Master Smack Fu Yak Hacker

1836 Posts

Posted - 2009-02-23 : 10:04:08
Brett,
Yes, this is a repeatable batch process. The package runs every night to extract records for that particular day.

The table CMCHKEXT is a daily table that gets inserted with that day's records, plus whatever data transformations/conversions are needed. It is then extracted to a Flat File destination, which is used by another system.

Is there something wrong with inserting into the table and then doing an update on the same table?

yosiasz, I am now creating those views that you suggested.

vijayisonly
Master Smack Fu Yak Hacker

1836 Posts

Posted - 2009-02-23 : 11:12:24
yosiasz, I used this query to create an indexed view:

CREATE VIEW [dbo].[NALK_RELA_KEY] with schemabinding AS
(SELECT SUBSTRING(NALK_KEY0,34,8) AS NALK_ADDRESS_ID,
SUBSTRING(NALK_KEY0,1,8) AS NALK_RELA_KEY,
NAME_ID,
NALK_KEY0,
RANK() OVER (PARTITION BY NAME_ID ORDER BY EFFECTIVE_DATE DESC, ADDRESS_ID DESC) AS RANK1
FROM [dbo].[PNALK] WITH (NOLOCK)
WHERE ADDRESS_ID <> 0 AND EFFECTIVE_DATE <= CONVERT(char(8), GETDATE(), 112));

CREATE UNIQUE CLUSTERED INDEX NALK_RELA_KEY on [dbo].[NALK_RELA_KEY](NALK_KEY0)

I get the following error: cannot create index "because the view contains a table hint. Consider removing the hint."

Not sure what this table hint is... any help?

SQLforGirls
Starting Member

48 Posts

Posted - 2009-02-23 : 11:51:45
the hint is "with (nolock)"
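Removing the hint clears that particular error, but note that an indexed view also cannot contain ranking functions such as RANK() or nondeterministic expressions such as GETDATE(), so the CREATE INDEX would still fail on this view even with NOLOCK removed. A plain (non-indexed) view, with the hint dropped, would be:

```sql
-- The table hint removed; schemabinding dropped since this can only be a
-- plain view (RANK() and GETDATE() are not allowed in indexed views).
CREATE VIEW [dbo].[NALK_RELA_KEY] AS
SELECT SUBSTRING(NALK_KEY0,34,8) AS NALK_ADDRESS_ID,
       SUBSTRING(NALK_KEY0,1,8)  AS NALK_RELA_KEY,
       NAME_ID,
       NALK_KEY0,
       RANK() OVER (PARTITION BY NAME_ID
                    ORDER BY EFFECTIVE_DATE DESC, ADDRESS_ID DESC) AS RANK1
FROM   [dbo].[PNALK]
WHERE  ADDRESS_ID <> 0
  AND  EFFECTIVE_DATE <= CONVERT(char(8), GETDATE(), 112)
```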