mvandoni
Starting Member
12 Posts
Posted - 2012-02-14 : 06:24:36
Hi all,

I'm trying to use multiple INSERT statements to load data from a generic txt file into a generic table. The SQL statement is dynamically built from some config data and looks something like:

INSERT INTO [ita-sqlserver1].[Auditel].[dbo].[_TexFiles] ([cod],[Indice],[N],[Prog],[QST],[verba])
VALUES ('001507001','110000001','00','000003','DETIME ','12:19:06'),
       ('001507001','110000002','00','000018','DETIME ','18:11:28'),
       ('001507001','110000003','00','000033','DETIME ','12:21:04')

Suppose the field Indice is the primary key in my table and a record with Indice='110000002' is already in the table: the statement fails. Is there a way to keep it from failing and insert only the two correct records?

Thanks
Michele
DonAtWork
Master Smack Fu Yak Hacker
2167 Posts
Posted - 2012-02-14 : 06:47:42
bcp the file into a staging table that has the same schema as your generic table, then merge them (a rough sketch follows below).

http://weblogs.sqlteam.com/jeffs/archive/2008/05/13/question-needed-not-answer.aspx
How to ask: http://weblogs.sqlteam.com/brettk/archive/2005/05/25/5276.aspx
For ultra basic questions, follow these links:
http://www.sql-tutorial.net/
http://www.firstsql.com/tutor.htm
http://www.w3schools.com/sql/default.asp
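A rough sketch of the bcp-plus-MERGE approach, assuming it runs on ita-sqlserver1 itself (MERGE needs a local target, so the server name is dropped) and assuming a staging table named _TexFiles_Staging with the same columns as _TexFiles; the file path, field terminator, and staging table name are placeholders, not anything from the original posts:

-- 1) Load the text file into the staging table (run from a command prompt);
--    path and -t terminator are placeholders:
--    bcp Auditel.dbo._TexFiles_Staging in "C:\import\datafile.txt" -S ita-sqlserver1 -T -c -t,

-- 2) Merge the staged rows into the real table, inserting only keys that
--    are not already there:
MERGE [Auditel].[dbo].[_TexFiles] AS tgt
USING [Auditel].[dbo].[_TexFiles_Staging] AS src
    ON tgt.[Indice] = src.[Indice]
WHEN NOT MATCHED BY TARGET THEN
    INSERT ([cod],[Indice],[N],[Prog],[QST],[verba])
    VALUES (src.[cod], src.[Indice], src.[N], src.[Prog], src.[QST], src.[verba]);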
mvandoni
Starting Member
12 Posts
Posted - 2012-02-14 : 07:14:10
Thanks for answering. That solves only part of my problem.

The tables I import into are themselves staging tables used to merge data with the final tables, but generally the duplicate keys are in the file I have to import. My routine runs N batches, each importing a certain (configuration-dependent) number of records into the table. I'm doing it this way because adding records one by one takes forever in SQL 2008 (much slower than with SQL 2000).

So I can have two cases:
1) a duplicate is already in the table because of a previous batch
2) duplicates are both in the same batch

I need a way to ignore the duplicates and import all the rest.
DonAtWork
Master Smack Fu Yak Hacker
2167 Posts
Posted - 2012-02-14 : 11:36:28
You can do this:

10 Import 1000 rows into the staging table.
20 Merge the staging table with realTable (or clean up, then merge).
30 Truncate the staging table.
40 If rows are left to import, go to 10.

OR just import ALL rows into the staging table at once, then do your merge by joining a SELECT of distinct rows from the staging table (sketched below).

OR, same as above, but delete the dupes from the staging table first, THEN merge directly.
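A possible sketch of the second option, assuming the same hypothetical staging table as above. ROW_NUMBER keeps one row per Indice, so duplicates inside the batch and rows already in the real table are both skipped in a single statement:

;WITH dedup AS
(
    SELECT [cod],[Indice],[N],[Prog],[QST],[verba],
           ROW_NUMBER() OVER (PARTITION BY [Indice] ORDER BY [Prog]) AS rn
    FROM [Auditel].[dbo].[_TexFiles_Staging]   -- staging table name is an assumption
)
MERGE [Auditel].[dbo].[_TexFiles] AS tgt
USING (SELECT [cod],[Indice],[N],[Prog],[QST],[verba] FROM dedup WHERE rn = 1) AS src
    ON tgt.[Indice] = src.[Indice]
WHEN NOT MATCHED BY TARGET THEN
    INSERT ([cod],[Indice],[N],[Prog],[QST],[verba])
    VALUES (src.[cod], src.[Indice], src.[N], src.[Prog], src.[QST], src.[verba]);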
Lamprey
Master Smack Fu Yak Hacker
4614 Posts
Posted - 2012-02-14 : 14:40:32
It would also depend on how you need to handle the "duplicate" data. You might just be able to add a BatchID or something to identify how/when each row was BCPd into the staging table, then do your processing. If the primary key is "Indice", you might be able to make use of the IGNORE_DUP_KEY setting so duplicate rows simply aren't inserted (sketched below). Another possibility is to not use BCP and use a different tool for loading your data, like SSIS. Again, it all depends on the business rules you need to follow.
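A hedged example of the IGNORE_DUP_KEY idea; the constraint name is invented, and it assumes the primary key on Indice is being (re)created with the option turned on. With it on, duplicate-key rows are discarded with a warning instead of failing the whole INSERT:

ALTER TABLE [Auditel].[dbo].[_TexFiles]
    ADD CONSTRAINT PK_TexFiles_Indice   -- hypothetical constraint name
    PRIMARY KEY ([Indice])
    WITH (IGNORE_DUP_KEY = ON);

-- With this in place, the three-row INSERT from the first post would add the
-- two new rows and skip the existing '110000002' with just a warning.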