I'm using SQL Server Import Wizard to import a 260GB flat text file into a table. I have about 25 fields/columns per row and endless rows. The large text file has vertical bars as a column delimiter and {CR}{LF} as a row delimiter.
My issue is this: some of the rows of data are not clean and are missing a column delimiter or have an extra column delimiter. As a result, data gets pushed into the wrong fields. This is problematic because each column has a very specific data type, which triggers SQL Server's data conversion failed error.
I don't mind incorrect data being pushed into the wrong fields - it seems to happen for about 0.01% of rows and isn't a big issue. So I would like to override this error and continue loading the data. Right now the problem is that the SQL Server Import Wizard stops altogether when it hits this error, meaning I have to cleanse the data and reload each time (very painful for such a large text file).
Any help/advice/suggestions on strategy here?
Thanks
Using the wizard, on the Review Data Type Mapping page, set the On Error (global) option to Ignore.
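Another option is to pre-filter the file before the wizard ever sees it, so bad rows never reach the import. Since a well-formed row with 25 fields contains exactly 24 vertical bars, rows with a missing or extra delimiter can be detected by counting. Here is a minimal Python sketch along those lines (the field count of 25, and the `clean.txt` / `rejects.txt` file names, are assumptions for illustration); it streams line by line, so a 260GB file never has to fit in memory:

```python
EXPECTED_FIELDS = 25  # assumption: 25 columns per row, as described above

def split_rows(lines, expected=EXPECTED_FIELDS):
    """Separate clean rows from rows with a missing/extra delimiter."""
    clean, rejects = [], []
    for line in lines:
        # A row with N fields contains exactly N-1 vertical-bar delimiters.
        if line.rstrip("\r\n").count("|") == expected - 1:
            clean.append(line)
        else:
            rejects.append(line)
    return clean, rejects

def filter_file(src, dst="clean.txt", reject="rejects.txt"):
    """Stream src line by line, writing good rows to dst and bad rows
    to reject; returns (clean_count, bad_count)."""
    clean_count = bad_count = 0
    with open(src, encoding="utf-8", errors="replace") as fin, \
         open(dst, "w", encoding="utf-8") as fout, \
         open(reject, "w", encoding="utf-8") as frej:
        for line in fin:
            if line.rstrip("\r\n").count("|") == EXPECTED_FIELDS - 1:
                fout.write(line)
                clean_count += 1
            else:
                frej.write(line)
                bad_count += 1
    return clean_count, bad_count
```

The rejects file then gives you the 0.01% of bad rows to inspect or fix later, instead of losing the whole import run to them.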