Writing results from SQL query to CSV and avoiding extra line-breaks - python

I have to extract data from several different database engines. After this data is exported, I send the data to AWS S3 and copy that data to Redshift using a COPY command. Some of the tables contain lots of text, with line breaks and other characters present in the column fields. When I run the following code:
cursor.execute('''SELECT * FROM some_schema.some_message_log''')
rows = cursor.fetchall()
with open('data.csv', 'w', newline='') as fp:
    a = csv.writer(fp, delimiter='|', quoting=csv.QUOTE_ALL, quotechar='"', doublequote=True, lineterminator='\n')
    a.writerows(rows)
Some of the columns that have carriage returns/linebreaks will create new lines:
"2017-01-05 17:06:32.802700"|"SampleJob"|""|"Date"|"error"|"Job.py"|"syntax error at or near ""from"" LINE 34: select *, SYSDATE, from staging_tops.tkabsences;
^
-<class 'psycopg2.ProgrammingError'>"
which causes the import process to fail. I can work around this by hard-coding for exceptions:
cursor.execute('''SELECT * FROM some_schema.some_message_log''')
rows = cursor.fetchall()
with open('data.csv', 'w', newline='') as fp:
    a = csv.writer(fp, delimiter='|', quoting=csv.QUOTE_ALL, quotechar='"', doublequote=True, lineterminator='\n')
    for row in rows:
        list_of_rows = []
        for c in row:
            if isinstance(c, str):
                c = c.replace("\n", "\\n")
                c = c.replace("|", "\|")
                c = c.replace("\\", "\\\\")
                list_of_rows.append(c)
            else:
                list_of_rows.append(c)
        a.writerow([x.encode('utf-8') if isinstance(x, str) else x for x in list_of_rows])
But this takes a long time to process larger files, and seems like bad practice in general. Is there a faster way to export data from a SQL cursor to CSV that will not break when faced with text columns that contain carriage returns/line breaks?

If you're doing SELECT * FROM table without a WHERE clause, you could use COPY table TO STDOUT instead, with the right options:
copy_command = """COPY some_schema.some_message_log TO STDOUT
CSV QUOTE '"' DELIMITER '|' FORCE QUOTE *"""
with open('data.csv', 'w', newline='') as fp:
cursor.copy_expert(copy_command)
In my testing, this results in a literal '\n' instead of an actual newline, whereas writing through the csv writer gave broken lines.
If you do need a WHERE clause in production you could create a temporary table and copy it instead:
cursor.execute("""CREATE TEMPORARY TABLE copy_me AS
    SELECT this, that, the_other FROM table_name WHERE conditions""")
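You could then copy the temporary table out through the same mechanism (a sketch reusing the command shape from above; copy_me is the temporary table just created):
copy_command = """COPY copy_me TO STDOUT
                  CSV QUOTE '"' DELIMITER '|' FORCE QUOTE *"""
with open('data.csv', 'w', newline='') as fp:
    cursor.copy_expert(copy_command, fp)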
(edit) Looking at your question again I see you mention "several different database engines". The above works with psycopg2 and PostgreSQL but could probably be adapted for other databases or libraries.

I suspect the issue is as simple as making sure the Python CSV export library and Redshift's COPY import speak a common interface. In short, check your delimiters and quoting characters and make sure both the Python output and the Redshift COPY command agree.
In slightly more detail: the DB drivers will have already done the hard work of getting the data into Python in a well-understood form. That is, each row from the DB is a list (or tuple, generator, etc.), and each cell is individually accessible. Once you have that list-like structure, Python's CSV exporter can do the rest of the work and -- crucially -- Redshift will be able to COPY FROM the output, embedded newlines and all. In particular, you should not need to do any manual escaping; the .writerow() or .writerows() functions should be all you need.
Redshift's COPY implementation understands the most common dialect of CSV by default, which is to
delimit cells by a comma (,),
quote cells with double quotes ("),
and to escape any embedded double quotes by doubling (" → "").
To back that up with documentation from Redshift FORMAT AS CSV:
... The default quote character is a double quotation mark ( " ). When the quote character is used within a field, escape the character with an additional quote character. ...
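Python's csv module speaks that same dialect out of the box. As a quick illustration (a minimal sketch, not part of the original answer), embedded quotes get doubled and embedded newlines stay inside the quoted field:
import csv, io

buf = io.StringIO()
csv.writer(buf).writerow(['a "quoted" value', 'line one\nline two'])
print(buf.getvalue())
# "a ""quoted"" value","line one
# line two"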
However, your Python CSV export code uses a pipe (|) as the delimiter and sets the quotechar to a double quote ("). That, too, can work, but why stray from the defaults? I suggest using CSV's namesake delimiter and keeping your code simpler in the process:
cursor.execute('''SELECT * FROM some_schema.some_message_log''')
rows = cursor.fetchall()
with open('data.csv', 'w', newline='') as fp:
    csvw = csv.writer(fp)
    csvw.writerows(rows)
From there, tell COPY to use the CSV format (again with no need for non-default specifications):
COPY your_table FROM your_csv_file auth_code FORMAT AS CSV;
That should do it.

Why write to the file after every row?
cursor.execute('''SELECT * FROM some_schema.some_message_log''')
rows = cursor.fetchall()
with open('data.csv', 'w', newline='') as fp:
    a = csv.writer(fp, delimiter='|', quoting=csv.QUOTE_ALL, quotechar='"', doublequote=True, lineterminator='\n')
    list_of_rows = []
    for row in rows:
        new_row = []
        for c in row:
            if isinstance(c, str):
                c = c.replace("\n", "\\n")
                c = c.replace("|", "\|")
                c = c.replace("\\", "\\\\")
            new_row.append(c)
        list_of_rows.append(new_row)  # collect every processed row first
    a.writerows(list_of_rows)         # then write them all in one call

The problem is that you are using the Redshift COPY command with its default parameters, which use a pipe as a delimiter (see here and here) and require escaping of newlines and pipes within text fields (see here and here). However, the Python csv writer only knows how to do the standard thing with embedded newlines, which is to leave them as-is, inside a quoted string.
Fortunately, the Redshift COPY command can also use the standard CSV format. Adding the CSV option to your COPY command gives you this behavior:
Enables use of CSV format in the input data. To automatically escape delimiters, newline characters, and carriage returns, enclose the field in the character specified by the QUOTE parameter. The default quote character is a double quotation mark ( " ). When the quote character is used within a field, escape the character with an additional quote character.
This is exactly the approach used by the Python CSV writer, so it should take care of your problems. So my advice would be to create a standard csv file using code like this:
cursor.execute('''SELECT * FROM some_schema.some_message_log''')
rows = cursor.fetchall()
with open('data.csv', 'w', newline='') as fp:
    a = csv.writer(fp)  # no need for special settings
    a.writerows(rows)
Then in Redshift, change your COPY command to something like this (note the added CSV tag):
COPY logdata
FROM 's3://mybucket/data/data.csv'
iam_role 'arn:aws:iam::0123456789012:role/MyRedshiftRole'
CSV;
Alternatively, you could continue manually converting your fields to match the default settings for Redshift's COPY command. Python's csv.writer won't do this for you on its own, but you may be able to speed up your code a bit, especially for big files, like this:
cursor.execute('''SELECT * FROM some_schema.some_message_log''')
rows = cursor.fetchall()
with open('data.csv', 'w', newline='') as fp:
    a = csv.writer(
        fp,
        delimiter='|', quoting=csv.QUOTE_ALL,
        quotechar='"', doublequote=True, lineterminator='\n'
    )
    a.writerows(
        [c.replace("\\", "\\\\").replace("\n", "\\\n").replace("|", "\\|")
         if isinstance(c, str)
         else c
         for c in row]
        for row in rows
    )
As another alternative, you could experiment with importing the query data into a pandas DataFrame with pandas.read_sql, doing the replacements in the DataFrame (a whole column at a time), then writing the table out with .to_csv. Pandas has incredibly fast csv code, so this may give you a significant speedup.
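For instance (a rough sketch, assuming a live DB-API connection conn and that you keep the default CSV dialect on the Redshift side):
import pandas as pd

df = pd.read_sql('SELECT * FROM some_schema.some_message_log', conn)
df.to_csv('data.csv', index=False)  # quotes embedded newlines/quotes much like csv.writer does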
Update: I just noticed that in the end I basically duplicated #hunteke's answer. The key point (which I missed the first time through) is that you probably haven't been using the CSV argument in your current Redshift COPY command; if you add that, this should get easy.

Related

remove double quotes in each row (csv writer)

I'm writing API results to a CSV file in Python 3.7. The problem is that it adds double quotes (") to each row when it writes to the file.
I'm passing format as csv to the API call so that I get the results in CSV format, and then I write them to a CSV file stored at a specific location.
Please suggest if there is any better way to do this.
Here is the sample code..
with open(target_file_path, 'w', encoding='utf8') as csvFile:
    writer = csv.writer(csvFile, quoting=csv.QUOTE_NONE, escapechar='\"')
    for line in rec.split('\r\n'):
        writer.writerow([line])
When I use escapechar='\"' it adds a quote (") at the end of every column value.
Here are the sample records:
2264855868",42.38454",-71.01367",07/15/2019 00:00:00",07/14/2019 20:00:00"
2264855868",42.38454",-71.01367",07/15/2019 01:00:00",07/14/2019 21:00:00"
The API gives you a string/bytes which you can write directly to a file.
data = requests.get(..).content
open(filename, 'wb').write(data)
With csv.writer you would have to convert the string/bytes to Python data using csv.reader and then convert it back to string/bytes with csv.writer - so there is no sense in doing it.
The same method should work if the API sends any kind of file: JSON, CSV, XML, PDF, images, audio, etc.
For bigger files you could use chunked/streamed downloads in requests. Doc: requests - Advanced Usage
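A rough sketch of the streamed variant (the URL, filename, and chunk size are placeholders):
import requests

with requests.get(url, stream=True) as r:
    with open(filename, 'wb') as f:
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)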
Have you tried removing the backward-slash from escapechar='\"'? It shouldn't be necessary, since you are using single quotes for the string.
EDIT: From the documentation:
A one-character string used by the writer to escape the delimiter if quoting is set to QUOTE_NONE and the quotechar if doublequote is False. On reading, the escapechar removes any special meaning from the following character.
And the delimiter:
A one-character string used to separate fields. It defaults to ','
So it is going to escape the delimiter (,) with whatever you set as the escapechar, in this case the double quote (").
If you don't want any escaping, try leaving it out.
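To see where the stray quotes come from, here is a minimal sketch (writing to sys.stdout rather than a file) of what QUOTE_NONE plus escapechar='"' does to a field that contains commas -- it reproduces the pattern in your sample records:
import csv, sys

writer = csv.writer(sys.stdout, quoting=csv.QUOTE_NONE, escapechar='"')
writer.writerow(['2264855868,42.38454,-71.01367'])
# prints: 2264855868",42.38454",-71.01367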
Try:
import codecs

def find_replace(file, search_characters, replace_with):
    text = codecs.open(file, "r", "utf-8-sig")
    text = ''.join([i for i in text]).replace(
        search_characters, replace_with)
    x = codecs.open(file, "w", "utf-8-sig")
    x.writelines(text)
    x.close()

if __name__ == '__main__':
    file = "target_file_path"
    search_characters = '"'
    replace_with = ''
    find_replace(file, search_characters, replace_with)
output:
2264855868,42.38454,-71.01367,07/15/2019 00:00:00,07/14/2019 20:00:00
2264855868,42.38454,-71.01367,07/15/2019 01:00:00,07/14/2019 21:00:00

How to ETL postgresql columns which contain HTML code via csv copy

I need to ETL data between two PostgreSQL databases. Several of the columns that need to be transferred contain HTML code. There are several million rows and I need to move this via CSV copy for speed reasons. While trying to csv.writer and copy_from the CSV file, the characters in the HTML code cause errors with the transfer, making it seemingly impossible to set a delimiter or handle quoting.
I am running this ETL job via python and have used .replace() on the columns to make ';' work as a delimiter.
However, I am running into issues with page breaks and paragraphs in the HTML code (<br> and <p> tags specifically) and with quoting fields. When I encounter these I receive the error that there is 'missing data for column'.
I have tried setting 'doublequote' and changing the escape character to '\'. I have also tried changing the quotechar to '|'.
The code I am using to create the csv is:
filename = 'transfer.csv'
with open(filename, 'w') as csvfile:
    csvwriter = csv.writer(csvfile, delimiter=';', quotechar='|',
                           quoting=csv.QUOTE_MINIMAL)
The code I am using to load the CSV is:
f = open(file_path, "r")
print('File opened')
cur.copy_from(f, stage_table, sep=';', null="")
As mentioned above, the error message that I receive when I try to import the CSV is: "Error: missing data for column".
I would love to be able to format my csv.writer and copy_from code in such a way that I do not have to use dozens of nested replace() statements and transforms to ETL this data, and can have an automated script run it on a schedule.
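Building on the answers earlier on this page, one way to avoid nested replace() calls entirely is to write a standard CSV (letting the writer quote embedded delimiters, quotes, and newlines in the HTML) and load it with COPY ... CSV via copy_expert instead of copy_from. A minimal sketch, assuming Python 3 and psycopg2; src_cur is a hypothetical cursor on the source database, and stage_table stands in for your staging table name:
import csv

src_cur.execute("SELECT * FROM source_table")  # hypothetical source query
with open('transfer.csv', 'w', newline='') as f:
    csv.writer(f).writerows(src_cur)  # default dialect quotes fields containing ';', '"' and newlines

with open('transfer.csv', 'r') as f:
    cur.copy_expert("COPY stage_table FROM STDIN WITH CSV", f)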

Python csv writer : "Unknown Dialect" Error

I have a very large string in the CSV format that will be written to a CSV file.
I tried to write it to CSV using the simplest of Python scripts:
results=""" "2013-12-03 23:59:52","/core/log","79.223.39.000","logging-4.0",iPad,Unknown,"1.0.1.59-266060",NA,NA,NA,NA,3,"1385593191.865",true,ERROR,"app_error","iPad/Unknown/webkit/537.51.1",NA,"Does+not",false
"2013-12-03 23:58:41","/core/log","217.7.59.000","logging-4.0",Win32,Unknown,"1.0.1.59-266060",NA,NA,NA,NA,4,"1385593120.68",true,ERROR,"app_error","Win32/Unknown/msie/9.0",NA,"Does+not,false
"2013-12-03 23:58:19","/core/client_log","79.240.195.000","logging-4.0",Win32,"5.1","1.0.1.59-266060",NA,NA,NA,NA,6,"1385593099.001",true,ERROR,"app_error","Win32/5.1/mozilla/25.0",NA,"Could+not:+{"url":"/all.json?status=ongoing,scheduled,conflict","code":0,"data":"","success":false,"error":true,"cached":false,"jqXhr":{"readyState":0,"responseText":"","status":0,"statusText":"error"}}",false"""
resultArray = results.split('\n')
with open(csvfile, 'wb') as f:
writer = csv.writer(f)
for row in resultArray:
writer.writerows(row)
The code returns
"Unknown Dialect"
Error
Is the error because of the script or is it due to the string that is being written?
EDIT
If the problem is bad input how do I sanitize it so that it can be used by the csv.writer() method?
You need to specify the format of your string:
with open(csvfile, 'wb') as f:
    writer = csv.writer(f, delimiter=',', quotechar="'", quoting=csv.QUOTE_ALL)
You might also want to re-visit your writing loop; the way you have it written you will get one column in your file, and each row will be one character from the results string.
To really exploit the module, try this:
import csv
lines = ["'A','bunch+of','multiline','CSV,LIKE,STRING'"]
reader = csv.reader(lines, quotechar="'")
with open('out.csv', 'wb') as f:
    writer = csv.writer(f)
    writer.writerows(list(reader))
out.csv will have:
A,bunch+of,multiline,"CSV,LIKE,STRING"
If you want to quote all the column values, then add quoting=csv.QUOTE_ALL to the writer object; then you file will have:
"A","bunch+of","multiline","CSV,LIKE,STRING"
To change the quotes to ', add quotechar="'" to the writer object.
The above code does not give csv.writer.writerows input that it expects. Specifically:
resultArray = results.split('\n')
This creates a list of strings. Then, you pass each string to your writer and tell it to writerows with it:
for row in resultArray:
    writer.writerows(row)
But writerows does not expect a single string. From the docs:
csvwriter.writerows(rows)
Write all the rows parameters (a list of row objects as described above) to the writer’s file object, formatted according to the current dialect.
So you're passing a string to a method that expects its argument to be a list of row objects, where a row object is itself expected to be a sequence of strings or numbers:
A row must be a sequence of strings or numbers for Writer objects
Are you sure your listed example code accurately reflects your attempt? While it certainly won't work, I would expect the exception produced to be different.
For a possible fix - if all you are trying to do is to write a big string to a file, you don't need the csv library at all. You can just write the string directly. Even splitting on newlines is unnecessary unless you need to do something like replacing Unix-style linefeeds with DOS-style linefeeds.
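For example (a minimal sketch using the names from the question):
with open(csvfile, 'wb') as f:
    f.write(results)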
If you need to use the csv module after all, you need to give your writer something it understands - in this example, that would be something like writer.writerow(['A','bunch+of','multiline','CSV,LIKE,STRING']). Note that that's a true Python list of strings. If you need to turn your raw string "'A','bunch+of','multiline','CSV,LIKE,STRING'" into such a list, I think you'll find the csv library useful as a reader - no need to reinvent the wheel to handle the quoted commas in the substring 'CSV,LIKE,STRING'. And in that case you would need to care about your dialect.
You can use csv.register_dialect:
for example for escaped formatting:
csv.register_dialect('escaped', escapechar='\\', doublequote=True, quoting=csv.QUOTE_ALL)
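You would then refer to the registered dialect by name when constructing the writer (a usage sketch, not part of the original answer):
writer = csv.writer(f, dialect='escaped')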

Insert whitespace after delimiter with CSV writer

f = open("file1.csv", "r")
g = open("file2.csv", "w")
a = csv.reader(f, delimiter=";", skipinitialspace=True)
b = csv.writer(g, delimiter=";")
for line in a:
    b.writerow(line)
In the above code, I try to load file1.csv using the csv module in Python2.7, and then write it in file2.csv using a csv.writer.
My issue comes from existing whitespaces (a single space character) after the delimiter in the input file. I need to remove them in order to do some data manipulation later on, so I used the skipinitialspace=True argument for the reader. However, I cannot get the writer to print the space char after the delimiter, and therefore disturbing any subsequent diffing of the two files.
I tried to use the Sniffer class to auto-generate a Dialect but I guess my input files (coming from a large complex legacy system, with dozens of fields and poor quoting and escaping) are proving to be too complex for this.
In more simple terms I'm looking for the answers to the following questions:
How can I insert a space character after each delimiter in the writer?
Incidentally, what are the reasons for prohibiting the use of multi-character strings as delimiters? delimiter="; " would have solved my problem.
You can wrap your file objects in proxies that add the whitespace:
>>> class DelimitedFile(file):
...     def write(self, value):
...         super(DelimitedFile, self).write(value.replace(";", "; "))
...
>>> f = DelimitedFile("foo", "w")
>>> f.write("hello;world")
>>> f.close()
>>> open("foo").read()
'hello; world'
If you left the whitespace you want written in (removing/restoring it during processing), or put it back after processing but before writing, that would take care of it.
One solution would be to write to a StringIO object, and then to replace the semicolons with '; ', or to do so during processing of the lines, if you do any other processing.
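A rough sketch of that StringIO approach (Python 2, reusing the reader a and output file g from the question; note the replace also touches any semicolons inside quoted fields):
import csv
import StringIO

buf = StringIO.StringIO()
writer = csv.writer(buf, delimiter=";")
for line in a:
    writer.writerow(line)
g.write(buf.getvalue().replace(";", "; "))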
As for the first, I would probably do something like this:
for k, line in enumerate(a):
    if k == 0:
        b.writerow(line)
    else:
        b.writerow(' ' + line) #assuming line is always a string, if not just use str() on it
As for the second, I have no idea.

How to convert tab separated, pipe separated to CSV file format in Python

I have a text file (.txt) which could be in tab-separated format or pipe-separated format, and I need to convert it into CSV file format. I am using Python 2.6. Can anyone suggest how to identify the delimiter in a text file, read the data, and then convert it into a comma-separated file?
Thanks in advance
I fear that you can't identify the delimiter without knowing what it is. The problem with CSV is that, quoting ESR:
the Microsoft version of CSV is a textbook example of how not to design a textual file format.
The delimiter needs to be escaped in some way if it can appear in fields. Without knowing, how the escaping is done, automatically identifying it is difficult. Escaping could be done the UNIX way, using a backslash '\', or the Microsoft way, using quotes which then must be escaped, too. This is not a trivial task.
So my suggestion is to get full documentation from whoever generates the file you want to convert. Then you can use one of the approaches suggested in the other answers or some variant.
Edit:
Python provides csv.Sniffer that can help you deduce the format of your DSV. If your input looks like this (note the quoted delimiter in the first field of the second row):
a|b|c
"a|b"|c|d
foo|"bar|baz"|qux
You can do this:
import csv
csvfile = open("csvfile.csv")
dialect = csv.Sniffer().sniff(csvfile.read(1024))
csvfile.seek(0)
reader = csv.DictReader(csvfile, dialect=dialect)
for row in reader:
    print row,
# => {'a': 'a|b', 'c': 'd', 'b': 'c'} {'a': 'foo', 'c': 'qux', 'b': 'bar|baz'}
# write records using other dialect
Your strategy could be the following:
parse the file with BOTH a tab-separated csv reader and a pipe-separated csv reader
calculate some statistics on the resulting rows to decide which resultset is the one you want to write. One idea could be counting the total number of fields in the two recordsets (expecting that tabs and pipes are not so common). Another one (if your data is strongly structured and you expect the same number of fields in each line) could be measuring the standard deviation of the number of fields per line and taking the record set with the smallest standard deviation (a rough sketch of this variant follows the example below).
In the following example you find the simpler statistic (total number of fields)
import csv
piperows = []
tabrows = []

#parsing | delimiter
f = open("file", "rb")
readerpipe = csv.reader(f, delimiter="|")
for row in readerpipe:
    piperows.append(row)
f.close()

#parsing TAB delimiter
f = open("file", "rb")
readertab = csv.reader(f, delimiter="\t")
for row in readertab:
    tabrows.append(row)
f.close()

#in this example, we use the total number of fields as indicator (but it's not guaranteed to work! it depends on the nature of your data)
#count total fields
totfieldspipe = reduce(lambda x, y: x + y, [len(f) for f in piperows])
totfieldstab = reduce(lambda x, y: x + y, [len(f) for f in tabrows])

if totfieldspipe > totfieldstab:
    yourrows = piperows
else:
    yourrows = tabrows
#the var yourrows contains the rows, now just write them in any format you like
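For completeness, a rough sketch of the standard-deviation heuristic mentioned above (not part of the original answer; it reuses the piperows and tabrows lists built in the code above):
import math

def stddev_of_field_counts(rows):
    counts = [len(r) for r in rows]
    mean = float(sum(counts)) / len(counts)
    return math.sqrt(sum((c - mean) ** 2 for c in counts) / len(counts))

#pick the parse whose rows have the most consistent number of fields
if stddev_of_field_counts(piperows) < stddev_of_field_counts(tabrows):
    yourrows = piperows
else:
    yourrows = tabrows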
Like this
from __future__ import with_statement
import csv
import re
with open(input, "r") as source:
    with open(output, "wb") as destination:
        writer = csv.writer(destination)
        for line in source:
            writer.writerow(re.split('[\t|]', line.rstrip("\r\n")))  # split on tab or pipe, dropping the trailing newline
I would suggest taking some of the example code from the existing answers, or perhaps better, using the csv module from Python, and changing it to first assume tab-separated, then pipe-separated, producing two output files which are comma-separated. Then you visually examine both files to determine which one you want and pick that.
If you actually have lots of files, then you need to try to find a way to detect which file is which.
One of the examples has this:
if "|" in line:
This may be enough: if the first line of a file contains a pipe, then maybe the whole file is pipe separated, else assume a tab separated file.
Alternatively fix the file to contain a key field in the first line which is easily identified - or maybe the first line contains column headers which can be detected.
for line in open("file"):
    line = line.strip()
    if "|" in line:
        print ','.join(line.split("|"))
    else:
        print ','.join(line.split("\t"))
