I want to create a CSV string that will be sent by mail. I'd like to avoid temporary files because there's simply no need for me to deal with the file system; as I said, the data is passed to a mail class.
I found this neat solution:
import csv
import io

output = io.StringIO()
writer = csv.writer(output, delimiter=';', dialect='excel-tab')
writer.writerows(data)
It works perfectly, except that the result is plain UTF-8, so special characters end up garbled when it is opened in Excel.
I tried to pass a BOM to the StringIO constructor, but nothing worked:
output = io.StringIO('\ufeff')
I tried to somehow set the encoding to utf-8-sig, but I couldn't find a way to do that without using a file...
Any ideas how to solve this problem?
Thanks!
StringIO holds strings, which in Python 3 means Unicode text. To do what you want, I think you need to use BytesIO instead.
After reading this and this, I think the problem is that StringIO is strings only, so there is no encoding step at all.
You could try this [extrapolated from the first link above]:
import codecs
import csv
import io

bio = io.BytesIO()
StreamWriter = codecs.getwriter('utf-8-sig')   # utf-8-sig emits the BOM on the first write
wrapper_file = StreamWriter(bio)
writer = csv.writer(wrapper_file, delimiter=';')
writer.writerows(data)
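If the goal is still to hand the finished CSV to a mail class, the bytes can then be pulled out of the buffer. A small usage sketch (the mail hand-off itself is an assumption based on the question, not shown in it):
wrapper_file.flush()         # make sure everything has reached the BytesIO buffer
csv_bytes = bio.getvalue()   # b'\xef\xbb\xbf...' - the BOM plus the encoded rows
# csv_bytes can now be attached to the mail, e.g. as a text/csv attachment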
Python character encoding makes my head hurt...
Related
I'm attempting to extract article information using the python newspaper3k package and then write to a CSV file. While the info is downloaded correctly, I'm having issues with the output to CSV. I don't think I fully understand unicode, despite my efforts to read about it.
from newspaper import Article, Source
import csv

first_article = Article(url="http://www.bloomberg.com/news/articles/2016-09-07/asian-stock-futures-deviate-as-s-p-500-ends-flat-crude-tops-46")
first_article.download()

if first_article.is_downloaded:
    first_article.parse()
    first_article.nlp()

article_array = []
collate = {}
collate['title'] = first_article.title
collate['content'] = first_article.text
collate['keywords'] = first_article.keywords
collate['url'] = first_article.url
collate['summary'] = first_article.summary
print(collate['content'])
article_array.append(collate)

keys = article_array[0].keys()

with open('bloombergtest.csv', 'w') as output_file:
    csv_writer = csv.DictWriter(output_file, keys)
    csv_writer.writeheader()
    csv_writer.writerows(article_array)
When I print collate['content'], which is first_article.text, the console outputs the article's content just fine, apostrophes and all. When I write to the CSV, the content cell has odd characters in it. For example:
“At the end of the day, Europe’s economy isn’t in great shape, inflation doesn’t look exciting and there are a bunch of political risks to reckon with.
So far I have tried:
with open('bloombergtest.csv', 'w', encoding='utf-8') as output_file:
to no avail. I also tried utf-16 instead of 8, but that just resulted in the cells being written in an odd order; it didn't create the cells correctly in the CSV, although the output looked correct. I've also tried .encode('utf-8') on various variables, but nothing has worked.
What's going on? Why would the console print the text correctly, while the CSV file has odd characters? How can I fix this?
Add encoding='utf-8-sig' to open(). Excel requires the UTF-8-encoded BOM (Byte Order Mark, code point U+FEFF) signature at the start of the file to interpret it as UTF-8; otherwise, it assumes the default localized encoding.
Changing with open('bloombergtest.csv', 'w', encoding='utf-8') as output_file: to with open('bloombergtest.csv', 'w', encoding='utf-8-sig') as output_file: worked, as recommended by Leon and Mark Tolonen.
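Concretely, the write block from the question becomes (adding newline='' as well, which the csv docs recommend so the module controls line endings):
with open('bloombergtest.csv', 'w', encoding='utf-8-sig', newline='') as output_file:
    csv_writer = csv.DictWriter(output_file, keys)
    csv_writer.writeheader()
    csv_writer.writerows(article_array)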
That's most probably a problem with the software that you use to open or print the CSV file - it doesn't "understand" that CSV is encoded in UTF-8 and assumes ASCII, latin-1, ISO-8859-1 or a similar encoding for it.
You can aid that software in recognizing the CSV file's encoding by placing a BOM sequence at the beginning of your file (which, in general, is not recommended for UTF-8).
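One way to place that signature by hand, sketched with the question's file name (this is equivalent to opening with encoding='utf-8-sig'):
with open('bloombergtest.csv', 'w', encoding='utf-8') as output_file:
    output_file.write('\ufeff')                      # BOM first...
    csv_writer = csv.DictWriter(output_file, keys)   # ...then write the CSV as usual
    csv_writer.writeheader()
    csv_writer.writerows(article_array)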
I have the task of converting a UTF-8 CSV file to an Excel file, but it is not read properly in Excel because there is no byte order mark (BOM) at the beginning of the file.
I have seen this approach:
https://stackoverflow.com/a/38025106/6102332
import csv

with open('test.csv', 'w', newline='', encoding='utf-8-sig') as f:
    w = csv.writer(f)
    # Write Unicode strings.
    w.writerow([u'English', u'Chinese'])
    w.writerow([u'American', u'美国人'])
    w.writerow([u'Chinese', u'中国人'])
But that only seems to work when creating a brand-new file; it does not work for my file, which already has data. Is there an easy way to fix an existing file?
Is there any other way than this one? https://stackoverflow.com/a/6488070/6102332
Save the exported file as a csv
Open Excel
Import the data using Data-->Import External Data --> Import Data
Select the file type of "csv" and browse to your file
In the import wizard change the File origin to "65001 : Unicode (UTF-8)" (or choose the correct language character identifier)
Change the Delimiter to comma
Select where to import to and Finish
Read the file in and write it back out with the encoding desired:
with open('input.csv', 'r', encoding='utf-8-sig') as fin:
    with open('output.csv', 'w', encoding='utf-8-sig') as fout:
        fout.write(fin.read())
The utf-8-sig codec removes the BOM if present on read and adds a BOM on write, so the above can safely run on files with or without a BOM originally.
You can convert in place by doing:
file = 'test.csv'
with open(file, 'r', encoding='utf-8-sig') as f:
    data = f.read()
with open(file, 'w', encoding='utf-8-sig') as f:
    f.write(data)
Note also that utf-16 works as well; some older versions of Excel don't handle UTF-8 correctly.
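For the utf-16 route, the same copy pattern from above applies. A sketch (note that, depending on the Excel version, UTF-16 text may need tab delimiters to split into columns):
with open('input.csv', 'r', encoding='utf-8-sig') as fin:
    with open('output.csv', 'w', encoding='utf-16') as fout:  # the utf-16 codec adds its own BOM
        fout.write(fin.read())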
Thank You!
I have found a way to automatically handle the missing UTF-8 BOM signature.
In addition to the missing BOM signature, there is another problem: duplicate BOM signatures mixed into the file data. Excel does not show them, which silently corrupts other data when cells are compared or calculated. For example:
data (raw) -> Excel (displayed)
<BOM>Chinese -> Chinese
<BOM>12 -> 12
If you compare them, 'Chinese' with a hidden BOM prefix will obviously not be equal to plain 'Chinese'.
Python code to solve the problem:
import codecs

bom_utf8 = codecs.BOM_UTF8

def fix_duplicate_bom_utf8(file, bom=bom_utf8):
    # Strip every BOM from the data, then put exactly one back at the start.
    with open(file, 'rb') as f:
        data_f = f.read()
    data_finish = bom + data_f.replace(bom, b'')
    with open(file, 'wb') as f:
        f.write(data_finish)

# Use:
file_csv = r"D:\data\d20200114.csv"  # contents e.g.: American, 美国人
fix_duplicate_bom_utf8(file_csv)
# file_csv now starts with a single BOM: American, 美国人
I'm having a curious issue with the csv package in Python 3.7.
I'm importing a csv file and able to access all of the file as expected, with one exception: the header row, as stored in the "fieldnames" object, appears to have the first column header (first item in fieldnames) malformed.
This first field always has the format: 'xxx"header"'
where:
xxx are garbage characters that always seem to be the same
header is the correct header text
See the following screenshot of my table <csv.DictReader> object from my debug window:
My code to open the file follows. I added the headers[0] = table.fieldnames[0].split('"')[1] in order to extract the correct header and place it back into fieldnames.
import csv

with self.inputfile.open() as self.inputfid:
    table = csv.DictReader(self.inputfid, delimiter=',')
    headers = table.fieldnames
    headers[0] = table.fieldnames[0].split('"')[1]
(Note: self.inputfile is a pathlib.Path object)
I didn't notice this for a long time because I wasn't using the first column (with the # header) - I've been happily parsing with the rest of the columns for a while on multiple files.
If I look directly at the csv, there doesn't appear to be any issue.
Questions:
Does anyone know what the issue is? Is there anything I can try to correct the import issue?
If there isn't a fix, is there a better way to parse the garbage? I realize this could clear up in the future, but I think the split will still work even with just bare double quotes (the header should still be the 2nd item in the split, right?). Is there a better solution?
It looks like your csv file is encoded as utf-8-sig - a version of utf-8 used by some Windows applications, but it's being decoded as cp1252 - another encoding in common use on Windows.
>>> print('"#"'.encode('utf-8-sig').decode('cp1252'))
ï»¿"#"
The "garbage" characters preceding the header are the byte-order-mark that utf-8-sig uses to tell Windows applications that a file is encoded as utf-8 rather than one of the historically more common 8-bit encodings.
To avoid the "garbage", specify utf-8-sig as the encoding when opening your file.
The code in the question could be modified to work like this:
import csv

encoding = 'utf-8-sig'

with self.inputfile.open(encoding=encoding, newline='') as self.inputfid:
    table = csv.DictReader(self.inputfid, delimiter=',')
    headers = table.fieldnames
    ...
If - as seems likely - the encoding of input files may vary, the value of encoding (or a best guess) must be determined by using a tool like chardet, as used in the comments.
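A rough sketch of that detection step, assuming chardet is installed (pip install chardet); the 0.5 confidence threshold and the utf-8-sig fallback are arbitrary choices, not part of the original answer:
import chardet

with self.inputfile.open('rb') as f:   # read raw bytes for detection
    raw = f.read()

guess = chardet.detect(raw)            # e.g. {'encoding': 'UTF-8-SIG', 'confidence': 1.0, ...}
encoding = guess['encoding'] if guess['confidence'] > 0.5 else 'utf-8-sig'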
I'm using PyCharm and Python 3.7.
I would like to write data to a csv, but my code writes just the first line of my data to the file... does anyone know why?
This is my code:
from pytrends.request import TrendReq
import csv
pytrend = TrendReq()
pytrend.build_payload(kw_list=['auto model A', 'auto model C'])
# Interest Over Time
interest_over_time_df = pytrend.interest_over_time()
print(interest_over_time_df.head(100))
writer=csv.writer(open("C:\\Users\\
Desktop\\Data\\c.csv", 'w', encoding='utf-8'))
writer.writerow(interest_over_time_df)
Try using pandas:
import pandas as pd
interest_over_time_df.to_csv("file.csv")
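Given the Excel theme of the other threads here, it may also be worth passing an encoding; to_csv accepts one (utf-8-sig is my suggestion, not part of the original answer):
import pandas as pd

# utf-8-sig prepends a BOM so Excel detects UTF-8
interest_over_time_df.to_csv("file.csv", encoding='utf-8-sig')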
Once I encountered the same problem and solved it like below:
with open("file.csv", "rb") as fh:
Precise details:
r = read mode
b = mode specifier in open() stating that the file shall be treated as binary, so the contents will remain bytes and no decoding attempt will happen.
As we know, Python normally tries to convert a byte array (a bytes object which it assumes to be a utf-8-encoded string) to a Unicode string (str). This process is, of course, a decoding according to the utf-8 rules. When it tries this, it encounters a byte sequence which is not allowed in utf-8-encoded strings (namely the 0xff at position 0).
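To see the difference between the two modes, here is a sketch reading the same file both ways (assuming "file.csv" starts with a UTF-8 BOM; the byte values shown are the standard BOM bytes):
with open("file.csv", "rb") as fh:
    print(fh.read(3))        # b'\xef\xbb\xbf' - the raw BOM bytes, no decoding

with open("file.csv", encoding="utf-8-sig") as fh:
    print(fh.read())         # decoded text, with the BOM stripped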
You could try something like:
import csv

# Python 3's csv module wants a text-mode file ('w', not 'wb') plus newline=''
with open(<path to output_csv>, "w", newline='', encoding='utf-8') as csv_file:
    writer = csv.writer(csv_file, delimiter=',')
    for row in interest_over_time_df.itertuples():  # iterate rows (itertuples includes the index)
        writer.writerow(row)
Read more here: https://www.pythonforbeginners.com/files/with-statement-in-python
You need to loop over the data and write it out line by line.
Here's my use case: it's my job to clean CSV files which are often scraped from web pages (most are English, but some German and other odd non-Unicode characters sneak in there). Python 3 defaults to "utf-8", and the usual
import csv

# open file
with open('input.csv', 'r', encoding='utf-8') as f:
    reader = csv.reader(f)
fails with a UnicodeDecodeError even with try/except blocks everywhere.
I can't figure out how to clean the input if I can't even open it. My end goal is simply to read each line into a list I call text.
I'm out of ideas; I've even tried the following:
for encoding in ('utf-8', 'latin-1', etc, etc):
    try:
        # open the file
I can't make any assumptions about the encoding, as the files may be written on a unix machine in another part of the world and I'm on a Windows machine. The input is otherwise just simple strings, for example:
test case: "This is an example of a test case and the test may wrap around to a new line when opened in a text processor"
Maybe try reading in the contents entirely, then using bytes.decode() in much the same way you mentioned:
#!python3
import csv
from io import StringIO

with open('input.csv', 'rb') as binfile:
    csv_bytes = binfile.read()

for enc in ('utf-8', 'utf-16', 'latin1'):
    try:
        csv_string = csv_bytes.decode(encoding=enc, errors='strict')
        break
    except UnicodeError as e:
        last_err = e
else:  # none worked (in practice latin1 decodes any byte sequence, so it is a guaranteed fallback)
    raise last_err

with StringIO(csv_string) as csvfile:
    reader = csv.reader(csvfile)  # don't shadow the csv module
    for row in reader:
        print(row[0])