I am cleaning textual data that I scraped from multiple URLs. How can I remove non-English words/symbols from the data in a CSV file?
I saved and read the data using the following code:
To save the data as a CSV file:
df.to_csv("blogdata.csv", encoding="utf-8")
After saving the data, the CSV file looks as follows, including non-English words and symbols (e.g., '\n\t\t\t', m’, etc.):
The symbols did not show in the original data, and some of them even appear in data that is in English. Take 'Ross Parker' in the 7th row as an example.
The data saved in the csv file says: ['\n\t\t\t', 'It’s about time I wrote an update on what we’ve been up to over the past few months. We’re about to...
Whereas in the original data scraped from the URL, it shows as follows:
Can anybody explain why this happens, and help me solve this issue and clean the non-English data from the file?
Thank you so much in advance!
It looks like pilot error: the data is correct but you are looking at it in a tool which is configured or hard-coded to display the text as Latin-1 (or Windows code page 1252?) even though you saved it as UTF-8.
Some tools - especially on Windows - will do whimsical things with UTF-8 which doesn't have a BOM. Maybe try adding one (and maybe file a bug report if this really helps; the tool should at the very least let you override its default encoding, without modifying the input data).
In other words, if the screenshot with the broken data is from Excel, you probably selected a DOS code page (or the horribly mislabelled "ANSI") instead of UTF-8 when it asked how to import this CSV file. Perhaps the best fix would be to devise a workflow which does not involve a spreadsheet.
Or perhaps you used a tool which didn't ask you anything, and tried to "sniff" the data to determine its encoding, and it guessed wrong. Adding an invisible byte sequence called a BOM which is unique to UTF-8 should hopefully allow it to guess right; but this is buggy behavior - you should not be a hostage to its clearly imperfect heuristics. (See also "Bush hid the facts" for a related story.)
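If the file really is correct UTF-8, a minimal sketch of the BOM fix on the saving side (assuming the dataframe from the question is still available as df) is to write it with the "utf-8-sig" codec, which is UTF-8 plus the BOM that Excel and similar tools look for:

import pandas as pd

# "utf-8-sig" is UTF-8 with a BOM prepended; tools that otherwise guess
# Latin-1/cp1252 (Excel in particular) will then decode the file correctly.
df.to_csv("blogdata.csv", encoding="utf-8-sig")

# Reading it back works with either "utf-8" or "utf-8-sig".
df_check = pd.read_csv("blogdata.csv", encoding="utf-8-sig")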
Related
I'm using the csv module in Python to write a CSV file. It has to be like this and it has to also be viewable through Excel. But when someone views it through Excel, Cyrillic characters show up as all sorts of weird symbols. What kind of encoding must be used exactly when importing in Excel to fix this? I don't have Excel right now and I don't have the option of playing around with Excel on the other guy's computer, so I need to know the proper encoding type in advance. There seem to be a LOT of encoding options in Excel, so it's hard to pick the right one. So...any suggestions?
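A minimal sketch of one common approach with the csv module (the filename and sample rows are illustrative, and this assumes a reasonably recent Excel that honours the BOM):

import csv

# Write the CSV as UTF-8 with a BOM ("utf-8-sig"); Excel uses the BOM to pick
# the right encoding on import, so Cyrillic text survives the round trip.
rows = [["имя", "город"], ["Иван", "Москва"]]  # illustrative data

with open("report.csv", "w", newline="", encoding="utf-8-sig") as f:
    csv.writer(f).writerows(rows)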
I can export a CSV with OpenERP 7, but it is encoded in ANSI. I would like to have it as a UTF-8 encoded file. How can I achieve this? The default export option in OpenERP doesn't have any extra options. What files should be modified? Or is there an app for this matter? Any help would be appreciated.
Encodings are a complicated thing, and it is difficult to answer an encoding-related question without precise facts. ANSI is not an encoding, I assume you actually mean ASCII. And ASCII itself can be seen as a subset of UTF-8, so technically ASCII is valid UTF-8.
OpenERP 7.0 only exports CSV files in UTF-8, so if you do not get the expected result you are probably facing a different kind of issue, for example:
The original data was imported using a wrong encoding (you can choose the encoding when you import, but again the default is UTF-8), so it is actually corrupted in the database, and OpenERP cannot do anything about it
The CSV file might be exported correctly in UTF-8 but you are opening it with a different encoding (for example on Windows most programs will assume your files are ISO-8859-1/Latin-1/Windows-1252 encoded). Double-check the settings of your program.
If you need more help you'll have to be much more specific: what result do you get (what does the data look like), what did you expect, etc.
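To double-check the second possibility, a quick sketch that tests whether the exported file is actually valid UTF-8 (the filename is an example):

# Try to decode the exported file strictly as UTF-8; a UnicodeDecodeError means
# the bytes are not valid UTF-8, so the export (not the viewer) is at fault.
with open("export.csv", "rb") as f:
    raw = f.read()

try:
    raw.decode("utf-8")
    print("Valid UTF-8 -- check the encoding setting of the program opening it.")
except UnicodeDecodeError as err:
    print("Not valid UTF-8:", err)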
As part of my Django app, I have to get the contents of a text file which a user uploads (which could be any charset) and save it to my DB. I keep running into issues (like having to remove UTF-8's BOM manually, or having to figure out how to account for non-printable characters, or having to figure out how to make all Unicode characters work - not just Latin ones, etc.) and each of these issues requires its own hack.
Is there a robust way to do this that doesn't require each of these case-by-case fixes? Right now I'm just using file.read() to get the contents, then doing all of those workarounds to clean the contents, and then using .save() to save it to the DB (I have a model for this).
What else can I be doing?
It causes some overhead, but you could Base64-encode the entire string before persisting it to the DB. Then no escaping is required.
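A minimal sketch of that Base64 round trip (plain Python, no Django specifics):

import base64

def encode_for_storage(raw_bytes):
    # Base64 turns arbitrary bytes into plain ASCII, so any text column can hold it.
    return base64.b64encode(raw_bytes).decode("ascii")

def decode_from_storage(stored_text):
    return base64.b64decode(stored_text)

# Round trip with bytes in an arbitrary encoding:
original = "naïve café".encode("latin-1")
assert decode_from_storage(encode_for_storage(original)) == original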
If you want to explicitly steer away from any issues with encoding and just see files as bunches of binary data (not strings of text in a specific encoding) you might want to use your database's binary format.
For MySQL this is BINARY and VARBINARY: http://dev.mysql.com/doc/refman/5.0/en/binary-varbinary.html
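If you go that route from Django, models.BinaryField maps to the database's binary column type; a hypothetical sketch (the model and field names are invented):

from django.db import models

class UploadedText(models.Model):  # hypothetical model name
    # The bytes exactly as uploaded -- nothing is decoded, so no BOM or
    # charset handling is needed at save time.
    raw_content = models.BinaryField()
    # Optionally record what you currently believe the encoding is.
    guessed_encoding = models.CharField(max_length=40, blank=True)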
For a deeper understanding of unicode & utf-8 issues (recommended) this is a nice read on the subject:
http://www.tbray.org/ongoing/When/200x/2003/04/26/UTF
When you try opening an MS Word document (or, for that matter, most Windows file formats) as raw bytes, you will see gibberish like the sample below, broken intermittently by the actual text. I need to extract the text and ignore the gibberish, which looks something like the sample given below. How do I extract only the text that matters and ignore the rest? Please advise.
Here's a sample of open("sample.doc", "rb").read() on a Word doc. Thanks
00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00In an Interesting news,his is the first time we polled Indian channel community for their preferred memory supplier. Transcend came a close second, was seen to be more popular among class A city based resellers, was also the most recalled memory brand among customers according to resellers. However Transcend channels complained of parallel imports and constant unavailability of the products in grey x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x
The tool that seems most viable, particularly if you need an all-Python solution, is OleFileIO.
doc is a binary format; it's not a markup language or anything like that.
Specs: http://www.microsoft.com/interop/docs/OfficeBinaryFormats.mspx
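A minimal sketch with the olefile package (the current home of OleFileIO); it only lists the streams and dumps the raw WordDocument stream, which still contains binary structure around the text:

import olefile

# A classic .doc is an OLE2 container; the body text (mixed with binary
# structure) sits in the "WordDocument" stream.
ole = olefile.OleFileIO("sample.doc")
print(ole.listdir())  # all streams/storages in the container

if ole.exists("WordDocument"):
    data = ole.openstream("WordDocument").read()
    print(len(data), "bytes in the WordDocument stream")
ole.close()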
There is no generic way to extract information from every file format. You need to know the format to know how to extract the information.
Just wanted to state that first. So what you should look for is libraries and software that can convert/extract the information you want. And as mentioned by Ofir, Microsoft has tools for that for their formats.
But if you cannot do this and want to take the chance that there is readable text in the file that is interesting to you, you could do a normal read and look for sequences of bytes that form text. Then comes the question: what languages/charsets should I support in my hunt for text? Is it multi-byte text?
An easy start is to loop through the data and look for sequences of [a-zA-Z0-9_- ] to find the text. But Word text is probably multi-byte, so you may need to scan two bytes as one character.
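A rough sketch of that scan (the minimum run length and the character class are arbitrary choices, and it only catches single-byte ASCII runs):

import re

# Pull out runs of printable ASCII characters of some minimum length;
# the \x00 padding and other binary structure is simply skipped.
with open("sample.doc", "rb") as f:
    data = f.read()

for run in re.findall(rb"[A-Za-z0-9_\- .,;:'!?()]{8,}", data):
    print(run.decode("ascii"))

For the multi-byte case mentioned above, one crude option is to decode the data with data.decode("utf-16-le", errors="ignore") first and run the same scan over the resulting string.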
Note: some of the newer formats like OpenOffice and docx are multiple files in a compressed container, so you need to decompress the file first and then scan the XML documents for the text you are looking for, as sketched below.
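For those container formats, a sketch along these lines works (word/document.xml is where .docx keeps its body text; the filename is an example):

import re
import zipfile

# .docx (and OpenDocument) files are zip archives; for .docx the body text
# lives in word/document.xml.
with zipfile.ZipFile("sample.docx") as archive:
    xml_bytes = archive.read("word/document.xml")

# Crude extraction: strip the XML tags and collapse whitespace.
text = re.sub(rb"<[^>]+>", b" ", xml_bytes).decode("utf-8")
print(" ".join(text.split()))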
Word doc is a compressed format. You need to uncompress it first to get the real data (try opening a doc file in a program like WinRAR, and you'll see it contains multiple files).
It even seems to be XML, so reading the format should not be that hard, although I'm not sure if you get all the data this way.
I had a similar problem, needing to query hundreds of Word documents. I converted the Word files to text files and used normal text parsing tools. Worked well.
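One way to script that conversion, assuming a command-line converter such as antiword is installed (the directory layout is an example):

import subprocess
from pathlib import Path

# Convert every .doc under ./docs to a plain-text sibling using antiword
# (any doc-to-text converter would do), then parse the .txt files as usual.
for doc in Path("docs").glob("*.doc"):
    result = subprocess.run(["antiword", str(doc)],
                            capture_output=True, check=True)
    doc.with_suffix(".txt").write_bytes(result.stdout)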
I want to save some text to the database using the Django ORM wrappers. The problem is, this text is generated by scraping external websites and many times it seems they are listed with the wrong encoding. I would like to store the raw bytes so I can improve my encoding detection as time goes on without redoing the scrapes. But Django seems to want everything to be stored as unicode. Can I get around that somehow?
You can store the data encoded into Base64, for example. Or try to analyze the HTTP headers from the response; maybe it is simpler to get the proper encoding from there.
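A sketch of the second suggestion using the requests library (the URL is a placeholder); the charset declared by the server is kept alongside the raw bytes:

import requests

resp = requests.get("https://example.com/page")  # placeholder URL

raw_bytes = resp.content                         # the undecoded payload, as received
declared = resp.encoding                         # charset requests derived from the headers (may be a fallback)
content_type = resp.headers.get("Content-Type", "")

# Store raw_bytes plus the declared charset; you can re-decode later if your
# encoding detection improves, without re-scraping.
print(declared, content_type)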
Create a File with the data. Use a Django models.FileField to hold a reference to the file.
No, it does not involve a ton of I/O. If your file is small it adds 2 or 3 I/Os (the directory read, the inode read, and the data read).
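A hypothetical sketch of that pattern (the model and field names are invented):

from django.core.files.base import ContentFile
from django.db import models

class ScrapedPage(models.Model):  # hypothetical model
    source_url = models.URLField()
    raw_file = models.FileField(upload_to="scrapes/")

def store_scrape(url, raw_bytes):
    page = ScrapedPage(source_url=url)
    # ContentFile wraps the bytes; the FileField stores only a path in the
    # database, while the bytes themselves go to the file storage backend.
    page.raw_file.save("page.bin", ContentFile(raw_bytes))  # also saves the row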