Data reading - csv - python

I have some data in a .DFX file and I am trying to read it as a CSV with pandas. But it contains some special characters that pandas does not read, and they also act as separators. I attached one line from it.
The "DC4" character is removed when I print the file. The "SI" is read as a space, correctly. I tried several encodings (utf-8, latin1, etc.), but with no success.
I attached the printed first line as well and marked the places where the characters should be.
My code is simple:
import pandas
file_log = pandas.read_csv("file_log.DFX", header=None)
print(file_log)
I hope I was clear and someone has an idea.
Thanks in advance!
EDIT:
The input. LINK: drive.google.com/open?id=0BxMDhep-LHOIVGcybmsya2JVM28
The expected output:
88.4373 0 12.07.2014/17:05:22 38.0366 38.5179 1.3448 31.9839
30.0070 0 12.07.2014/17:14:27 38.0084 38.5091 0.0056 0.0033

By examining example.DFX in hex (with xxd), the two separators are 0x14 and 0x0f, respectively.
Read the CSV with multiple separators using the Python engine:
import pandas
sep1 = chr(0x14)  # the one shown as DC4
sep2 = chr(0x0f)  # the one shown as SI
file_log = pandas.read_csv('example.DFX', header=None, sep='{}|{}'.format(sep1, sep2), engine='python')
print(file_log)
And you get:
0 1 2 3 4 5 6 7
0 88.4373 0 12.07.2014/17:05:22 38.0366 38.5179 1.3448 31.9839 NaN
1 30.0070 0 12.07.2014/17:14:27 38.0084 38.5091 0.0056 0.0033 NaN
It seems there is an empty column at the end, but I'm sure you can handle that.
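For reference, here is a self-contained sketch of the same approach that also drops the trailing column; the sample line and values are made up to mimic the file:

```python
import io
import pandas as pd

sep1 = chr(0x14)  # the DC4 control character
sep2 = chr(0x0f)  # the SI control character

# A made-up line in the same shape as the .DFX data, ending with a separator
line = sep1.join(['88.4373', '0', '12.07.2014/17:05:22', '38.0366']) + sep2

file_log = pd.read_csv(io.StringIO(line), header=None,
                       sep='{}|{}'.format(sep1, sep2), engine='python')
# The trailing separator yields an all-NaN last column; drop it
file_log = file_log.dropna(axis=1, how='all')
print(file_log.shape)  # (1, 4)
```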

The encoding seems to be ASCII here. DC4 stands for "device control 4" and SI for "shift in". These are control characters in an ASCII file and are not printable. Thus you cannot see them when you issue print(file_log), although your terminal may still act on them (the way \n produces a new line).
Try typing file_log in your interpreter to get the representation of that variable and check if those special characters are included. Chances are that you'll see DC4 in the representation as '\x14' which means hexadecimal 14.
You may then further process these strings in your program by using string manipulation like replace.
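For instance (the sample line here is made up, using the \x14 and \x0f bytes from above):

```python
# A made-up line containing the two control characters
line = '88.4373\x140\x1412.07.2014/17:05:22\x0f'
print(repr(line))  # the control characters show up as \x14 and \x0f

# They can then be replaced like any other character
cleaned = line.replace('\x14', ' ').replace('\x0f', ' ')
print(cleaned)
```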

Related

Python dataframe to_csv() how to remove the tab at the beginning of the output text file

I want to save a pandas DataFrame into a text file and make it R-friendly for later analysis. I used dataframe.to_csv(filename, sep='\t'), but I noticed that the output file begins with a tab, which is not easily readable by read.table() in R.
I used od -c filename, and it showed like this:
\t 1 2 3 4 \t 5 6 7 8 \t 1 2 3 ...
Is there any way to remove the tab at the beginning? Thank you in advance.
Looking at the documentation, it seems that this has to do with the index.
Try this one on for size:
dataframe.to_csv(filename, sep = '\t', index_label=False)
The docs are stating this:
index_label : str or sequence, or False, default None
Column label for index column(s) if desired. If None is given, and header and index are True, then the index names are used. A sequence should be given if the object uses MultiIndex. If False, do not print fields for index names. Use index_label=False for easier importing in R.
I am on Windows so I cannot use the od command to check.
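A quick sketch of the difference on a toy frame: index_label=False keeps the index values but drops the leading header field, while index=False drops the index column entirely.

```python
import io
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})

buf1 = io.StringIO()
df.to_csv(buf1, sep='\t', index_label=False)
print(repr(buf1.getvalue().splitlines()[0]))  # 'a\tb' -- no leading tab

buf2 = io.StringIO()
df.to_csv(buf2, sep='\t', index=False)
print(repr(buf2.getvalue().splitlines()[1]))  # '1\t3' -- no index column at all
```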

How to read csv files (with special characters) in Python? How can I decode the text data? Read encoded text from file and convert to string

I have used tweepy to store the text of tweets in a csv file using Python csv.writer(), but I had to encode the text in utf-8 before storing, otherwise tweepy throws a weird error.
import pandas as pd
data = pd.read_csv(r'C:\Users\Lenovo\Desktop\_Carabinieri_10_tweets.csv', delimiter=",", encoding="utf-8")
data.head()
print(data.head())
Now, the text data is stored like this:
OUTPUT
id … text
0 1228280254256623616 … b'RT #MinisteroDifesa: #14febbraio Il Ministro…
1 1228257366841405441 … b'\xe2\x80\x9cNon t\xe2\x80\x99ama chi amor ti…
2 1228235394954620928 … b'Eseguite dai #Carabinieri del Nucleo Investi…
3 1228219588589965316 … b'Il pianeta brucia\nConosci il black carbon?...
4 1228020579485261824 … b'RT #Coninews: Emozioni tricolore \xe2\x9c\xa…
Although I used "utf-8" to read the file into a DataFrame with the code shown above, the characters look very different in the output: it looks like bytes. The language is Italian.
I tried to decode this (there is more data in other columns; the text is in the second column), but it doesn't decode the text. I cannot use .decode('utf-8') because the csv reader reads the data as strings, i.e. type(row[2]) is str, and when I try to convert it to bytes the data gets encoded once more!
How can I decode the text data?
I would be very happy if you can help with this, thank you in advance.
The problem likely comes from the way you have written your csv file. I would bet a coin that when read as text (with a simple text editor like Notepad, Notepad++, or vi) it actually contains:
1228280254256623616,…,b'RT #MinisteroDifesa: #14febbraio Il Ministro...'
1228257366841405441,…,b'\xe2\x80\x9cNon t\xe2\x80\x99ama chi amor ti...'
...
or:
1228280254256623616,…,"b'RT #MinisteroDifesa: #14febbraio Il Ministro...'"
1228257366841405441,…,"b'\xe2\x80\x9cNon t\xe2\x80\x99ama chi amor ti...'"
...
Pandas read_csv then correctly reads the text representation of a byte string.
The correct fix would be to write true UTF-8 encoded strings, but as I do not know the code that wrote the file, I cannot propose a fix.
A possible workaround is to use ast.literal_eval to convert the text representation into a byte string and decode it:
df['text'] = df['text'].apply(lambda x: ast.literal_eval(x).decode('utf8'))
It should give:
id ... text
0 1228280254256623616 ... RT #MinisteroDifesa: #14febbraio Il Ministro...
1 1228257366841405441 ... “Non t’ama chi amor ti...
...
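A self-contained illustration with one cell value shaped like the ones above:

```python
import ast

# The cell holds the *text representation* of a byte string, not real bytes
cell = "b'\\xe2\\x80\\x9cNon t\\xe2\\x80\\x99ama chi amor ti...'"
decoded = ast.literal_eval(cell).decode('utf8')
print(decoded)  # “Non t’ama chi amor ti...
```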

reading in rho delimited file

I'm trying to use Pandas to read in a delimited file. The separator is a greek character, lowercase rho (þ).
I'm struggling to define the correct read_table parameters so that the resulting data frame is correctly formatted.
Does anyone have any experience or suggestions with this?
An example of the file is below
TimeþUser-IDþAdvertiser-IDþOrder-IDþAd-IDþCreative-IDþCreative-VersionþCreative-Size-IDþSite-IDþPage-IDþCountry-IDþState/ProvinceþBrowser-IDþBrowser-VersionþOS-IDþDMA-IDþCity-IDþZip-CodeþSite-DataþTime-UTC-Sec
03-28-2016-00:50:03þ0þ3893600þ7786669þ298662779þ67802437þ1þ300x250þ1722397þ125754620þ68þþ30þ0.0þ501012þ0þ3711þþþ1459122603
03-28-2016-00:24:29þ0þ3893600þ7352234þ290743769þ55727503þ1þ1x1þ1602646þ117915815þ68þþ31þ0.0þ501012þ0þ3711þþþ1459121069
03-28-2016-00:13:42þ0þ3893600þ7352234þ290743769þ55727503þ1þ1x1þ1602646þ117915815þ68þþ31þ0.0þ501012þ0þ3711þþþ1459120422
03-28-2016-00:21:09þ0þ3893600þ7352234þ290743769þ55727503þ1þ1x1þ1602646þ117915815þ68þþ31þ0.0þ501012þ0þ3711þþþ1459120869
I think what's happening is that the C engine isn't working here. If we switch to the Python engine, which is more powerful but slower, it seems to behave. For example, with the default C engine:
>>> df = pd.read_csv("out.rsv",sep="þ")
>>> df.iloc[:,:5]
TimeþUser-IDþAdvertiser-IDþOrder-IDþAd-IDþCreative-IDþCreative-VersionþCreative-Size-IDþSite-IDþPage-IDþCountry-IDþState/ProvinceþBrowser-IDþBrowser-VersionþOS-IDþDMA-IDþCity-IDþZip-CodeþSite-DataþTime-UTC-Sec
0 03-28-2016-00:50:03þ0þ3893600þ7786669þ29866277...
1 03-28-2016-00:24:29þ0þ3893600þ7352234þ29074376...
2 03-28-2016-00:13:42þ0þ3893600þ7352234þ29074376...
3 03-28-2016-00:21:09þ0þ3893600þ7352234þ29074376...
But with Python:
>>> df = pd.read_csv("out.rsv",sep="þ", engine="python")
>>> df.iloc[:,:5]
Time User-ID Advertiser-ID Order-ID Ad-ID
0 03-28-2016-00:50:03 0 3893600 7786669 298662779
1 03-28-2016-00:24:29 0 3893600 7352234 290743769
2 03-28-2016-00:13:42 0 3893600 7352234 290743769
3 03-28-2016-00:21:09 0 3893600 7352234 290743769
But seriously, þ? You're using þ as a delimiter? The only search hits Google gives me for "rho delimited file" are all related to this question!
Note that you say lowercase rho, but it looks like thorn to me. Maybe it's a lowercase rho on your end and got mangled in posting?
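The two characters are easy to tell apart programmatically:

```python
import unicodedata

print(unicodedata.name('þ'))  # LATIN SMALL LETTER THORN
print(unicodedata.name('ρ'))  # GREEK SMALL LETTER RHO
```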

Python - Parsing Conundrum

I have searched high and low for a resolution to this situation, and tested a few different methods, but I haven't had any luck thus far. Basically, I have a file with data in the following format that I need to convert into a CSV:
(previously known as CyberWay Pte Ltd)
0 2019
01.com
0 1975
1 TRAVEL.COM
0 228
1&1 Internet
97 606
1&1 Internet AG
0 1347
1-800-HOSTING
0 8
1Velocity
0 28
1st Class Internet Solutions
0 375
2iC Systems
0 192
I've tried using re.sub and replacing the whitespace between the numbers on every other line with a comma, but haven't had any success so far. I admit that I normally parse from CSVs, so raw text has been a bit of a challenge for me. I would need to maintain the string formats that are above each respective set of numbers.
I'd prefer the CSV to be formatted as such:
foo bar
0,8
foo bar
0,9
foo bar
0,10
foo bar
0,11
There's about 50,000 entries, so manually editing this would take an obscene amount of time.
If anyone has any suggestions, I'd be most grateful.
Thank you very much.
If you just want to replace whitespace with comma, you can just do:
line = ','.join(line.split())
You'll have to do this only on every other line, but from your question it sounds like you already figured out how to work with every other line.
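One way to restrict the join to every other line, shown here on a short made-up list taken from the sample data:

```python
lines = ['1&1 Internet', '97 606', '1&1 Internet AG', '0 1347']

out = []
for name, numbers in zip(lines[::2], lines[1::2]):
    out.append(name)                       # keep the name line as-is
    out.append(','.join(numbers.split()))  # comma-join the number line
print('\n'.join(out))
```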
If I have understood your requirement correctly, you need a strip() on all lines and a split on whitespace on even lines (counting from 1):
import re

with open("csv.txt", "r") as fp:
    while True:
        line = fp.readline()
        if line == '':
            break
        line = line.strip()
        fields = re.split(r"\s+", fp.readline().strip())
        print('"%s",%s,%s' % (line, fields[0], fields[1]))
The output is a CSV (you might need to escape quotes if they occur in your input):
"Content of odd line",Number1,Number2
I do not understand the 'foo bar' you place as a header on your example's odd lines, though.
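On the quoting point: the csv module handles escaping automatically, so it may be safer than building each line by hand. A sketch, with a made-up name containing quotes:

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf)
# A made-up entry whose name contains embedded quotes
writer.writerow(['Name with "quotes"', 0, 1975])
print(buf.getvalue().strip())  # "Name with ""quotes""",0,1975
```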

Convert binary data to web-safe text and back - Python

I want to convert a binary file (such as a jpg, mp3, etc) to web-safe text and then back into binary data. I've researched a few modules and I think I'm really close but I keep getting data corruption.
After looking at the documentation for binascii I came up with this:
from binascii import *
raw_bytes = open('test.jpg','rb').read()
text = b2a_qp(raw_bytes,quotetabs=True,header=False)
bytesback = a2b_qp(text,header=False)
f = open('converted.jpg','wb')
f.write(bytesback)
f.close()
When I try to open the converted.jpg I get data corruption :-/
I also tried using b2a_base64 with 57-long blocks of binary data. I took each block, converted to a string, concatenated them all together, and then converted back in a2b_base64 and got corruption again.
Can anyone help? I'm not super knowledgeable on all the intricacies of bytes and file formats. I'm using Python on Windows if that makes a difference with the \r\n stuff
Your code looks quite complicated. Try this:
#!/usr/bin/env python
from binascii import b2a_base64, a2b_base64

raw_bytes = open('28.jpg', 'rb').read()

# Alternative 1: encode the whole file as one base64 string
str_one = b2a_base64(raw_bytes)
print(a2b_base64(str_one) == raw_bytes)  # True

# Alternative 2: split the encoded text into lines, rejoin, and decode
str_list = b2a_base64(raw_bytes).split(b"\n")
bytesBackAll = a2b_base64(b"".join(str_list))
print(bytesBackAll == raw_bytes)  # True

Alternative 1 seems most straightforward to me: just make it one string, process it, and convert it back.
You should use base64 encoding instead of quoted-printable: use b2a_base64() and a2b_base64().
Quoted-printable output is much bigger for binary data like pictures. In that encoding, each non-alphanumeric byte is changed into =HEX; it is meant for text that consists mainly of alphanumeric characters, like email subjects.
Base64 is much better for mostly binary data. It takes 6 bits of the first byte, then the last 2 bits of the 1st byte plus 4 bits of the 2nd byte, and so on. It can be recognized by the = padding at the end of the encoded text (sometimes another character is used).
As an example I took a .jpeg of 271,700 bytes. In quoted-printable it is 627,857 bytes, while in base64 it is 362,269 bytes. The size of quoted-printable output depends on the data: text that is letters only does not grow at all. The base64 size is orig_size * 8 / 6, rounded up.
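The size arithmetic checks out: base64 emits 4 output bytes per 3 input bytes, so for a 271,700-byte input:

```python
import base64
import math

n = 271700  # size of the example .jpeg
encoded = base64.b64encode(b'\x00' * n)
# 4 * ceil(271700 / 3) = 362268; the 362,269 above includes
# the trailing newline that b2a_base64 appends
print(len(encoded))
```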
Your documentation reference is for Python 3.0.1. There is no good reason to be using Python 3.0; you should be using 3.2 or 2.7. What exactly are you using?
Suggestion: (1) change bytes to raw_bytes to avoid confusion with the bytes built-in (2) check for raw_bytes == bytes_back in your test script (3) while your test should work with quoted-printable, it is very inefficient for binary data; use base64 instead.
Update: Base64 encoding produces 4 output bytes for every 3 input bytes. Your base64 code doesn't work with 56-byte chunks because 56 is not an integral multiple of 3; each chunk is padded out to a multiple of 3. Then you join the chunks and attempt to decode, which is guaranteed not to work.
Your chunking loop would be much better written as:
output_string = b''.join(
    b2a_base64(raw_bytes[i:i+57]) for i in range(0, len(raw_bytes), 57)
)
In any case, chunking is rather slow and pointless; just do b2a_base64(raw_bytes).
@PMC's answer, copied from the question:
Here's what works:
from binascii import b2a_base64, a2b_base64

raw_bytes = open('28.jpg', 'rb').read()
str_list = []
i = 0
while i < len(raw_bytes):
    byteSegment = raw_bytes[i:i+57]
    str_list.append(b2a_base64(byteSegment))
    i += 57
bytesBackAll = a2b_base64(b''.join(str_list))
print(bytesBackAll == raw_bytes)  # True
Thanks for the help guys. I'm not sure why this would fail with [0:56] instead of [0:57] but I'll leave that as an exercise for the reader :P
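For what it's worth, the higher-level base64 module round-trips without any chunking; a sketch with made-up stand-in bytes:

```python
import base64

raw_bytes = bytes(range(256)) * 4  # stand-in for real file contents
text = base64.b64encode(raw_bytes).decode('ascii')  # plain ASCII, web-safe
print(base64.b64decode(text) == raw_bytes)  # True
```

If the text must also survive inside URLs, base64.urlsafe_b64encode swaps '+' and '/' for '-' and '_'.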
