Read CSV with extra commas and no quotechar with Pandas? - python

Data:
from io import StringIO
import pandas as pd
s = '''ID,Level,QID,Text,ResponseID,responseText,date_key
375280046,S,D3M,Which is your favorite?,D5M0,option 1,2012-08-08 00:00:00
375280046,S,D3M,How often? (at home, at work, other),D3M0,Work,2010-03-31 00:00:00
375280046,M,A78,Do you prefer a, b, or c?,A78C,a,2010-03-31 00:00:00'''
df = pd.read_csv(StringIO(s))
Error received:
pandas.io.common.CParserError: Error tokenizing data. C error: Expected 7 fields in line 3, saw 9
It's very obvious why I'm receiving this error. The data contains text such as How often? (at home, at work, other) and Do you prefer a, b, or c?.
How does one read this type of data into a pandas DataFrame?

Of course, as I write the question, I figured it out. Rather than delete it, I'll share it with my future self when I forget how to do this.
Apparently, pandas' sep parameter (default ',') can also be a regular expression.
The solution was to pass sep=r',(?!\s)' to read_csv, like so:
df = pd.read_csv(StringIO(s), sep=r',(?!\s)')
The (?!\s) part is a negative lookahead that matches only commas not followed by a whitespace character.
Result:
ID Level QID Text ResponseID \
0 375280046 S D3M Which is your favorite? D5M0
1 375280046 S D3M How often? (at home, at work, other) D3M0
2 375280046 M A78 Do you prefer a, b, or c? A78C
responseText date_key
0 option 1 2012-08-08 00:00:00
1 Work 2010-03-31 00:00:00
2 a 2010-03-31 00:00:00
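One caveat worth noting: the default C parser does not support regex separators, so pandas falls back to the slower Python engine and emits a ParserWarning. Passing engine='python' explicitly gives the same result and makes the fallback intentional:
from io import StringIO
import pandas as pd

# regex separators are only supported by the Python engine; naming it
# explicitly avoids the ParserWarning about falling back from the C engine
df = pd.read_csv(StringIO(s), sep=r',(?!\s)', engine='python')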

Related

Unable to convert comma separated integers and non-integer values to float in a series column in Python

Loading in the data
In: import pandas as pd
In: df = pd.read_csv('name', sep=';', encoding='unicode_escape')
In: df.dtypes
Out: amount    object
I have an object column with amounts like 150,01 and 43,69. There are about 5,000 rows.
df['amount']
0 31
1 150,01
2 50
3 54,4
4 32,79
...
4950 25,5
4951 39,5
4952 75,56
4953 5,9
4954 43,69
Name: amount, Length: 4955, dtype: object
Naturally, I tried to convert the series with the locale module, which is supposed to turn the strings into floats. I came back with the following error:
In: import locale
In: locale.setlocale(locale.LC_NUMERIC, 'en_US.UTF-8')
Out: 'en_US.UTF-8'
In: df['amount'].apply(locale.atof)
Out: ValueError: could not convert string to float: ' - '
Now that I'm aware there are non-numeric values in the column, I tried to use isnumeric methods to turn the non-numeric values into NaN.
Unfortunately, due to the comma-separated structure, all the values turned into -1.
0 -1
1 -1
2 -1
3 -1
4 -1
..
4950 -1
4951 -1
4952 -1
4953 -1
4954 -1
Name: amount, Length: 4955, dtype: int64
How do I turn the "," into "." after first removing the "-" values? I tried .drop() and .truncate(), but they do not help. If I replace the "," with " ", it also causes trouble, since there are non-integer values.
Please help!
Documentation that I came across:
- https://stackoverflow.com/questions/21771133/finding-non-numeric-rows-in-dataframe-in-pandas
- https://stackoverflow.com/questions/56315468/replace-comma-and-dot-in-pandas
p.s. This is my first post, please be kind
Sounds like you have a European-style CSV similar to the following. If your format is different, provide actual sample data, as several commenters asked:
data.csv
thing;amount
thing1;31
thing2;150,01
thing3;50
thing4;54,4
thing5;1.500,22
To read it, specify the column, decimal and thousands separator as needed:
import pandas as pd
df = pd.read_csv('data.csv', sep=';', decimal=',', thousands='.')
print(df)
Output:
thing amount
0 thing1 31.00
1 thing2 150.01
2 thing3 50.00
3 thing4 54.40
4 thing5 1500.22
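If the data is already loaded as strings, as in the question, a similar conversion can be done in memory. A minimal sketch, assuming the only non-numeric entries are stray hyphens like ' - ':
import numpy as np
import pandas as pd

s = pd.Series(['31', '150,01', ' - ', '1.500,22'])
cleaned = (s.str.strip()
            .replace('-', np.nan)                 # treat bare hyphens as missing
            .str.replace('.', '', regex=False)    # drop thousands separators
            .str.replace(',', '.', regex=False))  # comma becomes decimal point
amounts = pd.to_numeric(cleaned, errors='coerce')
Note also that locale.atof under en_US treats the comma as a thousands separator, not a decimal point, which is another reason the locale approach in the question would have produced wrong values; a comma-decimal locale such as de_DE would be needed there.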
Posting as an answer since it contains multi-line code, despite not truly answering your question (yet):
Try using chardet. pip install chardet to get the package, then in your import block, add import chardet.
When importing the file, do something like:
with open("C:/path/to/file.csv", 'r') as f:
data = f.read()
result = chardet.detect(data.encode())
charencode = result['encoding']
# now re-set the handler to the beginning and re-read the file:
f.seek(0, 0)
data = pd.read_csv(f, delimiter=';', encoding=charencode)
Alternatively, for reasons I cannot fathom, passing engine='python' as a parameter often works. You'd just do
data = pd.read_csv('C:/path/to/file.csv', engine='python')
@Mark Tolonen has a more elegant approach to standardizing the actual data, but my (hacky) way of doing it was to just write a function:
def stripThousands(df_column):
    df_column = df_column.replace(',', '', regex=True)
    df_column = df_column.apply(pd.to_numeric, errors='coerce')
    return df_column
If you don't care about the entries that are just hyphens, you could use a function like
def screw_hyphens(column):
    column.replace(['-'], np.nan, inplace=True)
or, if np.nan values will be a problem, you can just replace them with column.replace('-', '', inplace=True)
EDIT: there was a typo in the block outlining the usage of chardet; it should be correct now (previously the end of the last line was encoding=charenc).

Python automatically converting specific data to date format and losing data

I'm cleaning a csv file with pandas, mainly removing special characters such as ('/', '#', etc). The file has 7 columns (none of which are dates).
In some of the columns, there's slash-separated numeric data such as '11/6/1980'.
I've noticed that directly after reading the csv file,
df = pd.read_csv('report16.csv', encoding='ANSI')
this data becomes '11/6/80', and after cleaning it becomes '11 6 80' (it's the same result in the output file). So wherever the data has '/', it's being interpreted as a date and the first 2 digits of the year are being dropped.
Data        Expected result    Actual Result
11/6/1980   11 6 1980          11 6 80
12/8/1983   12 8 1983          12 8 83
Both of the above results are wrong because in the Actual Result column, I'm losing 2 digits towards the end.
The data looks like this
Org Name    Code         Code copy
ABC         11/6/1980    11/6/1980
DEF         12/8/1983    12/8/1983
GH          11/5/1987    11/5/1987
OrgName, Code, Code copy
ABC, 11/6/1980, 11/6/1980
DEF, 12/8/1983, 12/8/1983
GH, 11/5/1987, 11/5/1987
KC, 9000494, 9000494
It's worth mentioning that the column contains other data such as '900490', strings, etc., but in those instances there aren't any problems.
What could be done to not allow this conversion?
Not an answer, but comments do not allow including well-presented code and data.
Here is what I call a minimal reproducible example:
Content of the sample.csv file:
Data,Expected result,Actual Result
11/6/1980,11 6 1980,11 6 80
12/8/1983,12 8 1983,12 8 83
Code:
import pandas as pd

df = pd.read_csv('sample.csv')
print(df)
s = df['Data'].str.replace('/', ' ')
print((df['Expected result'] == s).all())
It gives:
Data Expected result Actual Result
0 11/6/1980 11 6 1980 11 6 80
1 12/8/1983 12 8 1983 12 8 83
True
This proves that read_csv has correctly read the file and has not changed anything.
PLEASE SHOW THE CONTENT OF YOUR CSV FILE AS TEXT, along with enough code to reproduce your problem.
How about trying a string operation? First select the column that you would like to modify and replace "/" or "#" with whitespace: column.str.replace("/", " "). I hope this works!
The behavior of converting dates is not strictly a Python issue; you are using pandas' read_csv.
Try explicitly declaring a separator. If sep is not declared, pandas guesses.
df = pd.read_csv('report16.csv', encoding='ANSI', sep=',')
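If the goal is to rule out type inference entirely, reading every column as text is a useful check; a sketch, assuming the same file and encoding as above:
import pandas as pd

# dtype=str keeps every column as text, so pandas performs no conversion;
# if the digits are already missing here, the file itself was modified
# (e.g. by Excel) before pandas ever read it
df = pd.read_csv('report16.csv', encoding='ANSI', sep=',', dtype=str)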

apply function takes a long time to run

I'm working with a dataset of about 32,000,000 rows:
RangeIndex: 32084542 entries, 0 to 32084541
df.head()
time device kpi value
0 2020-10-22 00:04:03+00:00 1-xxxx chassis.routing-engine.0.cpu-idle 100
1 2020-10-22 00:04:06+00:00 2-yyyy chassis.routing-engine.0.cpu-idle 97
2 2020-10-22 00:04:07+00:00 3-zzzz chassis.routing-engine.0.cpu-idle 100
3 2020-10-22 00:04:10+00:00 4-dddd chassis.routing-engine.0.cpu-idle 93
4 2020-10-22 00:04:10+00:00 5-rrrr chassis.routing-engine.0.cpu-idle 99
My goal is to create one additional column named role, filled in based on a pattern matched against the device column.
This is my approach
def router_role(row):
    if row["device"].startswith("1"):
        row["role"] = '1'
    if row["device"].startswith("2"):
        row["role"] = '2'
    if row["device"].startswith("3"):
        row["role"] = '3'
    if row["device"].startswith("4"):
        row["role"] = '4'
    return row
then,
df = df.apply(router_role,axis=1)
However, it's taking a lot of time... any ideas about other possible approaches?
Thanks
Apply is very slow and never very good. Try something like this instead:
df['role'] = df['device'].str[0]
Using apply is notoriously slow because it doesn't take advantage of multithreading (see, for example, pandas multiprocessing apply). Instead, use built-ins:
>>> import pandas as pd
>>> df = pd.DataFrame([["some-data", "1-xxxx"], ["more-data", "1-yyyy"], ["other-data", "2-xxxx"]])
>>> df
0 1
0 some-data 1-xxxx
1 more-data 1-yyyy
2 other-data 2-xxxx
>>> df["Derived Column"] = df[1].str.split("-", expand=True)[0]
>>> df
0 1 Derived Column
0 some-data 1-xxxx 1
1 more-data 1-yyyy 1
2 other-data 2-xxxx 2
Here, I'm assuming that you might have multiple digits before the hyphen (e.g. 42-aaaa), hence the extra work to split the column and get the first value of the split. If you're just getting the first character, do what @teepee did in their answer with just indexing into the string.
You can trivially convert your code to use np.vectorize().
See here:
Performance of Pandas apply vs np.vectorize to create new column from existing columns
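For illustration, a minimal sketch of the np.vectorize() route, assuming the role is simply the part of the device name before the hyphen:
import numpy as np

def device_to_role(device):
    # hypothetical helper: take everything before the first hyphen
    return device.split('-')[0]

df['role'] = np.vectorize(device_to_role)(df['device'])
Keep in mind that np.vectorize() is essentially a loop under the hood, so the built-in .str accessors shown above will usually still be faster.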

Pandas read_csv - How to handle a comma inside double quotes that are themselves inside double quotes

This is not the same question as double quoted elements in csv cant read with pandas.
The difference is that in that question: "ABC,DEF" was breaking the code.
Here, "ABC "DE" ,F" is breaking the code.
The whole string should be parsed in as 'ABC "DE", F'. Instead the inside double quotes are leading to the below-mentioned issue.
I am working with a csv file that contains the following type of entries:
header1, header2, header3,header4
2001-01-01,123456,"abc def",V4
2001-01-02,789012,"ghi "jklm" n,op",V4
The second row of data is breaking the code, with the following error:
ParserError: Error tokenizing data. C error: Expected 4 fields in line 1234, saw 5
I have tried playing with various sep, delimiter & quoting etc. arguments but nothing seems to work.
Can someone please help with this? Thank you!
Based on the two rows you have provided, here is an option where the text file is read into a Series object and then a regex extract via Series.str.extract() gets the information you want into a DataFrame:
import pandas as pd

with open('so.txt') as f:
    contents = f.readlines()
s = pd.Series(contents)
s now looks like the following:
0 header1, header2, header3,header4\n
1 \n
2 2001-01-01,123456,"abc def",V4\n
3 \n
4 2001-01-02,789012,"ghi "jklm" n,op",V4
Now you can use regex extract to get what you want into a DataFrame:
df = s.str.extract(r'^([0-9]{4}-[0-9]{2}-[0-9]{2}),([0-9]+),(.+),(\w{2})$')
# remove empty rows
df = df.dropna(how='all')
df looks like the following:
0 1 2 3
2 2001-01-01 123456 "abc def" V4
4 2001-01-02 789012 "ghi "jklm" n,op" V4
and you can set your column names with df.columns = ['header1', 'header2', 'header3', 'header4']
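As an optional follow-up, the extracted free-text column still carries its outer double quotes; a small sketch to strip them, assuming the df from above:
# column 2 holds the quoted text; strip the outer double quotes only
df[2] = df[2].str.strip('"')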

python pass string to pandas dataframe in a specific format

I am not entirely sure if this is possible but I thought I would go ahead and ask. I currently have a string that looks like the following:
myString =
"{"Close":175.30,"DownTicks":122973,"DownVolume":18639140,"High":177.47,"Low":173.66,"Open":177.32,"Status":29,"TimeStamp":"\/Date(1521489600000)\/","TotalTicks":245246,"TotalVolume":33446771,"UnchangedTicks":0,"UnchangedVolume":0,"UpTicks":122273,"UpVolume":14807630,"OpenInterest":0}
{"Close":175.24,"DownTicks":69071,"DownVolume":10806836,"High":176.80,"Low":174.94,"Open":175.24,"Status":536870941,"TimeStamp":"\/Date(1521576000000)\/","TotalTicks":135239,"TotalVolume":19649350,"UnchangedTicks":0,"UnchangedVolume":0,"UpTicks":66168,"UpVolume":8842514,"OpenInterest":0}"
The datasets can be of varying lengths (this example has 2 datasets but there could be more); however, the parameters will always be the same (Close, DownTicks, DownVolume, etc.).
Is there a way to create a dataframe from this string that takes the parameters as the index, and the numbers as the values in the column? So the dataframe would look something like this:
df =
0 1
index
Close 175.30 175.24
DownTicks 122973 69071
DownVolume 18639140 10806836
High 177.47 176.80
Low 173.66 174.94
Open 177.32 175.24
(etc)...
It looks like there are some issues with your input. As mentioned by @lmiguelvargasf, there's a missing comma at the end of the first dictionary. Additionally, there's a \n between the records, which a simple str.replace fixes.
Once those issues have been solved, the process is pretty simple.
myString = '''{"Close":175.30,"DownTicks":122973,"DownVolume":18639140,"High":177.47,"Low":173.66,"Open":177.32,"Status":29,"TimeStamp":"\/Date(1521489600000)\/","TotalTicks":245246,"TotalVolume":33446771,"UnchangedTicks":0,"UnchangedVolume":0,"UpTicks":122273,"UpVolume":14807630,"OpenInterest":0}
{"Close":175.24,"DownTicks":69071,"DownVolume":10806836,"High":176.80,"Low":174.94,"Open":175.24,"Status":536870941,"TimeStamp":"\/Date(1521576000000)\/","TotalTicks":135239,"TotalVolume":19649350,"UnchangedTicks":0,"UnchangedVolume":0,"UpTicks":66168,"UpVolume":8842514,"OpenInterest":0}'''
import ast
import pandas as pd

myString = myString.replace('\n', ',')
list_of_dicts = list(ast.literal_eval(myString))
df = pd.DataFrame.from_dict(list_of_dicts).T
df
0 1
Close 175.3 175.24
DownTicks 122973 69071
DownVolume 18639140 10806836
High 177.47 176.8
Low 173.66 174.94
Open 177.32 175.24
OpenInterest 0 0
Status 29 536870941
TimeStamp \/Date(1521489600000)\/ \/Date(1521576000000)\/
TotalTicks 245246 135239
TotalVolume 33446771 19649350
UnchangedTicks 0 0
UnchangedVolume 0 0
UpTicks 122273 66168
UpVolume 14807630 8842514
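As an aside, since each line of the original string is itself valid JSON, pandas can also parse it directly as line-delimited JSON; a sketch, assuming the same myString:
from io import StringIO
import pandas as pd

# lines=True treats each line as one JSON record; convert_dates=False
# leaves the /Date(...)/ timestamp strings untouched
df = pd.read_json(StringIO(myString), lines=True, convert_dates=False).T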
