Pandas error reading csv with double quotes - python

I've read all related topics - like this, this and this - but couldn't get a solution to work.
I have an input csv file like this:
ItemId,Content
i0000008,{"Title":"Edison Kinetoscopic Record of a Sneeze","Year":"1894","Rated":"N/A"}
i0000010,{"Title":"Employees, Leaving the Lumiére, Factory","Year":"1895","Rated":"N/A"}
I've tried several different approaches but couldn't get it to work. I want to read this csv file into a Dataframe like this:
ItemId Content
-------- -------------------------------------------------------------------------------
i0000008 {"Title":"Edison Kinetoscopic Record of a Sneeze","Year":"1894","Rated":"N/A"}
i0000010 {"Title":"Employees, Leaving the Lumiére, Factory","Year":"1895","Rated":"N/A"}
With following code (Python 3.9):
df = pd.read_csv('test.csv', sep=',', skipinitialspace=True, quotechar='"')
As far as I understand, the commas inside the dictionary column, even the ones inside quotation marks, are being treated as regular separators, so it raises the following error:
pandas.errors.ParserError: Error tokenizing data. C error: Expected 4 fields in line 3, saw 6
Is it possible to produce the desired result? Thanks.

The problem is that the commas in the Content column are interpreted as separators. You can solve this by using pd.read_fwf to manually set the character positions on which to split:
df = pd.read_fwf('test.csv', colspecs=[(0, 8),(9,100)], header=0, names=['ItemId', 'Content'])
Result:
     ItemId                                                                          Content
0  i0000008  {"Title":"Edison Kinetoscopic Record of a Sneeze","Year":"1894","Rated":"N/A"}
1  i0000010  {"Title":"Employees, Leaving the Lumiére, Factory","Year":"1895","Rated":"N/A"}

I don't think you'll be able to read it directly with pandas, because the delimiter appears multiple times within a single value; however, by reading it with plain Python and doing some processing, you can convert it into a pandas DataFrame:
import pandas as pd

def splitValues(x):
    # split on the first comma only; everything after it is the Content field
    index = x.find(',')
    return x[:index], x[index+1:].strip()

with open('file.csv') as data:
    columns = next(data).strip().split(',')
    df = pd.DataFrame(columns=columns, data=(splitValues(row) for row in data))
OUTPUT:
ItemId Content
0 i0000008 {"Title":"Edison Kinetoscopic Record of a Sneeze","Year":"1894","Rated":"N/A"}
1 i0000010 {"Title":"Employees, Leaving the Lumiére, Factory","Year":"1895","Rated":"N/A"}

Related

pandas reading csv with one row spanning multiple lines

My csv starts out like this:
,index,spotify_id,artist_name,track_name,album_name,duration_ms,lyrics,lyrics_bert_embeddings
0,0,5Jk0vfT81ltt2rYyrWDzZ5,Hundred Waters,Xtalk - Kodak to Graph Remix,The Moon Rang Like a Bell,285327,not fetched,"[ 0.00722605 -0.23726921 0.15163635 -0.28774077 0.07081255 0.26606813
each row ends like this in a new line:
0.03439684 -0.29289168 0.13590978 0.2332756 -0.24305075 0.2034984 ]"
These values are from a big numpy array encoded with np.array2string() and span multiple lines in the csv.
When using pd.read_csv it throws a "ParserError: Error tokenizing data. C error: EOF inside string starting at row 90607". When using the parameter engine="python" it throws a "ParserError: unexpected end of data". When using the separator sep='\t+' it just puts each continuation line in a new row of the dataframe. The same happens when using csv.reader, opening the file with open(file_path) and iterating through each line.
Is there a way to automatically append each row to the original row it belongs to or do I have to preprocess this by hand?
I tried your CSV data to check it. The code is pasted below along with the output.
import pandas as pd
import csv
data_path = 'dt.csv'
df = pd.read_csv(data_path, header=None, quoting=csv.QUOTE_NONE, encoding='utf-8')
dt_json = pd.DataFrame.to_json(df)
print(dt_json)
As an example, I converted the data from CSV to JSON using the pandas DataFrame:
{"0":{"0":null,"1":0.0},
"1":{"0":"index","1":"0"},
"2":{"0":"spotify_id","1":"5Jk0vfT81ltt2rYyrWDzZ5"},
"3":{"0":"artist_name","1":"Hundred Waters"},
"4":{"0":"track_name","1":"Xtalk - Kodak to Graph Remix"},
"5":{"0":"album_name","1":"The Moon Rang Like a Bell"},
"6":{"0":"duration_ms","1":"285327"},
"7":{"0":"lyrics","1":"not fetched"},
"8":{"0":"lyrics_bert_embeddings","1":"[ 0.00722605 -0.23726921 0.15163635 -0.28774077 0.07081255 0.26606813\r\n 0.03439684 -0.29289168 0.13590978 0.2332756 -0.24305075 0.2034984 ]"}}
https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html?highlight=read_csv
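If you then need to turn the embedded lyrics_bert_embeddings strings back into arrays, here is a minimal sketch (my addition, assuming the fields look exactly like the np.array2string() output shown above; parse_embedding is a hypothetical helper name):
import numpy as np

def parse_embedding(s):
    # '[ 0.00722605 -0.23726921 ... ]' -> array([0.00722605, -0.23726921, ...])
    # split() swallows the embedded newlines as ordinary whitespace
    return np.array(s.strip().strip('[]').split(), dtype=float)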

ParserError: Expected 1 fields in line 4, saw 2 [duplicate]

I'm trying to use pandas to manipulate a .csv file but I get this error:
pandas.parser.CParserError: Error tokenizing data. C error: Expected 2 fields in line 3, saw 12
I have tried to read the pandas docs, but found nothing.
My code is simple:
path = 'GOOG Key Ratios.csv'
#print(open(path).read())
data = pd.read_csv(path)
How can I resolve this? Should I use the csv module or another language?
File is from Morningstar
You could also try:
data = pd.read_csv('file1.csv', on_bad_lines='skip')
Do note that this will cause the offending lines to be skipped.
Edit
For Pandas < 1.3.0 try
data = pd.read_csv("file1.csv", error_bad_lines=False)
as per pandas API reference.
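With pandas >= 1.4 you can also pass a callable to on_bad_lines (python engine only) to inspect or repair the offending rows instead of silently dropping them. A minimal sketch, with file1.csv as a placeholder:
import pandas as pd

bad = []

def handle(fields):
    # receives the list of fields of each offending row
    bad.append(fields)
    return None  # returning None drops the row

data = pd.read_csv('file1.csv', on_bad_lines=handle, engine='python')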
It might be an issue with:
- the delimiters in your data
- the first row, as @TomAugspurger noted
To solve it, try specifying the sep and/or header arguments when calling read_csv. For instance,
df = pandas.read_csv(filepath, sep='delimiter', header=None)
In the code above, sep defines your delimiter and header=None tells pandas that your source data has no row for headers / column titles. Thus saith the docs: "If file contains no header row, then you should explicitly pass header=None". In this instance, pandas automatically creates whole-number indices for each field {0,1,2,...}.
According to the docs, the delimiter thing should not be an issue. The docs say that "if sep is None [not specified], will try to automatically determine this." I however have not had good luck with this, including instances with obvious delimiters.
Another solution may be to auto-detect the delimiter:
import csv
import pandas as pd

csv_file = open('file1.csv')  # an open file handle, not a path
# use the first 2 lines of the file to detect the separator
temp_lines = csv_file.readline() + '\n' + csv_file.readline()
dialect = csv.Sniffer().sniff(temp_lines, delimiters=';,')
# remember to go back to the start of the file for the next time it's read
csv_file.seek(0)
df = pd.read_csv(csv_file, sep=dialect.delimiter)
The parser is getting confused by the header of the file. It reads the first row and infers the number of columns from that row. But the first two rows aren't representative of the actual data in the file.
Try it with data = pd.read_csv(path, skiprows=2)
This may be an issue with the delimiter: many CSV files are created with a tab separator rather than a comma, so try reading with the tab character (\t) as the separator:
data = pd.read_csv("File_path", sep='\t')
I had this problem when I was trying to read in a CSV without passing in column names.
df = pd.read_csv(filename, header=None)
I specified the column names in a list beforehand, passed them into names, and that solved the problem immediately. If you don't have set column names, you could just create as many placeholder names as the maximum number of columns that might be in your data.
col_names = ["col1", "col2", "col3", ...]
df = pd.read_csv(filename, names=col_names)
Your CSV file might have a variable number of columns, and read_csv inferred the number of columns from the first few rows. Two ways to solve it in this case:
1) Change the CSV file to have a dummy first line with the max number of columns (and specify header=[0])
2) Or use names = list(range(0, N)), where N is the max number of columns (see the sketch below)
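A minimal sketch of option 2, where N comes straight from the error message (e.g. "saw 12" means N = 12); file1.csv is a placeholder:
import pandas as pd

N = 12  # the maximum field count reported by the ParserError
df = pd.read_csv('file1.csv', names=list(range(N)))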
I had this problem as well but perhaps for a different reason. I had some trailing commas in my CSV that were adding an additional column that pandas was attempting to read. Using the following works but it simply ignores the bad lines:
data = pd.read_csv('file1.csv', error_bad_lines=False)
If you want to keep the lines, an ugly kind of hack for handling the errors is to do something like the following:
line = []
expected = []
saw = []
cont = True
while cont:
    try:
        data = pd.read_csv('file1.csv', skiprows=line)
        cont = False
    except Exception as e:
        errortype = str(e).split('.')[0].strip()
        if errortype == 'Error tokenizing data':
            # pull the "expected", "line" and "saw" numbers out of the message
            cerror = str(e).split(':')[1].strip().replace(',', '')
            nums = [n for n in cerror.split(' ') if str.isdigit(n)]
            expected.append(int(nums[0]))
            saw.append(int(nums[2]))
            line.append(int(nums[1]) - 1)
        else:
            cerror = 'Unknown'
            print('Unknown Error - 222')

if line != []:
    # Handle the errors however you want
    pass
I proceeded to write a script to reinsert the lines into the DataFrame, since the bad lines are given by the variable line in the code above. This can all be avoided by simply using the csv reader. Hopefully the pandas developers can make it easier to deal with this situation in the future.
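A hedged sketch of what such a reinsertion script could look like (my reconstruction, not the original author's script; the rule of truncating each bad line to the expected field count is an assumption):
import pandas as pd

with open('file1.csv') as f:
    raw_lines = f.readlines()

bad_rows = []
for i, exp in zip(line, expected):   # 'line' and 'expected' from the loop above
    fields = raw_lines[i].strip().split(',')
    bad_rows.append(fields[:exp])    # drop the surplus fields

data = pd.concat([data, pd.DataFrame(bad_rows, columns=data.columns)],
                 ignore_index=True)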
The following worked for me (I posted this answer, because I specifically had this problem in a Google Colaboratory Notebook):
df = pd.read_csv("/path/foo.csv", delimiter=';', skiprows=0, low_memory=False)
You can try:
data = pd.read_csv('file1.csv', sep='\t')
I came across the same issue. Using pd.read_table() on the same source file seemed to work. I could not trace the reason for this but it was a useful workaround for my case. Perhaps someone more knowledgeable can shed more light on why it worked.
Edit:
I found that this error creeps up when you have some text in your file that does not have the same format as the actual data. This is usually header or footer information (more than one line, so skip_header doesn't work) which will not be separated by the same number of commas as your actual data (when using read_csv). Using read_table uses a tab as the delimiter, which could circumvent the user's current error but introduce others.
I usually get around this by reading the extra data into a file then use the read_csv() method.
The exact solution might differ depending on your actual file, but this approach has worked for me in several cases
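A hedged sketch of that pre-filtering idea (the file names and the field-count test are assumptions; adapt them to however your header/footer junk differs from the real data):
import pandas as pd

with open('raw_export.txt') as src, open('clean.csv', 'w') as dst:
    for row in src:
        if row.count(',') == 2:  # keep only rows with the expected 3 fields
            dst.write(row)

df = pd.read_csv('clean.csv', header=None)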
I've had this problem a few times myself. Almost every time, the reason is that the file I was attempting to open was not a properly saved CSV to begin with. And by "properly", I mean each row had the same number of separators or columns.
Typically it happened because I had opened the CSV in Excel then improperly saved it. Even though the file extension was still .csv, the pure CSV format had been altered.
Any file saved with pandas to_csv will be properly formatted and shouldn't have that issue. But if you open it with another program, it may change the structure.
Hope that helps.
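For instance, a quick round trip shows why pandas-written files are safe: to_csv quotes any field that contains the separator, so it reads back cleanly:
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': ['x,y', 'z']})
df.to_csv('safe.csv', index=False)   # the embedded comma gets quoted as "x,y"
df2 = pd.read_csv('safe.csv')        # reads back without a ParserError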
I've had a similar problem while trying to read a tab-delimited table with spaces, commas and quotes:
1115794 4218 "k__Bacteria", "p__Firmicutes", "c__Bacilli", "o__Bacillales", "f__Bacillaceae", ""
1144102 3180 "k__Bacteria", "p__Firmicutes", "c__Bacilli", "o__Bacillales", "f__Bacillaceae", "g__Bacillus", ""
368444 2328 "k__Bacteria", "p__Bacteroidetes", "c__Bacteroidia", "o__Bacteroidales", "f__Bacteroidaceae", "g__Bacteroides", ""
import pandas as pd
# Same error for read_table
counts = pd.read_csv(path_counts, sep='\t', index_col=2, header=None, engine = 'c')
pandas.io.common.CParserError: Error tokenizing data. C error: out of memory
This says it has something to do with the C parsing engine (the default one). Maybe changing to the python one will change something:
counts = pd.read_table(path_counts, sep='\t', index_col=2, header=None, engine='python')
Segmentation fault (core dumped)
Now that is a different error.
If we go ahead and remove the spaces from the table, the error from the python engine changes once again:
1115794 4218 "k__Bacteria","p__Firmicutes","c__Bacilli","o__Bacillales","f__Bacillaceae",""
1144102 3180 "k__Bacteria","p__Firmicutes","c__Bacilli","o__Bacillales","f__Bacillaceae","g__Bacillus",""
368444 2328 "k__Bacteria","p__Bacteroidetes","c__Bacteroidia","o__Bacteroidales","f__Bacteroidaceae","g__Bacteroides",""
_csv.Error: ' ' expected after '"'
It becomes clear that pandas was having problems parsing our rows. To parse the table with the python engine I needed to remove all spaces and quotes from the table beforehand. Meanwhile, the C engine kept crashing even with commas in the rows.
To avoid creating a new file with replacements I did this, as my tables are small:
from io import StringIO

with open(path_counts) as f:
    input = StringIO(f.read().replace('", ""', '').replace('"', '').replace(', ', ',').replace('\0', ''))

counts = pd.read_table(input, sep='\t', index_col=2, header=None, engine='python')
tl;dr
Change parsing engine, try to avoid any non-delimiting quotes/commas/spaces in your data.
Use the delimiter parameter:
pd.read_csv(filename, delimiter=",", encoding='utf-8')
It will then read.
The dataset that I used had a lot of quote marks (") used extraneously to the formatting. I was able to fix the error by including this parameter for read_csv():
quoting=3  # 3 corresponds to csv.QUOTE_NONE
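Put together as a complete call (a minimal sketch; file1.csv is a placeholder):
import csv
import pandas as pd

df = pd.read_csv('file1.csv', quoting=csv.QUOTE_NONE)  # same as quoting=3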
As far as I can tell, and after taking a look at your file, the problem is that the csv file you're trying to load has multiple tables. There are empty lines, or lines that contain table titles. Try to have a look at this Stackoverflow answer. It shows how to achieve that programmatically.
Another, more dynamic, approach would be to use the csv module, read one row at a time and apply sanity checks/regular expressions to infer whether the row is a title/header/values/blank line. This approach has the extra advantage that you can split/append/collect your data in Python objects as desired.
The easiest of all would be to use pandas function pd.read_clipboard() after manually selecting and copying the table to the clipboard, in case you can open the csv in excel or something.
Irrelevant:
Additionally, irrelevant to your problem, but because no one made mention of this: I had this same issue when loading some datasets such as seeds_dataset.txt from UCI. In my case, the error was occurring because some separators had more whitespaces than a true tab \t. See line 3 in the following for instance
14.38 14.21 0.8951 5.386 3.312 2.462 4.956 1
14.69 14.49 0.8799 5.563 3.259 3.586 5.219 1
14.11 14.1 0.8911 5.42 3.302 2.7 5 1
Therefore, use \t+ in the separator pattern instead of \t.
data = pd.read_csv(path, sep='\t+', header=None, engine='python')  # regex separators need the python engine
Error tokenizing data. C error: Expected 2 fields in line 3, saw 12
The error message itself gives the clue to solving the problem: "Expected 2 fields in line 3, saw 12" means that line 3 has 12 fields while the first row has only 2.
When you have data like the one shown below, if you skip rows then most of the data will be skipped
data = """1,2,3
1,2,3,4
1,2,3,4,5
1,2
1,2,3,4"""
If you don't want to skip any rows, do the following:
import pandas as pd

# First let's find the maximum column count over all the rows
with open("file_name.csv", 'r') as temp_f:
    # get the number of columns in each line
    col_count = [len(l.split(",")) for l in temp_f.readlines()]

### Generate column names (names will be 0, 1, 2, ..., maximum columns - 1)
column_names = [i for i in range(max(col_count))]

# alternatively, take the maximum straight from the error message,
# e.g. "Expected 4 fields in line 2, saw 8" -> the maximum is 8
data = pd.read_csv("file_name.csv", header=None, names=column_names)
Use range instead of manually setting names as it will be cumbersome when you have many columns.
Additionally, you can fill the NaN values with 0 if you need uniform data lengths, e.g. for clustering (k-means):
new_data = data.fillna(0)
For those who are having a similar issue with Python 3 on a Linux OS:
pandas.errors.ParserError: Error tokenizing data. C error: Calling
read(nbytes) on source failed. Try engine='python'.
Try:
df = pd.read_csv('file.csv', encoding='utf8', engine='python')
In my case the separator was not the default "," but Tab.
pd.read_csv('file_name.csv', sep='\\t', lineterminator='\\r', engine='python', header='infer')
Note: "\t" did not work as suggested by some sources. "\\t" was required.
I believe the solutions engine='python' and error_bad_lines=False are good if the extra columns are dummy ones that you want to delete. In my case, the second row really had more columns, and I wanted those columns to be integrated, with the number of columns equal to MAX(columns).
Please refer to the solution below, which I could not read anywhere else:
try:
    df_data = pd.read_csv(PATH, header=bl_header, sep=str_sep)
except pd.errors.ParserError as err:
    # extract the column count from the "saw N" part of the error message
    str_find = 'saw '
    int_position = int(str(err).find(str_find)) + len(str_find)
    str_nbCol = str(err)[int_position:]
    l_col = range(int(str_nbCol))
    df_data = pd.read_csv(PATH, header=bl_header, sep=str_sep, names=l_col)
Although not the case for this question, this error may also appear with compressed data. Explicitly setting the value for kwarg compression resolved my problem.
result = pandas.read_csv(data_source, compression='gzip')
Simple resolution: open the CSV file in Excel and save it under a different file name, keeping the CSV format. Try importing it in Spyder again; your problem should be resolved!
The issue is with the delimiter. Find what kind of delimiter is used in your data and specify it like below:
data = pd.read_csv('some_data.csv', sep='\t')
I came across multiple solutions for this issue, and lots of folks have given good explanations as well. But for beginners, I think the two methods below will be enough:
import pandas as pd
#Method 1
data = pd.read_csv('file1.csv', error_bad_lines=False)
#Note that this will cause the offending lines to be skipped.
#Method 2 using sep
data = pd.read_csv('file1.csv', sep='\t')
Sometimes the problem is not how you use Python, but the raw data itself.
I got this error message
Error tokenizing data. C error: Expected 18 fields in line 72, saw 19.
It turned out that the description column sometimes contained commas. This means that the CSV file needs to be cleaned up or another separator used.
An alternative that I have found to be useful in dealing with similar parsing errors uses the CSV module to re-route data into a pandas df. For example:
import csv
import pandas as pd

path = 'C:/FileLocation/'
file = 'filename.csv'

with open(path + file, 'rt') as f:
    reader = csv.reader(f)
    # once contents are available, I then put them in a list
    csv_list = [row for row in reader]

# now pandas has no problem getting them into a df
df = pd.DataFrame(csv_list)
I find the CSV module to be a bit more robust to poorly formatted comma separated files and so have had success with this route to address issues like these.
The following sequence of commands works (I lose the first line of the data, since header=None is not present, but at least it loads):
df = pd.read_csv(filename, usecols=range(0, 42))
df.columns = ['YR', 'MO', 'DAY', 'HR', 'MIN', 'SEC', 'HUND',
              'ERROR', 'RECTYPE', 'LANE', 'SPEED', 'CLASS',
              'LENGTH', 'GVW', 'ESAL', 'W1', 'S1', 'W2', 'S2',
              'W3', 'S3', 'W4', 'S4', 'W5', 'S5', 'W6', 'S6',
              'W7', 'S7', 'W8', 'S8', 'W9', 'S9', 'W10', 'S10',
              'W11', 'S11', 'W12', 'S12', 'W13', 'S13', 'W14']
Following does NOT work:
df = pd.read_csv(filename,
                 names=['YR', 'MO', 'DAY', 'HR', 'MIN', 'SEC', 'HUND',
                        'ERROR', 'RECTYPE', 'LANE', 'SPEED', 'CLASS',
                        'LENGTH', 'GVW', 'ESAL', 'W1', 'S1', 'W2', 'S2',
                        'W3', 'S3', 'W4', 'S4', 'W5', 'S5', 'W6', 'S6',
                        'W7', 'S7', 'W8', 'S8', 'W9', 'S9', 'W10', 'S10',
                        'W11', 'S11', 'W12', 'S12', 'W13', 'S13', 'W14'],
                 usecols=range(0, 42))
CParserError: Error tokenizing data. C error: Expected 53 fields in line 1605634, saw 54
Following does NOT work:
df = pd.read_csv(filename, header=None)
CParserError: Error tokenizing data. C error: Expected 53 fields in line 1605634, saw 54
Hence, for your problem you have to pass usecols=range(0, 2).
Use
pandas.read_csv('CSVFILENAME', header=None, sep=', ')
when trying to read CSV data from the link
http://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data
I copied the data from the site into my csv file. It had extra spaces, so I used sep=', ' and it worked :)
I had a similar case, and setting
train = pd.read_csv('input.csv', encoding='latin1', engine='python')
worked.
Check if you are loading the csv with the correct separator.
df = pd.read_csv(csvname, header=0, sep=",")
I had a dataset with preexisting row numbers, so I used index_col:
pd.read_csv('train.csv', index_col=0)

Reading data with more columns than expected into a dataframe

I have a number of .csv files that I download into a directory.
Each .csv is supposed to have 3 columns of information. The head of one of these files looks like:
17/07/2014,637580,10.755
18/07/2014,61996,10.8497
21/07/2014,126758,10.8208
22/07/2014,520926,10.8201
23/07/2014,370843,9.2883
The code that I am using to read the .csv into a dataframe (df) is:
df = pd.read_csv(adj_directory+'\\'+filename, error_bad_lines=False,names=['DATE', 'PX', 'RAW'])
Where I name the three columns (DATE, PX and RAW).
This works fine when the file is formatted correctly. However, I have noticed that sometimes the .csv has a slightly different format and can, for example, look like:
09/07/2014,26268315,,
10/07/2014,6601181,16.3857
11/07/2014,916651,12.5879
14/07/2014,213357,,
15/07/2014,205019,10.8607
where a column value is missing and an extra comma appears in its place. This means that the file fails to load into the dataframe (the df dataframe is empty).
Is there a way to read the data into a dataframe with the extra comma (ignoring the offending row) so the df would look like:
09/07/2014,26268315,NaN
10/07/2014,6601181,16.3857
11/07/2014,916651,12.5879
14/07/2014,213357,NaN
15/07/2014,205019,10.8607
Probably best to fix the file upstream so that missing values aren't filled with a ,. But if necessary you can correct the file in python, by replacing ,, with just , (line-by-line). Taking your bad file as test.csv:
import re
import csv

patt = re.compile(r",,")

with open('corrected.csv', 'w') as f2:
    with open('test.csv') as f:
        # collapse ',,' to ',' on each line before the csv reader sees it
        for line in csv.reader(map(lambda s: patt.sub(',', s), f)):
            f2.write(','.join(str(x) for x in line))
            f2.write('\n')
Output: corrected.csv
09/07/2014,26268315,
10/07/2014,6601181,16.3857
11/07/2014,916651,12.5879
14/07/2014,213357,
15/07/2014,205019,10.8607
Then you should be able to read in this file without issue
import pandas as pd
df = pd.read_csv('corrected.csv', names=['DATE', 'PX', 'RAW'])
DATE PX RAW
0 09/07/2014 26268315 NaN
1 10/07/2014 6601181 16.3857
2 11/07/2014 916651 12.5879
3 14/07/2014 213357 NaN
4 15/07/2014 205019 10.8607
Had this problem yesterday.
Have you tried:
pd.read_csv(adj_directory+'\\'+filename,
            error_bad_lines=False, names=['DATE', 'PX', 'RAW'],
            keep_default_na=False,
            na_values=[''])

Python: Load a csv file in a DataFrame with exception for string and list comma separator

I have a particular problem with a very badly formatted CSV file: it is structured with a comma as separator, but some column fields contain strings and lists that also contain commas.
Is there a way in Python, with or without pandas, to load such a CSV into a DataFrame with this rule: split into a new column at every comma unless it is between "[]" or between quotation marks?
Or a method to replace the commas inside [ ] and "" with a space; this would work as well.
Here is an example of the structure (with of course many more rows and columns in the original one):
What I have:
column1,column2,column3,column.4
375,"string1","string.2",["string3"]
320,"string4, string5","string6",500
345,"string7",["string8", "string9"],"string10"
What I want:
column1 column2 column3 column.4
1 375 "string1" "string.2" ["string3"]
2 320 "string4, string5" "string6" 500
3 345 "string7" ["string8", "string9"] "string10"
Here is what I have tried:
df = pd.read_csv("test.csv", quotechar='"')
df = pd.read_csv("test.csv", sep=(','), quotechar='"', doublequote=True)
But each time the same error pops up:
pandas.parser.CParserError: Error tokenizing data. C error: Expected 18 fields in line 3, saw 19
as it is still splitting on the commas inside the lists.
Any help would be greatly appreciated
The whole trick that makes this work on your toy dataset is the assumption that a comma followed by a space is part of the data: only commas with no space after them are replaced with a temporary delimiter |. For example, take "string4, string5" - if this can ever appear as "string4,string5" (with the space removed), the 'trick' below might not be suitable. Hope it helps though.
import pandas as pd
import re

## data in 'myfile.csv' looks like this ##
## column1,column2,column3,column.4
## 375,"string1","string.2",["string3"]
## 320,"string4, string5","string6",500
## 345,"string7",["string8", "string9"],"string10"

new_data = []
with open(r'C:\Users\Owner\Downloads\myfile.csv') as f:
    f_as_string = f.read()

# replace every comma NOT followed by whitespace with a temporary delimiter
res = re.sub(r',(?!\s)', '|', f_as_string)
for i in res.split('\n'):
    new_data.append(i.split('|'))

df = pd.DataFrame(data=new_data[1:], columns=new_data[0])
## Produces this dataframe literally retaining quotes
## column1 column2 column3 column.4
##0 375 "string1" "string.2" ["string3"]
##1 320 "string4, string5" "string6" 500
##2 345 "string7" ["string8", "string9"] "string10"
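As an optional follow-up (my addition, not part of the original answer): if you want the bracketed fields as real Python lists rather than strings, ast.literal_eval can convert them, assuming they are valid list literals:
import ast

def maybe_eval(field):
    # hypothetical helper: turns '["string8", "string9"]' into a list,
    # leaves all other fields untouched
    field = field.strip()
    return ast.literal_eval(field) if field.startswith('[') else field

df['column3'] = df['column3'].map(maybe_eval)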
