import csv with different number of columns per row using Pandas - python

What is the best approach for importing a CSV that has a different number of columns per row into a Pandas DataFrame, using Pandas or the csv module?
"H","BBB","D","Ajxxx Dxxxs"
"R","1","QH","DTR"," "," ","spxxt rixxls, raxxxd","1"
Using this code:
import pandas as pd
data = pd.read_csv("smallsample.txt", header=None)
the following error is generated:
Error tokenizing data. C error: Expected 4 fields in line 2, saw 8

Supplying a list of column names to read_csv() should do the trick.
ex: names=['a', 'b', 'c', 'd', 'e']
https://github.com/pydata/pandas/issues/2981
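A minimal sketch of this fix, using the sample rows from the question (a StringIO stands in for smallsample.txt); note the names list has to cover the widest row:

```python
import io
import pandas as pd

# Sample rows from the question; the quoted field containing a comma counts as one field
sample = '"H","BBB","D","Ajxxx Dxxxs"\n"R","1","QH","DTR"," "," ","spxxt rixxls, raxxxd","1"\n'

# Enough names for the widest row (8 fields) avoids the tokenizing error;
# shorter rows are padded with NaN on the right
names = ["a", "b", "c", "d", "e", "f", "g", "h"]
df = pd.read_csv(io.StringIO(sample), header=None, names=names)
print(df.shape)  # (2, 8)
```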
Edit: if you don't want to supply column names then do what Nicholas suggested

You can dynamically generate column names as simple counters (0, 1, 2, etc).
Dynamically generate column names
import pandas as pd

# Input
data_file = "smallsample.txt"
# Delimiter
data_file_delimiter = ','
# The max column count a line in the file could have
largest_column_count = 0

# Loop the data lines
with open(data_file, 'r') as temp_f:
    # Read the lines
    lines = temp_f.readlines()
    for l in lines:
        # Count the columns in the current line
        column_count = len(l.split(data_file_delimiter))
        # Keep the largest column count seen so far
        largest_column_count = max(largest_column_count, column_count)

# Generate column names (will be 0, 1, 2, ..., largest_column_count - 1)
column_names = list(range(largest_column_count))

# Read csv
df = pd.read_csv(data_file, header=None, delimiter=data_file_delimiter, names=column_names)
# print(df)
Missing values (NaN) will be assigned to the columns for which your CSV lines don't have a value.

A polished version of the answer above follows. It works, but remember that it introduces a lot of missing values into the DataFrame.
import pandas as pd

### Loop the data lines
with open("smallsample.txt", 'r') as temp_f:
    # get the number of columns in each line
    col_count = [ len(l.split(",")) for l in temp_f.readlines() ]

### Generate column names (names will be 0, 1, 2, ..., maximum columns - 1)
column_names = [i for i in range(0, max(col_count))]

### Read csv
df = pd.read_csv("smallsample.txt", header=None, delimiter=",", names=column_names)

If you want something really concise without explicitly giving column names, you could do this:
Make a one column DataFrame with each row being a line in the .csv file
Split each row on commas and expand the DataFrame
df = pd.read_fwf('<filename>.csv', header=None)
df = df[0].str.split(',', expand=True)

Error tokenizing data. C error: Expected 4 fields in line 2, saw 8
The error itself gives a clue to the problem: "Expected 4 fields in line 2, saw 8" means the first row has 4 fields while the second row has 8.
import pandas as pd
# inside range, set the maximum value you can see in "Expected 4 fields in line 2, saw 8"
# here it will be 8
data = pd.read_csv("smallsample.txt", header=None, names=range(8))
Use range instead of manually setting names, as it would be cumbersome when you have many columns.
You can use shantanu pathak's method to find longest row length in your data.
Additionally, you can fill the NaN values with 0 if you need data of even length, e.g. for clustering (k-means):
new_data = data.fillna(0)

We could even use the pd.read_table() method to read the csv file, which loads it as a single-column DataFrame that can then be split on ','.
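A sketch of that approach on made-up unquoted data: picking a separator that never occurs in the file keeps each line in one column, which str.split then expands. Note that a naive split like this does not respect commas inside quoted fields.

```python
import io
import pandas as pd

data = "H,BBB,D,Ajxxx Dxxxs\nR,1,QH,DTR,x,y,z,1\n"

# '|' never occurs in the data, so each whole line lands in column 0
df = pd.read_table(io.StringIO(data), header=None, sep="|")

# Splitting with expand=True pads short rows with missing values on the right
wide = df[0].str.split(",", expand=True)
print(wide.shape)  # (2, 8)
```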

Manipulate your csv so that the first row is the row with the most elements, and all following rows have fewer. Pandas will create as many columns as the first row has.

Related

Converting every other csv file column from python list to value

I have several large csv files, each with 100 columns and 800k rows. Starting from the first column, every other column has cells that look like a Python list: for example, cell A2 holds [1000], cell A3 holds [2300], and so forth. Column 2 is fine and contains plain numbers, but columns 1, 3, 5, 7, ..., 99 are like column 1, with their values wrapped in list brackets. Is there an efficient way to remove the brackets [] from those columns and turn their cells into plain numbers?
import os
import pandas as pd

files_directory = r"D:\my_files"
dir_files = os.listdir(files_directory)
for file in dir_files:
    edited_csv = pd.read_csv(os.path.join(files_directory, file))
    for column in list(edited_csv.columns):
        if (column % 2) != 0:
            edited_csv[column] = ?
Please try:
import pandas as pd
df = pd.read_csv('file.csv', header=None)
df.columns = df.iloc[0]
df = df[1:]
for x in df.columns[::2]:
    df[x] = df[x].apply(lambda x: float(x[1:-1]))
print(df)
Note that straight out of read_csv a cell such as column_1[3] is the string '[4554.8433]', so indexing it gives characters, not numbers. Once the cell has been parsed into an actual list (as the loop above effectively does by converting the contents), you can read the numerical value inside it like so:
value = column_1[3]
print(value[0]) #prints 4554.8433 instead of [4554.8433]
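A vectorized alternative to the lambda above, sketched on made-up data: str.strip removes the surrounding brackets for a whole column at once, and astype converts the result to numbers.

```python
import pandas as pd

# Made-up stand-in for the odd columns described in the question
df = pd.DataFrame({"a": ["[1000]", "[2300]"], "b": [1, 2]})

# Strip the surrounding brackets and convert to float in one vectorized pass
df["a"] = df["a"].str.strip("[]").astype(float)
print(df["a"].tolist())  # [1000.0, 2300.0]
```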

Add new columns and new column names in python

I have a CSV file in the following format:
Date,Time,Open,High,Low,Close,Volume
09/22/2003,00:00,1024.5,1025.25,1015.75,1022.0,720382.0
09/23/2003,00:00,1022.0,1035.5,1019.25,1022.0,22441.0
10/22/2003,00:00,1035.0,1036.75,1024.25,1024.5,663229.0
I would like to add 20 new columns to this file, the value of each new column is synthetically created by simply randomizing a set of numbers.
It would be something like this:
import pandas as pd
from random import randrange

df = pd.read_csv('dataset.csv')
print(len(df))
input()
for i in range(len(df)):
    # Data that already exist
    date = df.values[i][0]
    time = df.values[i][1]
    open_value = df.values[i][2]
    high_value = df.values[i][3]
    low_value = df.values[i][4]
    close_value = df.values[i][5]
    volume = df.values[i][6]
    # This is the new data
    prediction_1 = randrange(3)
    prediction_2 = randrange(3)
    prediction_3 = randrange(3)
    prediction_4 = randrange(3)
    prediction_5 = randrange(3)
    prediction_6 = randrange(3)
    prediction_7 = randrange(3)
    prediction_8 = randrange(3)
    prediction_9 = randrange(3)
    prediction_10 = randrange(3)
    prediction_11 = randrange(3)
    prediction_12 = randrange(3)
    prediction_13 = randrange(3)
    prediction_14 = randrange(3)
    prediction_15 = randrange(3)
    prediction_16 = randrange(3)
    prediction_17 = randrange(3)
    prediction_18 = randrange(3)
    prediction_19 = randrange(3)
    prediction_20 = randrange(3)
    # How to concatenate these data row by row in a matrix?
    # How to add new column names and save the file?
I would like to concatenate them (old+synthetic data) and, after that, I would like to add 20 new columns named 'synthetic1', 'synthetic2', ..., 'synthetic20', to the existing column names and then save the resulting new dataset in a new text file.
I could do that easily with NumPy, but here, we have no numeric data and, therefore, I don't know how to do (or if it is possible to do) that. Is possible to do that with Pandas or another library?
Here's a way you can do it:
import numpy as np
import pandas as pd

# set n_row and n_col; n_row should match the number of rows in the existing df
n_row = 100
n_col = 20
f = pd.DataFrame(np.random.randint(100, size=(n_row, n_col)),
                 columns=['synthetic' + str(x) for x in range(1, n_col + 1)])
# axis=1 attaches the new columns to the right of the existing ones
df = pd.concat([df, f], axis=1)
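A self-contained version of this idea, with made-up OHLC rows standing in for dataset.csv. The key detail is axis=1, which attaches the synthetic columns side by side; the default axis=0 would stack the two frames as extra rows instead.

```python
import numpy as np
import pandas as pd

# Made-up stand-in for the data read from dataset.csv
df = pd.DataFrame({
    "Open": [1024.5, 1022.0, 1035.0],
    "Close": [1022.0, 1022.0, 1024.5],
})

n_row, n_col = len(df), 20
f = pd.DataFrame(
    np.random.randint(3, size=(n_row, n_col)),
    columns=["synthetic" + str(x) for x in range(1, n_col + 1)],
)

# axis=1 puts the synthetic columns next to the existing ones
out = pd.concat([df, f], axis=1)
print(out.shape)  # (3, 22)
```

Saving the combined table to a new file is then just out.to_csv('new_dataset.csv', index=False), with the filename your choice.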

Problem when processing from CSV to CSV with a row count

I am trying to process a CSV file into a new CSV file with only the columns of interest, removing rows that have unfit values of -1. Unfortunately I get unexpected results: the script automatically includes column 0 (the old ID) in the new CSV file, even though I never ask for it (it is not listed in cols = [..]).
How could I change these values to a new row count? For example, when we remove row 9 with id=9, the dataset ids currently go [..7, 8, 10...] instead of being renumbered as [..7, 8, 9, 10...]. I hope someone has a solution for this.
import pandas as pd
# take only specific columns from dataset
cols = [1, 5, 6]
data = pd.read_csv('data_sample.csv', usecols=cols, header=None)
data.columns = ["url", "gender", "age"]
# remove rows from dataset with undefined values of -1
data = data[data['gender'] != -1]
data = data[data['age'] != -1]
""" Additional working solution
indexGender = data[data['gender'] == -1].index
indexAge = data[data['age'] == -1].index
# Delete the rows indexes from dataFrame
data.drop(indexGender,inplace=True)
data.drop(indexAge, inplace=True)
"""
data.to_csv('data_test.csv')
Thank you in advance.
I solved the problem by adding a simple line after the data drop:
data.reset_index(drop=True, inplace=True)
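A small sketch of what reset_index does here, on toy data:

```python
import pandas as pd

data = pd.DataFrame({"age": [21, -1, 35, 40]})
data = data[data["age"] != -1]        # surviving index is [0, 2, 3]
data.reset_index(drop=True, inplace=True)
print(list(data.index))  # [0, 1, 2]
```

Alternatively, passing index=False to to_csv keeps the index out of the output file entirely, which also removes the unwanted ID column.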

Pulling two cols from csv

I have a csv file with 330k+ rows and 12 columns. I need to put column 1 (numeric ID) and column 3 (text string) into a list or array so I can analyze the data in column 3.
This code worked for me to pull out the third col:
for row in csv_strings:
    string1.append(row[2])
Can someone point me to the correct class of commands that I can research to get the job done?
Thanks.
Pandas is the best tool for this.
import pandas as pd

df = pd.read_csv("filename.csv", usecols=[0, 2])
points = []
# usecols gives a 2-column frame, so the text string is at position 1
for row in df.itertuples(index=False):
    points.append({"id": row[0], "text": row[1]})
You can pull them out into a list of key value pairs.
A different answer, using tuples, which ensure immutability and are pretty fast, but less convenient than dictionaries:
# build results
results = []
for row in csv_lines:
    results.append((row[0], row[2]))

# Read results
for result in results:
    result[0]  # id
    result[1]  # string
import csv

x, z = [], []
with open('Data.csv') as f:
    csv_reader = csv.reader(f)
    for line in csv_reader:
        x.append(line[0])
        z.append(line[2])
This gets you the data from the 1st and 3rd columns.

Read CSV without string formatting in python

I have a CSV file and I would like to read this cell-by-cell so that I can write it into excel. I am using csv.reader and enumerating the result so that I can put values into corresponding cells in Excel.
With the current code, once I enumerate, the values turn into strings. If I write to excel with sheet.write(rowi,coli,value), all cells are formatted as text. I can't have this, because I need to sum columns afterward, and the values need to be treated as numbers.
For example, my text file will have: 1, a, 3, 4.0, 5, 6, 7
After first enumeration, the first row: (0, '1, a, 3, 4.0, 5, 6, 7')
After second enumeration, first column of first row: (0, 0, '1')
QUESTION: How can I read this csv file to yield (0, 0, 1) (etc.)?
Here's some code I'm working with:
import csv, xlwt

with open('file.csv', 'r', newline='') as csvfile:
    data = csv.reader(csvfile, delimiter=",")
    wbk = xlwt.Workbook()
    sheet = wbk.add_sheet("file")
    for rowi, row in enumerate(data):
        for coli, value in enumerate(row):
            sheet.write(rowi, coli, value)
            # print(rowi, coli, value) gives (rowi, coli, 'value')
import csv, xlwt

with open('file.csv', 'r', newline='') as csvfile:
    data = csv.reader(csvfile, delimiter=",")
    wbk = xlwt.Workbook()
    sheet = wbk.add_sheet("file")
    for rowi, row in enumerate(data):
        for coli, value in enumerate(row):
            sheet.write(rowi, coli, value)
wbk.save("workbook_file")
Even though print(rowi,coli,value) shows 'value', the cell in the outputted file should show it without quotes.
If your data is in the format 1, 2, 3 (with spaces after the commas) and not 1,2,3, include this after your for coli, value in enumerate(row): line:
    value = value.lstrip(" ")
Well, I think the csv module of Python is still lacking a crystal ball... More seriously, a csv file carries no indication of the type of each variable: integer, float, string or date. By default, the reader turns each row into a list of strings.
If you want some columns to be integers, you can add a list of booleans to your script. Say you have 4 columns and the third is integer:
int_col = [False, False, True, False]
...
for rowi, row in enumerate(data):
    for coli, value in enumerate(row):
        val = int(value) if int_col[coli] else value
        sheet.write(rowi, coli, val)
You can also try to guess which columns are integer by reading n rows (for example n = 10) and treating a column as integer only if all n values in it are integers.
Or you can even imagine a two-pass operation: a first pass determines the types of the columns and a second does the inserts.
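A sketch of that guessing idea, with a made-up helper guess_int_columns that samples the first n rows and marks a column as integer only if every sampled value parses as one:

```python
import csv
import io

def guess_int_columns(rows, n=10):
    """Mark a column as integer if every value in the first n rows parses as int."""
    sample = rows[:n]
    int_col = [True] * len(sample[0])
    for row in sample:
        for i, value in enumerate(row):
            try:
                int(value)
            except ValueError:
                int_col[i] = False
    return int_col

# Toy data matching the question's example row; skipinitialspace drops the
# space after each comma
data = "1, a, 3, 4.0\n2, b, 5, 6.5\n"
rows = list(csv.reader(io.StringIO(data), skipinitialspace=True))
print(guess_int_columns(rows))  # [True, False, True, False]
```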
I find Python's standard library functions a bit lacking for processing CSV files. I prefer to work with pandas when possible.
import xlwt
from pandas.io.parsers import read_csv
df = read_csv('file.csv')
#number the columns sequentially
df.columns = [i for i, e in enumerate(df.columns)]
#unstack the columns to make 2 indices plus a column, make row come before col,
#sort row major order, and then unset the indices to get a DataFrame
newDf = df.unstack().swaplevel(0,1).sort_index().reset_index()
#rename the cols to reflect the types of data
newDf.columns = ['row', 'col', 'value']
#write to excel
newDf.to_excel('output.xls', index=False)
This will also keep the row and column numbers as integer values. I tried it on an example csv file, and both row and col came out integer valued, not strings.
