Read CSV file with limit and offset - python

I am using the following code for reading a CSV file into a list of dictionaries:
import csv

file_name = path + '/' + file.filename
with open(file_name, newline='') as csv_file:
    csv_dict = [{k: v for k, v in row.items()}
                for row in csv.DictReader(csv_file)]

for item in csv_dict:
    call_api(item)
This reads the file and calls the function for each row. As the number of rows increases, the number of calls increases as well, and because the data is large it is not possible to load the entire contents into memory, split it, and call the API from there. So I would like an approach where the file is read using a limit and an offset, as in SQL queries. But how can this be done in Python? I don't see any option to specify the number of rows to read or the number of rows to skip in the csv module documentation. If someone can suggest a better approach, that will also be fine.

You can call your API directly with just one row in memory at a time:
with open(file_name, newline='') as csv_file:
    for row in csv.DictReader(csv_file):
        call_api(row)  # call the API with the row dictionary; nothing else is kept in memory
You can skip lines by calling next(csv_file) before the for loop:
with open(file_name, newline='') as csv_file:
    for _ in range(10):  # skip the first 10 lines
        next(csv_file)
    for row in csv.DictReader(csv_file):
        call_api(row)
You can skip rows in between using continue:
with open(file_name, newline='') as csv_file:
    for i, row in enumerate(csv.DictReader(csv_file)):
        if i % 2 == 0:  # skip every other row
            continue
        call_api(row)
You can simply count the parsed rows and break once n rows are done:
n = 0
with open(file_name, newline='') as csv_file:
    for row in csv.DictReader(csv_file):
        if n == 50:
            break
        call_api(row)
        n += 1
You can combine those approaches to, say, skip 100 rows and take 200 while only keeping every 2nd one; this mimics limit and offset, with the modulo check on the row index providing the stride.
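For example, a sketch of such a combination (using itertools.islice, which the csv module itself does not provide, and the call_api and file_name names from the question):
import csv
from itertools import islice

OFFSET = 100   # rows to skip (hypothetical values)
LIMIT = 200    # rows to take after the offset

with open(file_name, newline='') as csv_file:
    reader = csv.DictReader(csv_file)
    # islice applies offset and limit lazily, one row at a time
    for i, row in enumerate(islice(reader, OFFSET, OFFSET + LIMIT)):
        if i % 2 == 0:   # skip every other row, as in the example above
            continue
        call_api(row)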
Or you can use something that is great with CSV, like pandas:
Reading a part of csv file
Read random lines from huge CSV file in Python
Read a small random sample from a big CSV file into a Python data frame

A solution could be to use pandas to read the csv:
import pandas as pd

file_name = 'data.csv'
OFFSET = 10
LIMIT = 24
CHSIZE = 6

header = list('ABC')
reader = pd.read_csv(file_name, sep=',',
                     header=None, names=header,          # header 'A', 'B', 'C'
                     usecols=[0, 1, 4],                  # select some columns
                     skiprows=lambda idx: idx < OFFSET,  # skip lines
                     chunksize=CHSIZE,                   # chunked reading
                     nrows=LIMIT)

for df_chunk in reader:
    # Each df_chunk is a DataFrame, so
    # an adapted api may be needed to
    # call_api(item)
    for row in df_chunk.itertuples():
        print(row._asdict())
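If call_api from the question expects the same plain dictionaries that csv.DictReader produced, one possible adaptation (a sketch, not part of the original answer) is to drop the positional index and convert each named tuple back to a dict:
for df_chunk in reader:
    for row in df_chunk.itertuples(index=False):
        call_api(row._asdict())  # one plain dict per row, keyed by column name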

Related

Read csv file with empty lines

The analysis software I'm using outputs many groups of results in one CSV file and separates the groups with two empty lines.
I would like to break the results into groups so that I can then analyse them separately.
I'm sure there is a built-in function in Python (or one of its libraries) that does this. I tried this piece of code that I found somewhere, but it doesn't seem to work.
import csv
results = open('03_12_velocity_y.csv').read().split("\n\n")
# Feed first csv.reader
first_csv = csv.reader(results[0], delimiter=',')
# Feed second csv.reader
second_csv = csv.reader(results[1], delimiter=',')
Update:
The original code actually works, but my python skills are pretty limited and I did not implement it properly.
The .split('\n\n\n') call does work, but csv.reader returns an iterator object, so to get the data into a list (or something similar) you need to iterate through all the rows and append them to the list.
I then used pandas to remove the header and convert the values in scientific notation to float. The code is below. Thanks everyone for the help.
import csv
import pandas as pd
# Open the csv file, read it and split it when it encounters 2 empty lines (\n\n\n)
results = open('03_12_velocity_y.csv').read().split('\n\n\n')
# Create csv.reader objects that are used to iterate over rows in a csv file
# Define the output - create an empty multi-dimensional list
output1 = [[],[]]
# Iterate through the rows in the csv file and append the data to the empty list
# Feed first csv.reader
csv_reader1 = csv.reader(results[0].splitlines(), delimiter=',')
for row in csv_reader1:
    output1.append(row)
df = pd.DataFrame(output1)
# remove first 7 rows of data (the start position of the slice is always included)
df = df.iloc[7:]
# Convert all data from string to float
df = df.astype(float)
If your row counts are inconsistent across groups, you'll need a little state machine to check when you're between groups and do something with the last group.
#!/usr/bin/env python3
import csv

def write_group(group, i):
    with open(f"group_{i}.csv", "w", newline="") as out_f:
        csv.writer(out_f).writerows(group)

with open("input.csv", newline="") as f:
    reader = csv.reader(f)

    group_i = 1
    group = []
    last_row = []
    for row in reader:
        if row == [] and last_row == [] and group != []:
            write_group(group, group_i)
            group = []
            group_i += 1
            continue
        if row == []:
            last_row = row
            continue

        group.append(row)
        last_row = row

    # flush remaining group
    if group != []:
        write_group(group, group_i)
I mocked up this sample CSV (groups separated by two empty lines):
g1r1c1,g1r1c2,g1r1c3
g1r2c1,g1r2c2,g1r2c3
g1r3c1,g1r3c2,g1r3c3


g2r1c1,g2r1c2,g2r1c3
g2r2c1,g2r2c2,g2r2c3


g3r1c1,g3r1c2,g3r1c3
g3r2c1,g3r2c2,g3r2c3
g3r3c1,g3r3c2,g3r3c3
g3r4c1,g3r4c2,g3r4c3
g3r5c1,g3r5c2,g3r5c3
And when I run the program above I get three CSV files:
group_1.csv
g1r1c1,g1r1c2,g1r1c3
g1r2c1,g1r2c2,g1r2c3
g1r3c1,g1r3c2,g1r3c3
group_2.csv
g2r1c1,g2r1c2,g2r1c3
g2r2c1,g2r2c2,g2r2c3
group_3.csv
g3r1c1,g3r1c2,g3r1c3
g3r2c1,g3r2c2,g3r2c3
g3r3c1,g3r3c2,g3r3c3
g3r4c1,g3r4c2,g3r4c3
g3r5c1,g3r5c2,g3r5c3
If your row counts are consistent, you can do this with fairly vanilla Python or using the Pandas library.
Vanilla Python
Define your group size and the size of the break (in "rows") between groups.
Loop over all the rows adding each row to a group accumulator.
When the group accumulator reaches the pre-defined group size, do something with it, reset the accumulator, and then skip break-size rows.
Here, I'm writing each group to its own numbered file:
import csv

group_sz = 5
break_sz = 2

def write_group(group, i):
    with open(f"group_{i}.csv", "w", newline="") as f_out:
        csv.writer(f_out).writerows(group)

with open("input.csv", newline="") as f_in:
    reader = csv.reader(f_in)

    group_i = 1
    group = []
    for row in reader:
        group.append(row)
        if len(group) == group_sz:
            write_group(group, group_i)
            group_i += 1
            group = []

            for _ in range(break_sz):
                try:
                    next(reader)
                except StopIteration:  # gracefully ignore the expected StopIteration at the end of the file
                    break
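    # Note (an addition, not in the original answer): a trailing group smaller
    # than group_sz is never flushed by the loop above, so write it out here.
    if group:
        write_group(group, group_i)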
group_1.csv
g1r1c1,g1r1c2,g1r1c3
g1r2c1,g1r2c2,g1r2c3
g1r3c1,g1r3c2,g1r3c3
g1r4c1,g1r4c2,g1r4c3
g1r5c1,g1r5c2,g1r5c3
With Pandas
I'm new to Pandas, and learning this as I go, but it looks like Pandas will automatically trim blank rows/records from a chunk of data^1.
With that in mind, all you need to do is specify the size of your group, and tell Pandas to read your CSV file in "iterator mode", where you can ask for a chunk (your group size) of records at a time:
import pandas as pd

group_sz = 5

with pd.read_csv("input.csv", header=None, iterator=True) as reader:
    i = 1
    while True:
        try:
            df = reader.get_chunk(group_sz)
        except StopIteration:
            break

        df.to_csv(f"group_{i}.csv")
        i += 1
Pandas adds an "ID" (index) column and a default header when it writes out the CSV (see the note after the sample output for suppressing both):
group_1.csv
,0,1,2
0,g1r1c1,g1r1c2,g1r1c3
1,g1r2c1,g1r2c2,g1r2c3
2,g1r3c1,g1r3c2,g1r3c3
3,g1r4c1,g1r4c2,g1r4c3
4,g1r5c1,g1r5c2,g1r5c3
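If you don't want that extra column and header in the output files, to_csv accepts flags to suppress both; a small tweak to the write call in the loop above:
# inside the while-loop: suppress the index column and the default 0..N header
df.to_csv(f"group_{i}.csv", index=False, header=False)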
Try this out with your data:
import pandas as pd

# csv file name to be read in
in_csv = 'input.csv'

# get the number of lines of the csv file to be read
number_lines = sum(1 for row in open(in_csv))

# size of rows of data to write to the csv,
# you can change the row size according to your need
rowsize = 500

# start looping through the data, writing it to a new file for each set
for i in range(1, number_lines, rowsize):
    df = pd.read_csv(in_csv,
                     header=None,
                     nrows=rowsize,  # number of rows to read at each loop
                     skiprows=i)     # skip rows that have already been read

    # csv to write data to a new file with indexed name. input_1.csv etc.
    out_csv = 'input' + str(i) + '.csv'

    df.to_csv(out_csv,
              index=False,
              header=False,
              mode='a')  # append data to the csv file
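The same split can also be done in a single pass with read_csv's chunksize parameter, which avoids re-reading the file for every slice. A sketch, reusing the in_csv and rowsize names from above:
import pandas as pd

in_csv = 'input.csv'
rowsize = 500

for i, chunk in enumerate(pd.read_csv(in_csv, header=None, chunksize=rowsize), start=1):
    # each chunk is a DataFrame of up to `rowsize` rows
    chunk.to_csv('input_' + str(i) + '.csv', index=False, header=False)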
I updated the question with the last details that answer my question.

Reshaping CSV columns in python

I have my data in this form [image in the original post], and the required form of the data is [image in the original post].
Can anybody help me in this regard?
The content of the initial CSV file as text is:
var1,var2,col1,col2,col3
a,f,1,2,3
b,g,4,5,6
c,h,7,8,9
d,i,10,11,12
You can do it directly with the csv module. You just read from the initial file, and write up to 3 rows per initial row into the resulting file:
import csv

with open('in.csv') as fdin, open('out.csv', 'w', newline='') as fdout:
    rd = csv.reader(fdin)
    wr = csv.writer(fdout)
    header = next(rd)  # read and process the header
    _ = wr.writerow(header[:2] + ['columns', ''])
    for row in rd:  # loop over the rows
        for i in range(3):  # loop over the 3 value columns
            try:
                row2 = row[:2] + ['col{}'.format(i + 1), row[2 + i]]
                _ = wr.writerow(row2)
            except IndexError:  # prevent an error on a shorter line
                break
If you intend to do heavy data processing, you should consider using the pandas module; a sketch with pandas follows after the sample output below.
With the data sample, the csv version above gives:
var1,var2,columns,
a,f,col1,1
a,f,col2,2
a,f,col3,3
b,g,col1,4
b,g,col2,5
b,g,col3,6
c,h,col1,7
c,h,col2,8
c,h,col3,9
d,i,col1,10
d,i,col2,11
d,i,col3,12
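As a sketch of the pandas route mentioned above (assuming the same in.csv and out.csv names), pandas.melt reshapes the three value columns into one (columns, value) pair per row; note the value column gets a default 'value' header in the output:
import pandas as pd

df = pd.read_csv('in.csv')
long_df = df.melt(id_vars=['var1', 'var2'], var_name='columns')
# keep the rows belonging to one original line together, as in the csv output above
long_df = long_df.sort_values(['var1', 'var2'], kind='mergesort').reset_index(drop=True)
long_df.to_csv('out.csv', index=False)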

writing the rows of a csv file to another csv file

I want to write the rows of a CSV file to another CSV file. I also want to change the content of each row: if the row is empty, it should remain empty, and if it is not, any spaces at the beginning and end of the string should be removed. The original CSV file has one column and 65422771 rows.
I have written the following to write the rows of the original csv file to the new one:
import csv

csvfile = open('data.csv', 'r')
with open('data 2.csv', "w+") as csv_file1:
    writer = csv.writer(csv_file1)
    count = 0
    for row in csvfile:
        row = row.replace('"', '')
        count += 1
        print(count)
        if row.strip() == '':
            writer.writerow('\n')
        else:
            writer.writerow(row)
However, when the new csv file is made, it is shown that it has 130845543 rows (= count)! The size of the new csv file is also 2 times the size of the original one. How can I create the new csv file with exactly the same number of rows but with the mentioned changes made to them?
Try this:
import csv

with open('data.csv', 'r') as file:
    # keep empty rows empty; strip whitespace from non-empty rows
    rows = [[row[0].strip()] if row else [] for row in csv.reader(file)]

with open('data_out.csv', "w", newline="") as file:
    writer = csv.writer(file)
    writer.writerows(rows)
Also, as #tripleee mentioned, your file is quite large so you may want to read / write it in chunks. You can use pandas for that.
import pandas as pd

chunksize = 10_000
for chunk in pd.read_csv('data.csv', chunksize=chunksize, header=None):
    chunk[0] = chunk[0].str.strip()
    chunk.to_csv('data_out.csv', mode='a', header=False, index=False)
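One caveat, since the question asks for empty rows to stay empty: read_csv drops blank lines by default. A sketch of the chunked version that keeps them, using skip_blank_lines=False and filling the resulting NaN values back to empty strings:
import pandas as pd

chunksize = 10_000
for chunk in pd.read_csv('data.csv', chunksize=chunksize, header=None,
                         skip_blank_lines=False):
    # blank lines arrive as NaN; turn them back into empty strings before stripping
    chunk[0] = chunk[0].fillna('').str.strip()
    chunk.to_csv('data_out.csv', mode='a', header=False, index=False)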

How to Filter specific rows from a huge CSV file using Python Script

Is there an efficient way in python to load only specific rows from a huge csv file into the memory (for further processing) without burdening the memory?
E.g.: let's say I want to filter the rows for a specific date out of a file in the following format, and let's say this file is tens or hundreds of GB (the dates are not ordered):
Date event_type country
2015/03/01 impression US
2015/03/01 impression US
2015/03/01 impression CA
2015/03/01 click CA
2015/03/02 impression FR
2015/03/02 click FR
2015/03/02 impression US
2015/03/02 click US
import csv

filter_countries = {'US': 1}
with open('data.tsv', 'r') as f_name:
    for line in csv.DictReader(f_name, delimiter='\t'):
        if line['country'] not in filter_countries:
            print(line)
You still need to process every row in the file in order to check your condition. However, it is unnecessary to load the whole file into memory, so you can stream it as follows:
import csv

with open('huge.csv', newline='') as csvfile:
    spamreader = csv.reader(csvfile, delimiter=' ', quotechar='"')
    for row in spamreader:
        if row[0] != '2015/03/01':
            continue
        # Process the matching row here
If you just need a list of the matching rows, it is faster and even simpler to use a list comprehension, as follows:
import csv

with open('huge.csv', newline='') as csvfile:
    spamreader = csv.reader(csvfile, delimiter=' ', quotechar='"')
    rows = [row for row in spamreader if row[0] == '2015/03/01']
If the dates can appear anywhere you will have to parse the whole file:
import csv

def get_rows(k, fle):
    with open(fle) as f:
        next(f)  # skip the header line
        for row in csv.reader(f, delimiter=" ", skipinitialspace=True):
            if row[0] == k:
                yield row

for row in get_rows("2015/03/02", "in.txt"):
    print(row)
You could use the multiprocessing module to speed up the parsing by splitting the data into chunks.
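A rough multiprocessing sketch (the chunk size, file name, and target date here are hypothetical): read the raw lines in blocks and let a pool of workers parse and filter each block.
import csv
from itertools import islice
from multiprocessing import Pool

TARGET_DATE = '2015/03/01'   # hypothetical filter value

def filter_chunk(lines):
    # parse a block of raw lines and keep only the rows matching the target date
    reader = csv.reader(lines, delimiter=' ', skipinitialspace=True)
    return [row for row in reader if row and row[0] == TARGET_DATE]

def chunked(f, size=100_000):
    # yield blocks of raw lines so each worker gets a picklable unit of work
    while True:
        lines = list(islice(f, size))
        if not lines:
            return
        yield lines

if __name__ == '__main__':
    with open('huge.csv', newline='') as f, Pool() as pool:
        next(f)  # skip the header line
        matches = [row
                   for block in pool.imap(filter_chunk, chunked(f))
                   for row in block]
    print(len(matches))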
I prefer a pandas-only approach to this, which lets you use all the features of read_csv(). This approach envisions a situation where you may need to filter on different dates at different times, so it is worth a little overhead to create a date registry that can be saved to disk and re-used.
First, create a registry holding just the date data for your csv:
import pandas as pd

my_date_registry = pd.read_csv('data.csv', usecols=['Date'], engine='c')
(Note: in newer versions of pandas, you can use engine='pyarrow', which will be faster.)
There are two ways of using this registry and the skiprows parameter to filter out the rows you don't want. You may wish to experiment as to which one is faster for your specific data.
Option 1: Build a list of integer indexes
filter_date = '2017-03-09'
my_rows = my_date_registry['Date'] == filter_date
skip_rows = ~my_rows
my_skip_indexes = my_date_registry[skip_rows].index
my_skip_list = [x + 1 for x in my_skip_indexes]
my_selected_rows = pd.read_csv('data.csv', engine='c', skiprows=my_skip_list)
N.B. Since your data has a header row, you must add 1 to every index in my_skip_indexes to account for the header.
Option 2: Create a Callable function
filter_date = '2017-03-09'
my_rows = my_date_registry[my_date_registry['Date'] == filter_date]
my_row_indexes = my_rows.index
my_row_indexes = set([0] + [x + 1 for x in my_row_indexes])
my_filter = lambda x: x not in my_row_indexes
my_selected_rows = pd.read_csv('data.csv', engine='c', skiprows=my_filter)

How to perform a simple calculation in a CSV and append the results to the file

I have a CSV which contains 38 columns of data. All I want to find out is how to divide column 11 by column 38 and append the result to the end of each row, skipping the title row of the CSV (row 1).
If I am able to get a snippet of code that can do this, I will be able to manipulate the same code to perform lots of similar functions.
My attempt involved editing some code that was designed for something else.
See below:
from collections import defaultdict

class_col = 11
data_col = 38

# Read in the data
with open('test.csv', 'r') as f:
    # if you have a header on the file
    # header = f.readline().strip().split(',')
    data = [line.strip().split(',') for line in f]

# Append the relevant sum to the end of each row
for row in xrange(len(data)):
    data[row].append(int(class_col)/int(data_col))

# Write the results to a new csv file
with open('testMODIFIED2.csv', 'w') as nf:
    nf.write('\n'.join(','.join(row) for row in data))
Any help will be greatly appreciated. Thanks SMNALLY
import csv

with open('test.csv', 'r', newline='') as old_csv:
    csv_reader = csv.reader(old_csv)
    with open('testMODIFIED2.csv', 'w', newline='') as new_csv:
        csv_writer = csv.writer(new_csv)
        for i, row in enumerate(csv_reader):
            if i != 0:
                row.append(float(row[10]) / float(row[37]))
            csv_writer.writerow(row)
Use pandas:
import pandas
df = pandas.read_csv('test.csv')  # assumes a header row exists
df['FRACTION'] = 1.0 * df['CLASS'] / df['DATA']  # by default new columns are appended at the end
df.to_csv('out.csv')
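If the header names in test.csv are not literally CLASS and DATA (the question only gives column positions), the same division can be done positionally; a sketch using 0-based iloc positions for columns 11 and 38:
import pandas as pd

df = pd.read_csv('test.csv')                       # assumes a header row, as above
df['FRACTION'] = df.iloc[:, 10] / df.iloc[:, 37]   # columns 11 and 38 (1-based in the question)
df.to_csv('out.csv', index=False)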
