read a full excel file chunk by chunk using pandas - python

What is the quickest way to read an Excel file chunk by chunk in pandas?
I am doing something like this, which I also found on Stack Overflow, but how do I keep track of skiprows and skipfooter if my file has, for example, 1000 rows?
# if the file contains 300 rows, this will read the middle 100
df = pd.read_excel('/path/excel.xlsx', skiprows=100, skip_footer=100)

You could use range. Assuming you want to process chunks of 100 lines in a 1000-line Excel file:
total = 1000
chunksize = 100
for skip in range(0, total, chunksize):
    df = pd.read_excel('/path/excel.xlsx', skiprows=skip, nrows=chunksize)
    # process df
    ...
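One caveat with this loop: for every chunk after the first, skiprows also skips the header row, so the first data row of that chunk gets promoted to column names. A minimal sketch of one way around that, assuming the sheet's first row is a header (the path and sizes reuse the example above):
import pandas as pd

path = '/path/excel.xlsx'
total = 1000
chunksize = 100

# read only the header row once so every chunk keeps the same column names
columns = pd.read_excel(path, nrows=0).columns

for skip in range(0, total, chunksize):
    df = pd.read_excel(path,
                       skiprows=skip + 1,  # +1 so the header row is always skipped too
                       nrows=chunksize,
                       header=None,
                       names=columns)
    # process df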

read_excel does not have a chunksize argument. You can read the whole file first and then split it manually:
import numpy as np

df = pd.read_excel(file_name)  # you have to read the whole file in first
n_chunks = df.shape[0] // 1000  # set this to however many chunks you want
for chunk in np.split(df, n_chunks):
    # process the data
    ...
Unfortunately for Excel, there is no escaping reading the whole file in memory so you will have to do it this way.
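If the row count does not divide evenly, np.split raises an error; np.array_split is a drop-in alternative that tolerates a remainder. A short sketch of that variant, assuming chunks of roughly 1000 rows and the same example path as above:
import numpy as np
import pandas as pd

df = pd.read_excel('/path/excel.xlsx')  # the whole file still has to be read once

rows_per_chunk = 1000
n_chunks = max(1, -(-len(df) // rows_per_chunk))  # ceiling division

for chunk in np.array_split(df, n_chunks):
    # each chunk is a smaller DataFrame of roughly rows_per_chunk rows
    ...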
If you want to do this with skiprows and skipfooter, you need to know the size of the df (you can read it first).
df = pd.read_excel('/path/excel.xlsx')
df_size = df.shape[0]
columns = df.columns.values
chunksize = 1000
for i in range(0, df_size, chunksize):
    # skip the header row plus i data rows; skipfooter drops everything after this chunk
    df_chunk = pd.read_excel('/path/excel.xlsx', skiprows=i + 1, header=None, names=columns,
                             skipfooter=max(df_size - i - chunksize, 0))

Related

Python Pandas: read_csv with chunksize and concat still throws MemoryError

I am trying to extract certain rows from a 10 GB, ~35 million row csv file into a new csv based on a condition (the value of the Geography column equals 'Ontario'). It runs for a few minutes, I can see my free hard drive space getting drained from 14 GB to basically zero, and then I get the MemoryError. I thought chunksize would help here, but it did not. Please advise.
import pandas as pd
df = pd.read_csv("Data.csv", chunksize = 10000)
result = pd.concat(df)
output=result[result['Geography']=='Ontario']
rowcount=len(output)
print(output)
print(rowcount)
output.to_csv('data2.csv')
You can try writing in chunks. Roughly:
df = pd.read_csv("Data.csv", chunksize = 10000)
header = True
for chunk in df:
    chunk = chunk[chunk['Geography'] == 'Ontario']
    chunk.to_csv(outfilename, header=header, mode='a')
    header = False
Idea from here.

How to read a large csv file without opening it to have a sum of numbers for each row

I'm working with a large csv file (>500,000 columns x 4033 rows) and my goal is to get the sum of all numbers per row, with the exception of the first three cells of each row, which are only descriptive of my samples. I'd like to use the pandas package.
The dataset is something like this:
label Group numOtus Otu0000001 Otu0000002 Otu0000003 Otu0000004 ... Otu0518246 sum
00.03 1.118234 518246 0 62 275 0 ... 5 ?
I've tried a couple of different things, none of them worked.
I can't simply use read_csv from pandas and then work with it because the file is too large (4 Gb). So, I tried a for loop, opening one row at a time, but I'm not getting what I was expecting. The final output should be a column with the sum per line.
Any ideas?
lst = []
for line in range(4033):
    l = pd.read_csv("doc.csv", sep="\t", nrows=1, low_memory=False)
    l = l.drop(columns=['label', 'Group', "numOtus"])
    x = l[list(l.columns)].sum(axis=1, numeric_only=float)
    lst.append(x)
One other solution besides dask is to use the chunksize parameter in pd.read_csv and then pd.concat your chunks.
A quick example:
chunksize = 1000
l = pd.read_csv('doc.csv', chunksize=chunksize, iterator=True)
df = pd.concat(l, ignore_index=True)
Addition:
To do something with the chunks one by one you can use:
chunksize = 1000
for chunk in pd.read_csv('doc.csv', chunksize=chunksize, iterator=True):
    # do something with each individual chunk here
    ...
To see the progress you can consider using tqdm.
from tqdm import tqdm
chunksize = 1000
for chunk in tqdm(pd.read_csv('doc.csv', chunksize=chunksize, iterator=True)):
    # do something with each individual chunk here
    ...
You could use dask, which is built specifically for this.
import dask.dataframe as dd
ddf = dd.read_csv("doc.csv", sep="\t")
# drop the three descriptive columns, then sum the remaining values per row
ddf.drop(columns=["label", "Group", "numOtus"]).sum(axis=1).compute()
You can use pandas.Series.append and pandas.DataFrame.sum along with pandas.DataFrame.iloc while reading the data in chunks:
row_sum = pd.Series([])
for chunk in pd.read_csv('doc.csv', sep="\t", chunksize=50000):
    row_sum = row_sum.append(chunk.iloc[:, 3:].sum(axis=1, skipna=True))
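Note that Series.append was deprecated in pandas 1.4 and removed in 2.0; a roughly equivalent sketch that collects the per-chunk sums and concatenates them once at the end:
import pandas as pd

partial_sums = []
for chunk in pd.read_csv('doc.csv', sep="\t", chunksize=50000):
    # skip the three descriptive columns, then sum the remaining values per row
    partial_sums.append(chunk.iloc[:, 3:].sum(axis=1, skipna=True))

row_sum = pd.concat(partial_sums)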

How do I combine large csv files in python?

I have 18 csv files, each approximately 1.6 GB and each containing approximately 12 million rows. Each file represents one year's worth of data. I need to combine all of these files, extract data for certain geographies, and then analyse the time series. What is the best way to do this?
I have tried using pd.read_csv but I hit a memory limit. I have tried including a chunksize argument, but this gives me a TextFileReader object and I don't know how to combine these to make a dataframe. I have also tried pd.concat, but this does not work either.
Here is an elegant way of using pandas to combine very large csv files.
The technique is to load a number of rows (defined as CHUNK_SIZE) into memory per iteration until complete. These rows are appended to the output file in "append" mode.
import pandas as pd
CHUNK_SIZE = 50000
csv_file_list = ["file1.csv", "file2.csv", "file3.csv"]
output_file = "./result_merge/output.csv"
for csv_file_name in csv_file_list:
    chunk_container = pd.read_csv(csv_file_name, chunksize=CHUNK_SIZE)
    for chunk in chunk_container:
        chunk.to_csv(output_file, mode="a", index=False)
But if your files contain headers, it makes sense to skip the header row in every file except the first one, since a repeated header is unwanted. In that case the solution is as follows:
import pandas as pd
CHUNK_SIZE = 50000
csv_file_list = ["file1.csv", "file2.csv", "file3.csv"]
output_file = "./result_merge/output.csv"
first_one = True
for csv_file_name in csv_file_list:
    if not first_one:  # if it is not the first csv file then skip the header row (row 0) of that file
        skip_row = [0]
    else:
        skip_row = []
    chunk_container = pd.read_csv(csv_file_name, chunksize=CHUNK_SIZE, skiprows=skip_row)
    for chunk in chunk_container:
        chunk.to_csv(output_file, mode="a", index=False)
    first_one = False
The memory limit is hit because you are trying to load the whole csv in memory. An easy solution would be to read the files line by line (assuming your files all have the same structure), control it, then write it to the target file:
filenames = ["file1.csv", "file2.csv", "file3.csv"]
sep = ";"
def check_data(data):
    # ... your tests
    return True  # << True if data should be written into target file, else False

with open("/path/to/dir/result.csv", "a+") as targetfile:
    for filename in filenames:
        with open("/path/to/dir/" + filename, "r") as f:
            next(f)  # << only if the first line contains headers
            for line in f:
                data = line.split(sep)
                if check_data(data):
                    targetfile.write(line)
Update: An example of the check_data method, following your comments:
def check_data(data):
    return data[n] == 'USA'  # < where n is the column holding the country
The TextFileReader you get when passing chunksize is an iterator: each chunk it yields is an ordinary DataFrame, so you can loop over it and then use pd.concat to concatenate the individual dataframes.
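A minimal sketch of that idea, filtering each chunk before keeping it so only the rows you actually need are concatenated (the file names and the geography column/value are placeholders):
import pandas as pd

csv_files = ["file1.csv", "file2.csv", "file3.csv"]  # placeholder file names
frames = []

for csv_file in csv_files:
    # each chunk yielded by the reader is an ordinary DataFrame
    for chunk in pd.read_csv(csv_file, chunksize=100000):
        frames.append(chunk[chunk["geography"] == "some_area"])  # hypothetical filter

combined = pd.concat(frames, ignore_index=True)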

Reading random rows of a large csv file, python, pandas

Could you help me? I am facing a problem reading random rows from a large csv file using pandas 0.18.1 and Python 2.7.10 on Windows (8 GB RAM).
In Read a small random sample from a big CSV file into a Python data frame
I saw an approach; however, it turned out to be very memory-consuming on my PC, namely this part of the code:
n = 100
s = 10
skip = sorted(rnd.sample(xrange(1, n), n-s))  # skip n-s random rows from *.csv
data = pd.read_csv(path, usecols=['Col1', 'Col2'],
                   dtype={'Col1': 'int32', 'Col2': 'int32'}, skiprows=skip)
So if I want to take some random rows from the file, considering not just 100 rows but 100,000, it becomes hard; however, taking non-random rows from the file works almost fine:
skiprows = xrange(100000)
data = pd.read_csv(path, usecols=['Col1', 'Col2'],
                   dtype={'Col1': 'int32', 'Col2': 'int32'}, skiprows=skiprows, nrows=10000)
So the question is: how can I deal with reading a large number of random rows from a large csv file with pandas? Since I can't read the entire csv file, even with chunking, I'm interested specifically in random rows.
Thanks
If memory is the biggest issue, a possible solution might be to use chunks, and randomly select from the chunks
import random
import pandas as pd

n = 100
s = 10
factor = 1  # should be an integer
chunksize = int(s / factor)
reader = pd.read_csv(path, usecols=['Col1', 'Col2'], dtype={'Col1': 'int32', 'Col2': 'int32'}, chunksize=chunksize)
out = []
tot = 0
for df in reader:
    nsample = random.randint(factor, chunksize)
    tot += nsample
    if tot > s:
        nsample = s - (tot - nsample)
    out.append(df.sample(nsample))
    if tot >= s:
        break
data = pd.concat(out)
And you can use factor to control the sizes of the chunks.
I think this is faster than the other methods shown here and may be worth trying.
Say we have already chosen the rows to be skipped in a list skipped. First, I convert it to a boolean lookup table.
# Some preparation:
import numpy as np

skipped = np.asarray(skipped)
# MAX >= number of rows in the file
bool_skipped = np.zeros(MAX, dtype=bool)
bool_skipped[skipped] = True
Main stuff:
from io import StringIO
# in Python 2 use
# from StringIO import StringIO
def load_with_buffer(filename, bool_skipped, **kwargs):
    s_buf = StringIO()
    with open(filename) as file:
        count = -1
        for line in file:
            count += 1
            if bool_skipped[count]:
                continue
            s_buf.write(line)
    s_buf.seek(0)
    df = pd.read_csv(s_buf, **kwargs)
    return df
I tested it as follows:
df = pd.DataFrame(np.random.rand(100000, 100))
df.to_csv('test.csv')
df1 = load_with_buffer('test.csv', bool_skipped, index_col=0)
with 90% of rows skipped. It performs comparably to
pd.read_csv('test.csv', skiprows=skipped, index_col=0)
and is about 3-4 times faster than using dask or reading in chunks.

Read a small random sample from a big CSV file into a Python data frame

The CSV file that I want to read does not fit into main memory. How can I read a few (~10K) random lines of it and do some simple statistics on the selected data frame?
Assuming no header in the CSV file:
import pandas
import random
n = 1000000 #number of records in file
s = 10000 #desired sample size
filename = "data.txt"
skip = sorted(random.sample(range(n),n-s))
df = pandas.read_csv(filename, skiprows=skip)
It would be better if read_csv had a keeprows parameter, or if skiprows took a callback function instead of a list.
With header and unknown file length:
import pandas
import random
filename = "data.txt"
n = sum(1 for line in open(filename)) - 1 #number of records in file (excludes header)
s = 10000 #desired sample size
skip = sorted(random.sample(range(1,n+1),n-s)) #the 0-indexed header will not be included in the skip list
df = pandas.read_csv(filename, skiprows=skip)
@dlm's answer is great, but since v0.20.0 skiprows does accept a callable. The callable receives the row number as an argument.
Note also that their answer for unknown file length relies on iterating through the file twice -- once to get the length, and then another time to read the csv. I have three solutions here which only rely on iterating through the file once, though they all have tradeoffs.
Solution 1: Approximate Percentage
If you can specify what percent of lines you want, rather than how many lines, you don't even need to get the file size and you just need to read through the file once. Assuming a header on the first row:
import pandas as pd
import random
p = 0.01 # 1% of the lines
# keep the header, then take only 1% of lines
# if random from [0,1] interval is greater than 0.01 the row will be skipped
df = pd.read_csv(
    filename,
    header=0,
    skiprows=lambda i: i > 0 and random.random() > p
)
As pointed out in the comments, this only gives approximately the right number of lines, but I think it satisfies the desired usecase.
Solution 2: Every Nth line
This isn't actually a random sample, but depending on how your input is sorted and what you're trying to achieve, this may meet your needs.
n = 100 # every 100th line = 1% of the lines
df = pd.read_csv(filename, header=0, skiprows=lambda i: i % n != 0)
Solution 3: Reservoir Sampling
(Added July 2021)
Reservoir sampling is an elegant algorithm for selecting k items randomly from a stream whose length is unknown, but that you only see once.
The big advantage is that you can use this without having the full dataset on disk, and that it gives you an exactly-sized sample without knowing the full dataset size. The disadvantage is that I don't see a way to implement it in pure pandas; I think you need to drop into Python to read the file and then construct the dataframe afterwards. So you may lose some functionality from read_csv or need to reimplement it, since we're not using pandas to actually read the file.
Taking an implementation of the algorithm from Oscar Benjamin here:
import pandas as pd
from math import exp, log, floor
from random import random, randrange
from itertools import islice
from io import StringIO

def reservoir_sample(iterable, k=1):
    """Select k items uniformly from iterable.
    Returns the whole population if there are k or fewer items
    from https://bugs.python.org/issue41311#msg373733
    """
    iterator = iter(iterable)
    values = list(islice(iterator, k))
    W = exp(log(random())/k)
    while True:
        # skip is geometrically distributed
        skip = floor(log(random())/log(1-W))
        selection = list(islice(iterator, skip, skip+1))
        if selection:
            values[randrange(k)] = selection[0]
            W *= exp(log(random())/k)
        else:
            return values

def sample_file(filepath, k):
    with open(filepath, 'r') as f:
        header = next(f)
        result = [header] + reservoir_sample(f, k)
    df = pd.read_csv(StringIO(''.join(result)))
    return df
The reservoir_sample function returns a list of strings, each of which is a single row, so we just need to turn it into a dataframe at the end. This assumes there is exactly one header row; I haven't thought about how to extend it to other situations.
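For instance, a hypothetical usage (the path is a placeholder):
# sample 10,000 random rows from a large CSV without loading it all at once
df = sample_file("big_file.csv", k=10000)
print(df.shape)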
I tested this locally and it is much faster than the other two solutions. Using a 550 MB csv (January 2020 "Yellow Taxi Trip Records" from the NYC TLC), solution 3 runs in about 1 second, while the other two take ~3-4 seconds.
In my test this is even slightly (~10-20%) faster than @Bar's answer using shuf, which surprises me.
This is not in Pandas, but it achieves the same result much faster through bash, while not reading the entire file into memory:
shuf -n 100000 data/original.tsv > data/sample.tsv
The shuf command will shuffle the input, and the -n argument indicates how many lines we want in the output.
Relevant question: https://unix.stackexchange.com/q/108581
Benchmark on a 7M lines csv available here (2008):
Top answer:
import random
import pandas

def pd_read():
    filename = "2008.csv"
    n = sum(1 for line in open(filename)) - 1  # number of records in file (excludes header)
    s = 100000  # desired sample size
    skip = sorted(random.sample(range(1, n+1), n-s))  # the 0-indexed header will not be included in the skip list
    df = pandas.read_csv(filename, skiprows=skip)
    df.to_csv("temp.csv")
Timing for pandas:
%time pd_read()
CPU times: user 18.4 s, sys: 448 ms, total: 18.9 s
Wall time: 18.9 s
While using shuf:
time shuf -n 100000 2008.csv > temp.csv
real 0m1.583s
user 0m1.445s
sys 0m0.136s
So shuf is about 12x faster and importantly does not read the whole file into memory.
Here is an algorithm that doesn't require counting the number of lines in the file beforehand, so you only need to read the file once.
Say you want m samples. First, the algorithm keeps the first m samples. When it sees the i-th sample (i > m), with probability m/i, the algorithm uses the sample to randomly replace an already selected sample.
By doing so, for any i > m, we always have a subset of m samples randomly selected from the first i samples.
See code below:
import random

n_samples = 10
samples = []
# f is assumed to be an already-open file handle, e.g. f = open("data.csv")
for i, line in enumerate(f):
    if i < n_samples:
        samples.append(line)
    elif random.random() < n_samples * 1. / (i+1):
        samples[random.randint(0, n_samples-1)] = line
The following code reads first the header, and then a random sample on the other lines:
import pandas as pd
import numpy as np
filename = 'hugedatafile.csv'
nlinesfile = 10000000
nlinesrandomsample = 10000
lines2skip = np.random.choice(np.arange(1,nlinesfile+1), (nlinesfile-nlinesrandomsample), replace=False)
df = pd.read_csv(filename, skiprows=lines2skip)
from random import randint

class magic_checker:
    def __init__(self, target_count):
        self.target = target_count
        self.count = 0
    def __eq__(self, x):
        self.count += 1
        return self.count >= self.target

min_target = 100000
max_target = min_target*2
nlines = randint(100, 1000)
seek_target = randint(min_target, max_target)
with open("big.csv") as f:
    f.seek(seek_target)
    f.readline()  # discard this line
    rand_lines = list(iter(lambda: f.readline(), magic_checker(nlines)))

# do something to process the lines you got returned .. perhaps just a split
print rand_lines
print rand_lines[0].split(",")
something like that should work I think
No pandas!
import random
from os import fstat
from sys import exit

f = open('/usr/share/dict/words')

# Number of lines to be read
lines_to_read = 100

# Minimum and maximum bytes that will be randomly skipped
min_bytes_to_skip = 10000
max_bytes_to_skip = 1000000

def is_EOF():
    return f.tell() >= fstat(f.fileno()).st_size

# To accumulate the read lines
sampled_lines = []

for n in xrange(lines_to_read):
    bytes_to_skip = random.randint(min_bytes_to_skip, max_bytes_to_skip)
    f.seek(bytes_to_skip, 1)
    # After skipping "bytes_to_skip" bytes, we can stop in the middle of a line
    # Skip current entire line
    f.readline()
    if not is_EOF():
        sampled_lines.append(f.readline())
    else:
        # Go to the beginning of the file ...
        f.seek(0, 0)
        # ... and skip lines again
        f.seek(bytes_to_skip, 1)
        # If it has reached the EOF again
        if is_EOF():
            print "You have skipped more lines than your file has"
            print "Reduce the values of:"
            print "    min_bytes_to_skip"
            print "    max_bytes_to_skip"
            exit(1)
        else:
            f.readline()
            sampled_lines.append(f.readline())

print sampled_lines
You'll end up with a sampled_lines list. What kind of statistics do you mean?
use subsample
pip install subsample
subsample -n 1000 file.csv > file_1000_sample.csv
You can also create a sample of 10,000 records before bringing it into the Python environment.
Using Git Bash (Windows 10) I just ran the following command to produce the sample
shuf -n 10000 BIGFILE.csv > SAMPLEFILE.csv
Note: if your CSV has headers, this is not the best solution.
TL;DR
If you know the size of the sample you want, but not the size of the input file, you can efficiently load a random sample out of it with the following pandas code:
import pandas as pd
import numpy as np

filename = "data.csv"
sample_size = 10000
batch_size = 200
rng = np.random.default_rng()

sample_reader = pd.read_csv(filename, dtype=str, chunksize=batch_size)
sample = sample_reader.get_chunk(sample_size)

for chunk in sample_reader:
    chunk.index = rng.integers(sample_size, size=len(chunk))
    sample.loc[chunk.index] = chunk
Explanation
It's not always trivial to know the size of the input CSV file.
If there are embedded line breaks, tools like wc or shuf will give you the wrong answer or just make a mess out of your data.
So, based on desktable's answer, we can treat the first sample_size lines of the file as the initial sample and then, for each subsequent line in the file, randomly replace a line in the initial sample.
To do that efficiently, we load the CSV file using a TextFileReader by passing the chunksize= parameter:
sample_reader = pd.read_csv(filename, dtype=str, chunksize=batch_size)
First, we get the initial sample:
sample = sample_reader.get_chunk(sample_size)
Then, we iterate over the remaining chunks of the file, replacing the index of each chunk with a sequence of random integers as long as the size of the chunk, where each integer is in the range of the index of the initial sample (which happens to be the same as range(sample_size)):
for chunk in sample_reader:
    chunk.index = rng.integers(sample_size, size=len(chunk))
And use this reindexed chunk to replace (some of the) lines in the sample:
sample.loc[chunk.index] = chunk
After the for loop, you'll have a dataframe at most sample_size rows long, but with random lines selected from the big CSV file.
To make the loop more efficient, you can make batch_size as large as your memory allows (and yes, even larger than sample_size if you can).
Notice that, while creating the new chunk index with np.random.default_rng().integers(), we use len(chunk) as the new chunk index size instead of simply batch_size because the last chunk in the loop could be smaller.
On the other hand, we use sample_size instead of len(sample) as the "range" of the random integers, even though there could be less lines in the file than sample_size. This is because there won't be any chunks left to loop over in this case so that will never be a problem.
Read the data file:
import pandas as pd
df = pd.read_csv('data.csv')
First, check the shape of df:
df.shape
Create a small sample of 1000 rows from df:
sample_data = df.sample(n=1000, replace=False)
# check the shape of sample_data
sample_data.shape
For example, if you have loan.csv, you can use this script to easily load the specified number of random items.
data = pd.read_csv('loan.csv').sample(10000, random_state=44)
Let's say that you want to load a 20% sample of the dataset:
import pandas as pd
df = pd.read_csv(filepath).sample(frac = 0.20)
