I have a 10 GB text file, from which I want to find and delete a multi-line chunk. This chunk is given as another 10 MB text file, constituting a contiguous section appearing once in the large file and spanning complete lines. Assuming I do not have enough memory to process the whole 10 GB in memory, what would be the easiest way to do so in some scripting language?
Example:
big.txt:
...
I have a 10 GB text file, from which I want to find and delete a multi-line chunk.
This chunk is given as another 10 MB text file,
constituting a contiguous section appearing once in the large file and spanning complete lines.
Assuming I do not have enough memory to process the whole 10 GB in memory,
what would be the easiest way to do so in some scripting language?
...
chunk.txt:
This chunk is given as another 10 MB text file,
constituting a contiguous section appearing once in the large file and spanning complete lines.
result.txt:
...
I have a 10 GB text file, from which I want to find and delete a multi-line chunk.
Assuming I do not have enough memory to process the whole 10 GB in memory,
what would be the easiest way to do so in some scripting language?
...
Following this comment, I implemented a Python script that solves my issue using mmap, and it also works under more general conditions:
it does not require complete lines
it handles multiple non-overlapping matches
it handles multiple chunk files, processed in order of decreasing file size
it works on bytes
the chunks themselves can be very large
Code:
"""Usage: python3 delchunk.py BIGFILE CHUNK_FILE_OR_FOLDER [OUTFILE]
Given a large file BIGFILE, delete all complete non-overlapping possibly large chunks given by CHUNK_FILE_OR_FOLDER
Multiple chunks will be deleted from the largest to the smallest
If OUTFILE is not given, result will be saved to BIGFILE.delchunk
"""
import mmap
import os
import shutil
import sys
if len(sys.argv) < 3:
    print(__doc__)
    sys.exit(1)

output = sys.argv[3] if len(sys.argv) > 3 else sys.argv[1] + '.delchunk'
if sys.argv[1] != output:
    shutil.copy(sys.argv[1], output)

if os.path.isdir(sys.argv[2]):
    # collect all chunk files in the folder, largest first
    chunks = sorted([os.path.join(sys.argv[2], chunk) for chunk in os.listdir(sys.argv[2]) if os.path.isfile(os.path.join(sys.argv[2], chunk))], key=os.path.getsize, reverse=True)
else:
    chunks = [sys.argv[2]]

with open(output, 'r+b') as bigfile, mmap.mmap(bigfile.fileno(), 0) as bigmap:
    for chunk in chunks:
        with open(chunk, 'rb') as chunkfile, mmap.mmap(chunkfile.fileno(), 0, access=mmap.ACCESS_READ) as chunkmap:
            i = 0
            while True:
                start = bigmap.rfind(chunkmap)
                if start == -1:
                    break
                i += 1
                end = start + len(chunkmap)
                print('Deleting chunk %s (%d) at %d:%d' % (chunk, i, start, end))
                # shift the tail of the file left over the match, then shrink the map
                bigmap.move(start, end, len(bigmap) - end)
                bigmap.resize(len(bigmap) - len(chunkmap))
            if not i:
                print('Chunk %s not found' % chunk)
            else:
                bigmap.flush()
https://gist.github.com/eyaler/971efea29648af023e21902b9fa56f08
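For reference, an invocation matching the example at the top would be:

python3 delchunk.py big.txt chunk.txt result.txt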
Related
I have a BZ2 file of more than 10GB. I'd like to read it without decompressing it into a temporary file (it would be more than 50GB).
With this method:
import bz2, time
t0 = time.time()
time.sleep(0.001) # to avoid / by 0
with bz2.open(r"F:\test.bz2", 'rb') as f:
    for i, l in enumerate(f):
        if i % 100000 == 0:
            print('%i lines/sec' % (i / (time.time() - t0)))
I can only read ~250k lines per second. On a similar file, first decompressed, I get ~3M lines per second, i.e. roughly a 10x factor:
with open(r"F:\test.txt", 'rb') as f:
I think it's not only due to the intrinsic decompression CPU time (the total time to decompress into a temp file plus the time to read the uncompressed file is much smaller than the method described here), but perhaps also due to a lack of buffering or other reasons. Are there other, faster Python implementations of bz2.open?
How can I speed up the reading of a BZ2 file, in binary mode, and loop over "lines" (separated by \n)?
Note: currently, the time to decompress test.bz2 into test.tmp plus the time to iterate over the lines of test.tmp is far smaller than the time to iterate over the lines of bz2.open('test.bz2'), and this probably should not be the case.
Linked topic: https://discuss.python.org/t/non-optimal-bz2-reading-speed/6869
You can use BZ2Decompressor to deal with huge files. It decompresses blocks of data incrementally, just out of the box:
import bz2
import time

t0 = time.time()
time.sleep(0.000001)  # avoid division by zero in the rate computation
with open('temp.bz2', 'rb') as fi:
    decomp = bz2.BZ2Decompressor()
    residue = b''
    total_lines = 0
    for data in iter(lambda: fi.read(100 * 1024), b''):
        # prepend the residue of the previous block to the freshly decompressed data
        raw = residue + decomp.decompress(data)
        current_block = raw.split(b'\n')
        # the last element is either an incomplete line or b'' if raw ended with
        # a newline; carry it over to the next iteration
        residue = current_block.pop()
        # process_data(current_block) => do the processing of the current data block
        total_lines += len(current_block)
        print('%i lines/sec' % (total_lines / (time.time() - t0)))
    # process_data(residue) => now finish processing the last (partial) line
    if residue:
        total_lines += 1
    print('Final: %i lines/sec' % (total_lines / (time.time() - t0)))
Here I read a chunk of the binary file, feed it into the decompressor and receive a chunk of decompressed data. Be aware that the decompressed chunks have to be concatenated correctly to restore the original data, which is why the last, possibly incomplete line needs special treatment.
In my experiments it runs a little faster than your solution with io.BytesIO(). bz2 is known to be slow, so if that bothers you, consider migrating to snappy or zstandard.
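For instance, with the third-party zstandard package (a hedged sketch: it assumes the data has been recompressed as test.zst and uses that package's streaming reader):

import io
import zstandard  # third-party: pip install zstandard

with open('test.zst', 'rb') as fh:
    dctx = zstandard.ZstdDecompressor()
    with dctx.stream_reader(fh) as reader:
        text = io.TextIOWrapper(reader, encoding='utf-8')
        for line in text:
            pass  # process the line here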
Regarding the time it takes to process bz2 in Python: it might be fastest to decompress the file into a temporary one using a Linux utility and then process a normal text file. Otherwise you will be dependent on Python's implementation of bz2.
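A related option (a minimal sketch, assuming the bzcat utility is available) is to let a native decompressor do the CPU-heavy work and stream its output straight into Python, which avoids even the temporary file:

import subprocess

proc = subprocess.Popen(['bzcat', 'test.bz2'], stdout=subprocess.PIPE, bufsize=1024 * 1024)
count = 0
for line in proc.stdout:  # proc.stdout is a buffered binary pipe, iterated line by line
    count += 1
proc.wait()
print('%i lines' % count)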
The following method already gives a 2x improvement over native bz2.open:
import bz2, time, io

def chunked_readlines(f):
    s = io.BytesIO()
    while True:
        buf = f.read(1024 * 1024)
        if not buf:
            # flush whatever is left; the final line may lack a trailing newline
            last = s.getvalue()
            if last:
                yield last
            return
        s.write(buf)
        s.seek(0)
        L = s.readlines()
        yield from L[:-1]
        s = io.BytesIO()
        s.write(L[-1])  # very important: the last line read in the 1 MB chunk might be
                        # incomplete, so we keep it to be processed in the next iteration
        # TODO: check if this is ok if f.read() stopped in the middle of a \r\n?

t0 = time.time()
i = 0
with bz2.open(r"D:\test.bz2", 'rb') as f:
    for l in chunked_readlines(f):  # 500k lines per second
    # for l in f:                   # 250k lines per second
        i += 1
        if i % 100000 == 0:
            print('%i lines/sec' % (i / (time.time() - t0)))
It is probably possible to do even better.
We could get a 4x improvement if we could use s as a simple bytes object instead of an io.BytesIO, but unfortunately in that case splitlines() does not behave as expected: splitlines() and iterating over an opened file give different results.
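For reference, here is a hedged sketch of that bytes-based variant: it splits on b'\n' explicitly instead of using splitlines(), and carries the incomplete tail over to the next chunk (the function name is illustrative):

import bz2

def chunked_lines_bytes(f, chunk_size=1024 * 1024):
    tail = b''
    while True:
        buf = f.read(chunk_size)
        if not buf:
            if tail:
                yield tail  # final line without a trailing newline
            return
        buf = tail + buf
        complete, sep, tail = buf.rpartition(b'\n')
        if sep:
            # lines are yielded without their trailing newline
            yield from complete.split(b'\n')
        # if no newline was found, everything stays in tail for the next round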
I have a very large big-endian binary file. I know how many numbers are in this file. I found a solution for reading a big-endian file using struct, and it works perfectly if the file is small:
data = []
file = open('some_file.dat', 'rb')
for i in range(0, numcount):
    data.append(struct.unpack('>f', file.read(4))[0])
But this code works very slowly if the file size is more than ~100 MB.
My current file is 1.5 GB and contains 399,513,600 float numbers. The above code processes this file in about 8 minutes.
I found another solution, that works faster:
datafile = open('some_file.dat', 'rb').read()
f_len = ">" + "f" * numcount #numcount = 399513600
numbers = struct.unpack(f_len, datafile)
This code runs in about 1.5 minutes, but that is still too slow for me. Earlier I wrote the same functionality in Fortran and it ran in about 10 seconds.
In Fortran I open the file with a "big-endian" flag and can simply read the file into a REAL array without any conversion, but in Python I have to read the file as a string and convert every 4 bytes into a float using struct. Is it possible to make the program run faster?
You can use numpy.fromfile to read the file, and specify that the type is big-endian by passing > in the dtype parameter:
numpy.fromfile(filename, dtype='>f')
There is an array.fromfile method too, but unfortunately I cannot see any way to control endianness there, so depending on your use case it might let you avoid the third-party dependency, or it might be useless.
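That said, one possible workaround using only the standard library (a minimal sketch, reusing numcount from the question) is to read the raw floats with array.fromfile and byte-swap them on little-endian machines:

import array
import sys

floats = array.array('f')  # 4-byte C floats
with open('some_file.dat', 'rb') as f:
    floats.fromfile(f, numcount)
if sys.byteorder == 'little':
    floats.byteswap()  # the file is big-endian, so swap on little-endian machines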
The following approach gave a good speed up for me:
import struct
import time

block_size = 4096  # number of floats to read per iteration
start = time.time()
with open('some_file.dat', 'rb') as f_input:
    data = []
    while True:
        block = f_input.read(block_size * 4)  # 4 bytes per big-endian float
        data.extend(struct.unpack('>{}f'.format(len(block) // 4), block))
        if len(block) < block_size * 4:
            break

print "Time taken: {:.2f}".format(time.time() - start)
print "Length", len(data)
Rather than using >fffffff you can specify a repeat count, e.g. >1000f. The code reads the file 4096 floats (16 KB) at a time; when the final read returns fewer bytes, it unpacks whatever was read and exits the loop.
From the struct - Format Characters documentation:
A format character may be preceded by an integral repeat count. For
example, the format string '4h' means exactly the same as 'hhhh'.
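As a quick self-contained illustration of the repeat count (the values here are made up):

import struct

packed = struct.pack('>4f', 1.0, 2.0, 3.0, 4.0)  # '>4f' is equivalent to '>ffff'
print(struct.unpack('>4f', packed))  # (1.0, 2.0, 3.0, 4.0)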
def read_big_endian(filename):
    # assumes a 2-bytes-per-character text file (byte-order mark FF FE) whose
    # characters are plain ASCII, so keeping one byte per pair recovers the text
    all_text = ""
    with open(filename, "rb") as template:
        template.read(2)  # skip the byte-order mark: first 2 bytes are FF FE
        while True:
            dchar = template.read(2)
            if len(dchar) < 2:
                break
            all_text += dchar[0]
    return all_text

def save_big_endian(filename, text):
    with open(filename, "wb") as fic:
        fic.write(chr(255) + chr(254))  # write the byte-order mark: FF FE
        for letter in text:
            fic.write(letter + chr(0))  # pad each character back to 2 bytes
Used to read .rdp files
I have a large CSV file that I would like to split into a number of pieces equal to the number of CPU cores in the system. I then want to use multiprocessing to have all the cores work on the file together. However, I am having trouble even splitting the file into parts. I've looked all over Google and found some sample code that appears to do what I want. Here is what I have so far:
def split(infilename, num_cpus=multiprocessing.cpu_count()):
READ_BUFFER = 2**13
total_file_size = os.path.getsize(infilename)
print total_file_size
files = list()
with open(infilename, 'rb') as infile:
for i in xrange(num_cpus):
files.append(tempfile.TemporaryFile())
this_file_size = 0
while this_file_size < 1.0 * total_file_size / num_cpus:
files[-1].write(infile.read(READ_BUFFER))
this_file_size += READ_BUFFER
files[-1].write(infile.readline()) # get the possible remainder
files[-1].seek(0, 0)
return files
files = split("sample_simple.csv")
print len(files)
for ifile in files:
reader = csv.reader(ifile)
for row in reader:
print row
The two prints show the correct file size and that it was split into 4 pieces (my system has 4 CPU cores).
However, the last section of the code that prints all the rows in each of the pieces gives the error:
for row in reader:
_csv.Error: line contains NULL byte
I tried printing the rows without running the split function and it prints all the values correctly. I suspect the split function has added some NULL bytes to the resulting 4 file pieces but I'm not sure why.
Does anyone know whether this is a correct and fast method to split the file? I just want the resulting pieces to be readable by csv.reader.
As I said in a comment, csv files would need to be split on row (or line) boundaries. Your code doesn't do this and potentially breaks them up somewhere in the middle of one — which I suspect is the cause of your _csv.Error.
The following avoids that problem by processing the input file as a series of lines. I've tested it and it seems to work standalone, in the sense that it divided the sample file into approximately equal-sized chunks; approximately, because it's unlikely that a whole number of rows will fit exactly into a chunk.
Update
This is a substantially faster version of the code than the one I originally posted. The improvement comes from using the temp file's own tell() method to determine the constantly changing length of the file as it's being written, instead of calling os.path.getsize(), which eliminates the need to flush() the file and call os.fsync() on it after each row is written.
import csv
import multiprocessing
import os
import tempfile
def split(infilename, num_chunks=multiprocessing.cpu_count()):
READ_BUFFER = 2**13
in_file_size = os.path.getsize(infilename)
print 'in_file_size:', in_file_size
chunk_size = in_file_size // num_chunks
print 'target chunk_size:', chunk_size
files = []
with open(infilename, 'rb', READ_BUFFER) as infile:
for _ in xrange(num_chunks):
temp_file = tempfile.TemporaryFile()
while temp_file.tell() < chunk_size:
try:
temp_file.write(infile.next())
except StopIteration: # end of infile
break
temp_file.seek(0) # rewind
files.append(temp_file)
return files
files = split("sample_simple.csv", num_chunks=4)
print 'number of files created: {}'.format(len(files))
for i, ifile in enumerate(files, start=1):
print 'size of temp file {}: {}'.format(i, os.path.getsize(ifile.name))
print 'contents of file {}:'.format(i)
reader = csv.reader(ifile)
for row in reader:
print row
print ''
I have this method:
def get_chunksize(path):
"""
Breaks a file into chunks and yields the chunk sizes.
Number of chunks equals the number of available cores.
Ensures that each chunk ends at an EOL.
"""
size = os.path.getsize(path)
cores = mp.cpu_count()
chunksize = size/cores # gives truncated integer
f = open(path)
while 1:
start = f.tell()
f.seek(chunksize, 1) # Go to the next chunk
s = f.readline() # Ensure the chunk ends at the end of a line
yield start, f.tell()-start
if not s:
break
It is supposed to break a file into chunks and return the start of the chunk (in bytes) and the chunk size.
Crucially, the end of a chunk should correspond to the end of a line (which is why the f.readline() behaviour is there), but I am finding that my chunks are not seeking to an EOL at all.
The purpose of the method is to then read chunks which can be passed to a csv.reader instance (via StringIO) for further processing.
I've been unable to spot anything obviously wrong with the function...any ideas why it is not moving to the EOL?
I came up with this rather clunky alternative:
def line_chunker(path):
size = os.path.getsize(path)
cores = mp.cpu_count()
chunksize = size/cores # gives truncated integer
f = open(path)
while True:
part = f.readlines(chunksize)
yield csv.reader(StringIO("".join(part)))
if not part:
break
This will split the file into chunks with a csv reader for each chunk, but the last chunk is always empty (??) and having to join the list of strings back together is rather clunky.
if not s:
break
Instead of looking at s to decide whether you're at the end of the file, check whether you have actually reached the end of the file by using:
if size == f.tell(): break
This should fix it. I wouldn't depend on a CSV file having a single record per line, though. I've worked with several CSV files that have strings containing newlines:
first,last,message
sue,ee,hello
bob,builder,"hello,
this is some text
that I entered"
jim,bob,I'm not so creative...
Notice that the 2nd record (bob) spans 3 lines. csv.reader can handle this. If the idea is to do some CPU-intensive work on a CSV, I'd create an array of threads, each with a buffer of n records, and have the csv.reader pass a record to each thread round-robin, skipping a thread if its buffer is full, along the lines of the sketch below.
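A rough sketch of that design (a sketch only: the names are illustrative, and for brevity the reader blocks when a worker's buffer is full instead of skipping it):

import csv
import itertools
import queue
import threading

NUM_WORKERS = 4
BUFFER_RECORDS = 100  # the "n records" buffered per worker

def worker(q):
    while True:
        record = q.get()
        if record is None:  # sentinel: no more work
            break
        # do the CPU-intensive per-record work here

queues = [queue.Queue(maxsize=BUFFER_RECORDS) for _ in range(NUM_WORKERS)]
threads = [threading.Thread(target=worker, args=(q,)) for q in queues]
for t in threads:
    t.start()

with open('sample_simple.csv', newline='') as f:
    # round-robin dispatch: csv.reader correctly handles records spanning several lines
    for record, q in zip(csv.reader(f), itertools.cycle(queues)):
        q.put(record)  # blocks while that worker's buffer is full

for q in queues:
    q.put(None)  # tell every worker to finish
for t in threads:
    t.join()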
Hope this helps - enjoy.
I want to read a large file (>5GB), line by line, without loading its entire contents into memory. I cannot use readlines() since it creates a very large list in memory.
Use a for loop on a file object to read it line-by-line. Use with open(...) to let a context manager ensure that the file is closed after reading:
with open("log.txt") as infile:
for line in infile:
print(line)
All you need to do is use the file object as an iterator.
for line in open("log.txt"):
do_something_with(line)
Even better is using a context manager, in recent Python versions:
with open("log.txt") as fileobject:
for line in fileobject:
do_something_with(line)
This will automatically close the file as well.
Please try this:
with open('filename','r',buffering=100000) as f:
for line in f:
print(line)
An old school approach:
fh = open(file_name, 'rt')
line = fh.readline()
while line:
# do stuff with line
line = fh.readline()
fh.close()
You are better off using an iterator instead.
Relevant: fileinput — Iterate over lines from multiple input streams.
From the docs:
import fileinput
for line in fileinput.input("filename", encoding="utf-8"):
process(line)
This will avoid copying the whole file into memory at once.
Here's what you can do if you don't have newlines in the file:
with open('large_text.txt') as f:
while True:
c = f.read(1024)
if not c:
break
print(c,end='')
I couldn't believe that it could be as easy as @john-la-rooy's answer made it seem. So, I recreated the cp command using line-by-line reading and writing. It's CRAZY FAST.
#!/usr/bin/env python3.6
import sys
with open(sys.argv[2], 'w') as outfile:
with open(sys.argv[1]) as infile:
for line in infile:
outfile.write(line)
The blaze project has come a long way over the last 6 years. It has a simple API covering a useful subset of pandas features.
dask.dataframe takes care of chunking internally, supports many parallelisable operations and allows you to export slices back to pandas easily for in-memory operations.
import dask.dataframe as dd
df = dd.read_csv('filename.csv')
df.head(10) # return first 10 rows
df.tail(10) # return last 10 rows
# iterate rows
for idx, row in df.iterrows():
...
# group by my_field and return mean
df.groupby(df.my_field).value.mean().compute()
# slice by column
df[df.my_field=='XYZ'].compute()
Here's the code for loading text files of any size without causing memory issues.
It supports gigabyte-sized files:
https://gist.github.com/iyvinjose/e6c1cb2821abd5f01fd1b9065cbc759d
Download the file data_loading_utils.py and import it into your code.
Usage:
import data_loading_utils
file_name = 'file_name.ext'
CHUNK_SIZE = 1000000

def process_lines(data, eof, file_name):
    # check if end of file reached
    if not eof:
        # process data; data is one single line of the file
        pass
    else:
        # end of file reached
        pass

data_loading_utils.read_lines_from_file_as_data_chunks(file_name, chunk_size=CHUNK_SIZE, callback=process_lines)
process_lines is the callback function. It is called once per line, with the data parameter holding one single line of the file at a time.
You can configure CHUNK_SIZE depending on your machine's hardware configuration.
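If you prefer not to depend on the external gist, here is a rough sketch (not the gist's actual code, just an illustration of the same callback interface) of what such a reader could look like:

def read_lines_from_file_as_data_chunks(file_name, chunk_size, callback):
    with open(file_name) as f:
        leftover = ''
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                if leftover:
                    callback(data=leftover, eof=False, file_name=file_name)
                callback(data=None, eof=True, file_name=file_name)
                break
            lines = (leftover + chunk).split('\n')
            leftover = lines.pop()  # possibly incomplete last line, kept for the next read
            for line in lines:
                callback(data=line, eof=False, file_name=file_name)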
How about this?
Divide your file into chunks and read it chunk by chunk: when you read from a file, your operating system caches the data that follows, so reading line by line does not make efficient use of that cached information.
Instead, divide the file into chunks, load a whole chunk into memory, and then do your processing.
def chunks(fh, size=1024):
    while 1:
        startat = fh.tell()
        print startat  # file object's current position from the start
        fh.seek(size, 1)  # offset from current position --> 1
        data = fh.readline()  # read on to the end of the current line
        yield startat, fh.tell() - startat  # doesn't store the whole list in memory
        if not data:
            break

if os.path.isfile(fname):
    try:
        fh = open(fname, 'rb')
    except IOError as e:  # file --> permission denied
        print "I/O error({0}): {1}".format(e.errno, e.strerror)
    except Exception as e1:  # handle other exceptions such as attribute errors
        print "Unexpected error: {0}".format(e1)
    else:
        for ele in chunks(fh):
            fh.seek(ele[0])  # seek back to the start of the chunk
            data = fh.read(ele[1])  # read exactly the chunk's length
            print data
Thank you! I have recently converted to Python 3 and have been frustrated by using readlines(0) to read large files. This solved the problem. But to get each line, I had to do a couple of extra steps. Each line was preceded by a "b'", which I guess means it was in binary format. Using decode('utf-8') changed it to ASCII.
Then I had to remove a "=\n" in the middle of each line.
Then I split the lines at the newlines.
b_data = fh.read(ele[1])  # one chunk of data, read as bytes
a_data = binascii.b2a_qp(b_data).decode('utf-8')  # chunk as quoted-printable ASCII (requires: import binascii)
data_chunk = a_data.replace('=\n', '').strip()  # soft line breaks removed
data_list = data_chunk.split('\n')  # list containing the lines in the chunk
#print(data_list,'\n')
#time.sleep(1)
for j in range(len(data_list)):  # iterate through data_list to get each line
    i += 1
    line_of_data = data_list[j]
    print(line_of_data)
Here is the code starting just above "print data" in Arohi's code.
This is the best solution I found for this; I tried it on a 330 MB file.
lineno = 500
line_length = 8
with open('catfour.txt', 'r') as file:
file.seek(lineno * (line_length + 2))
print(file.readline(), end='')
Here line_length is the number of characters in a single line; for example, "abcd" has line length 4.
I added 2 to the line length to skip the line-ending characters and move to the start of the next line.
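This only works when every line has exactly the same length. A small generalization (the helper name is illustrative; it opens the file in binary mode so the byte arithmetic is exact):

def read_fixed_length_line(path, lineno, line_length, ending_bytes=2):
    with open(path, 'rb') as f:
        f.seek(lineno * (line_length + ending_bytes))
        return f.readline().decode()

print(read_fixed_length_line('catfour.txt', 500, 8))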
I realise this was answered quite some time ago, but here is a way of doing it in parallel without killing your memory overhead (which would be the case if you tried to fire each line into the pool individually). Obviously swap the readJSON_line2 function out for something sensible; it's just there to illustrate the point!
The speedup will depend on the file size and on what you are doing with each line, but in the worst-case scenario of a small file that is merely read with the JSON reader, I'm seeing performance similar to the single-threaded version with the settings below.
Hopefully this is useful to someone out there:
def readJSON_line2(linesIn):
#Function for reading a chunk of json lines
'''
Note, this function is nonsensical. A user would never use the approach suggested
for reading in a JSON file,
its role is to evaluate the MT approach for full line by line processing to both
increase speed and reduce memory overhead
'''
import json
linesRtn = []
for lineIn in linesIn:
        if lineIn.strip() != "":
lineRtn = json.loads(lineIn)
else:
lineRtn = ""
linesRtn.append(lineRtn)
return linesRtn
# -------------------------------------------------------------------
if __name__ == "__main__":
    import multiprocessing as mp

    path1 = "C:\\user\\Documents\\"
    file1 = "someBigJson.json"

    nCPUs = mp.cpu_count()  # number of worker processes
    pool = mp.Pool(nCPUs)   # SMP worker pool used below
    nBuffer = 20*nCPUs      # How many chunks are queued up (so cpus aren't waiting on processes spawning)
nChunk = 1000 # How many lines are in each chunk
#Both of the above will require balancing speed against memory overhead
iJob = 0 #Tracker for SMP jobs submitted into pool
iiJob = 0 #Tracker for SMP jobs extracted back out of pool
jobs = [] #SMP job holder
MTres3 = [] #Final result holder
chunk = []
iBuffer = 0 # Buffer line count
with open(path1+file1) as f:
for line in f:
#Send to the chunk
if len(chunk) < nChunk:
chunk.append(line)
else:
#Chunk full
#Don't forget to add the current line to chunk
chunk.append(line)
#Then add the chunk to the buffer (submit to SMP pool)
jobs.append(pool.apply_async(readJSON_line2, args=(chunk,)))
iJob +=1
iBuffer +=1
#Clear the chunk for the next batch of entries
chunk = []
#Buffer is full, any more chunks submitted would cause undue memory overhead
#(Partially) empty the buffer
if iBuffer >= nBuffer:
temp1 = jobs[iiJob].get()
for rtnLine1 in temp1:
MTres3.append(rtnLine1)
iBuffer -=1
iiJob+=1
#Submit the last chunk if it exists (as it would not have been submitted to SMP buffer)
if chunk:
jobs.append(pool.apply_async(readJSON_line2, args=(chunk,)))
iJob +=1
iBuffer +=1
#And gather up the last of the buffer, including the final chunk
while iiJob < iJob:
temp1 = jobs[iiJob].get()
for rtnLine1 in temp1:
MTres3.append(rtnLine1)
iiJob+=1
#Cleanup
del chunk, jobs, temp1
pool.close()
This might be useful when you want to work in parallel and read the data in chunks, while still keeping the chunk boundaries on newlines.
def readInChunks(fileObj, chunkSize=1024):
    while True:
        data = fileObj.read(chunkSize)
        if not data:
            break
        # extend the chunk to the next newline so it ends on a line boundary
        while data[-1:] != '\n':
            extra = fileObj.read(1)
            if not extra:
                break  # end of file without a trailing newline
            data += extra
        yield data
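Hypothetical usage, where process stands in for whatever per-chunk work you do:

with open('large_text.txt') as f:
    for chunk in readInChunks(f):
        process(chunk)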