I have written this very simple script to create some random data for machine learning.
from random import randint

f = open('2014-07-17-1M_testdata_1Mx500.csv', 'w', 50000000)  # 50 MB write buffer
for i in range(1000000):  # num rows
    for i2 in range(500):  # entries per row
        f.write(str(randint(0, 1000000)))  # returns a random integer N such that a <= N <= b
        if i2 != 499:  # entries per row - 1
            f.write(",")
    f.write("\n")
    if i != 0 and i % 100000 == 0:
        print(str(i) + " lines written")
f.close()
However, I've noticed that one CPU core runs at 100% load and that generating the data takes much longer than the disk speed would allow.
For creating large datasets (100+ GB), is there an easy way to speed this up? Some faster random library perhaps?
Pure Python is a tough one, but luckily there are efficient Python libraries that can help speed things up. numpy is a good one:
import numpy
import numpy.random

f = open('2014-07-17-1M_testdata_1Mx500.csv', 'w', 50000000)
for i in range(1000):
    m = numpy.random.random_integers(0, 1000000, (1000, 500))
    numpy.savetxt(f, m, delimiter=',')
f.close()
Running on my MacBook Pro, this code is clearly bound by writing to the disk rather than by the CPU, so it seems to do the trick.
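As a side note for newer NumPy versions: random_integers() is deprecated there, and the Generator API does the same job. A minimal sketch of the same idea, assuming NumPy 1.17+; the fmt='%d' keeps savetxt from writing floats:

import numpy as np

rng = np.random.default_rng()
with open('2014-07-17-1M_testdata_1Mx500.csv', 'w') as f:
    for i in range(1000):  # 1000 blocks of 1000 rows = 1M rows total
        m = rng.integers(0, 1000000, size=(1000, 500), endpoint=True)  # inclusive upper bound
        np.savetxt(f, m, fmt='%d', delimiter=',')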
Related
Is there a memory limit for Python? I've been using a Python script to calculate average values from files that are at least 150 MB in size.
Depending on the size of the file I sometimes encounter a MemoryError.
Can more memory be assigned to Python so that I don't encounter the error?
EDIT: Code now below
NOTE: The file sizes can vary greatly (up to 20 GB); the minimum size of a file is 150 MB.
file_A1_B1 = open("A1_B1_100000.txt", "r")
file_A2_B2 = open("A2_B2_100000.txt", "r")
file_A1_B2 = open("A1_B2_100000.txt", "r")
file_A2_B1 = open("A2_B1_100000.txt", "r")

file_write = open("average_generations.txt", "w")
mutation_average = open("mutation_average", "w")

files = [file_A2_B2,file_A2_B2,file_A1_B2,file_A2_B1]

for u in files:
    line = u.readlines()
    list_of_lines = []
    for i in line:
        values = i.split('\t')
        list_of_lines.append(values)

    count = 0
    for j in list_of_lines:
        count +=1

    for k in range(0,count):
        list_of_lines[k].remove('\n')

    length = len(list_of_lines[0])
    print_counter = 4

    for o in range(0,length):
        total = 0
        for p in range(0,count):
            number = float(list_of_lines[p][o])
            total = total + number
        average = total/count
        print average
        if print_counter == 4:
            file_write.write(str(average)+'\n')
            print_counter = 0
        print_counter +=1
    file_write.write('\n')
(This is my third answer because I misunderstood what your code was doing in my original, and then made a small but crucial mistake in my second; hopefully three's a charm.)
Edits: Since this seems to be a popular answer, I've made a few modifications to improve its implementation over the years, most of them not too major, so that if folks use it as a template it will provide an even better basis.
As others have pointed out, your MemoryError problem is most likely because you're attempting to read the entire contents of huge files into memory and then, on top of that, effectively doubling the amount of memory needed by creating a list of lists of the string values from each line.
Python's memory limits are determined by how much physical RAM and virtual-memory disk space your computer and operating system have available. Even if you don't use it all up and your program "works", using that much memory may be impractical because it takes too long.
Anyway, the most obvious way to avoid that is to process each file a single line at a time, which means you have to do the processing incrementally.
To accomplish this, a list of running totals for each of the fields is kept. When that is finished, the average value of each field can be calculated by dividing the corresponding total value by the count of total lines read. Once that is done, these averages can be printed out and some written to one of the output files. I've also made a conscious effort to use very descriptive variable names to try to make it understandable.
try:
    from itertools import izip_longest
except ImportError:  # Python 3
    from itertools import zip_longest as izip_longest

GROUP_SIZE = 4
input_file_names = ["A1_B1_100000.txt", "A2_B2_100000.txt", "A1_B2_100000.txt",
                    "A2_B1_100000.txt"]
file_write = open("average_generations.txt", 'w')
mutation_average = open("mutation_average", 'w')  # left in, but nothing written

for file_name in input_file_names:
    with open(file_name, 'r') as input_file:
        print('processing file: {}'.format(file_name))
        totals = []
        for count, fields in enumerate((line.split('\t') for line in input_file), 1):
            totals = [sum(values) for values in
                        izip_longest(totals, map(float, fields), fillvalue=0)]
    averages = [total/count for total in totals]
    for print_counter, average in enumerate(averages):
        print(' {:9.4f}'.format(average))
        if print_counter % GROUP_SIZE == 0:
            file_write.write(str(average)+'\n')

file_write.write('\n')
file_write.close()
mutation_average.close()
You're reading the entire file into memory (line = u.readlines()) which will fail of course if the file is too large (and you say that some are up to 20 GB), so that's your problem right there.
It's better to iterate over each line,

for current_line in u:
    do_something_with(current_line)

which is the recommended approach.
Later in your script, you're doing some very strange things like first counting all the items in a list, then constructing a for loop over the range of that count. Why not iterate over the list directly? What is the purpose of your script? I have the impression that this could be done much easier.
This is one of the advantages of high-level languages like Python (as opposed to C where you do have to do these housekeeping tasks yourself): Allow Python to handle iteration for you, and only collect in memory what you actually need to have in memory at any given time.
Also, since it seems you're processing TSV (tab-separated values) files, you should take a look at the csv module, which will handle all the splitting, removal of \ns, etc. for you.
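For example, here is a minimal sketch (Python 3; the file name and tab-separated layout are assumed from your question) of letting the csv module do the per-line splitting while keeping only one row in memory at a time:

import csv

with open("A1_B1_100000.txt", newline="") as input_file:
    for row in csv.reader(input_file, delimiter="\t"):  # one row in memory at a time
        values = [float(field) for field in row if field.strip()]
        # ...update running totals with `values` here...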
Python can use all memory available to its environment. My simple "memory test" crashes on ActiveState Python 2.6 after using about
1959167 [MiB]
On Jython 2.5 it crashes earlier:
239000 [MiB]
I can probably configure Jython to use more memory (it uses the limits from the JVM).
Test app:
import sys

sl = []
i = 0
# some magic 1024 - overhead of string object
fill_size = 1024
if sys.version.startswith('2.7'):
    fill_size = 1003
if sys.version.startswith('3'):
    fill_size = 497
print(fill_size)
MiB = 0
while True:
    s = str(i).zfill(fill_size)
    sl.append(s)
    if i == 0:
        try:
            sys.stderr.write('size of one string %d\n' % (sys.getsizeof(s)))
        except AttributeError:
            pass
    i += 1
    if i % 1024 == 0:
        MiB += 1
        if MiB % 25 == 0:
            sys.stderr.write('%d [MiB]\n' % (MiB))
In your app you read the whole file at once. For files this big, you should read them line by line.
No, there's no Python-specific limit on the memory usage of a Python application. I regularly work with Python applications that may use several gigabytes of memory. Most likely, your script actually uses more memory than available on the machine you're running on.
In that case, the solution is to rewrite the script to be more memory efficient, or to add more physical memory if the script is already optimized to minimize memory usage.
Edit:
Your script reads the entire contents of your files into memory at once (line = u.readlines()). Since you're processing files up to 20 GB in size, you're going to get memory errors with that approach unless you have huge amounts of memory in your machine.
A better approach would be to read the files one line at a time:
for u in files:
    for line in u:  # This will iterate over each line in the file
        # Read values from the line, do necessary calculations
Not only are you reading the whole of each file into memory, you are also laboriously replicating the information in a table called list_of_lines.
You have a secondary problem: your choices of variable names severely obfuscate what you are doing.
Here is your script rewritten with the readlines() caper removed and with meaningful names:
file_A1_B1 = open("A1_B1_100000.txt", "r")
file_A2_B2 = open("A2_B2_100000.txt", "r")
file_A1_B2 = open("A1_B2_100000.txt", "r")
file_A2_B1 = open("A2_B1_100000.txt", "r")

file_write = open("average_generations.txt", "w")
mutation_average = open("mutation_average", "w")  # not used

files = [file_A2_B2,file_A2_B2,file_A1_B2,file_A2_B1]

for afile in files:
    table = []
    for aline in afile:
        values = aline.split('\t')
        values.remove('\n')  # why?
        table.append(values)

    row_count = len(table)
    row0length = len(table[0])
    print_counter = 4

    for column_index in range(row0length):
        column_total = 0
        for row_index in range(row_count):
            number = float(table[row_index][column_index])
            column_total = column_total + number
        column_average = column_total/row_count
        print column_average
        if print_counter == 4:
            file_write.write(str(column_average)+'\n')
            print_counter = 0
        print_counter +=1
    file_write.write('\n')
It rapidly becomes apparent that (1) you are calculating column averages, and (2) the obfuscation led some others to think you were calculating row averages.
As you are calculating column averages, no output is required until the end of each file, and the amount of extra memory actually required is proportional to the number of columns.
Here is a revised version of the outer loop code:
for afile in files:
    for row_count, aline in enumerate(afile, start=1):
        values = aline.split('\t')
        values.remove('\n')  # why?
        fvalues = map(float, values)
        if row_count == 1:
            row0length = len(fvalues)
            column_index_range = range(row0length)
            column_totals = fvalues
        else:
            assert len(fvalues) == row0length
            for column_index in column_index_range:
                column_totals[column_index] += fvalues[column_index]
    print_counter = 4
    for column_index in column_index_range:
        column_average = column_totals[column_index] / row_count
        print column_average
        if print_counter == 4:
            file_write.write(str(column_average)+'\n')
            print_counter = 0
        print_counter +=1
Here is my code:
def decode(filename):
    with open(filename, "rb") as binary_file:
        # Read the whole file at once
        data = bytearray(binary_file.read())

    for i in range(len(data)):
        data[i] = 0xff - data[i]

    with open("out.log", "wb") as out:
        out.write(data)
I have a file of around 10 MB, and I need to translate it by flipping every bit and saving the result to a new file.
It takes around 1 second to translate a 10 MB file with my code, while it takes less than 1 ms in C.
This is my first Python script, and I don't know if using a bytearray is the right approach. The most time-consuming part is the loop over the bytearray.
If using the numpy library is an option, then it would be much★ faster, since it can perform the operation on all the bytes with a single statement. Doing byte-level operations in pure Python on a relatively large amount of data is inherently slow compared to using a module like numpy, which is implemented in C and optimized for array processing.
★ Although not by quite as much in Python 2 as in 3 (see results below).
The following is a framework I set up to benchmark using it vs the code in your question. It may seem like a lot of code, but most of it is just part of the scaffolding for making performance comparisons.
I encourage others answering this question to also make use of it.
from __future__ import print_function
from collections import namedtuple
import os
import sys
from random import randrange
from textwrap import dedent
from tempfile import NamedTemporaryFile
import timeit
import traceback

N = 1  # Number of executions of each "algorithm".
R = 3  # Number of repetitions of those N executions.

UNITS = 1024 * 1024  # MBs
FILE_SIZE = 10 * UNITS

# Create test files. Must be done here at module-level to allow file
# deletions at end.
with NamedTemporaryFile(mode='wb', delete=False) as inp_file:
    FILE_NAME_IN = inp_file.name
    print('Creating temp input file: "{}", length {:,d}'.format(FILE_NAME_IN, FILE_SIZE))
    inp_file.write(bytearray(randrange(256) for _ in range(FILE_SIZE)))

with NamedTemporaryFile(mode='wb', delete=False) as out_file:
    FILE_NAME_OUT = out_file.name
    print('Creating temp output file: "{}"'.format(FILE_NAME_OUT))

# Common setup for all testcases (executed prior to any Testcase specific setup).
COMMON_SETUP = dedent("""
    from __main__ import FILE_NAME_IN, FILE_NAME_OUT
""")

class Testcase(namedtuple('CodeFragments', ['setup', 'test'])):
    """ A test case is composed of separate setup and test code fragments. """
    def __new__(cls, setup, test):
        """ Dedent code fragment in each string argument. """
        return tuple.__new__(cls, (dedent(setup), dedent(test)))

testcases = {
    "user3181169": Testcase("""
        def decode(filename, out_filename):
            with open(filename, "rb") as binary_file:
                # Read the whole file at once
                data = bytearray(binary_file.read())

            for i in range(len(data)):
                data[i] = 0xff - data[i]

            with open(out_filename, "wb") as out:
                out.write(data)
        """, """
        decode(FILE_NAME_IN, FILE_NAME_OUT)
        """
    ),

    "using numpy": Testcase("""
        import numpy as np

        def decode(filename, out_filename):
            with open(filename, 'rb') as file:
                data = np.frombuffer(file.read(), dtype=np.uint8)
                # Applies mathematical operation to entire array.
                data = 0xff - data

            with open(out_filename, "wb") as out:
                out.write(data)
        """, """
        decode(FILE_NAME_IN, FILE_NAME_OUT)
        """,
    ),
}

# Collect timing results of executing each testcase multiple times.
try:
    results = [
        (label,
         min(timeit.repeat(testcases[label].test,
                           setup=COMMON_SETUP + testcases[label].setup,
                           repeat=R, number=N)),
        ) for label in testcases
    ]
except Exception:
    traceback.print_exc(file=sys.stdout)  # direct output to stdout
    sys.exit(1)

# Display results.
major, minor, micro = sys.version_info[:3]
bitness = 64 if sys.maxsize > 2**32 else 32
print('Fastest to slowest execution speeds using ({}-bit) Python {}.{}.{}\n'
      '({:,d} execution(s), best of {:d} repetition(s)'.format(
            bitness, major, minor, micro, N, R))
print()

longest = max(len(result[0]) for result in results)  # length of longest label
ranked = sorted(results, key=lambda t: t[1])  # ascending sort by execution time
fastest = ranked[0][1]
for result in ranked:
    print('{:>{width}} : {:9.6f} secs, relative speed: {:6,.2f}x, ({:8,.2f}% slower)'
          ''.format(
                result[0], result[1], round(result[1]/fastest, 2),
                round((result[1]/fastest - 1) * 100, 2),
                width=longest))

# Clean-up.
for filename in (FILE_NAME_IN, FILE_NAME_OUT):
    try:
        os.remove(filename)
    except FileNotFoundError:
        pass
Output (Python 3):
Creating temp input file: "T:\temp\tmpw94xdd5i", length 10,485,760
Creating temp output file: "T:\temp\tmpraw4j4qd"
Fastest to slowest execution speeds using (32-bit) Python 3.7.1
(1 execution(s), best of 3 repetition(s)
using numpy : 0.017744 secs, relative speed: 1.00x, ( 0.00% slower)
user3181169 : 1.099956 secs, relative speed: 61.99x, (6,099.14% slower)
Output (Python 2):
Creating temp input file: "t:\temp\tmprk0njd", length 10,485,760
Creating temp output file: "t:\temp\tmpvcaj6n"
Fastest to slowest execution speeds using (32-bit) Python 2.7.15
(1 execution(s), best of 3 repetition(s)
using numpy : 0.017930 secs, relative speed: 1.00x, ( 0.00% slower)
user3181169 : 0.937218 secs, relative speed: 52.27x, (5,126.97% slower)
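As an aside that is not part of the benchmark above (so treat it as an untested sketch, Python 3 only): even without numpy, the per-byte loop can be avoided by using bytes.translate() with a precomputed 256-entry table, which performs the lookup in C:

# Precomputed table mapping every byte value to its bitwise complement.
TABLE = bytes(0xff - i for i in range(256))

def decode_translate(filename, out_filename):
    with open(filename, 'rb') as binary_file:
        data = binary_file.read()            # read raw bytes
    with open(out_filename, 'wb') as out:
        out.write(data.translate(TABLE))     # table lookup done in C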
This week I've been hunting a memory leak in my Python application that has been causing MemoryErrors, and have managed to distill the logic that appears to be causing it down to 20 lines. The following snippet of code consumes steadily increasing amounts of memory when run (even after the calls to gc.collect every 5000th iteration), as observed from the task manager. Why? Is this a bug in numpy?
import gc
import numpy as np

n = 0  # iteration counter
while True:
    names = ["POH"] + \
            ["Head %d Really Long Data Field Name With Many Words" % i for i in xrange(200)]
    formats = [">f"] + [">I"] * 200
    offsets = range(0, 804, 4)
    dt = np.dtype({"names": names, "formats": formats, "offsets": offsets})
    array = np.zeros((1000,), dt)
    sortedIndices = np.argsort(array, order=["POH"])

    n += 1
    if n % 1000 == 0:
        print n
    if n % 5000 == 0:
        gc.collect()
I am running numpy 1.11.0 with Python 2.7.11 on 64-bit Windows 7.
I want to write random sample data to a CSV file until it is 1 GB in size. The following code is working:
import numpy as np
import uuid
import csv
import os

outfile = 'data.csv'
outsize = 1024  # MB
with open(outfile, 'ab') as csvfile:
    wtr = csv.writer(csvfile)
    while (os.path.getsize(outfile)//1024**2) < outsize:
        wtr.writerow(['%s,%.6f,%.6f,%i' % (uuid.uuid4(), np.random.random()*50, np.random.random()*50, np.random.randint(1000))])
How to get it faster?
The problem appears to be mainly IO-bound. You can improve the I/O a bit by writing to the file in larger chunks instead of writing one line at a time:
import numpy as np
import uuid
import os

outfile = 'data-alt.csv'
outsize = 10  # MB
chunksize = 1000
with open(outfile, 'ab') as csvfile:
    while (os.path.getsize(outfile)//1024**2) < outsize:
        data = [[uuid.uuid4() for i in range(chunksize)],
                np.random.random(chunksize)*50,
                np.random.random(chunksize)*50,
                np.random.randint(1000, size=(chunksize,))]
        csvfile.writelines(['%s,%.6f,%.6f,%i\n' % row for row in zip(*data)])
You can experiment with the chunksize (the number of rows written per chunk) to see what works best on your machine.
Here is a benchmark, comparing the above code to your original code, with outsize set to 10 MB:
% time original.py
real 0m5.379s
user 0m4.839s
sys 0m0.538s
% time write_in_chunks.py
real 0m4.205s
user 0m3.850s
sys 0m0.351s
So this is about 25% faster than the original code.
PS. I tried replacing the calls to os.path.getsize with an estimation of the number of total lines needed. Unfortunately, it did not improve the speed. Since the number of bytes needed to represent the final int varies, the estimation also is inexact -- that is, it does not perfectly replicate the behavior of your original code. So I left the os.path.getsize in place.
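Regarding the suggestion to experiment with chunksize, here is a rough sketch of a timing harness; the helper name, trial sizes, and file names are my own choices rather than part of the original code, and the numbers will vary by machine:

import os
import time
import uuid
import numpy as np

def write_random_csv(outfile, outsize_mb, chunksize):
    # Same chunked-writing loop as above, wrapped so it can be timed.
    with open(outfile, 'a') as csvfile:
        while (os.path.getsize(outfile) // 1024**2) < outsize_mb:
            data = [[uuid.uuid4() for i in range(chunksize)],
                    np.random.random(chunksize) * 50,
                    np.random.random(chunksize) * 50,
                    np.random.randint(1000, size=(chunksize,))]
            csvfile.writelines('%s,%.6f,%.6f,%i\n' % row for row in zip(*data))

for chunksize in (100, 1000, 10000):
    outfile = 'chunk_test_%d.csv' % chunksize
    open(outfile, 'w').close()                 # start each trial from an empty file
    t0 = time.time()
    write_random_csv(outfile, 10, chunksize)   # 10 MB per trial
    print(chunksize, round(time.time() - t0, 2), 'seconds')
    os.remove(outfile)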
Removing all the unnecessary stuff should make it faster and easier to understand:
import random
import uuid

outfile = 'data.csv'
outsize = 1024 * 1024 * 1024  # 1GB
with open(outfile, 'ab') as csvfile:
    size = 0
    while size < outsize:
        txt = '%s,%.6f,%.6f,%i\n' % (uuid.uuid4(), random.random()*50, random.random()*50, random.randrange(1000))
        size += len(txt)
        csvfile.write(txt)
This is an update building on unutbu's answer above:
A large percentage of the time is spent generating random numbers and checking the file size.
If you generate the rows ahead of time, you can assess the raw disk I/O performance:
import time
from pathlib import Path
import numpy as np
import uuid

outfile = Path('data-alt.csv')
chunksize = 1_800_000

data = [
    [uuid.uuid4() for i in range(chunksize)],
    np.random.random(chunksize) * 50,
    np.random.random(chunksize) * 50,
    np.random.randint(1000, size=(chunksize,))
]
rows = ['%s,%.6f,%.6f,%i\n' % row for row in zip(*data)]

t0 = time.time()
with open(outfile, 'a') as csvfile:
    csvfile.writelines(rows)
tdelta = time.time() - t0
print(tdelta)
On my standard 860 EVO SSD (not NVMe), I get 1.43 s for 1_800_000 rows, so that's 1,258,741 rows/sec (not too shabby, IMO).
I was trying to calculate the correlation for a large set of data read from a text file. For extremely large data sets the program gives a memory error. Can anyone please tell me how to correct this problem? Thanks.
The following is my code:
import numpy
from numpy import *
from array import *
from decimal import *
import sys

Threshold = 0.8;
TopMostData = 10;

FileName = sys.argv[1]
File = open(FileName,'r')

SignalData = numpy.empty((1, 128));
SignalData[:][:] = 0;

for line in File:
    TempLine = line.split();
    TempInt = [float(i) for i in TempLine]
    SignalData = vstack((SignalData,TempInt))
    del TempLine;
    del TempInt;

File.close();

TempData = SignalData;
SignalData = SignalData[1:,:]
SignalData = SignalData[:,65:128]
print "File Read | Data Stored" + " | Total Lines: " + str(len(SignalData))

CorrelationData = numpy.corrcoef(SignalData)
The following is the error:
Traceback (most recent call last):
  File "Corelation.py", line 36, in <module>
    CorrelationData = numpy.corrcoef(SignalData)
  File "/usr/lib/python2.7/dist-packages/numpy/lib/function_base.py", line 1824, in corrcoef
    return c/sqrt(multiply.outer(d, d))
MemoryError
You are running out of memory, as the comments show. If that happens because you are using 32-bit Python, even the method below will fail. But for the 64-bit-Python, not-so-much-RAM situation we can do a lot, because calculating the correlations is easily done piecewise: you only need two lines in memory simultaneously.
So, you may split your input into, say, 1000 row chunks, and then the resulting 1000 x 1000 matrices are easy to keep in memory. Then you can assemble your result into the big output matrix which is not necessarily in the RAM. I recommend this approach even if you have a lot of RAM, because this is much more memory-friendly. Correlation coefficient calculation is not an operation where fast random accesses would help a lot if the input can be kept in RAM.
Unfortunately, the numpy.corrcoef does not do this automatically, and we'll have to roll our own correlation coefficient calculation. Fortunately, that is not as hard as it sounds.
Something along these lines:
import numpy as np

# number of rows in one chunk
SPLITROWS = 1000

# the big table, which is usually bigger
bigdata = np.random.random((27000, 128))
numrows = bigdata.shape[0]

# subtract means from the input data
bigdata -= np.mean(bigdata, axis=1)[:,None]

# normalize the data
bigdata /= np.sqrt(np.sum(bigdata*bigdata, axis=1))[:,None]

# reserve the resulting table onto HDD
res = np.memmap("/tmp/mydata.dat", 'float64', mode='w+', shape=(numrows, numrows))

for r in range(0, numrows, SPLITROWS):
    for c in range(0, numrows, SPLITROWS):
        r1 = r + SPLITROWS
        c1 = c + SPLITROWS
        chunk1 = bigdata[r:r1]
        chunk2 = bigdata[c:c1]
        res[r:r1, c:c1] = np.dot(chunk1, chunk2.T)
Some notes:
the code above has been tested against np.corrcoef(bigdata)
if you have complex values, you'll need to create a complex output array res and take the complex conjugate of chunk2.T
the code garbles bigdata to maintain performance and minimize memory use; if you need to preserve it, make a copy
The above code takes about 85 s to run on my machine, but the data will mostly fit in RAM, and I have an SSD. The algorithm is coded in such an order as to avoid overly random access to the HDD, i.e. the access is reasonably sequential. In comparison, the non-memmapped standard version is not significantly faster even if you have a lot of memory. (Actually, it took a lot more time in my case, but I suspect I ran out of my 16 GiB and then there was a lot of swapping going on.)
You can make the actual calculations faster by omitting half of the matrix, because res.T == res. In practice, you can omit all blocks where c > r and then mirror them later on. On the other hand, the performance is most likely limited by HDD performance, so other optimizations do not necessarily bring much more speed.
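For what it's worth, a sketch of that mirrored variant, reusing bigdata, res, numrows, and SPLITROWS from the code above (untested here, just an illustration of the idea):

for r in range(0, numrows, SPLITROWS):
    for c in range(0, r + SPLITROWS, SPLITROWS):    # only blocks with c <= r
        r1 = r + SPLITROWS
        c1 = c + SPLITROWS
        block = np.dot(bigdata[r:r1], bigdata[c:c1].T)
        res[r:r1, c:c1] = block
        if r != c:
            res[c:c1, r:r1] = block.T               # mirror into the upper triangle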
Of course, this approach is easy to make parallel, as the chunk calculations are completely independent. Also the memmapped array can be shared between threads rather easily.