Why does concatenation in Python appear to be getting slower? - python

Why does it appear that concatenation in Python 3 is slower in some cases than in Python 2?
The most impacted method of concatenation appears to be successive concatenation of bytes objects, which has gone from an O(n) to O(n²) operation.
The bulk of my profiling code is here:
#!/usr/bin/env python
from operator import concat
from sys import version, version_info
from timeit import timeit    # Compatibility: ver >= 2.6

# ver = version.partition('\n')[0].rstrip()
ver = '.'.join(str(v) for v in version_info[:3])
print(ver)

if version_info[0] == 2:
    from StringIO import StringIO
else:
    from io import StringIO
    from functools import reduce
    xrange = range


def build_plus():
    output = ''
    for _ in xrange(input_len):
        output += 'a'
    return output


def build_join():
    return ''.join('a' for _ in xrange(input_len))


def build_bytes_plus():
    output = b''
    for _ in xrange(input_len):
        output += b'a'
    return output


def build_stringio():
    output = StringIO()
    for _ in xrange(input_len):
        output.write('a')
    return output.getvalue()


def build_reduce():
    return reduce(concat, ('a' for _ in xrange(input_len)))


builds = {'str+': build_plus,
          'join': build_join,
          'reduce': build_reduce,
          'bytes+': build_bytes_plus,
          'StringIO': build_stringio}

if version_info[0] == 2:
    import cStringIO

    def build_cstringio():
        output = cStringIO.StringIO()
        for _ in xrange(input_len):
            output.write('a')
        return output.getvalue()

    builds['cStringIO'] = build_cstringio
else:
    from io import BytesIO

    def build_bytesio():
        output = BytesIO()
        for _ in xrange(input_len):
            output.write(b'a')
        return output.getvalue()

    builds['BytesIO'] = build_bytesio

resfile = open('times.csv', 'a')

size_range = 50   # Number of points over the size axis
min_order = 1.0   # 10^x byte input min
max_order = 5.0   # 10^x byte input max

for allow_gc in (False, True):
    setup = 'gc.enable()' if allow_gc else 'pass'
    for build_name, build_fun in builds.items():
        # For a roughly constant confidence interval, aim for uniform sample
        # density across the (logarithmic) input size axis.
        for size_index in range(size_range + 1):
            input_len = int(10**((max_order - min_order) * size_index / size_range + min_order))
            # Rather than repeating many measurements at one input size, perform
            # one measurement per input size for a continuous range of input
            # sizes and apply smoothing later.
            dur = timeit(build_fun, setup, number=1)
            resfile.write('"%s",%s,"%s",%d,%.6g\n' % (ver, str(allow_gc).upper(), build_name,
                                                      input_len, dur))
Some graphs from my R script are shown here:

Concatenating strings with + or += in a loop was never a good idea. It only seemed efficient because of a weird, controversial special case in the bytecode interpreter loop, which would attempt to concatenate strings in place if it could prove no one else held a reference to the string it was mutating. There was no efficient resize policy behind it; it just called realloc and hoped for the best, so it could still end up O(n²) if realloc needed to copy.
In Python 3, that weird special case now handles unicode strings instead of bytestrings. Bytestring concatenation goes back to building a new string object each time, so your loop goes back to O(n²).
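Either of the usual linear-time patterns sidesteps the problem on both versions; here is a minimal sketch (the function names are just for illustration):

def build_bytearray(input_len):
    # bytearray is mutable and over-allocates on resize, so += stays amortized O(n).
    output = bytearray()
    for _ in range(input_len):
        output += b'a'
    return bytes(output)

def build_bytes_join(input_len):
    # Collect the pieces and join them once at the end.
    return b''.join(b'a' for _ in range(input_len))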

Related

Speed up struct.pack over a large list of floats

I have a dataframe containing millions of floats. I want to turn them into bytes and join them in a single line. Iterating over each of them is kinda slow. Is there a way to speed this up?
import struct
import numpy as np
# list of floats [197496.84375, 177091.28125, 140972.3125, 120965.9140625, ...]
# 5M - 20M floats in total
data = df.to_numpy().flatten().tolist()
# too slow
dataline = b''.join([struct.pack('>f', event) for event in data])
I tried another approach, but apart from being slow, it also produces a different result
import struct
import numpy as np
def myfunc(event):
return struct.pack('>f', event)
data = df.to_numpy().flatten()
myfunc_vec = np.vectorize(myfunc)
result = myfunc_vec(data)
dataline = b''.join(result)
UPD: I found an example here: Fastest way to pack a list of floats into bytes in python, but it doesn't allow me to specify endianness. Putting '%s>f' instead of '%sf' results in an error:
error: bad char in struct format
import random
import struct
floatlist = [random.random() for _ in range(10**5)]
buf = struct.pack('%sf' % len(floatlist), *floatlist)
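For what it's worth, here is a sketch of two ways to get big-endian bytes without a per-element loop (assuming the values end up in a NumPy float32 array, as in the question). In a struct format string the byte-order character goes at the front, i.e. '>%df' % n rather than '%s>f' (which is why the latter raises "bad char in struct format"), and NumPy can produce the same bytes directly.

import struct
import numpy as np

values = np.array([197496.84375, 177091.28125, 140972.3125], dtype=np.float32)

# struct: one pack call for the whole array, big-endian 32-bit floats.
packed = struct.pack('>%df' % len(values), *values)

# numpy: cast to big-endian float32 and dump the raw buffer in one call.
packed_np = values.astype('>f4').tobytes()

assert packed == packed_np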

Very large pandas dataframe - keeping count

Assume this is a sample of my data: dataframe
The entire dataframe is stored in a csv file (dataframe.csv) that is 40 GB, so I can't open all of it at once.
I am hoping to find the 25 most dominant names for all genders. My instinct is to create a for loop that runs through the file (because I can't open it all at once) and have a Python dictionary that holds the counter for each name (which I will increment as I go through the data).
To be honest, I'm confused about where to even start with this (how to create the dictionary, since to_dict() does not appear to do what I'm looking for). And is this even a good solution? Is there a more efficient way someone can think of?
SUMMARY -- sorry if the question is a bit long:
the csv file storing the data is very big and I can't open it at once, but I'd like to find the top 25 dominant names in the data. Any ideas on what to do and how to do it?
I'd appreciate any help I can get! :)
Thanks for your interesting task! I've implemented a pure numpy + pandas solution. It uses a sorted array to keep the names and counts, hence the algorithm should be around O(n log n) complexity.
I didn't find any hash table in numpy; a hash table would definitely be faster (O(n)). Hence I used numpy's existing sorting/inserting routines.
Also, I used .read_csv() from pandas with the iterator = True, chunksize = 1 << 24 params; this allows reading the file in chunks and producing pandas dataframes of fixed size from each chunk.
Note! In the first runs (until the program is debugged), set limit_chunks (the number of chunks to process) in the code to a small value (like 5). This is to check that the whole program runs correctly on partial data.
The program needs the one-time command python -m pip install pandas numpy to install these 2 packages if you don't have them.
Progress is printed once in a while: total megabytes done plus speed.
The result will be printed to the console and also saved to the fname_res file; all constants configuring the script are placed at the beginning of the script. The topk constant controls how many top names will be output to file/console.
It will be interesting to see how fast my solution is. If it is too slow, maybe I'll devote some time to writing a nice HashTable class using pure numpy.
You can also try and run the code online here.
import os, math, time, sys
# Needs: python -m pip install pandas numpy
import pandas as pd, numpy as np
import pandas, numpy

fname = 'test.csv'
fname_res = 'test.res'
chunk_size = 1 << 24
limit_chunks = None      # Number of chunks to process, set to None if to process whole file
all_genders = ['Male', 'Female']
topk = 1000              # How many top names to output
progress_step = 1 << 23  # in bytes

fsize = os.path.getsize(fname)

#el_man = enlighten.get_manager() as el_man
#el_ctr = el_man.counter(color = 'green', total = math.ceil(fsize / 2 ** 20), unit = 'MiB', leave = False)

tables = {g : {
    'vals': np.full([1], chr(0x10FFFF), dtype = np.str_),
    'cnts': np.zeros([1], dtype = np.int64),
} for g in all_genders}

tb = time.time()

def Progress(
    done, total = min([fsize] + ([chunk_size * limit_chunks] if limit_chunks is not None else [])),
    cfg = {'progressed': 0, 'done': False},
):
    if not cfg['done'] and (done - cfg['progressed'] >= progress_step or done >= total):
        if done < total:
            while cfg['progressed'] + progress_step <= done:
                cfg['progressed'] += progress_step
        else:
            cfg['progressed'] = total
        sys.stdout.write(
            f'{str(round(cfg["progressed"] / 2 ** 20)).rjust(5)} MiB of ' +
            f'{str(round(total / 2 ** 20)).rjust(5)} MiB ' +
            f'speed {round(cfg["progressed"] / 2 ** 20 / (time.time() - tb), 4)} MiB/sec\n'
        )
        sys.stdout.flush()
        if done >= total:
            cfg['done'] = True

with open(fname, 'rb', buffering = 1 << 26) as f:
    for i, df in enumerate(pd.read_csv(f, iterator = True, chunksize = chunk_size)):
        if limit_chunks is not None and i >= limit_chunks:
            break
        if i == 0:
            name_col = df.columns.get_loc('First Name')
            gender_col = df.columns.get_loc('Gender')
        names = np.array(df.iloc[:, name_col]).astype('str')
        genders = np.array(df.iloc[:, gender_col]).astype('str')
        for g in all_genders:
            ctab = tables[g]
            gnames = names[genders == g]
            vals, cnts = np.unique(gnames, return_counts = True)
            if vals.size == 0:
                continue
            if ctab['vals'].dtype.itemsize < names.dtype.itemsize:
                ctab['vals'] = ctab['vals'].astype(names.dtype)
            poss = np.searchsorted(ctab['vals'], vals)
            exist = ctab['vals'][poss] == vals
            ctab['cnts'][poss[exist]] += cnts[exist]
            nexist = np.flatnonzero(exist == False)
            ctab['vals'] = np.insert(ctab['vals'], poss[nexist], vals[nexist])
            ctab['cnts'] = np.insert(ctab['cnts'], poss[nexist], cnts[nexist])
        Progress(f.tell())

Progress(fsize)

with open(fname_res, 'w', encoding = 'utf-8') as f:
    for g in all_genders:
        f.write(f'{g}:\n\n')
        print(g, '\n')
        order = np.flip(np.argsort(tables[g]['cnts']))[:topk]
        snames, scnts = tables[g]['vals'][order], tables[g]['cnts'][order]
        if snames.size > 0:
            for n, c in zip(np.nditer(snames), np.nditer(scnts)):
                n, c = str(n), int(c)
                if c == 0:
                    continue
                f.write(f'{c} {n}\n')
                print(c, n.encode('ascii', 'replace').decode('ascii'))
        f.write(f'\n')
        print()
import pandas as pd
df = pd.read_csv("sample_data.csv")
print(df['First Name'].value_counts())
The second line will convert your csv into a pandas dataframe and the third line should print the occurrences of each name.
https://dfrieds.com/data-analysis/value-counts-python-pandas.html
This doesn't seem to be a case where pandas is really going to be an advantage. But if you're committed to going down that route, change the read_csv chunksize parameter, then filter out the useless columns.
Perhaps consider using a different set of tooling, such as a database, or even vanilla Python using a generator to populate a dict in the form of name: count.
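To make the chunked-counting idea concrete, here is a minimal sketch (it assumes the file is dataframe.csv and the column is named 'First Name' as in the question; grouping by Gender per chunk would give per-gender counts in the same way):

from collections import Counter
import pandas as pd

counts = Counter()

# Stream the 40 GB file in pieces, reading only the column we actually need.
for chunk in pd.read_csv('dataframe.csv', usecols=['First Name'], chunksize=1_000_000):
    counts.update(chunk['First Name'].value_counts().to_dict())

# The 25 most common names across the whole file.
for name, count in counts.most_common(25):
    print(name, count)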

Unable to send multiple arguments to concurrent.futures.Executor.map()

I am trying to combine the solutions provided in both of these SO answers - Using threading to slice an array into chunks and perform calculation on each chunk and reassemble the returned arrays into one array, and Pass multiple parameters to concurrent.futures.Executor.map?. I have a numpy array that I chunk into segments, and I want each chunk to be sent to a separate thread along with an additional argument. This additional argument is a constant and will not change. performCalc() is a function that takes two arguments: a chunk of the original numpy array and the constant.
First solution I tried
import psutil
import numpy as np
import sys
from concurrent.futures import ThreadPoolExecutor
from functools import partial

def main():
    testThread()

def testThread():
    minLat = -65.76892
    maxLat = 66.23587
    minLon = -178.81404
    maxLon = 176.2949

    latGrid = np.arange(minLat, maxLat, 0.05)
    lonGrid = np.arange(minLon, maxLon, 0.05)

    gridLon, gridLat = np.meshgrid(latGrid, lonGrid)
    grid_points = np.c_[gridLon.ravel(), gridLat.ravel()]

    n_jobs = psutil.cpu_count(logical=False)
    chunk = np.array_split(grid_points, n_jobs, axis=0)
    x = ThreadPoolExecutor(max_workers=n_jobs)
    maxDistance = 4.3
    func = partial(performCalc, chunk)
    args = [chunk, maxDistance]

    # This prints 4.3 only twice although there are four cores in the system
    results = x.map(func, args)

    # This prints "test" four times correctly
    results1 = x.map(performTest, chunk)

def performCalc(chunk, maxDistance):
    print(maxDistance)
    return chunk

def performTest(chunk):
    print("test")

main()
So performCalc() prints 4.3 only twice even though there are four cores in the system, while performTest() prints "test" four times correctly. I am not able to figure out the reason for this.
Also, I am sure that the way I set up the functools.partial call is incorrect.
1) There are four chunks of the original numpy array.
2) Each chunk is to be paired with maxDistance and sent to performCalc().
3) There will be four threads that will each print maxDistance and return parts of the total result, which will be combined into one array.
Where am I going wrong?
UPDATE
I tried using the lambda approach as well
results = x.map(lambda p:performCalc(*p),args)
but this prints nothing.
Using the solution provided by user mkorvas in How to pass a function with more than one argument to python concurrent.futures.ProcessPoolExecutor.map()?, I was able to solve my problem as shown here -
import psutil
import numpy as np
import sys
from concurrent.futures import ThreadPoolExecutor
from functools import partial

def main():
    testThread()

def testThread():
    minLat = -65.76892
    maxLat = 66.23587
    minLon = -178.81404
    maxLon = 176.2949

    latGrid = np.arange(minLat, maxLat, 0.05)
    lonGrid = np.arange(minLon, maxLon, 0.05)
    print(latGrid.shape, lonGrid.shape)

    gridLon, gridLat = np.meshgrid(latGrid, lonGrid)
    grid_points = np.c_[gridLon.ravel(), gridLat.ravel()]
    print(grid_points.shape)

    n_jobs = psutil.cpu_count(logical=False)
    chunk = np.array_split(grid_points, n_jobs, axis=0)
    x = ThreadPoolExecutor(max_workers=n_jobs)
    maxDistance = 4.3
    func = partial(performCalc, maxDistance)
    results = x.map(func, chunk)

def performCalc(maxDistance, chunk):
    print(maxDistance)
    return chunk

main()
What apparently one needs to do (and I do not know why; maybe somebody can clarify in another answer) is to switch the order of the inputs to the function performCalc(), as shown here -
def performCalc(maxDistance, chunk):
    print(maxDistance)
    return chunk
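The likely explanation (a note on functools.partial, not anything executor-specific): partial() binds the arguments you give it at the front of the call, and Executor.map() supplies each mapped item as the next positional argument, so the fixed value has to occupy the first parameter. This also explains why the first attempt printed only twice: x.map(func, args) had just the two elements of args = [chunk, maxDistance] to iterate over, so func was called exactly twice. A minimal sketch of the partial() behaviour:

from functools import partial

def performCalc(maxDistance, chunk):
    return maxDistance, chunk

# partial() fixes maxDistance in the first slot; whatever map() passes in
# lands in the second slot (the chunk).
func = partial(performCalc, 4.3)
print(func('chunk-0'))   # -> (4.3, 'chunk-0')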

How to optimize binary file manipulation?

here is my code:
def decode(filename):
    with open(filename, "rb") as binary_file:
        # Read the whole file at once
        data = bytearray(binary_file.read())

    for i in range(len(data)):
        data[i] = 0xff - data[i]

    with open("out.log", "wb") as out:
        out.write(data)
I have a file of around 10MB, and I need to translate this file by flipping every bit and saving the result to a new file.
It takes around 1 second using my code to translate a 10MB file, while it takes less than 1ms using C.
This is my first Python script. I don't know if it is right to use bytearray. The most time-consuming code is the loop over the bytearray.
If using the numpy library is an option, then using it would be much★ faster, since it can perform the operation on all the bytes via a single statement. Doing byte-level operations in pure Python on a relatively large amount of data is inherently going to be slow compared to using a module like numpy, which is implemented in C and optimized for array processing.
★ Although not by quite as much in Python 2 as in 3 (see results below).
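For reference, the core of the numpy version is just this (a minimal sketch; np.fromfile / tofile are used here for brevity, while the benchmark below sticks to plain file objects so both testcases do their I/O the same way):

import numpy as np

def decode(filename, out_filename):
    # Read the whole file as an array of unsigned bytes.
    data = np.fromfile(filename, dtype=np.uint8)
    # One vectorized statement flips every byte (0xff - b inverts all bits of a uint8).
    (0xff - data).tofile(out_filename)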
The following is a framework I set up to benchmark using it vs the code in your question. It may seem like a lot of code, but most of it is just part of the scaffolding for making performance comparisons.
I encourage others answering this question to also make use of it.
from __future__ import print_function
from collections import namedtuple
import os
import sys
from random import randrange
from textwrap import dedent
from tempfile import NamedTemporaryFile
import timeit
import traceback

N = 1  # Number of executions of each "algorithm".
R = 3  # Number of repetitions of those N executions.

UNITS = 1024 * 1024  # MBs
FILE_SIZE = 10 * UNITS

# Create test files. Must be done here at module-level to allow file
# deletions at end.
with NamedTemporaryFile(mode='wb', delete=False) as inp_file:
    FILE_NAME_IN = inp_file.name
    print('Creating temp input file: "{}", length {:,d}'.format(FILE_NAME_IN, FILE_SIZE))
    inp_file.write(bytearray(randrange(256) for _ in range(FILE_SIZE)))

with NamedTemporaryFile(mode='wb', delete=False) as out_file:
    FILE_NAME_OUT = out_file.name
    print('Creating temp output file: "{}"'.format(FILE_NAME_OUT))

# Common setup for all testcases (executed prior to any Testcase specific setup).
COMMON_SETUP = dedent("""
    from __main__ import FILE_NAME_IN, FILE_NAME_OUT
""")

class Testcase(namedtuple('CodeFragments', ['setup', 'test'])):
    """ A test case is composed of separate setup and test code fragments. """
    def __new__(cls, setup, test):
        """ Dedent code fragment in each string argument. """
        return tuple.__new__(cls, (dedent(setup), dedent(test)))

testcases = {
    "user3181169": Testcase("""
        def decode(filename, out_filename):
            with open(filename, "rb") as binary_file:
                # Read the whole file at once
                data = bytearray(binary_file.read())

            for i in range(len(data)):
                data[i] = 0xff - data[i]

            with open(out_filename, "wb") as out:
                out.write(data)
        """, """
        decode(FILE_NAME_IN, FILE_NAME_OUT)
        """
    ),

    "using numpy": Testcase("""
        import numpy as np

        def decode(filename, out_filename):
            with open(filename, 'rb') as file:
                data = np.frombuffer(file.read(), dtype=np.uint8)
                # Applies mathematical operation to entire array.
                data = 0xff - data

            with open(out_filename, "wb") as out:
                out.write(data)
        """, """
        decode(FILE_NAME_IN, FILE_NAME_OUT)
        """,
    ),
}

# Collect timing results of executing each testcase multiple times.
try:
    results = [
        (label,
         min(timeit.repeat(testcases[label].test,
                           setup=COMMON_SETUP + testcases[label].setup,
                           repeat=R, number=N)),
        ) for label in testcases
    ]
except Exception:
    traceback.print_exc(file=sys.stdout)  # direct output to stdout
    sys.exit(1)

# Display results.
major, minor, micro = sys.version_info[:3]
bitness = 64 if sys.maxsize > 2**32 else 32
print('Fastest to slowest execution speeds using ({}-bit) Python {}.{}.{}\n'
      '({:,d} execution(s), best of {:d} repetition(s)'.format(
          bitness, major, minor, micro, N, R))
print()

longest = max(len(result[0]) for result in results)  # length of longest label
ranked = sorted(results, key=lambda t: t[1])         # ascending sort by execution time
fastest = ranked[0][1]

for result in ranked:
    print('{:>{width}} : {:9.6f} secs, relative speed: {:6,.2f}x, ({:8,.2f}% slower)'
          ''.format(
              result[0], result[1], round(result[1]/fastest, 2),
              round((result[1]/fastest - 1) * 100, 2),
              width=longest))

# Clean-up.
for filename in (FILE_NAME_IN, FILE_NAME_OUT):
    try:
        os.remove(filename)
    except FileNotFoundError:
        pass
Output (Python 3):
Creating temp input file: "T:\temp\tmpw94xdd5i", length 10,485,760
Creating temp output file: "T:\temp\tmpraw4j4qd"
Fastest to slowest execution speeds using (32-bit) Python 3.7.1
(1 execution(s), best of 3 repetition(s)
using numpy : 0.017744 secs, relative speed: 1.00x, ( 0.00% slower)
user3181169 : 1.099956 secs, relative speed: 61.99x, (6,099.14% slower)
Output (Python 2):
Creating temp input file: "t:\temp\tmprk0njd", length 10,485,760
Creating temp output file: "t:\temp\tmpvcaj6n"
Fastest to slowest execution speeds using (32-bit) Python 2.7.15
(1 execution(s), best of 3 repetition(s)
using numpy : 0.017930 secs, relative speed: 1.00x, ( 0.00% slower)
user3181169 : 0.937218 secs, relative speed: 52.27x, (5,126.97% slower)
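Another candidate worth adding to the testcases dict is bytes.translate, which also does all the work in C and needs no third-party package (a sketch, assuming Python 3; I have not timed it within the framework above):

# Build a 256-entry table mapping every byte value b to 0xff - b, once up front.
TABLE = bytes(0xff - b for b in range(256))

def decode(filename, out_filename):
    with open(filename, 'rb') as file:
        # translate() applies the table to every byte in a single C-level pass.
        data = file.read().translate(TABLE)
    with open(out_filename, 'wb') as out:
        out.write(data)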

R readBin vs. Python struct

I am attempting to read a binary file using Python. Someone else has read in the data with R using the following code:
x <- readBin(webpage, numeric(), n=6e8, size = 4, endian = "little")
myPoints <- data.frame("tmax" = x[1:(length(x)/4)],
                       "nmax" = x[(length(x)/4 + 1):(2*(length(x)/4))],
                       "tmin" = x[(2*length(x)/4 + 1):(3*(length(x)/4))],
                       "nmin" = x[(3*length(x)/4 + 1):(length(x))])
With Python, I am trying the following code:
import struct

with open('file','rb') as f:
    val = f.read(16)
    while val != '':
        print(struct.unpack('4f', val))
        val = f.read(16)
I am getting slightly different results. For example, the first row in R returns 4 columns as -999.9, 0, -999.0, 0, whereas Python returns -999.0 for all four columns (images below).
Python output:
R output:
I know that they are slicing by the length of the file with some of the [] code, but I do not know how exactly to do this in Python, nor do I understand quite why they do this. Basically, I want to recreate what R is doing in Python.
I can provide more of either code base if needed. I did not want to overwhelm with code that was not necessary.
Deducing from the R code, the binary file first contains a certain number of tmax's, then the same number of nmax's, then tmin's and nmin's. What the code does is read the entire file, which is then chopped up into the 4 parts (tmax's, nmax's, etc.) using slicing.
To do the same in python:
import struct

# Read entire file into memory first. This is done so we can count
# number of bytes before parsing the bytes. It is not a very memory
# efficient way, but it's the easiest. The R-code as posted wastes even
# more memory: it always takes 6e8 * 4 bytes (~ 2.2Gb) of memory no
# matter how small the file may be.
#
data = open('data.bin','rb').read()

# Calculate number of points in the file. This is
# file-size / 16, because there are 4 numeric()'s per
# point, and they are 4 bytes each.
#
num = int(len(data) / 16)

# Now we know how much there are, we take all tmax numbers first, then
# all nmax's, tmin's and lastly all nmin's.
# First generate a format string because it depends on the number points
# there are in the file. It will look like: "fffff"
#
format_string = 'f' * num

# Then, for cleaner code, calculate chunk size of the bytes we need to
# slice off each time.
#
n = num * 4    # 4-byte floats

# Note that python has different interpretation of slicing indices
# than R, so no "+1" is needed here as it is in the R code.
#
tmax = struct.unpack(format_string, data[:n])
nmax = struct.unpack(format_string, data[n:2*n])
tmin = struct.unpack(format_string, data[2*n:3*n])
nmin = struct.unpack(format_string, data[3*n:])

print("tmax", tmax)
print("nmax", nmax)
print("tmin", tmin)
print("nmin", nmin)
If the goal is to have this data structured as a list of points(?) like (tmax,nmax,tmin,nmin), then append this to the code:
print()
print("Points:")
# Combine ("zip") all 4 lists into a list of (tmax,nmax,tmin,nmin) points.
# Python has a function to do this at once: zip()
#
i = 0
for point in zip(tmax, nmax, tmin, nmin):
    print(i, ":", point)
    i += 1
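As a side note, numpy can express the same read-and-quarter operation very compactly (a sketch, assuming the same layout: little-endian 4-byte floats stored as four equal blocks):

import numpy as np

x = np.fromfile('data.bin', dtype='<f4')    # little-endian 32-bit floats
tmax, nmax, tmin, nmin = np.split(x, 4)     # four equal quarters, in file order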
Here's a less memory-hungry way to do the same. It is possibly a bit faster too (but that is difficult for me to check).
My computer did not have sufficient memory to run the first program on those huge files. This one does run, but I still needed to create a list of only the tmax's first (the first 1/4 of the file), write it out, and then delete the list in order to have enough memory for the nmax's, tmin's and nmin's.
But this one too says the nmin's inside the 2018 file are all -999.0. If that doesn't make sense, could you check what the R code makes of it? I suspect that it is just what's in the file. The other possibility is, of course, that I got it all wrong (which I doubt). However, I tried the 2017 file too, and that one does not have such a problem: all of tmax, nmax, tmin and nmin have around 37% -999.0's.
Anyway, here's the second code:
import os
import struct

# load_data()
#   data_store : object to append() data items (floats) to
#   num        : number of floats to read and store
#   datafile   : opened binary file object to read float data from
#
def load_data(data_store, num, datafile):
    for i in range(num):
        data = datafile.read(4)               # process one float (=4 bytes) at a time
        item = struct.unpack("<f", data)[0]   # '<' means little endian
        data_store.append(item)

# save_list() saves a list of floats as strings to a file
#
def save_list(filename, datalist):
    output = open(filename, "wt")
    for item in datalist:
        output.write(str(item) + '\n')
    output.close()

#### MAIN ####

datafile = open('data.bin','rb')

# Get file size so we can calculate number of points without reading
# the (large) file entirely into memory.
#
file_info = os.stat(datafile.fileno())

# Calculate number of points, i.e. number of each tmax's, nmax's,
# tmin's, nmin's. A point is 4 floats of 4 bytes each, hence number
# of points = file-size / (4*4)
#
num = int(file_info.st_size / 16)

tmax_list = list()
load_data(tmax_list, num, datafile)
save_list("tmax.txt", tmax_list)
del tmax_list    # huge list, save memory

nmax_list = list()
load_data(nmax_list, num, datafile)
save_list("nmax.txt", nmax_list)
del nmax_list    # huge list, save memory

tmin_list = list()
load_data(tmin_list, num, datafile)
save_list("tmin.txt", tmin_list)
del tmin_list    # huge list, save memory

nmin_list = list()
load_data(nmin_list, num, datafile)
save_list("nmin.txt", nmin_list)
del nmin_list    # huge list, save memory
