Is it possible to map a NumPy array in place? If yes, how?
Given a_values, a 2D array, this is the bit of code that does the trick for me at the moment:
for row in range(len(a_values)):
    for col in range(len(a_values[0])):
        a_values[row][col] = dim(a_values[row][col])
But it's so ugly that I suspect that somewhere within NumPy there must be a function that does the same with something looking like:
a_values.map_in_place(dim)
but if something like the above exists, I've been unable to find it.
It's only worth trying to do this in-place if you are under significant space constraints. If that's the case, it is possible to speed up your code a little bit by iterating over a flattened view of the array. Since reshape returns a new view when possible, the data itself isn't copied (unless the original has unusual structure).
I don't know of a better way to achieve bona fide in-place application of an arbitrary Python function.
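You can check the view claim directly (a quick sketch; numpy.shares_memory is available in modern NumPy):

import numpy
a = numpy.arange(25).reshape(5, 5)
flat = a.reshape(-1)                          # a view onto the same buffer
print(numpy.shares_memory(a, flat))           # True: writes go to the original
t = a.T                                       # non-contiguous ("unusual structure")
print(numpy.shares_memory(t, t.reshape(-1)))  # False: this reshape had to copy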
>>> def flat_for(a, f):
...     a = a.reshape(-1)
...     for i, v in enumerate(a):
...         a[i] = f(v)
...
>>> import numpy
>>> a = numpy.arange(25).reshape(5, 5)
>>> flat_for(a, lambda x: x + 5)
>>> a
array([[ 5,  6,  7,  8,  9],
       [10, 11, 12, 13, 14],
       [15, 16, 17, 18, 19],
       [20, 21, 22, 23, 24],
       [25, 26, 27, 28, 29]])
Some timings:
>>> a = numpy.arange(2500).reshape(50, 50)
>>> f = lambda x: x + 5
>>> %timeit flat_for(a, f)
1000 loops, best of 3: 1.86 ms per loop
It's about twice as fast as the nested loop version:
>>> a = numpy.arange(2500).reshape(50, 50)
>>> def nested_for(a, f):
...     for i in range(len(a)):
...         for j in range(len(a[0])):
...             a[i][j] = f(a[i][j])
...
>>> %timeit nested_for(a, f)
100 loops, best of 3: 3.79 ms per loop
Of course vectorize is still faster, so if you can make a copy, use that:
>>> a = numpy.arange(2500).reshape(50, 50)
>>> g = numpy.vectorize(lambda x: x + 5)
>>> %timeit g(a)
1000 loops, best of 3: 584 us per loop
And if you can rewrite dim using built-in ufuncs, then please, please, don't vectorize:
>>> a = numpy.arange(2500).reshape(50, 50)
>>> %timeit a + 5
100000 loops, best of 3: 4.66 us per loop
numpy does operations like += in place, just as you might expect -- so you can get the speed of a ufunc with in-place application at no cost. Sometimes it's even faster! See here for an example.
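For instance, a minimal sketch of the in-place ufunc pattern (the out= argument is standard NumPy):

import numpy
a = numpy.arange(25).reshape(5, 5)
a += 5                        # in-place: no temporary array is allocated
numpy.add(a, 5, out=a)        # the same thing as an explicit ufunc call
numpy.multiply(a, 2, out=a)   # most ufuncs accept out= for in-place use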
By the way, my original answer to this question, which can be viewed in its edit history, is ridiculous, and involved vectorizing over indices into a. Not only did it have to do some funky stuff to bypass vectorize's type-detection mechanism, it turned out to be just as slow as the nested loop version. So much for cleverness!
This is a write-up of contributions scattered in answers and
comments, which I wrote after accepting the answer to the question.
Upvotes are always welcome, but if you upvote this answer, please
also upvote those of senderle and (if they write one) eryksun,
who suggested the methods below.
Q: Is it possible to map a numpy array in place?
A: Yes, but not with a single array method. You have to write your own code.
Below is a script that compares the various implementations discussed in this thread:
import timeit
from numpy import array, arange, vectorize, rint

# SETUP
get_array = lambda side : arange(side**2).reshape(side, side) * 30
dim = lambda x : int(round(x * 0.67328))

# TIMER
def best(fname, reps, side):
    global a
    a = get_array(side)
    t = timeit.Timer('%s(a)' % fname,
                     setup='from __main__ import %s, a' % fname)
    return min(t.repeat(reps, 3))  # low num as in place --> converge to 1

# FUNCTIONS
def mac(array_):
    for row in range(len(array_)):
        for col in range(len(array_[0])):
            array_[row][col] = dim(array_[row][col])

def mac_two(array_):
    li = range(len(array_[0]))
    for row in range(len(array_)):
        for col in li:
            array_[row][col] = int(round(array_[row][col] * 0.67328))

def mac_three(array_):
    for i, row in enumerate(array_):
        array_[i][:] = [int(round(v * 0.67328)) for v in row]

def senderle(array_):
    array_ = array_.reshape(-1)  # a view: writes go through to the original
    for i, v in enumerate(array_):
        array_[i] = dim(v)

def eryksun(array_):
    array_[:] = vectorize(dim)(array_)

def ufunc_ed(array_):
    multiplied = array_ * 0.67328
    array_[:] = rint(multiplied)

# MAIN
r = []
for fname in ('mac', 'mac_two', 'mac_three', 'senderle', 'eryksun', 'ufunc_ed'):
    print('\nTesting `%s`...' % fname)
    r.append(best(fname, reps=50, side=50))
    # The following is for visually checking that the functions return the same results
    tmp = get_array(3)
    eval('%s(tmp)' % fname)
    print(tmp)
tmp = min(r) / 100
print('\n===== ...AND THE WINNER IS... =========================')
print(' mac (as in question)      : %.4fms [%.0f%%]' % (r[0]*1000, r[0]/tmp))
print(' mac (optimised)           : %.4fms [%.0f%%]' % (r[1]*1000, r[1]/tmp))
print(' mac (slice-assignment)    : %.4fms [%.0f%%]' % (r[2]*1000, r[2]/tmp))
print(' senderle                  : %.4fms [%.0f%%]' % (r[3]*1000, r[3]/tmp))
print(' eryksun                   : %.4fms [%.0f%%]' % (r[4]*1000, r[4]/tmp))
print(' slice-assignment w/ ufunc : %.4fms [%.0f%%]' % (r[5]*1000, r[5]/tmp))
print('=======================================================\n')
The output of the above script - at least on my system - is:
mac (as in question) : 88.7411ms [74591%]
mac (optimised) : 86.4639ms [72677%]
mac (slice-assignment) : 79.8671ms [67132%]
senderle : 85.4590ms [71832%]
eryksun : 13.8662ms [11655%]
slice-assignment w/ ufunc : 0.1190ms [100%]
As you can see, using numpy's ufunc increases speed by more than two orders of magnitude compared with the second-best alternative, and by almost three compared with the worst.
If using ufunc is not an option, here's a comparison of the other alternatives only:
mac (as in question) : 91.5761ms [672%]
mac (optimised) : 88.9449ms [653%]
mac (slice-assignment) : 80.1032ms [588%]
senderle : 86.3919ms [634%]
eryksun : 13.6259ms [100%]
HTH!
Why not use numpy's implementation and the out= trick?
from numpy import array, arange, vectorize, rint, multiply, round as np_round

def fmilo(array_):
    # writes the float product straight into the integer array (note: newer
    # NumPy versions may reject this implicit cast unless casting='unsafe')
    np_round(multiply(array_, 0.67328, array_), out=array_)
got:
===== ...AND THE WINNER IS... =========================
mac (as in question) : 80.8470ms [130422%]
mac (optimised) : 80.2400ms [129443%]
mac (slice-assignment) : 75.5181ms [121825%]
senderle : 78.9380ms [127342%]
eryksun : 11.0800ms [17874%]
slice-assignment w/ ufunc : 0.0899ms [145%]
fmilo : 0.0620ms [100%]
=======================================================
If ufuncs are not an option, you should maybe consider using Cython.
It is easy to integrate and gives big speedups on specific uses of numpy arrays.
This is just an updated version of mac's write-up, ported to Python 3.x, with numba and numpy.frompyfunc added.
numpy.frompyfunc takes an arbitrary Python function and returns a ufunc which, applied to a numpy.array, applies the wrapped function elementwise.
However, it changes the datatype of the resulting array to object, so it is not in place, and future calculations on this array will be slower.
To avoid this drawback, the test calls numpy.ndarray.astype, converting the datatype back to int.
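A quick illustration of that dtype behaviour (a sketch; the array values are made up):

import numpy as np
dim = lambda x: int(round(x * 0.67328))
udim = np.frompyfunc(dim, 1, 1)    # wrap dim: 1 input, 1 output
a = np.arange(9).reshape(3, 3) * 30
out = udim(a)                      # elementwise, but a new array
print(out.dtype)                   # object -- each element is a Python int
print(out.astype(int).dtype)       # back to a native integer dtype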
As a side note:
Numba isn't included in Python's standard library and has to be installed separately if you want to test it. In this test it actually does nothing, and if it had been called with @jit(nopython=True), it would have given an error message saying that it can't optimize anything there. Since numba can often speed up code written in a functional style, however, it is included for completeness.
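For what it's worth, here is a hedged sketch (not part of the benchmark below) of how nopython mode can be made to work: inline dim as plain arithmetic so numba has nothing it cannot compile:

from numba import jit

@jit(nopython=True)
def numba_inplace(array_):
    # a plain numeric loop compiles cleanly in nopython mode
    for row in range(array_.shape[0]):
        for col in range(array_.shape[1]):
            array_[row, col] = int(round(array_[row, col] * 0.67328))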
import timeit
from numpy import array, arange, vectorize, rint, frompyfunc
from numba import autojit

# SETUP
get_array = lambda side : arange(side**2).reshape(side, side) * 30
dim = lambda x : int(round(x * 0.67328))

# TIMER
def best(fname, reps, side):
    global a
    a = get_array(side)
    t = timeit.Timer('%s(a)' % fname,
                     setup='from __main__ import %s, a' % fname)
    return min(t.repeat(reps, 3))  # low num as in place --> converge to 1

# FUNCTIONS
def mac(array_):
    for row in range(len(array_)):
        for col in range(len(array_[0])):
            array_[row][col] = dim(array_[row][col])

def mac_two(array_):
    li = range(len(array_[0]))
    for row in range(len(array_)):
        for col in li:
            array_[row][col] = int(round(array_[row][col] * 0.67328))

def mac_three(array_):
    for i, row in enumerate(array_):
        array_[i][:] = [int(round(v * 0.67328)) for v in row]

def senderle(array_):
    array_ = array_.reshape(-1)  # a view: writes go through to the original
    for i, v in enumerate(array_):
        array_[i] = dim(v)

def eryksun(array_):
    array_[:] = vectorize(dim)(array_)

@autojit
def numba(array_):
    for row in range(len(array_)):
        for col in range(len(array_[0])):
            array_[row][col] = dim(array_[row][col])

def ufunc_ed(array_):
    multiplied = array_ * 0.67328
    array_[:] = rint(multiplied)

def ufunc_frompyfunc(array_):
    udim = frompyfunc(dim, 1, 1)
    array_ = udim(array_).astype(int)  # not in place: udim returns a new object array

# MAIN
r = []
totest = ('mac', 'mac_two', 'mac_three', 'senderle', 'eryksun', 'numba', 'ufunc_ed', 'ufunc_frompyfunc')
for fname in totest:
    print('\nTesting `%s`...' % fname)
    r.append(best(fname, reps=50, side=50))
    # The following is for visually checking that the functions return the same results
    tmp = get_array(3)
    eval('%s(tmp)' % fname)
    print(tmp)
tmp = min(r) / 100
results = list(zip(totest, r))
results.sort(key=lambda x: x[1])
print('\n===== ...AND THE WINNER IS... =========================')
for name, time in results:
    out = '{:<34}: {:8.4f}ms [{:5.0f}%]'.format(name, time*1000, time/tmp)
    print(out)
print('=======================================================\n')
And finally, the results:
===== ...AND THE WINNER IS... =========================
ufunc_ed : 0.3205ms [ 100%]
ufunc_frompyfunc : 3.8280ms [ 1194%]
eryksun : 3.8989ms [ 1217%]
mac_three : 21.4538ms [ 6694%]
senderle : 22.6421ms [ 7065%]
mac_two : 24.6230ms [ 7683%]
mac : 26.1463ms [ 8158%]
numba : 27.5041ms [ 8582%]
=======================================================
Related
I'm looking for the fastest way (computationally speaking) to do:
CDAB to ABCD
Say I have this list of strings:
a = ['3412','7895','0042','1122','0001']
And I want my output to be the string b = '12349578420022110100', i.e. something like a 16-bit byte-swap.
My code goes (I used the entry as a string but it will be a list soon):
a = '34127895004211220001'
b = ''
i = 0
while i < len(a):
    b = b + a[i + 2:i + 4] + a[i:i + 2]
    i = i + 4
print(b)  # 12349578420022110100
Do you think this approach is the best one?
Just two possibilities (which are both based on your code):
a = ['3412', '7895', '0042', '1122', '0001']

def first():
    return ''.join([i[-2:] + i[:2] for i in a])

def second():
    return ''.join(map(lambda i: i[-2:] + i[:2], a))

print(first())
print(second())

import timeit
print(timeit.timeit('first()', globals=globals()))   # 1.14
print(timeit.timeit('second()', globals=globals()))  # 1.34
If you have several million swaps to do, I would also first check whether the bottleneck really is the swapping. It also quite likely depends on the length of a (if it is much longer, other methods, e.g. numpy, may be faster).
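For illustration, here is one such numpy approach (a sketch assuming the input is a single string of 4-digit groups; not benchmarked here):

import numpy as np
a = '34127895004211220001'
# view the bytes as 2-character chunks, swap each pair of chunks, re-join
chunks = np.frombuffer(a.encode(), dtype='S2').reshape(-1, 2)
b = chunks[:, ::-1].tobytes().decode()
print(b)  # 12349578420022110100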
As @Ashwini mentioned, it depends on the input size. My approach is to factor the swap out into its own function:
def swap(string):
    return string[-2:] + string[:-2]

a = ['3412', '7895', '0042', '1122', '0001']
s = ''.join(swap(x) for x in a)
print(s)
It depends on the input size: if your input is small, your current approach is fine. For bigger inputs, it's recommended not to use + to build large strings, as new memory needs to be allocated every time and lots of copying happens, resulting in quadratic runtime.
The recommended way is to build a list and join it using str.join:
>>> a = '34127895004211220001'
>>> "".join([a[i + 2:i + 4] + a[i:i + 2] for i in range(0, len(a), 4)])
'12349578420022110100'
Docs:
https://wiki.python.org/moin/PythonSpeed/PerformanceTips#String_Concatenation
https://docs.python.org/3/library/stdtypes.html#common-sequence-operations (Note 6)
I was convinced that using a lambda function saves computation time, but it's not that clear. Look at this example:
import numpy as np
import timeit

def f_with_lambda():
    a = np.array(range(5))
    b = np.array(range(5))
    A, B = np.meshgrid(a, b)
    rst = list(map(lambda x, y: x + y, A, B))
    return np.array(rst)

def f_with_for():
    a = range(5)
    b = np.array(range(5))
    rst = [b + x for x in a]
    return np.array(rst)

lambda_rst = f_with_lambda()
for_rst = f_with_for()

if __name__ == '__main__':
    print(timeit.timeit("f_with_lambda()", setup="from __main__ import f_with_lambda", number=10000))
    print(timeit.timeit("f_with_for()", setup="from __main__ import f_with_for", number=10000))
The result is simple:
- with the lambda function: 0.3514268280014221 s
- with the for loop: 0.10633227700236603 s
How do I write my lambda function to be competitive? I noticed that using list to get the results out of the map object is costly. Is there another way to proceed? The meshgrid function is certainly not the best choice either...
Every tip is welcome!
Considering the remark about the list:
import numpy as np
import timeit

def f_with_lambda():
    A, B = np.meshgrid(range(150), range(150))
    # caution: in Python 3, map() is lazy, so np.array(map(...)) wraps the
    # unevaluated map object rather than computing the sums
    return np.array(map(lambda x, y: x + y, A, B))

def f_with_for():
    return np.array([np.array(range(150)) + x for x in range(150)])

if __name__ == '__main__':
    print(timeit.timeit("f_with_lambda()", setup="from __main__ import f_with_lambda", number=10000))
    print(timeit.timeit("f_with_for()", setup="from __main__ import f_with_for", number=10000))
It changes a lot of things. This time (lambda vs for):
for 5:
0.30227499100146815 vs 0.2510572589999356 (quite similar)
for 150:
0.6687559890015109 vs 20.31807473200024 ( :) :) :) ) !! Great job, thank you!
Memory allocation takes time (it has to call an OS procedure, and it might be deferred).
In the lambda version, you allocated a, b, the meshgrid, rst (both the list and the array version), plus the returned array.
In the for version, you allocated b and rst, plus the returned array. a is a lazy range object, so it takes essentially no time or memory to create.
This is why your function using lambda is slower.
Plus, don't use list to capture the result of np-array operations before casting back to an np-array.
Just by removing the list() it becomes faster (from 0.9 s to 0.4 s).
def f_with_lambda():
    a = np.array(range(SIZE))
    b = np.array(range(SIZE))
    A, B = np.meshgrid(a, b)
    rst = map(lambda x, y: x + y, A, B)  # caution: map() is lazy in Python 3
    return np.array(rst)
See https://stackoverflow.com/a/46470401/9453926 for speed comparison.
I compacted the code:
import numpy as np
import timeit

def f_with_lambda():
    A, B = np.meshgrid(range(150), range(150))
    return np.array(list(map(lambda x, y: x + y, A, B)))

def f_with_for():
    return np.array([np.array(range(150)) + x for x in range(150)])

if __name__ == '__main__':
    print(timeit.timeit("f_with_lambda()", setup="from __main__ import f_with_lambda", number=10000))
    print(timeit.timeit("f_with_for()", setup="from __main__ import f_with_for", number=10000))
This time, for a 5x5 grid, the result is
Lambda vs for
0.38113487999726203 vs 0.24913009200099623
and with 150 it's better:
2.680842614001449 vs 20.176408246999927
But I found no way to integrate the meshgrid inside the lambda function, and the list conversion before the array is unfortunate as well.
I took time to integrate the last remark from politinsa:
import numpy as np
import timeit

def f_with_lambda():
    A, B = np.meshgrid(range(150), range(150))
    return np.array(list(map(lambda x, y: x + y, A, B)))

def f_with_for():
    return np.array([np.array(range(150)) + x for x in range(150)])

def f_with_lambda_nolist():
    A, B = np.meshgrid(range(150), range(150))
    # note: without list(), the lazy map object is not actually evaluated here
    return np.array(map(lambda x, y: x + y, A, B))

if __name__ == '__main__':
    print(timeit.timeit("f_with_lambda()", setup="from __main__ import f_with_lambda", number=10000))
    print(timeit.timeit("f_with_for()", setup="from __main__ import f_with_for", number=10000))
    print(timeit.timeit("f_with_lambda_nolist()", setup="from __main__ import f_with_lambda_nolist", number=10000))
results are:
2.4421722999977646 s
18.75847979998798 s
0.6800016999914078 s -> list conversion has (as explained) a real impact on memory allocation
I programmed a class which looks something like this:
import numpy as np

class blank():
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c
        n = 5
        c = a / b * 8
        if (a > b):
            y = c + a * b
        else:
            y = c - a * b
        p = np.empty([1, 1])
        k = np.empty([1, 1])
        l = np.empty([1, 1])
        p[0] = b
        k[0] = b * (c - 1)
        l[0] = p + k
        for i in range(1, n, 1):
            p = np.append(p, l[i - 1])
            k = np.append(k, (p[i] * (c + 1)))
            l = np.append(l, p[i] + k[i])
        komp = np.zeros(shape=(n, 1))
        for i in range(0, n):
            pl_avg = (p[i] + l[i]) / 2
            h = pl_avg * 3
            komp[i] = pl_avg * h / 4
        self.tot = komp + l
And when I call it like this:
from ex1 import blank
import numpy as np
res=blank(1,2,3)
print(res.tot)
everything works well.
BUT I want to call it like this:
res = blank(np.array([1,2,3]), np.array([3,4,5]), 3)
Is there an easy way to call it for each i element of this two arrays without editing class code?
You won't be able to instantiate a class with NumPy arrays as inputs without changing the class code. @PabloAlvarez and @NagaKiran already provided an alternative: iterate with zip over the arrays and instantiate the class for each pair of elements. While this is a pretty simple solution, it defeats the purpose of using NumPy with its efficient vectorized operations.
Here is how I suggest you rewrite the code:
from typing import Union

import numpy as np

def total(a: Union[float, np.ndarray],
          b: Union[float, np.ndarray],
          n: int = 5) -> np.array:
    """Calculates what your self.tot was"""
    bc = 8 * a
    c = bc / b
    vectorized_geometric_progression = np.vectorize(geometric_progression,
                                                    otypes=[np.ndarray])
    l = np.stack(vectorized_geometric_progression(bc, c, n))
    l = np.atleast_2d(l)
    p = np.insert(l[:, :-1], 0, b, axis=1)
    l = np.squeeze(l)
    p = np.squeeze(p)
    pl_avg = (p + l) / 2
    komp = np.array([0.75 * pl_avg ** 2]).T
    return komp + l

def geometric_progression(bc, c, n):
    """Calculates array l"""
    return bc * np.logspace(start=0,
                            stop=n - 1,
                            num=n,
                            base=c + 2)
And you can call it both for sole numbers and NumPy arrays like that:
>>> print(total(1, 2))
[[2.6750000e+01 6.6750000e+01 3.0675000e+02 1.7467500e+03 1.0386750e+04]
[5.9600000e+02 6.3600000e+02 8.7600000e+02 2.3160000e+03 1.0956000e+04]
[2.1176000e+04 2.1216000e+04 2.1456000e+04 2.2896000e+04 3.1536000e+04]
[7.6205600e+05 7.6209600e+05 7.6233600e+05 7.6377600e+05 7.7241600e+05]
[2.7433736e+07 2.7433776e+07 2.7434016e+07 2.7435456e+07 2.7444096e+07]]
>>> print(total(3, 4))
[[1.71000000e+02 3.39000000e+02 1.68300000e+03 1.24350000e+04 9.84510000e+04]
[8.77200000e+03 8.94000000e+03 1.02840000e+04 2.10360000e+04 1.07052000e+05]
[5.59896000e+05 5.60064000e+05 5.61408000e+05 5.72160000e+05 6.58176000e+05]
[3.58318320e+07 3.58320000e+07 3.58333440e+07 3.58440960e+07 3.59301120e+07]
[2.29323574e+09 2.29323590e+09 2.29323725e+09 2.29324800e+09 2.29333402e+09]]
>>> print(total(np.array([1, 3]), np.array([2, 4])))
[[[2.67500000e+01 6.67500000e+01 3.06750000e+02 1.74675000e+03 1.03867500e+04]
[1.71000000e+02 3.39000000e+02 1.68300000e+03 1.24350000e+04 9.84510000e+04]]
[[5.96000000e+02 6.36000000e+02 8.76000000e+02 2.31600000e+03 1.09560000e+04]
[8.77200000e+03 8.94000000e+03 1.02840000e+04 2.10360000e+04 1.07052000e+05]]
[[2.11760000e+04 2.12160000e+04 2.14560000e+04 2.28960000e+04 3.15360000e+04]
[5.59896000e+05 5.60064000e+05 5.61408000e+05 5.72160000e+05 6.58176000e+05]]
[[7.62056000e+05 7.62096000e+05 7.62336000e+05 7.63776000e+05 7.72416000e+05]
[3.58318320e+07 3.58320000e+07 3.58333440e+07 3.58440960e+07 3.59301120e+07]]
[[2.74337360e+07 2.74337760e+07 2.74340160e+07 2.74354560e+07 2.74440960e+07]
[2.29323574e+09 2.29323590e+09 2.29323725e+09 2.29324800e+09 2.29333402e+09]]]
You can see that the results agree.
Explanation:
First of all, I'd like to note that your calculation of p, k, and l doesn't have to be in the loop. Moreover, calculating k is unnecessary. If you look carefully at how the elements of p and l are calculated, they are just geometric progressions (except the first element of p):
p = [b, b*c, b*c*(c+2), b*c*(c+2)**2, b*c*(c+2)**3, b*c*(c+2)**4, ...]
l = [b*c, b*c*(c+2), b*c*(c+2)**2, b*c*(c+2)**3, b*c*(c+2)**4, b*c*(c+2)**5, ...]
So, instead of that loop, you can use np.logspace. Unfortunately, np.logspace doesn't support the base parameter as an array, so we have no other choice but to use np.vectorize, which is just a loop under the hood...
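As a quick check (a sketch with made-up values b=2, c=3, n=5), np.logspace generates exactly that progression:

import numpy as np
b, c, n = 2, 3, 5
bc = b * c
l = bc * np.logspace(start=0, stop=n - 1, num=n, base=c + 2)
print(l)  # [6. 30. 150. 750. 3750.] == b*c*(c+2)**k for k = 0..4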
Calculating komp, though, is easily vectorized. You can see it in my example; no need for loops there.
Also, as I already noted in a comment, your class doesn't have to be a class, so I took the liberty of changing it to a function.
Next, note that the input parameter c is overwritten, so I got rid of it. The variable y is never used. (Also, you could calculate it simply as y = c + a * b * np.sign(a - b).)
And finally, I'd like to remark that creating NumPy arrays with np.append is very inefficient (as was pointed out by @kabanus), so you should always try to create them at once: no loops, no appending.
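To illustrate why (a hypothetical micro-example, not taken from your code): np.append reallocates and copies the whole array on every call, while preallocation writes into an existing buffer:

import numpy as np
n = 5
grown = np.array([1.0])
for i in range(1, n):
    grown = np.append(grown, grown[i - 1] * 2)  # copies the array each time
prealloc = np.empty(n)
prealloc[0] = 1.0
for i in range(1, n):
    prealloc[i] = prealloc[i - 1] * 2           # writes in place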
P.S.: I used np.atleast_2d and np.squeeze in my code, and it might be unclear why I did. They are necessary to avoid if-else clauses where we would check the dimensions of array l. You can print intermediate results to see what is really going on there. Nothing difficult.
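A minimal illustration of that np.atleast_2d/np.squeeze dance (just a demonstration, not part of the solution):

import numpy as np
v = np.arange(5)             # 1-d input (the scalar a, b case)
m = np.atleast_2d(v)         # promoted to shape (1, 5)
print(m.shape)               # (1, 5)
print(np.squeeze(m).shape)   # (5,) -- squeezed back to 1-d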
If it is just a matter of calling the class with elements of two different arrays, a loop works well:
res = [blank(i, j, 3) for i, j in zip(np.array([1, 2, 3]), np.array([3, 4, 5]))]
You can then inspect the list of values in the res variable.
The only way I can think of to iterate over lists of arrays is to do the iteration in the main program and then perform the operations you need inside the loop.
This solution works on each element of both arrays (note the use of the zip function to iterate over both lists in parallel, as listed in this answer here):
for n, x in zip(np.array([1, 2, 3]), np.array([3, 4, 5])):
    res = blank(n, x, 3)
    print(res.tot)
Hope it is what you need!
Lemme clarify:
What would be the fastest way to get every number with all unique digits between two numbers. For example, 10,000 and 100,000.
Some obvious ones would be 12,345 or 23,456. I'm trying to find a way to gather all of them.
for i in xrange(LOW, HIGH):
    str_i = str(i)
    ...?
Use itertools.permutations:
from itertools import permutations
result = [
    a * 10000 + b * 1000 + c * 100 + d * 10 + e
    for a, b, c, d, e in permutations(range(10), 5)
    if a != 0
]
I used the following facts:
numbers between 10000 and 100000 have either 5 or 6 digits, but the only 6-digit number in that range (100000) does not have unique digits,
itertools.permutations creates all combinations of the given length, in all orderings (so both 12345 and 54321 will appear in the result),
you can take permutations directly of a sequence of integers (so there is no overhead for converting the types).
EDIT:
Thanks for accepting my answer, but here is the data for the others, comparing mentioned results:
>>> from timeit import timeit
>>> stmt1 = '''
a = []
for i in xrange(10000, 100000):
    s = str(i)
    if len(set(s)) == len(s):
        a.append(s)
'''
>>> stmt2 = '''
result = [
    int(''.join(digits))
    for digits in permutations('0123456789', 5)
    if digits[0] != '0'
]
'''
>>> setup2 = 'from itertools import permutations'
>>> stmt3 = '''
result = [
    x for x in xrange(10000, 100000)
    if len(set(str(x))) == len(str(x))
]
'''
>>> stmt4 = '''
result = [
    a * 10000 + b * 1000 + c * 100 + d * 10 + e
    for a, b, c, d, e in permutations(range(10), 5)
    if a != 0
]
'''
>>> setup4 = setup2
>>> timeit(stmt1, number=100)
7.955858945846558
>>> timeit(stmt2, setup2, number=100)
1.879319190979004
>>> timeit(stmt3, number=100)
8.599710941314697
>>> timeit(stmt4, setup4, number=100)
0.7493319511413574
So, to sum up:
solution no. 1 took 7.96 s,
solution no. 2 (my original solution) took 1.88 s,
solution no. 3 took 8.6 s,
solution no. 4 (my updated solution) took 0.75 s,
The last solution is around 10x faster than the solutions proposed by others.
Note: My solution has some imports that I did not measure. I assumed your imports will happen once, and code will be executed multiple times. If it is not the case, please adapt the tests to your needs.
EDIT #2: I have added another solution, as operating on strings is not even necessary - the same can be achieved with permutations of actual integers. I bet this can be sped up even more.
Cheap way to do this:
for i in xrange(LOW, HIGH):
    s = str(i)
    if len(set(s)) == len(s):
        # number has unique digits
This uses a set to collect the unique digits, then checks to see that there are as many unique digits as digits in total.
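A tiny demonstration of the check (interpreter session; set ordering may vary):
>>> s = '12321'
>>> set(s)
{'1', '2', '3'}
>>> len(set(s)) == len(s)
False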
List comprehension will work a treat here (logic stolen from nneonneo):
[x for x in xrange(LOW,HIGH) if len(set(str(x)))==len(str(x))]
And a timeit for those who are curious:
> python -m timeit '[x for x in xrange(10000,100000) if len(set(str(x)))==len(str(x))]'
10 loops, best of 3: 101 msec per loop
Here is an answer from scratch:
def permute(L, max_len):
    allowed = L[:]
    results, seq = [], range(max_len)
    def helper(d):
        if d == 0:
            results.append(''.join(seq))
        else:
            for i in xrange(len(L)):
                if allowed[i]:
                    allowed[i] = False
                    seq[d - 1] = L[i]
                    helper(d - 1)
                    allowed[i] = True
    helper(max_len)
    return results

A = permute(list("1234567890"), 5)
print A
print len(A)
print all(map(lambda a: len(set(a)) == len(a), A))
It perhaps could be further optimized by using an interval representation of the allowed elements, although for n=10, I'm not sure it will make a difference. I could also transform the recursion into a loop, but in this form it is more elegant and clear.
Edit: Here are the timings of the various solutions
2.75808000565 (My solution)
8.22729802132 (Sol 1)
1.97218298912 (Sol 2)
9.659760952 (Sol 3)
0.841020822525 (Sol 4)
no_list = ['115432', '555555', '1234567', '5467899', '3456789', '987654', '444444']
rep_list = []
nonrep_list = []
for no in no_list:
    u = []
    for digit in no:
        if digit not in u:
            u.append(digit)
    # if a repeated digit is there
    if len(no) != len(u):
        rep_list.append(no)
    # if there is no repetition
    else:
        nonrep_list.append(no)
print('Numbers which have repetition are =', rep_list)
print('Numbers which have no repetition are =', nonrep_list)
Is there a more elegant way of writing this function?
def reduce(li):
    result = [0 for i in xrange((len(li) / 2) + (len(li) % 2))]
    for i, e in enumerate(li):
        result[int(i / 2)] += e
    for i in range(len(result)):
        result[i] /= 2
    if (len(li) % 2 == 1):
        result[len(result) - 1] *= 2
    return result
Here is what it does:
a = [0,2,10,12]
b = [0,2,10,12,20]
reduce(a)
>>> [1,11]
reduce(b)
>>> [1,11,20]
It takes the average of the elements at each even index and the following odd index, and leaves the last element as-is if the list has an odd number of elements.
What you actually want to do is apply a 2-sample moving average across your list: mathematically, you convolve with a window [.5, .5], then take just the even samples. To avoid dividing the last element of an odd-length array by two, you should duplicate it; this does not affect even-length arrays.
Using numpy it gets pretty elegant:
import numpy as np
np.convolve(a + [a[-1]], [.5,.5], mode='valid')[::2]
array([ 1., 11.])
np.convolve(b + [b[-1]], [.5,.5], mode='valid')[::2]
array([ 1., 11., 20.])
you can convert back to list using list(outputarray).
Using numpy is very useful if performance matters, since optimized C math code does the work:
In [10]: %time a=reduce(list(np.arange(1000000))) #chosen answer
CPU times: user 6.38 s, sys: 0.08 s, total: 6.46 s
Wall time: 6.39 s
In [11]: %time c=np.convolve(list(np.arange(1000000)), [.5,.5], mode='valid')[::2]
CPU times: user 0.59 s, sys: 0.01 s, total: 0.60 s
Wall time: 0.61 s
def reduce(li):
    result = [(x + y) / 2.0 for x, y in zip(li[::2], li[1::2])]
    if len(li) % 2:
        result.append(li[-1])
    return result
Note that your original code had two bugs: [0,1] would give 0 rather than 0.5, and [5] would give [4] instead of [5].
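For example, this version handles both of those edge cases:
>>> reduce([0, 1])
[0.5]
>>> reduce([5])
[5]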
Here's a one-liner:
[(0.5*(x+y) if y is not None else x) for x,y in map(None, *(iter(b),) * 2)]
where b is your original list that you want to reduce.
Edit: Here's a variant on the code I have above that maybe is a bit clearer and relies on itertools:
from itertools import izip_longest
[(0.5*(x+y) if y is not None else x) for x,y in izip_longest(*[iter(b)] * 2)]
Here's another attempt at it that seems more straightforward to me because it's all one pass:
def reduce(li):
    result = []
    it = iter(li)
    try:
        for i in it:
            result.append((i + next(it)) / 2)
    except StopIteration:
        result.append(li[-1])
    return result
Here's my try, using itertools:
import itertools
def reduce(somelist):
    odds = itertools.islice(somelist, 0, None, 2)
    evens = itertools.islice(somelist, 1, None, 2)
    for (x, y) in itertools.izip(odds, evens):
        yield (x + y) / 2.0
    if len(somelist) % 2 != 0:
        yield somelist[-1]
>>> [x for x in reduce([0, 2, 10, 12, 20]) ]
[1, 11, 20]
See also: itertools documentation.
Update: Fixed to divide by float rather than int.