I am trying to create a function that compares two arrays and builds a new list containing the element-wise maxima, without using numpy. I managed to write a manual version, but I am having trouble turning it into a function.
Task: Create a function maximum_arrays(a,b) that compares both arrays a and b element-wise and returns a new array containing the larger elements. Use the insert2 function to add new elements to a list.
Example: applying the function to the arrays a=[12,5,8,19,6] and b=[3,6,2,12,4] should give c=[12,6,8,19,6].
Current Code:
list_a = [12,5,8,19,6]
list_b = [3,6,2,12,4]
maximum_arrays = []
for item in list_a:                           # Bug: 'item' is a value from list_a, not an index
    if list_b[item] > list_a[item]:           # so e.g. item == 12 raises an IndexError here
        maximum_arrays.insert(list_b[item])   # Bug: list.insert() requires an index and a value
    else:
        maximum_arrays.insert(list_a[item])
print(maximum_arrays)
Manual Version:
list_a = [12,5,8,19,6]
list_b = [3,6,2,12,4]
#answer example
c = [12,6,8,19,6]
#empty list
maximum_arrays = []
#for each position, choose the larger of the two values and insert it
maximum_arrays.insert(0, max(list_a[0],list_b[0]))
maximum_arrays.insert(1, max(list_a[1],list_b[1]))
maximum_arrays.insert(2, max(list_a[2],list_b[2]))
maximum_arrays.insert(3, max(list_a[3],list_b[3]))
maximum_arrays.insert(4, max(list_a[4],list_b[4]))
print(maximum_arrays)
Use max in a list comprehension over the zipped lists, or numpy.maximum.
list_a = [12,5,8,19,6]
list_b = [3,6,2,12,4]
max_array = [max(i) for i in zip(list_a, list_b)]
print(max_array)
The explanation here is: zip turns n iterables into an iterator over tuples, where each tuple has n items. So, in the two-list case, zip([1, 2, 3], [4, 5, 6]) turns into ((1, 4), (2, 5), (3, 6)). Taking the max of all of these tuples gives you your list.
An important caveat, and one that has burned me several times, is that the number of tuples generated is the length of the shortest iterable in the zip. In other words, zip does not throw an exception when passed iterables of different lengths; it just stops when one of the inputs runs out. In this respect it differs from numpy.maximum, which does throw an error when given lists of different lengths.
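For instance (a minimal illustration; itertools.zip_longest from the standard library pads the shorter input instead of truncating):
list(zip([1, 2, 3], [4, 5]))
# [(1, 4), (2, 5)] -- the trailing 3 is silently dropped
from itertools import zip_longest
list(zip_longest([1, 2, 3], [4, 5], fillvalue=0))
# [(1, 4), (2, 5), (3, 0)]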
Are you looking for something like this:
list_a = [12,5,8,19,6]
list_b = [3,6,2,12,4]
l = []
for i, j in enumerate(zip(list_a, list_b)):
    l.insert(i, max(j))
print(l)
Another way, using itertools.starmap:
from itertools import starmap
list(starmap(max, zip(list_a, list_b)))
Output:
[12, 6, 8, 19, 6]
This is about 1.4x faster than the list comprehension:
%timeit list(starmap(max, zip(list_a, list_b)))
# 1.19 µs ± 49.3 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
%timeit [max(i) for i in zip(list_a, list_b)]
# 1.69 µs ± 213 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
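A closely related variation (not from the answer above, just worth noting): map accepts multiple iterables directly, so it can replace the zip entirely:
list(map(max, list_a, list_b))
# [12, 6, 8, 19, 6]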
You can solve this in many ways, e.g. with a loop like yours or with a list comprehension using the zip function.
The solution below follows your approach: since both lists have the same length, take that length, iterate over the indices, and use append (or insert) to add the larger value to the list.
list_a = [12,5,8,19,6]
list_b = [3,6,2,12,4]
maximum_arrays = []
list_length = len(list_a)
for item in range(list_length):
    if list_b[item] > list_a[item]:
        maximum_arrays.append(list_b[item])
    else:
        maximum_arrays.append(list_a[item])
print(maximum_arrays)
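To match the task statement, the same loop can be wrapped into the requested function. A minimal sketch, using the built-in list.insert since the insert2 helper mentioned in the task is not shown here:
def maximum_arrays(a, b):
    result = []
    for i in range(len(a)):
        result.insert(i, max(a[i], b[i]))
    return result

print(maximum_arrays(list_a, list_b))
# [12, 6, 8, 19, 6]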
Consider the following setup:
import numpy as np
import itertools as it
A = np.random.rand(3,3,3,16,3,3,3,16) # sum elements of A to arrive at...
B = np.zeros((4,4)) # a 4x4 array (output)
I have a large array 'A' that I want to sum over, but in a very specific way. 'A' has a shape of (x,x,x,16,x,x,x,16) where the 'x' is some integer.
The desired result is a 4x4 matrix 'B', which I can calculate via a for-loop like so:
%%timeit
for x1,y1,z1,s1 in it.product(range(3), range(3), range(3), range(16)):
    for x2,y2,z2,s2 in it.product(range(3), range(3), range(3), range(16)):
        B[s1%4, s2%4] += A[x1,y1,z1,s1,x2,y2,z2,s2]
>> 134 ms ± 1.27 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
where each element of B accumulates those entries of 'A' whose two 16-element axes (indexed here by s1 and s2) are taken modulo 4.
How can I achieve the same by broadcasting, or otherwise? Obviously with larger 'x' (dimensions in 'A'), the for-loop will take far longer to compute (the iteration count grows as x^6), which is not ideal.
EDIT:
C = np.zeros((4,4))
for i,j in it.product(range(4), range(4)):
    C[i,j] = A[:,:,:,i::4,:,:,:,j::4].sum()
This seems to work as well. But still involves 1 for-loop. Is there a way to make this any faster?
Here are a cleaner and a faster solution. Unfortunately, they are not the same ...
def clean(A):
    n = A.shape[0]  # the 'x' dimension from the question
    return A.reshape(4*n**3, 4, 4*n**3, 4).sum(axis=(0, 2))

def fast(A):
    # collapse the spatial axes first, then bin the remaining (16, 16) block by (s1 % 4, s2 % 4)
    reduced = A.sum((0, 1, 2, 4, 5, 6))
    bins = np.tile(np.arange(16).reshape(4, 4), (4, 4)).ravel()
    return np.bincount(bins, reduced.ravel(), minlength=16).reshape(4, 4)
At n==6 fast is about three times faster.
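As a sanity check (not part of the original answer), re-running the question's reference loop with n == 3 and comparing against both functions should print True twice:
n = 3
A = np.random.rand(n, n, n, 16, n, n, n, 16)
B = np.zeros((4, 4))
for x1, y1, z1, s1 in it.product(range(n), range(n), range(n), range(16)):
    for x2, y2, z2, s2 in it.product(range(n), range(n), range(n), range(16)):
        B[s1 % 4, s2 % 4] += A[x1, y1, z1, s1, x2, y2, z2, s2]
print(np.allclose(B, clean(A)), np.allclose(B, fast(A)))
# True True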
I have a list of 3D arrays that are all different shapes, but I need them to all be the same shape. Also, that shape needs to be the smallest shape in the list.
For example, if my_list holds three arrays with shapes (115,115,3), (111,111,3), and (113,113,3), then they all need to become (111,111,3). They are all square color images, so each has shape (x,x,3).
So I have two main problems:
How do I find the smallest shape array without looping or keeping a variable while creating the list?
How do I efficiently set all arrays in a list to the smallest shape?
Currently I am keeping a variable for smallest shape while creating my_list so I can do this:
for idx, img in enumerate(my_list):
    img = img[:smallest_shape, :smallest_shape]
    my_list[idx] = img
I just feel like this is not the most efficient way, and I do realize I'm losing values by slicing, but I expect that.
I constructed a sample list with
In [513]: alist=[np.ones((512,512,3)) for _ in range(100)]
and did some timings.
Collecting shapes is fast:
In [515]: timeit [a.shape for a in alist]
10000 loops, best of 3: 31.2 µs per loop
Taking the min takes more time:
In [516]: np.min([a.shape for a in alist],axis=0)
Out[516]: array([512, 512, 3])
In [517]: timeit np.min([a.shape for a in alist],axis=0)
1000 loops, best of 3: 344 µs per loop
Slicing is faster:
In [518]: timeit [a[:500,:500,:] for a in alist]
10000 loops, best of 3: 133 µs per loop
Now try to isolate the min step:
In [519]: shapes=[a.shape for a in alist]
In [520]: timeit np.min(shapes, axis=0)
The slowest run took 5.75 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 136 µs per loop
When you have lists of objects, iteration is the only way to deal with all elements. Look at the code for np.hstack and np.vstack (and others). They do one or more list comprehensions to massage all the input arrays into the correct shape. Then they do np.concatenate which iterates too, but in compiled code.
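Putting those pieces together, a minimal sketch for the original problem (variable names are illustrative) could be:
shapes = np.array([a.shape for a in my_list])
smallest = shapes.min(axis=0)                     # e.g. array([111, 111, 3])
my_list = [a[:smallest[0], :smallest[1], :] for a in my_list]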
I have a list of tuples in Python, and I have a conditional where I want to take the branch ONLY if the tuple is not in the list (if it is in the list, then I don't want to take the if branch)
if curr_x - 1 > 0 and (curr_x - 1, curr_y) not in myList:
    # Do something
This is not really working for me though. What have I done wrong?
The bug is probably somewhere else in your code, because it should work fine:
>>> 3 not in [2, 3, 4]
False
>>> 3 not in [4, 5, 6]
True
Or with tuples:
>>> (2, 3) not in [(2, 3), (5, 6), (9, 1)]
False
>>> (2, 3) not in [(2, 7), (7, 3), "hi"]
True
How do I check if something is (not) in a list in Python?
The cheapest and most readable solution is using the in operator (or in your specific case, not in). As mentioned in the documentation,
The operators in and not in test for membership. x in s evaluates to
True if x is a member of s, and False otherwise. x not in s returns
the negation of x in s.
Additionally,
The operator not in is defined to have the inverse true value of in.
y not in x is logically the same as not y in x.
Here are a few examples:
'a' in [1, 2, 3]
# False
'c' in ['a', 'b', 'c']
# True
'a' not in [1, 2, 3]
# True
'c' not in ['a', 'b', 'c']
# False
This also works with tuples (note that membership testing on a list compares elements for equality, so hashability is not actually required here; it will matter for the set-based approach below):
(1, 2) in [(3, 4), (1, 2)]
# True
If the object on the RHS defines a __contains__() method, in will internally call it, as noted in the last paragraph of the Comparisons section of the docs.
... in and not in,
are supported by types that are iterable or implement the
__contains__() method. For example, you could (but shouldn't) do this:
[3, 2, 1].__contains__(1)
# True
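The same protocol means your own classes can support in. A minimal illustrative sketch (a hypothetical class, not from the docs):
class EvenNumbers:
    def __contains__(self, item):
        return isinstance(item, int) and item % 2 == 0

4 in EvenNumbers()
# True
3 in EvenNumbers()
# False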
in short-circuits, so if your element is at the start of the list, in evaluates faster:
lst = list(range(10001))
%timeit 1 in lst
%timeit 10000 in lst # Expected to take longer time.
68.9 ns ± 0.613 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
178 µs ± 5.01 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
If you want to do more than just check whether an item is in a list, there are options:
list.index can be used to retrieve the index of an item. If that element does not exist, a ValueError is raised.
list.count can be used if you want to count the occurrences.
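A quick illustration of both (a minimal sketch):
lst = ['a', 'b', 'c', 'b']
lst.index('b')    # 1 -- index of the first occurrence
lst.count('b')    # 2 -- total number of occurrences
# lst.index('z')  # would raise ValueError: 'z' is not in list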
The XY Problem: Have you considered sets?
Ask yourself these questions:
do you need to check whether an item is in a list more than once?
Is this check done inside a loop, or a function called repeatedly?
Are the items you're storing on your list hashable? IOW, can you call hash on them?
If you answered "yes" to these questions, you should be using a set instead. An in membership test on lists has O(n) time complexity. This means that Python has to do a linear scan of your list, visiting each element and comparing it against the search item. If you're doing this repeatedly, or if the lists are large, this operation will incur an overhead.
set objects, on the other hand, hash their values for constant time membership check. The check is also done using in:
1 in {1, 2, 3}
# True
'a' not in {'a', 'b', 'c'}
# False
(1, 2) in {('a', 'c'), (1, 2)}
# True
If you're unfortunate enough that the element you're searching for (or confirming the absence of) is at the end of your list, Python will have scanned the list up to the end. This is evident from the timings below:
l = list(range(100001))
s = set(l)
%timeit 100000 in l
%timeit 100000 in s
2.58 ms ± 58.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
101 ns ± 9.53 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)
As a reminder, this is a suitable option as long as the elements you're storing and looking up are hashable. IOW, they would either have to be immutable types, or objects that implement __hash__.
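In practice the win comes from converting the list to a set once and then testing many times. A small sketch of that pattern (names are illustrative):
big_list = list(range(100000))
queries = [5, 99999, -1]
allowed = set(big_list)                        # built once, O(n)
hits = [q for q in queries if q in allowed]    # each test is O(1) on average
# hits == [5, 99999]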
One can also use the count method of the list class.
Let's say we have a list:
x = [10,20,30,40,50]
To check whether an element (e.g. 10) is in the list, and how often it occurs:
if x.count(10):
    print(x.count(10))
else:
    print(10, "not in the list")
I know this is a very old question, but regarding the OP's actual question of "What have I done wrong?", the problem seems to be in how to code:
take the branch ONLY if the tuple is not in the list
This is logically equivalent to (as OP observes)
IF tuple in list THEN don't take the branch
It's, however, entirely silent on what should happen IF tuple not in list. In particular, it doesn't follow that
IF tuple not in list THEN take the branch
So OP's rule never mentions what to do IF tuple not in list. Apart from that, as the other answers have noted, not in is the correct syntax to check if an object is in a list (or any container really).
my_tuple not in my_list # etc.
I have a numpy array, say [a,b,c,d,e,...], and would like to compute an array that looks like [x*a+y*b, x*b+y*c, x*c+y*d,...]. My idea is to first split the original array into something like [[a,b],[b,c],[c,d],[d,e],...] and then attack this creature with np.average, specifying the axis and weights (x+y=1 in my case), or even use np.dot. Unfortunately, I don't know how to create such an array of [a,b],[b,c],... pairs. Any help, or even a completely different idea for accomplishing the main task, is much appreciated :-)
The quickest, simplest would be to manually extract two slices of your array and add them together:
>>> arr = np.arange(5)
>>> x, y = 10, 1
>>> x*arr[:-1] + y*arr[1:]
array([ 1, 12, 23, 34])
This will turn into a pain if you want to generalize it to triples, quadruples... But you can create your array of pairs from the original array with as_strided in a much more general form:
>>> from numpy.lib.stride_tricks import as_strided
>>> arr_pairs = as_strided(arr, shape=(len(arr)-2+1,2), strides=arr.strides*2)
>>> arr_pairs
array([[0, 1],
[1, 2],
[2, 3],
[3, 4]])
Of course the nice thing about using as_strided is that, just like with the array slices, there is no data copying involved, just messing with the way memory is viewed, so creating this array is virtually costless.
And now probably the fastest is to use np.dot:
>>> xy = [x, y]
>>> np.dot(arr_pairs, xy)
array([ 1, 12, 23, 34])
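On NumPy 1.20 and later, sliding_window_view provides the same zero-copy view of overlapping windows as the as_strided trick, with the shape and stride bookkeeping done for you:
>>> from numpy.lib.stride_tricks import sliding_window_view
>>> arr_pairs = sliding_window_view(arr, 2)
>>> np.dot(arr_pairs, xy)
array([ 1, 12, 23, 34])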
This looks like a correlate problem.
a
Out[61]: array([0, 1, 2, 3, 4, 5, 6, 7])
b
Out[62]: array([1, 2])
np.correlate(a,b,mode='valid')
Out[63]: array([ 2, 5, 8, 11, 14, 17, 20])
Depending on array size and BLAS, dot can be faster; your mileage will vary greatly:
arr = np.random.rand(int(1E6))   # rand takes integer dimensions
b = np.random.rand(2)
# jamie_dot refers to the as_strided + dot approach from the answer above
np.allclose(jamie_dot(arr,b), np.convolve(arr,b[::-1],mode='valid'))
True
%timeit jamie_dot(arr,b)
100 loops, best of 3: 16.1 ms per loop
%timeit np.correlate(arr,b,mode='valid')
10 loops, best of 3: 28.8 ms per loop
This is with an Intel MKL BLAS and 8 cores; np.correlate will likely be faster for most implementations.
Also an interesting observation from @Jaime's post:
%timeit b[0]*arr[:-1] + b[1]*arr[1:]
100 loops, best of 3: 8.43 ms per loop
His comment also reduced np.convolve(a, b[::-1], mode='valid') to the simpler correlate syntax.
If you have a small array, I would create a shifted copy:
import numpy
shifted_array = numpy.append(original_array[1:], 0)
result_array = x*original_array + y*shifted_array
Here you have to store your array twice in memory, so this solution is very memory inefficient, but you can get rid of the for loops.
If you have large arrays, you really need a loop (but much rather a list comprehension):
result_array = [x*original_array[i] + y*original_array[i+1] for i in range(len(original_array)-1)]
It gives you the same result as a Python list, except for the last item, which should be treated differently anyway.
Based on some random trials, for arrays smaller than about 2000 items the first solution seems to be faster than the second one, but it runs into MemoryError even for relatively small arrays (a few tens of thousands of items on my PC).
So generally, use the list comprehension; but if you know for sure that you will only ever run this on small (at most one to two thousand item) arrays, the first solution is a better shot.
Creating a new list like [[a,b],[b,c],[c,d],[d,e],...] would be both memory and time inefficient, as you also need a for loop (or similar) to create it, and you have to store every old value in a new array twice, so you would end up with storing your original array three times.
Another way is to create the right pairs in the array a = np.array([a,b,c,d,e,...]) by repeating and reshaping it according to the size of the array b = np.array([x, y, ...]), and then taking the dot product:
a = np.arange(8)
b = np.array([1, 2])
a = a.repeat(2)[1:-1]
ans = a.reshape(-1, b.shape[0]).dot(b)
Timings (on my computer):
# @Ophion's solution:
# 100000 loops, best of 3: 4.67 µs per loop
This solution:
# 100000 loops, best of 3: 9.78 µs per loop
So, it is slower. @Jaime's solution is better since it does not copy the data the way this one does.